Principled Bayesian Optimization in Collaboration with Human Experts
Accept (spotlight)
Summary: This submission discusses incorporating human expert knowledge into the Bayesian optimization algorithm. A principled approach, COBOL, is proposed that provides two novel guarantees: a handover guarantee ensuring that queries for expert labels diminish to zero over iterations, and a no-harm guarantee that preserves a convergence rate comparable to vanilla BO via a data-driven trust-level adjustment. Numerical results show the robustness of COBOL, and a real-world application demonstrates that COBOL outperforms state-of-the-art methods. Strengths: 1. This paper presents comprehensive theoretical proofs to support the proposed method. The math seems strong to me. Notation and assumptions are clearly stated at the beginning, and the follow-up theorems are easy to follow. 2. I quite like the trust-level adjustment method. The authors establish the novelty of their work by differentiating it from approaches that require hand-tuned trust-level functions. I like the idea of treating expert belief as a regularization of the acquisition function, and the corresponding primal-dual optimization formulation seems reasonable to me. Weaknesses: My only concern regards the no-harm guarantee. From Figure 3, the no-harm guarantee indeed yields a convergence rate comparable to vanilla LCB, but it never reaches the same optimum as vanilla LCB within the given iterations under adversarial cases. Looking at line 5 in Algorithm 1, it seems an adversarial expert's labels could continuously harm the quality of the query point, especially when the constant $\eta$ is set large so that the confidence condition is easily satisfied. To my understanding, $\lambda_0$ improves the solution quality of the expert-augmented LCB by reducing its value when the data indicate that the expert's feedback is not trustworthy. 
Resilience to adversarial labels is achieved by updating the primal-dual weight $\lambda_0$, but I feel $\eta$ should also vary with iterations, since it is a critical hyperparameter that actually controls whether to accept the expert-augmented query point $x_t^c$. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The estimation of the hyperparameter $B_g$ in lines 182-189 seems heuristic to me. Is there any reason to initialize the norm bound at a small value and double it during the optimization? 2. Could the authors briefly discuss the possibility of applying this human-collaborated BO formulation to other acquisition functions (e.g., Expected Improvement)? 3. In the plot of the real-world Li+ methyl-acetate experiment in Figure 5, how can the curve for [44] KDD 2023 go up after the 45th evaluation? The best observed value should be non-increasing with iterations; could the authors explain how they obtained these results? 4. Could the authors be more specific when referring to proofs and sections, e.g., by not using phrases like "will be seen" (line 148), "detailed later" (line 171), and "later experimental section" (line 176)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mention the limited capability of the algorithm on high-dimensional problems due to its GP-UCB based nature. The experiments are only conducted up to 4D. See also the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
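For readers tracing the reviewer's concern, the confidence condition in line 5 of Algorithm 1 can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and the exact inequality are assumptions based on the rebuttal's description that the expert-augmented point $x_t^c$ is sampled while $\sigma_t(x_t^u) \leq \eta\,\sigma_t(x_t^c)$, and that control hands back to vanilla LCB once $\sigma_t(x_t^c)$ has shrunk.

```python
def select_query(sigma_xc: float, sigma_xu: float, eta: float) -> str:
    """Illustrative switching rule (hypothetical reconstruction).

    sigma_xc: posterior std at the expert-augmented point x_t^c
    sigma_xu: posterior std at the vanilla LCB point x_t^u
    eta:      constant trust weight (larger = trust the expert more)
    """
    if sigma_xu <= eta * sigma_xc:
        return "x_c"  # expert-augmented point is still informative
    return "x_u"      # x_c is well-explored; hand over to vanilla LCB

# Early on, the expert-preferred point is uncertain and gets sampled;
# once its uncertainty collapses, the rule falls back to vanilla LCB.
assert select_query(sigma_xc=0.5, sigma_xu=0.4, eta=3.0) == "x_c"
assert select_query(sigma_xc=0.01, sigma_xu=0.4, eta=3.0) == "x_u"
```

A large constant $\eta$ makes the first branch easy to satisfy, which is exactly the reviewer's worry about adversarial labels.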
Rebuttal 1: Rebuttal: Thank you very much for your valuable feedback and helpful suggestions. The following are our detailed responses. # Response to the concern regarding the no-harm guarantee and $\eta$. The suggestion of an adaptive trust weight $\eta$ is interesting and could offer better resilience to adversarial feedback. Given the limited time, we leave the design of a principled scheme for adapting the weight $\eta$ as future work. However, we want to emphasize that, even without such a scheme, our no-harm guarantee holds, both theoretically and empirically. We have prepared several experiments for further clarification. - **Supporting experimental results.** To clarify the robustness in the cases of concern, we added experiments (Fig. R2(b) in the global response) with longer runs ($T = 200$) to confirm convergence on par with vanilla LCB. We confirmed the no-harm guarantee both for adversarial feedback accuracy and for a large trust weight $\eta = 100$ under adversarial feedback. For the large trust weight $\eta = 100$, we observed saturation behavior: further increases in $\eta$ do not significantly change the convergence rate. - **Adaptation through the posterior standard deviation.** Although $\eta$ is a constant in our current design, the algorithm still adapts between trusting the human and trusting vanilla BO through the time-varying posterior standard deviation. Intuitively, if the expert-augmented solution $x_t^c$ is initially trusted more, more samples are allocated to the human-preferred region and $\sigma_t(x_t^c)$ drops quickly. If we keep sampling $x_t^c$ while $x_t^u \neq x_t^c$, $\sigma_t(x_t^u)$ will eventually exceed $\eta\sigma_t(x_t^c)$, and we switch to sampling $x_t^u$. - **The choice of $\eta$ does not need to be very large in practice.** Intuitively, $\eta$ captures the belief in the expertise level of the human. 
The more we trust the human's expertise, the larger $\eta$ we can choose. But a larger $\eta$ increases the risk of higher regret due to potential over-trust in an adversarial human labeler. In our experience, $\eta$ does not need to be very large: $\eta = 3$ already achieves superior performance in our experiments, and in the updated Figure R2(b) the results become insensitive to $\eta$ when $\eta > 10$. We have added more discussion around Algorithm 1 on the choice of $\eta$. # Responses to other comments. > The estimation of the hyperparameter $B_g$ in lines 182-189 seems heuristic to me. Is there any reason to initialize the norm bound at a small value and double it during the optimization? Yes, this is a rough estimation. Our algorithm requires a valid upper bound on the norm of $g$, which is unknown a priori. Thus, we empirically check the log-likelihood (LL) value as a criterion, and if we find the current bound $B_g$ to be invalid (that is, smaller than the norm of $g$), we double it. We do not start from a very large initial $B_g$ (large enough that $B_g \geq \|g\|$) because that would cause over-exploration in $g$. We would like to note that MLE in GPs also does not provide a principled method to estimate hyperparameters (Karvonen et al.), and regret theory typically assumes the true hyperparameters are given. Thus, this is a shared challenge, and we have found heuristics that work in practice, similar to GPs. Developing a more principled tuning method is future work, and we believe a statistical test is a promising line of research. - T. Karvonen et al., Maximum Likelihood Estimation in Gaussian Process Regression is Ill-Posed. JMLR 2023. > Could the authors briefly discuss the possibility of applying this human-collaborated BO formulation to other acquisition functions (e.g., Expected Improvement)? Our algorithm can easily be extended to other acquisition functions. 
For expected improvement, we can indeed use an idea similar to constrained expected improvement to generate $x_t^c$, $$x_t^c \in \arg\max_{x\in\mathcal{X}} \mathbb{P}(x\text{ is accepted by the human})\,\mathrm{EI}(x).$$ However, our convergence analysis depends on the GP-UCB algorithm, so the convergence rate would change to that of the selected acquisition function. > In the plot of the real-world Li+ methyl-acetate experiment in Figure 5, how can the curve for [44] KDD 2023 go up after the 45th evaluation? The best observed value should be non-increasing with iterations; could the authors explain how they obtained these results? Upon checking the raw data, we found that a few runs failed at the point of ascent. We were averaging after taking the per-run minima, so the dropped runs changed the set of runs being averaged and caused the rise. We have reviewed all results and rerun the experiments. Please see the new figure for confirmation (Fig. R3(b) in the global response). > Could the authors be more specific when referring to proofs and sections? e.g. not using phrases like "will be seen" (line 148), "detailed later" (line 171), "later experimental section" (line 176) Yes, we have updated the manuscript to be more specific. Lines 148 and 171 both refer to Theorem 4.1 and Appendix B, and line 176 refers to Figure 3. > The authors mention limited capability of the algorithm on high-dimensional problems due to its GP-UCB based nature. The experiments are only conducted up to 4D. See also above for limitations. Scalability to high dimensions is a common challenge for BO. In practice, existing generic techniques, such as decomposed kernels, can be applied in our algorithm to choose kernel functions and achieve scalability in high-dimensional spaces. We also noticed a minor mistake in the description of the Michalewicz function in Appendix F.2.1, which was written as 2D but is actually a 5D function; thus, our results go up to 5D. However, we are not claiming that our results scale to high dimensions without the above-mentioned techniques. 
We have added similar discussions around Table 1 in the manuscript. --- Rebuttal 2: Title: Any further questions or suggestions? Comment: Dear reviewer iJqg, Your comments have helped us improve the quality of the manuscript a lot, and we hope our responses have addressed them adequately. If not, please let us know if you have any further questions or suggestions; we are more than happy to provide further responses. Best, All the authors. --- Rebuttal Comment 2.1: Comment: I thank the authors for the response. My concerns have been well answered, and the new results from rerunning the experiments now seem convincing to me. I have raised my score. --- Reply to Comment 2.1.1: Title: Many thanks Comment: Dear reviewer iJqg, Many thanks for acknowledging our rebuttal and raising the score! We are glad that your concerns are addressed. Best, All the authors
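The rising-curve issue discussed in this thread (Figure 5) can be illustrated numerically. The sketch below uses invented data, not the paper's results: per-run best-so-far curves built with a running minimum are non-increasing by construction, yet the averaged curve can still rise if failed runs silently drop out of later averages.

```python
import numpy as np

# Per-run best-so-far curves: a running minimum is non-increasing.
traces = np.array([
    [3.0, 2.5, 2.7, 2.0],   # run 1 (raw observed values, invented)
    [4.0, 3.0, 2.8, 2.6],   # run 2
])
best_so_far = np.minimum.accumulate(traces, axis=1)
mean_curve = best_so_far.mean(axis=0)
assert np.all(np.diff(mean_curve) <= 0)   # monotone when all runs survive

# If run 1 "fails" after step 2 and is dropped from later averages,
# the mean of the surviving monotone curves can go up:
partial_mean = np.concatenate([
    best_so_far[:, :2].mean(axis=0),   # both runs averaged early on
    best_so_far[1:, 2:].mean(axis=0),  # only run 2 remains afterwards
])
assert np.any(np.diff(partial_mean) > 0)  # averaged curve rises
```

This matches the explanation in the rebuttal: each individual curve is valid, but the aggregation over a changing set of runs is what produced the ascent.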
Summary: The paper introduces a Bayesian optimization algorithm designed to incorporate human expert knowledge. Expert input comes in the form of simple accept/reject feedback (as in "this experiment is / is not worth doing"). To incorporate feedback, two models are maintained: (1) a standard GP model of the objective, and (2) a likelihood-ratio model of human expert feedback. Theoretical performance guarantees are proven, namely a so-called handover guarantee demonstrating sub-linearity in the number of expert queries, and a convergence-rate guarantee ensuring sub-linear regret. Experiments include a practical example on Li-ion battery design. Strengths: I am very impressed by this paper! The problem it considers is important but under-studied. The algorithm and its justification are novel and well-motivated, and the experimental results are impressive. - The core idea of using expert feedback in simple accept/reject form is certainly attractive compared to the more usual ranking approach and avoids many of the problems therein. - While I'm not entirely familiar with the model used for human feedback (the likelihood-ratio model), it seems a sensible choice. - The paper is reasonably easy to follow and has a clear flow (for the most part). - The theoretical analysis is interesting - I particularly appreciate the handover guarantee, which (as far as I know) is unique in the literature. - The results clearly demonstrate the potential of the algorithm. Weaknesses: Small points: - does assumption 2.3 really need a justification? This is a standard assumption. - line 114: is it usual to call $r$ a regularization term? Would it be easier to simply use $\xi$ as per assumption 2.6? - equation (4): use of $\xi$ here is a potential notational clash with assumption 3.7. Technical Quality: 4 Clarity: 3 Questions for Authors: - are you assuming that the expert labeling model is time-invariant here? 
I am curious what might happen as the expert learns over the course of the optimization, particularly if the BO turns up new and unexpected results (which, in my experience, is not uncommon). - if I'm reading the regret-bound (6a) correctly this simply shows that, in the worst-case, the algorithm converges at the usual rate for BO. In the optimistic case - that is, assuming an expert providing non-adversarial feedback - do you have any thoughts on how/if this might be used to improve the regret bound? This would be non-trivial I suspect, but it would be good to better understand/bound the potential acceleration from human expert feedback. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
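For readers unfamiliar with the feedback model this review refers to, one plausible link between a latent expert belief $g(x)$ and accept/reject labels is sketched below. This is an illustrative assumption rather than the paper's exact likelihood-ratio parameterization; the sigmoid form is chosen only so that $g(x) \leq 0$ gives an acceptance probability of at least $0.5$, matching the "helpfulness" condition that appears later in the author discussion.

```python
import math

def p_accept(g_x: float) -> float:
    """Hypothetical acceptance model: P(accept | x) = sigmoid(-g(x))."""
    return 1.0 / (1.0 + math.exp(g_x))

assert p_accept(0.0) == 0.5   # indifferent expert
assert p_accept(-2.0) > 0.5   # g(x) < 0: the point is likely accepted
assert p_accept(2.0) < 0.5    # g(x) > 0: the point is likely rejected
```

Under this reading, learning $g$ from binary feedback is a standard classification-style estimation problem, which is consistent with the review's description of a likelihood-ratio model.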
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and the helpful suggestions. The following are our detailed responses. ## P1 > does assumption 2.3 really need a justification? This is a standard assumption. We provide some justification because it is an important assumption and we want to differentiate our setting from gradient-based optimization (where a compact-set assumption is often not needed). ## P2 > line 114: is it usual to call $r$ a regularization term? Would it be easier to simply use $\xi$ as per assumption 2.6? Yes, this is common in theoretical studies. We are aware that $r$ is typically referred to as the Gaussian noise variance. Nevertheless, we follow the definition in [16], because even in the noiseless case we need the regularisation term $rI$ for the invertibility of $K_{\mathcal{Q}_t^f}$, to avoid numerical instability. ## P3 > equation (4): use of $\xi$ here is a potential notational clash with assumption 3.7. We have replaced $\xi$ with $\zeta$ to represent the dual update step size in the manuscript. ## P4 > are you assuming that the expert labeling model is time-invariant here? I am curious what might happen as the expert learns over the course of the optimization, particularly if the BO turns up new and unexpected results (which, in my experience, is not uncommon). Here are our thoughts on extending our results to a time-varying human behaviour model: - **Simple extension, yet not a promising performance gain.** We have added experiments (Fig. R1 in the global response). The most naïve approach for a non-stationary model is windowing, i.e., forgetting the previously queried dataset. This is very easy to apply in our setting, as it simply removes old data from $g_t$ using a predefined iteration window. This increases the predictive uncertainty of $g_t$ and therefore requires more queries to the human per iteration, yet our framework still works. As shown in Fig. 
R1, this non-stationary approach can offer a slight performance gain when the human's learning rate is very fast ($\alpha_{lr} = 1$). However, we do not know whether this is always the case, and it also requires more labeling effort from humans. - **More sophisticated extension.** A more sophisticated approach is to model the dynamics of behavioural change. A potential idea is to model the change in human behaviour as an implicit online learning process over the latent function $g$. That is, $g_{t+1} = F(g_t, x_t, y_t)$, where $g_t$ is the human latent function at step $t$. The forward dynamics $F$ captures the update of the human latent function $g$ upon observing the new data point $(x_t, y_t)$. One potential $F$ is gradient ascent on the log-likelihood, $g_{t+1} = g_t + \lambda\nabla_g\log p_{g_t}(x_t, y_t)$, where $p_g(x_t, y_t)$ is the probability of observing $y_t$ at input $x_t$ given that the black-box objective function is $g$. We can then combine these dynamics with our likelihood-ratio model. Since this part requires significantly different analysis and experiments, we leave it as future work. We have added similar discussions to the appendix of the manuscript. ## P5 > if I'm reading the regret-bound (6a) correctly this simply shows that, in the worst-case, the algorithm converges at the usual rate for BO. In the optimistic case - that is, assuming an expert providing non-adversarial feedback - do you have any thoughts on how/if this might be used to improve the regret bound? This would be non-trivial I suspect, but it would be good to better understand/bound the potential acceleration from human expert feedback. Thanks for the suggestion. Under our current mild assumptions on the latent expert function $g$, an *order-wise* convergence improvement is not attainable; indeed, the assumptions become unrealistic if we insist on an order-wise improvement. Therefore, we only demonstrate the empirical superiority of expert-assisted BO in the experiments. 
The following are more specific discussions. - **An order-wise improvement cannot be attained under the current mild assumptions.** $g$ may contain no information (e.g., $g = 0$) or may even be adversarial. Even if human expertise is helpful, we cannot guarantee an *order-wise* improvement either. For example, consider the following $g$: $g(x) = f(x^\star) + c$ if $f(x) - f(x^\star) \leq c$, and $g(x) = f(x)$ otherwise, where $c > 0$ is a positive constant. In practice, such a scenario means the human expert has a rough idea of a near-optimal region but is not sure exactly where the optimum is, which is common. In this case, the human expert is helpful in identifying the region with $f(x) \leq f(x^\star) + c$ but no longer helpful for further optimization inside the region $\{x\in\mathcal{X} \mid f(x)\leq f(x^\star)+c\}$. However, the convergence rate is defined in an asymptotic sense; hence, an order-wise improvement cannot be guaranteed. - **The assumptions become unrealistic if we insist on such an improvement.** Some papers show theoretical superiority [2, 6], yet their assumptions are unrealistic. For example, [6] assumes the human knows the true kernel hyperparameters while the GP is misspecified, and [2] assumes the human belief function $g$ has tighter confidence intervals over the entire domain. We could derive a better convergence rate than AI-only algorithms under the assumption of [2], but it is unlikely to hold in reality. In fact, our method outperforms these methods empirically (see Figure 5), which supports that superiority based on unrealistic conditions is not meaningful in practice. We have updated the manuscript by adding discussion near Theorem 4.1 and in the Appendix. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response and will leave my assessment unchanged. 
--- Reply to Comment 1.1.1: Title: Many thanks Comment: Dear reviewer sMdb, Many thanks for acknowledging our response and again, for the very positive review! Best, All the authors
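The online-learning view of a changing expert sketched in P4, $g_{t+1} = g_t + \lambda\nabla_g\log p_g(x_t, y_t)$, can be made concrete with a toy parametric belief. The linear surrogate, Gaussian noise model, and learning rate below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def update_belief(theta, x_t, y_t, lam=0.1, noise_var=1.0):
    """One gradient-ascent step on log N(y_t; m_theta(x_t), noise_var),
    with the toy belief m_theta(x) = theta[0] + theta[1] * x."""
    m = theta[0] + theta[1] * x_t          # belief's prediction at x_t
    resid = (y_t - m) / noise_var          # d log-likelihood / d m
    grad = np.array([resid, resid * x_t])  # chain rule through theta
    return theta + lam * grad

# Feeding observations from the trend y = 2x repeatedly drifts the
# belief toward that trend, mimicking an expert who learns online.
theta = np.zeros(2)
for x_t, y_t in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 50:
    theta = update_belief(theta, x_t, y_t)
assert abs(theta[1] - 2.0) < 0.5
```

The rebuttal's proposal replaces this toy parametric belief with the paper's nonparametric latent function $g$, but the update structure is the same: each new observation nudges the modelled expert belief toward the data.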
Summary: The paper proposes to use human rejection as feedback in human-AI collaborative Bayesian optimization. The proposed solution uses constrained optimization to explore the regions humans think might be beneficial. The authors offer guarantees that 1) the proposed algorithm has the same regret as the vanilla BO algorithm and 2) the number of queries needed from humans converges to 0. Empirically, the authors show that the proposed method outperforms existing methods, and they include a study with real human experts. Strengths: Human-AI collaboration in Bayesian optimization is important. The authors show an interesting human-AI collaboration system in which humans provide acceptance-rejection feedback to help the BO process. The empirical results are good. Weaknesses: Insufficient theoretical results - what is the human's value in the proposed human-AI system: The authors show two theoretical results: 1) the human-AI system is no worse than the AI-only system, and 2) humans will be out of the system in the long run. But the key theoretical result is missing. The most important result should be how humans improve the proposed system compared to AI-only systems. Moreover, the authors should show how the human's expertise affects the convergence rate. In my opinion, the current theoretical results are not meaningful. This is the most important point the authors should address. Feedback form: the authors use 'acceptance-rejection' feedback, which seems limited. Intuitively, humans could directly provide $x_t^c$ in this problem setting. Why do the authors choose this form of feedback instead of a richer one? Human choice model: humans are only asked to accept or reject $x_t^c$ in the paper; the vanilla LCB suggestion is never presented to them. Can the authors explain why this is the case? Perhaps humans could also reject bad vanilla LCB suggestions. 
Changing human behaviors: in practice, human behavior often changes, especially in interactive systems where humans observe new feedback at each time step. How can the proposed method be adapted to such settings? Human study - no IRB approval: my understanding is that, for any human study, investigators are required to obtain either an exempt determination or IRB approval before these activities can be initiated, while the authors seem to have neither, based on the checklist. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback. The following are our responses. # Insufficient theoretical results We agree that acceleration through expert knowledge is central to our contribution. However, we do not believe that "theoretical" acceleration is essential for the completeness of our work. Our main contribution is the **algorithm**, which outperforms 6 baselines on 5 synthetic and 4 real-world tasks; this should be sufficient for empirical validation. While theoretical validation is nice to have for showing effectiveness, it is not the only form of validation. Notably, our setting has a **gap** between what theory can offer and what users want. In expert-guided BO, there is a near consensus that the expert is a warm starter [34, 36, 37, 42, 44]. It is natural that experts can guide AI to more promising regions in the beginning. However, in the later stages, with a larger dataset, the GP is more accurate at finding the global optimum, as confirmed in various literature [34, 36, 37, 42, 44]. This means that convergence in the later stage will not be very different from an AI-only system. Still, early-stage acceleration is of paramount importance for high-stakes optimization, such as battery design, where engineers strive to launch better products than competitors as early as possible. However, theoretical analysis focuses on worst-case asymptotic convergence rates, which means the warm-starter assumption does not significantly affect the theoretical convergence rate. This is why theoretical acceleration is not crucial in our setting. We believe the value of "conservative" theory lies more in providing safeguard guarantees, assuring that we can safely and effectively combine expert and AI suggestions for early-stage acceleration. Indeed, ours is the *first* algorithm to enjoy a full guarantee (see Table 2). We are constrained by space in explaining the theoretical details, but please see our response P5 to reviewer sMdb. 
In short, an order-wise improvement is unattainable under the current mild assumptions, and the assumptions become unrealistic if we insist on one. The baselines [2, 6] have theoretical acceleration under unrealistic assumptions, and Figure 5 shows they do not outperform ours, highlighting that such theoretical acceleration does not always translate into empirical success. # Feedback form We choose 'acceptance-rejection' feedback for two reasons. **Pinpoint feedback does not necessarily perform better empirically.** We added an experiment (Fig. R3(a) in the global response) comparing the pinpoint and primal-dual approaches. The results show that our primal-dual approach performs better across varying feedback accuracies, at least within our framework. Existing works [6, 44] employ pinpoint feedback in different frameworks; however, Figure 5 clearly shows that neither [6] nor [44] performs better than ours. This is confirmed by several studies from different institutions, such as [2] and [63], both of which demonstrate that [6] performs even worse than vanilla LCB. Similarly, [33] shows that this type of feedback only works better when the expert's manual sampling is consistently superior to vanilla LCB. Such cases are rare in our examples (e.g., Rosenbrock), and [2, 63] reach the same conclusion. **Acceptance-rejection feedback is efficient and accurate for extracting human knowledge for BO.** From the BO perspective, there are two reasons why we adopt acceptance-rejection feedback. (1) Functional-estimation approaches like ours can exploit more information: they can query multiple locations to elicit the expert's belief function and reflect it in the next query. (2) The precision of human pinpointing can be low. Humans excel at qualitative comparison rather than at providing absolute quantities in feedback [41]. 
This is a widely accepted concept in human preference learning and economics, which has a long history of moving from cardinal (absolute) utility to ordinal (comparative) utility for robust estimation. Furthermore, acceptance-rejection feedback is easier for humans to give. # Human choice model We do not present the vanilla LCB suggestion to the human, for two reasons. **The value of presenting the vanilla LCB suggestion is low.** Rejecting LCB suggestions may help elicit human knowledge, but using the distributional information of $g_t$ to select the most informative points is more query-efficient. More specifically, as the logic in line 5 of Algorithm 1 indicates, the vanilla LCB solution $x_t^u$ is used when either the human-augmented solution $x_t^c$ cannot be the optimal solution for sure or the uncertainty of $x_t^c$ is already very low compared to that of $x_t^u$. In both cases, we would sample the vanilla LCB solution $x_t^u$ regardless of the acceptance/rejection feedback from the human, so it is not necessary to query the human at this point. **Not presenting the vanilla LCB suggestion reduces the human labeling effort significantly.** If we presented the vanilla LCB solution to the human, we would suffer a linear growth of cumulative queries to the human labeler. By not presenting it, we obtain a small sublinear bound on the cumulative human label queries. This is meaningful since it reduces the human labeling effort in a provable, order-wise way. # Changing human behaviors This is the same as P4 of reviewer sMdb; please see that response. In short, yes, our algorithm can be adapted, and we ran the corresponding experiments (Fig. R1 in the global response). # No IRB approval Although we do not have IRB approval, our institution has reviewed and approved our study as low-risk, since all experiments involve running software on open-source datasets. According to the NeurIPS 2024 ethics guidelines, adhering to existing protocols at the authors' institution is required, and IRB approval is just one form of such protocols. 
Due to anonymity, we cannot provide evidence now, but we promise to attach it if accepted. Thank you again for your valuable comments and suggestions. We have incorporated all of these discussions into the relevant sections of the manuscript. --- Rebuttal Comment 1.1: Comment: I thank the authors for the feedback. My fundamental reason for rejection is that the current paper does not provide an improvement guarantee, so theoretically this algorithm is no better than the vanilla LCB algorithm. I believe a finite-sample analysis of the early stage would greatly improve the paper. Currently, the theoretical analysis in the paper suggests the same regret as LCB and worse regret than UCB (see below), and both LCB and UCB do not have to query humans at all, so I should never use the proposed system from a theoretical perspective. In this sense, the paper feels incomplete to me. I also find the "no-harm guarantee" a little over-claimed, since the constant in the regret bound should be the constant of the LCB bound, not UCB's bound, so the proposed human-AI system has a worse regret (in the constant) compared to the vanilla UCB algorithm even when there are many samples. It is not clear when the proposed human-AI system can outperform the UCB algorithm. --- Rebuttal 2: Title: Reply to the comment (1/2) Comment: Thank you for your feedback and timely response. As requested, we demonstrate theoretical acceleration when the expert is helpful. # Theoretical convergence acceleration We begin with a "helpfulness" assumption on $g$: we assume $g(x^\star)\leq0$, meaning the expert accepts the ground-truth optimal solution with probability at least $0.5$, i.e., better than random. The primal-dual algorithm implicitly solves the constrained problem $\min_{x\in\mathcal{X}} \underline{f}_t(x)$ subject to $\underline{g}_t(x)\leq0$. 
It can be seen that we implicitly restrict samples to the set $\mathcal{X}_t^g = \lbrace x\in\mathcal{X} \mid \underline{g}_t(x)\leq0 \rbrace$ (otherwise, we can run the primal-dual dynamics for multiple steps until $x_t^c$ satisfies $\underline{g}_t(x_t^c)\leq0$). By the assumption, $x^\star \in \mathcal{X}_t^g$ is guaranteed. Since we already assume $\underline{g}_t(x^\star)\leq g(x^\star)\leq0$, we can trust the human more. To reflect this trust in the algorithm, we add a filter requiring each sample $x_t$ to satisfy $$ \underline{g}_t(x_t) \leq 0 \quad\text{and}\quad \bar{g}_t(x_t) - \underline{g}_t(x_t) \leq g_\mathrm{thr}. $$ Intuitively, this condition means that the human is quite certain that $x_t$ is worth sampling. Otherwise, we can query the human for acceptance-rejection feedback at $x_t$ without evaluating the black-box objective function $f$, and continue the loop. Interestingly, our original algorithm can be seen as a soft implementation of the above constraints, since we do not assume a helpful human a priori. With this filter, we restrict all samples to the set $\mathcal{X}^g = \lbrace x\in\mathcal{X} \mid g(x)\leq g_\mathrm{thr} \rbrace$ (every sample of the black-box objective satisfies $\underline{g}_t(x_t)\leq0$ and $g(x_t)\leq\bar{g}_t(x_t)\leq \underline{g}_t(x_t)+g_\mathrm{thr}$, which implies $g(x_t)\leq g_\mathrm{thr}$). Let $\mathrm{vol}(\mathcal{X})$ be the volume of the domain. Then $\mathrm{vol}(\mathcal{X}^g) \leq \mathrm{vol}(\mathcal{X})$ since $\mathcal{X}^g\subset\mathcal{X}$. This is illustrated in Example 2.2, where the accuracy coefficient $a$ captures how effectively the expert can identify the global optimum $x^\star$. As shown in Kandasamy et al., the maximum information gain (MIG) depends on the domain volume. Let $\Psi_t(\mathcal{X})$ be the MIG at the $|\mathcal{Q}^f_t|$-th iteration over the domain $\mathcal{X}$. 
For instance, with the squared-exponential kernel, the MIG scales as $\Psi_t(\mathcal{X}) \propto \mathrm{vol}(\mathcal{X}) \log(t)^{d+1}$. The simple regret of the GP-UCB algorithm is bounded by $\Psi_t(\mathcal{X}) / \sqrt{|\mathcal{Q}^f_t|}$. By incorporating our additional assumption into the adapted algorithm, the simple regret of our algorithm improves on vanilla GP-UCB by a factor of $\Psi_t(\mathcal{X}^g) / \Psi_t(\mathcal{X}) = \mathrm{vol}(\mathcal{X}^g) / \mathrm{vol}(\mathcal{X})$. Thus, if expert suggestions reduce the domain volume by, say, a factor of 10, the theoretical regret bound of our algorithm is reduced by a factor of 10. This is broadly consistent with the experiments shown in Figure 4: when expert sampling is strong (i.e., the expert's belief is closely centred around $x^\star$), the acceleration becomes more pronounced. # No-Harm Guarantee We are not entirely sure of your intent regarding "the constant of the LCB bound, not the UCB's bound"; please clarify if our understanding is incorrect. Below is our understanding of your statement and our response. To avoid confusion, let us define LCB and UCB: we consider UCB to be the well-known GP-UCB algorithm [75] and LCB to be our approach. Although our objective is minimisation (Eq. (1)), $x^\star = \arg\min f(x)$, while GP-UCB uses maximisation, $x^\star = \arg\max f(x)$; these are equivalent by negating the objective: $\arg\min f(x) = \arg\max (-f(x))$. Thus, vanilla LCB is the same as the GP-UCB algorithm, and we will use LCB and GP-UCB interchangeably. We interpret "worse regret (in the constant)" to mean that our convergence rate in Appendix B.5 includes an additional constant factor $(2 + \eta)$ compared to the GP-UCB convergence rate. Setting $\eta = 0$ aligns our rate with GP-UCB, while $\eta > 0$ can worsen the rate only if experts provide adversarial feedback. 
The constant factor $\eta$ can be adjusted by the user to balance robustness against adversity with potential acceleration from beneficial feedback, reflecting the principle of 'no risk, no gain.' Please note that adversarial feedback is not intentional; human experts aim to help but may unintentionally provide ineffective guidance compared to an AI-only system. If experts are uncertain, setting $\eta=0$ recovers the same rate as GP-UCB. Thus, choosing $\eta > 0$ means that the users believe that our algorithm with their own advice offers greater value than GP-UCB. CONTINUE TO THE NEXT REPLY --- Rebuttal 3: Title: Reply for the comment (2/2) Comment: Experts seek to be involved in decision-making because of trustworthiness, self-efficacy, and responsibility for success—qualities that current BO algorithms, such as GP-UCB, do not address. At the very least, our algorithm is theoretically superior to a manual search, which lacks convergence guarantees, especially as users might avoid using vanilla BO due to concerns about its trustworthiness. The term "no-harm guarantee" refers to the invariance of the order-wise convergence rate in the asymptotic sense. Existing works, such as [2], Mikkola et al., use this term for adversarial cases, where the convergence rate is slightly worse by a constant factor. If this terminology seems overstated, we can use "robustness guarantee" instead. # Clarification of our contribution We view the acceleration analysis as a valuable but supplementary aspect of the paper. Our core contribution—the algorithm—remains unchanged regardless of this analysis. Our primary contribution is the algorithm, supported by extensive empirical validation, together with theoretical no-harm and handover guarantees, the first of their kind in the literature. We hope this clarification resolves your concerns: our algorithm indeed outperforms GP-UCB both theoretically and empirically when expert advice is helpful.
We have added the above discussion in Section 4 of the manuscript. # Citations - K. Kandasamy, et al., Multi-fidelity Bayesian Optimisation with Continuous Approximations, ICML 2017 - P. Mikkola, et al., Multi-Fidelity Bayesian Optimization with Unreliable Information Sources, AISTATS 2023. --- Rebuttal 4: Title: Any further questions or suggestions? Comment: Dear reviewer dzPz, Your review and comments have greatly helped us improve the quality of the manuscript, and we hope our rebuttal and further responses have addressed your concerns adequately. If not, please let us know if you have any further questions or suggestions, especially on the theoretical convergence improvement. We are more than happy to provide further responses. Best, All the authors. --- Rebuttal 5: Comment: Thanks to the authors for the response. For the improvement guarantee, I think the argument for the new algorithm is correct. Now my main concern is that the algorithm in the paper/experiment does not match the algorithm that enjoys the improvement guarantee the authors offered in the previous response. The algorithm presented in the paper does not enjoy this improvement guarantee. As I suggested earlier, it seems the paper is not complete and should go through another iteration. For the no-harm guarantee, I was using the notation of the reward-maximizing case (sorry for the confusion). Let me be more specific - I will use reward-maximizing notation, so UCB selects by the reward upper bound and LCB by the reward lower bound.
$x^c$: expert augmented selection $x^*$: optimal x $x^u$: UCB selection For a vanilla UCB algorithm: the regret is $f(x^*)-f(x^u)=f(x^*)-UCB(f(x^u))+UCB(f(x^u))-f(x^u) \leq UCB(f(x^u)) - f(x^u)$ For the authors' proposed human-AI system (the authors' proof above Eq. (41)): the regret is $f(x^*)-f(x^c)=f(x^*)-UCB(f(x^c))+UCB(f(x^c))-f(x^c)$ $\leq UCB(f(x^*))-UCB(f(x^c))+UCB(f(x^c))-f(x^c)$ $= UCB(f(x^*))-UCB(f(x^u))+UCB(f(x^u))-UCB(f(x^c))+UCB(f(x^c))-f(x^c)$ $\leq UCB(f(x^u))-UCB(f(x^c))+UCB(f(x^c))-f(x^c)$ $\leq UCB(f(x^u))-LCB(f(x^u)) + UCB(f(x^c))-f(x^c)$ By comparing these two bounds, the authors' proposed system seems three times worse than the vanilla UCB algorithm, so not really no harm. It seems to me the harm is hidden in the constant. (I am using reward-maximizing notation, so the UCB corresponds to the LCB in the paper; the above proof is a translation of the authors' proof above Eq. (41).) --- Rebuttal 6: Title: Response to the new concern Comment: Many thanks for your engagement and constructiveness. We are pleased that you acknowledge our theoretical improvement guarantee. > Now my main concern is the algorithm in the paper/experiment does not match the algorithm that enjoys the improvement guarantee authors offered in the previous response. The algorithm presented in the paper does not enjoy this improvement guarantee. As I suggested earlier, it seems the paper is not complete and should go through another iteration. Regarding your **new concern**, we want to clarify that Algorithm 1 shown in our submission and the slightly adapted version indeed match from a higher-level algorithmic perspective, with the difference being a slight implementation choice. The key algorithmic idea is the same: using the human function $g$ as a constraint when selecting the new point.
The difference is that the adapted version implements this as a hard constraint, while in our Algorithm 1 it is implemented via primal-dual dynamics, a popular method for solving constrained problems, as shown in Eq. (4). The active learning constraint $\bar{g}_t(x)-\underline{g}_t(x)\leq g_\mathrm{thr}$ is also implemented in line 7 of our Algorithm 1 as a soft constraint. As such, Algorithm 1 can be seen as a relaxed implementation of the hard-constrained algorithm. Actually, the hard-constrained version was our initial implementation, but we ultimately chose to relax the hard constraints in the submission for the following reasons: - **The relaxed version is more robust to adversity.** The theoretical improvement guarantee relies heavily on the "helpfulness" assumption $g(x^\star)\leq0$. However, in practice, whether or not this assumption holds is unknown a priori. If $g(x^\star)>g_\mathrm{thr}$ instead, which can happen in practice if the human is adversarial or just not so experienced, adding the hard constraint would cause the algorithm to miss the global optimum, since the hard constraint restricts all samples to satisfy $g(x)\leq g_\mathrm{thr}$. - **The relaxed version empirically performs better.** We indeed tested the hard-constrained version before submission, but we found that performance significantly improved when we relaxed the hard constraints, as in our current Algorithm 1. The practical superiority of the primal-dual method is well known in general (S. Boyd, et al., 2004), even though it relaxes the hard constraint. Its success lies in its flexibility; it can handle problems that may not be strictly feasible, which can occur during the learning process of both $f$ and $g$. Therefore, this is a deliberate design choice, not an indication of incompleteness. We prioritised practicality and the worst-case guarantee over conditional guarantees, considering the high-stakes nature of many expert-in-the-loop applications.
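As a rough sketch of the relaxed (primal-dual) implementation (the names, the finite candidate set, and the fixed dual step size are our simplifying assumptions, not the paper's exact Eq. (4)):

```python
# Hedged sketch of one primal-dual iteration: the expert function enters the
# acquisition as a Lagrangian penalty instead of a hard constraint, and the
# dual variable `lam` adapts to how strongly the expert constraint is violated.

def primal_dual_step(candidates, lcb_f, lcb_g, lam, step=0.1):
    """One primal-dual iteration over a finite candidate set."""
    # Primal step: minimise the expert-augmented acquisition (soft constraint).
    x_next = min(candidates, key=lambda x: lcb_f(x) + lam * lcb_g(x))
    # Dual step: the multiplier grows when the constraint lcb_g(x) <= 0 is
    # violated and decays toward zero when it is satisfied.
    lam_next = max(0.0, lam + step * lcb_g(x_next))
    return x_next, lam_next
```

With `lam = 0` this reduces to vanilla LCB; a strictly positive `lam` steers sampling away from regions the expert rejects, and the dual update lets the trust level adjust from data rather than being hand-tuned.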
Achieving both simultaneously is challenging in general because of the trade-off between robustness guarantees and theoretical performance guarantees; see, e.g., Tsipras et al., ICLR 2019. Practical relaxation of constraints is a common approach in many works. For instance, the well-known paper "Proximal Policy Optimization" (Schulman et al., 2017) proposes a practical implementation that employs a relaxed version of "Trust Region Policy Optimization" (Schulman et al., 2015). As such, relaxing hard constraints at the implementation level is common, and we believe this new concern may not be grounds for rejection. Indeed, the key role of the theory here is to understand the algorithm's effectiveness in an idealised setting. We do not believe our manuscript requires another iteration, as this algorithm design choice is intentional, based on robustness and practical considerations, and the main content does not need a major revision. Still, we have included our fruitful discussion and experimental comparison in the manuscript. **Citations** - S. Boyd, et al., Convex Optimization, 2004 - D. Tsipras, et al., Robustness May Be at Odds with Accuracy, ICLR 2019 - J. Schulman, et al., Proximal Policy Optimization Algorithms, arXiv 2017 - J. Schulman, et al., Trust Region Policy Optimization, ICML 2015 --- Rebuttal 7: Title: Response on the no-harm guarantee Comment: # On the no-harm guarantee Thank you for your clarification! Now it is clear that we are on the same page. As you read further, on line 742, you will notice that the bound reaches $2(2 + \eta) \beta_{f_T} \sqrt{4(|Q^f_T|+2) \gamma^f_{|Q^f_T|}}$. In comparison, the bound given by the GP-UCB algorithm [75] is $4 \beta_{f_T} \sqrt{4(|Q^f_T|+2) \gamma^f_{|Q^f_T|}}$. The only difference lies in the constant term $(2 + \eta)$, and our previous response provides its justification.
In short, this happens only when the users' advice is unintentionally ineffective, and the decision to trust their advice is left to the users themselves. If they are aware that their advice is not reliable a priori, setting $\eta=0$ can align the convergence rate with that of GP-UCB. To benefit from human expertise, such a potential harm of a constant-wise worse regret bound cannot be avoided when the human expert turns out to be adversarial or ineffective. Although the term "no-harm guarantee" is standard in the literature in this asymptotic order-wise sense, we understand your concern. Therefore, we have changed it to "robustness guarantee." --- Rebuttal Comment 7.1: Comment: Thanks to the authors for confirming my concerns. For the improvement guarantee, my **original and current** concern is that the algorithm described in the submitted manuscript should have an improvement guarantee. I think assuming some version of helpfulness is okay as long as it satisfies the robustness guarantee. The key is that the authors should show the proposed algorithm **helps** when humans are good, and does not hurt too much when humans are bad. So far, I don't think the authors gave such an improvement guarantee for the algorithm in the manuscript. For what was originally called the 'no-harm guarantee' in the paper, the authors confirmed that when humans are bad, there is harm in the constant (only no harm in the rate of T). I think such harm is natural since there is no free lunch and there is a price to pay when humans are bad. But in the initially submitted version, the presentation does not reflect it and the harm is hidden, so the presentation of the paper is not very satisfactory to me. I think the authors should discuss this point in detail in the future version of the paper. --- Rebuttal 8: Title: Responses and summarization Comment: Thank you for investing your time in reviewing our paper.
To avoid confusion, let us first summarise the current status to confirm our mutual understanding: # Summary of Our Discussion We would like to emphasize that the core of our contribution is a novel algorithm with both theoretical robustness and handover guarantees, and empirical improvement, for which we have a relevant theorem and compelling empirical results. Additionally, in the course of the rebuttal, we have brought up an adapted novel algorithm (which was indeed our initial design), with an additional theoretical improvement guarantee --- we view this additional guarantee as supplementary to our core contribution, beyond its scope, not as a replacement. Our view is that our core contributions are significant and potentially impactful, based on what we have agreed (see below), as also appreciated by all the other reviewers. ## What we have agreed - Our algorithm is the first of its kind to provide two theoretical guarantees: robustness and handover. Numerous experimental results show that human experts help when they are good and do not hurt much when they are not. - Our hard-constrained algorithm enjoys a **third** theoretical guarantee, improvement, while our implemented algorithm in the original submission is a relaxed version of it. - Our robustness guarantee may be slightly worse at the constant level compared to GP-UCB, but the constant $\eta$ is controllable by users and not worse in an order-wise sense, as is commonly accepted in the literature [2, 36, Mikkola, et al., AISTATS 2023]. Nevertheless, this is worth noting for readers to understand the risk involved in setting $\eta$, so we have included it as a minor revision (which has been completed). ## What we disagree on for now - Whether it is even theoretically possible that the robustness-guaranteed algorithm also has an improvement guarantee, and whether it **must** have both for completeness.
- Whether we should prioritize the non-robust algorithm with the conditional improvement guarantee. This depends on the hidden assumption that strict adherence to the improvement guarantee is superior to its relaxed counterpart; otherwise, strict adherence may not be justifiable. ## Our responses **Guaranteeing Both Robustness and Improvement May Be Incompatible in Theory**. To begin with, we want to emphasize that guaranteeing both improvement and robustness may be theoretically incompatible. From a theoretical perspective, they are in a trade-off relationship [D. Tsipras, et al., ICLR 2019]. This can be intuitively explained by the no-free-lunch theorem [Wolpert, et al. IEEE TEVC 1997]: if algorithm A outperforms B, it does so by exploiting 'biased' information. The 'bias' inherent in the improvement guarantee is at odds with robustness. Our setting is **unbiased**, meaning we do not have prior knowledge of whether the human expert is helpful or adversarial. Therefore, we must make a **design choice** between prioritizing robustness or improvement as a theoretical contribution, depending on whether we assume that expert input can be adversarial (weak bias) or that it will always be helpful (strong bias). Indeed, there are lower-bound results for the average-case regret of Bayesian optimization in the literature (e.g., see [J. Scarlett, et al., COLT 2017]). GP-UCB is already nearly rate-optimal with respect to this lower bound. This means a theoretical improvement can only be obtained at the price of worse robustness. **We Need to Balance between Improvement and Robustness**. Table R1: Comparison of cumulative regret bound.
| Our algorithms | human | order | constant ratio over GP-UCB |
|----------|----------|----------|----------|
| Hard-constrained (Improvement-guaranteed) | adversarial | **linear** | $\to\infty$ (linear / sublinear) |
| Hard-constrained (Improvement-guaranteed) | helpful | sublinear | $\text{vol}(\mathcal{X}^g) / \text{vol}(\mathcal{X})$ ($\leq 1$) |
| Relaxed (Robustness-guaranteed) | adversarial | sublinear | $2(2+\eta) / 4$ |
| Relaxed (Robustness-guaranteed) | helpful | sublinear | Empirically comparable to $\text{vol}(\mathcal{X}^g) / \text{vol}(\mathcal{X})$ |

We have two options: a robustness-guaranteed relaxed algorithm, which is our submitted version, and an improvement-guaranteed algorithm, which is hard-constrained but based on essentially the same algorithmic idea. Refer to Table R1 for the theoretical comparison. In the case of helpful feedback, both algorithms achieve a constant-wise improvement. However, when the human expert is adversarial, the improvement-guaranteed algorithm performs infinitely worse at the constant level than GP-UCB, as its cumulative regret growth becomes linear because restricting sampling to a subset can miss the global optimum, while the relaxed algorithm still achieves sublinear growth. Therefore, our current relaxed version is clearly more balanced than the hard-constrained one. --- Rebuttal Comment 8.1: Title: Responses and summarization (cont.) Comment: While we agree that this is an interesting discussion for readers, it is supplementary content. The main paper should focus on clearly presenting the contributions of the main algorithm, which is already complex enough for a 9-page paper. Including a comparison with another algorithm in the main text could disrupt the smooth flow of the narrative. We appreciate your contribution to the discussion on algorithmic comparison and have respectfully included this content in the Appendix. **Citations** - D.H. Wolpert, et al., "No Free Lunch Theorems for Optimization". IEEE TEVC 1997 - J.
Scarlett, et al., "Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization", COLT 2017. # Regarding your concern on the "no-harm guarantee" We understand this concern and have made the regret bound in Theorem 4.1 explicit in the parameter $\eta$ to reflect this constant-wise harm when the human expert is bad. Since this only requires revising the name of the "no-harm guarantee", making the regret bound explicit in $\eta$, and adding some relevant discussion, this issue can be addressed with a minor revision (which we have made), and our paper does not need a new iteration because of it. --- Rebuttal 9: Comment: Thanks to the authors for the responses. I think the authors did a good job addressing my concerns. I understand you cannot edit the manuscript this year, so I cannot verify these changes in the manuscript, but I hope the authors can discuss the improvement guarantee and the trade-offs of involving humans in the system in detail. I personally feel this is a more interesting part to me as a reader. I raised my score. My only other concern is the IRB, but I will defer that to the AC and ethics reviewers. --- Rebuttal Comment 9.1: Title: Many Thanks Comment: Dear reviewer dzPz, Many thanks for acknowledging our responses and raising the score! For sure, we will discuss the improvement guarantee and the trade-offs of involving humans in the system in detail in the final version if our submission is accepted. (Indeed, we have been doing so.) Best, All the authors
Summary: This paper introduces a robust Bayesian optimization algorithm that incorporates human expert knowledge to accelerate the optimization process. The key theoretical contributions include a handover guarantee, ensuring the number of expert labels required decreases over time, and a no-harm guarantee, ensuring the optimization performance is not adversely affected even with unreliable expert advice. The algorithm combines Gaussian processes for objective functions and likelihood-ratio models for expert beliefs, using a primal-dual approach to mix these two surrogates, thereby enhancing the efficiency of handling expert inputs. Empirical validation on synthetic benchmarks and real-world tasks, such as lithium-ion battery design, shows the method's effectiveness and robustness, outperforming existing baselines. Strengths: This paper introduces a first-of-its-kind principled approach to Bayesian optimization by integrating human expert knowledge. The authors provide solid theoretical guarantees, including the handover and no-harm guarantees, which are well-justified and supported by detailed proofs. The empirical experiments are comprehensive, covering five synthetic benchmarks and one real-world application, as well as robustness and sensitivity analysis. The experiment results show the algorithm's improvement over existing baselines. The paper is overall well-written and structured, with a clear presentation of its contributions, theoretical foundations, and experimental results. The inclusion of visual aids and detailed plots enhances understanding and reproducibility, though there is a lack of explanations of notations and terminologies, which I will elaborate on in the **Weakness** section. This paper's contribution has the potential to advance the state of the art in Bayesian optimization and inspire further research in human-AI collaboration. Weaknesses: The main weakness of this paper lies in its clarity. 1.
The term "cost function" is misleading, as it typically refers to black-box objective function evaluation costs in cost-aware Bayesian optimization, whereas this paper uses it to denote expert belief. 2. Despite having a notation paragraph at the end of Section 2 and an algorithm paragraph at the beginning of Section 4.2, it is still difficult to locate the definitions of the notations used in Algorithm 1. For instance, the notations in Line 1 are not clearly defined until Line 9, where it becomes apparent they represent the prior confidence set. Including a line of parameters at the beginning of the pseudocode would be helpful. 3. This issue extends to Figure 2. The caption could be more detailed to clarify the notations used in the figure, as it currently requires effort to understand. 4. Both LCB and UCB are defined, but only LCB is used in their algorithm, and the authors claim it is still based on the GP-UCB algorithm. More clarification is needed on when to use LCB and when to use UCB. 5. The term "LL values" appears multiple times but lacks a clear explanation. It seems to be associated with the LL maximizer mentioned somewhere in the paper but needs more clarification. 6. The x-axis of Figure 3 is confusing. For example, the number of function evaluations seems to go up to 50, yet the number "50" does not appear. 7. There is a lack of ablation studies on the kernel choice and the number of dimensions. While I understand these may not be the primary focus of this paper, I am interested in the authors' thoughts on these aspects. 8. Additionally, it takes time to figure out how the synthetic agent response (including feedback accuracy and expert belief distribution) is modeled in the synthetic experiments. The authors should refer to Example 2.2 again or directly restate the expression in the **synthetic dataset** paragraph. Technical Quality: 3 Clarity: 3 Questions for Authors: Some suggestions to improve clarity have been mentioned in the **Weakness** section.
I have two additional questions: 1. I would like to see a justification for the linear scaling in the synthetic agent response model in Example 2.2, as it is used throughout the synthetic experiments as well as the robustness and sensitivity analysis. Is this scaling general enough to represent typical synthetic agent responses? 2. I am interested in learning about the actual human labor involved in the battery design task to understand the applicability of this work in real-world applications. The experiment results include the cumulative queries, but I would like to know more about the actual time spent on labeling. This can be a rough estimate, like a comparison in magnitude to the computational time and the function evaluation time. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations, even though I still have concerns about how costly the human labor in a real-world task (e.g., the battery design) can be. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the positive feedback and the helpful comments. The following are our detailed responses. # Point-by-Point Responses to Weaknesses 1. To avoid any confusion, we have changed it to 'expert function' or just 'function' instead. 2. We have added the notations with explanations and some initializations at the beginning of Algorithm 1. 3. We have added more details to clarify the notations used in Figure 2. 4. In the GP-UCB paper [75], the maximization formulation is adopted. However, we consider the minimization formulation (as is common for many applications in engineering) in our paper. So GP-UCB is modified to LCB in our paper. Vanilla LCB in this paper is essentially the same as GP-UCB in [75]. To clarify this issue, we have inserted a footnote in line 321. 5. We have added an explanation clearly stating that 'LL value' refers to the log-likelihood value over the historical data up to step $t$ with a function $\hat{g}\in\mathcal{B}_g$. 6. We have updated Figure 3 to avoid any confusion. 7. Kernel choice and scalability to high dimensions are common challenges for BO. Theoretically, Table 1 shows the impact of the kernel function and dimension on both the cumulative regret and cumulative queries. This is similar to the GP-UCB algorithm. In practice, existing generic techniques, such as decomposed kernels, can be applied within our algorithm to choose kernel functions and achieve scalability in high-dimensional spaces. We have added similar discussions around Table 1 in the manuscript. 8. We have added some discussion to state it again at line 271. # Point-by-Point Responses to Questions > I would like to see a justification for the linear scaling in the synthetic agent response model in Example 2.2, as it is used throughout the synthetic experiment as well as the robustness and sensitivity analysis. Is this scaling general enough to represent typical synthetic agent responses?
It is challenging to encompass all possible agent responses in the experiments. In the synthetic response model $p_{x\succ_g0} = S(a \rho(f(x)))$, our linear scaling function, combined with the accuracy coefficient, covers three important and common scenarios: * **Helpful human expert**: $a$ is significantly positive (e.g., $a=1$). In this case, $x^\star$ is mapped to a high probability of being accepted, and points with large $f(x)$ values are mapped to a low probability of being accepted. * **Purely random response**: $a=0$; every point is accepted with probability $0.5$. * **Adversarial human expert**: $a$ is significantly negative (e.g., $a=-1$). In this case, $x^\star$ is mapped to a low probability of being accepted. While Theorem 4.1 assures the versatility of our algorithm for any response function, we also wanted to empirically confirm its reliability with another possible response function. Thus, we added experiments in Fig. R2(a) in the global response, selecting [37] as a representative example from the literature. [37] adopted human belief as a multivariate normal distribution (MVN), where the mean and variance of the MVN correspond to the most promising location of the global optimum and the confidence in the belief, respectively. We have tested the strong, weak, and wrong models by following [37]. Fig. R2(a) shows that the outcomes do not differ significantly from those in Example 2.2. In particular, the “wrong” case is centred at the point farthest from $x^*$ within the domain, representing another adversarial response and demonstrating the efficacy of the no-harm guarantee. > I am interested in learning about the actual human labor efforts involved in the battery design task to understand the applicability of this work in real-world applications. The experiment results include the cumulative queries, but I would like to know more about the actual time spent on labeling.
This can be a rough estimate, like a comparison in magnitude to the computational time and the function evaluation time. Human labelling costs vary among experts but typically range from a few seconds to several minutes. As shown in Figures 6 and 7, the BO overhead ranges from a few seconds to tens of seconds, which is comparable. However, we assume that the function evaluation time is very expensive. The battery design tasks in Figure 5 are for demonstration purposes, using open datasets. In real-world development, creating a prototype battery requires at least three days for manufacturing and one week for testing. This experiment costs at least \$10,000 when outsourced, making the labelling cost negligible by comparison. This high expense is why experts are reluctant to rely solely on opaque AI algorithms for decision-making. They prefer to be involved in the design process for trustworthiness, despite being uncertain in advance of how effectively they can aid BO. From this human perspective, our approach can be seen as augmenting human ability through BO with a provable convergence guarantee. Thus, human involvement is more of a prerequisite in our setting than an additional cost. In fact, the handover guarantee can reduce their effort compared to a manual search (see Figure 7, where the cumulative queries of ours are smaller than those of a manual search). However, even though the above cases are our main target and perspective, one may wish to extend to other scenarios. For example, users might want to be involved in scenarios with cheap feedback, such as hyperparameter tuning of lightweight ML models. In this setting, we agree that the labelling cost would matter. We have added this limitation and the corresponding discussion to the manuscript. # Response to Limitations > The authors have adequately addressed the limitations, even though I still have concerns about how costly the human labor in a real-world task (e.g., the battery design) can be.
Please see our response to the last comment. Thank you again for the valuable feedback. We hope our rebuttal clarifies our points. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed responses. The discussions on the response model and the human labeling costs now appear more comprehensive to me. However, I still have further concerns regarding the practicability of your approach. Although the human labeling times might be comparable to the BO overheads, involving human experts introduces additional labor costs. This means that merely comparable theoretical guarantees and experiment results might not be sufficient to justify the use of human experts over simply running a vanilla LCB algorithm. Nonetheless, this paper opens a door to further exploration into BO with human-AI collaboration, so I have increased my score. --- Rebuttal 2: Title: Many thanks Comment: Dear reviewer Ct1E, Many thanks for acknowledging our rebuttal and raising the score! We understand your further concern and will try to discuss it further in the paper if accepted. Best, All the authors
Rebuttal 1: Rebuttal: # Global response and added experiments The authors would like to thank all reviewers for their effort. As per the reviewers' requests, we have added five additional plots to the rebuttal PDF and incorporated these figures into our manuscript. - **Fig. R1**: Nonstationary Human Accuracy (**dzPz, sMdb**) We tested the scenario where the accuracy of human experts’ labeling improves over time. Our algorithm can extend to such scenarios, but the performance gain is limited to certain conditions. We leave further investigation for future work. - **Fig. R2(a)**: Variants of Human Belief Model (**Ct1E**) We tested another human belief model proposed by [37]. Our algorithm works for the strong, weak, and wrong belief models introduced in [37]. - **Fig. R2(b)**: Confirming No-Harm Guarantee (**iJqg**) To confirm that our algorithm converges on par with vanilla LCB in adversarial scenarios, we extended the iterations to 200. Our algorithm converges to the same regret as vanilla LCB for both feedback-accuracy adversity and strong trust weights on adversarial feedback. - **Fig. R3(a)**: Pinpoint Form (**dzPz**) We tested the efficacy of the pinpoint feedback suggested by **dzPz**. The results do not show a promising performance gain over our primal-dual approach. - **Fig. R3(b)**: Bug in Figure 5 (**iJqg**) Thank you for pointing this out. We have rerun the experiments, and the bug has been resolved. We would like to thank all reviewers again for their comments and suggestions, which have significantly improved the quality of our manuscript. We have already updated our manuscript to reflect our fruitful discussions. We hope this rebuttal clarifies any initial confusion and concerns. We are happy to discuss further if more clarification is needed. Pdf: /pdf/a5c56897a5951c3f72e0e9de919978ba8b08bfce.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper studies the integration of human expert knowledge into Bayesian Optimization (BO) processes through binary accept/reject recommendations. The authors introduce an approach that ensures two key guarantees: Handover Guarantee: The approach proves a sublinear bound on the cumulative number of binary labels needed from experts. Over time, the need for expert labels decreases, thereby saving effort and computation time. No-Harm Guarantee: The algorithm adapts the trust level in expert advice based on data, ensuring that the convergence rate will not be worse than using BO without expert advice, even in cases where the expert's advice may be adversarial. The proposed method is empirically validated in real-world tasks, particularly in the design of lithium-ion batteries, demonstrating its robustness and superiority over existing methods. Strengths: Originality: The paper presents an innovative approach by integrating human expert knowledge into BO with theoretical guarantees. The introduction of the "handover" and "no-harm" guarantees is novel, addressing a significant gap in existing literature, where human-in-the-loop methods often lack formal theoretical guarantees. Quality: The authors provide clear theoretical formulations with proofs that establish the validity of their approach. The empirical evaluations use both synthetic and real-world datasets, and the results are compelling, showing the practical effectiveness of the method. Clarity: The paper is well-organized and clearly written. The concepts, while complex, are explained in a step-by-step manner, making the paper accessible to readers. Given that this is a 41-page-long paper, I especially like the table of contents in the Appendix. Significance: The approach could have broad implications for how expert knowledge is integrated into automated systems, making it a valuable addition to the literature.
Weaknesses: Limited Generalizability: While the paper provides strong theoretical guarantees, the method's effectiveness may be limited to specific types of expert interactions (binary labeling). The approach might not generalize well to scenarios where expert input is more complex or continuous. Computational Complexity: The algorithm's dependence on GP and the primal-dual method could be computationally expensive, particularly in high-dimensional settings. This could limit the practicality of the approach in real-time applications or for problems with very large datasets. Technical Quality: 3 Clarity: 4 Questions for Authors: Given the potential computational overhead of the proposed method, have the authors compared the computational time of your proposed method with other baselines? Moreover, particularly in high-dimensional spaces, have the authors considered any strategies for reducing computational complexity? For example, could dimensionality reduction techniques be integrated into the process? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Title: Responses to the review Comment: Thank you very much for the positive feedback and the helpful comments. The following are our detailed responses.
# On 'Limited Generalizability'
In fact, our rebuttal experiments (Figures R2, R3 in the global response) demonstrated that our approach can be extended to accommodate a broader range of human feedback. We chose the "binary labeling" format primarily due to its empirical success, as our experiments showed that our approach outperformed other baselines adapted to different forms of feedback. The details are as follows:
## Other feedback forms.
We understand your reference to "more complex" as pertaining to forms (b) and (d), and "continuous" as relating to forms (a) and (c).
- [(a)] **Pinpoint form** [6, 33, 44] adopt this form, in which the algorithm asks the human to directly pinpoint the next query location.
- [(b)] **Pairwise comparison** [2] adopts this form, in which the algorithm presents paired candidates and the human selects the preferred one.
- [(c)] **Ranking** [7] adopts this form, in which the algorithm proposes a list of candidates and the human provides a preferential ranking.
- [(d)] **Belief function** [36, 37] adopt a Gaussian distribution as expert input. Unlike the others, this form assumes an offline setting where the input is defined at the beginning and remains unchanged during the optimization. Human experts must specify the mean and variance of the Gaussian, which represent their belief about the location of the global optimum and their confidence in this estimate, respectively.
## Slight modifications can adapt these forms to our method.
- [(a)] **Pinpoint form** We can simply replace the expert-augmented LCB $x^c_t$ in line 3 of Algorithm 1 with the pinpointed candidate (see Figure R3 of the global response).
- [(b)] **Pairwise comparison** By adopting the Bradley-Terry-Luce (BTL) model (Bradley et al., 1952), we can extend our likelihood ratio model to incorporate preferential feedback. This allows us to obtain the surrogate $g$, while the other parts of our algorithm remain unchanged.
- [(c)] **Ranking** Ranking feedback can be decomposed into multiple pairwise comparisons. Therefore, we can apply the same method as in the pairwise comparison.
- [(d)] **Belief function** We can use this Gaussian distribution model as the surrogate $g$ (see Fig. R2 in the global response for details and results).
## Binary labeling empirically performs best in our experiments.
The primary reason we adopted binary labeling is its empirical success, as demonstrated in Fig. 5. None of the other formats, including (a) pinpoint form [6, 44] and (b) pairwise comparison [2], outperforms our method. In the experiments of [2], the authors showed that (b) pairwise comparison outperforms both (d) belief form [36] and (c) preferential ranking. It therefore follows that our binary labeling format yields the best performance. Additionally, Fig. R3 in the global response reaffirms that the pinpoint form performs worse than our binary labeling format. The main reasons why the binary format works better are as follows:
- [(a)] **Pinpoint form** The accuracy of pinpointing is generally lower than that of kernel-based models. Humans excel at qualitative comparison rather than at estimating absolute quantities [41]. Numerous studies [2, 42, 44, 63] have confirmed that manual search (pinpointing) by human experts outperforms only in the initial stages, with standard BO with GP performing better in later rounds. [33] shows that this type of feedback outperforms only when the expert's manual sampling is consistently superior to standard BO. However, such cases are rare in our examples (e.g., Rosenbrock), and [2, 63] corroborate this conclusion.
- [(b)] **Pairwise comparison** This format relies on two critical assumptions: transitivity and completeness. Transitivity assumes there are no cyclic inconsistencies, often referred to as "rock-paper-scissors" relationships; real-world human preferences frequently exhibit this issue [14]. Completeness assumes that humans can always rank their preferences at any given points. In practice, when a user is unsure which option is better, this assumption does not hold. Our imprecise probability approach avoids these issues by not relying on an absolute ranking structure [5, 35].
- [(c)] **Ranking** Ranking is an extension of pairwise comparison and has classically been studied via the Borda count, which is known not to satisfy all rationality axioms. Theoretically, the Condorcet winner from pairwise comparison is the only notion known to identify the global maximum of an ordinal utility.
- [(d)] **Belief function** This is another form of absolute quantity, which humans are generally not proficient at estimating. Additionally, the offline nature of this method does not allow for knowledge updates.

We have added discussions in the manuscript after introducing the 'acceptance-rejection' feedback and in the Appendix.
---
Rebuttal 2: Title: Responses to the review (cont.) Comment: # On 'Computational Complexity'
> The algorithm's dependence on GP and the primal-dual method could be computationally expensive, particularly in high-dimensional settings. This could limit the practicality of the approach in real-time applications or for problems with very large datasets.

**Scalability in Surrogate Models.** Computational complexity is a common challenge for Bayesian optimization due to the use of Gaussian processes, whose inference requires $\mathcal{O}(n^3)$ computation time ($n$ is the number of data points). In the literature, there are many works on computationally scalable Gaussian processes [Liu, et al.].
Indeed, our method only requires access to the lower/upper confidence bounds and posterior uncertainty of the unknown functions. These posterior confidence bounds and uncertainties can be derived using computationally scalable methods from the literature. For example, sparse GPs approximate the true posterior using inducing points [Titsias, 2009]. Another line of work (e.g., [Salimbeni, 2017]) uses stochastic variational inference for scalable GPs. Kernel approximation methods, such as [Rahimi, et al., 2007], are also popular scalable approaches. Recent work [Lederer, et al., 2021] also proposes a tree-based efficient GP inference method. Beyond Gaussian processes, we expect that combining recent work on neural processes [Garnelo, et al.], a computationally efficient class of stochastic process models, with our method is also a promising direction for scaling up to real-time or high-dimensional problems.

**Scalability in the Primal-Dual Method.** At each step $t$, we require only one primal update and one dual update. The main computational load is the primal update, where an unconstrained nonlinear optimization problem must be solved, as shown in Prob. (5). In low dimensions, this can be solved using grid search. In medium to high dimensions, we can use existing fast nonlinear programming solvers, such as Ipopt (which is highly scalable), to solve the primal update problem.

> Given the potential computational overhead of the proposed method, have the authors compared the computational time of your proposed method with other baselines?

We understand it may be difficult to check all appendices, but we want to highlight that we provided a computational time comparison in Figure 7 of Appendix F.4.1. Our method is not the slowest and is consistently on par with other baseline methods.
While the algorithmic overhead ranges from a few seconds to tens of seconds across experiments, the human labeling time varies from a few seconds to several minutes. Thus, the algorithmic overhead is comparable to the labeling time. We would also like to draw your attention to the fact that we assume function evaluations are very expensive. The battery design tasks shown in Fig. 5 are for demonstration purposes using open datasets. In real-world development, creating a prototype battery requires at least three days for manufacturing and one week for testing. This experiment costs at least \$10,000 when outsourced, making the computational overhead negligible by comparison. This high expense is why experts are reluctant to rely solely on opaque AI algorithms for decision-making; they prefer to be involved in the design process for trustworthiness, even if they are uncertain about the effectiveness of aiding BO in advance. However, we acknowledge that, while the above cases are our primary focus, there may be other scenarios to consider. For instance, users might want to apply this approach in contexts with cheap feedback or even real-time feedback, such as hyperparameter tuning of lightweight ML models. In such cases, we agree that the computational overhead would be comparatively more significant.
# Citations
- R.A. Bradley, et al. "Rank analysis of incomplete block designs: I. The method of paired comparisons." Biometrika 1952
- M. Titsias. "Variational learning of inducing variables in sparse Gaussian processes." AISTATS 2009
- M. Garnelo, et al. "Conditional neural processes." ICML 2018
- H. Liu, et al. "When Gaussian process meets big data: A review of scalable GPs." IEEE Trans. Neural Netw. Learn. Syst. 2020
- H. Salimbeni, et al. "Doubly stochastic variational inference for deep Gaussian processes." NeurIPS 2017
- A. Rahimi, et al. "Random features for large-scale kernel machines." NeurIPS 2007
- A. Lederer, et al. "Gaussian process-based real-time learning for safety critical applications." ICML 2021

We have added similar discussions above to the relevant places in the manuscript.
---
Rebuttal 3: Title: Responses on the computational complexity (cont.) Comment:
> Moreover, particularly in high-dimensional spaces, have the authors considered any strategies for reducing computational complexity? For example, could dimensionality reduction techniques be integrated into the process?

Thanks for the suggestion. Besides the scalable GP techniques mentioned in our previous response, existing dimensionality reduction techniques for Bayesian optimization can also be integrated into our process. For example, LineBO [Kirschner, et al., 2019] and random embeddings [Wang, et al., 2016] can be incorporated by iteratively restricting the search space to a one-dimensional subspace or a random low-dimensional embedding, respectively.
# Citations
- Kirschner, Johannes, et al. "Adaptive and safe Bayesian optimization in high dimensions via one-dimensional subspaces." ICML 2019.
- Wang, Ziyu, et al. "Bayesian optimization in a billion dimensions via random embeddings." JAIR 55 (2016): 361-387.

We have added similar discussions above to the relevant places in the manuscript.
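To make the random-embedding idea above concrete, the sketch below shows how a high-dimensional query can be generated from a low-dimensional search space, in the spirit of [Wang, et al., 2016]. This is a minimal, hypothetical sketch: the toy objective `f`, the dimensions, and the random-search loop standing in for the acquisition-optimization step are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def make_random_embedding(d_high, d_low, rng):
    # Random Gaussian embedding matrix A (REMBO-style); entries ~ N(0, 1).
    return rng.standard_normal((d_high, d_low))

def embed(z, A, bounds=(-1.0, 1.0)):
    # Map a low-dimensional candidate z into the original space and
    # clip back into the box constraints of the high-dimensional domain.
    x = A @ z
    return np.clip(x, bounds[0], bounds[1])

# Toy objective with low effective dimensionality: only the first
# two coordinates matter (a hypothetical example).
def f(x):
    return (x[0] - 0.5) ** 2 + (x[1] + 0.3) ** 2

rng = np.random.default_rng(0)
A = make_random_embedding(d_high=100, d_low=2, rng=rng)

# Random search in the 2-D embedded space as a cheap stand-in for the
# acquisition optimization performed inside a BO loop.
best_z, best_val = None, float("inf")
for _ in range(2000):
    z = rng.uniform(-2.0, 2.0, size=2)
    val = f(embed(z, A))
    if val < best_val:
        best_z, best_val = z, val
```

The point of the sketch is only the search-space reduction: all candidate generation happens in the 2-dimensional space, while the expensive objective is always evaluated at a valid 100-dimensional point.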
Tackling Uncertain Correspondences for Multi-Modal Entity Alignment
Accept (poster)
Summary: This paper aims to address the uncertainty in entity alignment within multimodal knowledge graphs (MMKGs). The authors design a MKE module to handle relations, attributes, and visual knowledge, enhancing attribute alignment and filtering through large language models and contextual learning. To address the issue of missing modalities, the paper introduces a MMI module that uses VAEs to generate pseudo-features. Additionally, MCE based on cross-attention and orthogonal constraints is developed to enhance semantic associations between different modalities. Strengths: The paper considers the issue of missing modalities in MMKGs, using VAEs to generate pseudo-features, achieving modality alignment in the latent space. It systematically considers the relationships between relations, attributes, and images in MMKGs and achieves interactive enhancement. The model diagram is concise and easy to understand. SOTA experimental performance. Weaknesses: The problem description in the introduction is unclear, and the logic of the writing is muddled, failing to clearly explain the challenges faced by existing methods. The contribution points are also unclear, and the writing is poor. I understand that MMI is a relatively innovative part of the paper, but the ablation study shows its impact is minimal, indicating the limited effectiveness of the features supplemented by MMI. The paper lacks innovation. Using large models for filtering and MMI is a rather straightforward approach. However, the large model is simply ChatGPT. Why didn't you consider fine-tuning other large models such as Llama? The experimental table settings are problematic. For example, traditional methods like IPTransE achieve only 0.04 H1. Is it still necessary to compare with them? The method in this paper shows a significant improvement, far surpassing all baselines.
For instance, on the DB15K dataset, the H1 performance of the previous EMNLP23's MoAlign method is **31.8**, and WWW23's method is **30.4**, while this paper achieves **86.7**. However, I find it hard to intuitively perceive the necessity for such performance improvement from the method presented in this paper. The MMI, which I consider innovative, shows a general impact in the ablation experiment. Experimental tables are missing. *Previous methods have tables set at 50% and 80%, but why are they **absent** here?* I couldn't find them in the appendix either. This paper lacks two major model performance tables. The ablation study is not thorough or sufficient. Technical Quality: 3 Clarity: 2 Questions for Authors: Why can the performance improve so significantly? What is your baseline, or is it entirely self-constructed? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See weakness and question Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your valuable comments. We appreciate your recognition of the soundness of our method, the clarity of the writing, and the SOTA performance. In response to your concerns, we would like to address the following points: - **[W1: Problem description & Challenges]**: As described in our problem definition in Section 3, we focus on addressing a fundamental problem in the field of KGs: multi-modal entity alignment. We focus on tackling uncertain correspondences between inter-modal or intra-modal cues of entities for MMEA. It is a crucial issue that has not been solved in previous works and results in insufficient utilization of multi-modal knowledge, particularly in fusion and enhancement. The challenges include three points: weak inter-modal associations, description diversity, and modality missing. Please refer to Section 1 for more details about these challenges. Our experimental results show promising performance, which proves that effectively addressing the problem of uncertain correspondences in MMEA task can lead to significant improvement. - **[W2: MMI effect]**: We would like to clarify that the impact of MMI module is not minimal and its effect is related to the extent of data missingness in the dataset. Its impact and effectiveness are demonstrated in both the ablation experiment and modality sensitivity experiment. As shown in Table 2, MMI shows an obvious improvement of ***nearly four percentage points in Hits@1*** on FB15K-YG15K dataset (w/o MMI: 77.9% vs TMEA: 81.8%). In Figure 3, our method demonstrates greater robustness: when only 20% of entities have the visual modality, TMEA achieves an MRR ***exceeding 0.5***. - **[W1 & W3: Contributions & Innovation of LLM for filtering and MMI & Finetuned LLM]**: For your concern about innovation, we would like to clarify that our approach is not a simple LLM filtering or MMI. 
The technical innovations/contributions are as follows:
- To handle diverse attribute knowledge descriptions for attribute knowledge learning, our contribution is the design of a novel ***alignment-augmented abstract representation*** that incorporates the LLM and in-context learning into attribute alignment and filtering for ***generating and embedding the attribute abstract***.
- To mitigate the impact of modality absence, our contribution lies in the proposal to ***unify diverse modalities into a shared latent subspace and generate pseudo features*** via VAEs according to existing modal features in the MMI module.

In addition to these two aspects of innovation, we specially design an ***inter-modal commonality enhancement mechanism*** based on cross-attention with ***orthogonal constraints*** to address the weak semantic associations in the MCE module. Indeed, we have successfully addressed the critical issue of uncertain correspondences in MMEA by proposing a novel framework named TMEA, which significantly improves the performance of MMEA, a fundamental problem that benefits numerous downstream applications. Regarding LLM selection, we would like to emphasize that our innovation does not lie in the use of LLMs but in addressing the problem of uncertain correspondences via the proposed TMEA framework. This framework can accommodate any LLM, and we have achieved promising results using a relatively straightforward LLM.
- **[W4 & Q1: The reason for significant performance improvement & Baselines]**: We would like to emphasize that the significant improvement in performance is attributed to our solution to the uncertain correspondence problem in the MMEA task, which enables more comprehensive feature learning of multi-modal knowledge. Our primary innovation lies in the proposal of a holistic framework to tackle this important problem, and the experiments have demonstrated the overall effectiveness of both the framework and each individual component.
As evidenced by the modality and component ablation studies in Table 2, the experimental results clearly suggest that the improvement is the result of the combined contributions of all components. Regarding baselines, we followed the standard experimental setup of the MMEA task, as in ACK-MMEA, MCLEA, and MSNEA, using well-adopted baselines that comprehensively cover traditional and multimodal entity alignment approaches. Our results are either directly reproduced from public code or sourced from other papers, not self-constructed.
- **[W5: Missing 50% and 80% tables]**: These experimental results are presented in Figure 3, which shows that TMEA provides superior performance with different proportions of training data.
- **[W6: Ablation study]**: The essential components in TMEA include the MMI module (w/o MMI), the MCE module (w/o MCE), the alignment-augmented abstract representation (w/o AP), the orthogonal constraint loss $L_o$ in the MCE module (w/o $L_o$), the MSE loss $L_{mse}$ in the MMI module (w/o $L_{mse}$), and the iterative strategy (w/o IT). In Table 2, we have presented the ablation study including all these variants, and the results demonstrate the effectiveness of all essential components. If there are any further suggestions, we are open to supplementing the experiments.

We hope these responses effectively address your concerns. We will make revisions to further clarify these aspects in our revised paper.
---
Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttal, which has addressed my concerns regarding the missing 50% and 80% tables as well as the contributions. However, the following concerns remain unresolved: W2: As you pointed out, MMI is one of your innovations, and you mentioned that in the FB15K-YG15K dataset, the ablation of MMI resulted in a performance drop from 81.8 to 77.9 (a decrease of 3.9). In contrast, other components such as MCE (**81.8 to 63.6**) and AP (**81.8 to 59.3**) showed much larger impacts.
This suggests that MMI's contribution is not as significant. Moreover, as I previously mentioned, in the FB15K-DB15K dataset, MMI's performance change is only from 86.7 to 85.2 (a decrease of **1.5**). While MMI does have an effect, its impact does not seem as strong as claimed in the paper. W5: In the ablation study, the authors only conducted ablation on individual modules but did not perform combined ablations, such as removing **both AP and MMI together**. W4: The authors stated that the code was built on other baselines, yet the performance improvement is exceedingly large. For example, in the ablation study, even after removing a single module, TMEA still performs well: without AP, the performance drops from 86.7 to 78.6; without MMI, from 86.7 to 85.2; without LMSE, from 86.7 to 84.1. This raises another concern: **even after removing the proposed modules, the performance might still significantly exceed previous baselines** (EMNLP23's MoAlign method is **31.8**, and WWW23's method is **30.4**). In the rebuttal, the authors have not yet provided a reasonable explanation for this concern: how the code or mechanism achieves nearly a **50-point improvement** over previous SOTA baselines (**86.7 vs. 31.8, 30.4**) despite the ablation effects not being very pronounced. The authors pointed out that "the significant improvement in performance is attributed to our solution to the uncertain correspondence problem in the MMEA task." However, the ablation experiments do not indicate which specific mechanism(s) are responsible for such a substantial performance boost.
---
Rebuttal 2: Comment: Thank you for your valuable feedback. In response to your concerns, we would like to address the following points:
- **[W2: MMI effect]**: We would like to clarify that the effectiveness of the MMI module can be demonstrated for the following reasons:
- The MMI module has shown clear improvements on both FB15K-YG15K and FB15K-DB15K datasets.
The improvement on FB15K-DB15K is evaluated not only on Hits@1 but also on other metrics like MR. The MR notably decreased from *32.9* to *26.3*. This indicates that incorporating the MMI module performs better in all ranking results, particularly for *difficult samples*, which aligns with our description of more improvement in extreme missing data scenarios (Please refer to the next point). - As we explained before, *the impact of MMI is related to the extent of data missingness in the dataset*. In our experimental analysis, we provided the explanation about this issue in Lines 328-330. - Additionally, *to further validate the effectiveness of MMI, we conducted a modality sensitivity experiment on FB15K-DB15K*. In Figure 3, the results confirm that our method exhibits superior performance and greater robustness. For example, when only 20% of entities have the visual modality, TMEA achieves an MRR exceeding 0.5, while the best baseline falls below 0.4. These results have validated the effectiveness of MMI module. Regarding your concern that other components show much larger impacts than MMI, ***we respectfully disagree that the greater impact of other modules can indicate the ineffectiveness of MMI***. Each module is designed to address the uncertain correspondence problem from a different perspective, such as MMI addressing the issue of missing modality and MCE addressing weak inter-modal associations. These modules are not conflicting, and the experiments in Table 2 have demonstrated the effectiveness of each module. - **[W6 (Not W5): Ablation study]**: We would like to emphasize that the AP and MMI modules are two separate modules with different roles, so we conducted separate ablation experiments for each, which have demonstrated the effectiveness of the modules. 
Regarding the combined ablations you mentioned, we present the results as follows:

| Method | Hits@1 | Hits@10 | MR |
|--------------|--------|---------|------|
| w/o AP & MMI | 0.734 | 0.874 | 60.3 |
| TMEA | 0.867 | 0.944 | 26.3 |

These results further validate the effectiveness of the AP and MMI modules.
- **[W4: Baseline]**: There seem to be some misunderstandings regarding the meaning of "baseline" in this context. In your review, we understood "baseline" as the method being compared. Therefore, our response described that we reproduced these methods for comparison using open-source code or directly cited the results from the original paper. Based on your latest reply, we now understand that the "baseline" you are referring to is the *base model*. We would like to clarify that *our model is entirely self-constructed*.
- **[W4: Significant Improvement]**: We would like to clarify that our method's improvement is compared against the SOTA method, MSNEA (65.3%), and our initial multi-modal feature extraction and contrastive learning methods are similar to MSNEA. It can be observed that the performance of ACK-MMEA (30.4%) and MoAlign (31.8%) is significantly lower than MSNEA (65.3%). Additionally, we provide the experimental results of removing all the designed modules as follows:

| Method | Hits@1 | Hits@10 | MR |
|-----------------------------|--------|---------|-------|
| w/o All Designed Components | 0.647 | 0.788 | 147.6 |
| MSNEA | 0.653 | 0.812 | 54.0 |
| TMEA | 0.867 | 0.944 | 26.3 |

The experimental results also indicate that when we remove all the designed components, the results are similar to MSNEA. In conclusion, the various components we designed can alleviate the uncertain correspondence problem from different perspectives. The combination of all designed components significantly improves the performance, and the effect of each component has been demonstrated through individual ablation studies.
We hope these responses effectively address your concerns, and we highly value your re-evaluation of our paper. Thank you very much for your continued consideration. --- Rebuttal Comment 2.1: Comment: Dear Reviewer fuge, We would like to express our sincere gratitude for the time and effort you spend reviewing our paper. As **the author/reviewer discussion stage draws to a close**, we are eager for your response to ascertain if our detailed response has sufficiently addressed your concerns. We would be honored to address any further questions you may have. *We eagerly anticipate and highly value your re-evaluation of our paper.* Best regards, Authors of Paper 2934
Summary: This paper proposed a novel method for tackling uncertain correspondences in multi-modal entity alignment, called TMEA. The approach addressed challenges such as weak inter-modal associations, description diversity, and modality absence that hinder effective entity similarity exploration. TMEA employed alignment-augmented abstract representation, integrating large language models and in-context learning to enhance attribute alignment and filtering. To address modality absence, it unified all modality features into a shared latent subspace and generated pseudo features using variational autoencoders. Additionally, it introduced an inter-modal commonality enhancement mechanism based on cross-attention with orthogonal constraints to improve weak semantic associations. Extensive experiments on two real-world datasets have demonstrated the effectiveness of TMEA, showing significant improvements over competitive baselines. Strengths: 1. This paper effectively encoded relational, attribute, and visual knowledge into feature representations using a Multi-modal Knowledge Encoder (MKE) module, providing a holistic view of entity features. 2. This paper used a Large Language Model (LLM) and in-context learning for better attribute alignment and filtering, and addressed modality absence through the Missing Modality Imputation (MMI) module, which generated pseudo features using Variational AutoEncoders (VAEs). 3. This paper developed an inter-modal commonality enhancement mechanism based on cross-attention with orthogonal constraints to improve semantic associations between modalities, ensuring coherent and aligned multi-modal features. Weaknesses: 1. The design involves multiple modules, increasing the method's complexity in terms of reproducibility and spatio-temporal complexity compared to previous approaches. 2.
The experimental part of the Label Dependency Analysis is not described in enough detail; for example, when the aligned sample pairs reach 80%, the effect of MCLEA is basically the same as that of TMEA on some metrics in the FB15K-DB15K dataset, but the text doesn't explain the reason for this phenomenon. Technical Quality: 3 Clarity: 3 Questions for Authors: Why should we consider modality missingness? The Missing Modality Imputation (MMI) module doesn't seem to play a big role. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impacts of their work in Appendix E. They underscored limitations in semantic specificity stemming from reliance on language models trained on general corpora and dependence on annotated data, which hampers practical application in knowledge graph contexts. Future research aims to refine these aspects for broader applicability in unsupervised settings within KGs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments. We appreciate your recognition of our method's novelty, effectiveness, and the sufficiency of our experiments. In response to your concerns, we would like to address the following points:
- **[W1: Reproducibility & Complexity]**: To ensure reproducibility and promote research in this field, we plan to release our code. For your concern about complexity, we would like to clarify that our method is not more complex than many previous approaches. Mainstream methods for feature learning in MMKGs typically use different encoders to extract initial features and then design various fusion methods. Due to the multiple modalities, these approaches can lead to slightly higher time and space complexity. Here, we calculate the time and space complexity of our method and one representative baseline, ACK-MMEA. We denote the batch size as $B$ and the dimension size as $d$. The time complexity of ACK-MMEA consists of the following parts: (1) Attribute Uniformization: This takes $O(|V|(d_T+d_E+d_I)d)\approx O(|V|d^2)$. (2) Merge Operator: As it is a GAT structure, it takes $O(|V|d^2+|E|d)$. (3) Generate Operator: This involves average aggregation, so it takes $O(|E|d)$. (4) ConsistGNN: It is a 2-layer GCN structure, costing $O(2 \times 3|E|d)$. (5) Joint Loss calculation: The entity similarity loss takes $O(3|V|d)$, the attribute similarity loss takes $O(2|V|d)$, and the neighbor dissimilarity takes $O(|E|d)$. Thus, the total complexity is around $O(|V|d^2+(|V|+|E|)d)$. For our method, the time complexity consists of the following parts: (1) Linear Projections of the Pre-trained Visual and Attribute Features: They take $O(2Bd^2)$. (2) Loss for TransE: It takes $O(|T_{\mathcal{R}}| |T_{\mathcal{R}}^{-}|d)$. (3) MMI: It first performs a relational projection for $O(2Bd^2)$, and then the VAEs take $O(4Bd^2)$. Finally, the loss functions take $O(6Bd)$. (4) MCE: It consists of six multi-head cross-attention modules, taking $O(6\eta B^2d)$.
$L_{orth}$ further takes $O(6Bd)$. (5) Contrastive Learning Loss: It takes $O(4|H|d)$. Therefore, the total time complexity of TMEA is around $O(Bd^2+(B^2+|H|+|T_{\mathcal{R}}|^2)d)$. The space complexities of TMEA and ACK-MMEA are both $O(d^2 + |V|d)$. Our method has a smaller time complexity than ACK-MMEA when the numbers of entities $|V|$ and edges $|E|$ are much larger than the batch size $B$, and when $B^2$, $|H|$, and $|T_{\mathcal{R}}|^2$ are together smaller than $|V|+|E|$. This demonstrates the scalability of our method to large-scale, dense graphs: $|V|$ and $|E|$ can be large while we keep $B$ moderate to maintain a small complexity. Regarding effectiveness, it is noted that our method outperforms ACK-MMEA by a large margin (0.867 vs 0.304). - **[W2: Phenomenon explanation]**: Thank you for your constructive advice. In Figure 4, there is a significant performance gap between our model and MCLEA, so we presume you are referring to MSNEA. When the aligned sample pairs reach 80%, even though their performance on the Hits@10 metric appears to be close, there is still a noticeable difference in MRR and Hits@1. This indicates that MSNEA performs poorly in aligning difficult samples. We will add this content to the revised paper. - **[Q1: Modal missingness & MMI effect]**: For your concern about the meaning of considering modal missingness, we would like to clarify that KGs in the real world are rarely complete and generally have missing information. We propose considering modality missingness to more robustly address this situation. The effect of the MMI module is related to the extent of data missingness in the dataset. To validate the effectiveness of this module, we conducted ablation experiments as well as experiments with different proportions of missing modalities. As shown in Table 2, MMI shows an obvious improvement of ***nearly four percentage points in Hits@1*** on the FB15K-YG15K dataset (w/o MMI: 77.9% vs TMEA: 81.8%).
Additionally, as illustrated in Figure 3, our method demonstrates greater robustness: when only 20% of entities have the visual modality, TMEA achieves an MRR ***exceeding 0.5***, while the best baseline falls below 0.4. These results demonstrate the effectiveness of the MMI module. We hope these responses effectively address your concerns. We will make revisions to further clarify these aspects in our revised paper. --- Rebuttal 2: Comment: Dear Reviewer qUrF, Thank you very much for taking the time and making the effort to review our paper. We sincerely appreciate your valuable and constructive feedback. The author/reviewer discussion stage will be ending soon. We look forward to your reply and would like to know whether our detailed response has adequately addressed your concerns. If you have any further questions, we would be honored to address them. Thank you once again for reviewing our paper. Best regards, Authors of Paper 2934
Summary: This paper addresses the task of multi-modal entity alignment for integrating MMKGs. Existing efforts mostly focus on capturing entity features via diverse modality encoders or fusion methods but face issues with uncertain correspondences. To overcome these challenges, the authors propose a novel method called TMEA. TMEA introduces alignment-augmented abstract representation using a large language model (LLM) and in-context learning to manage diverse attribute descriptions. It also mitigates modality absence by unifying all modality features into a shared latent subspace and generating pseudo features with variational autoencoders (VAEs). Furthermore, it addresses weak inter-modal associations through an inter-modal commonality enhancement mechanism based on cross-attention with orthogonal constraints. Extensive experiments on two real-world datasets demonstrate the effectiveness of TMEA, showing significant improvements over baseline methods. Strengths: - This paper proposes to address the issues of uncertain correspondences for MMEA, such as weak inter-modal associations, description diversity, and modality absence. It is a novel view to improve the MMEA task by exploring the similarities of aligned entities. - This paper proposes a novel method named TMEA. TMEA includes alignment-augmented abstract representation to handle diverse attribute knowledge descriptions, a shared latent subspace for missing modality imputation, and an inter-modal commonality enhancement mechanism to enhance modality associations. - Extensive experiments on two real-world datasets are conducted to validate the effectiveness of TMEA, showing clear improvements over competitive baselines. Weaknesses: - The contributions of the paper are not explicitly summarized. It appears that the primary contribution is the design of new structures to address uncertain correspondences, rather than focusing on multi-modal encoding.
- General models like ChatGPT can sometimes have difficulty understanding the specific language and context within knowledge graphs. Have the authors considered personalized attribute learning methods that could more effectively grasp the semantics for different knowledge graphs? - Could the authors clarify whether they plan to release their code? This would greatly benefit the research community. Technical Quality: 3 Clarity: 4 Questions for Authors: - Could the authors provide a more explicit summary of the contributions of this paper? - How have the authors considered addressing attribute learning issues in different knowledge graphs, especially those with specialized knowledge? - Do the authors plan to release their code? Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
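As background for the missing-modality imputation idea described in this review (unifying modalities in a shared latent subspace and generating pseudo features via VAEs), a deliberately minimal sketch follows. All dimensions, the linear encoder/decoder, and the names (`make_vae`, `encode`) are illustrative assumptions, not the paper's actual MMI module:

```python
import numpy as np

# Hedged sketch: per-modality VAEs over a shared latent subspace. When an
# entity lacks one modality, the latent encoded from a present modality is
# decoded by the missing modality's decoder to produce a pseudo feature.
rng = np.random.default_rng(0)

def make_vae(feat_dim=300, latent_dim=64):
    # toy linear encoder/decoder weights; real VAEs would use deeper networks
    return {
        "W_enc": rng.standard_normal((feat_dim, 2 * latent_dim)) * 0.01,
        "W_dec": rng.standard_normal((latent_dim, feat_dim)) * 0.01,
    }

def encode(vae, x):
    h = x @ vae["W_enc"]
    mu, logvar = np.split(h, 2, axis=-1)
    # reparameterization trick: z = mu + sigma * eps
    return mu + rng.standard_normal(mu.shape) * np.exp(0.5 * logvar)

attr_vae, vis_vae = make_vae(), make_vae()
attr_feat = rng.standard_normal((8, 300))   # attribute features that are present
z = encode(attr_vae, attr_feat)             # shared latent from the attribute VAE
pseudo_visual = z @ vis_vae["W_dec"]        # imputed feature for the missing visual modality
print(pseudo_visual.shape)                  # (8, 300)
```

The key property sketched here is that both modalities share one latent space, so a latent inferred from any present modality can be decoded into any other modality's feature space.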
Rebuttal 1: Rebuttal: Many thanks for your valuable feedback on our paper. We appreciate your recognition of the novelty of our research problem and methodology, as well as the robustness of our experiments. In response to your concerns, we would like to address the following points: - **[W1 & Q1: Summary of contributions]**: Thank you for your insightful suggestion. Below, we provide a more explicit summary of our contributions: - In this paper, we focus on tackling uncertain correspondences between inter-modal or intra-modal cues of entities for multi-modal entity alignment, including weak inter-modal associations, description diversity, and modality missing. - We propose a novel TMEA model to address these three issues by specially designing an inter-modal commonality enhancement mechanism based on cross-attention with orthogonal constraints, designing an alignment-augmented abstract representation that incorporates the LLM and in-context learning into attribute alignment, and unifying diverse modalities into a shared latent subspace and generating pseudo features via VAEs according to existing modal features. - We conduct extensive experiments on two real-world datasets, FB15K-DB15K and FB15K-YG15K. The experimental results clearly demonstrate the effectiveness and superiority of our proposed TMEA, showing a significant improvement over competitive baselines, with at least a 32.8\% increase in Hits@1. Furthermore, we will add this content into the revised paper. - **[W2 & Q2: Specialized knowledge]**: Regarding your concern about specialized knowledge, we agree that it is indeed an excellent research problem. Currently, general LLMs can handle a lot of specialized knowledge such as law and medicine, but there are still limitations. In future work, we will focus more on the alignment of specialized KGs. In Appendix E, we propose exploring the fine-tuning of open-source LLMs to enhance semantic understanding. 
However, the main focus of our work remains on addressing the issue of uncertain correspondences in the MMEA task. - **[W3 & Q3: Code release]**: Thank you for your constructive suggestion. To ensure reproducibility and promote research in this field, we plan to release our code. We stated in the checklist that we would release the code upon publication. We hope these responses effectively address your concerns. We will make revisions to further clarify these aspects in our revised paper. --- Rebuttal 2: Comment: Dear Reviewer Sc2m, Thank you very much for taking the time and making the effort to review our paper. We sincerely appreciate your valuable and constructive feedback. The author/reviewer discussion stage will be ending soon. We look forward to your reply and would like to know whether our detailed response has adequately addressed your concerns. If you have any further questions, we would be honored to address them. Thank you once again for reviewing our paper. Best regards, Authors of Paper 2934 --- Rebuttal Comment 2.1: Comment: Thanks for the authors' response, which has adequately addressed my concerns. With a more explicit summary, the contributions are clearly presented. I think addressing the uncertain correspondences is an interesting idea, and the authors’ ablation results have demonstrated the effectiveness of the technical contributions. I will raise my confidence score. --- Reply to Comment 2.1.1: Comment: Thank you very much for acknowledging our paper! We will carefully incorporate these clarifications and further improve the quality of our paper.
Summary: This paper addresses the task of aligning entities across multi-modal knowledge graphs (MMKGs). The authors propose a novel method called TMEA to tackle the challenges of uncertain correspondences between inter-modal or intra-modal cues of entities. The TMEA method consists of several key components: 1. Multi-modal Knowledge Encoder: This module encodes relational, attribute, and visual knowledge into preliminary feature representations. It includes an alignment-augmented abstract representation that leverages the Large Language Model for attribute alignment and filtering. 2. Missing Modality Imputation: This module addresses the issue of missing modalities by unifying diverse modalities into a shared latent subspace and generating pseudo features via Variational AutoEncoders. 3. Multi-modal Commonality Enhancement: This module enhances the semantic associations between modalities using a cross-attention mechanism with orthogonal constraints. The experiments validate the effectiveness of the method. Strengths: 1. The problem addressed in this paper is highly practical and represents a pivotal issue for multi-knowledge graph entity alignment. 2. Extensive experiments. 3. The method is clearly presented. Weaknesses: 1. The proposed method seems to be a simple combination of the existing modules. Please clarify the challenges and the impact of this paper. 2. The authors should elaborate on the specific issues that arise when the semantic relationship between different modalities of the same entity is weak. 3. The alignment-augmented abstract representation is referenced multiple times in the article "to address diverse attribute knowledge descriptions." However, the article does not delve further into this concept or provide a concrete explanation. 4. The ablation was conducted with only one variable, and the essential components necessary to achieve a significant performance improvement were not identified.
It is vital to explore whether each component, such as the loss function or MMI, contributes individually and significantly enhances the model. The authors should engage in further discussions and experiments. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: YES Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your valuable feedback on our paper. We appreciate your recognition of the significance of our research problem, extensive experiments, and clear presentation of the methodology. In response to your concerns, we would like to address the following points: - **[W1: Module combination & Challenges & Impact]**: We would like to clarify that our proposed method is not just a simple combination of existing modules; it contains multiple innovative designs. In this paper, we focus on tackling uncertain correspondences between inter-modal or intra-modal cues of entities for MMEA, which is a crucial problem that has not been solved in previous work. The **challenges** include three points, i.e., weak inter-modal associations, description diversity, and modality missing. More details of these challenges are described in the Introduction. Our technical **novelty** is summarized as follows: - To address weak semantic associations, we design an ***inter-modal commonality enhancement mechanism*** based on cross-attention with ***orthogonal constraints***. - To handle diverse attribute knowledge descriptions, we design an ***alignment-augmented abstract representation*** that incorporates the LLM and in-context learning into attribute alignment and filtering for ***generating and embedding the attribute abstract***. - To mitigate the impact of modality absence, we propose to ***unify diverse modalities into a shared latent subspace and generate pseudo features*** via VAEs according to existing modal features. Our experimental results show promising performance, which proves that solving the crucial problem of uncertain correspondences can effectively improve the performance of MMEA. **Impact**: Our method can address the issue of uncertain correspondences in the MMEA task, which enhances the feature learning of multimodal knowledge and more effectively aligns entities for better integration of MMKGs.
This provides a more comprehensive external knowledge base for downstream applications such as recommendation systems and question-answering systems, thereby improving their performance. - **[W2: Specific issues about weak semantic associations]**: Thank you for your valuable suggestion. We would like to clarify that in multimodal tasks, semantic associations between different modalities are often utilized to mutually enhance features across modalities [1] [2]. If the semantic association is weak, modules designed for interactions between different modalities may negatively impact overall performance, potentially resulting in performance worse than that of a single modality. We will add this content to the revised paper. [1] Zhang, Yunhua, Hazel Doughty, and Cees Snoek. "Learning unseen modality interaction." NIPS 2023. [2] Qu, Leigang, et al. "Dynamic modality interaction modeling for image-text retrieval." SIGIR 2021. - **[W3: Explanation of diverse attribute knowledge descriptions]**: As we have described in lines 65-69, "The distinct descriptive manners of entities across different MMKGs complicate the matching of attributes with the same meanings between aligned entities. For instance, ``Release Date`` for ``Twilight`` in $MMKG_1$ and ``Debut Date`` for ``Twilight (film)`` in $MMKG_2$ both mean the initial release date of the movie." "Diverse attribute knowledge descriptions" thus refers to instances where the same attribute across two KGs is described using different terms, making it challenging for word-embedding similarity to recognize that these terms denote the same underlying meaning. In addition to the above example, ``Storage Capacity`` and ``Memory Size`` may also describe the same concept. - **[W4: The ablation for the loss function and MMI]**: The ablation results for the loss functions (w/o $L_o$ and w/o $L_{mse}$) and MMI (w/o MMI) are shown in Table 2.
The essential components in TMEA include MMI module (w/o MMI), MCE module (w/o MCE), alignment-augmented abstract representation (w/o AP), orthogonal constraint loss $L_o$ in MCE module (w/o $L_o$), MSE loss $L_{mse}$ in MMI module (w/o $L_{mse}$), and iterative strategy (w/o IT). In Table 2, we have presented ablation studies to demonstrate the effectiveness of all these components. If there are any further suggestions, we would be glad to supplement the experiments. We hope these responses effectively address your concerns. We will make revisions to further clarify these aspects in our revised paper. --- Rebuttal 2: Comment: Dear Reviewer PXNL, Thank you very much for taking the time and making the effort to review our paper. We sincerely appreciate your valuable and constructive feedback. The author/reviewer discussion stage will be ending soon. We look forward to your reply and would like to know whether our detailed response has adequately addressed your concerns. If any concerns remain, we would be glad to discuss them further. We are looking forward to and highly value your re-evaluation of our paper. Thank you once again for reviewing our paper. Best regards, Authors of Paper 2934 --- Rebuttal Comment 2.1: Comment: Dear Reviewer PXNL, Thank you once again for your dedication to reviewing our paper. The author/reviewer discussion stage will be ending today. We hope our detailed response has adequately addressed your concerns. If any concerns remain, we would be glad to discuss them further. If no concerns remain, we would greatly appreciate it if you could consider raising the score. Thanks! Best regards, Authors of Paper 2934
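The orthogonal constraint loss $L_o$ discussed in this thread can take several forms; a minimal sketch of one plausible version, which penalizes the per-sample overlap between two representation sets (e.g., modality-common vs. modality-specific), is given below. This is an assumed illustrative form, not necessarily TMEA's actual $L_o$:

```python
import numpy as np

# Hedged sketch: an orthogonality penalty that is zero iff each paired row
# of `a` and `b` is orthogonal. The exact loss used in the paper may differ.
def orthogonal_loss(a, b):
    # a, b: (batch, dim) arrays; penalize squared per-sample inner products
    return np.mean(np.sum(a * b, axis=-1) ** 2)

rng = np.random.default_rng(0)
common = rng.standard_normal((8, 64))
specific = rng.standard_normal((8, 64))
# Remove each row's component along `common` (a Gram-Schmidt step):
proj = (np.sum(specific * common, axis=-1, keepdims=True)
        / np.sum(common * common, axis=-1, keepdims=True)) * common
orth = specific - proj
print(orthogonal_loss(common, specific) > 0)           # True for random vectors
print(np.isclose(orthogonal_loss(common, orth), 0.0))  # True after projection
```

Minimizing such a penalty pushes the two representation sets toward carrying non-redundant information, which matches the stated goal of separating modality-common and modality-specific signals.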
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Full-Atom Peptide Design with Geometric Latent Diffusion
Accept (poster)
Summary: The paper presents a new diffusion model for generating peptide binders given protein pockets. It also presents a new benchmark dataset, created by selecting examples from PDB and ensuring sequence dissimilarity between training and test data. Strengths: Thank you for constructing a benchmark dataset for this task with a proper train/test split. This is overlooked in some related work. Is the dataset available to download? Notation is clearly set out and there is a good survey of related work. There are sensible ablations and comparisons to baselines, and the presented model performs well relative to these. Weaknesses: In the introduction and abstract, please state clearly that PepGLAD needs to be given the binding site (not the whole protein structure). Line 24 is hard to understand: ‘The key of peptide design …’. It seems to say that strong binding requires the peptide to adopt a compact, inflexible shape, but I do not understand why. The argument for the affine transformation is unclear. I could not decipher line 48 ‘These variances define divergent target distributions of Gaussian…’. Plenty of methods (AlphaFold, RFDiffusion, GeoDiff, …) are able to predict or design elongated protein and molecule shapes without any such transform, and here it should be easy because the binding site shape is given as a conditioning input. Surely a model can learn that the binder it generates should approximately match the shape of the binding site, even if the binding site is far from spherical? I think that proposition 3.1 could be stronger. Please check, but I think that if $x$ is rotated, then $L$ undergoes the same rotation, and hence $F_g(g.x)=F(x)$. Thus, $f(h, F(x))$ is invariant and $F^{-1}\overrightarrow{f}(h, F(x))$ is equivariant even if $f$ and $\overrightarrow{f}$ are not. I am surprised that this transformation improves performance. 
The score model in the transformed space now has to learn that bonds in different orientations should have different lengths, inferring from the inter-residue distances of the binding site how much each dimension is compressed or stretched. Technical Quality: 2 Clarity: 3 Questions for Authors: Why do you need to generate all-atom designs? Do you think it could work just as well to generate C-alpha coordinates and discrete residue types? Mixed discrete/continuous diffusion models have shown some success for small-molecule drug design (DiffSBDD) and crystal design (MatterGen). Line 223: does ‘filter out’ mean ‘keep’ or ‘discard’? The dataset seems small for deep learning and for the complexity of the task. How could it be augmented? Table 3: what counts as a good RMSD, for practical purposes? Are all the methods given the same inputs, or are some doing blind docking? Equation (3): how is the MSE on full-atom structures defined when the residue type is wrong? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Like all peptide design in silico, this is hamstrung by lack of good in silico evaluation: dG values from Rosetta are not reliable estimates of real binding affinity. Ideally there would be some wet-lab evaluation of the generated designs. The method requires binding site to be identified before peptide binders are designed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful reviews! > W1: In the intro/abs, please state clearly that PepGLAD needs to be given the binding site. Thanks for the suggestion! We will state it clearly in the revision. > W2: Line 24 is hard to understand: ‘The key of peptide design …’. Sorry for the confusion. First, it means that to achieve strong binding, the peptide should form compact interactions with the pocket, since dense secondary bonding (e.g. hydrogen bonds) contributes to the overall binding affinity. Second, the binding conformation of the peptide is largely affected by the pocket. Free peptides are usually flexible [a], whereas they may adopt specific modes upon binding to form denser interactions. [a] "The X‐Pro peptide bond as an NMR probe for conformational studies of flexible linear peptides." Biopolymers: Original Research on Biomolecules 15.10 (1976). > W3: The argument for the affine transformation is unclear. I could not decipher line 48 ‘These variances define divergent target distributions of Gaussian…’. Thanks for the comment! The need for affine transformation arises from the unique challenges associated with peptides compared to general proteins and small molecules: 1. **Irregular Conformation Scales**: Small molecules have minor scale variances, while protein spread is largely determined by residue number, regularized by the radius of gyration [b]. Conformations for peptides with the same length can vary widely, from linearly extended to compact ball-like structures, making normalization with a shape prior particularly crucial. 2. **Dataset Size and Generalization**: The smaller peptide dataset necessitates a well-defined geometric diffusion space for better generalization. Our affine-based normalization in PepGLAD achieves consistent and accurate RMSDs, as shown in Figure 4, compared to DiffAb's Gaussian normalization, which often results in outliers with high RMSDs.
[b] "Radius of gyration as an indicator of protein structure compactness." Molecular Biology 42 (2008).

> W4: I think that proposition 3.1 could be stronger. Please check, I think that if $x$ is rotated, then $L$ undergoes the same rotation, and hence $F_g(g\cdot x) = F(x)$.

Thanks for the comment, which raises a good theoretical question. Unfortunately, if $x$ is rotated by matrix $Q$, $L$ does not undergo the same rotation. Here is an example with 6 points:

[ 0.687 -0.245  0.713]
[ 1.164  1.221 -1.242]
[-0.126 -0.797 -0.368]
[-0.666 -0.501  1.189]
[-0.315 -0.524  0.178]
[-0.743  0.848 -0.470]

Cholesky decomposition of the covariance matrix produces $L$, a lower triangular matrix:

[ 0.766  0      0    ]
[ 0.318  0.764  0    ]
[-0.367 -0.496  0.623]

Rotation matrix $Q$:

[-0.059 -0.357 -0.932]
[ 0.519  0.786 -0.334]
[ 0.852 -0.503  0.138]

After rotation by $Q$, Cholesky decomposition produces $L_g$:

[ 0.638  0      0    ]
[ 0.639  0.904  0    ]
[-0.087  0.033  0.632]

However, $QL \neq L_g$, as shown below:

[ 0.183  0.190 -0.581]
[ 0.771  0.767 -0.208]
[ 0.441 -0.454  0.086]

$QL$ is not necessarily lower triangular, and thus not the solution of the Cholesky decomposition. Specifically, multiple solutions $M$ exist for $MM^T = Q\Sigma Q^T = L_gL_g^T$ (including $M = QL$), but only one unique solution exists (usually not $QL$) if restricted to lower triangular matrices. And we prefer a lower triangular matrix since its inverse is easy to obtain. Hence, we can only derive $L_gL_g^T = QLL^TQ^T$, but not $QL = L_g$.

> W5: I am surprised that this transformation improves performance. The score model in the transformed space now has to learn that bonds in different orientations should have different lengths, inferring from the inter-residue distances of the binding site how much each dimension is compressed or stretched.

Thanks for your insightful comments! The reasons are two-fold. First, the affine-based normalization enhances generalization, outweighing potential challenges from dimension twisting (please see response to W3).
Second, the latent diffusion framework inherently addresses some challenges related to dimension twisting. During diffusion, we generate only latent point clouds with coarse-grained geometries. The detailed full-atom reconstruction is handled by the autoencoder, which operates in the original data space and is encouraged to capture the patterns of bond lengths and angles. > Q1: Why do you need to generate all-atom designs? Do you think it could work just as well to generate C-alpha coordinates and discrete residue types? Thanks for the question! We think atom-level modeling provides better generalization as interactions are determined by secondary bonding between atoms. Ablations in Table 4 show significant degradation in diversity and success rate without full-atom context. > Q2: Line 223: does ‘filter out’ mean ‘keep’ or ‘discard’? Sorry for the confusion. It means 'keep'. We will correct it in the revision. > Q3: How could the dataset be augmented? Thanks for the question! We have explored data augmentation from monomer fragments, showing some enhancement. A synthetic dataset from models like AlphaFold 3 could be possible, but quality and balance issues need to be considered, which we leave for future work. > Q4: Table 3: what counts as a good RMSD, for practical purposes? Are all the methods given the same inputs, or are some doing blind docking? Thanks for the question! Only AlphaFold 2 is doing blind docking since it is a cofolding model. For practical purposes, RMSD < 5Å indicates near-native conformations, and 2Å typically indicates high-quality conformations [c]. [c] "Comprehensive evaluation of fourteen docking programs on protein–peptide complexes." Journal of chemical theory and computation 16.6 (2020). > Q5: Equation (3): how is the MSE on full-atom structures defined when the residue type is wrong? During training, we still compare the geometry to the ground truth.
Both sequence loss and structure loss guide the model to adjust sequence and structure predictions simultaneously. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: Thank you for the clarifications. Regarding proposition 3.1: you're absolutely right. Sorry, and thank you for your patience. One further detail I could not find in the paper is the exact definition of the binding site. Is this a potential source of data leakage? I am wondering if the binding site is defined in terms of the ground-truth pose of the same peptide. I keep my score as-is. --- Reply to Comment 1.1.1: Title: Thanks for the thoughtful reviews Comment: Thanks sincerely for the thoughtful reviews! For the binding site, we use all residues on the target protein within 10Å of any atom on the peptide. The 10Å threshold is relatively large, which helps avoid data leakage. We will make this point clear in the revision. Thanks again for your effort in reviewing our submission.
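The numerical counterexample in the response to W4 above — that the Cholesky factor of a rotated covariance is not the rotated Cholesky factor — can be reproduced in a few lines of NumPy. The random points and rotation here are illustrative, not the exact values from the rebuttal:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 3))
x -= x.mean(axis=0)
Sigma = x.T @ x / len(x)
L = np.linalg.cholesky(Sigma)                      # Sigma = L L^T

Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                                  # make it a proper rotation

Sigma_g = Q @ Sigma @ Q.T                          # covariance of rotated points
L_g = np.linalg.cholesky(Sigma_g)

# Both Q L and L_g factor Sigma_g, but Q L is generally not lower triangular,
# so it is not the (unique) Cholesky factor.
print(np.allclose((Q @ L) @ (Q @ L).T, Sigma_g))   # True
print(np.allclose(Q @ L, L_g))                     # almost surely False
```

This mirrors the rebuttal's argument exactly: $QL$ and $L_g$ both satisfy $MM^T = Q\Sigma Q^T$, yet only $L_g$ is lower triangular, so the two coincide only for special rotations.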
Summary: This paper explores the structure-based peptide design problem. A benchmark on this task and a powerful diffusion-based model for full-atom peptide design, named PepGLAD, are proposed. PepGLAD explores the geometric latent diffusion, where the sequence and the full-atom structure are jointly encoded by a variational autoencoder. The proposed method outperforms the existing models on sequence-structure co-design and binding conformation generation. Strengths: 1. This paper tackles an under-explored problem and constructs a new benchmark which is beneficial for this field. 2. The novel Geometric Latent Space can maintain the advantages of Latent Diffusion for efficiency and normalization and Geometric Diffusion for explicit interaction modeling. 3. PepGLAD can achieve full-atom generation compared with backbone atom generation in previous methods, which is truly an improvement. 4. PepGLAD utilizes a novel Receptor-Specific Affine Transformation to map the pocket-specific distribution to the standard distribution. Weaknesses: 1. This paper failed to further explore the explicit interaction modeling in geometric latent space. I think it’s the key advantage compared to standard latent space (without the geometric latent). 2. The introduction to the Receptor-Specific Affine Transformation could be clearer. I kindly suggest the authors add one or two sentences summarizing the first paragraph of Section 3.3 into Section 1 (Introduction). That way, readers would be clearer that this Affine Transformation is applied to the peptides and the pockets (i.e., the binding site distribution in the introduction). Also, point 3 in lines 62-65 has a mistake: according to the method section, the affine transformation F maps the binding site distribution (of peptides) into a standard Gaussian (not the reverse). 3. The conditioning method of pockets is not explored.
The denoiser takes the geometric latents of peptides and the coordinates of pockets as input, but the latents and coordinates are at different levels of abstraction, and this may cause some underlying issues. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What’s the motivation for using Latent Diffusion instead of standard diffusion? Just because it’s better in the image generation domain? 2. How to incorporate the geometric pocket condition (e.g. coordinates) into the geometric latent diffusion process? 3. Are the geometric latents just the Cα atoms of each residue? Would using all four backbone atoms as latents improve the performance? 4. Is the adaptive multi-channel equivariant encoder in dyMEAN a scalarization-based equivariant GNN? It would be better to add some explanations in this paper. 5. Does this Receptor-Specific Affine Transformation beat simple Gaussian normalization, a common trick in the structure-based drug design field? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors didn’t discuss the limitations of this method. This paper failed to further explore the explicit interaction modeling in geometric latent space. I think it’s the key advantage compared to standard latent space (without the geometric latent). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful and constructive comments! > W1: This paper failed to further explore the explicit interaction modeling in geometric latent space. We apologize for not making this point sufficiently clear. Our model inputs the pocket as a condition for diffusion. During denoising, we use an equivariant-GNN-based encoder to capture explicit geometric interactions between the peptide and the pocket. This is achieved by considering distances and inner products of relative positions between the full-atom pockets and the peptide latent points during geometric message passing. > W2: The introduction to Receptor-Specific Affine Transformation can be more clear. I kindly suggest the authors can add one sentence or two summarizing the first paragraph in section 3.3 into the section 1 of Introduction. In such way readers may be more clear that this Affine Transformation is applied to the peptides and the pockets (i.e. binding site distribution in the introduction). Also the point 3 in line 62-65 has a mistake: according to the method section the affine transformation F is mapping binding site distribution (of peptides) into standard Gaussian (not the reverse) Thanks for the valuable suggestion! The 3rd contribution in section 1 tries to summarize the receptor-specific affine transformation. To improve clarity, we will add the following sentence to the paragraph: "While the complexes possess disparate distributions, we derive a receptor-specific affine transformation that is applied to both the binding sites and the peptides, projecting the shape of all complexes into approximately a standard Gaussian distribution." We also appreciate you pointing out the mistake in lines 62-65. We will correct this in the revision. > W3: The conditioning method of pockets is not explored.
The denoiser takes the geometric latents of peptides and the coordinates of pockets as input, but the latents and coordinates are at different levels of abstraction, which may cause underlying issues. Thanks for the insightful observation! We have applied regularization on the equivariant latent coordinates via the KL divergence with respect to the $C_\alpha$ coordinates in Eq.(4), which helps ensure consistent scales between the peptide latent coordinates and the pocket, mitigating potential issues arising from their different levels of abstraction. We will highlight this point in the revision to address the reviewer's concern. > Q1: What is the motivation for using latent diffusion instead of standard diffusion? Just because it is better in the image generation domain? Thanks for the question! The motivation for using latent diffusion in our work is to effectively address the challenges associated with full-atom generation, which is not well suited to standard diffusion approaches due to two key problems: 1. Fixed data lengths: In full-atom generation, changing the type of a residue during the denoising process would alter the number of atoms per residue. This discrete change in data length is incompatible with the standard diffusion framework. 2. Generation with paddings: If the number of atoms is fixed to the maximum number, as done in the baseline DiffAb, it is analogous to generating "padding" in NLP, which is suboptimal. Our results in Table 3 show that DiffAb's performance in recovering full-atom geometry is significantly worse than PepGLAD's. In contrast, latent diffusion tackles these issues by using a full-atom encoder to map each residue to a single point in the latent space. This approach avoids changes in data length during diffusion and removes the need to generate paddings, therefore handling full-atom generation more elegantly. > Q2: How to incorporate the geometric pocket condition (e.g., coordinates) into the geometric latent diffusion process?
Sorry for any confusion. We do incorporate the geometric pocket condition $\mathcal{G}_b$ in both the autoencoder (please see Eq.(1)) and the diffusion model (please see Eq.(11-13)). The pocket is placed in the same geometric graph as the peptides for geometric message passing. > Q3: Are the geometric latents just the Cα atoms? We apologize for the ambiguity in the presentation. The geometric latents are not limited to just the Cα atoms, but are regularized around the Cα atoms to prevent exploding coordinates. They still encode full-atom geometries, working jointly with invariant latent representations to reconstruct the full-atom geometry. > Q4: Is the encoder in dyMEAN a scalarization-based equivariant GNN? Yes, it is a scalarization-based equivariant GNN, since the processing of coordinates in dyMEAN only involves invariant scalars via relative distances or inner products. We will add the explanation in the paper. > Q5: Does this Receptor-Specific Affine Transformation beat simple Gaussian normalization? Thanks! Yes, our experiments indicate that the Receptor-Specific Affine Transformation outperforms simple Gaussian normalization. In our preliminary experiments, we found that simple Gaussian normalization does not enhance generalization, as the high variability of peptide coordinate distributions and scales often leads to exploding coordinates on test complexes during generation. As Figure 4 shows, DiffAb, which employs simple Gaussian normalization, exhibits significant variations in RMSD across different test complexes, including many outliers with very large RMSD values, suggesting inferior generalization capability. In contrast, PepGLAD, which utilizes the receptor-specific affine transformation, achieves stable RMSD values across various complexes, demonstrating superior generalization. This discrepancy is unique to peptides, which often face more disparate geometric scales than antibodies and small molecules.
> The authors didn't discuss the limitations of this method. Sorry, we put the discussion of limitations in Appendix J due to limited space. We will move the section to the main text in the revision. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal! Comment: I have read all of the author rebuttal, which addressed most of my concerns. Thus I will raise my score. Thanks for the detailed response. --- Reply to Comment 1.1.1: Title: Thanks for the insightful reviews Comment: Thanks for the insightful reviews, which help refine our manuscript! We will add the responses to the revision to reflect the discussion process. Thanks again for your valuable efforts in reviewing our submission!
Summary: This paper introduces a novel latent diffusion model, Peptide design with Geometric LAtent Diffusion (PepGLAD), for the task of peptide design. The authors propose an affine transformation to project the raw Euclidean space into a standardized one, ensuring physical symmetry. Overall, I think it is a good paper, with solid experimental results and novelty in methods. Strengths: This paper proposes a latent diffusion model, Peptide design with Geometric LAtent Diffusion (PepGLAD), applied to the peptide design task. It also proposes an affine transformation to project the raw Euclidean space into a standardized one, so that physical symmetry is ensured. The main contributions of PepGLAD over GeoLDM seem to be its application to full-atom peptide generation and the introduction of a receptor-specific affine transformation. While GeoLDM [1] was designed for 3D molecule generation, PepGLAD extends this to the more complex scenario of full-atom peptide generation. The receptor-specific affine transformation is a significant contribution, as it projects the geometry to approximately N(0,I) derived from the binding site, which appears to be a novel technique in the field of latent diffusion models for molecules and proteins. Weaknesses: There are several doubts on method contributions and datasets, as detailed in the Questions. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What are the key differentiating factors of PepGLAD compared to GeoLDM [1]? While I recognize the shift from latent molecular generation to full-atom peptide generation, are there any specific model design elements (apart from the affine transformation) that set it apart from GeoLDM? Furthermore, does the model incorporate any strategies to reduce the costs associated with generating full-atom peptide scenarios? 2. The most significant contribution made by the authors in this paper is the Receptor-Specific Affine Transformation.
This transformation projects the geometry to approximately N(0,I), derived from the binding site. This approach appears to be the first of its kind applied to latent diffusion models for molecules and proteins. I am curious if there are any other works that have used this technique to ensure equivariance in their generation process, such as frame-averaging [6] [7]. 3. Compared to previously established databases [2] [3], the datasets utilized in your study appear relatively modest in size. Could you comment on the unique advantages of your proposed datasets over these larger ones? Furthermore, have you considered applying your model to existing datasets to reinforce the validity of your results? 4. I suggest adding more citations to recently proposed related works on peptide design [3] [4] [5]. [1] M Xu, et al. Geometric Latent Diffusion Models for 3D Molecule Generation [2] Z Wen, et al. PepBDB: a comprehensive structural database of biological peptide-protein interactions. [3] L Lin, et al. PPFlow: Target-Aware Peptide Design with Torsional Flow Matching [4] Osama Abdin, et al. PepFlow: direct conformational sampling from peptide energy landscapes through hypernetwork-conditioned diffusion [5] Colin A Grambow, et al. RINGER: Conformer Ensemble Generation of Macrocyclic Peptides with Sequence-Conditioned Internal Coordinate Diffusion [6] W Jing, et al. DSMBind: SE(3) denoising score matching for unsupervised binding energy prediction and nanobody design [7] Omri Puny, et al. Frame Averaging for Invariant and Equivariant Network Design Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See Appendix J, Limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments! > Q1: What are the key differentiating factors of PepGLAD compared to GeoLDM [1]? While I recognize the shift from latent molecular generation to full-atom peptide generation, are there any specific model design elements (apart from the affine transformation) that set it apart from GeoLDM? Furthermore, does the model incorporate any strategies to reduce the costs associated with generating full-atom peptide scenarios? Thanks for the question! The motivations of PepGLAD and GeoLDM are fundamentally different. We resort to a latent space where the full-atom geometry of each residue is compressed into a single node so that the diffusion can be implemented. Otherwise, when sampling residues of different types, the number of atoms varies, which is incompatible with the diffusion framework. In contrast, GeoLDM maps each atom in the small molecule to one point in the latent space, without addressing the problem we are trying to tackle here. By representing each residue's full-atom geometry as a single point in the latent space, PepGLAD inherently reduces the computational cost associated with generating full-atom scenarios. Specifically, the diffusion process only needs to handle a latent graph which is approximately 10 times smaller than the full-atom graph, resulting in improved efficiency and scalability. > Q2: The most significant contribution made by the authors in this paper is the Receptor-Specific Affine Transformation. This transformation projects the geometry to approximately N(0,I), derived from the binding site. This approach appears to be the first of its kind applied to latent diffusion models for molecules and proteins. I am curious if there are any other works that have used this technique to ensure equivariance in their generation process, such as frame-averaging [6] [7].
Yes, the proposed technique of receptor-specific affine transformation is novel and, to the best of our knowledge, the first of its kind. The technique differs from frame averaging in two respects. First, the motivations are totally different. While frame averaging aims to design equivariant models, our technique serves as a normalization skill particularly for pocket-based geometric diffusion models. Second, while frame averaging uses PCA to obtain the principal components of the coordinates, we use the Cholesky decomposition of the covariance of the coordinates to identify unique invertible affine transformations for projections between the data distribution and the standard Gaussian. > Q3: Compared to previously established databases [2] [3], the datasets utilized in your study appear relatively modest in size. Could you comment on the unique advantages of your proposed datasets over these larger ones? Furthermore, have you considered applying your model to existing datasets to reinforce the validity of your results? Our dataset is carefully curated to establish two advantages: 1. Practical relevance: Consistent with previous research [a], we have focused on peptides ranging from 4 to 25 residues in length. This range is particularly relevant to practical applications such as drug discovery, as peptides within this length exhibit favorable biochemical properties [b]. In contrast, existing datasets like PepBDB [c] directly extract all complexes from the PDB and lack filtering based on peptide lengths. 2. Redundancy and split: We remove the redundant complexes and split the dataset according to the sequence identity of the receptor to test generalization across different targets. In contrast, the existing dataset contains redundant entries and does not provide such splits. Furthermore, we did apply our model to an existing dataset, PepBDB, to reinforce the validity of our conclusions, with the results shown in Table 1 and Table 3. [a] Tsaban, Tomer, et al.
"Harnessing protein folding neural networks for peptide–protein docking." Nature Communications 13.1 (2022): 176. [b] Muttenthaler, Markus, et al. "Trends in peptide drug discovery." Nature Reviews Drug Discovery 20.4 (2021): 309-325. [c] Wen, Zeyu, et al. "PepBDB: a comprehensive structural database of biological peptide–protein interactions." Bioinformatics 35.1 (2019): 175-177. > Q4: I suggest adding more citations to recently proposed related works on peptide design [3] [4] [5]. Thanks for the suggestion! We will definitely discuss and cite these recent works in the revision. --- Rebuttal Comment 1.1: Title: Reply Comment: I am glad that my concerns are all addressed by the authors, and I will raise my rating. --- Reply to Comment 1.1.1: Title: Thanks for your valuable reviews Comment: We sincerely thank the reviewer for the effort put into providing valuable comments and questions, which help enhance our submission! We are glad to hear that your concerns have been successfully addressed. We will add the responses to the revision to improve the quality of our paper.
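The Cholesky-based whitening that the rebuttal above contrasts with PCA-based frame averaging can be illustrated with a minimal NumPy sketch (hypothetical names, not the authors' implementation; it assumes binding-site coordinates are given as an (N, 3) array and are roughly Gaussian):

```python
import numpy as np

def receptor_affine(coords):
    """Fit an invertible affine map that standardizes 3D coordinates.

    coords: (N, 3) array of binding-site positions (illustrative assumption).
    Returns (mu, L) with L the lower-triangular Cholesky factor of the
    sample covariance, so that solve(L, x - mu) is approximately N(0, I).
    """
    mu = coords.mean(axis=0)
    cov = np.cov(coords, rowvar=False)   # (3, 3) sample covariance
    L = np.linalg.cholesky(cov)          # unique lower-triangular factor
    return mu, L

def to_standard(coords, mu, L):
    # project into the standardized frame: L^{-1} (x - mu)
    return np.linalg.solve(L, (coords - mu).T).T

def from_standard(z, mu, L):
    # invert the affine map: x = L z + mu
    return z @ L.T + mu

# demo on synthetic, strongly anisotropic "pocket" coordinates
rng = np.random.default_rng(0)
pocket = rng.normal(size=(200, 3)) @ np.diag([5.0, 2.0, 0.5]) + 10.0
mu, L = receptor_affine(pocket)
z = to_standard(pocket, mu, L)
# z now has ~zero mean and ~identity covariance, and the map is invertible
```

Because the Cholesky factor is unique and triangular, the transformation is deterministic and exactly invertible, which is what allows the generated standardized coordinates to be mapped back into the receptor's own frame.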
Summary: The authors propose PepGLAD, a latent diffusion model for full-atom peptide design. This work mainly addresses two challenges for peptide design: (1) full-atom modeling, and (2) diverse binding geometry. Specifically, a variational auto-encoder is trained to learn latent representations of protein-peptide interactions. Then, a latent diffusion model is employed to capture the underlying distribution and sample binding-site-conditioned peptide conformations. The authors curated a benchmark dataset to evaluate the model performance, and compared with existing models to showcase its effectiveness. Strengths: - Full-atom modeling is significant to better represent protein-protein/peptide/ligand interactions. This work goes beyond backbone-only modeling and realizes full-atom generation. - Employing latent diffusion with a pre-trained VAE is a reasonable choice, which has recently become prevalent for protein generative modeling. - Provided a new benchmark dataset with careful curation. Proposed novel evaluation metrics and discussed limitations of existing metrics (e.g., AAR). - Under the proposed evaluation metrics, PepGLAD outperforms existing models. Weaknesses: - Dataset is small (6105 non-redundant complexes). Data augmentation attempts (ProtFrag) do not seem to help (Table 4 ablation results). Although sequence-based clustering has been utilized to prevent data leakage, training/testing the VAE and diffusion model on 6105/93 data points makes the results less convincing, *especially considering the enormous design space for sequence-structure co-design tasks*. - Guidance with an additional classifier (Sec 3.5) makes the model more complicated (which seems a bit ad hoc to me). - From the results shown in Table 1, it seems that the model is able to generate a variety of conformations (high Div.) which are reasonable ($\Delta G < 0$).
Given the extremely limited size of dataset, I am not sure if the model can properly capture the *conformation space* at all. In other words, I am a bit skeptical about the proposed evaluation metrics. Technical Quality: 2 Clarity: 3 Questions for Authors: - L#130. For each residue, the initial coordinates are initialized with the same $\vec{\boldsymbol{z}}_i$. How significant are these initial vectors? Can we use an SE(3)-invariant latent space and a regression model to generate atomic coordinates (similar to AlphaFold StructureModule, conditioned on the sampled residue type)? - Classifier guidance seems ad hoc and brings more complication to modeling. Would an architecture similar to that used in [ESM3](https://www.biorxiv.org/content/10.1101/2024.07.01.600583v1) or [PVQD](https://www.biorxiv.org/content/10.1101/2023.11.18.567666v1) help? - Table 1: out of 40 samples generated by PepGLAD, there are about 20 different conformations (Div.) and 20 valid peptides (Success). How should the readers interpret this? Peptides exhibit high structural flexibility upon binding? - Did I miss sequence recovery metrics in the model benchmark section (excluding Consistency, which only models the joint distribution)? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable efforts and comments! > W1. Dataset is small. Data augmentation attempts do not seem to help. Although sequence-based clustering has been utilized to prevent data leakage, training/testing the VAE and diffusion model on 6105/93 data points makes the results less convincing. Thanks for your comments! Indeed, to ensure our results are convincing, we follow the literature [a] and employ the 93 complexes for testing, which were carefully curated by [a] to represent a diverse and high-quality landscape of complexes. Our model, trained on a non-redundant dataset of 6K complexes, is able to deliver promising results on these testing complexes. Regarding data augmentation, which is only applied to the autoencoder, it has a notable impact on energy scores: without it, the success rate drops from 55.97% to 52.15% in Table 4. In addition, we also conducted experiments on the public dataset PepBDB [b], which is of larger size (10K samples and 190 testing complexes). These results, presented in Table 1 and Table 3, reinforce the validity and robustness of our conclusions. [a] "Harnessing protein folding neural networks for peptide–protein docking." Nature Communications 2022. [b] "PepBDB: a comprehensive structural database of biological peptide–protein interactions." Bioinformatics 35.1 (2019) > W2. Guidance with an additional classifier makes the model more complicated. Thanks! The classifier guidance is used to sample from a sequential subspace within unordered point clouds. We understand that adding guidance makes our model somewhat more complicated, but this component does further enhance the performance. To show this, we have additionally compared the structure prediction model with and without guidance in the table below. As its effect is relatively minor, we did not emphasize it as a major contribution of the paper. We will highlight this point in the revision.
|guidance|$\text{RMSD}_{C\alpha}\downarrow$|$\text{RMSD}_\text{atom}\downarrow$|$\text{DockQ}\uparrow$| |-|-|-|-| |w/|4.09|5.30|0.592| |w/o|4.10|5.34|0.582| > W3. It seems that the model is able to generate a variety of conformations (high Div.) which are reasonable. Given the extremely limited size of the dataset, I am not sure if the model can properly capture the conformation space at all. In other words, I am a bit skeptical about the proposed evaluation metrics. We understand your concern. It is true that the Diversity metric alone is insufficient for correct evaluation. Hence, we also use the Rosetta energy to evaluate fidelity and check whether the diversity is hacked by random conformations. As highlighted in Table 1, our model surpasses the baselines both on diversity and fidelity (Rosetta energy); therefore the improvement in diversity should be meaningful, as the model is properly capturing the desired distribution. These findings are further confirmed on a larger dataset, PepBDB. For more discussion on the metrics, we kindly refer to Appendix G.3. > Q1: L#130. For each residue, the initial coordinates are initialized with the same $\vec{\boldsymbol{z}}_i$. How significant are these initial vectors? Can we use an SE(3)-invariant latent space and a regression model to generate atomic coordinates? Thanks! The initial vectors from the latent equivariant space are significant, as they convey sufficient geometric information. As suggested, we further conducted an experiment discarding the equivariant latent vectors and using an SE(3)-invariant latent space instead. The table below shows that the reconstruction ability (RMSD, DockQ) of the VAE deteriorates significantly. This is reasonable, since an SE(3)-invariant latent space lacks geometric interactions with the pocket atoms, leading to difficulties in reconstructing the full-atom structures on the binding site.
|Equivariant vectors|AAR$\uparrow$|RMSD$\downarrow$|DockQ$\uparrow$| |-|-|-|-| |w/|95.1%|0.79|0.898| |w/o|93.4%|1.75|0.823| > Q2: Classifier guidance seems ad hoc and brings more complication to modeling. Would an architecture similar to that used in ESM3 or PVQD help? Thanks for the question! Please see our response to W2 for the explanation of the classifier guidance. The suggested architectures, such as ESM3 and PVQD, are SE(3)-invariant, which makes them ill-suited to integrating the geometric conditions of binding sites. They are primarily designed to generate entire proteins in an arbitrary coordinate system. However, our pocket-based generation requires the generated structures (i.e. the output) to be located in the same coordinate system as the pocket (i.e. the input), thereby necessitating SE(3)-equivariant modeling. > Q3: Table 1: out of 40 samples, there are about 20 different conformations (Div.) and 20 valid peptides (Success). How should the readers interpret this? Peptides exhibit high structural flexibility upon binding? Thanks! To better interpret the results in Table 1, we additionally calculate the diversity of generated peptides with successful binding (dG<0) in the table below. It shows that our generated peptides exhibit high structural flexibility upon successful binding. Thank you again for raising this valuable question; we will add this new result to the revised paper. |Div.|Div. ($\Delta G < 0$)| |-|-| |0.506|0.632| > Q4: Did I miss sequence recovery metrics in the model benchmark section? Sorry for the confusion. We did analyze the sequence recovery metric, namely Amino Acid Recovery (AAR), in Appendix G.1 on a large dataset consisting of 600k peptide sequences against 328 receptors. Further, as shown in the table below, a high AAR might not be meaningful or reliable due to the extensive diversity in peptide binding. For example, HSRN achieves an AAR close to our method's, but its success rate is much lower.
Please find more details in our analyses in Appendix G.1, and we are willing to include the AAR results into the main body if requested. |Model|AAR|Success| |-|-|-| |HSRN|35.8%|10.46%| |DiffAb|37.1%|49.87%| |PepGLAD (ours)|36.7%|55.97%| --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Dear Authors, Thanks for your response. Most of my questions have been properly addressed. Although some limitations still remain, e.g., dataset size and benchmark metrics, I think researchers can refer to this work for some insights and know-how to solve their domain specific challenges. To that end, I have raised my rating and support the acceptance of this manuscript. Best, --- Reply to Comment 1.1.1: Title: Thanks for the constructive reviews Comment: Thanks for the constructive comments and questions! We will add the responses, as well as the remaining limitations to the revision, to better enhance the quality of our submission. Thanks again for your efforts in reviewing our submission!
NeurIPS_2024_submissions_huggingface
2024
How Sparse Can We Prune A Deep Network: A Fundamental Limit Perspective
Accept (poster)
Summary: The authors build on the work of Larsen et al. [1] to estimate the fundamental limit of pruning, i.e., the smallest achievable density of a network obtained by pruning. This is done by estimating the statistical dimension of the neural network and leveraging convex geometry. Similar to Larsen et al. [1], the authors provide a lower bound on the smallest density, i.e., the sparsest parameter set that still generalizes well. They also provide an upper bound on this density, showing that such a dense network exists. The authors provide computational methods to estimate these bounds reasonably for a neural network and empirically validate them. Overall, the work provides a theoretical characterization of the pruning limit of a network. Strengths: 1. The proposed bounds validate that flatter networks allow more parameters to be pruned, as well as the use of magnitude-based parameter pruning, which has consistently outperformed other pruning criteria. 2. Algorithms to compute these bounds are also provided and empirically validated. 3. Experiments across image classification tasks are provided which align with the computed limits on pruning. Weaknesses: 1. The analysis of Figure 4 is unclear to me. I cannot follow the interpretation of the provided plots to infer that iterative pruning can be beneficial. The provided plots suggest that removing a large fraction of parameters, i.e., having a small pruning ratio, increases the value of R, which is not desirable according to the theory, for a dense network. However, does this also hold for a partially sparse network which is iteratively pruned, as in the case of IMP? How does pruning iteratively mitigate this trend? 2. The authors have also not confirmed this insight via experiments with L1 regularization, i.e., whether iterative pruning helps over one-shot pruning. I believe the authors need to give a detailed explanation of Section 7. 3.
In addition, can the authors comment on the connection between the pruning fractions shown in Figure 4 and the observations made by Paul et al. [2] for IMP (Figure 5 in Paul et al. makes a similar analysis)? Is there an optimal pruning fraction for an iterative method like IMP? The terms pruning ratio, pruning threshold, and sparsity are used interchangeably without being concretely defined, which makes the paper difficult to follow. I would urge the authors to correct this and clarify that the pruning ratio refers to the number of nonzero parameters left in the network after pruning (it seems so from the derivation?). The use of sparsity is incorrect in Table 9; sparsity denotes the number of zero parameters in the network. Technical Quality: 3 Clarity: 3 Questions for Authors: See above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper provides a theoretical framework to estimate the pruning limit of a neural network, building on recent work in this direction by Paul et al. [2] and Larsen et al. [1] via a convex geometry viewpoint, which can potentially be a useful contribution to the sparsity community. I am happy to increase my score further if my concerns are sufficiently addressed. [1] Larsen, Brett W., et al. "How many degrees of freedom do we need to train deep networks: a loss landscape perspective." International Conference on Learning Representations. 2021. [2] Paul, Mansheej, et al. "Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?" The Eleventh International Conference on Learning Representations. 2022. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading and constructive comments. **Weakness 1: The analysis of Figure 4 is unclear to me. I cannot follow the interpretation of the provided plots to infer that iterative pruning can be beneficial. The provided plots suggest that removing a large fraction of parameters, i.e., having a small pruning ratio, increases the value of R, which is not desirable according to the theory, for a dense network. However, does this also hold for a partially sparse network which is iteratively pruned like in the case of IMP; how does pruning iteratively mitigate this trend?** In fact, in Fig. 4 we do not assert that iterative pruning itself is inherently beneficial. Instead, our aim is to demonstrate that in Iterative Magnitude Pruning (IMP), the lower bound of the pruning ratio increases after each pruning iteration. When the previously set pruning ratio is less than this theoretical limit, the pruning ratio should be adjusted to avoid incorrect pruning. Consequently, fewer weights need to be pruned in subsequent iterations, leading to an adjustment of the pruning ratio. This forms the basis of the pruning ratio adjustment algorithm. **Weakness 2: The authors have also not confirmed this insight via experiments with L1 regularization, i.e., whether iterative pruning helps over one-shot pruning. I believe the authors need to give a detailed explanation of Section 7.** We did not compare the performance of iterative pruning with one-shot pruning directly. Instead, our objective is to use our theoretical results to elucidate why iterative pruning requires adjustment of the pruning ratio. Additionally, we aim to explain why, in some iterative pruning algorithms, the use of regularization versus no regularization can lead to discrepancies in the final performance of the sparse network. **Weakness 3: In addition, can the authors comment on the connection to the pruning fractions shown in Figure 4 to the observations made by Paul et al.
[2] for IMP (Figure 5 in Paul et al. makes a similar analysis). Is there an optimal pruning fraction for an iterative method like IMP?** Figure 4 in our paper and Figure 5 in Paul et al. utilize the same theoretical results to illustrate that in Iterative Magnitude Pruning (IMP), the network cannot be pruned to the target sparsity directly, necessitating iterative pruning. In contrast, our paper shows that after each pruning step, the lower bound of the pruning ratio increases, indicating that the pruning proportion must be adjusted in iterative pruning. The determination of the optimal pruning ratio for iterative methods is an intriguing research question, and we are actively exploring this area. **Weakness 4: The terms pruning ratio, pruning threshold, and sparsity are used interchangeably without being concretely defined, which makes the paper difficult to follow. I would urge the authors to correct this and clarify that the pruning ratio implies the number of nonzero parameters left in the network after pruning (it seems so from the derivation?). The use of sparsity is incorrect in Table 9; sparsity denotes the number of zero parameters in the network.** Thanks for your suggestion; pruning ratio and sparsity refer to the fraction of nonzero parameters left in the network after pruning, and we will correct this in the revised version. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for providing clarifications on W1 and W3, these were especially confusing to me. So if I understand correctly, the number of parameters that can be pruned away reduces in each iteration, and this motivates the need for iterative pruning? --- Reply to Comment 1.1.1: Comment: Thank you for your response! Your understanding is correct in that the percentage of removable parameters indeed decreases with the iterations.
However, this is not the motivation for iterative pruning; rather, it just motivates the need for an *adaptive* pruning ratio *if iterative pruning is used*. As a consequence, this illustrates the sub-optimality of a fixed pruning ratio in IMP. As a comparison, Figure 5 in the work of Paul et al., which you mentioned previously, serves to demonstrate the need for iterative pruning if an arbitrary target sparsity (say 2%) is set in advance and a suboptimal pruning scheme (whose pruning lower bound is 30%, for example) is used; otherwise, the target sparsity cannot be achieved. Furthermore, regarding the fundamental performance comparison between one-shot magnitude pruning and iterative pruning, extensive simulations have shown that the former can achieve better performance than the latter if the regularization coefficient is carefully selected. So it is sensible to conjecture that iterative pruning cannot beat $l_1$-regularized one-shot magnitude pruning (LOMP) from the perspective of the fundamental limit of pruning, i.e., iterative pruning is unnecessary in terms of the fundamental pruning limit. Rigorous proofs and demonstrations regarding this conjecture are still under investigation.
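Since the discussion above hinges on the convention that the "pruning ratio" is the fraction of nonzero parameters *kept* (not removed), a hypothetical sketch of one-shot magnitude pruning may help pin the terminology down; this is an illustration only, not the authors' code, and `rho` and the function name are invented for the example:

```python
import numpy as np

def one_shot_magnitude_prune(weights, rho):
    """Keep the top `rho` fraction of weights by absolute value.

    weights: flat array of parameters.
    rho: pruning ratio in the paper's sense -- the fraction of
         NONZERO parameters left after pruning (not the fraction removed).
    """
    k = int(np.ceil(rho * weights.size))
    if k == 0:
        return np.zeros_like(weights)
    threshold = np.sort(np.abs(weights))[-k]   # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask

# demo: prune a random weight vector down to a 2% pruning ratio
rng = np.random.default_rng(1)
w = rng.normal(size=1000)
pruned = one_shot_magnitude_prune(w, rho=0.02)
density = np.count_nonzero(pruned) / w.size   # fraction of weights kept
```

Iterative magnitude pruning would apply a step like this repeatedly with intermediate retraining; the rebuttal's point is that the smallest feasible `rho` (the theoretical lower bound) rises after each such step, which is why a fixed per-round ratio can become suboptimal.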
Summary: This paper leverages the framework of statistical dimension in convex geometry to characterize the sharp phase transition point, i.e., the fundamental limit of the pruning ratio. Two key factors are found to be important for pruning: weight magnitude and network flatness. The flatter the loss landscape or the smaller the weight magnitude, the smaller the achievable pruning ratio. The theoretical analysis also aligns well with the empirical results, demonstrating the validity of the theory. Strengths: 1. The powerful approximate kinematics formula in convex geometry is leveraged to provide a very tight characterization of the fundamental limit of network pruning. 2. It is quite impressive to see the theoretical results perfectly coincide with the experiments. 3. Moreover, the theoretical study can be extended to two common tricks in pruning: gradual pruning and $l_2$ regularization in Rare Gems. 4. Code is provided. Weaknesses: 1. One thing that sounds strange to me is the conclusion that "The smaller the network flatness (defined as the trace of the Hessian matrix), the more we can prune the network", which is contrary to the previous common belief, i.e., that the flatter the network's landscape, the easier it is to prune. https://arxiv.org/pdf/2205.12694. Could the authors clarify these two counterarguments? 2. While the theoretical proof looks sound, the empirical results in this paper look strange to me. For instance, it is weird to see that the accuracy stays the same at a 100% pruning ratio but drops significantly as it goes below 2%. Does the pruning ratio here stand for something different than the common definition, i.e., the ratio of weights that are zero? 3. Can the authors explain why RN50 achieves worse accuracy than RN18 on TinyImageNet? 4. Which pruning algorithm is used for Figure 3? It is surprising to see that we can preserve the original performance at a 2% ratio for RN50 on TinyImageNet.
Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the above weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1: One thing that sounds strange to me is that one conclusion is that "The smaller the network flatness (defined as the trace of the Hessian matrix), the more we can prune the network", which is contrary to the previous common belief, i.e., the flatter the network's landscape, the easier to be pruned. https://arxiv.org/pdf/2205.12694. Could the authors clarify these two counterarguments?** Thank you for the insightful question. We should indeed use sharpness (defined as the “trace” of the Hessian matrix), where smaller sharpness allows greater pruning. This definition will be adopted in the revised version to avoid confusion. **Weakness 2: While the theoretical proof looks sound, the empirical results in this paper look strange to me. For instance, it is weird to see that the accuracy maintains the same at a 100% pruning ratio, but drops significantly as it goes below 2%. Does the pruning ratio here stand for a different thing than the common definition, i.e., the ratio of weights that are zero?** We define the pruning ratio as the proportion of remaining weights; 100% means no pruning is performed. **Weakness 3: Can the authors explain why RN50 achieves worse accuracy than RN18 on TinyImageNet?** We are not sure which data the accuracy refers to in this context. If it refers to the test accuracy of the network, then Table 9 shows that the accuracy of RN50 is higher. **Weakness 4: Which pruning algorithm is used for Figure 3? It is surprising to see that we can preserve the original performance at 2% ratio, for RN50 on TinyImageNet.** We first train the RN50 model with L1 regularization on TinyImageNet. After training, we prune the network by removing the smallest-magnitude weights. Details of the hyperparameters used during training are provided in the appendix. Additionally, a comparison of the pruning algorithms is included in the appendix.
Since L1 regularization-based magnitude pruning is not a new technique and not the primary focus of this paper, we have placed it in the appendix. --- Rebuttal 2: Title: After reading rebuttal Comment: I thank the authors for their response. Defining the pruning ratio as the proportion of remaining weights makes no sense to me. I suggest changing it to the opposite. I am not an expert in the theoretical study of pruning/sparsity. I will keep my original score and defer the assessment of the theoretical analysis to the AC and other reviewers. --- Rebuttal Comment 2.1: Comment: Thank you very much for your time and constructive comments. We would like to clarify that it is acceptable to use either the proportion of remaining weights or the proportion of removed weights as the pruning ratio. When the latter is used, the theoretical result becomes 1 minus our theoretical result, which does not affect the correctness of the theory. We will include this clarification in the revised version. Once again, thank you for your time and comments, which have contributed to improving the quality of our paper.
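For concreteness, the one-shot magnitude-pruning step described in this rebuttal (train with L1 regularization, then remove the smallest-magnitude weights) can be sketched as follows. This is our illustrative sketch, not the authors' code; it uses the paper's convention that the pruning ratio is the proportion of weights that *remain*, and the function name and tie-handling are our own choices.

```python
def magnitude_prune(weights, pruning_ratio):
    """One-shot magnitude pruning (illustrative sketch).

    `pruning_ratio` follows the paper's convention: the proportion of
    weights that REMAIN after pruning (1.0 means no pruning at all).
    Weights tied with the threshold magnitude are all kept, so slightly
    more than the requested fraction may survive.
    """
    n_keep = max(1, round(pruning_ratio * len(weights)))
    # Magnitude threshold: the n_keep-th largest absolute value.
    threshold = sorted((abs(w) for w in weights), reverse=True)[n_keep - 1]
    # Zero out every weight strictly below the threshold.
    return [w if abs(w) >= threshold else 0.0 for w in weights]
```

For example, with `pruning_ratio=0.4` on a five-weight vector, only the two largest-magnitude entries survive; under the opposite convention (proportion removed) one would simply pass `1 - pruning_ratio`, mirroring the "1 minus our theoretical result" remark above.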
Summary: The paper investigates the theoretical limits of how sparse a network can be pruned without sacrificing performance. The authors formulate pruning as a convex geometry problem. By imposing sparsity constraints on the loss function, they show that the pruning ratio can be bounded by the width of the loss landscape and the magnitude of the weights. They also propose a spectrum estimation algorithm for dealing with large Hessian matrices. The empirical results align well with the theoretical findings. Strengths: * Understanding the fundamental limits of network pruning is an important problem, and the insights provided by this paper (weight magnitude and network flatness as key factors) could potentially influence future research in the area. * The authors have provided detailed proofs and derivations for the theoretical results. * The experiments conducted are extensive enough to validate the theoretical findings, and there is a strong alignment between the theoretical and empirical results. Weaknesses: * Limited technical novelty: applying convex geometry to attain similar results (e.g. bounds dependent on the Gaussian width) has been explored in prior works, some of which are cited in the related work section. However, the paper lacks discussion of how this work differs from or builds upon existing literature. * More intuition on the theoretical results would be helpful. For example, loss flatness is one of the key factors discussed in the paper, yet only its formal definition (trace of the Hessian) is given. Intuitions such as "smaller flatness (i.e., a flatter loss landscape) means the loss function is less sensitive to perturbations in the weights" would be helpful. * Inconsistencies in notation: For example, from equation (17) onwards, the loss sublevel set notation takes the weights as its argument. Is this the same loss sublevel set defined in equation (2), which takes epsilon as its argument?
And how is the epsilon parameter chosen for the loss sublevel set? * I would define "pruning ratio" earlier in the paper. It was not clear until the contributions section that smaller pruning ratios mean more pruning of the network. * The authors have introduced "pruned ratio", which equals 1 − pruning ratio, for Table 2. This is the only place in the paper where this term is used, which is confusing. Why did the authors not stick to "pruning ratio" throughout the paper? * Please consider removing adjectives like "powerful" when describing convex geometry and the approximate kinematics formula, or "very tight characterization." They are subjective and do not add much to the clarity of the paper. * Some typos like "Lottery Tickets Hypolothsis" (line 77). * Some terms like SLQ (line 82) are not defined until later in the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: * What is novel here in light of prior work showing results connecting Gaussian widths to recovery thresholds [4]? * How sensitive are the theoretical bounds to the choice of loss function? Do the results generalize across different loss functions commonly used in deep learning? * How computationally expensive is the proposed spectrum estimation algorithm compared to existing methods? Is it practical for very large networks? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback! The concerns and questions in the review are addressed as follows. **Weakness 1: Limited technical novelty: applying convex geometry to attain similar results (e.g. bounds dependent on the Gaussian width) has been explored in prior works, some of which are cited in the related work section. However, the paper lacks discussion of how this work differs from or builds upon existing literature.** Although previous work has employed convex geometry tools to derive lower bounds for network pruning, these bounds might be far from the actual pruning limit. In contrast, we employ more powerful tools in convex geometry, i.e., the statistical dimension and the approximate kinematic formula, thus enabling us to obtain a sharp threshold of pruning. Moreover, our research tackles the pruning problem in a more systematic way, and through our theoretical results, we explicitly identify two key factors that determine the pruning limit, i.e., the network flatness and the weight magnitude, which is, in our opinion, valuable for better understanding existing pruning algorithms and for designing new, more efficient ones. **Weakness 2: More intuitions on the theoretical results would be helpful. For example, loss flatness is one of the key factors discussed in the paper, and only its formal definition (trace of the Hessian) is given in the paper. Intuitions like smaller flatness (i.e. flatter loss landscape) means the loss function is less sensitive to perturbations in the weights would be helpful.** Thank you for the suggestion. In the revised version, we will include more intuitive explanations to enhance the understanding of the paper. **Weakness 3: Inconsistencies in notations: For example, from equation (17) onwards, the loss sublevel set notation takes in the weights as the argument. Is this the same loss sublevel set defined in equation (2), which takes in epsilon as the argument?
And how is the epsilon parameter chosen for the loss sublevel set?** These are, in fact, two different sublevel sets. Consequently, epsilon should be included in the sublevel set notation of Equation 17; we will correct this in the revised version. We compute the standard deviation of the loss across different batches of the training set and use it as epsilon. **Weakness 4: I would define "pruning ratio" earlier in the paper. It was not clear until the contributions section that smaller pruning ratios means more pruning of the network.** Thanks for your suggestion; we will define the pruning ratio earlier in the revised version. **Weakness 5: The authors have introduced "pruned ratio", which equals to 1 - pruning ratio, for Table 2. This is the only place in the paper where this term is used. This is confusing. Why did the authors not stick to "pruning ratio" throughout the paper?** Good suggestion; we will change it to "pruning ratio" in the revised version. **Weakness 6: Please consider removing adjectives like "powerful" when describing convex geometry and approximate kinematics formula, or "very tight characterization." They are subjective and do not add much to the clarity of the paper.** We will correct this in the revised version. **Weakness 7: Some typos like "Lottery Tickets Hypolothsis" (line 77)** Thanks for pointing this out; we will correct the typos in the revised version. **Weakness 8: Some terms like SLQ (line 82) are not defined until later in the paper.** We will define SLQ before its first use in the revised version. **Question 1: What is novel here in light of prior work showing results connecting Gaussian widths to recovery thresholds [4]?** Ref. [4] and our work share the similarity that they take advantage of convex geometry to characterize the threshold of given inference or learning problems.
The main difference between them lies in: 1) Our work significantly extends the application of convex geometry from a specific inference problem to a general learning problem: Ref. [4] specifically addresses the recovery threshold of the *linear* inverse problem, which is a classical statistical *inference* problem. On the other hand, our work tackles a statistical *learning* problem with a general DNN architecture and an *arbitrary loss function*. The methods and results in Ref. [4] simply cannot be translated to the problem we address in our work. 2) The threshold in our work is normally *sharper* than in Ref. [4]: Ref. [4] utilizes the Gaussian width to characterize the recovery threshold, which unfortunately can only provide a lower bound (*necessary condition*), since the key result Ref. [4] relies on, i.e., Gordon's escape-through-a-mesh theorem, only provides a necessary condition. In contrast, our work leverages the statistical dimension framework and the accompanying approximate kinematic formula, whose necessary and sufficient conditions almost coincide, and is thus capable of providing sharp thresholds. **Question 2: How sensitive are the theoretical bounds to the choice of loss function? Do the results generalize across different loss functions commonly used in deep learning?** Our theory only concerns the pruning limits of well-trained models and does not address the loss function during the training process. In other words, our theoretical results apply to one-step pruning for all well-trained models. **Question 3: How computationally expensive is the proposed spectrum estimation algorithm compared to existing methods? Is it practical for very large networks?** In fact, our proposed improved algorithm introduces only a modest increase in computational complexity compared to existing methods. For instance, with a fixed sample size of 100, it requires the computation of only 200 additional derivatives.
These computations do not necessitate the use of the entire dataset and can be performed on a subset. Furthermore, for very large networks, the improved algorithm provides a more accurate estimation of the eigenspectrum. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal addressing the concerns and questions raised. The explanation of the differences compared to [4] is appreciated. The clarifications on how the statistical dimension framework enables deriving sharper bounds are helpful for understanding the paper's contributions. I am slightly raising my score. --- Reply to Comment 1.1.1: Comment: Thank you once again for your time, support, and constructive feedback, as well as the score increase. We sincerely appreciate your input, which has significantly contributed to improving the quality of our paper. We will add clarifications on how the statistical dimension framework enables deriving sharper bounds in our revised version.
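The rebuttal does not spell out the improved spectrum estimation algorithm itself, but the reason Hessian spectrum quantities are tractable for large networks at all is that they only require matrix-vector products, never the full matrix. As a hedged illustration of this idea — a standard Hutchinson trace estimator, *not* the authors' improved SLQ variant — estimating the Hessian trace that defines flatness/sharpness could look like:

```python
import random

def hutchinson_trace(matvec, dim, n_samples=200, seed=0):
    """Estimate tr(A) as (1/m) * sum_i z_i^T (A z_i) with Rademacher
    probe vectors z_i.  Only the product A @ z is needed, which for a
    Hessian can be obtained via Hessian-vector products on a data subset
    rather than by materializing the matrix."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        z = [rng.choice((-1.0, 1.0)) for _ in range(dim)]
        az = matvec(z)  # A @ z, supplied by the caller
        total += sum(zi * ai for zi, ai in zip(z, az))
    return total / n_samples
```

For a diagonal matrix the estimate is exact, since each Rademacher entry satisfies $z_i^2 = 1$; for general matrices the error decays with the number of probes.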
Summary: This paper tries to answer the question of how sparse a deep neural network can be pruned without increasing the loss function. The authors employ high-dimensional geometry tools such as the statistical dimension, the Gaussian width, and the approximate kinematic formula to derive lower and upper bounds on the ratio of weights that can be pruned. The authors also provide an improved algorithm for estimating the eigenvalues of large Hessian matrices, which is used to estimate the Gaussian width. Strengths: 1. This paper provides an interesting perspective for analyzing the limits of network pruning. 2. The tools from convex geometry are applied to the problem in a novel way. The derivations of the lower and upper bounds are solid. 3. The results provide insights into the problem of pruning neural networks and explain certain behaviors from previous work. 4. The experiments show a certain match between the theoretical bounds and the actual performance. Weaknesses: 1. The derived upper and lower bounds on the pruning ratio depend on the new weights of the pruned networks. This is strange to me, meaning that the limit of the pruning ratio varies when different weights are used. This also limits the application of the results. 2. The numerical experiments are performed using the L1-regularized loss function. However, this is not considered in the theoretical results. I wonder if the bounds will be different if the loss function is changed. 3. One-shot magnitude pruning ignores the interaction between the weights, which has been shown to be suboptimal in some recent work. However, the authors claim that it is optimal for the lower and upper bounds. I believe this is because the L1 regularization is used as the loss function, which already takes that into account. This should be clarified. 4. The authors claim that the upper and lower bounds match each other when the proposed L1-regularized loss function is used for training.
This might not hold when the original loss is without L1 regularization. This should also be discussed and not overclaimed in general cases. 5. The numerical comparison did not consider many state-of-the-art pruning methods (which also prune after training), such as: [1] R. Benbaki et al., Fast as CHITA: Neural Network Pruning with Combinatorial Optimization; [2] L. You et al., SWAP: Sparse Entropic Wasserstein Regression for Robust Network Pruning; [3] X. Yu et al., The combinatorial brain surgeon: Pruning weights that cancel one another in neural networks. 6. Relating to the previous comment, the magnitude pruning method used here minimizes the bounds proposed here, which is then used as support for their usefulness. However, in many cases, the network is not pruned to its limit. 7. The title is overclaimed. While the bounds proposed here are sometimes useful, they are not fundamental limits, since there are approximations and relaxations used in the derivations. Moreover, the upper and lower bounds do not match each other in general. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weakness part. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes, the authors did address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful feedback. Below are our detailed responses to your concerns: **Weakness 1: The derived upper bound and lower bound on the pruning ratio depend on the new weights of the pruned networks. This is strange to me, meaning that the limit of the pruning ratio varies when a different weight is used. This also limits the application of the results.** Good question. In fact, this is the key point in pruning bounds. We aim for the performance of the sparse network to closely match that of the dense network. In other words, the performance of the sparse network is bounded by that of the dense network. If the weights of the dense model change, the performance bound of the sparse network also changes, leading to a different pruning ratio bound. **Weakness 2: The numerical experiments are performed using the L1 regularized loss function. However, this is not considered in the theoretical results. I wonder if the bounds will be different if the loss function is changed.** In fact, we are considering the pruning limit of a well-trained model without special focus on the loss function. Our theoretical results actually apply to all well-trained models, regardless of the loss function. **Weakness 3: The one-shot magnitude pruning ignores the interaction between the weights, which has been shown to be suboptimal in some recent work. However, the authors claim that it is optimal for the lower and upper bounds. I believe this is because the L1 regularization is used as the loss function, which already takes that into account. This should be clarified.** Thank you for the reminder; we will add a clarification to the paper: In the absence of L1 regularization, although the theoretical lower bound for magnitude pruning is the lowest, the achievable upper bound for magnitude pruning may not be the lowest. In general, if the optimal algorithm is not clear, magnitude pruning can be considered a viable choice.
**Weakness 4: The authors claim that the upper and lower bounds match each other when the proposed L1 regularized loss function is used for training. This might not hold when the original loss is without L1 regularization. This should also be discussed and not overclaimed in general cases.** Great reminder. In fact, we only claimed that the upper and lower bounds match when the proposed L1 regularized loss function is used for training. This might not hold in other cases, and we will make this point clearer in the revised version. **Weakness 5: The numerical comparison did not consider many state-of-the-art pruning methods (which are also pruning after training) such as [1] R. Benbaki et al., Fast as CHITA: Neural Network Pruning with Combinatorial Optimization; [2] L. You et al., SWAP: Sparse Entropic Wasserstein Regression for Robust Network Pruning; [3] X. Yu et al., The combinatorial brain surgeon: Pruning weights that cancel one another in neural networks.** In fact, since we proved that the lower bound of magnitude pruning is smaller than that of other pruning methods, and that the upper bound of magnitude pruning is more likely to match the lower bound, we focused solely on magnitude pruning. We will include a comparison with other algorithms in our revised version. **Weakness 6: Relating to the previous comment, the magnitude pruning method used here minimizes the bounds proposed here, which then is used as a support for their usefulness. However, in many cases, the network is not pruned to its limit.** This is true; in general, the lower bound for magnitude pruning is the lowest among pruning methods. However, not all upper bounds for magnitude pruning match this lower bound. If we consider the lower bound as the limit, many models in practice will not be pruned to this extent. Nonetheless, our theoretical results highlight two key factors that determine the network pruning limit: the flatness of the loss landscape and the magnitude of the network weights.
**Weakness 7: The title is overclaimed. While the bounds proposed here are sometimes useful, they are not fundamental limits since there are approximations and relaxation used along the derivations. Moreover, the upper and lower bounds do not match each other in general.** Thank you for your comments. Although the upper and lower bounds generally do not coincide exactly, their gap is indeed negligibly small. Moreover, our theoretical results identify two key factors that influence network pruning: the flatness of the loss landscape and the magnitude of the network weights. By constraining these factors, such as using L1 regularization to control the magnitude, the upper and lower bounds are more likely to match. Additionally, despite employing approximations in our derivations, the results we obtain are actually nearly exact asymptotically, as supported by the close match between our theoretical results and the simulation outcomes. --- Rebuttal Comment 1.1: Comment: **Weakness 1: The derived upper bound and lower bound on the pruning ratio depend on the new weights of the pruned networks. This is strange to me, meaning that the limit of the pruning ratio varies when a different weight is used. This also limits the application of the results.** Our previous response seems not to have addressed your concern in the right direction, so we would like to supplement it as follows: We acknowledge that this is indeed a sharp observation. The pruning limit we provide does depend on the trained weights; however, this does not mean that the limit will change with different weights (due to different initializations, for example). In fact, in our pruning limit formula, the pruning limit depends on the weights only in a *global* way. To be specific, it depends on the projection distance, i.e., the *sum* of the squares of the removed weights, and on the *bulk* spectrum of the associated Hessian matrix.
Therefore, if the final weights after training obey a given distribution (for example, a heavy-tailed distribution) regardless of the initialization (which is supported by both empirical and theoretical studies, such as Mahoney's work), the pruning limit will be the same. Simulation results validate this in Table 2. [1] Lynn, Christopher W., Caroline M. Holmes, and Stephanie E. Palmer. "Heavy-tailed neuronal connectivity arises from Hebbian self-organization." Nature Physics 20.3 (2024): 484-491. [2] Martin, Charles H., and Michael W. Mahoney. "Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning." Journal of Machine Learning Research 22.165 (2021): 1-73. [3] Martin, Charles H., Tongsu Peng, and Michael W. Mahoney. "Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data." Nature Communications 12.1 (2021): 4122. **Weakness 3: The one-shot magnitude pruning ignores the interaction between the weights, which has been shown to be suboptimal in some recent work. However, the authors claim that it is optimal for the lower and upper bounds. I believe this is because the L1 regularization is used as the loss function, which already takes that into account. This should be clarified.** We realize that we have not fully addressed your concern, so we would like to update our response as follows: You're correct. L1 regularization is very important to ensure that the lower and upper bounds of the pruning limit match. Furthermore, L1 regularization is a very natural relaxation of L0 regularization if weight sparsity is encouraged. In this sense, we think our pruning limit resulting from the L1 regularization is capable of characterizing the fundamental limit of one-shot pruning. --- Rebuttal 2: Comment: Sincere thanks once again for your feedback.
Below is our response: **Weakness 1: I am not so sure about the statement that different initializations will result in nearly the same empirical distribution of the weights. Table 2 doesn't seem to support this statement.** The experiments in Table 2 were conducted using independent initializations (with the same initialization method), and the results indicate that their pruning limits are largely consistent. We also performed a statistical analysis of the weight distributions and calculated the total variation distance between them. Below is a summary of the experimental results, which we will include in the revised version. The experiments suggest that independent initializations do not lead to significant differences in the final weight distributions. For reasons of time, the table is simplified.

| Task | TV distance of weight distributions, mean (std) |
| :---: | :---: |
| VGG, CIFAR10 | 0.02 (0.02) |
| AlexNet, CIFAR10 | 0.01 (0.008) |
| ResNet18, CIFAR100 | 0.04 (0.04) |
| ResNet50, CIFAR100 | 0.03 (0.02) |
| ResNet18, TinyImageNet | 0.02 (0.03) |
| ResNet50, TinyImageNet | 0.03 (0.02) |

**Weaknesses 2, 3 and 4: The bounds provided in the paper should also work for networks trained with the original loss function. To answer whether L1 regularization would change the pruning ratio limit, it would be helpful to compare the limits and the actual pruning performance for both networks trained with and without the L1 regularization.** You're right, and thank you for your suggestion. The bounds provided in the paper indeed also work for networks trained with the original loss function, and we will include experiments related to the original loss in the revised version. **Weakness 6: So when $\epsilon$ decreases further, will the pruning limit eventually lead to a dense network? (e.g. with about 30% of the weights pruned) In that case, the performance of magnitude pruning should be quite far away from other optimization-based methods.
I wonder what the implications of the bounds are in those regimes.** If $\epsilon$ is significantly reduced, the amount of pruning that can be done will indeed decrease, as our tolerance for performance degradation also becomes smaller. Therefore, $\epsilon$ should not be chosen arbitrarily, as values that are too high or too low are not appropriate. We compute the standard deviation of the loss across different batches of the training set and use it as $\epsilon$.
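The total variation distances between weight distributions reported earlier in this rebuttal can be computed from binned weight histograms. A minimal sketch, assuming the histograms (over a shared set of bins) have already been produced upstream; the function name is our own:

```python
def tv_distance(hist_p, hist_q):
    """Total variation distance between two discrete distributions given
    as (possibly unnormalized) histograms over the same bins:
    TV(p, q) = (1/2) * sum_i |p_i - q_i| after normalizing each histogram
    to sum to 1.  Identical distributions give 0; disjoint supports give 1."""
    sp, sq = float(sum(hist_p)), float(sum(hist_q))
    return 0.5 * sum(abs(p / sp - q / sq) for p, q in zip(hist_p, hist_q))
```

With this convention, the small mean values in the table (0.01–0.04 on a 0–1 scale) indicate that independently initialized runs end up with nearly identical weight distributions.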
NeurIPS_2024_submissions_huggingface
2024
FedSSP: Federated Graph Learning with Spectral Knowledge and Personalized Preference
Accept (poster)
Summary: This paper focuses on cross-domain Federated Graph Learning, where the graph data stored on clients exhibits negative domain structural shifts. The authors observe the presence of spectral biases as a reflection of structural shifts. Thereafter, this work proposes Generic Spectral Knowledge Sharing (GSKS), which allows clients to share certain components containing generic spectral knowledge. Moreover, it proposes Personalized Graph Preference Adjustment (PGPA) to satisfy the preferences of clients for more suitable local applications. Extensive experiments on different scenarios demonstrate the superiority of the proposed method. Strengths: 1. Well-motivated. In particular, it first observes the spectral biases as a reflection of structural shifts and further provides a targeted solution to address the biases. Moreover, the authors acknowledge that the unique preferences of the datasets in this scenario may lead to the global message passing not being fully applicable to local data, and accordingly propose appropriate solutions for the particularity of the datasets. 2. Very easy to follow. The figures are well presented. In particular, the authors provide a well-designed motivation visualization, helping with the understanding of the specific challenges and the framework details. In addition, the precise framework presentation aids in the comprehension of each module's role. 3. Structural heterogeneity is a common challenge in federated graph learning. The innovative solution presented in this paper offers a new perspective for future advancement by promoting improvements in spectral GNNs instead of traditional ones. It is not necessarily limited to cross-domain scenarios but can also bring new ideas for addressing structural heterogeneity in other non-IID federated graph learning contexts. Weaknesses: 1. Lack of a comprehensive comparative analysis of existing FGL methods. For instance, FGSSL appears in the performance table but is not comparatively analyzed in the introduction.
The authors should clarify the shortcomings of this method in the given scenario. Technical Quality: 3 Clarity: 3 Questions for Authors: I expected a more detailed comparative analysis of the FGL baselines mentioned in the weaknesses. Could the authors provide a discussion of this? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: This work solves the problems in cross-domain settings well. I wonder whether it is generalizable enough to be extended to other non-IID problems, such as each client having a portion of the same graph dataset. It is necessary to add relevant experiments to demonstrate that this method is not limited to cross-domain settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer cuMU: Thank you for your thorough review and the kind words about our well-motivated study and handling of structural heterogeneity. We sincerely appreciate your time and effort. We hope that our responses below will address your concerns. ### Weakness **W1: Lack of a comprehensive comparative analysis of the existing FGL method FGSSL.** A1: In scenarios with strong structural heterogeneity across datasets and domains, alignment to global structural knowledge in FGSSL is inevitably negatively impacted. Even with adjustments to align client neighbor-modeling knowledge to the global GNN, it remains challenging to aggregate a highly generalizable global model under domain structural bias. Furthermore, even though its node-level strategies are beneficial for better graph modeling, the effectiveness of the method decreases under the influence of structure-biased message passing. Specifically, clients are inevitably influenced by the message-passing schemes of GNNs trained on graph data with significantly different domain structures, and thus inevitably fail to model their local graph data accurately. ### Limitations **Uncertainty about generalizability to other non-IID problems** A2: Theoretically, our method can be extended to scenarios where each client possesses a portion of a dataset and different clients may have different datasets. Therefore, we validated the scalability of our method in this scenario and conducted the following experiments for demonstration. Specifically, we conduct experiments where each of the seven small molecule datasets is split into 1–11 segments. Different from the cross-domain and cross-dataset settings in this paper, this simulates non-IID scenarios where different clients may have portions of the same dataset. The results show that FedSSP consistently outperforms the other methods across all scenarios.
*Table: **Extended non-iid scenarios** where each client possesses a portion of a small molecule dataset*

| Client Number | 7 | 21 | 35 | 49 | 63 | 77 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| FedAvg | 74.12 | 71.92 | 70.75 | 71.16 | 73.66 | 74.03 |
| FedStar | 78.63 | 76.37 | 74.18 | 73.20 | 75.81 | 78.28 |
| FedSSP | **79.62** | **76.74** | **74.86** | **74.31** | **76.25** | **79.44** |

--- Rebuttal 2: Comment: Dear Reviewer cuMU, We highly value the time and effort you have dedicated to evaluating our work. We are fully aware of the importance of your time and strive to respect it. In light of this, we would greatly appreciate any additional feedback or confirmation that our rebuttal has effectively addressed your comments. Our goal is to ensure that we have comprehensively addressed your concerns. Thank you so much for your time and consideration. Authors
Summary: The paper introduces a novel method, FedSSP, designed to address the current limitations of personalized Federated Graph Learning methods. The authors highlight that existing methods fail to deal with domain structural shift and ignore the uniqueness of datasets in cross-dataset scenarios. To address these limitations, FedSSP innovatively utilizes spectral GNNs and shares key layers in them to facilitate generic communication. Meanwhile, the authors propose a preference module for the preferences derived from different graph datasets. Strengths: 1. Addressing problems in pFGL from a spectral perspective is novel and interesting, setting a new direction for future research in federated graph learning. 2. The motivation is clear and compelling. The sharing strategy tackles the issue of knowledge conflict and spectral bias. Preference adjustment is employed to cater to the structural characteristics and preferences of individual datasets, thus ensuring that the graph features extracted are highly relevant and suitable. 3. The experiment is comprehensive, covering a wide range of scenarios to thoroughly evaluate the proposed methods. The ablation studies are thoroughly conducted, providing clear insights into the contributions of each component and demonstrating its effectiveness. Weaknesses: 1. Since the method of spectral encoder sharing essentially belongs to the personalized layer methods, I believe the authors need to discuss the differences between this method and other personalized layer methods, such as APPLE, which they did not. 2. How is the node-level method FGSSL used in the experiments? What is the core model of APPLE in the federated graph learning scenario? Could the authors provide the implementation details of the baselines? 3. The authors changed the model architecture but have not compared its communication cost with the baselines. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weakness part. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors should discuss communication cost for this potential limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 3kee: We deeply appreciate your positive feedback regarding the innovative aspects of our methods and the comprehensiveness of our experiments. Thank you for your time and effort in reviewing our paper. We hope that our responses below will address your concerns and further affirm the quality of our research. ### Weakness **W1: Insufficient discussion on differences between spectral encoder sharing and other personalized layer methods like APPLE.** A1: Following the implementation of the personalized layer method FedPer in previous FGL research, the implementation of APPLE here shares the graph convolution parameters in GNNs. Due to the lack of targeted solutions for cross-dataset and cross-domain scenarios, direct sharing of all graph convolution parameters inevitably struggles with structural conflict. Instead, collaboration under our proposed solution overcomes the conflict from the spectral perspective. Based on our investigation of domain spectral biases, we achieved effective cross-domain collaboration by exploring components unaffected by structural bias in spectral GNN. **W2: Lack of implementation details for node-level method FGSSL and core model APPLE in federated graph learning.** A2: FGSSL primarily includes two strategies: federated graph structure distillation (FGSD) to enhance node feature modeling, and federated node semantic contrast (FNSC) to encourage the neighborhood modeling and structural processing of each client to approach the global level. The node features before GNN average pooling are leveraged for similarity matrix calculation in FGSD. Specifically, in our graph-level task, each client possesses multiple graphs, each graph having its feature similarity matrix. Based on this, they converge towards the inherent adjacency relationships provided by the global model. 
For FNSC, since there are no unified node labels across clients in the scenario, we implemented contrastive learning using an unsupervised strategy to replicate the feature optimization of FGSSL in graph-level scenarios. In addition, the core model part in APPLE here refers to the graph convolution parameters, as illustrated in A1. **W3: No comparison of communication cost with baselines.** A3: Since our strategy only shares knowledge of the eigenvalue encoder and filter encoder, it is certain that our communication cost is significantly lower than that of all baselines. Whether it is the traditional approach of sharing all parameters, the sharing of structure processing channels in FedStar, or the sharing of graph convolution parameters in APPLE, our communication cost is lower. If accepted, our final version will include a discussion on communication costs. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I maintain my score. --- Rebuttal 2: Comment: Dear Reviewer 3kee, We would like to express our heartfelt thanks for your consistent support. Your positive assessment of our work is greatly appreciated. Please feel free to reach out if you have any additional thoughts or suggestions. Thank you for your consideration and valuable time. Authors
Summary: This work proposes a personalized federated graph learning framework for federated graph classification. The framework includes strategies for sharing generic knowledge and satisfying personalized preferences. The authors evaluated six different cross-dataset and cross-domain settings and showed good performance. Strengths: 1. The writing in this paper is coherent, making the proposed concepts and methodologies easy to follow. The logical flow and concise explanations enhance overall readability and comprehension. 2. The methods are thoughtfully designed with a deep understanding of the specific challenges inherent in cross-domain scenarios. They effectively address these targeted issues, ensuring good performance and adaptability across diverse datasets and domains. Weaknesses: 1. In the motivation, the authors do not clearly explain the relationship between the two spectral bias metrics and the graph structure. 2. The scalability of the method is not well analyzed in the experiment. 3. Absence of a detailed discussion on the necessity of the shared filter encoder in the proposed method, particularly in relation to its specific architecture. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1) I have some confusion regarding the two spectral bias metrics used in the motivation. Could the authors specifically point out the relationship between these two spectral bias metrics and graph structure? Q2) Could the authors add experiments on the scalability of the method, such as performance with varying numbers of clients? Q3) The global consensus in the method is similar to the prototype. If it differs from a prototype, the authors should provide distinctions and explanations. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer jSYo: We sincerely appreciate your time and effort in reviewing our paper, and we are grateful for your positive feedback on our writing and the effectiveness of our methodology. We hope that our responses below will address your concerns and lead to an updated score. ### Weakness **W1 & Q1: How are the spectral bias metrics related to graph structure?** A1: Eigenvalues and the Fiedler value are significantly associated with the graph structure. First, eigenvalues and eigenvectors can be employed for graph clustering and partitioning. For instance, the k smallest non-zero eigenvalues and their corresponding eigenvectors can be utilized for k-means clustering of the graph. Furthermore, the Fiedler value can assess the connectivity of the graph [1]. The Fiedler value and its corresponding eigenvector, known as the Fiedler vector, also play a crucial role in community identification [2]. **W2 & Q2: Validation of the scalability of the proposed method under varying numbers of clients.** A2: Theoretically, our method can be extended to scenarios where each client possesses a portion of a dataset and different clients may have different datasets. Therefore, we validated the scalability of our method based on these scenarios. We conducted experiments in SM cross-dataset settings under varying client scales. Specifically, we performed experiments with client scales ranging from 7 to 77, where each of the seven small molecule datasets was split into 1-11 segments. The results in the table below show that FedSSP consistently outperforms the other methods across all scenarios. 
*Table: **Performance under varying client scales***

| Client Number | 7 | 21 | 35 | 49 | 63 | 77 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| FedAvg | 74.12 | 71.92 | 70.75 | 71.16 | 73.66 | 74.03 |
| FedStar | 78.63 | 76.37 | 74.18 | 73.20 | 75.81 | 78.28 |
| FedSSP | **79.62** | **76.74** | **74.86** | **74.31** | **76.25** | **79.44** |

**W3: Absence of a detailed discussion on the necessity and architecture of the shared filter encoder.** A3: Filter encoders encapsulate knowledge of various frequency components, which affect how much the graph signal varies from nodes to their neighbors, enabling better graph convolution construction. This strategy benefits clients by enabling them to acquire knowledge of transmitting graph signals at different frequencies from other GNNs, thereby promoting the construction of graph convolutions for individual clients. Furthermore, the functionality is discussed in detail on line 253 of our paper. ### Questions **Q3: How does the prototype differ from global consensus?** A4: Our paper focuses on federated graph learning in cross-dataset and cross-domain scenarios, which means that each client possesses its own unique class information, thus rendering the concept of prototypes inapplicable here. The global consensus in this paper is derived from the feature mean of graphs across all clients in each round, without the class information contained in prototypes. Global consensus represents the knowledge for graph data modeling by all participating GNNs and provides standardized global graph processing knowledge, which prototypes cannot achieve in this scenario. In addition, this difference is discussed in detail on line 253 of our paper. We hope this provides the necessary clarification. [1] Fiedler, M. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 1973, 23(2): 298-305. [2] Von Luxburg, U. A tutorial on spectral clustering. Statistics and Computing, 2007, 17: 395-416. 
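The spectral quantities invoked in A1 are easy to compute directly. A minimal NumPy sketch (illustrative only; the toy graphs below are not data from the paper) showing that the Fiedler value, the second-smallest eigenvalue of the graph Laplacian, reflects connectivity:

```python
import numpy as np

def laplacian(adj):
    # Unnormalized graph Laplacian L = D - A.
    return np.diag(adj.sum(axis=1)) - adj

def fiedler_value(adj):
    # Second-smallest Laplacian eigenvalue (algebraic connectivity, [1]).
    return np.sort(np.linalg.eigvalsh(laplacian(adj)))[1]

# Toy graphs on 4 nodes: a path graph (weakly connected)
# vs. a complete graph (densely connected).
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
complete = np.ones((4, 4)) - np.eye(4)

print(fiedler_value(path))      # 2 - sqrt(2), roughly 0.586
print(fiedler_value(complete))  # 4.0
```

The denser graph has the larger Fiedler value, matching the connectivity interpretation cited from [1].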
--- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. After reading other reviewers' comments, I decided to raise my original score. --- Rebuttal 2: Comment: Dear Reviewer jSYo, Thank you very much for your prompt response and for reconsidering your evaluation. We are truly grateful for your valuable feedback and the time you have invested in our manuscript. Your comments have greatly contributed to improving our work, and we appreciate your support in advancing it through this process. If there is anything further you believe could be refined or enhanced, please let us know. Authors
Summary: FedSSP tackles structural heterogeneity well in personalized Federated Graph Learning. In cross-domain scenarios, structural heterogeneity becomes more harmful than usual, so it is crucial to mitigate the impact of domain structural shifts. FedSSP proposes two strategies to address these challenges from two directions. It first overcomes knowledge conflict by sharing generic spectral encoder weights to seek better collaboration across various datasets. Besides, considering the uniqueness of clients in this scenario, it then satisfies the special preferences of the graphs by adjusting features. Strengths: - Studies on cross-domain federated graph learning are crucial for the applications of federated learning in the real world. This would help increase the generalizability in practical applications. This study promotes the generalizability of federated graph learning and demonstrates superior performance in various simulated settings, proving its advantage and broad applicability in practical applications. - The authors present a novel perspective and a spectra-based solution to tackle the issue of structural heterogeneity among clients. The proposed approach effectively addresses the variations in graph structures across clients, thereby enhancing the overall coherence and performance of the federated learning framework. - The generic spectral knowledge sharing and preference modules work harmoniously and complement each other effectively. After addressing knowledge conflicts through spectral bias mitigation, personalized adjustments act as further personalized optimizations, enhancing generic message passing. Together, they enable adaptive collaboration across diverse datasets and domains. Weaknesses: The client scale considered in the experiments is quite small. Cross-device federated learning typically involves a significantly larger client scale to ensure the robustness and generalizability of the results. 
Therefore, it would be beneficial to consider expanding the client scale in experiments to better reflect practical scenarios and to validate the scalability of the proposed method. Technical Quality: 3 Clarity: 4 Questions for Authors: Please refer to the weaknesses. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The experiments should include performance under varying conditions, such as different numbers of training rounds, to ensure the method has no limitations in this respect. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer LD7P: Thank you for your encouraging comments on the coordination of our methods and the overall significance of our work. We hope that our responses below will address your concerns and reinforce your positive evaluation. ### Weakness **W1: The client scale in the experiments is too small to reflect practical scenarios.** A1: For performance analysis under larger client scales, our method is extended to scenarios where each client possesses a portion of a dataset and different clients may have different datasets. To demonstrate the performance of FedSSP under varying client scales, we conducted experiments in SM cross-dataset settings with client scales ranging from 7 to 77, where each of the seven small molecule datasets was split into 1-11 segments. The results in the table below show that FedSSP consistently outperforms the other methods across all scenarios.

*Table: **Performance under varying client scales***

| Client Number | 7 | 21 | 35 | 49 | 63 | 77 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| FedAvg | 74.12 | 71.92 | 70.75 | 71.16 | 73.66 | 74.03 |
| FedStar | 78.63 | 76.37 | 74.18 | 73.20 | 75.81 | 78.28 |
| FedSSP | **79.62** | **76.74** | **74.86** | **74.31** | **76.25** | **79.44** |

### Limitations **L1: The experiment lacks performance evaluation under varying conditions, such as different numbers of training rounds.** A2: To demonstrate the performance of FedSSP under varying local training rounds, we conducted experiments in SM cross-dataset scenarios under different local training rounds. The results in the table below show that FedSSP consistently outperforms the other methods in all four settings of local training rounds. 
*Table: **Performance under varying local training rounds***

| Local Training Round | 1 | 2 | 3 | 4 |
|:---:|:---:|:---:|:---:|:---:|
| FedAvg | 74.12 | 75.56 | 74.75 | 73.81 |
| FedStar | 78.63 | 79.35 | 79.42 | 79.04 |
| FedSSP | **79.62** | **80.78** | **80.13** | **79.56** |

--- Rebuttal 2: Comment: Dear Reviewer LD7P, We sincerely appreciate the time and expertise you have devoted to reviewing our submission. Acknowledging the demands on your schedule, we are mindful not to intrude on your time. However, we would be grateful if you confirm that our rebuttal has addressed your concerns adequately. Thank you in advance for your consideration. Authors --- Rebuttal 3: Comment: Dear authors, I've read all the replies and decided to keep my score.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Fair Online Bilateral Trade
Accept (poster)
Summary: This paper addresses the problem of fair bilateral trade: at each round $t\leq T$ a buyer and a seller, with respective valuations $B_t$ and $S_t$, want to trade a good. The agent acts as a facilitator for the trade, by posting a common price $p_t$. The trade happens if $p_t \leq B_t$ and $p_t \geq S_t$. Importantly, the agent only observes the two-bit feedback $(\mathbb{1}\{p_t \leq B_t\}, \mathbb{1}\{p_t \geq S_t\})$. In classical bilateral trade, the objective is to maximize the cumulative expected Gain From Trade (GFT), where the GFT at round $t$ is $B_t - S_t$ if the trade succeeds, 0 otherwise. Here, the authors propose to consider another performance measure called the Fair Gain From Trade (FGFT), defined as $\min(B_t-p_t, p_t - S_t)$ if the trade happens, 0 otherwise. This performance measure encourages trades that share the utility fairly between buyer and seller. The authors first consider the two-bit feedback model with i.i.d. valuations for the buyer and the seller. They show that, as is the case when maximizing the GFT, the regret (in terms of FGFT) can be linear in $T$ if the seller's and the buyer's valuations are not independent. Interestingly, when these valuations are independent, regrets of order $\tilde{O}(T^{2/3})$ (in terms of FGFT) can be achieved under milder assumptions than when considering GFT: namely, Lipschitz continuity of the c.d.f. of the valuations is no longer required. They provide a matching lower bound, proving that this rate is tight up to logarithmic factors. They also consider deterministic valuations, showing that in this case, the regret scales as $\log(T)$ (they provide an upper and a lower bound on the regret). Again, this departs from the regrets observed when maximizing GFT. Finally, they consider the full information model, where the agent observes both $B_t$ and $S_t$. They show that in this case, the regret scales as $\sqrt{T}$ in the stochastic case, which is optimal, and is constant in the deterministic case. 
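The difference between the two objectives in the summary can be seen in a small sketch (illustrative Python; the deterministic valuations below are made up, not from the paper). Any price inside $[S_t, B_t]$ maximizes GFT, whereas FGFT is maximized only by the midpoint, which splits the surplus equally:

```python
def gft(p, s, b):
    # Classical gain from trade: total surplus b - s if the trade at price p happens.
    return b - s if s <= p <= b else 0.0

def fgft(p, s, b):
    # Fair gain from trade: the smaller of the two utilities if the trade happens.
    return min(b - p, p - s) if s <= p <= b else 0.0

s, b = 0.2, 0.8  # made-up deterministic valuations
grid = [i / 1000 for i in range(1001)]
p_star = max(grid, key=lambda p: fgft(p, s, b))
print(p_star)              # 0.5, the midpoint (s + b) / 2
print(fgft(p_star, s, b))  # (b - s) / 2: the surplus is split equally
print(gft(p_star, s, b))   # b - s: GFT takes the same value anywhere in [s, b]
```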
Strengths: This paper studies an interesting and well-motivated problem. The results provided highlight a very different behavior than in classical bilateral trade problems, which I find very interesting. The treatment of the subject is thorough: the authors explore various models, including two-bit and full feedback, as well as i.i.d. and deterministic valuations, and both independent and dependent valuations. For each case, they provide matching upper and lower bounds on the regret. They also comment on the adversarial case. The paper is also very clear and well-written. Each theorem is accompanied by easily understandable proof sketches, supplemented by rigorous proofs in the Appendix. Weaknesses: There are no major weaknesses. As a minor suggestion, given the paper's density, it might be helpful for readers if the rates in the different settings were summarized in a table, alongside the corresponding rates for maximizing the gain from trade. Technical Quality: 4 Clarity: 4 Questions for Authors: The problem of bilateral trade is closely related to that of dynamic pricing. Do you know any work studying fairness in that setting? Although I understand that this is probably beyond the scope of this already dense paper, do you think your results could be extended to the one-bit feedback setting? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors addressed adequately the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Adding a summary table Great idea! We are happy to do it. - Fair dynamic pricing The few existing works on fair dynamic pricing study notions of fairness that are orthogonal to what ours would be when translated to dynamic pricing. For example, see Xu et al., Doubly Fair Dynamic Pricing, and Maestre et al., Reinforcement Learning for Fair Dynamic Pricing. Note that in bilateral trade, since the platform is the learner, there is a symmetry between the buyer's and seller's utility, and thus, it is sensible that our fair gain from trade function strives to make their utilities as similar as possible. In contrast, in dynamic pricing, the seller is the learner, and hence there is an inherent asymmetry in how the learner will choose their objective. The seller might still have an incentive to reduce their margin to improve that of the buyer because this way, they could provide enough incentives for buyers to join and continue using the platform. For this reason, we think that a suitable alternative reward function in dynamic pricing should remain somewhat asymmetric and give more weight to the seller's margin. An example can be the minimum between the seller's profit and a factor of $w$ times the buyer's profit. A similar idea was also discussed in the answer to reviewer 89Fg, in the context of bilateral trade. - One-bit feedback We conjecture that the techniques in [8] can be tweaked to show that learning is impossible with 1-bit feedback, for reasons analogous to those in [8]: lack of observability. With only 1 bit of feedback, we conjecture it is possible to build two different instances that are indistinguishable from one another such that the optimal price in one instance is highly suboptimal in the other. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgment Comment: The authors have adequately answered my questions. 
Having read the grading guidelines for NeurIPS again, I believe that this paper belongs to the top 50% of accepted papers, and is a "clear accept". I therefore raise my score to 8.
Summary: This paper studies the fair online bilateral trade problem, where a platform posts prices for one item at a time. At each time point, a (buyer, seller) pair arrives, each with private valuations. A trade occurs if the posted price is between the buyer and seller valuations. In this paper, the goal is to maximize the minimum utility of the buyer and the seller; this differs from prior work, where the objective was to maximize the total utility. The paper considers multiple valuation regimes, including deterministic, stochastic, stochastic and independent, and full feedback, and bounds the regret in each regime. The main results of the paper are as follows. First, the paper provides a linear lower bound in the stochastic case where buyer and seller valuations are not necessarily independent. The paper complements this result with an upper bound of $\tilde{O}(T^{2/3})$ in the stochastic case where buyer and seller valuations are independent. This result relies on the Convolution Lemma, which is the key technical insight from the paper. The paper further shows that in the deterministic setting (where buyer and seller valuations are fixed over all rounds), there is a tight regret bound of $\Theta(\ln(T))$. Finally, when valuations are known, the paper provides an algorithm which achieves $O(\sqrt{T})$ regret and shows that this bound is tight. Strengths: The problem statement is interesting and the difference from prior work is clearly stated. The model is clean and easy to understand, and I appreciated the formal problem definition in lines 114 - 117. The paper provides comprehensive results (often both lower and upper bounds) for multiple reasonable assumptions, and gives intuition for where the difficulty comes from in each regime. The work seems relevant for NeurIPS. I found the paper to be very easy to follow. The assumptions are clearly stated in each section, and the presentation is generally well done. 
In particular, I liked how the theorem statements are rigorous and clear and that proof sketches are given when there is a lack of space for full proofs. The proofs are nicely written and correct as far as I checked. In particular, Lemma 1 (the Convolution Lemma) was clean and clever, and is made good use of throughout the rest of the paper (i.e. in the proofs of Theorems 2 and 6). I felt that the lower bound examples in section 2 and in the proof of Theorem 1 were also excellent at building intuition for the remainder of the paper, and interesting examples in and of themselves. Weaknesses: The structure of section 1 made it a bit harder to follow than the rest of the paper. Personally, I would have found it helpful for section 1 to be split into multiple parts (such as intro/related work/our contributions) in order to be able to more easily pinpoint the main contributions. While the paper presents a very nice theoretical model, it does not discuss many applications. I would have found it helpful if the paper had further discussed applications of fair online bilateral trade – for example, by discussing when such an objective function would make sense. Technical Quality: 4 Clarity: 4 Questions for Authors: How well and/or easily would the main results generalize to a weighted fair gain from trade objective? For example, would similar arguments work if the buyer’s utility was weighted twice as much as the seller’s utility? What types of applications come to mind for the fair online bilateral trade problem? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Weighted fair gain from trade objective A possible weighted generalization of the fair gain from trade could be $WFGFT(p,s,b) = \min( w \cdot (p-s)^+, (b - p)^+ )$ for a fixed constant $w \ge 1$ and where we recall that $p$ is the posted price, $s$ is the seller valuation and $b$ is the buyer valuation. This would address the reviewer's question in the sense that here the buyer's utility (profit) is valued $w$ times that of the seller. For fixed $s < b$, the optimal price is $(w s + b) / (w+1)$, and this price gets "pulled" toward the seller side as $w$ increases, thus increasing the buyer's profit (since the buyer's profit is valued more). Our regret upper bounds can be extended to this weighted fair gain from trade. This is straightforward in the deterministic setting, since we just need to locate the optimal price $(w s + b) / (w+1)$. We either see it immediately (in the full-feedback model) or, otherwise, we use a binary search. In the stochastic case, the following extension of the convolution lemma can be shown to hold (recall that it is under independence of buyer and seller): $\mathbb E [ WFGFT(p,S,B) ] = \int_0^1 \mathbb{P} [ S \le p - \frac{u}{w} ] \mathbb{P} [ u+p \le B ] du $ Hence the proofs (Theorems 2 and 6) can be extended with minor adjustments, yielding the same rates. Analogous considerations hold for lower bounds, which can be easily adapted to obtain the same rates we obtained for $FGFT$. - Applications of fair online bilateral trade A natural application is online ride-sharing services like Uber and Lyft where fairness problems have been previously studied, although in different settings with metrics different from ours (see, e.g., Sühr et al. "Two-sided fairness for repeated matchings in two-sided markets: A case study of a ride-hailing platform", 2019). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response, and have no further questions or comments at this time.
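The closed-form price and the weighted convolution identity from the rebuttal can be sanity-checked numerically. A hedged sketch (the specific valuations $s$, $b$, weight $w$, and the Uniform[0,1] distributions are illustrative choices, not from the paper):

```python
import random

def wfgft(p, s, b, w):
    # Weighted fair gain from trade from the rebuttal:
    # min(w * (seller profit)^+, (buyer profit)^+).
    return min(w * max(p - s, 0.0), max(b - p, 0.0))

# Deterministic check of the closed-form maximizer (w*s + b) / (w + 1).
s, b, w = 0.1, 0.9, 2.0  # illustrative values
p_star = (w * s + b) / (w + 1)
grid = [i / 10000 for i in range(10001)]
p_grid = max(grid, key=lambda p: wfgft(p, s, b, w))
assert abs(p_star - p_grid) < 1e-3  # brute force agrees with the formula

# Monte Carlo check of the weighted convolution identity, for independent
# S, B ~ Uniform[0, 1]:
#   E[WFGFT(p, S, B)] = integral over u in [0, 1] of
#                       P[S <= p - u/w] * P[B >= u + p] du.
random.seed(0)
p, n = 0.5, 200_000
mc = sum(wfgft(p, random.random(), random.random(), w) for _ in range(n)) / n
# For U[0,1] valuations: P[S <= x] = clip(x, 0, 1), P[B >= y] = clip(1 - y, 0, 1).
m = 10_000  # Riemann sum resolution
rhs = sum(max(min(p - (k / m) / w, 1.0), 0.0) * max(min(1 - k / m - p, 1.0), 0.0)
          for k in range(m)) / m
print(mc, rhs)  # the two estimates agree closely
```

As $w$ grows, $p^\star = (ws + b)/(w+1)$ indeed slides toward $s$, matching the "pulled toward the seller side" remark.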
Summary: The paper considers a fair version of the online bilateral trade problem, minimizing fair GFT regret w.r.t. the optimal fixed price in hindsight. The paper is comprehensive in studying both upper/lower bounds in various settings. Strengths: - The problem setup is pretty interesting, with a good motivation to the practical scenario. - The paper is pretty comprehensive, and the paper is well-written and easy to follow. - It is interesting that an ETC-style algorithm is order-optimal. Weaknesses: Technicality - It is good to know that the ETC-style Algorithm 1 works well, but I must admit that it is not particularly interesting. - The algorithm for the deterministic case is also tight, but the algorithm itself is a straightforward application of binary search, and the regret lower bound is not difficult to come up with, so I would exclude this section from my evaluation. Minor comments - Is it necessary to have bounded support of [0,1] for the valuations? If not (or it's wlog), it would be worth mentioning it. - Why would one need to invoke the minimax theorem to argue Remark 1? It seems to trivially follow as the adversary there is simply stronger; correct me if I'm wrong. - It might be good to decompose the introduction to explicitly distinguish the true intro / the paper's results / related work. As of now it's difficult to follow the intro as they are highly mixed up. - L#80 does not seem necessary, or a footnote might suffice. - The problem of online bilateral trade itself seems highly relevant to Kleinberg and Leighton FOCS'03 (Bounds on Regret for On-line Posted-Price Auctions) and its follow-up works, but it is never mentioned. - Since the algorithm is rather simple but is proven to be order-optimal, I think it would be good to add experiments to see whether it works well compared to other candidates from the standard bandit literature. Technical Quality: 3 Clarity: 3 Questions for Authors: - Has the one-shot version of this problem ever been studied? 
I.e., whether FGFT can be a constant approximation to the ex-post FGFT, or whether there is any structural connection between FGFT, first-best GFT, etc. - It seems the regret lower bound largely depends on a highly concentrated distribution. Do you think a sort of smoothness assumption like regularity or monotone hazard rates can be a remedy for better regret bounds? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Mentioned above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Weakness 1 We are not sure we understand what the reviewer meant here. We are happy to provide clarifications if the reviewer needs some. - Valuations not in $[0,1]$ Extending beyond $[0,1]$ works as in the bandit literature: If $[0,1]$ is replaced by $[0,m]$ (for some $m$), the same results apply up to a multiplicative $m$; If the interval is unbounded, no meaningful results can be obtained, unless additional assumptions on the distributions are made (e.g., $\sigma$-sub-Gaussianity, with known $\sigma$). - Why minimax? Yao's minimax theorem immediately implies the result, but the reviewer is correct in stating that the result would have also followed by observing that a lower bound on the expected regret of the stochastic case implies a lower bound in the adversarial case because in-expectation lower bounds can be turned into worst-case lower bounds. We will mention this in the revised version. - Kleinberg and Leighton (FOCS'03) We agree that dynamic pricing is a sufficiently related setting to be mentioned in the related work section. The revised version will contain a paragraph about it. - Experiments comparing against bandits The comparison with bandit algorithms would not be fair given that bandit algorithms cannot run in the two-bit feedback setting (the two bits of feedback are not sufficient to reconstruct the realized reward of posted prices, see Lines 39-41). Moreover, from a practical point of view, the bandit version of our problem is not well-motivated, as it is hard to imagine that a learner would be able to observe the reward of a price without access to the valuations (which would lead back to the full-info case). If the reviewer agrees with us, we would prefer not to implement this change. - One-shot version To the best of our knowledge, we are the first to introduce this fair variant of the bilateral trade problem, which arose as a natural continuation of the recent stream of works on online learning in bilateral trade. 
We agree with the reviewer that studying the one-shot version would also be an interesting extension of our work. - Does regularity help? It does not; in fact, our current lower bound constructions could be modified by "smoothing" the Dirac masses into uniform distributions over squares, obtaining the same results. To give some intuition as to why this is the case, in bilateral trade problems, smoothness helps by making the objective (i.e., the expected reward) Lipschitz, which in turn reduces the continuum-arm problem to a finite-arm problem via discretization. In the *fair* bilateral trade problem, however, the objective is already $1$-Lipschitz with no further assumptions. What hinders learnability in our problem is a lack of observability, which is recovered by assuming independence of buyers and sellers. Although we don't have an explicit lower bound construction at hand for the monotone hazard rate, we believe that this assumption would also be insufficient to obtain learnability. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. I have no further questions. Although the techniques are not particularly interesting, I think the paper is worth being accepted as it introduces a novel problem that might interest the community in several directions.
Summary: The paper focuses on the online bilateral trade problem, in which at each round a buyer and a seller with private valuations for an item arrive, and the platform has to post prices for the item being traded. In this paper, the objective of the platform is that of maximizing the cumulative “fair gain from trade”, that is, the minimum between the seller’s and buyer’s utilities. In the two-bit feedback setting, the paper proves a tight $O(\log T)$ regret bound in the deterministic setting, and an $O(T^{2/3})$ regret in the stochastic setting when the seller’s and buyer’s valuations are independent of each other. In the full-feedback setting, the paper provides tight regret bounds. Strengths: The paper is well-structured and explores a research direction which could potentially be very interesting. Weaknesses: Section 2 is clear but does not sufficiently explain the additional challenges that need to be addressed compared to the standard online bilateral trade framework by Cesa-Bianchi et al. (EC 21). The main area for improvement in this paper is the generalization of the results beyond the specific objective of Fair GFT. What is the general structure required to achieve results similar to those presented? Could similar results be directly obtained for other objectives? The justification for focusing specifically on maximizing Fair GFT is not particularly convincing at the moment. Developing a broader framework to address these problems would significantly strengthen the results and make them more widely applicable and interesting. Technical Quality: 4 Clarity: 3 Questions for Authors: See weaknesses Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
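The fair-GFT objective summarized in the review above can be made concrete with a small sketch (our own illustration; the function and variable names are ours, not the paper's): with seller valuation `s`, buyer valuation `b`, and posted price `p`, a trade happens when `s <= p <= b`, the seller's utility is `p - s`, the buyer's is `b - p`, and the fair gain from trade is their minimum.

```python
def fair_gft(s: float, b: float, p: float) -> float:
    """Fair gain from trade at posted price p: the minimum of the
    seller's utility (p - s) and the buyer's utility (b - p) when a
    trade occurs (s <= p <= b), and 0 otherwise."""
    if not (s <= p <= b):
        return 0.0
    return min(p - s, b - p)

# Any p in [s, b] maximizes the *regular* GFT (b - s), but the fair
# GFT is maximized only at the midpoint p = (s + b) / 2, where it
# equals (b - s) / 2.
s, b = 0.0, 1.0
assert fair_gft(s, b, (s + b) / 2) == (b - s) / 2
assert fair_gft(s, b, 0.25) < fair_gft(s, b, 0.5)
assert fair_gft(s, b, 1.5) == 0.0
```

This also illustrates the point made in the rebuttals: an algorithm that merely posts some price inside $[S, B]$ maximizes the regular GFT, yet can suffer linear regret for the fair GFT, which requires locating the midpoint.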
Rebuttal 1: Rebuttal: - Additional challenges here compared to Cesa-Bianchi et al., EC '21 A first high-level observation is that the pairs "assumption"/"regret rate" differ between our setting (Fair Bilateral Trade) and that of Cesa-Bianchi et al. [10] ("regular" Bilateral Trade), suggesting that different ideas will be needed to obtain optimal results for each pair. For example, while smoothness (bounded densities) was crucial in obtaining sublinear rates in [10], we had to understand that, in our work, it plays no role, and consequently discover new algorithmic and proof ideas. At a very high level, we share with [10] the same issue of having poor feedback, which is not even sufficient to reconstruct the reward at the price we post. In our case, however, the different form of the objective requires new procedures to recover usable information on the reward function, which is made possible by our new Convolution Lemma. Another difference is that, in [10], to maximize the realized gain from trade, it is sufficient to post a price $p\in[S,B]$, while, in our case, we have to address the more delicate task of locating the midpoint $p = (S+B)/2$. This is the reason why (even optimal) algorithms for bilateral trade can suffer linear regret in our setting (as we show in Section 2). Another technical subtlety that we had to tackle is that a direct application of the convolution lemma in the 2-bit feedback setting would yield a suboptimal upper bound of $T^{3/4}$. Instead, by carefully defining a data-gathering procedure such that each observation contributes to estimating the convolution of the cdfs at *all* points, we are able to obtain the optimal $T^{2/3}$ rate. We will better highlight these various challenges in the paper. - Justification for focusing specifically on maximizing Fair GFT We fully agree that developing a broader framework is an interesting goal to pursue. 
We suspect that obtaining estimators for a reasonably large class of objective functions in bilateral trade could be a challenging problem if one aims at capturing at least both the GFT and FGFT objectives. The main reason for this difficulty is that different objectives would likely require significantly different techniques due to the problem-specific quantities that the learner has to estimate. For example, the fair and regular gain from trade problems rely on different algorithms, estimation lemmas, proofs, and assumptions, as discussed above. Our specific focus on the Fair Gain From Trade objective is motivated by the so-called egalitarian rule in social choice theory (sometimes also called the max-min rule or the Rawlsian rule), where one favors the alternative that maximizes the minimum utility of the involved parties to promote fairness. We will add a paragraph about this bullet point in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed response. I have no additional questions.
Rebuttal 1: Rebuttal: We thank the four reviewers for the time spent reading our work and for sharing their comments. We will update the submission in light of the feedback. In particular, we will further highlight our technical contributions and the economic relevance of our results, we will provide additional discussions and clarifications in light of the comments and questions, and we will discuss potential extensions of our results. Additionally, we agree that subdividing the first section into "Introduction", "Related Work", and "Our Contributions" will improve readability. The revised version will be structured as suggested. Given the additional content page allowed, and the available time left, we are confident that we will successfully improve the submission with the requested changes. We hope that our four separate responses have appropriately answered all questions and comments. We remain available should any further clarifications be needed.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Rethinking Score Distillation as a Bridge Between Image Distributions
Accept (poster)
Summary: The paper analyzes score distillation methods in a single framework and hypothesizes two possible error sources - the single-step ODE solver approximation and the mismatch between the assumed and true source image distribution. It then proceeds to tackle the latter using a custom negative prompt design. The authors show this improves the quality of the generated images in both 2D and 3D tasks as measured by quantitative metrics and in a user study. Strengths: S1) The analysis of different methods in a single framework and with illustrations and experiments is quite interesting for a reader and useful for understanding the SotA and its motivation. S2) The exposition is very clear except for the treatment of the related work discussion. S3) The proposed negative prompt idea is very effective in the tested scenarios. S4) Quantitative metrics are complemented by a user study. S5) The "3D Sketch-to-Real" task hidden in the Appendix seems quite original but it would require more examination (and probably a separate paper) to become a substantial contribution on its own. Weaknesses: W1) The technical contribution of the paper is limited. Specifically, I see the DDS gradient [18] and the newly proposed gradient (eps_ours) as closely related. As far as I can tell, the formulas are the same and the difference is in the choice of the source prompt. Since DDS is presented to work with arbitrary source and target prompt pairs, I feel the new method could be postulated as a specific realization of DDS for one specific source prompt. The authors also omit DDS from all comparisons, which supports this viewpoint. The paper still presents a novel analysis of performance in setups that were not shown in the original DDS paper [18], but I am worried the way the method is presented here ("Ours") is not optimal from the perspective of originality.
W2) The idea of discouraging artifacts using a negative prompt is likely not a substantial scientific contribution since negative prompts are commonly used by practitioners in image-generation interfaces such as Midjourney. W3) The authors identify two sources of errors - the 1st-order approximation and the source distribution match - but they mostly only analyze the latter. The former is speculated to improve the performance of ISM, but if this hypothesis should be considered seriously, I would expect the multi-step ODE solver to be tested in combination with the various gradient definitions. This could show whether it in fact helps consistently and how orthogonal it is to the other error type. Currently, not even the basic ISM is included in Table 1, so I find the "First-order approximation error" theory plausible but untested. W4) Other suggestions for the analytical Sec. 2.3: - The analysis of SDS focuses on cases with s >> 1 but it does not explain the poor performance with s = 1. - Unlike for SDS, the analysis and Figure for DDS omit the effect of CFG while the original work uses s > 1 ([18]: Eq 3 and Fig. 5). I would expect to see some of the same distribution shifts as in SDS. W5) The related work is deferred to the appendix. The discussion covers the various distillation methods and their applications quite well, but due to its location the paper alone is sometimes a bit hard to follow. Therefore, I am personally not very keen on this design choice. Notice that I do not consider this a reason for rejection, but I still put it among the negatives since I see it as a minor weakness. Minor presentation issues and suggestions: - [25] is NeurIPS 2022 - When Fig. 1 is first referenced, the meaning of the different eps suffixes has not yet been defined in the text.
----------- **Justification of recommendation** Overall this could be a very nice meta-paper on score distillation, but to that goal it unfortunately only explores one part of the problem - the score definition - and not the ODE approximation. The negative prompting idea seems effective but not very novel given the common usage of negative prompts in image generation and the similarity of the score gradient to DDS. Therefore I lean towards rejection. ----------- **Post rebuttal** I increase my rating after the rebuttal because the authors could show that their method is useful also in 3D generation and that it can be combined with the multi-step inversion to demonstrate the influence of both limiting factors discussed in the theoretical part. These should be integrated into the paper. I keep my score just positive since I still see the method as technically very incremental (e.g., the main difference from DDS is just the input image) and I await the discussion of the other reviewers. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1) The authors use CFG strength s=40 (L483) or even s=100 (L487) to test their method, which is a lot more than the range 3-15 considered in the DDS paper [18]. Why the difference? Q2) The definition of the score function in Eq. 2 differs from [25] and also from what I would expect. It is a bit hard to judge because $\epsilon_t$ is also not defined at all, but I would expect $\sigma_t$ to play a role, i.e., something like $\epsilon_t/\sigma_t^2$ as in [25, Eq. 3]. Can you please clarify this? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors discuss the impact of AI art on society and mention that the performance of their method does not match the reverse sampling. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 5 Code Of Conduct: Yes
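The gradient variants discussed in W1 can be sketched side by side (an illustrative pseudo-implementation of ours; `eps_phi` stands in for the diffusion model's noise prediction, and all names are assumptions, not the paper's API):

```python
# Hedged sketch of the three gradient directions under discussion.
# eps_phi(x, prompt, t) plays the role of the (CFG-weighted) noise
# predictor epsilon_phi; x_t is the noised current iterate.

def sds_grad(eps_phi, x_t, eps, y_tgt, t):
    # SDS: target direction minus the sampled noise eps.
    return eps_phi(x_t, y_tgt, t) - eps

def dds_grad(eps_phi, x_t, x_ref_t, y_tgt, y_src, t):
    # DDS: source direction computed on a *reference* image x_ref_t,
    # i.e. eps_phi(x_ref; y_src, t).
    return eps_phi(x_t, y_tgt, t) - eps_phi(x_ref_t, y_src, t)

def ours_grad(eps_phi, x_t, y_tgt, y_src, t):
    # The rebuttal's distinction: source direction on the *current*
    # iterate x_t, with y_src a prompt describing the source
    # distribution, so no reference image is needed.
    return eps_phi(x_t, y_tgt, t) - eps_phi(x_t, y_src, t)
```

Structurally the DDS and "ours" formulas coincide (supporting W1); the difference the rebuttal below emphasizes is the first argument of the source term (`x_ref_t` versus `x_t`) and the choice of `y_src`.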
Rebuttal 1: Rebuttal: > 1. The formulas of DDS and the proposed method are the same up to the source prompt. A small yet fundamental difference is that DDS is a specialized method for editing that computes the source distribution direction based on a reference image instead of the current optimized image. That is, DDS computes the direction to the source distribution as $\epsilon_\phi (x_\text{ref}; y_\text{src}, t)$ as shown in the equation above line 155, while we compute it as $\epsilon_\phi (x_{\theta,t}; y_\text{src}, t)$ as shown in the equation above line 186. This makes DDS incompatible with generation tasks since it requires a reference image. In addition, when there are reference images, it induces additional source distribution error, as we explained in line 155. > 2. The idea of using a negative prompt is likely not a substantial scientific contribution. We emphasize in the global response that our main contribution is to provide a new view of the sources of errors in SDS. To demonstrate that reducing the source distribution approximation error can improve generation quality, we propose a two-stage optimization process. Although using a negative prompt is a common practice, the proper way to use it with SDS is not well explored. For example, as shown by the two-stage ablation experiment in Figure A1, always using the negative prompt in SDS leads to divergence and poor geometry. > 3. The authors identify two sources of errors - the 1st order approximation and the source distribution match - but they mostly only analyze the latter. Please find the qualitative results of ISM in Figure A6. We notice that ISM generally outputs sharper results than SDS. However, it actually still uses single-step estimation to approximate the bridge. Although it uses multi-step inversion to produce a noisy sample instead of adding randomly sampled noise, it computes two epsilon directions using diffusion models to estimate the gradient, similar to SDS.
Instead, here we propose to resolve the first-order approximation error by solving the entire PF-ODE path to recover the dual bridge and estimate the endpoint of the bridge $\psi_{0, \text{tgt}}$ that is coupled with $\psi_{0, \text{src}}$. In this way, we obtain the most accurate gradient direction with minimal approximation error $\psi_{0, \text{tgt}} - \psi_{0, \text{src}}$. We refer to this approach as “full path”. However, solving the inversion ODE is not trivial. We noticed that the inversion can exaggerate the distribution mismatch error and cause the optimization to get stuck at a local optimum at the beginning of the optimization. In contrast, the high variance of the single-step methods often demonstrates more robustness to different initializations. Therefore, we first perform the single-step score distillation optimization to obtain reasonable results before moving to solving the full bridge. With this approach, we can now explore addressing both the first and second sources of error: the first source (linear approximation) with “full-path”, and the second source (source distribution mismatch error) with the two-stage scheme. As shown in the table below, we find that using the “full-path” multi-step approach (mitigating error source 1) always outperforms the single-step methods, achieving a lower FID. However, the same trend does not fully transfer to the text-to-3D experiments. We observe that it typically introduces additional artifacts and makes the optimization less stable. We leave the best way of leveraging this gradient as a future research exploration.

| Method | Addressing linear approx. error | Addressing dist. mismatch error | FID |
|---|---|---|---|
| SDS | No | No | 79.95 |
| with two-stage | No | Yes | 69.82 |
| with full-path PF-ODE | Yes | No | 66.51 |
| with two-stage & full-path | Yes | Yes | 62.69 |

> 4.a) The analysis of SDS does not explain the poor performance with $s = 1$.
SDS requires $s \gg 1$ to make $(\epsilon_\text{tgt}-\epsilon_\text{uncond})$ dominate the gradient direction. The $\epsilon$ term effectively averages the optimized image by adding different Gaussian noise samples. As shown in Figure A5 in the uploaded PDF, a small value of $s$ would make the generation over-smoothed and lacking in details. > 4.b) Unlike for SDS, the analysis for DDS omits the effect of CFG. We believe that the effect of changing $s$ is equivalent to changing the learning rate of the optimization, which controls the strength of the editing in DDS without changing the gradient direction. Therefore, we omit it in our gradient analysis. > 5, 6. The related work is deferred to the appendix. Minor issues. Thank you; we will address these in the revised version. > Q1. Why use CFG strength $s=40$ or 100, which is more than the range 3-15 in DDS? $s$ does not affect the direction of the gradient and can be absorbed into the learning rate. In their experiments, a learning rate of 0.1 is used, while we mostly use a learning rate of 0.01. Equivalently, we can use $s=4$ and a learning rate of 0.1, which results in a similar scale to the DDS hyperparameters. > Q2. Can you please clarify the definition of the score function in Eq. 2, compared with [25]? We follow the notation in the DDPM paper.
Suppose that $\mathbf{x}_t \sim \mathcal{N}(\sqrt{\alpha_t}\mathbf{x}_0; (1-\alpha_t)\mathbf{I})$, then the score can be computed as: $ p(\mathbf{x}_t) = \frac{1}{\sqrt{2\pi (1-\alpha_t)}} \exp{(-\frac{1}{2(1-\alpha_t)}\cdot (\mathbf{x}_t - \sqrt{\alpha_t}\mathbf{x}_0)^T(\mathbf{x}_t - \sqrt{\alpha_t}\mathbf{x}_0))}$ $ \nabla \log p(\mathbf{x}_t) = - \frac{1}{2(1-\alpha_t)}\cdot 2(\mathbf{x}_t - \sqrt{\alpha_t}\mathbf{x}_0) $ $ = - \frac{1}{(1-\alpha_t)}\cdot (\sqrt{\alpha_t}\mathbf{x}_0 + \sqrt{1-\alpha_t}\mathbf{\epsilon}_t - \sqrt{\alpha_t}\mathbf{x}_0) \text{(Reparametrization trick with } \mathbf{\epsilon}_t \sim \mathcal{N}(\mathbf{0}; \mathbf{I}) \text{)} $ $ = - \frac{1}{\sqrt{1-\alpha_t}} \mathbf{\epsilon}_t $ [25] uses a different noising function, thus a different score function. --- Rebuttal Comment 1.1: Title: Rebuttal Comment: Thank you for answering my questions. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thanks for updating your review!! We will incorporate these suggested experiments in our revised version. Regarding DDS, we also wanted to mention that it uses the same negative prompt throughout the optimization, which our new ablation study shows is highly ineffectual. We want to iterate that the contribution of our paper is the proposed framework, which enables the understanding and the proposal of a two-stage optimization pipeline.
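The closed-form simplification in the rebuttal above is easy to verify numerically; a quick self-contained check (our own, using the same reparameterization $\mathbf{x}_t = \sqrt{\alpha_t}\mathbf{x}_0 + \sqrt{1-\alpha_t}\boldsymbol{\epsilon}_t$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_t = 0.7
x0 = rng.normal(size=5)
eps_t = rng.normal(size=5)  # eps_t ~ N(0, I), reparameterization trick
x_t = np.sqrt(alpha_t) * x0 + np.sqrt(1.0 - alpha_t) * eps_t

# Score of N(sqrt(alpha_t) x0, (1 - alpha_t) I) evaluated at x_t ...
score = -(x_t - np.sqrt(alpha_t) * x0) / (1.0 - alpha_t)

# ... equals the rebuttal's simplified form -eps_t / sqrt(1 - alpha_t).
assert np.allclose(score, -eps_t / np.sqrt(1.0 - alpha_t))
```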
Summary: This paper proposes interpreting score distillation sampling (SDS), a widely used method for generating 3D, 4D, and vector graphics, through the lens of the Schrödinger Bridge (SB) problem. According to the paper, SDS is a linear approximation of the optimal path moving from the current distribution to the target distribution. The paper identifies two sources of approximation error: first-order approximation error and source distribution mismatch. To address the second error source, the authors suggest a simple method: using a text prompt that describes the source distribution instead of a null prompt. This approach is computationally more efficient than the best variant of SDS, VSD, and the authors demonstrate its effectiveness across various tasks such as text-to-image, text-to-3D, painting-to-real, and illusion generation. Strengths: - The interpretation that SDS finds the optimal path connecting two distributions is novel and aids in understanding the behavior of the widely used SDS. - The paper addresses the oversaturation problem of conventional SDS and is computationally more efficient than VSD. - The paper demonstrates effectiveness across a wider range of tasks compared to previous papers (SDS, NFSD, and VSD). - The paper is well-written and easy to follow. Weaknesses: - Up to section 2.3, I enjoyed reading and expected a principled solution. However, the solution in section 2.4 was quite naive and heuristic. A major drawback of this solution is that the pre-trained diffusion model needs to accurately match the proposed descriptions such as "oversaturated, smooth,…" with the source distribution. Considering that the text-to-3D experiments are based on Threestudio, my assumption is that the text-to-image model used in this paper is Stable Diffusion 2-base (the exact model is not mentioned in the paper). **It is questionable whether other models (MVDream, SDXL, PixArt, SD3, etc.) 
can understand these descriptions well.** My guess is that diffusion models trained on such high-quality data will still generate clean images even when descriptions like "oversaturated, smooth,…" are appended, and therefore will still suffer from the source distribution mismatch problem. - The paper does not consider the **Janus problem**, which frequently occurs in text-to-3D. It is questionable whether the proposed methodology would be effective with MVDream [50], a pre-trained diffusion model that addresses this issue. - **VSD has the strength of** not only ensuring the quality of rendered images but also **achieving sample diversity as the number of particles increases.** Therefore, line 282 is not true, and the proposed method falls behind VSD in terms of sample diversity. Technical Quality: 2 Clarity: 3 Questions for Authors: - What pre-trained diffusion model did you use in this paper? - I personally tried text-to-3D with SDS using Diffusion-DPO [A, B], which is a post-trained diffusion model for aesthetic quality, and failed to generate plausible geometry. Can this phenomenon be explained from the perspective of the Schrödinger Bridge problem? [A] https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1 [B] Wallace et al., Diffusion Model Alignment Using Direct Preference Optimization, CVPR 2024. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitations of this paper are stated in section 4, but they need improvement. As mentioned in the weaknesses, unlike the method proposed in this paper, VSD can resolve the diversity issue by increasing the number of particles, so line 282 is misleading. Additionally, it is necessary to mention the slow generation speed of SDS and the failure to address the Janus problem in text-to-3D, for the benefit of the readers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. The solution in section 2.4 was quite naive and heuristic. A major drawback is whether other models (MVDream, SDXL, PixArt, SD3, etc.) can understand the descriptions "oversaturated, smooth,…" well. Although using a negative prompt is a common practice in text-based diffusion models, how to use it with SDS is not well explored. As shown in the global response and Figure A1, simply using it all the time during the optimization process leads to inferior results. Instead, the two-stage process consistently outperforms the single-stage baselines. We show that the prompt generally works well with other models like MVDream and SDXL. We perform our 2D generation experiment with MVDream using SDS and our two-stage optimization process. We use the same negative descriptors for the source prompt as proposed in our experiments with Stable Diffusion 2.1. As shown in Figure A4-a, the two-stage optimization (bottom) produces more convincing colors and additional realistic high-frequency details compared to an SDS baseline (top) for the same MVDream model. We do the same with the SDXL base model, again using the same negative descriptors proposed for Stable Diffusion 2.1. In Figure A4-b, the two-stage optimization process (bottom) produces fewer saturation artifacts and more high-frequency details than the SDS baseline (top). SDXL is known to perform poorly in the SDS setting and is therefore not commonly used, but we include these results to demonstrate the universality of the proposed optimization. Despite the fact that these diffusion base models are trained on multi-view or high-quality images whose captions may not contain the proposed negative descriptors, we argue that the powerful pretrained text encoders that embed their prompts represent these artifacts well. For example, the embedding of "oversmoothed" is likely far from the embedding of "detailed" and close to other negative descriptors like "blurry".
Empirically, we find that the same negative descriptors work across base models without needing retuning. > 2. The paper does not consider the Janus problem. Our proposed framework is focused on analyzing and improving the diffusion gradient in SDS. MVDream addresses the Janus problem, which requires data priors on objects, by training on multi-view data and conditioning generations on camera pose. These problems are orthogonal, and we show that our two-stage optimization process works well with MVDream to address both issues in Figure A4-a. > 3. VSD has the strength of not only ensuring the quality of rendered images but also achieving sample diversity as the number of particles increases. Therefore, line 282 is not true, and the proposed method falls behind VSD in terms of sample diversity. Thanks; we will make clear the advantage VSD has in sample diversity. In line 282, we wrote, "...neither approaches have yet to achieve the quality and diversity of images generated by the reverse process." We did not intend to claim that our two-stage optimization achieves better diversity than VSD, but rather that all SDS variants still induce lower diversity than the reverse process. The reason for this is still unclear. We hope that our analysis may inspire future research to better understand this problem. For instance, to analyze the diversity issue within our framework, we notice that the endpoint of the bridge is deterministically determined by the initial conditions and the ODE processes. When training a LoRA on all the particles, the loss encourages different ODE processes on individual particles. As a result, the LoRA module assigns slightly different directions to each particle and improves diversity. A recent study [1] reinforces this point by introducing a repulsive ensemble method to VSD. In general, this is beyond the scope of our paper. We will add this discussion in the paper.
[1] https://arxiv.org/abs/2406.16683 > Question 1: What pre-trained diffusion model did you use in this paper? We use stable-diffusion-v2-1-base for our experiments. We will make this clear in our revised version. > Question 2: I personally tried text-to-3D with SDS using Diffusion-DPO [A, B], which is a post-trained diffusion model for aesthetic quality, and failed to generate plausible geometry. Can this phenomenon be explained from the perspective of the Schrödinger Bridge problem? Thanks for bringing up this interesting observation. We have also observed that models like SDXL fail to generate reasonable geometry in practice. Similar to Diffusion-DPO, SDXL filters its data using the aesthetic scores. Our hypothesis is that the images with high aesthetic scores overrepresent canonical views of the object. For example, the front view of a dog is often deemed to be more aesthetic than its back view. As a result, this induces an issue that the target distribution of SB heavily biases toward this canonical view. When applying these models to 3D generation, there could be more inconsistency across different views, which makes the optimization less stable. --- Rebuttal 2: Comment: Since most text-to-image models use a frozen text encoder, untrained negative descriptor embeddings are likely to be far from positive descriptor embeddings, as the authors mentioned in the rebuttal. However, a text-to-image model would not be able to output a good score or gradient for an untrained negative embedding. I believe this is why SDXL performs poorly in the SDS setting. Considering that recent text-to-image models are increasingly trained on high-quality images, **the proposed method seems difficult to apply beyond stable-diffusion-v2-1-base and MVDream**, which is a post-trained version of stable-diffusion-v2-1-base. 
While I see value in this paper's explanation of the behavior of SDS and its variants (including VSD as addressed in the rebuttal) from the perspective of the SB problem, I feel that the proposed method for addressing the issues with SDS needs further improvement and is not yet ready for publication in NeurIPS. Therefore, I will maintain my current score. I remain open to further discussion. --- Rebuttal Comment 2.1: Comment: Thanks for the response! > However, a text-to-image model would not be able to output a good score or gradient for an untrained negative embedding. I believe this is why SDXL performs poorly in the SDS setting. This may be a misunderstanding—SDS is not performing poorly because of the negative embedding (since naive SDS uses no negative embedding!). Naive SDS just performs poorly overall with SDXL. In fact, the experiments in the rebuttal PDF actually show that our approach, and therefore adding the negative description, **improves upon the SDXL-based SDS baseline**, producing results with fewer color artifacts. This is evidence that our method is **not difficult to apply beyond SDv2.1**. The objective of this paper was to analyze the sources of error in SDS. We hypothesized that accurately representing the current source distribution is one key to enhancing SDS quality. To validate this, we introduced an experimental alternative to SDS that appends negative modifiers to more effectively model the source distribution. In our initial submission, all our experiments used SDv2.1 as the base model because our goal was to compare to naive SDS, and at the time of submission, Stable Diffusion 2.1 was (and still is) the main image generator used for score distillation sampling experiments. Delving further into how the SDS performance varies across SOTA text-to-image diffusion models, especially those trained on aesthetic images, is an interesting and under-explored research direction that is beyond the scope of this paper. 
In the future, as novel image generators appear, there may be more effective ways of modeling the source distribution. We believe our provided experiments in the paper and the new additions in the rebuttal have sufficiently validated our analysis, and these insights will be applicable to newer image generation models. --- Rebuttal 3: Comment: Thank you for the detailed response. I apologize for the late additional question, but I have one more. In the experiments conducted in the paper and rebuttal, including those with SDXL, within what interval were the **diffusion timesteps** sampled, and what was the **weighting scheme** for each timestep? For example, DreamFusion uses sigma-weighted SDS, and VSD uses timesteps only within the range [0.5, 0] during the refinement stage. Since the sampling distribution and weights of timesteps in diffusion models are known to be important [A], I am curious about this aspect. I’m curious if the effect of negative prompts might manifest at specific diffusion timesteps and weighting schemes. [A] Kingma et al., Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation, NeurIPS 2023. --- Rebuttal Comment 3.1: Comment: Thank you for the follow-up and sharing this interesting paper! For the text-to-2D experiments, including the SDXL experiment in the response, we adopt the timestep sampling and weighting as proposed in Dreamfusion (i.e., sigma weighting $\omega(t)=1-\hat{\alpha}_t$ and $t\sim[0.02, 0.98]$). In general, we find that annealing the maximum timestep is helpful, but exclude that from text-to-2D experiments for a more direct comparison. For text-to-3D experiments, to make a fair comparison with VSD, we use the configuration from ProlificDreamer. That is, we use sigma weighting and sample timestep $t\sim[0.02, 0.98]$ for the first $5,000$ steps and $t\sim[0.02, 0.50]$ for the remaining $20,000$ steps. Overall, we observe similar effects when tuning these hyperparameters in SDS and our proposed second stage. 
We will add these details in our revision.
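The timestep sampling and sigma weighting described in the response above can be sketched as follows. This is a hedged, standalone illustration: the 1000-step linear beta schedule and the helper names are assumptions for concreteness (Stable-Diffusion-like defaults), not the authors' actual code.

```python
import numpy as np

# Sketch of DreamFusion-style timestep sampling and sigma weighting,
# assuming a 1000-step linear beta schedule (an illustrative choice).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # \hat{\alpha}_t, decreasing in t

def sample_timestep(rng, t_min=0.02, t_max=0.98):
    """Uniformly sample a timestep index from the fraction range [t_min, t_max]."""
    frac = rng.uniform(t_min, t_max)
    return int(frac * (T - 1))

def sigma_weight(t):
    """Sigma weighting omega(t) = 1 - alpha_bar_t (larger at noisier timesteps)."""
    return 1.0 - alpha_bar[t]

# Annealing the maximum timestep, as mentioned in the response, would simply
# lower t_max over the course of optimization, e.g. from 0.98 down to 0.50.
```

The VSD/ProlificDreamer configuration quoted above corresponds to calling `sample_timestep(rng, 0.02, 0.98)` for the first 5,000 steps and `sample_timestep(rng, 0.02, 0.50)` afterwards.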
Summary: This paper revisits the application of Score Distillation Sampling (SDS) for tasks with limited data availability by proposing a new interpretation based on Schrödinger Bridges for optimal-cost transport between distributions. The paper highlights that existing SDS methods produce artifacts due to linear approximations and poor estimates of source distributions. By aligning the text conditioning more closely with the source distribution's characteristics, the authors demonstrate significant improvements in image generation tasks such as text-to-2D and 3D, and translation between art styles. The proposed method avoids the computational overhead of previous approaches while achieving comparable or superior results in terms of image quality across various domains. Strengths: - Overall I find that the writing is clear, concise, and well-structured, making it easy for readers to follow the arguments and understand the key points. I really like the view of optimal transport between source and target distributions to understand score distillation. - This paper provides a comprehensive analysis of existing methods from a unified point of view. The authors further propose a simple yet effective method for transferring a 2D diffusion prior to 3D scene generation or editing. In contrast to the prior state-of-the-art ProlificDreamer, it does not require fine-tuning of diffusion models, which may introduce training inefficiency and instabilities. Weaknesses: - The description of the proposed method is quite concise, and some technical parts lack sufficient rationale. For example, estimating the source distribution via negative text prompts is not well motivated. I believe a detailed analysis with ablation studies would make the paper more informative and answer the following questions: How do you choose these negative prompts? Do they need to be hand-crafted carefully every time the method is adapted to a new domain/task? Why do you have to propose a two-stage optimization pipeline?
- How do you choose the value of w? If the value of w is still very large, then the proposed method likely inherits the weaknesses of using a high CFG with standard SDS: less diversity of generations for a given prompt, less realistic generations, and oversaturated colors. Besides, if the value of w is still very large, I think the proposed method is closer to directly using the original negative prompts as guidance. - The text-to-3D experimental results are not entirely convincing, since the comparison of both qualitative and quantitative results is inadequate and more competitive baselines should be included, such as Fantasia3D, Magic3D, CSD, NFSD, etc. Technical Quality: 3 Clarity: 4 Questions for Authors: See the weakness section for questions. I would like to raise my scores if the above questions are addressed. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. Some technical parts lack enough rationales, such as the negative prompt and two-stage optimization process. How do you choose these negative prompts? We propose a two-stage optimization process motivated by our SB framework. To reduce the effect of the distribution mismatch error, we aim to more accurately describe the current distribution across different stages of optimization. While we experimented with more complicated approaches to achieve this, we found prompting to be unreasonably effective and much simpler. The negative prompts were selected based on the characteristic SDS artifacts, e.g., "oversaturated, smooth, pixelated,.." (rest in paper). This description does not require hand-crafting based on problem domains—it is kept fixed across all shown examples and use-cases. We do not claim that these specific prompts are optimal or even of particular interest. To demonstrate this, we prompted GPT-4 through ChatGPT a single time to generate alternative negative prompts and ablated the results in Figure A2 with otherwise identical hyperparameters. ChatGPT Prompt: *Here's a set of "negative prompts" to append to a text-to-image prompt that describe undesirable image characteristics: ", oversaturated, smooth, pixelated, cartoon, foggy, hazy, blurry, bad structure, noisy, malformed" I want to try a variety of them, please brainstorm many of roughly the same length* Generated Variations: *1. ", washed out, grainy, distorted, flat, smeared, overexposed, undefined, choppy, glitchy, dull"* *2. ", low contrast, jumbled, faint, abstract, over-sharpened, muddy, cluttered, vague, jagged, poor detail"* *3. ", soft focus, muffled, streaky, patchy, ghosted, murky, unbalanced, skewed, mismatched, overcrowded"* *4. ", overbright, scrambled, bleary, blocky, misshapen, uneven, fragmented, obscured, chaotic, messy"* *5. 
", dull tones, compressed, smeary, out of focus, unrefined, lopsided, erratic, irregular, spotty, stark"* In addition, we show that the same negative descriptors work across different base models, such as MVDream. Since MVDream denoises four camera-conditioned images jointly, we treat the canvas of four images as a single optimization variable for the SDS gradient. In Figure A4-a, we compare the SDS baseline (top) to the proposed two-stage optimization (bottom), in which we generate more natural colors and detail. This is especially noticeable in the grass around the crocodile. Due to the space limit, we will add more results in the revised version. We also ablate the proposed two-stage optimization process in the global response. As the source distribution keeps changing along with the optimization process, it is necessary to update the source distribution. We effectively achieve this by first running SDS (stage 1) then updating our source distribution with negative prompts to steer the optimization away from the artifacts (stage 2). In the global response, we show that if we start with such “negative” prompts from the beginning, which do not accurately describe the rendered images at initialization, it causes additional distribution approximation error and fails to generate a plausible object. > 2. How do you choose the value of w? Is large w value similar with large cfg that causes artifacts? We choose $w=40$ to produce a gradient on a similar scale as SDS with CFG scale $=100$. This is because, in text-to-3D, it is crucial to balance many other regularization losses on sparsity, opacity, and so on. Using a similar scale allows us to adopt the same hyperparameters to compare fairly with other SDS variants. In addition, w is different from CFG since SDS also incorporates a term with sampled Gaussian noise, and the CFG term needs to be large enough to make the $(\epsilon_\text{tgt}-\epsilon_\text{uncond})$ dominant. 
Instead, $w$ simply scales the gradient, unlike the CFG scale in SDS, which changes the direction of the gradient. We also notice that $w$ is relatively robust and gives similar results across a range of values. In contrast, when the CFG scale is small in SDS, the averaging effect of Gaussian noise dominates and creates oversimplified 3D objects. When $w$ or CFG scales are large, with other loss weights intact, the optimization becomes unstable and may diverge. See Figure A5 in the uploaded PDF for this qualitative comparison. > 3. Missing comparison in text-to-3D with more competitive baselines. Thank you for the suggestion. We ran a comparison with Fantasia3D, Magic3D, and CSD through a drop-in replacement of SDS with our method. We did not compare with NFSD as they did not release the official code and we empirically found that its results resemble SDS results. Specifically, all three methods optimize a textured DMTet, which is initialized from an SDS-optimized NeRF, using SDS or CSD for 5k or 10k iterations. We replace the SDS or CSD stage of these approaches with the two-stage optimization motivated by our framework. Just like our text-to-3D NeRF experiment, we perform the first stage for 60% of iterations and the second stage for 40% of iterations. Note that we keep all the other hyperparameters the same, which were tuned for the baselines, not our method. This replacement leads to the same optimization time as the original methods. For Fantasia3D and Magic3D, we use threestudio for a fair comparison (Magic3D does not have code available) and the default prompts, which are generally believed to work best with this reimplementation. For CSD, we use the official implementation. As shown in Figure A3, our method improves the visual quality of all the methods by reducing the oversaturated artifacts of SDS and improving the details. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I would like to maintain my score.
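The gradient-direction distinction made in the rebuttal above (CFG scale changes the SDS update direction, while $w$ only rescales a fixed direction) can be illustrated with a toy numerical sketch. All epsilon terms below are random stand-ins for diffusion-model predictions, not real model outputs, and the exact gradient expressions are simplified for illustration.

```python
import numpy as np

# Toy illustration: in SDS, the CFG scale mixes the conditional/unconditional
# difference with the sampled-noise residual, so changing it changes the
# *direction* of the update; the proposed w only rescales one fixed direction.
rng = np.random.default_rng(0)
eps_tgt, eps_uncond, eps_noise = rng.normal(size=(3, 8))  # stand-in vectors

def sds_grad(cfg):
    # Simplified SDS residual: eps_cfg - eps_noise,
    # with eps_cfg = eps_uncond + cfg * (eps_tgt - eps_uncond)
    eps_cfg = eps_uncond + cfg * (eps_tgt - eps_uncond)
    return eps_cfg - eps_noise

def proposed_grad(w):
    # w-scaled update; eps_src is approximated by eps_uncond here purely
    # for illustration of the scaling behavior.
    return w * (eps_tgt - eps_uncond)

def direction(v):
    return v / np.linalg.norm(v)
```

Under this sketch, `direction(proposed_grad(10))` and `direction(proposed_grad(40))` coincide, while `direction(sds_grad(7.5))` and `direction(sds_grad(100))` generally differ, matching the point made in the response.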
Rebuttal 1: Rebuttal: We thank all reviewers for their thoughtful feedback. We propose an optimal transport view to understand score distillation, which reviewers “really like” (Cgf2), and find “novel” (sBp4). We provide illustrations and experiments under this single framework, which reviewer XvCb finds “quite interesting … and useful for understanding the state-of-the-art and its motivation.” They also find the method we propose to improve SDS based on this interpretation simple yet effective (Cgf2, sBp4, XvCb), efficient (Cgf2, sBp4), and explained with clear exposition (Cgf2, sBp4, XvCb). We want to stress that our primary contribution is the analysis of the sources of error in SDS and its variants (i.e., why it does worse than sampling with the reverse process), forming the hypothesis that accurately expressing the current source distribution is crucial for improving the quality of SDS. We validate this hypothesis with a simple approach that appends negative modifiers to better model the source distribution. While this method is likely useful in itself, it primarily serves as a practical way to empirically support our hypothesis (i.e., since it outperforms baseline methods that less accurately model the source distribution). Most of the reviewer concerns were centered around the particular design decisions of our experimental optimization approach, or noted similarities with existing methods. In this rebuttal (both here and in the individual reviewer responses) we detail some of the motivations for these design decisions and answer some recurring questions: Our Schrödinger Bridge (SB) interpretation presents SDS as transporting the optimization variable from a source distribution toward a target distribution. This interpretation highlights the importance of accurately modeling the source distribution, where inaccuracies may cause the characteristic artifacts of SDS (e.g., saturation, over-smoothing, etc.).
To validate this, we devise an experimental solution that aims to better model the source distribution at different stages of optimization, and compare it to SDS and its variants. Our experimental optimization procedure has two stages. At early stages of optimization, we use the standard SDS optimization objective, since the source distribution estimated by SDS (i.e., the model’s unconditional prediction) is a reasonably good approximation of the sample distribution at initialization (blob initialization in NeRF optimization or zero initialization for images). At later stages, once the SDS objective has begun to instill many of the characteristic artifacts in the optimized solution—we change to modeling the source distribution with the target scene description, appended with a set of standard negative modifiers that approximately describe the collection of SDS artifacts (“oversaturated, smooth, pixelated, cartoon, foggy, hazy, blurry, bad structure, noisy, malformed,”). While this descriptor is fixed across all sequences (and therefore does not require per-instance/domain hand-crafting), it much more accurately models the intermediate source distribution and thus we find that it is effective at steering the optimization towards an artifact-free solution. SDS by itself is shown to be notably worse in all our comparisons, but one may additionally wonder—why not only use the second stage? As noted above, our objective is to improve the estimate of the source distribution, and at early stages, SDS’s unconditional sample already models the source distribution (random initialization) reasonably well. In fact, the negative prompt modifiers likely model this distribution particularly poorly. We experimented with this variant, and found that optimization either (1) diverges, resulting in an entirely black volume or (2) generates very unreasonable geometry. 
A three-way comparison is shown in the attached PDF Figure A1, showing that our proposed two-stage approach is clearly better than both (1) only SDS, and (2) only stage-2. Our analysis of the error present in SDS includes two potential sources: (1) the first-order approximation error, as well as (2) the source distribution mismatch error described above. SDS incurs the first error by using only a single-step estimate of the Schrödinger Bridge rather than solving it fully through the full-path PF-ODE. To validate that reducing the first source of error can further improve SDS (XvCb), we perform an experiment in which we solve the entire PF-ODE path to recover the dual bridge (instead of using a first-order approximation) and estimate the endpoint of the bridge $\psi_{0, \text{tgt}}$ that is coupled with $\psi_{0, \text{src}}$. We use this endpoint as the target. Although this is slow, it consistently produces better results, as shown by the lower COCO-FID scores in the table below. Please see the response to XvCb for more details:

| Method | Addressing linear approx. error | Addressing dist. mismatch error | FID |
|----------------------------|-----|-----|-------|
| SDS | No | No | 79.95 |
| with two-stage | No | Yes | 69.82 |
| with full-path PF-ODE | Yes | No | 66.51 |
| with two-stage & full-path | Yes | Yes | 62.69 |

We have also performed more experiments as suggested, described in the individual responses below. We will add these additional experiments and figures in the revised paper. We hope these additions strengthen the proposed Schrödinger Bridge SDS interpretation, which is our primary contribution. Pdf: /pdf/bff93143c9c3138c2baebb84cf13d0441df3a830.pdf
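The two-stage schedule described in the global rebuttal (plain SDS for the first portion of optimization, then switching the source-distribution prompt to the target description plus the fixed negative modifiers) can be sketched as a small helper. The 60%/40% split is taken from the authors' text-to-3D description; the function name and interface are illustrative assumptions.

```python
# Sketch of the two-stage source-prompt schedule described in the rebuttal.
# The 60%/40% split matches the text-to-3D setup; names are hypothetical.
NEGATIVE = ("oversaturated, smooth, pixelated, cartoon, foggy, hazy, blurry, "
            "bad structure, noisy, malformed")

def source_prompt(step, total_steps, target_prompt):
    """Return the prompt used to model the source distribution at this step."""
    if step < 0.6 * total_steps:
        return None  # stage 1: unconditional prediction, as in standard SDS
    # stage 2: target description appended with the fixed negative modifiers
    return target_prompt + ", " + NEGATIVE
```

Note that the negative descriptor string is kept fixed across all prompts and domains, which is the rebuttal's point about not requiring per-instance hand-crafting.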
NeurIPS_2024_submissions_huggingface
2024
Revisiting Self-Supervised Heterogeneous Graph Learning from Spectral Clustering Perspective
Accept (poster)
Summary: The paper proposes a new framework that revisits SHGL from a spectral clustering perspective, incorporating rank and dual consistency constraints. This approach uses a rank-constrained spectral clustering method to refine the affinity matrix and remove noise, while also integrating node-level and cluster-level consistency constraints to better capture and use invariant and clustering information. Theoretical analysis and experimental results show that this new method significantly improves performance in downstream tasks compared to existing methods. Strengths: 1. This paper provides a theoretical investigation of previous SHGL methods from the perspective of spectral clustering. 2. This paper proposes a new framework to capture the cluster-level graph invariant representations. 3. Experiments demonstrate the effectiveness of the proposed methods. Weaknesses: Unfortunately, this paper is not easy to follow due to missing motivation and logical gaps. 1. In the introduction, the authors mention three challenges abruptly in Lines 44 to 48. However, it is unclear why they are challenging to address. For example, why is it difficult to understand previous SHGL methods from the clustering perspective? The logical relationships between the three challenges are also unclear. 2. In Section 2, the authors introduce some notations about the heterogeneous graph. However, it is unclear what the objective of the problem is. 3. In Section 2.1, it is unclear why the authors need to bridge the gap between SHGL methods and the graph-cut algorithm, and what its benefits are. 4. In Section 2.2, it is unclear why the rank constraint can mitigate noisy connections. 5. When reading Section 2.2, the reviewer gets lost in the lemma and equation details. It would be better if the authors could highlight the contributions and leave all technical derivations to the appendix. 6. The logical relationship between Sections 2.2 and 2.3 is unclear.
Why should both the low-rank and dual consistency constraints be imposed, and what is the relationship between these two constraints? 7. In the experiments, it is unclear why two homogeneous graph datasets were used for performance evaluation, given that the objective of this paper is heterogeneous graph learning. Technical Quality: 3 Clarity: 2 Questions for Authors: Please find questions in the weaknesses above. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: This paper does not mention the limitations of this work or the potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive comments on the novelty, theoretical analysis, and experimental results of our method. We are so encouraged and will try our best to address the concerns one by one. > **Q1.** Unclear why three challenges are challenging to address and the logical relationships between them. **A1.** Since previous SHGL methods are close to clustering techniques, as stated in lines 30-33, and suffer from two issues (i.e., noisy connections and neglect of cluster-level information), as stated in lines 34-41, we have challenges as follows. (i) To formally understand previous methods from clustering perspective. This is challenging because no existing literature formally connects self-supervised graph learning with spectral clustering. This requires us to derive the whole process from scratch. (ii) To learn an adaptive graph structure to avoid inter-class noise. The challenge is hard to solve because although some graph structure learning works [1-2] try to optimize graph structures, they may have noisy connections due to the lack of theoretical guarantees. (iii) To effectively incorporate cluster-level information to boost downstream tasks. This is difficult as existing works often divide nodes within the same class into different clusters, resulting in sub-optimal cluster-level information that cannot be used in an effective way. Relationships: Challenge (i) is the theoretical basis of the whole paper and points out the direction for solving the following two challenges. The latter two challenges build on each other under the theory framework of challenge (i). Specifically, challenge (ii) mitigates noise and outputs high-quality cluster-level information for challenge (iii), while challenge (iii) aligns intra-class representations, thus aiding challenge (ii). [1] Towards Unsupervised Deep Graph Structure Learning. WWW 2022. [2] Heterogeneous Graph Structure Learning for Graph Neural Networks. AAAI 2021. 
> **Q2.** In Section 2, unclear the objective of the problem. **A2.** The goal is to learn a model that takes the diverse node and edge types of the heterogeneous graph as input and outputs low-dimensional representations for downstream tasks. We will add this in the final version. > **Q3.** In Section 2.1, unclear why to bridge the gap between SHGL methods and graph-cut algorithm and what is its benefits. **A3.** Theorem 2.3 bridges the gap between them to further point out the issues in previous SHGL methods. That is, previous SHGL methods cut the learned representations into $d$ clusters, where $d$ is much larger than the number of classes $c$. Therefore, nodes within the same class may be divided into different clusters, so the learned representations cannot be clustered well. Benefits: Theorem 2.3 gives the theoretical motivation of Section 2.2 and provides a direction for improving previous SHGL methods, i.e., adding low-rank constraints to divide the learned representations into $c$ clusters instead of $d$ as in previous methods. > **Q4.** Unclear why the rank constraint can mitigate noisy connections. **A4.** The ideal noise-free affinity matrix only contains connections within the same class and has no noisy connections among different classes. That is, the ideal affinity matrix contains exactly $c$ (i.e., the number of classes) connected components. Based on Lemma 2.4, if the rank of $\mathbf{L}_\mathbf{S}$ equals $n-c$, then the affinity matrix $\mathbf{S}$ contains exactly $c$ connected components, achieving the ideal case and mitigating noisy connections. > **Q5.** In Section 2.2, the reviewer gets lost in lemma and equation details. It is better to highlight the contributions and leave all technical derivations to the appendix. **A5.** Section 2.2 is designed to mitigate noise with the rank constraint, and we highlight it as follows. First, we propose to learn an adaptive graph structure via Eq. (5).
After that, Lemma 2.4 explains how we add the rank constraint to Eq. (5) to avoid noise. Then we rewrite the rank constraint as a spectral clustering objective via Eq. (6)-Eq. (8), since the rank constraint is not easy to tackle directly. Finally, we solve Eq. (8) by alternating optimization, i.e., we fix the eigenvectors $\mathbf{F}$ to obtain the closed-form solution of the affinity matrix $\mathbf{S}$ via Eq. (9)-Eq. (10), and fix the affinity matrix $\mathbf{S}$ to obtain the optimized eigenvectors $\mathbf{F}$. To avoid the high time complexity of eigendecomposition, we replace it with Eq. (11)-Eq. (13) to efficiently approximate $\mathbf{F}$ with $\mathbf{Y}$. We will move most derivations to the appendix in the final version. > **Q6.** The logical relationship between Sections 2.2 and 2.3 is unclear. Why impose both low-rank and dual consistency constraints, and what is the relationship between them? **A6.** Without the low-rank constraint, our framework would be affected by noisy connections. Without the dual consistency constraint, our framework would fail to consider the cluster-level information, thereby weakening downstream task performance. Hence, it is crucial for our method to impose both constraints. Relationships: The rank constraint in Section 2.2 and the dual consistency constraint in Section 2.3 actually benefit from each other. Specifically, the rank-constrained spectral clustering provides high-quality cluster-level information for the dual consistency constraint. Moreover, the dual consistency constraint encourages the representations within the same class to align with each other to help the rank-constrained spectral clustering. > **Q7.** Unclear why homogeneous graph datasets were used for evaluation. **A7.** We use both heterogeneous and homogeneous datasets to verify the effectiveness of our method on different types of graphs. > **Q8.** There is no discussion of limitations or potential negative societal impacts.
**A8.** We discussed limitations and potential negative societal impacts in Appendix F, and we will move them to the main paper in the final version. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you for the detailed rebuttal, and I increase the score to 5.
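The rank-to-connectivity relation quoted in A4 above (Lemma 2.4: if rank$(\mathbf{L}_\mathbf{S}) = n-c$, then $\mathbf{S}$ has exactly $c$ connected components) is a standard spectral graph theory fact, and it can be checked numerically on a toy block-diagonal affinity matrix. The matrix below is hypothetical illustrative data, not from the paper.

```python
import numpy as np

# Numerical check of the rank-connectivity relation in Lemma 2.4, as quoted
# in the rebuttal: c connected components in S  <=>  rank(L_S) = n - c.
# Toy block-diagonal affinity matrix with c = 3 clusters (hypothetical data).
blocks = [np.ones((4, 4)), np.ones((3, 3)), np.ones((5, 5))]
n = sum(b.shape[0] for b in blocks)  # n = 12 nodes total
S = np.zeros((n, n))
i = 0
for b in blocks:
    k = b.shape[0]
    S[i:i + k, i:i + k] = b  # fully connected within each cluster
    i += k
np.fill_diagonal(S, 0)  # no self-loops, no inter-cluster (noisy) connections

D = np.diag(S.sum(axis=1))
L = D - S  # unnormalized graph Laplacian L_S

c = len(blocks)
assert np.linalg.matrix_rank(L) == n - c  # rank(L_S) = n - c, as claimed
```

Adding any inter-cluster ("noisy") edge to `S` merges two components and raises the Laplacian rank above `n - c`, which is exactly why enforcing rank$(\mathbf{L}_\mathbf{S}) = n-c$ rules such connections out.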
Summary: This work deals with the problem of self-supervised representation learning in heterogeneous graphs. First, a spectral-clustering based objective is presented to unify the objectives of existing methods. Second, a novel self-supervised method is proposed that tries to capture both the cluster information and node-level information in the learnt representation. Experiments show the effectiveness of the proposed method over existing methods. Strengths: 1. The proposed method is novel and follows a principled design. 2. The presented theoretical analysis is helpful for a deeper understanding of the existing and proposed methods. 3. The experimental section is comprehensive with adequate baselines, datasets, and ablations. Weaknesses: 1. Presented theorems are often unclear or imprecise. For example (a) In Theorem 2.6, authors use the term model complexity abruptly without any definition or reference. I could not understand the relation between the infimum and supremum of model complexity and generalization ability (which also needs to be rigorously defined). (b) In Theorem 2.2, what exactly are the previous meta-path-based and adaptive-graph-based SHGL methods mentioned in the theorem statement? What is the expression for the regularization term for the corresponding previous methods? (c) There does not seem to be a clear delineation between what already exists in the literature and what is the new contribution of the paper. E.g., does Theorem 2.3 directly follow from existing results in the literature, or does this require custom analysis? (d) Theorem 2.5 is not precise enough. When referring to "proposed method", it would be more precise to refer to a specific objective function or algorithm block. Also "equivalent to performing spectral clustering based on the affinity matrix ....". Authors are recommended to replace the language description with a precise objective function. 2. The writing is often unclear and could be greatly improved.
Several terminologies (which might not be standard in the literature) are never explained/defined. For example: (a) Section 1 Line 42 ".. it is feasible to analyse ..", it is not clear what exactly is feasible to analyse (although after Section 2.1, it can be understood). (b) In the abstract and introduction, it is not clear what the invariant information is. (c) The sentence in lines 216-218 is unclear. (d) How does H^T H = I imply statistical independence? Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the questions and comments in Weaknesses section. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations and impacts are adequately discussed in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive comments on the novelty, theoretical analysis, and experimental results of our method. We are so encouraged and will try our best to address the concerns one by one. > **Q1.** Presented theorems are often unclear or imprecise. For example, > **Q1-a**. In Theorem 2.6, no definition or reference of the model complexity. Unclear relation between the infimum and supremum of model complexity with generalization ability. **A1-a**. Due to the space limitations, in the submission, we followed previous works [1-2] and gave Definition C.5 of the model complexity in lines 686-691 in Appendix C.4. In the rebuttal, we further followed previous works [1-2] to define the generalization error $E$ of a model based on the model complexity, i.e., **Definition 1**. For any $\delta \in [0, 1]$, with probability at least $1-\delta$, the generalization error $E$ of a model follows the inequality, i.e., $$ E \le \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_{i})+\sqrt{\frac{C}{n}}+\mathcal{O}(\sqrt{\frac{\log (1 / \delta)}{n}}),$$ where $(x_i, y_i)$ is a pair of labeled data, $f$ is the model, $\ell$ is the loss function, $n$ is the number of labeled data, and $C$ is the model complexity measure. Based on Theorem 2.6 and the Definition above, we can obtain that the proposed method has a lower model complexity bound, thus achieving a lower generalization error bound and higher generalization ability than previous SHGL methods. We will move Definition C.5 and add the above Definition 1 to the main paper in the final version. [1] Representation Based Complexity Measures for Predicting Generalization in Deep Learning. NeurIPS 2020. [2] Methods and Analysis of The First Competition in Predicting Generalization of Deep Learning. NeurIPS 2020. > **Q1-b.** In Theorem 2.2, what exactly are the meta-path-based and adaptive-graph-based SHGL methods, and their regularization terms?
**A1-b.** In Theorem 2.2, the meta-path-based SHGL methods indicate the works that utilize meta-paths to build edges among nodes. Almost all SHGL comparison methods in the paper belong to this group, such as HGMAE, CPIM, HGCML, HeCo, HDMI, etc. We gave their regularization term in Appendix C.1 in Eq. (28). The adaptive-graph-based SHGL methods indicate the works that utilize the adaptive graph structure to build edges among nodes. Only the latest comparison method (i.e., HERO) belongs to this group. We gave the regularization term in Appendix C.1 in Eq. (35). We will clarify two group's methods and regularization terms in the main paper in the final version. > **Q1-c**. Clarify new contributions of the paper. E.g., does Theorem 2.3 require custom analysis? **A1-c**. Theorem 2.3 connects the traditional graph-cut algorithm with existing SHGL methods, which has never been done in the existing literature and requires custom analysis. The new contributions of the paper can be summarized as: 1) Motivation: We first attempt to revisit existing SHGL methods from the spectral clustering perspective in a unified manner. 2) Theory: This paper builds theoretically connections between previous SHGL methods and the spectral clustering as well as the graph-cut algorithm, which have never been analyzed in existing literature. 3) Method: This paper proposes for the first time to adaptively learn a rank-constrained affinity matrix and introduce a dual consistency constraint to alleviate the issues in previous methods. > **Q1-d.** Theorem 2.5 is not precise enough. 
**A1-d.** Thanks for your suggestion, we will reformulate Theorem 2.5 in the final version as follows, **Theorem 2.5.** Optimizing the spectral loss $L_{sp}$ leads to performing the spectral clustering based on the affinity matrix $\mathbf{S}$ and conducting RatioCut ($V_1,\ldots, V_c$) algorithm to divide the learned representations into $c$ partitions, i.e., $$ \min L_{sp} \Rightarrow \min \operatorname{Tr} (\mathbf{Y}^T\mathbf{L}_S\mathbf{Y}) \Rightarrow \min \operatorname{RatioCut}(V_1,\ldots,V_c), $$ where $\mathbf{Y}$ is the cluster assignment matrix, $\mathbf{L}_S$ is the Laplacian matrix of the affinity matrix $\mathbf{S}$, $\operatorname{Tr}(\cdot)$ indicates the matrix trace, and $c$ is the number of classes. > **Q2.** The writing is often unclear and could be greatly improved. (a). Section 1 Line 42, it is not clear what exactly is feasible to analyse (although after Section 2.1, it can be understood). (b). In the abstract and introduction, it is not clear what the invariant information is. (c). Sentence in line 216-218 is unclear. (d). How does $\mathbf{H}^T\mathbf{H}= \mathbf{I}$ imply statistical independence? **A2.** (a) We will modify it as "it is feasible to analyze previous SHGL methods from a clustering perspective thanks to their close connection to clustering techniques ...". Actually, we have mentioned this connection in lines 24-33. (b) Although node representations collect information from nodes within the same type, and heterogeneous representations collect information from nodes of different types, they both contain the same feature of the node itself, which we call the invariant information. (c) We will modify it as "In addition, according to Theorem 2.2, previous SHGL methods actually perform spectral clustering to learn node representations. However, previous SHGL methods fail to utilize the cluster-level information outputted by the spectral clustering, thus weakening the downstream task performance." 
(d) The independence refers to the independence among different representation dimensions. Specifically, $\mathbf{H}^T\mathbf{H} \in \mathbb{R}^{d_1 \times d_1}$ calculates the correlation among different dimensions of $\mathbf{H}$, where $d_1$ is the dimension of $\mathbf{H}$. After that, the constraint $\mathbf{H}^T\mathbf{H}= \mathbf{I}$ enforces the off-diagonal elements of $\mathbf{H}^T\mathbf{H}$ to 0, thus minimizing the correlation among different dimensions of $\mathbf{H}$ to achieve the statistical independence. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I encourage the authors to revise the manuscript with discussed changes to improve the presentation and theoretical rigor. Regarding $H^T H = I$, note that uncorrelation does not imply statistical independence. Uncorrelation only prevents linear dependence but cannot prevent some nonlinear dependence. Consider using "uncorrelated" instead of "statistically independent". Based on the overall quality of the paper, I maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback and positive comments on our paper. According to your suggestions, we will replace "statistically independent" with "uncorrelated" to avoid confusion. Additionally, all discussed changes will be incorporated into the final version.
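The exchange above (orthogonality of $\mathbf{H}^T\mathbf{H} = \mathbf{I}$ giving uncorrelated, but not statistically independent, dimensions) can be illustrated with a small numerical check. Here $\mathbf{H}$ is just a random orthonormal matrix standing in for learned representations, not anything from the paper; note the off-diagonal Gram entries measure cross-correlation only up to mean-centering.

```python
import numpy as np

# Toy check: if H^T H = I, the off-diagonal entries of the Gram matrix vanish,
# i.e., the representation dimensions (columns of H) are orthogonal and hence
# uncorrelated up to centering -- which, as the reviewer notes, rules out only
# linear dependence, not all statistical dependence.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
H, _ = np.linalg.qr(A)  # columns of H are orthonormal, so H^T H = I

G = H.T @ H
off_diag = G - np.diag(np.diag(G))
assert np.allclose(G, np.eye(5), atol=1e-8)
assert np.abs(off_diag).max() < 1e-8  # zero cross-terms between dimensions
```

This is consistent with the agreed terminology change from "statistically independent" to "uncorrelated".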
Summary: This paper proposes a theory-backed method for Self-Supervised Heterogeneous Graph Learning (SHGL) based on spectral clustering and incorporates a rank constraint and node/cluster consistency regularizers to generate better embeddings. Specifically, the authors start by showing that existing algorithms divide the representations into a number of clusters that is much larger than the number of real classes. Then, an objective function with a rank constraint is proposed to reduce the noise of message-passing, and consistency terms are employed to improve downstream task performance. Experiments on real datasets show that the proposed method outperforms other baselines. Strengths: 1. The paper theoretically analyzes the issues of existing methods that are based on meta-path-based graphs or adaptive graph structures. Furthermore, a rank- and consistency-constrained approach is proposed to tackle known issues of SHGL, and it is shown to have lower complexity theoretically. 2. Experimental results show the superiority of the proposed method consistently over a variety of datasets. Weaknesses: 1. Notations are not always clearly defined. For example, what are the dimensions of the mappings $g_{\phi}$ and $p_{\phi}$? 2. The derivation of the first loss term is not clear to me. Why is there an entropy constraint term in (13)? How do we translate Eq. (8) to Eq. (13), and what is the correspondence of the terms? Line 183 states ‘fitting eigenvectors F by (13)’, but it seems that F does not appear in (13). Or is it that (13) is only for solving the last term in (8)? Clarity can be further improved upon the objective function, especially the term $\cal{L}_{sp}$. 3. Why is $Y$ a probability matrix? Is there any guarantee on that? Only orthogonality is proved for the matrix. Do we need a simplex constraint on the columns of $Y$? Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In Line 172, whose independence are we referring to?
Why is $H^T H=I$ equivalent to statistical independence? 2. In practice, is it always the case that setting the number of clusters equal to $c$ gives the optimal results? 3. How do we reach the inequality in (53) and (54)? Does the term in (55) correspond to the loss function in (17)? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors are encouraged to discuss the limitations in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive comments on our theoretical and experimental results. We are very encouraged and will try our best to address the concerns one by one. > **Q1.** Some undefined notations, e.g., the dimensions of $g_\phi$ and $p_\varphi$. **A1**. We employ $g_{\phi}\in\mathbb{R}^{f\times d_1}$ and $f_\theta\in\mathbb{R}^{f\times d_1}$ to derive semantic and heterogeneous representations, where $f$ and $d_1$ are the dimensions of node features and representations, respectively. We then employ projection heads $p_{\varphi}\in\mathbb{R}^{d_1\times c}$ and $q_\gamma\in\mathbb{R}^{d_1\times d_2}$ to map representations, where $c$ and $d_2$ are the mapped dimensions. We summarized the settings of the above notations in Table 3 in the uploaded PDF file. > **Q2.** Unclear derivation of the first loss. Why an entropy constraint in (13)? How do we translate (8) to (13), and what is the correspondence of the terms? Or is (13) only for solving the last term in (8)? Clarity can be further improved upon $\mathcal{L}_{sp}$. **A2**. The second term (i.e., the entropy constraint) in Eq. (13) is used to maximize the entropy of $\mathbf{Y}$ (the entropy is maximized when nodes are evenly assigned to different clusters), thereby preventing most nodes from being assigned to the same cluster. The first term in Eq. (13) is only for solving the last term in Eq. (8). Specifically, as stated in lines 153-157, Eq. (8) is solved by alternating optimization, i.e., fix $\mathbf{S}$ then optimize $\mathbf{F}$, and fix $\mathbf{F}$ then optimize $\mathbf{S}$. When fixing $\mathbf{F}$, only the first two terms of Eq. (8) remain to optimize $\mathbf{S}$ (the last term of Eq. (8) is fixed), so we can obtain the closed-form solution of $\mathbf{S}$ with Eq. (10). When fixing $\mathbf{S}$, only the last term of Eq. (8) remains to optimize $\mathbf{F}$ (the first two terms of Eq. (8) are fixed), which requires cubic time complexity due to the eigendecomposition.
Hence, we replace it with neural networks (i.e., $p_\varphi$ and an orthogonal layer) and the spectral loss to reduce the time complexity. Therefore, $\mathcal{L}_{sp}$ in Eq. (13) consists of two parts: the first term optimizes $\mathbf{Y}$ to approximate $\mathbf{F}$ to solve the last term of Eq. (8), and the second term assigns nodes evenly to different clusters. > **Q3.** Why is $\mathbf{Y}$ a probability matrix? Any guarantee on that? Do we need a simplex constraint on $\mathbf{Y}$? **A3**. In the submission, we called $\mathbf{Y}$ the probability matrix because $\mathbf{Y}$ indicates the prediction that assigns nodes to different clusters, and the larger the value $\mathbf{y}_{ij}$, the greater the probability of node $v_i$ being assigned to the $j$-th cluster. There is no other constraint (e.g., the simplex constraint) on $\mathbf{Y}$ besides the orthogonality and the spectral loss. Indeed, $\mathbf{Y}$ is not a typical probability matrix, and we will rename it the cluster assignment matrix in the final version to avoid confusion. In the rebuttal, we further conducted experiments adding the simplex constraint to $\mathbf{Y}$ and reported the results in Table 4 in the uploaded PDF file. In Table 4, the method with the simplex constraint obtains inferior results to the method without it. This is reasonable because adding a simplex constraint to $\mathbf{Y}$ may hinder its approximation to $\mathbf{F}$, affecting the quality of the affinity matrix. > **Q4.** In Line 172, whose independence is referred to? Why is $\mathbf{H}^T\mathbf{H}=\mathbf{I}$ equivalent to statistical independence? **A4.** The independence refers to the independence among different representation dimensions. Specifically, $\mathbf{H}^T\mathbf{H}\in \mathbb{R}^{d_1\times d_1}$ measures the correlation among different dimensions of $\mathbf{H}$, where $d_1$ is the dimension of $\mathbf{H}$.
After that, the constraint $\mathbf{H}^T\mathbf{H}= \mathbf{I}$ forces the off-diagonal elements of $\mathbf{H}^T\mathbf{H}$ to be 0, thus minimizing the correlation among different dimensions of $\mathbf{H}$ to achieve statistical independence. > **Q5.** Does using $c$ clusters always yield optimal results? **A5.** To verify this, we varied the number of clusters and reported the results in Figure 1 in the uploaded PDF file. Our method obtains the best results when the number of clusters equals $c$, and the performance decreases as the number of clusters increases. This is reasonable because if the number of clusters is larger than $c$, nodes within the same class may be divided into different clusters. Conversely, if the number of clusters is smaller than $c$, nodes from different classes may be divided into the same cluster. > **Q6.** How do we reach the inequality in (53) and (54)? Does the term in (55) correspond to the loss function in (17)? **A6.** We can reach (53) and (54) by proving the following inequality: $$a^2b+(1-a)^2c\ge\frac{bc}{b+c},$$ where $0\le a\le1$ and $0\le b,c$. To prove it, we construct $f(a) = a^2b+(1-a)^2c-\frac{bc}{b+c}$. We then take the derivative of $f(a)$, i.e., $$f'(a)=2ab-2c+2ac.$$ Setting $f'(a)=0$ gives $a=\frac{c}{b+c}$, and $$f(\tfrac{c}{b+c})=\frac{c^2b}{(b+c)^2}+\frac{b^2c}{(b+c)^2}-\frac{bc}{b+c}=0.$$ In addition, the second-order derivative of $f(a)$ is $$f''(a)=2(b+c)\ge0.$$ Therefore, $f(a)$ is decreasing for $a<\frac{c}{b+c}$, increasing for $a>\frac{c}{b+c}$, and reaches its minimum 0 at $a=\frac{c}{b+c}$. Hence, $f(a)\ge0$ always holds for $0\le a\le1$, which proves the above inequality. We can reach the inequality in (53) by replacing $a$, $b$, and $c$ with $P_0$, $\sigma_0^2$, and $\sigma_j^2$, respectively. Similarly, we can reach the inequality in (54). In addition, the term in (55) corresponds to the loss function in Eq. (17). > **Q7.** Discuss limitations in the main paper.
**A7.** Thanks for the suggestion. We discussed limitations in Appendix F, and we will move them to the main paper in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the response and clarification. I would suggest summarizing the notations in the Appendix, and please add the details for deriving Eq. (53) and (54) as well. Besides, it looks like $H^{\top} H = I$ only implies uncorrelatedness instead of statistical independence. Please consider rephrasing the sentences. A follow-up question: my question on the loss term derivation was how do we go from (8) to (13)? If (13) is only for solving the last term in (8) (i.e., for solving Y with fixed S), then why does the final objective function (18) only contain ${\cal{L}}_{sp}$ but not the whole Eq. (8)? If this loss is for solving (8), how is the entropy constraint term derived from (8)? --- Rebuttal 2: Comment: Thank you for your constructive feedback. Following your suggestions, we will summarize the notations and the details for deriving Eq. (53) and Eq. (54) in the Appendix. Moreover, we will replace "statistically independent" with "uncorrelated" to avoid confusion. For the follow-up question, Eq. (13) contains two terms: the first term in Eq. (13) is derived from, and used for solving, the last term of Eq. (8) by fixing $\mathbf{S}$ and approximating $\mathbf{F}$ with $\mathbf{Y}$. In contrast, the second term (i.e., the entropy constraint) in Eq. (13) is not derived from Eq. (8); it is a widely used regularization term in the existing literature [1-4] that helps optimize $\mathbf{Y}$ by preventing most nodes from being assigned to the same cluster. We will add more clarification about Eq. (13) in the final version. The final objective function Eq. (18) only contains Eq. (13) (i.e., $L_{sp}$) but not the whole Eq. (8) because, in the final objective function, we only need to approximate one variable (i.e., $\mathbf{F}$) in the last term of Eq. (8) with $L_{sp}$.
In contrast, another variable (i.e., $\mathbf{S}$) in the first two terms of Eq. (8) is already solved by its optimal solution in Eq. (10) and does not require gradient back-propagation to optimize. That is, the first two terms of Eq. (8) do not have to appear in the final objective function. Please let us know if you have any other concerns. Thanks! [1] Sparse Subspace Clustering with Entropy-Norm. ICML 2020. [2] Deep semantic clustering by partition confidence maximisation. CVPR 2020. [3] Contrastive clustering. AAAI 2021. [4] Entropy regularization for unsupervised clustering with adaptive neighbors. Pattern Recognition 2022. --- Rebuttal Comment 2.1: Comment: Thank you for the further clarification and addressing my concerns. I will increase the score accordingly.
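The inequality used to derive (53) and (54) earlier in this thread, $a^2b+(1-a)^2c\ge \frac{bc}{b+c}$, can also be verified numerically; the following is an illustrative sketch, not part of the submission:

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomized check of a^2*b + (1-a)^2*c >= b*c/(b+c)
# for 0 <= a <= 1 and b, c > 0 (positive to avoid division by zero).
for _ in range(10_000):
    a = rng.uniform(0.0, 1.0)
    b, c = rng.uniform(0.01, 10.0, size=2)
    lhs = a**2 * b + (1 - a)**2 * c
    rhs = b * c / (b + c)
    assert lhs >= rhs - 1e-12

# Equality holds exactly at the minimizer a = c / (b + c).
b, c = 2.0, 3.0
a_star = c / (b + c)
assert np.isclose(a_star**2 * b + (1 - a_star)**2 * c, b * c / (b + c))
```

The second assertion confirms the calculus argument above: $f(a)$ attains its minimum value of 0 at $a=\frac{c}{b+c}$.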
Summary: Overall, this paper makes the first attempt to theoretically revisit previous SHGL methods from the spectral clustering perspective in a unified manner. Specifically, this paper revisits SHGL from the spectral clustering perspective and introduces a novel framework enhanced by rank and dual consistency constraints. In particular, the proposed framework incorporates a rank-constrained spectral clustering method that refines the affinity matrix to exclude noise effectively. Additionally, the proposed method integrates node-level and cluster-level consistency constraints that concurrently capture invariant and clustering information to facilitate learning in downstream tasks. Strengths: 1. I really appreciate the idea of revisiting previous SHGL methods from the perspective of spectral clustering, which is interesting and may inspire researchers in the graph learning as well as the self-supervised learning communities. 2. Theoretical analysis verifies the effectiveness of the proposed method, which divides the learned representations into distinct partitions based on the number of classes. Furthermore, the proposed method exhibits better generalization ability than previous SHGL methods. 3. Extensive experiments on both heterogeneous and homogeneous graphs demonstrate the effectiveness of the proposed method. Visualization and case studies further verify the claims in this paper. 4. The proposed rank-constrained spectral clustering is novel and interesting, and it seems it could also be applied to graph structure learning on homogeneous graphs. Weaknesses: 1. In the Introduction, the authors claim that previous methods conduct message-passing relying on meta-path-based graphs and adaptive graph structures, which inevitably include noise. Are there any real examples to better illustrate the noise in meta-path-based graphs as well as the adaptive graph structures? 2.
In the dual consistency constraints, the proposed method designs the node-level consistency constraint to capture the invariant information between the node representations and the heterogeneous representations. How about replacing the node-level consistency constraint with another common loss, such as InfoNCE [1]? [1] Aaron van den Oord et al., Representation learning with contrastive predictive coding. 3. The authors design the rank-constrained spectral clustering to learn the affinity matrix for nodes belonging to the same node type. Although the visualization verifies the effectiveness of the affinity matrix, it would be better for the authors to add some ablation studies to further verify it, such as replacing the affinity matrix with a self-attention mechanism or a simple cosine similarity. 4. What are the specific processes for different downstream tasks (e.g., node classification and node clustering)? It seems that the proposed method is trained first, after which the parameters of the model are frozen and the output representations are used for downstream tasks. 5. The proposed method is designed for heterogeneous graphs, yet it can be implemented on both homogeneous and heterogeneous graph datasets. Is it just a matter of replacing the heterogeneous graph encoder with a graph convolutional layer? How are the two different views generated for the dual consistency constraints? 6. It would be better to add some recent works on self-supervised heterogeneous graph learning to the related work, especially those published in the past two years. 7. The paper needs further proofreading. For example, - In Eq. (17), I know $q_i$ is the i-th projected representation. What does $\hat{\mathbf{q}}_{\mathbf{y}_i}$ actually mean? - The definitions of some symbols need to be further specified, such as $\mathbf{I}$ in Definition 2.1. I guess it might represent the identity matrix. - Some mistakes in Table 3: the Freebase dataset should be removed from the Table.
- The caption of Table 7 should be fixed. Technical Quality: 3 Clarity: 3 Questions for Authors: see weakness. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Yes, the authors have discussed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive comments on the novelty, theoretical analysis, and experimental results of our method. We are very encouraged and will try our best to address the concerns one by one. > **Q1.** Any real examples to illustrate the noise in meta-path-based graphs and adaptive graph structures? **A1.** Yes. The noise in meta-path-based graphs and adaptive graph structures refers to edges connecting nodes from different classes. Take an academic heterogeneous graph with several node types (i.e., paper, author, and subject) as an example. For the meta-path-based graph, if an author wrote two papers belonging to different classes (e.g., "data mining" and "computer vision"), then there will be a meta-path "paper-author-paper" connecting these two papers. For the adaptive graph structure, if two papers belonging to "data mining" and "computer vision" share some keywords, there may also be an edge connecting them. As a result, such noisy edges in meta-path-based graphs and adaptive graph structures connect paper nodes from different classes and introduce confusion into node representations after message-passing. > **Q2.** How about replacing the node-level consistency constraint with another loss, e.g., InfoNCE? **A2.** To verify the effectiveness of the node-level consistency constraint, we conducted experiments replacing the proposed node-level consistency constraint with the InfoNCE loss and reported the results in Table 1 in the uploaded PDF file. From Table 1, we find that the variant with the InfoNCE loss obtains performance similar to the proposed method. However, the InfoNCE loss generally requires a time complexity of $\mathcal{O}(n^2)$, where $n$ is the number of nodes. This may introduce large computation costs during the training process. In contrast, the proposed method simply designs the node-level consistency constraint in Eq.
(15) to capture the invariant information with a time complexity of $\mathcal{O}(nd^2)$, where $d$ is the representation dimension and generally $d^2<n$. We will add the experimental results and analysis in the final version. > **Q3.** It would be better to add some ablation studies to verify the effectiveness of the affinity matrix, e.g., replacing it with the self-attention mechanism or simple cosine similarity. **A3**. In the submission, we conducted experiments replacing the affinity matrix with the self-attention mechanism and reported the results in Table 6 in Appendix E. To further verify the effectiveness of the rank-constrained affinity matrix, we also investigated replacing the affinity matrix with the simple cosine similarity and reported the results in Table 2 in the uploaded PDF file. The proposed method with the affinity matrix obtains superior performance to the cosine similarity and the self-attention mechanism on all datasets. The reason can be attributed to the fact that the affinity matrix in the proposed method is constrained to contain exactly $c$ components to mitigate noisy connections between different classes. In contrast, although the cosine similarity or the self-attention mechanism may assign small weights to node pairs from different classes, they inevitably introduce noise during the message-passing process, affecting the quality of node representations. As a result, the effectiveness of the rank-constrained affinity matrix is verified. We will add the experimental results and analysis in the final version. > **Q4.** What are the specific processes for downstream tasks (e.g., classification and clustering)? **A4.** We follow the evaluation in previous works [1-2] to conduct node classification and node clustering as semi-supervised and unsupervised downstream tasks, respectively. Specifically, we first pre-train models with unlabeled data in a self-supervised manner and output the learned node representations.
After that, the resulting representations can be used for different downstream tasks. For the node classification task, we train a simple logistic regression classifier with a fixed number of iterations and evaluate the effectiveness of all methods with Micro-F1 and Macro-F1 scores. For the node clustering task, we split the learned representations into $c$ clusters with the K-means algorithm, then calculate the normalized mutual information (NMI) and the adjusted rand index (ARI) to evaluate the performance of node clustering. We will add the details about downstream tasks in the final version. [1] HDMI: High-order Deep Multiplex Infomax. WWW 2021. [2] Multi-view Contrastive Graph Clustering. NeurIPS 2021. > **Q5.** Is it just a matter of replacing the heterogeneous graph encoder with a graph convolutional layer to implement the method on homogeneous graph datasets? How are the two different views generated for the dual consistency constraints? **A5**. Yes, we replace the heterogeneous graph encoder with a graph convolutional layer to implement the proposed method on homogeneous graph datasets. In addition, we follow previous work [3] to generate two different views of the homogeneous graph by removing edges and masking features. [3] Graph Contrastive Learning with Adaptive Augmentation. WWW 2021. > **Q6**. It would be better to add some recent works about SHGL to the related work. **A6**. Thanks for your suggestion; we will add more recent related works in the final version. > **Q7.** The paper needs further proofreading. For example, 1) In Eq. (17), what does $\hat{\mathbf{q}}_{\mathbf{y}_i}$ actually mean? 2) The definitions of some symbols need to be further specified, such as $\mathbf{I}$ in Definition 2.1. 3) Some mistakes in Table 3. 4) The caption of Table 7 should be fixed. **A7.** 1) $\hat{\mathbf{q}}_{\mathbf{y}_i}$ indicates the cluster representation whose label equals $\mathbf{y}_i$. 2) $\mathbf{I}$ indicates the identity matrix.
3)-4) We will fix these mistakes in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' clarification; my concerns are well addressed. Overall, this is an interesting work given its insights, novelty, and theoretical framework. After going through the other comments and the rebuttal, I maintain my acceptance of this paper.
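To make the complexity comparison in A2 above concrete (InfoNCE's $\mathcal{O}(n^2)$ pairwise similarities versus the $\mathcal{O}(nd^2)$ dimension-wise consistency term), here is a small illustrative sketch; the cross-correlation term is a hypothetical stand-in for the paper's Eq. (15), which is not reproduced in this thread:

```python
import numpy as np

n, d = 1000, 64                      # number of nodes, representation dimension
rng = np.random.default_rng(0)
Z1 = rng.standard_normal((n, d))     # node representations (one view)
Z2 = rng.standard_normal((n, d))     # heterogeneous representations (other view)

# InfoNCE-style contrastive loss: materializes an n x n pairwise
# similarity matrix -> O(n^2 d) time and O(n^2) memory.
sim = Z1 @ Z2.T
assert sim.shape == (n, n)

# Dimension-wise consistency (hypothetical stand-in for Eq. (15)):
# only a d x d cross-correlation matrix is formed
# -> O(n d^2) time and O(d^2) memory, cheaper whenever d^2 < n.
corr = (Z1.T @ Z2) / n
assert corr.shape == (d, d)
```

For the graph sizes in the experiments, $d^2 < n$ typically holds, which is why the dimension-wise term is the cheaper of the two.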
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their insightful and constructive comments. Due to space limitations in the rebuttal, we listed tables and figures in the uploaded PDF file. All modifications will be found in the final version. Our key responses are summarized as follows: **> Clarification of unclear points.** As Reviewer NZZJ suggested, we provided more details to clarify some notations, for example, the dimensions of the encoders $g_\phi$, $f_\theta$ and projection heads $p_\varphi$, $q_\gamma$. Moreover, we provided more explanation to clarify the objective function $\mathcal{L}_{sp}$. In addition, we renamed $\mathbf{Y}$ the cluster assignment matrix to avoid confusion. Finally, we provided more explanations to clarify the statistical independence. As Reviewer v7xk suggested, we provided more explanation to clarify previous SHGL methods in Theorem 2.2 and listed their corresponding expressions for the regularization term. Moreover, we highlighted the new contributions of the paper compared to previous literature. In addition, we provided explanations and modified the sentences in the abstract and introduction for clearer expression. As Reviewer oBdx suggested, we explained the three challenges and the logical relationships between them. Moreover, we clarified the objective of the problem, the reason for bridging the gap between SHGL methods and graph-cut algorithms, and its benefits. In addition, we explained why rank constraints can mitigate noisy connections and the logical relationship between Sections 2.2 and 2.3. Finally, we explained why homogeneous graph datasets were used for performance evaluation. **> Additional theoretical details.** As Reviewer NZZJ suggested, we provided the details to reach the inequality in (53) and (54). As Reviewer v7xk suggested, we will move the definition of the model complexity from the appendix to the main paper.
Moreover, we provided the definition of the relationship between model complexity and generalization ability. In addition, we reformulated the informal description of Theorem 2.5 with a precise objective function. **> Additional experimental results.** As Reviewer LzB2 suggested, we conducted experiments replacing the proposed node-level consistency constraint with the InfoNCE loss. Moreover, we conducted experiments replacing the affinity matrix with the simple cosine similarity and the self-attention mechanism. As Reviewer NZZJ suggested, we conducted experiments adding a simplex constraint on the columns of $\mathbf{Y}$. Moreover, we conducted experiments varying the number of clusters in the proposed method. **> Other responses.** As Reviewer LzB2 suggested, we provided a real example to better illustrate the noise in meta-path-based graphs and adaptive graph structures. Moreover, we provided more details about the specific processes for different downstream tasks and the implementation on homogeneous graphs. **> Summary.** We thank all the reviewers again for the detailed and constructive review. Almost all the reviewers agreed with the novelty and contributions of the proposed method, and most of the concerns relate to unclear expressions and experiments. We hope our clarifications, additional theoretical details, and additional experimental results in the rebuttal address all of your concerns. Please let us know if you have any questions or concerns. Pdf: /pdf/327d10cfbf102070f3d4c987e354c09109f67fdf.pdf
NeurIPS_2024_submissions_huggingface
2,024
Real-world Image Dehazing with Coherence-based Pseudo Labeling and Cooperative Unfolding Network
Accept (spotlight)
Summary: This paper posits that real-world image dehazing is particularly challenging due to the intricacies of accurately modeling haze distributions and the limited availability of paired real-world data. Traditional and deep learning-based methods struggle to address the complexities of real haze, often resulting in color distortion and suboptimal outcomes. To tackle these issues, this paper introduces the CORUN model, which integrates atmospheric scattering and image scenes to incorporate physical information into deep networks. Furthermore, it proposes the Coherence-based Label Generator to produce high-quality pseudo labels for network training, thereby improving the network's generalization in haze removal. Strengths: 1. The proposed method, CORUN (COopeRative Unfolding Network), integrates physical information into deep networks to overcome the limitations of existing deep learning-based dehazing methods. It cooperatively models atmospheric scattering and image scenes by incorporating Transmission and Scene Gradient Descent Modules at each stage, effectively restoring haze-contaminated details. 2. CORUN is constructed on the basis of the atmospheric scattering model using a proximal gradient descent method within a deep unfolding network, which provides strong interpretability to the method. 3. The proposed Colabator framework significantly improves the performance of models pretrained on synthetic datasets in real-world scenarios within a limited number of iterations by fine-tuning with real degraded images. This plug-and-play framework incurs no additional computational cost during deployment. 4. Experiments demonstrate the excellent performance of the proposed CORUN model and Colabator framework in real-world dehazing tasks. Both quantitative and visual results validate the network's capability to model real-world haze and restore real-world scenes effectively. Weaknesses: 1. The paper contains some inconsistencies in the use of symbols. 
While images are generally denoted by P, in the loss functions (15, 16, 17), the authors use I to represent images. Although the text explains the use of these symbols, it can still impact readability. Additionally, in line 172, the pseudo-label symbol is misspelled and should be corrected to P^{R}\_{Pse\_{i}}. 2. In line 175, the symbols contain unnecessary tildes, which should be removed. The symbols in Figure 2 should also be consistent with those used in the formulas. 3. Equations (5) and (7) do not address the consistency of the number of channels between the Transmission Map and the image. Although this is illustrated in Figure 2, it still needs to be clearly stated in the equations to facilitate readability. 4. This paper adequately demonstrates its method's performance in real-world dehazing tasks through thorough experiments in both visualization and quantitative results. However, the provided ablation studies are insufficient. The authors should conduct more comprehensive ablation studies and experiments on different datasets to further validate the effectiveness of their method and its components. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. This paper introduces Colabator, a plug-and-play coherence label generation method designed to fine-tune models pre-trained on synthetic datasets using real degraded images, thereby achieving better real-world image processing results. Can the authors provide additional results of this framework's fine-tuning on various tasks to support this claim? 2. I noticed that the authors assign weights to the local quality of pseudo-labels using a weighted combination of CLIP scores and NR-IQA scores, instead of the more common approach of using their dot product. Can the authors provide experimental results to demonstrate the effectiveness of this choice? 3. Regarding the choice of NR-IQA, why did this paper select MUSIQ over other methods? What are the reasons behind this decision? 4.
There is a small portion of similar or identical image content between the RTTS and URHI subsets of the RESIDE dataset. Did the authors notice this issue during their experiments? Have they addressed this issue, or did they directly use the full URHI subset for fine-tuning? If the authors used the entire URHI subset for fine-tuning, have they considered the model's performance on the RTTS subset after removing these similar images? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Regarding limitations, the proposed method partially addresses previous issues such as the lack of paired real-world data and the difficulty in modeling complex real-world haze distributions. Section A.1 "Limitations and Future Work" in the appendix discusses the current work's limitations and potential future research directions to address these unresolved issues. As for broader impacts, Section A.2 in the appendix provides a thorough discussion. As stated in the paper, this work has no foreseeable negative impacts on the field or society and offers substantial positive contributions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
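For context on the atmospheric scattering model the review credits CORUN with unfolding, here is a toy numpy sketch of the forward model $\mathbf{P} = \mathbf{J}\odot\mathbf{T} + A(1-\mathbf{T})$ and its closed-form inversion when the transmission map and atmospheric light are known; the array shapes and constants are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, size=(4, 4, 3))   # toy "clean scene" in [0, 1]
A = 0.9                                      # global atmospheric light (assumed)
t = rng.uniform(0.3, 1.0, size=(4, 4, 1))   # transmission map, broadcast over RGB

# Forward model: the hazy observation blends the scene with airlight.
P = J * t + A * (1.0 - t)

# With t and A known, the scene is recovered in closed form.
J_rec = (P - A * (1.0 - t)) / t
assert np.allclose(J_rec, J)
```

The difficulty in real-world dehazing, as the review notes, is precisely that $\mathbf{T}$ and $A$ are unknown and must be estimated jointly with the scene, which is what the unfolding network's alternating modules address.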
Rebuttal 1: Rebuttal: Thanks for the valuable comments. If not specifically stated, all experiments are conducted on the real-world image dehazing task with *RTTS* for space limitation. **W1 & W2: Typo and Misuse of symbols.** We apologize for the errors in our writing and appreciate you pointing out these mistakes in our article. We will correct the issues you mentioned in the final version to improve the readability of the paper. **W3: Lack of clarity in the formulas** We have revised Eq. (5) and Eq. (7) to clearly express the handling of information across different channels. The updated Eq. (5) is as follows: $$ \mathbf{T}\_k=\underset{\mathbf{T}}{\arg \min }\frac{1}{2}\sum\_{c\in\{R,G,B\}} \| \mathbf{P}^{c} - \hat{\mathbf{J}}\_{k-1}^{c}\cdot \mathbf{T} + \mathbf{T} - \mathbf{I} \|^{2}\_{2}+\phi(\mathbf{T}). $$ The updated Eq. (7) is as follows: $$ \hat{\mathbf{T}}\_k=\sum\_{c\in\{R,G,B\}}(\mathbf{I}-\hat{\mathbf{J}}\_{k-1}^{c}+\frac{\lambda\_{k}}{(\mathbf{I}-\hat{\mathbf{J}}\_{k-1}^{c})^{\top}})^{-1} \cdot (\mathbf{I}-\mathbf{P}^{c}\frac{\lambda\_{k} \mathbf{T}\_{k-1}}{(\mathbf{I} -\hat{\mathbf{J}}\_{k-1}^{c})^{\top}}). $$ We will update this content in the final version. **W4: More Ablation studies.** We tested our CORUN method on the O-HAZE and I-HAZE datasets, using PSNR and SSIM for evaluation. The results, presented in **Table A4**, illustrate the robustness of CORUN across dehazing datasets. **Q1: Plug-in-play experiment** Thank you for your suggestion. According to our experiments, Colabator exhibits plug-and-play characteristics and brings enhancements across different networks and real-world tasks. In **Figure. 1** and **Table. 2** of the paper, we validated the effectiveness of Colabator on deep unfolding networks and real-world image dehazing tasks. 
Here, we integrate our Colabator with more cutting-edge image restoration methods and fine-tune them for 5000 iterations on the *underwater image enhancement* and *real robotic laparoscopic hysterectomy desmoking* tasks after pretraining. In the UIE task, all methods are pretrained on the *UIEB* dataset and fine-tuned and tested on the low-quality part of the *EUVP* training set and its test set; we observe average performance gains of **11.3%** for SwinIR [68], **20.6%** for Restormer [69], **15.4%** for GRL [70], and **13.0%** for AST [71].

||NIMA↑|BRISQUE↓|FADE↓|
|---|---|---|---|
|SwinIR[68]|3.392|31.775|0.536|
|SwinIR+Colabator|3.650|27.247|0.472|
|Restormer[69]|3.981|25.688|0.482|
|Restormer+Colabator|4.185|16.489|0.381|
|GRL[70]|3.816|25.795|0.506|
|GRL+Colabator|3.925|18.854|0.423|
|AST[71]|4.073|22.772|0.473|
|AST+Colabator|4.186|16.353|0.435|

In the desmoking task, all methods are pretrained on *Desmoke-LAP* with synthetic paired data and fine-tuned and tested on *Desmoke-LAP*'s real unpaired data; we observe average performance gains of **7.9%** for SwinIR [68], **4.9%** for Restormer [69], **7.1%** for GRL [70], and **6.4%** for AST [71].

||NIMA↑|BRISQUE↓|FADE↓|
|---|---|---|---|
|SwinIR[68]|3.056|40.668|0.655|
|SwinIR+Colabator|3.317|35.519|0.639|
|Restormer[69]|3.554|33.356|0.628|
|Restormer+Colabator|3.738|30.933|0.614|
|GRL[70]|3.503|31.682|0.601|
|GRL+Colabator|3.716|27.663|0.585|
|AST[71]|3.808|29.735|0.563|
|AST+Colabator|3.952|26.483|0.538|

The gains brought by our Colabator are significant, demonstrating that Colabator possesses plug-and-play capabilities across different networks and tasks. **Q2: Ablation study on trusted-weight calculation** We apologize for not explicitly explaining the reason for using weighted summation in the text. We opted for summation rather than multiplication to calculate the trusted weight, primarily because both the CLIP score and the NR-IQA score need to contribute their respective effects to the weight distribution.
This approach also allows the ratio between the two scores to be adjusted for different tasks to achieve more reasonable and better results. If weighted multiplication were used, it would be impossible to control the relative importance of the different components. As shown in the table, we experimentally demonstrated that calculating the trusted weight by score summation, compared to score multiplication, yields better final outcomes.

||NIMA↑|BRISQUE↓|FADE↓|
|---|---|---|---|
|Product|5.487|14.949|0.807|
|Summation (Ours)|5.315|11.956|0.751|

**Q3: Ablation study on NR-IQA** Thank you for your question. In fact, we have previously tested many NR-IQA methods and found that MUSIQ aligns best with the human visual system. MUSIQ evaluates images at multiple scales, comprehensively considering both the technical and aesthetic quality of images, making it broadly applicable to various image quality assessment tasks. We conducted ablation experiments using different NR-IQA methods, and the results show that using MUSIQ in the scenarios presented in this paper achieves better overall performance.

||NIMA↑|BRISQUE↓|FADE↓|
|---|---|---|---|
|NIMA|5.337|12.242|0.825|
|BRISQUE|5.245|11.652|0.813|
|NIQE|5.122|12.900|0.835|
|MUSIQ (Ours CORUN+)|5.315|11.956|0.751|

**Q4: Similar content in RTTS and URHI** Thank you for your question. We used the URHI dataset following the common setting, as in works like PSD, so this issue does not cause unfairness in the comparative experiments. However, the concern is worth considering. Therefore, we removed images from the URHI dataset that overlap more than 60% with the RTTS dataset, ultimately retaining 3728 images for the second phase of fine-tuning CORUN. The final results are shown below.
We found that the NIMA score actually increased, which we believe is due to the removal of some low-quality data during the data cleaning process.

||NIMA↑|BRISQUE↓|FADE↓|
|---|---|---|---|
|URHI-|5.348|12.784|0.832|
|URHI|5.315|11.956|0.751|

[68] SwinIR, ICCVW, 2021 [69] Restormer, CVPR, 2022 [70] GRL, CVPR, 2023 [71] AST, CVPR, 2024 --- Rebuttal Comment 1.1: Comment: After reviewing all the materials and discussions on this page, I believe the authors have made significant efforts to address my main concerns. The proposed framework Colabator has strong generalization performance, covering dehazing, desmoking and underwater image enhancement tasks. I believe that this quality-evaluation-based pseudo-label selection framework provides a new direction for real-world image restoration. I am now happy to update my rating to strong acceptance. --- Reply to Comment 1.1.1: Title: Thanks for recognizing the value of our work! Comment: We wish to express our sincere appreciation to the reviewer for recognizing the substantial significance of our contribution, specifically the Colabator framework and the CORUN method, within the realm of real-world image dehazing, along with image desmoking and underwater image enhancement. Your acknowledgement holds great importance to us and serves as a meaningful validation of our dedicated efforts to advance this critical area of research.
Summary: This paper focuses on the challenging real-world image dehazing problem. The authors develop their network based on an unfolding network, leveraging cooperative proximal mapping modules to facilitate the estimation of the transmission map and image content. In addition, the authors propose an interesting teacher network, named Colabator, with newly developed quality assessment metrics to better provide high-quality pseudo-labels for dehazing. The subjective metrics on different benchmarks, and the user study with downstream-task evaluation, demonstrate the effectiveness of this work. Strengths: 1. The Colabator devised by the authors sounds reasonable; it also provides some insights on training-scheme design for restoration problems where paired image data are hard to collect. 2. The CPMM modules further improve the representative abilities of unfolding networks to model the gradient descent process of image dehazing problems. 3. The experimental results give better subjective quality for the proposed network compared to other methods. Moreover, the results of the user study and downstream tasks confirm this point. Judging by myself, I also agree that the authors' method generates more natural restored images with minimal color shift and the best quality. Weaknesses: 1. The derivations of equations (7) and (10) are unclear. The authors may need to provide some basic theory before stating these equations. 2. The details of using DA-CLIP as a haze density evaluator are unclear; the statement in line 164 indicates that the authors use a fixed text feature, which is also missing from the article. Moreover, the reason behind the patch partition in line 165 is not clearly stated, and information about the patch size N cannot be found in the article. 3. The article lacks ablation studies on the CPMM modules. I wonder how the network will perform if all CPMM modules are removed. 
Technical Quality: 3 Clarity: 3 Questions for Authors: First of all, please address my concerns listed in the weaknesses part. I have some additional questions: 1. About the density loss: similar to the problems in the second point of the weaknesses, how is this loss function formulated? 2. In the fine-tuning phase, as the student network is optimized using strongly augmented data, is the performance gain coming from this augmentation strategy? 3. Why partition images before quality assessment? How about using full-resolution images as input? I hope to see the answers and will adjust my rating accordingly. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have stated the limitations of their article in the supplementary materials. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments. Unless specifically stated otherwise, all experiments are conducted on the real-world image dehazing task with *RTTS* due to space limitations. **W1: Unclear derivation of equations** Due to space limitations, we only show the detailed derivation of Eq. (7); Eq. (10) can be derived similarly. Given Eqs. (5) and (6), the subproblem for $\hat{\mathbf{T}}$ can be formulated according to the proximal gradient algorithm:
$$
\mathcal{T}(\hat{\mathbf{J}}\_{k-1}, \mathbf{T}\_{k-1})=\frac{1}{2}\| \mathbf{P}-\hat{\mathbf{J}}\_{k-1}\cdot \hat{\mathbf{T}}+\hat{\mathbf{T}}-\mathbf{I}\|^2\_2+\frac{\lambda\_k}{2}\|\hat{\mathbf{T}}-\mathbf{T}\_{k-1}\|^2\_2.
$$
Then we obtain the partial derivative:
$$
\partial\_{\hat{\mathbf{T}}}\mathcal{T}(\hat{\mathbf{J}}\_{k-1}, \mathbf{T}\_{k-1})=(\mathbf{I}-\hat{\mathbf{J}}\_{k-1})^{\top}(\mathbf{P}-\hat{\mathbf{J}}\_{k-1}\cdot \hat{\mathbf{T}}+\hat{\mathbf{T}}-\mathbf{I})+ \lambda\_k(\hat{\mathbf{T}}-\mathbf{T}\_{k-1}).
$$
Setting the partial derivative to zero, we arrive at the closed-form solution for $\hat{\mathbf{T}}$ in Eq. (7). Similarly, the subproblem for $\hat{\mathbf{J}}$ can be formulated as
$$
\mathcal{J}(\hat{\mathbf{T}}\_k, \mathbf{J}\_{k-1})=\frac{1}{2}\| \mathbf{P}-\hat{\mathbf{J}}\cdot \hat{\mathbf{T}}\_k+\hat{\mathbf{T}}\_k-\mathbf{I}\|^2\_2+\frac{\mu\_k}{2}\|\hat{\mathbf{J}}-\mathbf{J}\_{k-1}\|^2\_2.
$$
The corresponding partial derivative is
$$
\partial\_{\hat{\mathbf{J}}}\mathcal{J}(\hat{\mathbf{T}}\_k, \mathbf{J}\_{k-1})=-\hat{\mathbf{T}}\_k^{\top}(\mathbf{P}-\hat{\mathbf{J}}\cdot \hat{\mathbf{T}}\_k+\hat{\mathbf{T}}\_k-\mathbf{I})+\mu\_k(\hat{\mathbf{J}}-\mathbf{J}\_{k-1}).
$$
Setting this partial derivative to zero yields the closed-form solution for $\hat{\mathbf{J}}$ presented in Eq. (10). 
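Reading the products above element-wise (per pixel), the two stationarity conditions give simple closed-form updates. Below is a minimal NumPy sketch of this interpretation; the function names, the scalar penalties, and the per-pixel reading are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def t_update(P, J_prev, T_prev, lam):
    # Per-pixel minimizer of 0.5*(P - J*T + T - 1)^2 + (lam/2)*(T - T_prev)^2:
    # set (1-J)*(P - 1 + (1-J)*T) + lam*(T - T_prev) = 0 and solve for T.
    a = 1.0 - J_prev
    return (a * (1.0 - P) + lam * T_prev) / (a * a + lam)

def j_update(P, T, J_prev, mu):
    # Per-pixel minimizer of 0.5*(P - J*T + T - 1)^2 + (mu/2)*(J - J_prev)^2:
    # set -T*(P - J*T + T - 1) + mu*(J - J_prev) = 0 and solve for J.
    return (T * (P + T - 1.0) + mu * J_prev) / (T * T + mu)
```

In the actual network, the proximal terms are replaced by learned proximal modules (the CPMMs); the sketch only shows the data-fidelity part of each step.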
**W2 & Q1 & Q3: Details of using DA-CLIP and reasons for partitioning** Our method for using DA-CLIP as a haze density estimator calculates the cosine similarity between the encoded input image and the encoded input text. The specific formula is
$$
\mathcal{D}(\mathbf{S}^{R}\_{\widetilde{HQ}})=\frac{Enc\_{image}(\mathbf{S}^{R}\_{\widetilde{HQ}})}{\|Enc\_{image}(\mathbf{S}^{R}\_{\widetilde{HQ}})\|}\cdot (\frac{Enc\_{text}(\text{Text})}{\|Enc\_{text}(\text{Text})\|})^\top
$$
The text we used is "hazy", taken from the text list provided by DA-CLIP. **(Q1)** For the density loss, we normalize the result of this formula to the range [0, 1]; a higher value means higher density. **(W2)** For the haze density evaluator in Colabator, the $norm$ in Eq. (12) means $1 - normalized(\mathcal{D}(\mathbf{S}^{R}\_{\widetilde{HQ}}))$, which has the same value as (1 - density loss); a higher value means lower density. **(Q3)** We assess the image both globally and locally. We use the global assessment to determine whether the overall quality of the image has improved relative to the images in the existing optimal label pool, updating the pseudo-labels accordingly. We also use block-based processing to obtain independent haze density and image quality scores for each region of the image, which allows us to assign appropriate confidence weights to the pseudo-labels of each block. This approach enables our network to maximize the use of high-quality dehazed parts of the image while avoiding being misled by poorly dehazed or low-quality parts. By focusing on both global and local aspects of the image, our method achieves better performance.

||NIMA↑|BRISQUE↓|FADE↓|
|---|---|---|---|
|Only Full|5.229|13.099|0.803|
|Partition+Full(Ours)|5.315|11.956|0.751|

**(W2)** The patch size N is set to 32 to balance performance and efficiency. We will add this detail. **W3: Ablation study on CPMM** After removing all CPMM modules, our method retained only 8 learnable parameters. 
As shown in the table, removing CPMM causes a significant performance decline in both the FADE metric, which measures haze removal capability, and the NIMA and BRISQUE metrics, which measure the quality of the generated results. However, compared to the original hazy images, the CPMM-removed CORUN still possesses some haze removal capability.

||NIMA↑|BRISQUE↓|FADE↓|
|---|---|---|---|
|Hazy(Input)|4.483|36.642|2.484|
|w/o CPMM|4.836|38.197|1.362|
|w/ CPMM(Ours CORUN+)|5.315|11.956|0.751|

In **Figure A3**, we provide example generated results: the input image, the result with the CPMM modules, the result without the CPMM modules, and the result without the CPMM modules but with manually increased exposure. From our analysis, we conclude that removing the CPMM modules severely affects the flexibility of CORUN, causing the generated results to lose a significant amount of detail, especially in dark areas, and leading to a substantial decrease in haze removal capability. **Q2: Ablation study on strong data augmentation before the student network** We conducted ablation experiments on the data augmentation strategies used during the fine-tuning phase. In the table below, we provide the quantitative results obtained after disabling strong data augmentation for the student network input. The results show that the strong data augmentation strategy for the student network indeed brings some performance gains, but even without it, Colabator still provides a considerable performance improvement. Therefore, the performance gain during the fine-tuning phase does not come entirely from the strong data augmentation strategy.

||NIMA↑|BRISQUE↓|FADE↓|
|---|---|---|---|
|w/o Colabator|4.856|16.541|1.091|
|w/o Strong aug.|5.084|12.671|0.813|
|w/ Strong aug. (Ours CORUN+)|5.315|11.956|0.751|

--- Rebuttal Comment 1.1: Comment: We thank the authors for providing a detailed rebuttal in such a short time. After reviewing the rebuttal, I believe the authors have resolved all my concerns about their paper. I will raise my rating to accept, and I support accepting this paper. However, PLEASE make sure to update the missing information about DA-CLIP and the equation parts in the final revision to make the paper more comprehensive. --- Reply to Comment 1.1.1: Title: Thanks for recognizing the value of our work! Comment: We wish to express our sincere appreciation to the reviewer for recognizing the substantial significance of our contribution, specifically the Colabator framework and the CORUN method, within the realm of real-world image dehazing, along with image desmoking and underwater image enhancement. Your acknowledgment holds great importance to us and serves as a meaningful validation of our dedicated efforts to advance this critical area of research. We will update the missing information about DA-CLIP in our final version.
Summary: The paper aims at real-world image dehazing, trying to handle the difficulties of modeling real haze distributions and the scarcity of real data. For modeling haze, the paper proposes to jointly model atmospheric scattering and image scenes with a cooperative unfolding network. To handle the scarcity of data, the paper proposes to generate pseudo labels using an iterative mean-teacher network, which the paper terms the Coherence-based label Generator (Colabator). Strengths:
- The paper is well-written and easy to follow.
- The proposed method is effective in that object detection results are improved on images dehazed by the proposed method in comparison to previous works.
- The iterative mean-teacher framework for generating pseudo labels seems to be effective in image dehazing.

Weaknesses:
- The major concern I have for this paper is that the proposed method seems to be mostly a combination of existing works.
- The joint estimation of atmospheric scattering/transmission (or atmospheric light) and the clean image is not new:
  - Mondal et al., Image Dehazing by Joint Estimation of Transmittance and Airlight using Bi-Directional Consistency Loss Minimized FCN
  - Im et al., Deep Variational Bayesian Modeling of Haze Degradation Process
  - Zhang et al., Joint Transmission Map Estimation and Dehazing using Deep Networks
  - Ren et al., Single image dehazing via multi-scale convolutional neural network
- Deep Unfolding Networks (DUNs) have been utilized for dehazing by Yang and Sun [50], as mentioned in the paper.
- The use of PGD for DUNs in the context of image restoration has been done by DGUN [33], as mentioned in the paper.
- Also, using a mean teacher for creating pseudo labels is commonly performed in deep learning.
- Considering all of the above, the proposed method seems to be a combination of the existing methods.
- What is the major difference and technical novelty in comparison to the combination of existing methods above? 
- Since the proposed method is a combination of many components, a detailed ablation study would be needed to justify the effectiveness of each component. Technical Quality: 3 Clarity: 3 Questions for Authors: My major concern and questions have been written in the weakness section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have mentioned the limitation of their work in the supplementary document. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments. Unless specifically stated otherwise, all experiments are conducted on the real-world image dehazing task with RTTS due to space limitations. **W1: Contribution** Our work goes beyond simply combining existing methods. We introduce the Cooperative Unfolding Network (CORUN), which iteratively optimizes transmission maps and scene information with theoretical guarantees. We also present Colabator, a semi-supervised fine-tuning framework that requires no extra computational cost and adapts well to real-world scenarios within just a few iterations. Specifically, compared to previous methods, our innovations are as follows:
- Methods by Mondal et al. [64], Im et al. [65], and Ren et al. [66] use conventional networks to estimate atmospheric light and transmission maps through the atmospheric scattering model but overlook other complex scene information. Zhang et al. [67] estimate the transmission map and haze features from hazy images but ignore the atmospheric model during reconstruction, limiting generalization. In contrast, our method mathematically models the relationship between the image scene and the transmission map, utilizing a deep unfolding network based on proximal gradient descent for progressive dehazing. This enhances interpretability and detail restoration, resulting in more realistic images. To verify this, we incorporate our joint estimation component into the three open-sourced methods and observe evident performance gains. 
|Datasets|Metrics|Ren[66]|Ren+CORUN|Mondal[64]|Mondal+CORUN|Im[65]|Im+CORUN|CORUN(Ours)|
|-|-|-|-|-|-|-|-|-|
|O-HAZE|PSNR↑|15.72|19.91|16.37|18.85|18.85|21.36|25.66|
||SSIM↑|0.703|0.753|0.717|0.750|0.776|0.793|0.847|
|I-HAZE|PSNR↑|14.49|18.83|15.52|18.27|17.93|20.15|23.90|
||SSIM↑|0.710|0.760|0.721|0.763|0.752|0.785|0.868|

- PDN [50] introduced a deep unfolding network for image dehazing but could not effectively use the complementary information between the scene and the transmission map, leading to sub-optimal results on real-world data. In contrast, our new dual proximal gradient descent cooperative unfolding network considers both atmospheric light and image information, enhancing robustness and flexibility. We integrated our optimization mode into PDN [50], resulting in improved performance.

|Datasets|Metrics|PDN[50]|PDN+CORUN|DGUN[33]|DGUN+CORUN’s model|DGUN+CORUN|CORUN(Ours)|
|-|-|-|-|-|-|-|-|
|O-HAZE|PSNR↑|16.37|19.02|18.86|20.16|22.74|25.66|
||SSIM↑|0.717|0.763|0.756|0.788|0.805|0.847|
|I-HAZE|PSNR↑|15.86|18.80|18.27|19.82|22.63|23.90|
||SSIM↑|0.712|0.757|0.732|0.763|0.791|0.868|

- DGUN [33] models the relationship between degraded and clean images but ignores the interaction between the scene and the transmission map in dehazing. Additionally, DGUN's unfolding strategy overlooks complex hazy conditions in real-world scenarios. Our CORUN addresses this with CPMM modules and the proposed global coherence loss. Replacing DGUN's dehazing model with ours and integrating our unfolding framework into DGUN both improve performance.
- The mean-teacher framework, commonly used in high-level vision tasks, typically utilizes uncertainty to constrain pseudo-labels, which is not suitable for low-level vision tasks. Some attempts have been made to adapt it for low-level tasks using data augmentation, but these still face issues with erroneous pseudo-labels and overfitting. 
Colabator improves on this by using CLIP and NR-IQA to weight pseudo-label regions, guiding the student network to focus on high-quality areas. It also evaluates pseudo-label quality globally to refine the optimal label pool iteratively, enhancing performance over the vanilla mean-teacher. Comparison results can be found in **Table A10**.
- In summary, the major differences and technical novelties in comparison to a combination of the existing methods above are:
  - From a mathematical optimization perspective, we jointly model the transmission map and image scene based on the atmospheric scattering model, creating a novel optimization function that effectively handles complex real-world haze scenarios.
  - Our unfolding network, CORUN, is tailored to this optimization formulation, integrating the commonalities between scene and haze features. Using the proximal gradient descent algorithm, CORUN reconstructs high-quality dehazed images progressively, from coarse to fine.
  - Our Colabator architecture is plug-and-play and designed for low-level vision tasks such as real-world image dehazing. It evaluates pseudo-labels both globally and locally, helping the student network learn from high-quality regions while minimizing errors, and dynamically maintains an optimal label pool to ensure pseudo-labels stay globally optimal.
- Therefore, our work is not a mere assembly of existing methods but an effective framework designed specifically around the characteristics and challenges of real-world image dehazing.

**W2: Detailed ablation study** In the paper, we presented Colabator along with ablation experiments on the optimal label pool, trusted weights, the iterative mean-teacher, and the number of layers in the deep unfolding network. **We have supplemented additional ablation experiments in the Global Rebuttal's PDF supplement**, which will be included in the supplementary materials of the final version. 
The results of our CORUN and Colabator on various datasets, data processing methods, and types of Image Quality Assessment (IQA) are presented in **Tables A2, A4, A6, A17**. **Tables A3, A14** demonstrate the generalizability of Colabator across different tasks and networks. **Tables A1, A12** detail the ablation studies of CORUN’s components, while **Table A7** shows the ablation of loss functions. **Table A8** assesses the robustness of DA-CLIP. **Tables A9, A11, A13, A15, A16** demonstrate the ablations of Colabator’s components and functions. [64]Mondal et al., CVPRW, 2018 [65]Im et al., CIKM, 2023 [66]Ren et al., ECCV, 2016 [67]Zhang et al., TCSVT, 2019 --- Rebuttal 2: Comment: I appreciate the authors' rebuttal, which has addressed most of my concerns. One additional concern I have after reading the rebuttal is the authors have made comparisons against other methods on I-HAZE, O-HAZE, but not on RTTS, a benchmark the authors have used in the main paper. I believe this has to do with the other reviewer's recommendation. It would be great if the authors could make comparisons on RTTS as well in the final version. Also, the authors have presented a lot of new experimental results during the rebuttal, requiring a major change to the paper. I'm ok with the acceptance and hence raise the rating, given that the authors make these major changes to the final version of the paper, along with new comparisons on RTTS as well. --- Rebuttal Comment 2.1: Title: Thanks for recognizing the value of our work! Comment: We wish to express our sincere appreciation to the reviewer for recognizing the substantial significance of our contribution, specifically the Colabator framework and the CORUN method, within the realm of real-world image dehazing! First, we will incorporate the experiments and major changes conducted during the rebuttal phase into our revised manuscript. 
Our initial plan is to include critical experiments, such as detailed comparisons with existing methods and essential ablation studies, directly in the main text, while other experiments will be placed in the supplementary material due to space constraints. To make room for these important experiments, we will move some less critical visual results to the appendix. Additionally, following the approach in **W1**, we have added comparisons with existing methods on the RTTS dataset. As shown in the table, we observed significant performance improvements when integrating either our joint estimation component or the unfolding framework proposed in CORUN. This experiment will also be included in our revised version. Please feel free to let us know if you have any further concerns. We would be happy to address them.

| Metrics | Ren[66] | Ren+CORUN | Mondal[64] | Mondal+CORUN | Im[65] | Im+CORUN | PDN[50] | PDN+CORUN | DGUN[33] | DGUN+CORUN’s model | DGUN+CORUN | CORUN (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NIMA↑ | 3.173 | 3.420 | 3.372 | 3.518 | 3.304 | 3.414 | 2.925 | 3.271 | 4.272 | 4.419 | 4.583 | 4.823 |
| BRISQUE↓ | 31.335 | 28.441 | 29.951 | 27.773 | 26.906 | 25.103 | 33.186 | 29.342 | 29.753 | 27.728 | 25.308 | 20.944 |
| FADE↓ | 2.065 | 1.878 | 1.732 | 1.626 | 1.719 | 1.633 | 2.159 | 2.006 | 1.663 | 1.535 | 1.376 | 1.051 |
Summary: This paper focuses on real image dehazing. The authors propose a cooperative unfolding network (CORUN) to integrate physical knowledge for image dehazing. The proposed CORUN exploits the complementary information between components of the atmospheric scattering model. Besides, due to the lack of real paired data, this paper proposes a Coherence-based Label Generator to generate pseudo labels for real hazy inputs. The experimental results demonstrate that the proposed method achieves good dehazing performance. Strengths: 1. The research direction of this paper is meaningful. 2. The quantitative results show that the proposed CORUN+ achieves state-of-the-art performance in terms of NR-IQA metrics. 3. The visual results shown in Fig. 5 outperform other competing methods. Weaknesses: 1. There is still room for improvement in the writing of this paper. a) Typo in line 167. b) How A is simplified in line 111 is unclear. According to my understanding, after simplification, P in Eq. 2 has a different physical meaning than defined in line 110. c) In line 172, the same symbols are used to indicate different images. d) Line 187 presents two kinds of reconstruction loss, which is inconsistent with the perceptual loss in line 186. 2. The experiment is a bit unfair. This paper utilizes the data generation pipeline proposed in RIDCP, which has been proven to improve the real-image dehazing performance of dehazing networks. However, the quantitative results of PDN and MBDN reported in Table 1 seem to be calculated using their original models, rather than models re-trained on the same training data used in this paper. 3. The experiment is a bit insufficient. a) The authors only validate the effectiveness of Colabator on DGUN, neglecting other widely used dehazing networks. b) Full-reference evaluation on real dehazing benchmarks (e.g., O-HAZE, I-HAZE, etc.) could also provide important evidence to support the effectiveness of the proposed method. 
Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the weaknesses for more details. Besides, I was wondering: 1. What are the essential advantages of CORUN compared to state-of-the-art end-to-end dehazing models? Are there any experiments? 2. Can the proposed method remove medium or dense haze correctly? The dehazing results shown in Fig. 4 still have haze residues even though the hazy input is not very dense. 3. Why use both perceptual loss and contrastive loss simultaneously? There are overlapping parts between them. 4. How robust is the estimation of haze density by DA-CLIP? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments. Unless otherwise stated, all experiments are conducted on the RTTS dataset.
```
*Experiment results are placed in the attached PDF for space limitations*
```
**W1: Writing and formulas**
- The redundant comma will be removed.
- The simplified $P$ in Eq. (2) differs from the $P$ in line 110. In line 111, the original atmospheric scattering model $P=J\cdot T+A\cdot (I-T)$ is divided by the atmospheric light $A$, resulting in $P/A=J/A\cdot T+I-T$. We then simplify $P/A$ and $J/A$ to $P$ and $J$, leading to $P=J\cdot T+I-T$. We will reorganize this. The simplification aims to integrate the atmospheric light $A$, which is closely related to image features, with the scene for better results. Experiments in **Table A1** show that this improves performance by **10.7%**.
- We will correct the notation: pseudo-dehazed image ${\mathbf{P}^{R}\_{\widetilde{HQ}}}\_{i}$ and previous pseudo-label ${\mathbf{P}^{R}\_{Pse}}\_{i}$.
- In Eqs. (15) and (16), we introduce two loss functions for different phases. Eq. (15) includes a reconstruction loss and a perceptual loss, while Eq. (16) substitutes the perceptual loss with a stronger supervised contrastive perceptual loss. We will rename Eq. (15) and Eq. (16) as the Reconstruction-Perceptual Loss and the Reconstruction-Contrastive Perceptual Loss.

**W2: Unfairness in the data augmentation and generation pipeline** To ensure a fair comparison with RIDCP, we follow its setup by using the same data pipeline (generation + augmentation) and dataset. To present our results more fully, we provide results under two extra settings: 1) we remove the augmentation pipeline; 2) we further replace RIDCP's dataset and generation pipeline with the OTS dataset (commonly used in previous methods). As shown in **Table A2**, the results verify that our method still achieves leading performance under both settings compared with existing methods. 
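The normalization described in W1 can be sanity-checked numerically. Below is a minimal sketch with illustrative values (the array shapes, seed, and variable names are our assumptions, not the paper's code): dividing the scattering model by $A$ and renaming $P/A \to P$, $J/A \to J$ indeed recovers the simplified form $P=J\cdot T+I-T$.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.uniform(0.1, 1.0, (4, 4))   # clean scene radiance (illustrative)
T = rng.uniform(0.2, 0.9, (4, 4))   # transmission map (illustrative)
A = 0.8                             # global atmospheric light (scalar)

# Original atmospheric scattering model.
P = J * T + A * (1.0 - T)

# Divide by A and rename P/A -> P, J/A -> J: the simplified model of
# Eq. (2), P = J*T + I - T, where I is the all-ones image.
assert np.allclose(P / A, (J / A) * T + 1.0 - T)
```

The check holds for any scalar atmospheric light $A>0$, which is what makes folding $A$ into the scene term harmless.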
**W3: Effect of Colabator and results under the full-reference setting**
- We integrate our Colabator with cutting-edge dehazing methods and test them on RTTS. In **Table A3**, we find evident performance gains brought by Colabator: **23.2%** for C2PNet, **22.0%** for FFA-Net, and **31.7%** for GDN.
- We further evaluate our method under the full-reference setting with O-HAZE and I-HAZE. Because paired data are available for training, we drop Colabator and train only our CORUN, following the practice of MB-TaylorFormer. **Table A4** shows that our method achieves leading performance by a large margin on both datasets.

**Q1: Essential advantages** Previous state-of-the-art dehazing methods, such as C2PNet [60], fail to generalize to broader real-world scenarios such as the RTTS dataset when trained on synthetic data. RIDCP, with its well-designed data synthesis pipeline, generalizes better to *RTTS* but is limited by depth-estimation errors, causing unrealistic haze residuals. Our CORUN is specifically designed for real-world scenarios: it integrates deep unfolding networks and proximal gradient descent algorithms to combine physical priors with network learning. Therefore, as shown in **Table A5**, our method achieves state-of-the-art real-world dehazing results. **Q2: Haze residues and medium/dense haze removal** Apart from the problems with the synthetic training data, *i.e.*, inaccurate depth estimation and haze residue in the ground truth, the reason for failing to completely remove the haze lies in our semi-supervised setting with incomplete supervision. As shown in Fig. 4, this is a common problem that also exists in the other compared methods. However, as shown in **Fig. A1**, our method has a better dehazing effect than existing methods, which is attributed to the joint estimation capacity of CORUN and the generalizability of Colabator. Besides, we randomly select 300 images from *RTTS* and invite 10 naive observers to score them based on how hazy they are. 
Then we select the first 100 images as dense hazy data and the 101st-200th images as medium-density hazy data according to the scores. **Table A6** shows our superiority in removing medium-density and dense haze. **Q3: Loss functions overlap** The two losses, *i.e.*, eq.15 and eq.16, are employed in different phases, with different inputs and their own weighting parameters in eq.20 and eq.21. In the pre-training phase (eq.20), we use eq.16 with contrastive learning for constraints, using synthetic data. During the fine-tuning phase (eq.21), we employ both eq.15 and eq.16. Here, eq.15 uses synthetic data to ensure stability in the fine-tuning process, while eq.16, which incorporates the contrastive perceptual loss, uses real-world data to provide stronger constraints and align the model training with real-world conditions. As shown in **Table A7**, our ablation verifies that combining these two losses enhances the model's performance. We will reorganize the content. **Q4: Robustness of DA-CLIP** We evaluated the robustness of DA-CLIP by randomly selecting 300 images and assessing their density scores. We invited 10 observers to rate haze density in these images. In **Table A8**, our analysis revealed that while DA-CLIP's results generally align with human ratings (with discrepancies under 0.2), significant differences occur in certain situations. Notable discrepancies (over 0.5) were observed in cases involving 1) images with predominant sky coverage on cloudy days, 2) color distortion or severe artifacts, and 3) insufficient lighting at night (examples shown in **Fig. A2**). To address DA-CLIP's limitations in handling complex scenes, we designed a pseudo-label rating strategy. Along with DA-CLIP, we used NR-IQA to evaluate potential pseudo-labels. A pseudo-label is updated only if its scores from both DA-CLIP and NR-IQA surpass the latest label in our optimal label pool. 
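The dual-score gating just described could be sketched as follows (function and field names are illustrative, not the authors' implementation): a candidate replaces the pooled pseudo-label only when it beats the current best on both quality scores.

```python
def maybe_update_pool(pool, image_id, candidate, daclip_score, nriqa_score):
    """Update the optimal label pool only when the candidate pseudo-label
    beats the current best on BOTH scores (DA-CLIP density and NR-IQA quality)."""
    best = pool.get(image_id)
    if best is None or (daclip_score > best["daclip"] and nriqa_score > best["nriqa"]):
        pool[image_id] = {"label": candidate, "daclip": daclip_score, "nriqa": nriqa_score}
        return True
    return False

pool = {}
maybe_update_pool(pool, "img1", "labA", 0.8, 0.7)   # first candidate: accepted
maybe_update_pool(pool, "img1", "labB", 0.9, 0.6)   # NR-IQA lower: rejected
maybe_update_pool(pool, "img1", "labC", 0.9, 0.8)   # both higher: accepted
```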
Locally, we partitioned images and used DA-CLIP and MUSIQ to assign weights, encouraging the network to focus on higher-quality regions with better dehazing effects. **Table A9** verifies the effect of our pseudo-label rating strategy. --- Rebuttal Comment 1.1: Title: Thank you for your detailed response Comment: The detailed rebuttal from the authors addresses most of my concerns. I want to thank the authors for their work. At this stage, I am considering the overall rating of this work mainly based on its novelty, contributions, and insights. --- Reply to Comment 1.1.1: Title: Thanks for your encouraging reply! Comment: We would like to sincerely thank the reviewer for the encouraging feedback. In our **seven** responses to reviewer Udmv, we have added **nine** tables and **two** figures to thoroughly demonstrate the superiority of our CORUN dehazing model and the generalizability of our Colabator framework. Additionally, we encourage reviewer Udmv to briefly review the content in the global rebuttal as well as our responses to the other reviewers. These sections further validate the generalization of our Colabator framework across other critical low-level tasks and provide a detailed comparison between our method and existing dehazing techniques, supported by extensive experimental evidence. We believe this will help the reviewer fully understand the novelty, contributions, and insights of our paper. Specifically, **we are the first to propose a plug-and-play iterative mean-teacher framework (Colabator) for real-world image dehazing, along with a robust dehazing algorithm (CORUN) with theoretical guarantees.**
Rebuttal 1: Rebuttal: We extend our sincere gratitude to all the reviewers (**R1**-**Udmv**, **R2**-**ivCJ**, **R3**-**h5fd**, and **R4**-**wF18**) for their insightful and considerate reviews, which helped us emphasize the contributions of our approach. We are pleased that the reviewers recognized the novelty of our work, as well as its commendable performance (**R1, R2, R3, R4**). We are delighted to see reviewers confirm our contributions to the field of real-world image dehazing. These encompass our novel deep unfolding dehazing method, CORUN, and the ingenious plug-and-play pseudo-label generation and fine-tuning framework, Colabator. In direct response to your thoughtful comments, we have methodically addressed each point in our individual responses, and we provide a summary here: - We corrected writing errors, revised certain formulations, and added information about the theory, formulas, and derivation process in this paper to enhance clarity. - We compared our method with more SOTA methods on additional datasets to underscore our superiority in performance. - We added experiments to verify the generalizability of our plug-and-play Colabator framework and the advancement of CORUN. - We provided more ablation study results to demonstrate the robustness and effectiveness of the various modules in CORUN and Colabator. Thanks again for all of your valuable suggestions. We will update the paper accordingly and release our code for community study. We appreciate the reviewers taking the time to check our response and **hope to further discuss with the reviewers whether the concerns have been addressed or not**. If the reviewers still have any unclear parts about our work, please let us know. ``` Due to space limitations, images and most of our supplementary experiments are stored in the attached PDF. Please download this PDF for more information. Thank you! ``` Pdf: /pdf/245affc6d5a3b21947367341a0f9d81db25cb623.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Unleashing the power of novel conditional generative approaches for new materials discovery
Reject
Summary: This paper presents a framework for crystal structure generation, focusing on polymorphs. The framework utilizes matrix representation of crystals and various generative models used in vision tasks, with specially designed similarity metrics and loss function. It is tested on (1) modification of given structures and (2) generation from scratch within the dataset, as well as finding new structures in the Ta–W–B system. Strengths: This work develops domain-specific representation, metric, and loss, so that generative models that have proven useful in image generation can apply to crystals. The topic is timely and important. Weaknesses: - In the demonstrated use cases, the generation is conditioned on elements, space group, etc., but not materials properties of interest. These show limited usefulness in materials discovery and design. - The matrix representation does not take physical constraints into account, e.g., space group determines symmetries in the lattice parameters. Besides, related previous works, e.g., [UniMat](https://openreview.net/forum?id=wm4WlHoXpC), should be discussed. - The clarity and rigorousness need to be improved (see Questions). The mathematical notations are not unified, e.g., $x$ vs $X$. Technical Quality: 3 Clarity: 3 Questions for Authors: - Line 53, “both pipelines” are not introduced. The previous paragraph is also unclear; please check the grammar. - In Line 71, do “conventional limitations” mean those of conventional approaches in materials science, or previous ML/generative methods? Not just how the proposed method differs from previous ones, but also what challenge it overcomes, should be specified. - In the descriptions of the matrix representation, the role of elemental properties is unclear. My understanding is that they are not part of the data structure to be generated. Where and how are they used? A probably related question is, what is $el_{emb}$, and how is it obtained? 
- Line 207, what is the role of time condition t, and why does it matter for materials discovery? - The modification task shows significantly lower metrics than the generation task. What does this imply? Does this indicate generation is preferred over modification? If not, what are the scenarios where modification can be useful, and how to mitigate the performance? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Discussed in the Conclusion. Besides, Sec. 9 contains a GitHub link that could break anonymity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your attentive feedback. We appreciate your thorough review. Here are our responses to your concerns: Regarding the Objectives and Methodology: The primary objective of this work is to propose models capable of generating stable crystal structures, aligning with existing research in this field. Hence, formation energy serves as a crucial indicator of structural stability, directly contributing to our aim of generating more stable and synthesizable materials. In future work, we plan to expand the space of thermodynamic properties used; however, this is limited by the amount of data available in open-source databases. Addressing Physical Constraints and Symmetry: To incorporate physical constraints, we integrate space groups as conditions within our models (Figure 2a). Moreover, our proposed PBC loss function explicitly accounts for structural symmetry. We acknowledge the relevance of the UniMat work and will cite it accordingly. Regarding Mathematical Notation and Clarity: We appreciate the feedback regarding mathematical notation and will thoroughly edit it for improved clarity and consistency. Clarification on "Both Pipelines": We recognize the error in stating "both pipelines" and acknowledge that GNoME utilizes two distinct pipelines. This error will be corrected in the revised version. We are grateful for your observation. Explanation of "Conventional Limitations": Our reference to "conventional limitations" refers to the challenges and limitations associated with traditional methods employed in materials science research. These methods often require significantly more time and computational resources than our proposed approaches. Role of Element Properties: Element properties are incorporated into the model to provide additional features and account for elemental characteristics. 
The element embeddings, denoted as $el_{emb}$, are processed through a dedicated multilayer convolutional network and are treated as conditions rather than part of the input data ($x$). In addition, the elemental property matrix contains 22 chemical features encoding the chemical elements the structure consists of; more details can be found in lines 118-120 of the article. We will also revise the mathematical notation for this and other concepts to make it clearer. Justification for Time Conditioning: Time conditioning is employed specifically in our diffusion and flow-matching models to simulate the temporal aspect of the diffusion process. The use of the time parameter ($t$) is fundamental to the operation of all diffusion models. Moreover, diffusion models require passing an interpolation $x_t$ (for instance, in CFMs $x_t$ is formed as $x_t = t \cdot x_1 + (1 - t) \cdot x_0$). Consequently, its inclusion is essential for our approach, which relies on diffusion and flow-matching techniques for material generation. Results of Modification Experiments: Our experiments with structure modification have yielded less successful results than the generation tasks. While modification was initially proposed as a potentially superior approach, our empirical findings have shown the opposite trend. Overall, we are deeply appreciative of your insightful comments and suggestions. We will diligently address all points raised and ensure a comprehensive revision of the manuscript. --- Rebuttal Comment 1.1: Comment: The authors' response has addressed my clarifying questions. However, some issues are not addressed or are inherent limitations of the proposed method. 1. Physical constraints and symmetry. Including space groups as conditions does not ensure the generated matrix follows the required symmetry. 2. Time conditioning. My main concern is how $t$ relates to materials science applications. 
To use this generative model for materials discovery or design, how should one set $t$? With the confusing expressions properly addressed, I could raise the Presentation score to 3, however, the lack of physical constraints limits the Contribution and Soundness.
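As background for the time-conditioning discussion in the rebuttal above: the CFM interpolation $x_t = t\,x_1 + (1-t)\,x_0$ pairs each interpolated point with a regression target for the velocity field, which for this linear path is $x_1 - x_0$. A minimal sketch (names are illustrative, not the paper's code):

```python
def cfm_training_pair(x0, x1, t):
    """Linear conditional flow-matching path: x_t = t*x1 + (1-t)*x0.

    A velocity network v(x_t, t) would be regressed onto the path's
    velocity, which for this linear interpolation is x1 - x0.
    """
    x_t = [t * a + (1.0 - t) * b for a, b in zip(x1, x0)]
    v_target = [a - b for a, b in zip(x1, x0)]
    return x_t, v_target
```

At inference, $t$ is swept from 0 to 1 while integrating the learned velocity field, which is why $t$ must be a model input even though it has no direct materials-science meaning.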
Summary: The authors studied the use of diffusion and flow matching approaches for the generation of crystalline materials. The authors trained UNet models on polymorphs in the AFLOW database (which has a series of DFT-computed properties for these materials) using either simple R3 regression or diffusion/flow matching. The authors then presented inference results on similarity to training structures (Section 3.4, which shows these methods can reproduce training structures to different extents), and showed that a subset of the modified generated crystals (with Ta, W, B) can have a small non-zero formation energy. Strengths: *Originality*: The authors attempted to study the problem of crystal structure generation with no invariances/equivariances other than periodic translation invariance. *Quality*: The authors attempted to use DFT to validate some inference results. *Significance*: Crystal structure generation (especially synthesizable ones) is an important problem. It seems that training from uniform noise distribution works better for CFM than training on Gaussian noise, contrary to the established results in the field. Weaknesses: *Originality*. The manuscript lacks originality. The diffusion/flow matching techniques are well-established in inorganic crystal structures (e.g. CDVAE cited here, DiffCSP/FlowMM that's not here). Sure, using a network architecture not designed for materials/crystals and using no invariances/equivariances is new, but it deviates from standard practices in the field without sufficient justification. I believe the implementations shown in the paper are a great exercise for practitioners interested in the field, but unfortunately, I do not see it as a NeurIPS paper. *Quality*. The manuscript is _very_ bare-boned, making a comprehensive technical critique challenging without appearing disproportionately critical. - On the ML side, there are numerous large fallacies/mistakes (e.g. no consideration of bonds between atoms at all, Sec. 
3.3 there is no description of the PBC loss, the generation does not consider the unit cell, no generation with atom types, and there is no investigation of any experiments observed e.g. why is uniform noise better for CFM, the result in Table 1 appears to evaluate overfitting rather than novel generation, the list can go on). - On the chemistry/validation side, there are again numerous problems (why would formation energy be given during the generation process, what functional did you use in DFT, there are no comparisons against existing structures and hence cannot be claimed as novel, etc.) - There is no comparison against _any_ known methodologies. - The results overall, are very weak both in ML and in chemistry (e.g. Table 3 shows most if not all materials generated have extremely large positive formation energies despite the simple elemental composition; the remaining few negative ones are at the brink of instability, in any case they likely would not be synthesizable). *Clarity*. The manuscript suffers from poor presentation, starting with a promotional-style title that lacks scientific descriptiveness. I unfortunately do not understand the novelty of the paper in comparison to existing methods. The paper consistently fails to provide essential explanations across both machine learning and chemical methodologies. - On the ML side, there are numerous things poorly presented (e.g. Figure 1 is just periodic translation invariance and in a typical manuscript would be summarized in one sentence). - On the chemistry side, things are greatly exaggerated (e.g. 
computationally making a few materials with negative formation energy can be done by undergraduate students and certainly does not warrant descriptions such as 'This significant outcome underscores the remarkable potential of our framework in uncovering thermodynamically stable materials') Technical Quality: 2 Clarity: 1 Questions for Authors: Unfortunately, without significant innovations and revisions, I do not believe I can be convinced this paper would be accepted at NeurIPS. Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Partially. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate your thorough review. Here are our responses to your concerns: We employed a network architecture not previously used for this task and did not utilize standard invariances/equivariances, but this deviation from standard practices is justified by our contributions. Specifically: - Our approach involves conditional generation based on crystal composition, space group, number of atoms, and formation energy. The space group, in particular, accounts for all potential crystal symmetries and invariances, addressing concerns about invariance. Details can be found in the Data section. - It is incorrect to state that we ignored invariances, as we included space groups as one of the conditioning inputs, providing comprehensive information about the crystal’s symmetries and invariances. - Our model generated eight novel crystal structures with only 6840 attempts, demonstrating its effectiveness and originality. As for the machine learning issues. Atom Bonds: In crystal structures, each atom interacts with others through attractive and repulsive forces. Thus, explicitly modeling bonds between atoms is not necessary for our approach. PBC Loss: We have described the Periodic Boundary Conditions (PBC) loss in the appendix of the paper. Unit Cell Consideration: The generation process does account for the unit cell, as detailed in the Data section. Atom Types: Our model includes atom types in the generation process, which is also covered in the Data section. Uniform Noise: We provide an explanation for why uniform noise worked better than Gaussian noise in Figure 3 of the Appendix and refer to it in the text. Overfitting in Table 1: Table 1 contains energy metrics on validation data, not training data. 
Moreover, we also described the algorithm for validation dataset construction in Section 2.2: “The polymorph group formulas were initially divided into distinct training and validation sets, ensuring a relatively balanced distribution of chemical elements across these subsets”. This is sufficient to rule out overfitting. As for the chemistry/validation issues. Formation Energy: Formation energy is used to generate structures with specific energy characteristics. This is crucial for creating stable structures, which is our primary objective. DFT Functional: We use the VASP software for DFT calculations, as mentioned in the paper. We have also included the specification of the VASP settings for reproducibility in the Appendix. Structure Comparison: We compared the generated structures against those from AFLOW with the same chemical composition. This comparison validates the results. As for comparison against methodologies. While the paper does not explicitly compare against other methodologies, our results show a higher percentage of stable structures than reported in GNoME (for example). Valid comparisons require the use of identical datasets, and our models were trained on different data. However, we acknowledge that a more detailed comparison with other methodologies would strengthen our paper. Overall Results: Table 3 shows the energy above the hull. Most structures have negative formation energies, indicating stability. Even if some structures have positive energies above the hull, they mostly still possess negative formation energies. The structures were chosen from well-researched compositions, which adds to their validity. Clarity: We acknowledge the reviewer's comments on the manuscript's presentation and the title. We will revise the title to better reflect the scientific content and highlight our contributions. 
Our novelty lies in the unique architecture and conditional generation methodology, which has not been previously applied in crystalline structure generation. We have described the data for both tasks, provided a detailed model description, and illustrated the model architecture in Figure 2. Sections 4 and 5 cover the model and the methodology for training and building the models, and we encourage a re-examination of these sections. Presentation of ML Concepts: We agree that Figure 1 might be overly simplistic and will revise or remove it to enhance clarity. Chemistry Methodology and Claims: There is a significant difference between negative formation energy and negative energy above the hull. Our methodology focuses on generating stable structures with negative energy above the hull, indicating their potential for synthesis. This approach, applied to a single chemical composition, successfully found 8 stable structures with energy above the hull below zero. Extending this methodology to numerous other compositions could uncover many new stable structures, far beyond routine undergraduate-level work. Thank you for your feedback. We have addressed these issues in our manuscript and believe that our approach and results contribute valuable insights to the field. --- Rebuttal Comment 1.1: Comment: I very much appreciate the authors for the detailed response. I am willing to raise my score to 3 and revise my soundness/contribution scores. While the rebuttal indeed clarifies a few things, the key ML problems (technical novelty, lack of baselines, lack of comparable tasks to other papers) and chemistry problems persist. Re chemistry problems, I just want to clarify one point regarding the DFT calculation: There is still no description of the functional and k-point mesh used in VASP; VASP is a package that accommodates many good and bad functionals. In fact it is a very delicate package where small changes to parameters can very much change the results. 
In this sense, formation energy is very easy to manipulate in general and hence not a useful objective in computational crystal design (and even less so experimentally)
Summary: The paper addresses the inverse problem of generating crystal structures based on given properties, thereby avoiding the need for extensive computational resources typically required in traditional methods. The authors utilized the AFLOW materials database, selecting unstable and stable series of structures for two specific tasks: modifying structures to achieve stability and conditional structure generation. Strengths: The authors experimented with various generative model approaches and evaluated two tasks in crystal generation. Additionally, they integrated the VASP software for application testing and successfully identified four previously undiscovered stable structures through conditional generation. Weaknesses: 1. From the perspective of model application, although the authors used the AFLOW database for their study, they did not compare the data range and coverage with other significant databases like the Materials Project. This omission leaves a gap in understanding how the generative models perform across different datasets and whether the results are consistent and generalizable. Comparing the performance of the same generative models on different databases could provide valuable insights into the robustness and applicability of their approach. 2. There is a partial break of anonymity in the GitHub link on Page 9 in this paper. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In Section 2.2, the authors process the dataset for the structure modification task by training unstable structures towards stable ones. However, unlike typical trajectory datasets for structure optimization, the authors select the most stable polymorph of the same chemical formula as the target and use the remaining polymorphs as initial structures to complete the structure modification task. 
This dataset approach leans more towards converting between different polymorphs rather than optimizing the stability of an arbitrary atomic configuration as stated by the authors in the abstract. The authors could consider explaining the relationship between using this part of the dataset and the stated objective in detail. 2. In terms of the generation task, the authors generate structures by specifying formation energies. However, in Section 6, the authors only list a metric related to formation energy without providing a detailed discussion on other important aspects, such as the effectiveness of other condition controls, the validity of the generated structures, and whether duplicate structures are generated. These issues seem to remain unanswered. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors proposed two major directions: conditional generation and conditional modification. There is room for improvement in both the experimental results and the data used for conditional modification. For example, they could consider recognizing unit cells with translational and rotational transformations and introducing more ways to assess generation results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback on our paper. We appreciate your insights and have addressed your concerns below: We did compare the AFLOW database with the Materials Project in our critique of the GNoME paper (Introduction section), noting that the Materials Project has a much smaller dataset (around 100,000 structures) which is insufficient for training large models. Our experiments with smaller datasets were unsuccessful, and we achieved better results only after significantly increasing the dataset size using AFLOW, which contains over 3 million crystal structures. We apologize for the oversight regarding the partial break of anonymity due to the GitHub link on Page 9. We should have removed the link to maintain anonymity and will ensure this does not happen in future submissions. We recognize the importance of demonstrating robustness across different datasets and will include more comprehensive comparisons in future work. We will also improve our manuscript to enhance clarity and presentation. You are correct in noting that our approach involves converting between different polymorphs rather than optimizing the stability of an arbitrary atomic configuration. This method is a practical solution to achieve the stated objective. The difference between converting polymorphs and changing atomic configurations is subtle, as a more optimal atomic configuration essentially results in a more stable polymorph. We do not consider changes in chemical composition, as comparing formation energies is only meaningful among structures with the same chemical formula. Regarding the generation task, our primary focus was on formation energy due to its importance in crystal stability. We acknowledge that we did not provide detailed discussions on other aspects such as condition controls, validity, and duplicate structures. 
While many generated structures may be near-duplicates, formation energy remains the key metric for stability in crystal generation, which is why we prioritized it. Thank you again for your constructive feedback. We will refine our work to address these points in more detail. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and the clarifications provided. I appreciate the effort you've put into addressing the concerns raised. However, after careful consideration, I will maintain my initial assessment. I believe the points discussed are valuable for future work and manuscript revisions. I genuinely appreciate the hard work and dedication of the authors.
Summary: The paper deals with an important application of generative models for science: generation of crystalline structures. However, I have some serious concerns. First, scope. While comparing different methods for the same objective is informative, I am not so sure what the purpose is here. Having so many different methods certainly dilutes the main message in a short paper format like NeurIPS. Second, approach. From what I understand, there are questionable design problems with the technical approach. Third, results. The authors spend most of their space explaining various methods such that there is little room left for explaining the impact of their results, or comparing their results with existing approaches. Finally, presentation. Figure 1 is kind of trivial or at least very simple and I am not sure it is worth a separate figure. The overall typesetting looks not too professional. Strengths: The methodology selection is broad and hopefully the audience can benefit from a mini-benchmark of different generative approaches. The overall model architecture is distinctive from what I have seen in the literature. Weaknesses: I have a good number of questions on the technical approaches. More specifically, the model takes a specially formulated data structure that does not seem to be obviously invariant or equivariant under permutation, translation, or rotation, which is concerning. For example, change the selection or ordering of the unit cell vectors and everything will change in an uncontrolled way. The conditioning approach seems to be to provide desired properties as inputs to the generative approaches. I am not sure this always makes sense. For example, if one desires a certain space group, there is nothing enforcing compliance with the space group. One can always easily check it. It is perhaps more suitable to enforce space group compliance using a guidance-based conditioning approach. Technical Quality: 2 Clarity: 1 Questions for Authors: See above. 
Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: There is insufficient discussion about the limitations given my concerns shown above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on our paper. We appreciate the time you have taken to provide a thorough review. Below, we address each of your concerns: Firstly, scope. The primary aim of our paper is to create a comprehensive comparison of different generative approaches in the field of machine learning for crystalline structure generation. While we acknowledge that including multiple methods can dilute the focus in a short paper format, we believe that providing a broad overview offers significant value to the community. It allows for a mini-benchmark of different generative approaches, highlighting the strengths and weaknesses of each. This can serve as a foundation for future research to build upon and refine. Second, approach. We recognize that there are potential design issues with the technical approach. Specifically, the concern about the model's invariance or equivariance under permutation, translation, and rotation is valid. To address this, we have implemented steps to mitigate these issues. For example, we standardize the orientation of all crystal structures and sort atoms according to their chemical elements. Also, our crystal structure representation includes space groups, which carry sufficient information about the symmetries of a structure. In our future work, we will explore the use of equivariant architectures and more robust data representations to further improve the model's reliability. Third, results. We acknowledge that the explanation of our results and their impact was limited by space constraints. To address this, we will revise the manuscript to better highlight the significance of our findings and provide a more detailed comparison with existing approaches. Specifically, we will include metrics such as the ratio of the number of crystal structures generated by our model to the number of optimal structures, demonstrating that our method is state-of-the-art in this regard. 
We agree that Figure 1 is simplistic and may not add substantial value to the paper. We will remove or replace it with a more informative figure. Additionally, we will improve the overall typesetting and presentation to ensure a more professional appearance. We appreciate your detailed technical questions. To address the concern about the conditioning approach and space group compliance, we agree that a guidance-based conditioning approach could be more suitable. In future work, we will explore methods to enforce space group compliance directly within the generative process. Additionally, we will include a thorough discussion of the limitations of our current approach, acknowledging areas for improvement and potential future directions. In conclusion, we are committed to addressing the issues raised in your review. We believe that our paper makes a valuable contribution by benchmarking various generative models for crystalline structure generation and providing a distinctive model architecture. We will refine our manuscript to ensure that the results are presented more clearly and the limitations are transparently discussed. Thank you once again for your constructive feedback. --- Rebuttal Comment 1.1: Comment: Thanks for the explanation. I'll keep my score.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation
Accept (poster)
Summary: This paper proposes FedLPA, a novel one-shot federated learning method that uses layer-wise posterior aggregation. It aggregates local models to obtain a more accurate global model without requiring extra datasets or exposing private label information. The key innovation is using layer-wise Laplace approximation to efficiently infer the posteriors of each layer in local models, then aggregating these layer-wise posteriors to train the global model parameters. Extensive experiments show FedLPA significantly improves performance over state-of-the-art methods across several metrics, especially for non-IID data distributions. Strengths: 1. It achieves good performance with only a single round of communication between clients and the server, reducing communication overhead and privacy risks. It performs well on heterogeneous data distributions across clients. 2. FedLPA doesn't need additional public datasets. 3. The paper provides a convergence analysis showing a linear convergence rate for the global model optimization. Weaknesses: 1. The method relies on multiple layers of approximation - empirical Fisher to approximate the Hessian, block-diagonal Fisher instead of full, and approximating global model parameters through optimization. Each approximation introduces some error. These compounding approximations could potentially lead to suboptimal global models. However, the paper's empirical results suggest that in many practical scenarios, these approximations still lead to good performance. Nonetheless, a more thorough theoretical analysis of these approximation errors and their impact would strengthen the paper. 2. Although more efficient than some baselines, FedLPA still requires more computation than simpler methods like FedAvg. 3. Storing and transmitting the block-diagonal Fisher matrices for each layer increases memory usage and communication costs compared to methods that only share model weights. 
For very large models or with many clients, the increased communication overhead from sharing Fisher matrices could become significant. Technical Quality: 2 Clarity: 3 Questions for Authors: please see the weaknesses Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 4qUr, thanks for your comments, which helped us improve our paper. The answers to all your questions are as follows: > **Q1**: The method relies on multiple layers of approximation - empirical Fisher to approximate the Hessian, block-diagonal Fisher instead of full, and approximating global model parameters through optimization. Each approximation introduces some error. These compounding approximations could potentially lead to suboptimal global models. However, the paper's empirical results suggest that in many practical scenarios, these approximations still lead to good performance. Nonetheless, a more thorough theoretical analysis of these approximation errors and their impact would strengthen the paper. **Answer**: In our paper, note that we have three approximations, listed as follows. (1) Empirical Fisher to approximate the Hessian: in our Appendix E, we show the theoretical analysis of this approximation error. (2) Block-diagonal Fisher instead of the full Fisher: Paper [1], which we cited in our paper, provides a detailed evaluation and testing of using the block-diagonal Fisher to approximate the full one. Firstly, Chapter 6.3.1, "Interpretations of this approximation", in paper [2] indicates that using a block-wise Kronecker-factored Fisher closely approximates the full Fisher. Although there is a bias term (due to the approximation in our Appendix Equation 30), this term approaches zero when there are sufficient samples. Furthermore, the paper examines the approximation quality of the block-diagonal Fisher compared with the true Fisher and suggests that the block-diagonal Fisher captures the main correlations, while the remaining correlations have a minimal impact on the experimental results. (3) Approximating global model parameters through optimization: in our Appendix J, we show the convergence analysis of our method. In summary, the approximations have a negligible effect on the final test accuracy. 
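As a rough numerical illustration of the empirical-Fisher discussion above (this is not the paper's actual implementation; the model, data, and all names below are illustrative assumptions), a per-layer Fisher block can be formed from per-sample gradients of the log-likelihood:

```python
import numpy as np

# Toy sketch (not the paper's code): forming a per-layer empirical Fisher
# block from per-sample gradients, here for a one-layer logistic model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))             # 200 local samples, 3 features
w = np.array([0.5, -1.0, 2.0])            # current local weights
p = 1.0 / (1.0 + np.exp(-X @ w))          # model probabilities
y = (rng.random(200) < p).astype(float)   # labels drawn from the model

# Per-sample gradient of the log-likelihood: g_n = (y_n - p_n) * x_n
G = (y - p)[:, None] * X                  # shape (200, 3)

# Empirical Fisher for this layer: average outer product of the gradients
F_block = G.T @ G / len(G)                # one (3, 3) block of the
                                          # block-diagonal Fisher

# Keeping only the diagonal is an even cheaper approximation
F_diag = np.diag(np.diag(F_block))
```

Each layer contributes one such block; the block-diagonal approximation discards the cross-layer correlations, which, per the discussion above, have minimal empirical impact.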
Some experiment results are shown in Table 22 of Appendix M.12. We will summarize these in the appendix once accepted. [1] Ritter, Hippolyt, Aleksandar Botev, and David Barber. "A scalable Laplace approximation for neural networks." 6th International Conference on Learning Representations, ICLR 2018 conference track proceedings. [2] Martens, James. Second-order optimization for neural networks. University of Toronto (Canada), 2016. > **Q2**: Although more efficient than some baselines, FedLPA still requires more computation than simpler methods like FedAvg. **Answer**: Our method FedLPA introduces only 30% more computation time than the simple FedAvg while increasing the test accuracy by **up to 35%** in some settings, as shown in Table 1 and Table 4. As shown in Table 17, the state-of-the-art Dense uses **6.15x the computation time of our FedLPA**, while our method performs better in most cases considering test accuracy. Co-Boosting, another distillation method, uses **10.77x the computation time of our FedLPA**. Our method is also faster than FedProx and FedOV. Our method is competitive with FedOV and Co-Boosting and performs much better than FedProx. Our FedLPA strikes a balance between computation overhead and performance. > **Q3**: Storing and transmitting the block-diagonal Fisher matrices for each layer increases memory usage and communication costs compared to methods that only share model weights. For very large models or with many clients, the increased communication overhead from sharing Fisher matrices could become significant. **Answer**: Our FedLPA strikes a balance between communication overhead and performance. As shown in Table 4, FedLPA has a communication overhead of only 2x that of FedAvg. The communication overhead is similar between our FedLPA and the popular SCAFFOLD. In some settings, FedLPA improves performance by up to 35% compared to FedAvg and SCAFFOLD. 
In our paper, we also give detailed examples of the communication overhead of our FedLPA in Appendix M.8. For all the datasets, as the number of clients increases, the communication overhead also increases linearly. Intuitively, for very large models or with many clients, our communication overhead is only about twice that of FedAvg. Compared with the computation overhead in the one-shot setting, this is acceptable. --- Rebuttal 2: Title: Looking forward to your reply Comment: Dear Reviewer 4qUr, We sincerely appreciate your time and efforts in reviewing our manuscript and offering valuable suggestions. Note that there will be no second stage of author-reviewer discussions. As the author-reviewer discussion phase is drawing to a close, we would like to confirm whether our responses have effectively addressed your concerns. We provided detailed responses to your concerns a few days ago, and we hope they have adequately addressed your issues. If you require further clarification or have any additional concerns, please do not hesitate to contact us. We are more than willing to continue our communication with you. Best regards, Authors of Submission #8916 --- Rebuttal 3: Title: Looking forward to your reply Comment: Dear Reviewer 4qUr, We sincerely appreciate your time and efforts in reviewing our manuscript and offering valuable suggestions. We understand you are busy reviewing multiple papers. As the rebuttal deadline is approaching, we are slightly nervous and look forward to your reply or suggestions. We would be more than grateful if you took some time to confirm whether our responses have effectively addressed your concerns and increased the evaluation of our paper. Please do not hesitate to contact us if you require further clarification or have any additional concerns. We are more than willing to continue our communication with you. Thanks so much! 
Best regards, Authors of Submission #8916 --- Rebuttal Comment 3.1: Title: Feedback Comment: Thank you for the detailed explanation regarding my concerns. Some of my concerns have been well addressed, but I still believe that the proposed method may expose more information to attackers, which could lead to privacy leakage. Thus, I will update my score accordingly. --- Reply to Comment 3.1.1: Title: Appreciation for the constructive comments Comment: Dear Reviewer 4qUr, We sincerely appreciate your constructive comments and prompt responses, which help us improve our paper. It is our pleasure to address your concerns during the discussion. Again, thanks for your time and reviews, for evaluating the value of our work, and for improving our scores! Note that in Appendix L, we discuss privacy concerns related to our FedLPA method and demonstrate that it offers the same level of privacy as baseline methods, effectively countering the iDLG attack. We also highlight that FedLPA is compatible with privacy-preserving technologies like DP. Additionally, a concrete example illustrates how FedLPA maintains data privacy. Thanks again for improving the rating of our paper; we are more than grateful for this positive score. Your reviews really help us to polish our paper and make our manuscript more solid. Hope you have a wonderful day! Warm regards, Authors of Submission #8916
Summary: The paper "FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation" introduces FedLPA, a novel one-shot federated learning method that addresses challenges associated with high statistical heterogeneity in non-identical data distributions. The framework uses layer-wise posterior aggregation based on the empirical Fisher information matrix, allowing for the accurate capture and aggregation of local model statistics into a global model. The paper claims that FedLPA improves learning performance significantly over state-of-the-art methods across several datasets without requiring auxiliary datasets or exposing private label information. Strengths: 1. **Originality**: The introduction of layer-wise posterior aggregation using the empirical Fisher information matrix is a novel approach in the context of one-shot federated learning. This method effectively addresses the challenge of non-IID data distributions. 2. **Quality**: The paper provides a rigorous theoretical foundation for the proposed method, including convergence proofs and detailed mathematical formulations. The extensive experimental results on various datasets further support the claimed improvements in learning performance. 3. **Clarity**: The paper is well-structured and clearly written, with thorough explanations of the methodologies and theoretical concepts. The inclusion of figures and tables helps in understanding the experimental results. 4. **Significance**: By improving the performance of one-shot federated learning under non-IID conditions, the proposed method has significant implications for practical applications where data privacy and communication efficiency are critical concerns. Weaknesses: 1. **Implementation Complexity**: The use of layer-wise posterior aggregation and the empirical Fisher information matrix introduces significant complexity. 
The practicality of implementing FedLPA in real-world settings could be challenging without detailed guidelines or simplifications. 2. **Scalability**: The paper does not adequately address the scalability of the proposed method to large datasets or a high number of clients. Evaluating the computational and communication overheads in such scenarios is necessary to understand the practical feasibility of FedLPA. 3. **Privacy Considerations**: While the method claims to preserve data privacy, the paper lacks a detailed discussion on potential privacy risks and mitigation strategies, which is crucial for federated learning applications. 4. **Experimental Scope**: The experiments, although comprehensive, are limited to a few datasets and simple neural network structures. Additional validation on larger and more complex datasets, as well as a variety of neural network architectures, would provide stronger empirical support for the method's generalizability. 5. **Parameter Sensitivity**: The performance of FedLPA may be sensitive to the choice of parameters, such as the noise levels for the empirical Fisher information matrix. A detailed analysis of parameter sensitivity and guidelines for selecting appropriate parameter values would enhance the robustness of the method. Technical Quality: 3 Clarity: 3 Questions for Authors: I have no question. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have acknowledged some limitations of their work, but further discussion would enhance the robustness of the paper: 1. **Complexity and Practicality**: The complexity of the FedLPA framework may hinder practical implementation. Providing more detailed guidelines or potential simplifications could make the approach more accessible for real-world applications. Additionally, discussing potential trade-offs between complexity and performance would be beneficial. 2. 
**Scalability**: The paper does not sufficiently address the scalability of the proposed methods in environments with many clients or large graphs. Detailed evaluations of the computational and communication overheads involved in scaling the methods would help understand their practical feasibility. 3. **Privacy Risks**: A deeper analysis of potential privacy risks and mitigation strategies is necessary, particularly in federated learning settings where data privacy is a major concern. Discussing how privacy-preserving techniques can be integrated into the proposed framework would strengthen the paper. 4. **Experimental Validation**: While the experiments are comprehensive, further validation on larger-scale datasets and more diverse GNN architectures would provide stronger empirical support for the proposed methods. Expanding the experimental scope would help demonstrate the applicability of the methods in various real-world scenarios. Overall, the paper makes valuable contributions to federated learning for graph data, but addressing the mentioned weaknesses would further enhance its robustness and applicability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer XECp, thanks for your comments, which helped us improve our paper. The answers to all your questions are as follows: > **Q1**: **Implementation Complexity**: The use of layer-wise posterior aggregation and the empirical Fisher information matrix introduces significant complexity. The practicality of implementing FedLPA in real-world settings could be challenging without detailed guidelines or simplifications. > And > **Complexity and Practicality**: The complexity of the FedLPA framework may hinder practical implementation. Providing more detailed guidelines or potential simplifications could make the approach more accessible for real-world applications. Additionally, discussing potential trade-offs between complexity and performance would be beneficial. **Answer**: The theoretical analysis of the layer-wise posterior aggregation may be complex; however, the practical implementation is not complicated. Besides, we have provided several functions with APIs to ensure non-experts can adopt our FedLPA framework within a few lines of code. In detail, in our submitted source code zip file, we provide the APIs for users to use our method in line 541 of experients_our.py. We also include the artifact details in our paper and provide all the script files to reproduce the results of our paper. In our paper, we also show that our FedLPA strikes a balance between computation overhead and performance: it incurs only 30% more computation time than the simple FedAvg method while increasing test accuracy by **up to 35%** in some settings, as shown in Table 1 and Table 4. > **Q2**: **Scalability**: The paper does not adequately address the scalability of the proposed method to large datasets or a high number of clients. Evaluating the computational and communication overheads in such scenarios is necessary to understand the practical feasibility of FedLPA. 
**Answer**: In our paper, our FedLPA shows good performance on the MNIST, FMNIST, CIFAR-10, CIFAR-100, SVHN, and EMNIST datasets. We further add experiments in the same experimental setting as in the paper, using ResNet-18 on Tiny-ImageNet. The results show that our method has the potential to deal with large datasets.

| Partitions | FedLPA | Dense | FedAvg |
|------------|---------------|---------------|----------------|
| 0.1 | 17.02$\pm$1.40 | 15.88$\pm$1.96 | 3.72$\pm$1.44 |
| 0.3 | 27.80$\pm$2.10 | 24.91$\pm$1.65 | 8.41$\pm$0.87 |
| 0.5 | 30.14$\pm$1.25 | 29.43$\pm$0.72 | 12.07$\pm$1.92 |

We also add experiments with more clients in the same experimental setting as in the paper, using the FMNIST dataset. The results show that our method can support up to 200 clients.

| Partitions\Client number | 10 | 20 | 50 | 100 | 200 |
|--------------------|--------------|--------------|--------------|--------------|-------------|
| 0.1 | 55.33$\pm$0.06 | 57.37$\pm$0.05 | 57.03$\pm$0.00 | 54.80$\pm$0.13 | 54.17$\pm$0.26 |
| 0.3 | 68.20$\pm$0.04 | 71.30$\pm$0.03 | 66.70$\pm$0.23 | 66.28$\pm$0.45 | 64.52$\pm$0.08 |
| 0.5 | 73.33$\pm$0.06 | 74.07$\pm$0.00 | 71.13$\pm$0.00 | 70.72$\pm$0.09 | 70.05$\pm$0.27 |

The computational and communication overheads are shown in Table 4 of our paper. As the number of clients increases, the computational and communication overheads increase linearly. > **Q3**: **Privacy Considerations**: While the method claims to preserve data privacy, the paper lacks a detailed discussion on potential privacy risks and mitigation strategies, which is crucial for federated learning applications. **Answer**: In Appendix L, we give a detailed discussion of privacy concerns for our FedLPA. We show that our method has the same privacy level as the baselines, counteracting the iDLG attack. Our method is also compatible with existing privacy-preserving technologies (e.g., DP). 
At the end of Appendix L, we also provide a concrete example of privacy attacks to show how FedLPA preserves data privacy. > **Q4**: **Experimental Scope**: The experiments, although comprehensive, are limited to a few datasets and simple neural network structures. Additional validation on larger and more complex datasets, as well as a variety of neural network architectures, would provide stronger empirical support for the method's generalizability. > And > **Experimental Validation**: While the experiments are comprehensive, further validation on larger-scale datasets and more diverse GNN architectures would provide stronger empirical support for the proposed methods. Expanding the experimental scope would help demonstrate the applicability of the methods in various real-world scenarios. **Answer**: In our paper, our FedLPA shows good performance on the MNIST, FMNIST, CIFAR-10, CIFAR-100, SVHN, and EMNIST datasets. We further ran experiments with Tiny-ImageNet, as shown above. In Appendix M.10, we also show results using more complex network structures such as ResNet-18. In this paper, we conduct experiments on CV tasks and do not focus on graph neural networks. > **Q5**: **Parameter Sensitivity**: The performance of FedLPA may be sensitive to the choice of parameters, such as the noise levels for the empirical Fisher information matrix. A detailed analysis of parameter sensitivity and guidelines for selecting appropriate parameter values would enhance the robustness of the method. **Answer**: In our paper, we do not have noise level settings for the empirical Fisher information matrix. We only have one hyper-parameter, $\lambda$ from Eq. 33, which controls the variances of the a priori normal distribution and guarantees that $A_k$ and $B_k$ are positive semi-definite. All other Laplace approximations are sensitive to the hyper-parameter $\lambda$ based on their experimental results, **but Table 3 shows that our approach is relatively robust**. 
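To illustrate the role of such a $\lambda$ on a synthetic matrix (this is not the paper's actual $A_k$ or $B_k$; the matrix and sizes below are illustrative), adding $\lambda I$ shifts every eigenvalue up by $\lambda$, guaranteeing positive definiteness even when the empirical Fisher block is rank-deficient:

```python
import numpy as np

# Illustrative only: damping a possibly rank-deficient empirical Fisher
# block with lambda * I. The matrix below is synthetic, not from the paper.
rng = np.random.default_rng(1)
G = rng.normal(size=(5, 8))          # fewer samples (5) than parameters (8)
F = G.T @ G / len(G)                 # rank <= 5, so F is singular

lam = 1e-3                           # damping hyper-parameter
F_damped = F + lam * np.eye(8)       # every eigenvalue shifted up by lam
```

With damping, the block is safely invertible for downstream aggregation; larger $\lambda$ pulls the posterior closer to the prior.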
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed rebuttal. After careful consideration, the original assessment and rating will remain the same. --- Rebuttal 2: Title: Looking forward to your reply Comment: Dear Reviewer XECp, We sincerely appreciate your time and efforts in reviewing our manuscript and offering valuable suggestions. Note that there will be no second stage of author-reviewer discussions. As the author-reviewer discussion phase is drawing to a close, we would like to confirm whether our responses have effectively addressed your concerns. We provided detailed responses to your concerns a few days ago, and we hope they have adequately addressed your issues. If you require further clarification or have any additional concerns, please do not hesitate to contact us. We are more than willing to continue our communication with you. Best regards, Authors of Submission #8916 --- Rebuttal 3: Title: Thank you for the response Comment: Dear Reviewer XECp, Thanks for replying to us while you are busy reviewing multiple papers. Although you maintained your rating after careful consideration, we are a little disappointed but still appreciate your time and efforts in reviewing our manuscript and offering valuable suggestions. We would be grateful if you could increase the rating of our paper during the following discussions among the reviewers and AC. Again, thanks for your time and reviews. If you have any further concerns, we will be more than happy to continue our communication with you. Best regards, Authors of Submission #8916
Summary: This paper proposes a one-shot Federated Learning (FL) method, denoted as FedLPA, to address heterogeneous data distribution among clients. FedLPA does not demand auxiliary datasets or private label information during aggregation on the server side. To achieve this, FedLPA infers the posteriors by leveraging the Fisher information matrix of each layer in local models using layer-wise Laplace approximation and aggregates these to train the global model. Abundant experiment results demonstrate the efficacy of FedLPA compared to conventional FL methods under the one-shot setting. Strengths: 1. The idea of the paper is easy to follow. 2. Instead of measuring correlations between different layers, the proposed method only approximates the layer-wise Fisher to get a good trade-off. 3. Extensive experiment results show the superiority of FedLPA under the one-shot FL setting. Weaknesses: To be honest, I do not buy the one-shot Federated Learning (FL) setting. This setting goes against the idea of FL. But this does not affect my rating. 1. There are some typo problems. 2. Authors should compare the proposed method with differential privacy (DP) FL or prototype-based methods rather than conventional FL methods. All these approaches address communication security but from different perspectives. 3. Would the proposed method also show superior performance on more challenging datasets, like Tiny-ImageNet and Office-Home? 4. Any proof to show that "computing the correlations between different layers brings slight improvement"? Technical Quality: 3 Clarity: 3 Questions for Authors: As I mentioned in the weaknesses, the authors should compare the proposed method with DP FL and prototype-based FL rather than conventional FL methods. I believe the experiments in the paper do not provide a fair comparison. According to the authors in Supplementary Sec. L, the security level of the proposed method is similar to that of FedAvg, meaning it is also vulnerable to attacks targeting FedAvg. 
Is the only advantage of the proposed method that attackers have fewer opportunities to strike, since clients only communicate with the server once? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The topic discussed within the paper is highly related to privacy protection and the authors show a novel method to deal with it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer F77H, thanks for your comments, which helped us improve our paper. The answers to all your questions are as follows: > **Q1**: There are some typo problems. **Answer**: Thanks for pointing that out and thanks for your efforts in reviewing our paper. We found the following typos and will revise if we find more: Sec. 3.7 We -> we; Sec. 4.1 benchmark -> benchmarks; Appendix F in one-shot -> in the one-shot; Appendix M.13 proportion -> proportions. > **Q2**: Authors should compare the proposed method with differential privacy (DP) FL or prototype-based methods rather than conventional FL methods. All these approaches address communication security but from different perspectives. > And > As I mentioned in the weaknesses, the authors should compare the proposed method with DP FL and prototype-based FL rather than conventional FL methods. I believe the experiments in the paper do not provide a fair comparison. **Answer**: Beyond privacy protection and the reduction of the attack surface, the one-shot FL framework also addresses the substantial communication overhead and the higher demand for fault tolerance across rounds. In Appendix L.1, we show the experiment of DP-FedLPA and DP-FedAvg in the one-shot setting. Specifically, since the sensitivity of the data sample distribution after normalization is 1, we add Laplacian noise with $\lambda=\frac{1}{\epsilon}$. We set $\epsilon=\{3,5,8\}$, which provides modest privacy guarantees, since $\epsilon \in (1,10)$ is normally viewed as a suitable choice. Besides, we have added experiments in the same experimental setting as in the paper, on the FMNIST dataset, to compare DP-FedAvg (multiple-round), as in Appendix L.1, with our FedLPA (one-shot), showing how many rounds DP-FedAvg needs to achieve the same test performance. 
| $\epsilon$\Partitions | 0.1 | 0.3 | 0.5 |
|------------|-----|-----|-----|
| 8 | 11 | 10 | 8 |
| 5 | 11 | 9 | 8 |
| 3 | 12 | 9 | 7 |

The results show that DP-FedAvg needs about 10 rounds of communication to achieve the same test performance as our one-round FedLPA. Combined with our previous results in Table 4 and Table 7, our FedLPA can save communication and computation overhead and combine with the DP method to mitigate potential privacy leakage. Based on the above settings, DP-FedAvg needs at least **3x the communication overhead** and **5x the computation overhead**. While DP-FedAvg needs multiple rounds to reach similar accuracy, it may be vulnerable to more privacy attack methods due to the multiple queries, such as curvature-based privacy attacks. > **Q3**: Would the proposed method also show superior performance on more challenging datasets, like Tiny-ImageNet and Office-Home? **Answer**: In our paper, our FedLPA shows good performance on the MNIST, FMNIST, CIFAR-10, CIFAR-100, SVHN, and EMNIST datasets. We further add experiments in the same experimental setting as in the paper, using ResNet-18 on Tiny-ImageNet. The results show that our method has the potential to deal with large datasets.

| Partitions | FedLPA | Dense | FedAvg |
|------------|------------|-------------|--------------|
| 0.1 | 17.02$\pm$1.40 | 15.88$\pm$1.96 | 3.72$\pm$1.44 |
| 0.3 | 27.80$\pm$2.10 | 24.91$\pm$1.65 | 8.41$\pm$0.87 |
| 0.5 | 30.14$\pm$1.25 | 29.43$\pm$0.72 | 12.07$\pm$1.92 |

Due to the rebuttal time limit, we leave the implementation on the Office-Home dataset as future work. However, we believe that FedLPA would also perform well on Office-Home, since it shows the potential to deal with challenging data on the Tiny-ImageNet dataset. > **Q4**: Any proof to show that "computing the correlations between different layers brings slight improvement"? 
**Answer**: Paper [1], which we cited in our paper, provides a detailed evaluation and testing of using the block-diagonal Fisher to approximate the full one. Firstly, Chapter 6.3.1, "Interpretations of this approximation", in paper [2] indicates that using a block-wise Kronecker-factored Fisher closely approximates the full Fisher. Although there is a bias term (due to the approximation in our Appendix Equation 30), this term approaches zero when there are sufficient samples. Furthermore, the paper examines the approximation quality of the block-diagonal Fisher compared with the true Fisher and suggests that the block-diagonal Fisher captures the main correlations, while the remaining correlations have a minimal impact on the experimental results. We will add the above analysis to the appendix in the camera-ready version. [1] Ritter, Hippolyt, Aleksandar Botev, and David Barber. "A scalable Laplace approximation for neural networks." 6th International Conference on Learning Representations. [2] Martens, James. Second-order optimization for neural networks. University of Toronto, 2016. > **Q5**: According to the authors in Supplementary Sec. L, the security level of the proposed method is similar to that of FedAvg, meaning it is also vulnerable to attacks targeting FedAvg. Is the only advantage of the proposed method that attackers have fewer opportunities to strike, since clients only communicate with the server once? **Answer**: In fact, our approach can reduce the attack surface, as attackers have fewer opportunities since clients only communicate with the server once. Beyond that, in the one-shot setting, FedLPA also deals with the substantial communication overhead and the higher demand for fault tolerance across rounds. FedAvg is vulnerable to privacy attacks and incurs huge communication overhead. 
However, FedLPA is compatible with existing privacy-preserving approaches (e.g., DP) to achieve an even higher privacy level and strike a balance between computation overhead, communication overhead, and performance. Note that our paper mainly focuses on the efficiency perspective to improve the performance of one-shot FL, with less emphasis on the security perspective. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. It addressed most of my concerns. Though I do not buy this setting, I believe every study has its value. Thus, I would like to raise my score. --- Rebuttal 2: Title: Appreciation for the constructive comments Comment: Dear Reviewer F77H, We sincerely appreciate your constructive comments and prompt responses, which help us improve our paper. It is our pleasure to address your concerns during the discussion. Again, thanks for your time and reviews. Thanks for evaluating the value of our work! Warm regards, Authors of Submission #8916
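The Laplace mechanism used in the DP comparison earlier in this thread (sensitivity 1 after normalization, noise scale $\lambda = 1/\epsilon$) can be sketched as follows; the function and variable names are illustrative, not from the paper's code:

```python
import numpy as np

# Sketch of the Laplace mechanism: with sensitivity 1 (as stated for the
# normalized label distribution), the noise scale is 1 / epsilon.
def laplace_mechanism(values, epsilon, rng):
    """Add Laplacian noise calibrated to sensitivity 1."""
    scale = 1.0 / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=values.shape)

rng = np.random.default_rng(0)
label_dist = np.array([0.5, 0.3, 0.2])   # hypothetical normalized label counts
noisy = laplace_mechanism(label_dist, epsilon=5.0, rng=rng)
```

Smaller $\epsilon$ means larger noise and a stronger privacy guarantee; the rebuttal's $\epsilon \in \{3, 5, 8\}$ corresponds to noise scales of roughly 0.33, 0.2, and 0.125.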
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Towards Understanding Evolving Patterns in Sequential Data
Accept (spotlight)
Summary: This paper proposes a novel metric, Evolving Rate, using mutual information to measure the existence of evolving patterns in sequential data. For scenarios in which data samples across disparate time steps are not aligned, the paper proposes to build the correspondence between snapshots using optimal transport, and thus develops the corresponding EvoRate_w to measure evolving patterns without correspondences. The final experiments, conducted on multiple tasks using several datasets, demonstrate the superior performance of the proposed EvoRate and of EvoRate using OT to align data across time steps, compared to SOTA baselines. Strengths: Significance: Evolving patterns are important when applying sequential models to observed time-series data. This paper develops the evolving rate (EvoRate) to quantify evolving patterns. In particular, EvoRate can be used to assess temporal order and to conduct feature selection in sequential observations. Clarity: The paper is well motivated! Most of the technical parts of the work are clearly presented! Quality: Overall, the technical contribution of the work looks sound! The theoretical aspect of the evolving rate is valuable in modeling sequential data! Weaknesses: The experimental details about the cost function of the optimal transport in aligning data across time steps, the parameters, and the neural architecture are not provided. These details could be crucial for others to use the methods! Technical Quality: 3 Clarity: 3 Questions for Authors: How is the cost function chosen when using optimal transport to build correspondence for data samples across time steps? Can you provide more experimental details? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No, the authors do not discuss the potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Responses to Weakness 1. (Cost function) The cost function of Optimal Transport (OT) is defined in Eq (7). In the literature [1-2], OT typically uses a distance metric between two samples, each sampled from a different marginal distribution, as the cost function. For instance, $z^s$ and $z^t$ represent data from the source and target domains in domain adaptation tasks [1-2]. In our approach, we account for the dynamic characteristics of sequential data. Specifically, we assume the presence of an autoregressive function $f$ that governs the evolving patterns within the data. Consequently, we design the cost function as the distance between the output of $f$ with historical samples as input, $f(\textbf{z}^t_{t-k+1})$, and the subsequent sample $z_{t+1}$. In our implementation, OT aims to find the matching between a batch of samples $\{z_{t,i}\}^B_{i=1}$ and $\{z_{t+1,i}\}^B_{i=1}$ in a training iteration across time steps. 2. (parameters and architecture) Thanks for pointing it out. We will add these details in the revision. The autoregressive model $f$ is typically a 2- or 3-layer LSTM, with input size, hidden size, and output size set to 128, 256, or 512, depending on the architecture of the encoder $g$. For tabular data, $g$ is a 2- or 3-layer MLP, with the input size corresponding to the data size and the hidden and output sizes set to either 128 or 512. For image data, $g$ is a ResNet-12 with a hidden size of 512. Q: (choosing the cost function) We do not select the cost function; rather, we use a cost function that is consistent with the critic function in autoregressive form for mutual information estimation, which leverages the structure of sequential data. [1] Optimal Transport for Domain Adaptation, TPAMI 2015 [2] Joint distribution optimal transportation for domain adaptation, NIPS 2017
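The batch matching described in this rebuttal can be sketched numerically. The sketch below is illustrative only: `f` is a trivial stand-in for the learned autoregressive model, the batch is synthetic, and a small Sinkhorn solver replaces whatever OT solver the authors actually use.

```python
import numpy as np

def sinkhorn(C, eps=0.05, n_iter=300):
    """Entropy-regularized OT plan for cost matrix C with uniform marginals."""
    n, m = C.shape
    K = np.exp(-C / eps)
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
B, d = 8, 8
z_t = rng.normal(size=(B, d))                  # batch at time t
z_next = z_t + 0.05 * rng.normal(size=(B, d))  # batch at t+1, order unknown

f = lambda z: z  # hypothetical stand-in for the learned autoregressive f

# Autoregressive cost: distance between f(history) and each candidate next sample
pred = f(z_t)
C = ((pred[:, None, :] - z_next[None, :, :]) ** 2).sum(-1)

P = sinkhorn(C)              # soft transport plan (B x B)
match = P.argmax(axis=1)     # read off as a hard correspondence
```

Because each perturbed sample stays closest to its own predecessor's prediction, the plan concentrates on the true correspondence, which is exactly the role OT plays when timestamps lack alignment.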
Summary: The article introduces a groundbreaking technique for measuring changes in sequential data, marking a notable advancement in the realm of machine learning. The authors bring forth the Evolving Rate (EvoRate) and its advanced iteration, EvoRate$_\mathcal{W}$, as effective tools for evaluating the temporal dynamics within datasets. Strengths: - Tackles a real-world issue in machine learning, with possible uses in predicting time-series data, categorization, and other sectors. - Bridges a knowledge gap in the current comprehension and measurement of patterns that change over time in sequential data. - The creation of EvoRate and EvoRate$_\mathcal{W}$ represents a creative leap forward, offering a numerical assessment of evolving patterns applicable to a range of learning tasks. The utilization of optimal transport (OT) to resolve the issue of non-correspondence in temporal data is especially innovative. Weaknesses: - It may not fully address the scalability of the proposed methods as dimensionality grows. The authors assert that the dimensionality reduction of the original data can lead to a decrease in computational time when calculating Mutual Information (MI). However, the process of training an auto-encoder itself demands considerable time and resources. Is this factored into their considerations? - The scope of the experiments might be limited, with a notable absence of extensive real-world datasets - The true MI values for high-dimensional data, such as in video and time series contexts, are often infeasible to obtain. How do the authors ensure that EvoRate can provide an accurate measure of MI for such high-dimensional data, especially given this significant limitation? Technical Quality: 4 Clarity: 4 Questions for Authors: -I find the concept of leveraging Mutual Information (MI) to evaluate the evolution of patterns to be logical, yet I harbor reservations about whether MI can truly equate to the changes observed in sequential data. 
A deeper exploration of this connection would be highly beneficial. -The ForeCA method's results are counterintuitive, as they oppose the prediction performance outlined in Table 1, where one would expect the inclusion of more historical data to enhance the predictive capabilities. Can the authors elucidate why EvoRate, an estimator of MI, is deemed superior for capturing evolving patterns over ForeCA, which is based on entropy estimation? -I am curious to know if the EvoRate model, once trained, can be directly applied to new datasets without the need for retraining. If this is possible, it would make EvoRate a more adaptable and computationally efficient tool that could be easily integrated into various systems. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: There's no limitation relating to social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Responses to Weakness: 1. (scalability & training an auto-encoder) We have added experiments on the scalability of the method with different dimensions in the encoding space using the Video Prediction dataset KITTI.

| Encoding Dim | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|
| EvoRate | 2.19 | 2.37 | 2.47 | 2.56 |

From the above table, a smaller dimension in the encoding space leads to a lower EvoRate due to greater information loss. However, the performance remains acceptable even with a very low dimension of 128. Training an auto-encoder model from scratch can be computationally expensive. However, we can use a pretrained backbone, significantly reducing computational costs by directly learning the dynamic function $f$ from the encoded embeddings.

| Corruption Ratio | 0 | 0.05 | 0.1 | 0.25 | 0.5 |
|---|---|---|---|---|---|
| Finetune $g$ | 2.56 | 1.52 | 0.93 | 0.37 | 0.05 |
| Pretrained & fixed $g$ | 2.54 | 1.55 | 0.92 | 0.39 | 0.10 |

The Corruption Ratio represents the probability of shuffling the sequence to disrupt the evolving patterns. A higher Corruption Ratio should result in a lower EvoRate. We find that with a fixed pretrained encoding function $g$, the estimated EvoRate closely approximates the result of training $g$ with EvoRate on the video dataset KITTI. The pretrained model used is ResNet-18 from torchvision, pretrained on ImageNet-1K. 2. (Absence of real-world datasets) We have utilized real-world datasets: five time-series forecasting datasets presented in Table 2, three EDG datasets (Portraits, Caltran, and Powersupply) in Table 3, and one video prediction dataset (KITTI) in Figure 2-c.
Additionally, we included the experimental result of EvoRate on the NLP dataset Associative Recall from zoology [3]. A low EvoRate for the NLP dataset implies either a low prediction potential or that this dataset alone cannot be used to train a good predictive model. Notably, the test accuracy of AR is typically 100%, which is consistent with the high EvoRate of 2.93.

| Corruption Ratio | 0 | 0.25 | 0.5 | 0.75 | 1 |
|---|---|---|---|---|---|
| EvoRate | 2.93 | 1.52 | 1.28 | 1.18 | 1.04 |

3. (True MI values are unobtainable) We acknowledge that for real-world high-dimensional data, MI is often intractable, as highlighted in the literature [1]. However, our experimental results demonstrate that we can achieve a reasonable estimation of MI, as illustrated in Figure 1. Similar to the approach of ForeCA, we use an information-theory-based estimator as an indicator of evolving patterns. While ForeCA uses entropy, we use a mutual information estimator. Responses to Questions: 1. (MI to evaluate evolution of patterns) Using MI to estimate the relationship between pairs of variables is a fundamental problem in science and engineering. We propose to leverage MI to quantify the strength of the evolving patterns, which reveals that the evolving patterns are strongly related to the latent temporal dependency. We justify our choice of using MI to measure evolving patterns in Section 4.2 and Proposition 1. Specifically, MI is an intrinsic property of the data that directly influences the expected error of the maximum likelihood estimation loss. Therefore, we claim that MI serves as a direct and reliable indicator of evolving patterns. 2. (Why EvoRate is better than ForeCA) This limitation is due to ForeCA's inability to detect complex patterns, as explained in lines 112-114 of our paper.
While temporal patterns can encompass trends, cycles, irregular fluctuations, and more complex behaviors, ForeCA is restricted to detecting only simple linear cyclical patterns. In [2], the author assumes that the data from future time steps is a linear transformation of the historical data. However, our method can leverage MI-guided, efficient deep representation learning to describe complex and non-linear evolving patterns. 3. (apply to new datasets without finetuning) We acknowledge that the current EvoRate model can utilize a fixed pre-trained encoder $g$ to project data into a lower dimension to reduce computational complexity. However, the autoregressive function $f$ in Eq (4) and (9) still requires fine-tuning for each dataset due to the distinct evolving patterns present in different datasets. Nonetheless, training a larger EvoRate model with more datasets could potentially result in a more robust model capable of assessing evolving patterns across various datasets. This is a promising direction for our future research. [1] Estimating mutual information. Physical Review 2004 [2] Forecastable component analysis. ICML 2013 [3] Zoology: Measuring and Improving Recall in Efficient Language Models, Arxiv 2023 --- Rebuttal Comment 1.1: Comment: Thank you for the response. I've read it carefully and my questions have been addressed. I will keep my current score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer oZmm, Thank you for your kind response and for taking the time to review our responses. We are pleased to hear that our responses were satisfactory and that you are maintaining your original assessment. Your thoughtful feedback has been invaluable in strengthening our work, and we sincerely appreciate your efforts. Best regards, Authors
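The corruption-ratio protocol used in the rebuttal tables (shuffling a fraction of the sequence to destroy evolving patterns) can be mimicked in a few lines. The sketch is a toy: lag-1 autocorrelation is only a crude stand-in for EvoRate, and the AR(1) data are synthetic.

```python
import numpy as np

def corrupt(seq, ratio, rng):
    """With probability `ratio`, swap each position with a random one,
    degrading temporal dependency while preserving the marginal distribution."""
    seq = seq.copy()
    for t in range(len(seq)):
        if rng.random() < ratio:
            j = rng.integers(len(seq))
            seq[t], seq[j] = seq[j], seq[t]
    return seq

def lag1_dependency(seq):
    """Absolute lag-1 autocorrelation: a crude proxy for an evolving pattern."""
    return abs(np.corrcoef(seq[:-1], seq[1:])[0, 1])

rng = np.random.default_rng(0)
T = 2000
z = np.zeros(T)
for t in range(1, T):              # AR(1): strong evolving pattern
    z[t] = 0.95 * z[t - 1] + 0.1 * rng.normal()

clean = lag1_dependency(z)
shuffled = lag1_dependency(corrupt(z, 0.9, rng))
# clean stays near 0.95; shuffled collapses toward 0
```

This mirrors the tables above: the higher the corruption ratio, the weaker the measurable temporal dependency.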
Summary: This paper introduces EvoRate, a novel metric designed to quantify the evolving patterns in sequential data. The authors propose leveraging mutual information (MI) to measure the temporal dependencies between data points in a sequence. The paper addresses a significant challenge in machine learning: identifying and quantifying evolving patterns in high-dimensional sequential data. EvoRate is further extended to EvoRate$_W$, which uses optimal transport to establish correspondences between data points at different timestamps, facilitating MI estimation even when direct correspondences are absent. The proposed methods are validated through experiments on both synthetic and real-world datasets, demonstrating their effectiveness in various applications such as time-series forecasting, classification with temporal distribution shifts, and video prediction. Strengths: 1. It is true that most researchers apply sequential models to sequential data without considering whether there are evolving patterns. Therefore, it is critical to delve deeper into understanding these evolving patterns in sequential data. The introduction of EvoRate and its extension, EvoRate$_W$, addresses a significant gap in the literature by providing a quantitative measure for evolving patterns in sequential data. This is a novel and impactful contribution. 2. The paper provides analysis and examples to clarify why EvoRate can measure evolving patterns, particularly through Proposition 1. This proposition reveals that mutual information (MI) is an intrinsic property of sequential data, reflecting the evolving patterns within the data. Detailed explanations of how MI quantifies temporal dependencies and the use of optimal transport for handling data without direct correspondences are included, enhancing the understanding of this measure. 3. 
Figure 1 (a, b) illustrates the designed MI critic function used in Equation 4, which better measures MI by considering the data structure of sequential data. More importantly, Figure 1 (c, d) demonstrates the effectiveness of EvoRate$_W$. Even in the absence of correspondence between time points, EvoRate$_W$ can still approximate the ground truth MI, showcasing its robustness and practical utility. Weaknesses: 1. The authors refer to EDG as their algorithmic contribution in Line 70, but the detailed setting and definition of the EDG problem are lacking. A clear introduction and definition are necessary to understand the context and setup of EDG. 2. There is some confusion regarding the term "learning performance" mentioned in Line 53. It is unclear whether this refers to the prediction tasks of sequential data or the process of training the model to measure mutual information (MI). Additionally, how EvoRate manages the trade-off between computational complexity and learning performance needs clarification. Since the mutual information measurement model also requires training, as is standard in variational MI estimators [1], why is it necessary to measure MI to reflect evolving patterns rather than directly relying on the predictive performance of sequential models (e.g., LSTM, Attention, SSM), which also require training? 3. It would be beneficial to include a pseudocode table to facilitate a clearer understanding of the algorithms for the readers. 4. Although the authors state they do not have experiments on NLP tasks, it would be highly recommended to apply EvoRate to NLP datasets, given that NLP is one of the most important sequential data tasks. 5. The claim that the function $f$ will converge to $f^*$ in Remark 2 is unconvincing. This is because the joint distribution is not available, making it difficult to accurately learn the transition function $f$. 
Without access to the joint distribution, the learning process lacks the necessary information to ensure convergence to the optimal function $f^*$, especially when dealing with complex, high-dimensional real-world data. 6. The synthetic experiments with known correspondences in Section 6.1 only include the average operation as the transition function. It would be more beneficial to also consider a more practical dataset. 7. There are some typos that cause confusion: Table 2 should be EvoRate and Table 3 should be EvoRate$_W$. [1] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 531–540. PMLR, 10–15 Jul 2018. Technical Quality: 4 Clarity: 3 Questions for Authors: See the weaknesses above. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: This paper is about fundamental machine learning research and is unlikely to have any potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. (EDG setting) We thank you for pointing it out and will include an illustration of the Evolving Domain Generalization (EDG) setup. A brief explanation of EDG can be found in lines 328-330. More specifically, the EDG setup involves using the training dataset $D_S = \{ \{ x_{i,t}, y_{i,t} \}^N_{i=1} \}^M_{t=1}$, where $x_{i,t}$ and $y_{i,t}$ represent the data and label of the $i$-th sample at time $t$ and $N$ is the sample size, to learn a function that can predict the class of samples from future timestamps; this function is then evaluated on the test dataset $D_T = \{ \{ x_{i,M+t}, y_{i,M+t} \}^N_{i=1}\}^L_{t=1}$.
2. - (a: learning performance) Thanks for pointing it out. The term "learning performance" refers to the performance of approximating MI using the trained EvoRate, which is parameterized by the learned autoregressive function $f$ and the encoder function $g$.
- (b: trade-off between complexity and performance) When the original dimension of the data $D$ equals the dimension $d$ of the encoded embeddings, it provides the most precise MI estimation, at a computational cost comparable to directly training a deep autoregressive model. On the other hand, when $d \ll D$, the estimation is less precise but more computationally efficient. Specifically, if we directly load a pretrained model for $g$, we only need to train $f$, which is typically composed of multi-layer perceptrons (MLPs).
- (c: why measure MI instead of relying on the predictive performance) As pointed out in (b), setting $d \ll D$ makes EvoRate efficient in estimating MI compared to training a deep autoregressive model in the original high dimension, especially for high-resolution videos. The key is to estimate MI in the latent encoding space. We can trade off some precision of EvoRate, due to the Data Processing Inequality [1], for computational efficiency.
We provide additional experimental results with the KITTI dataset:

| Corruption Ratio | 0 | 0.05 | 0.1 | 0.25 | 0.5 |
|---|---|---|---|---|---|
| Finetune $g$ | 2.56 | 1.52 | 0.93 | 0.37 | 0.05 |
| Pretrained & fixed $g$ | 2.54 | 1.55 | 0.92 | 0.39 | 0.10 |

In the above table, the Corruption Ratio represents the probability of shuffling the data sequence, where a high ratio leads to a degradation of evolving patterns. 3. (pseudocode) We have included the pseudocode in the global response PDF and will add it to the revised version of the paper. 4. (NLP tasks) We further experiment with EvoRate on the NLP dataset Associative Recall (AR) in zoology [4], and the results are shown below. Due to the time limit, we chose this simple synthetic dataset for our experiments. EvoRate is estimated between the historical sequence and its query's recall. A low EvoRate for the NLP dataset implies either a low prediction potential or that this dataset alone cannot be used to train a good predictive model. Notably, the test accuracy of AR is typically 100%, which is consistent with the high EvoRate of 2.93, indicating strong predictive performance.

| Corruption Ratio | 0 | 0.25 | 0.5 | 0.75 | 1 |
|---|---|---|---|---|---|
| EvoRate | 2.93 | 1.52 | 1.28 | 1.18 | 1.04 |

5. ($f$ will converge to $f^*$) In Lemma 1, we show that with our defined autoregressive cost function, if $f$ attains $f^*$, the estimated joint distribution $\pi^*$ will converge to the real distribution. Furthermore, it is common practice to estimate the joint distribution with optimal transport to build the correspondence, as demonstrated in [2,3]. However, proving the convergence of $f$ to $f^*$ is challenging, and we will try to address this in future research. 6.
(Synthetic experiments) For more complex transition functions, the analytical mutual information value is intractable, as there are no methods to compute the ground truth MI value quantitatively. For datasets with correspondences, we test EvoRate on real-world time series forecasting datasets Crypto, Player Traj., M4-Monthly, M4-Weekly, M4-Daily, and video prediction KITTI dataset. 7. (typos) Thank you for pointing them out, and we will revise the paper accordingly. [1] Elements of information theory. John Wiley & Sons, 1999 [2] Joint distribution optimal transportation for domain adaptation, NIPS 2017 [3] Optimal Transport for Domain Adaptation, TPAMI 2015 [4] Zoology: Measuring and Improving Recall in Efficient Language Models, Arxiv 2023 --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and the new experiments. After reviewing the rebuttal, my concerns have been fully addressed. I am particularly impressed that EvoRate can be effectively applied to NLP datasets, demonstrating significant potential for various real-world applications. I have no further questions and am pleased to raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer rYZr, We greatly appreciate your feedback and are glad that our rebuttal has resolved your concerns, resulting in an improved rating! We will incorporate your suggestions into the manuscript in future revisions. Best regards, Authors of 6086
Summary: This paper aims to identify evolving patterns in sequential data. In addition, given that evolving patterns may be present in sequential data, this paper introduces a technique that can identify the best temporal order and features for learning from the sequential data. To address this, the paper proposes an indicator, Evolving Rate (EvoRate), borrowing ideas from existing works on Mutual Information (MI) estimation. The effectiveness of EvoRate is tested by experiments on multivariate time-series forecasting, video forecasting, and Evolving Domain Generalization (EDG) tasks. Strengths: The research question discussed is interesting. Weaknesses: This paper measures evolving patterns with an improved MI indicator. Some discussions and experiments are conducted to evaluate the advantages of EvoRate over existing MI estimators. However, the question discussed in this paper is "How to determine the existence of evolving patterns in data sequences", which does not limit the technical solutions to MI-based ones. Therefore, I question the contribution of this paper. Technical Quality: 2 Clarity: 2 Questions for Authors: Apart from comparing to existing works on MI, did you also consider studies working on evolving patterns but not using MI? How can the results of accurate identification of evolving patterns be used? This paper shows the results of the estimated EvoRate compared to MI results. However, it is less clear to me what the real benefit is of introducing MI to solve the evolving pattern issue. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: This paper shows some interesting ideas and results, but the motivation is not very clear to me. The proposed method is an edited version of MI. However, I am not clear why it is beneficial to introduce MI to address evolving patterns compared to existing solutions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
- W1 & Limitation (motivation is not very clear) - **Our motivations and contributions**: Our work's contribution is three-fold:
  - It is theoretically motivated by the use of MI to estimate evolving patterns.
  - It applies a specially designed similarity critic that accounts for the autoregressive structure of the data.
  - For cases where data is sampled at each timestamp and lacks correspondence, we propose methods to approximate the absent correspondence between timestamps with optimal transport (OT), thus enabling EvoRate estimation.
- W1 & Q1 (benefit of introducing MI) - **Why we use MI**: We justify our choice of using MI in Section 4.2 and Proposition 1. Specifically, we show that **a larger MI leads to a smaller expected maximum likelihood estimation (MLE) loss, which indicates better autoregression performance**. As shown in Eq (5), the expected risk can be decomposed into two terms: the first term is related to the prediction model, and the second term, $H(Z_{t+1}) - I(Z_{t+1};\textbf{Z}^t_{t-k+1})$, is only related to the data. Since the inequality $H(Z_{t+1}) \ge I(Z_{t+1}; \textbf{Z}^t_{t-k+1})$ always holds, minimizing the prediction risk in autoregression requires MI to be as close as possible to $H(Z_{t+1})$. The insight behind achieving equality is that we can predict the future state $Z_{t+1}$ if we have complete information about the historical states $\textbf{Z}_{t-k+1}^t$. Consequently, we argue that MI serves as a direct and reliable indicator of evolving patterns. Based on this conclusion, our experiments aim to show that EvoRate can effectively approximate MI in sequential data.
- W1 & Q1 (comparing to existing works) - **Advantages over existing work**: The problem of evolving patterns in sequential data is under-studied, and to the best of our knowledge, the only related work is ForeCA [1]. However, ForeCA relies on the spectral entropy of the data and assumes a linear dependency between historical information and future states.
Consequently, it can only detect cyclical patterns and cannot serve as an indicator of complex evolving patterns. Moreover, ForeCA cannot be applied to high-dimensional data, limiting its use for many real-world datasets. EvoRate addresses these issues through MI-guided, efficient deep representation learning, and its effectiveness is justified in Table 1 of our paper. More discussion of the benefits of EvoRate over ForeCA is also presented in lines 106-116 of our paper. We also include three statistical indicators for time series in the global response.
- Q1 (how to use results of identification of evolving patterns): There are various applications:
  - a) Model selection (sequential vs. static): By establishing an empirical threshold for EvoRate, we can determine the appropriate model for predictions. If EvoRate exceeds the threshold, a sequential model is recommended. Conversely, if EvoRate is below the threshold, indicating a lack of temporal patterns (e.g., coin tossing), a static model should be used. For example, based on the results in Table 3, we should apply a static model for the Portraits and Caltran datasets.
  - b) Temporal order estimation: EvoRate can be used to estimate the temporal order in the data. From Table 1, we suggest that building a temporal model with a temporal order of 90 can achieve optimal performance, as the EvoRate values for orders 180 and 270 are very close to that of 90.
  - c) Temporal feature selection: EvoRate can be used for feature selection in temporal prediction tasks, allowing for efficient and explainable model training by excluding redundant or irrelevant features, as shown in Figure 2-b.
  - d) Evolving domain generalization (EDG): EvoRate naturally acts as a regularizer to enhance the performance of learning dynamics in sequential data, as verified on EDG tasks in Table 4.
  - e) We further experiment with EvoRate on the NLP dataset Associative Recall (AR) of zoology [4].
The results indicate that a low EvoRate for the NLP dataset implies either a low prediction potential or that the dataset alone cannot be used to train a good predictive model. EvoRate is estimated between the historical sequence and its query's recall.

| Corruption Ratio | 0 | 0.25 | 0.5 | 0.75 | 1 |
|---|---|---|---|---|---|
| EvoRate | 2.93 | 1.52 | 1.28 | 1.18 | 1.04 |

The Corruption Ratio represents the probability of shuffling the data sequence. A high ratio leads to a degradation of evolving patterns, resulting in a lower EvoRate. This is consistent with the results shown in the above table. Notably, the test accuracy of the AR dataset is normally 100%, which aligns with its high EvoRate of 2.93.
- Limitation (EvoRate is edited MI): There are two major differences between EvoRate and existing MI estimation methods (e.g., [5-6]): 1) Existing MI estimation methods are designed for static data, and their concatenated and separable critic functions directly compute the similarity between embeddings in two different domains. In contrast, EvoRate leverages an autoregressive critic function to efficiently account for the structure of sequential data by mapping historical states to the next state and computing similarity in the domain of the next state. As a result, existing MI estimation methods underestimate the ground truth MI values of sequential data, as shown in Figure 1-b. 2) Existing MI estimation methods do not account for sequential data without correspondences, and hence cannot directly estimate MI in such cases. EvoRate$_W$ mitigates this issue by building correspondence with optimal transport (OT). However, even with OT, existing MI estimation methods still fail to estimate MI for sequential data due to limitations in their critic functions, as shown in Figure 1.
[1] Forecastable component analysis [2] Lyapunov exponents [3] Testing stationarity in time series [4] Measuring and Improving Recall in Efficient Language Models [5] Mutual information neural estimation [6] A contrastive log-ratio upper bound of mutual information --- Rebuttal Comment 1.1: Comment: Thanks for good explanation for my concerns --- Reply to Comment 1.1.1: Comment: Thank you for raising our score. We highly appreciate your recognition of our work. --- Rebuttal Comment 1.2: Comment: We have identified a typo in our rebuttal: In our response to the Limitation section, the phrase "As a result, EvoRate underestimates..." should instead refer to existing MI methods, not EvoRate.
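The contrast the authors draw between separable critics and an autoregressive critic can be sketched with a simple InfoNCE-style lower bound, where score(i, j) = -||f(hist_i) - next_j||^2. Everything below is a toy stand-in (identity `f`, synthetic batch), not the paper's actual Eq (4) implementation:

```python
import numpy as np

def infonce_autoregressive(z_hist, z_next, f):
    """InfoNCE-style MI lower bound with an autoregressive critic:
    score(i, j) = -||f(z_hist_i) - z_next_j||^2."""
    pred = f(z_hist)                                              # (B, d)
    s = -((pred[:, None, :] - z_next[None, :, :]) ** 2).sum(-1)   # (B, B)
    s = s - s.max(axis=1, keepdims=True)                          # stability
    log_soft = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return np.diag(log_soft).mean() + np.log(len(z_next))

rng = np.random.default_rng(0)
B, d = 64, 8
z_hist = rng.normal(size=(B, d))
f = lambda z: z  # hypothetical stand-in for the learned predictor

dependent = z_hist + 0.1 * rng.normal(size=(B, d))  # strong evolving pattern
independent = rng.normal(size=(B, d))               # no pattern

mi_dep = infonce_autoregressive(z_hist, dependent, f)
mi_ind = infonce_autoregressive(z_hist, independent, f)
# mi_dep approaches log(B); mi_ind stays near zero
```

The estimate separates dependent from independent pairs, illustrating why an MI-style score can serve as an indicator of evolving patterns.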
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their valuable comments on our work. We have received four reviews with ratings of 3, 6, 7, and 7. We are pleased that the reviewers have good impressions of our work, including: - Addressing an interesting and important problem (8iST, rYZr, V8vA); - Presenting a well-motivated paper with the proposed method EvoRate being useful for a variety of applications (rYZr, V8vA, oZmm); - Theoretical justification for choosing mutual information (MI) as the indicator of evolving patterns in sequential data (rYZr, V8vA); - Generally clear presentation (V8vA). During the rebuttal period, we have provided detailed responses to all comments and questions point by point. Specifically, - We further clarified why we chose Mutual Information as the metric to measure evolving patterns (to 8iST & oZmm). - We further clarified how we trade off computational cost with the performance of EvoRate by selecting the dimension of the encoding space and added experiments to address this with different dimensions of the encoding space (to rYZr & oZmm). - We added an NLP dataset to evaluate EvoRate, demonstrating its capability in real-world applications (to rYZr & oZmm). - We added experiments with a fixed pretrained encoder to show that EvoRate can be very efficient by only training the autoregressive model $f$ (to oZmm). In the attached PDF, we provide the pseudocode for our proposed algorithms, a detailed comparison of three traditional time-series statistical indicators with EvoRate, and an introduction to the newly added NLP dataset. Lastly, we would like to thank all the reviewers for their time once again. Could you please check our response and confirm if you have any further questions? **We are looking forward to your post-rebuttal feedback!** Pdf: /pdf/3024d012e53e0f78f49bccbcce869a6cd548110a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
NVRC: Neural Video Representation Compression
Accept (poster)
Summary: This paper proposes an INR-based video compression framework, Neural Video Representation Compression (NVRC), targeting compression of the implicit representation. Based on the proposed novel entropy coding and quantization models, NVRC is able to optimize an INR-based codec in a fully end-to-end manner. The authors also propose a model compression framework for coding all the network, quantization, and entropy model parameters hierarchically. Experiments show that NVRC outperforms many conventional and learning-based codecs, with a 24% overall coding gain over VTM (Random Access) on the UVG dataset, measured in PSNR. Strengths: The manuscript is well written and easy to follow. Though the components exist in previous learned compression works, the proposed overall framework has novelty: it builds an elegant and efficient dependency among grid parameters, neural network parameters, and the hyperprior. The rate-distortion performance is outstanding, making INR-based methods better than VTM. Weaknesses: Only the UVG dataset is evaluated in the experiments. Please consider using more datasets as previous works do. In VAE-based deep video compression, there exists a line of works like ‘Bit allocation using optimization, ICML 2023’, which is based on overfitting the latent representation to individual input videos/GoPs during encoding. This line of research is closely related to INR-based video compression and should be discussed in the related works part. The complexity in Table 2 should be compared with previous methods, as in Figure 3. The components used in this paper exist in previous works and should be discussed accordingly, including group-based quantization (Fully quantized network for object detection, CVPR 2019) and patch- or block-based parallel context models (Riddle: Lidar data compression with range image deep delta encoding, CVPR 2022; Checkerboard context model for efficient learned image compression, CVPR 2021).
It is better to highlight the technical novelty in each component by comparing with those previous methods. Minor: L95, [11] is cited twice. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness part, especially regarding the complexity and technical novelty comparison with previous works. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation is addressed in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q: Evaluation on more datasets*** A: Thank you for this suggestion. We agree that evaluation on more datasets is important, and we have included additional results (MCL-JCV and JVET CTC Class B, RGB setting) in the rebuttal results (see the attached pdf). Here we used the JVET CTC Class B rather than JCT-VC CTC Class B (HEVC-B), as the former is the latest HD test set for VVC. Promising results can be observed on both datasets. The performance of NVRC is significantly better than HiNeRV, the previous best INR-based method, on the MCL-JCV dataset. It is noted that the performance of NVRC on the MCL-JCV dataset is worse compared with the other datasets. We believe that this is mainly due to the shorter length of the sequences, and this observation is consistent with previous work [24]. Full results on these datasets will also be provided in the final version of the paper. ***Q: Discussing related works on VAE-based deep video compression*** A: We appreciate this suggestion. In the paper, we have referred to related work on overfitting-based VAE for image [55] and video [52] compression, but we agree that it would be beneficial to include further contributions in this area, including the paper suggested by the reviewer [Xu2023]. ***Q: Complexity versus rate plots*** A: We agree that complexity versus rate plots will provide a better illustration of performance. We have included these in the rebuttal results (see the attached pdf) and will of course include them in the revised paper. ***Q: Technical novelty in each component over existing works.*** A: Thank you for highlighting this. We agree that group-based quantization and patch/block-based context models have been employed in previously published papers, and we will include a discussion of this in the revised paper.
Regarding the novelty of our approaches over these works: although the aforementioned components do exist in the literature, existing INR-based video codecs have rarely employed these models, especially in combining learned quantization with different entropy models. In terms of the use of quantization and entropy models, we would like to highlight that there is a gap in the design of these components between conventional end-to-end approaches and INR-based compression models; existing INR-based models only opt for simple solutions, e.g., conditional Gaussian or simple auto-regressive models. Our work exploits the use of more complex models, i.e., the axis-based conditional Gaussian and block-based auto-regressive models, which offer advantages over existing INR-based approaches. However, we do understand that these entropy models are not SOTA choices when considering all learning-based compression models, e.g., the checkerboard model [He2021]. While our technical novelties include the use of both better quantization and entropy models for INR-based video compression, more importantly, we have combined different quantization and entropy models into a fully optimized framework and propose encoding the whole model (the INR + quantization/entropy models) in a hierarchical manner. Previous work has usually utilized a single entropy model for encoding different types of parameters. For example, [23, 27] only use a learned context model for encoding latent grids but employ non-learned quantization and entropy models for coding network parameters; [57] only uses a conditional Gaussian model for encoding all types of parameters. In our work, we use group-based quantization for network parameters and block-based context models for feature grid encoding. This offers better coding efficiency, as the parameters are encoded by a class of models that are likely to be more efficient.
Furthermore, in INR-based compression, introducing more complex quantization and compression models can introduce significant overhead, because the parameters of these models also have to be transmitted to the decoder side. Thus, we introduce the hierarchical parameter coding structure into INR-based compression, which enables the use of more complicated quantization and compression models. We will include the above discussion in the paper. ***Q: [11] is cited twice*** A: Thank you for pointing out this issue. We will fix this in the camera-ready version. ***Refs*** [Xu2023] Xu, Tongda, et al. "Bit allocation using optimization." International Conference on Machine Learning. PMLR, 2023. [He2021] He, Dailan, et al. "Checkerboard context model for efficient learned image compression." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I increase my score from ba to wa after considering all the reviewer comments.
Summary: This paper focuses on implicit neural representations for video compression, where several modifications are applied to the coding of feature grids and network layers. An enhanced training pipeline is also applied. Strengths: 1. The claimed performance over traditional codecs and VAE-based codecs is impressive. Weaknesses: 1. This paper claims the first fully end-to-end optimized INR-based framework. However, the problem of joint rate-distortion optimization has been addressed for INR-based compression in several existing works, including [13] and [Ref1] (missing in this paper). As these two papers have already been dedicated to enabling joint rate-distortion optimization in INR-based image compression, it is important to state how this paper differs from these two previous works. I know these two previous works are for images, while this paper focuses on video. If the authors think the difference between image and video in this part is large, they should highlight the challenge and the corresponding solution of joint rate-distortion optimization for video compared with image. [Ref1] Compression with Bayesian Implicit Neural Representations. NeurIPS 2023. 2. In Fig. 3 UVG-YUV 420, the PSNR of the baseline DCVC-DC is about 40 dB at 0.03 bpp. However, in Fig. 7 of the DCVC-FM paper [https://github.com/microsoft/DCVC/tree/main/DCVC-FM], the PSNR of DCVC-DC is clearly larger than 41 dB at 0.03 bpp. So, for the same model, what is the reason for this large RD difference? 3. The results on MCL-JCV and HEVC B are missing. The ablation study is only based on two videos, which is not reliable. Currently I think this paper has very good performance. If the authors can address my concerns and questions, I will consider increasing my rating. Technical Quality: 1 Clarity: 2 Questions for Authors: 1. In Table 1, what is the anchor method? Where is the result of the proposed NVRC in Table 1? 2.
In Table 2, the Frame Enc FPS can be as high as 6.4 FPS, so can the NVRC encode a 600-frame UVG video within 100 seconds? From my understanding, the INR-based solution is quite slow for encoding. The author should give the encoding time from RGB video to binary bit-stream, and the decoding time from binary bit-stream to RGB video, where the actual arithmetic coding should be performed, especially when considering NVRC has auto-regressive model. Confidence: 5 Soundness: 1 Presentation: 2 Contribution: 2 Limitations: no negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q: The difference between NVRC and [13]/[Guo2023]*** A: We thank the reviewer for highlighting this point. We agree that there are multiple INR-based codecs that focus on joint rate-distortion optimization, and will describe these in the revised paper. When we refer to a "fully end-to-end optimized framework" in the paper, we imply that i) the approach should be optimized using a rate-distortion loss during the whole encoding process, and ii) all components with parameters that will be entropy coded, and their related quantization/entropy parameters, should be optimized according to the rate-distortion objective. Based on this definition, we believe that [13] is not fully optimized. Specifically, in [13], the rate term only contributes to the second-stage optimization. In contrast, our approach is fully end-to-end optimized, because: 1) The RD loss is used throughout the encoding process, including both Stage 1 and 2. This is important because i) it provides improved performance (as shown in the additional ablation study in the rebuttal, V6); ii) it could also provide a more streamlined solution; as pointed out in a recent work [23], when incorporating rate-distortion optimization in the first stage of training, the second stage is not mandatory as it only contributes very little gain (we have observed a similar characteristic in our experiments). 2) The total rate is also jointly optimized in our work, which includes all the parameters that are entropy coded and have non-negligible overhead. In contrast, previous works [23, 27, Leguay2023] only include some of them. In [Leguay2023], only the latent grids are optimized with RD optimization, while the network parameters are entropy coded without optimizing the rates. In their limitations section (Sect. V), they pointed out that the network parameters actually cost a high overhead (up to >30\% bit-rate), especially at low rates. In [Guo2023], the focus was solely on image compression.
We also observed that the framework in [Guo2023] differs significantly from those used in INR-based video compression [10, 13, 17, 23, 24, 26, 27]. While the former encodes the parameters without quantization, the latter performs quantization followed by entropy coding. Our approach follows the second method, which is mainly adopted in INR-based video coding tasks. Regarding the difference between INR-based image and video compression: for video compression, INR-based approaches usually utilize a larger network, due to both the larger size of the signal instance and the higher compression ratio required. For example, NeRV-based models [10, 13, 17, 23, 24, 26, 27] typically contain millions of parameters. Due to the larger size of the models, more complicated compression techniques, such as fine-grained quantization and entropy parameters, are beneficial. Therefore, INR-based image and video compression tasks introduce different challenges, and we think that fully end-to-end optimization could play a more important role for the video compression task compared to image coding, which allows us to achieve improved overall rate-distortion performance. ***Q: DCVC-DC results in Fig. 3*** A: We appreciate the reviewer's attention to detail. We have identified that this inconsistency is due to the incorrect use of the model checkpoints. We have corrected the results (as included in the rebuttal results) and will update them in the revised paper. We have also cross-checked the remaining results and confirmed their correctness. ***Q: Evaluation (and ablation study) on more content*** A: This is a valid observation. We have provided preliminary coding results on the MCL-JCV and JVET CTC Class B datasets in the rebuttal (see the attached pdf) and will include full results in the revised paper. Here we used the JVET CTC Class B rather than JCT-VC CTC Class B (HEVC-B), as the former is the latest HD test set for VVC.
We agree that performing the ablation study on more sequences is important. However, due to the limited time available, we have not been able to generate these additional ablation study results during the rebuttal. We commit to providing full results on the whole UVG dataset in the paper. ***Q: Anchor methods in Table 1*** A: In Table 1, we compare NVRC (test) against each of the baseline models (anchor) to calculate the BD-rate results. We will make this clear in the paper. ***Q: Encoding time in Table 2*** A: In Table 2, we provide the encoding (training) speed in terms of frames per second (FPS) per training epoch. Therefore, the total enc time = (#frames x #epochs) / enc FPS; e.g., the total time for encoding a 600-frame sequence with 390 epochs is around 10 hours, when the encoding speed is 6.4 FPS. We reported the enc speed in this manner instead of the total time for two reasons: i) this is commonly used in related works (e.g., [23] reported enc time per 1K steps and [24] reported the time per step); ii) the number of epochs of INR-based approaches differs with varying quality. We have reported both i) the entropy en/decoding times for the whole model ('Model Compression Enc/Dec Time' in Table 2), and ii) the INR decoding time for reconstructing a video frame ('Frame Dec Time' in Table 2). For example, it takes 37 seconds to entropy decode the smallest model; this decoding process only needs to be performed once, and the decoded model can be used for reconstructing up to 600 frames in our experiment. Additional parallelism can be further achieved for faster coding with the autoregressive model, but we have not implemented it in our experiment: i) parallel en/decoding can be enabled because different resolution grids are independently coded; ii) parallel entropy decoding and frame reconstruction is also possible, as decoding a single video frame does not require the full grids to be decoded.
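The encoding-time arithmetic in the rebuttal above can be checked in a few lines (a minimal sketch; the function name is ours, and the figures are the ones quoted in the rebuttal: 600 frames, 390 epochs, 6.4 FPS):

```python
def total_encoding_time_hours(num_frames: int, num_epochs: int, enc_fps: float) -> float:
    """Total enc time = (#frames x #epochs) / enc FPS, converted from seconds to hours."""
    return num_frames * num_epochs / enc_fps / 3600.0

# 600-frame sequence, 390 epochs, 6.4 FPS encoding speed -> roughly 10 hours
print(round(total_encoding_time_hours(600, 390, 6.4), 2))  # 10.16
```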
We will clarify these points in the final paper and report the total en/decoding times as suggested by the reviewer. --- Rebuttal Comment 1.1: Title: Response Comment: I increased my rating.
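As context for the anchor/test BD-rate comparison discussed in this rebuttal thread, the standard Bjøntegaard delta-rate can be sketched as follows (a simplified illustration using the classic cubic fit of log-rate against PSNR; the function and variable names are ours, not the paper's):

```python
import numpy as np

def bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):
    """Average bitrate difference (%) of 'test' vs. 'anchor' over the
    overlapping PSNR range, via cubic fits of log10(rate) against PSNR."""
    p_anchor = np.polyfit(anchor_psnr, np.log10(anchor_rate), 3)
    p_test = np.polyfit(test_psnr, np.log10(test_rate), 3)
    lo = max(min(anchor_psnr), min(test_psnr))   # overlapping PSNR interval
    hi = min(max(anchor_psnr), max(test_psnr))
    int_anchor = np.polyint(p_anchor)            # antiderivatives for averaging
    int_test = np.polyint(p_test)
    avg_log_diff = ((np.polyval(int_test, hi) - np.polyval(int_test, lo))
                    - (np.polyval(int_anchor, hi) - np.polyval(int_anchor, lo))) / (hi - lo)
    return (10.0 ** avg_log_diff - 1.0) * 100.0

# Toy check: halving the bitrate at every quality point gives a BD-rate of -50%.
psnr = [30.0, 34.0, 38.0, 42.0]
print(round(bd_rate([1, 2, 4, 8], psnr, [0.5, 1, 2, 4], psnr), 2))  # -50.0
```

A negative BD-rate means the test codec needs fewer bits than the anchor for the same quality, which is the sense in which NVRC's gains over the baselines are reported.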
Summary: This paper describes the INR-based video codec NVRC. NVRC is optimized E2E and includes a quantized model, which is critical for device reproducibility (though this is not discussed in the paper). The performance on UVG is good and significantly better than VVC VTM 20.0. Benchmarks are given for an NVIDIA 4090 and show near real-time decoding but encoding 100X slower than real-time. Source code will be released when accepted. Strengths: NVRC provides SOTA performance on the UVG dataset, provides a quantized model which is necessary for device reproducibility and efficient inference (both speed and power), and will release source code. Decoding time is near real-time on an NVIDIA 4090. It is a good solution for streaming scenarios. Weaknesses: The encoding time is 100X off from real-time, making it not appropriate for any real-time scenarios (e.g., video conferencing, surveillance, etc). There is no discussion of device reproducibility. Is the entire system quantized, or are there still some floating-point operations that could break device reproducibility? The evaluation is only done on UVG, which is insufficient. Others to include (see the DCVC-DC paper) are: MCL-JCV, HEVC B, HEVC C, HEVC D, HEVC E, HEVC RGB. Technical Quality: 3 Clarity: 3 Questions for Authors: Is NVRC device reproducible, i.e., can the same stream be decoded across any device? This would be a huge advantage. Floating-point neural codecs are generally not device reproducible. In order for NVRC to be practical for video streaming, the decode needs to run on NPUs. What is needed to run on an NPU, and what is the benchmark for that (say an Apple M3)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, though because the evaluation is pretty weak (only UVG) there may be video scenarios that perform much worse than UVG. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q: For real-time scenarios*** A: We agree with the reviewer that the proposed method is not yet appropriate for real-time applications. The relatively long encoding time is one of the main limitations of this AND other INR-based compression methods. We will mention this in the paper as an important topic for future work. ***Q: The reproducibility issue*** A: Regarding reproducibility, we have not verified it for our work. As our model only utilizes a small convolutional network for autoregressive coding, and simple element-wise operations for network parameter coding, we believe that our network is compatible with integer operations and lookup table techniques, which enable reproducibility [Balle2018]. We will include a discussion of reproducibility when describing the limitations in our paper. ***Q: More databases for evaluation*** A: We agree with the reviewer that performing evaluation on multiple datasets is important. Additional (preliminary) experiment results on the MCL-JCV and JVET CTC Class B datasets with the RGB settings are provided in the rebuttal (in the results pdf). Here we used the JVET CTC Class B rather than JCT-VC CTC Class B (HEVC-B), as the former is the latest HD test set for VVC. Promising results can be observed on both datasets. The performance of NVRC is significantly better than HiNeRV, the previous best INR-based method, on the MCL-JCV dataset. It is noted that the performance of NVRC on the MCL-JCV dataset is worse compared with the other datasets. We believe that this is mainly due to the shorter length of the sequences, and this observation is consistent with previous work [24]. We will include full results in the camera-ready version if the paper is accepted. ***Refs*** [Balle2018] Ballé, Johannes, Nick Johnston, and David Minnen. "Integer networks for data compression with latent-variable models." ICLR, 2018.
--- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thanks for the response. I maintain my accept rating.
Summary: - This paper proposes an INR-based video codec, NVRC, which aims to improve the rate efficiency by encoding parameters hierarchically. Experimental results of the proposed method have been shown in RD performance on the UVG dataset. Strengths: - Experiments show good RD performance compared to recent INR-based codecs ([24], [23], [57]) (Figure 3), supporting this as an effective approach. Weaknesses: - I thought that the usefulness of the method could be better demonstrated by showing how much the RD performance was improved for each of the changes in the proposed method (1-4 on page. 2). - The fact that the comparison was made with only one UVG dataset is also a weak point in demonstrating the general effectiveness of the method. Technical Quality: 3 Clarity: 3 Questions for Authors: - Have you compared each of the changes in the proposed method (1-4 on page. 2) with other methods ([24], [23], [57])? For example, V4 in Table 3 is intended to show the usefulness of hierarchical parameter coding, but have you compared it with the parameter coding method used in INR-based codecs ([24], [23], [57])? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Section A.4 describes the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q: Showing coding gain for each contribution on Page 2.*** A: Thank you for your suggestion. In the original paper, we presented these figures in the ablation study, including (contribution 2) the use of different quantization settings and entropy models (V1/V2/V3 for entropy models and V5 for quantization) and (contribution 3) the hierarchical parameter coding (V4). We are conducting additional ablation studies for V6 (pretraining + fine-tuning [13, 37, 57], i.e., without the fully end-to-end optimized settings) and V7 (without alternating optimization) to confirm our contributions 1 and 4, respectively. Preliminary results for V6 have been included in the rebuttal pdf file. ***Q: More databases for evaluation*** A: We agree with the reviewer that performing evaluation on multiple datasets is important. Additional (preliminary) experiment results on the MCL-JCV and JVET CTC Class B datasets with the RGB settings are provided in the rebuttal (in the pdf). Here we used the JVET CTC Class B rather than JCT-VC CTC Class B (HEVC-B), as the former is the latest HD test set for VVC. Promising results can be observed on both datasets. The performance of NVRC is significantly better than HiNeRV, the previous best INR-based method. It is noted that the performance of NVRC on the MCL-JCV dataset is worse compared with the other datasets. We believe that this is mainly due to the shorter length of the sequences, and this observation is consistent with previous work [24]. We will include full results in the camera-ready version when the paper is accepted. ***Q: Compare each change with [24], [23], and [57].*** A: Due to the large amount of experimentation required, we were unable at this stage to directly compare our approach with the parameter coding methods used in other INR-based codecs.
However, the variants in our ablation study (including the additional results provided in the rebuttal) share some common and important features with these methods; the results indicate the effectiveness of our compression model. Specifically: - V6 (in the rebuttal results) employs the pre-training + fine-tuning settings used in [24, 57]. The results of this experiment show that the fully end-to-end optimized pipeline (in NVRC) does offer improved performance. - V5 uses a fixed quantization step size for feature grids, which is the same as that in [23]. The results demonstrate its inferiority compared to the learned step size. In addition, as mentioned in the paper (Line 332), we have applied the fixed quantization step size for neural network parameters, but the networks do not converge well. Furthermore, our proposed NVRC framework employs HiNeRV [24] as the neural representation model with minimal changes; we demonstrate significant performance gain compared to HiNeRV with the original compression pipeline (Table 1 in the paper). We believe that this is mainly due to our improved encoding and compression pipeline. We are happy to provide the results of the NVRC pipeline with the original HiNeRV in the final paper if needed. Compared to C3 [23], our model type is very different: in NVRC, HiNeRV is used as the neural representation model, where NeRV-style networks are much larger than the network used in C3. In C3, the neural network only contains a very small number of parameters, and the rate of the network is excluded in the training loop and the network is simply compressed with searched quantization parameters. This method is therefore inefficient in our setting. We will modify the text and include the justification text/additional results mentioned above in the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful reply. I have increased my rate.
Rebuttal 1: Rebuttal: Thank you the reviewers for thorough feedback to our submission. We will address the concerns individually. Pdf: /pdf/ffe9b2ece59bb0241872a29602883e6f600808fe.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model
Accept (poster)
Summary: This paper introduces a simple yet effective VL tracking framework based on a multimodal large language model, called ChatTracker. The main idea is to utilize the rich world knowledge in multimodal large language models to generate high-quality language descriptions and improve tracking performance. Specifically, a reflection-based prompt optimization module is proposed to iteratively refine ambiguous and inaccurate descriptions of the target with tracking feedback. A large number of experiments demonstrate the effectiveness of the proposed method. Strengths: 1. The paper is well written and can be easily understood by the reader. 2. The proposed tracker can achieve SOTA performance, and adequate ablation experiments are accomplished. Weaknesses: - Line 127 mentions only the initial bounding box and search frame as inputs and does not mention template frames as inputs. However, the equation in line 128 writes the template as input. - In section 4.1, MixFormer-L and ARTrack-256 serve as visual encoders for ChatTracker-L and ChatTracker-B. I have a question: why not uniformly use MixFormer-B/L or ARTrack-B/L as visual encoders? Could the authors please provide more results for MixFormer-B and ARTrack-L for performance comparison? - In section 4.3 and Table 2, when applying the proposed module to other visual trackers, the paper does not describe how the linguistic description is fused with the visual features; could the authors please explain this process specifically? - The LaSOT_ext and OTB_Lang test datasets are also included in the VL tracking task, and the authors are asked to provide performance on these datasets to help enrich the experimental results in this paper. Technical Quality: 3 Clarity: 3 Questions for Authors: - It is an interesting idea to apply large language models to the visual tracking field. The general practice would be to do fine-tuning operations on the downstream data so that the large language model is better adapted to the downstream task.
I have a question, does the proposed approach require fine-tuning in the downstream tracking dataset? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of the method have been accounted for in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1: Line 127 mentions only the initial bounding box and search frame as inputs and does not mention template frames as inputs. However, the equation in line 128 writes template as input.*** Thanks for your careful reading. In the equation $P^{t}_{VT} = \mathcal{F}_{VT}(I^{t}; I^{1}, G)$, $I^t$ represents the image of the t-th frame, $G$ represents the initial bounding box, and $I^1$ is the first frame of the video with $G$ as the initial bounding box. We will clarify this in the revised manuscript. ***Q2: In section 4.1, MixFormer-L and ARTrack-256 as visual encoders for ChatTracker-L and ChatTracker-B? I have a question, why not use MixFormer-B/L or ARTrack-B/L uniformly as visual encoders. Could the authors please provide more results for MixFormer-B and ARTrack-L for performance comparison.*** Thanks for your question. At the time of submission, ARTrack [1] had not released the code and model weights for ARTrack-L384, so we did not use ARTrack-L as the visual encoder for ChatTracker-L. The choice of MixFormer-L [2] for ChatTracker-L leads to high accuracy, and ARTrack-B for ChatTracker-B provides a better trade-off between accuracy and speed. We provide more results for MixFormer-B and ARTrack-L for performance comparison. In the table below, results of visual trackers with the integration of ChatTracker are marked by *. **Table 1: The comparison of results for Vision Language trackers using ChatTracker-generated text (marked by \*).
The speed was tested on the same device.**

| | LaSOT[3] | | | TNL2K[4] | | | Speed (fps) |
| ------------------------- | ----------- | ----- | ----- | ------------ | ----- | ----- | ----------- |
| Choice for visual tracker | $AUC$ | $P$ | $P_{Norm}$ | $AUC$ | $P$ | $P_{Norm}$ | |
| Mixformer-B | 69.15 | 74.70 | 78.70 | 55.12 | 55.17 | 71.21 | 57 |
| Mixformer-B* | 70.43 | 76.27 | 80.28 | 57.38 | 57.93 | 73.94 | 44 |
| Mixformer-L | 70.1 | 76.3 | 79.9 | 60.55 | 63.01 | 76.16 | 27 |
| Mixformer-L* | 74.1 | 83.8 | 81.2 | 65.41 | 70.20 | 83.25 | 20 |
| ARTrack-256 | 70.77 | 76.23 | 79.54 | 58.09 | 59.90 | 74.33 | 51 |
| ARTrack-256* | 71.68 | 77.50 | 80.92 | 59.63 | 62.06 | 76.27 | 48 |
| ARTrack-L384 | 73.49 | 80.56 | 82.34 | 60.58 | 64.36 | 77.25 | 22 |
| ARTrack-L384* | 74.65 | 82.07 | 84.71 | 62.01 | 65.65 | 78.93 | 18 |

***Q3: In section 4.3 and Table 2, when applying the proposed module to other visual trackers, the paper does not describe how the linguistic description is fused with the visual features, could the authors please explain this process specifically.*** In fact, we do not fuse the visual tracker into our ChatTracker at the feature level. When replacing visual trackers, we incorporate the different tracking results of the corresponding visual tracker, $P^t_{VT}$, into $P^t_{fore}$ as supplemental foreground proposals. At the same time, the foreground proposals generated by the GVLM using linguistic descriptions do not change. We will provide a detailed description of how we replace the other visual tracking methods in the revised manuscript and we will release our code to help our readers better understand this process. ***Q4: The LaSOT_ext and OTB_Lang test datasets are also included in the VL tracking task, and the authors are asked to provide performance on these datasets to help enrich the experimental results in this paper.*** Thank you for your suggestion.
Our performance on these two datasets is as follows, demonstrating that our method also achieves SOTA results on these datasets. **Table 2: Performance comparison of ChatTracker, UVLTrack, and ARTrack on the LaSOT_ext and OTB_Lang datasets.**

| | LaSOT_ext | | | OTB_Lang[5] | | |
| ------------- | --------- | ----- | ----- | --------------- | ----- | ----- |
| | $AUC$ | $P$ | $P_{Norm}$ | $AUC$ | $P$ | $P_{Norm}$ |
| ChatTracker-L | 56.06 | 64.84 | 68.18 | 71.78 | 94.25 | 86.82 |
| ChatTracker-B | 53.72 | 62.09 | 65.13 | 70.77 | 92.00 | 85.29 |
| UVLTrack-L | 51.21 | 59.00 | 62.30 | 71.89 | 93.21 | 87.77 |
| ARTrack-L384 | 52.8 | 59.7 | 62.9 | 71.66 | 92.80 | 85.78 |
| ARTrack-256 | 48.36 | 53.73 | 57.69 | 69.90 | 91.15 | 84.10 |

***Q5: Does the proposed approach require fine-tuning in the downstream tracking dataset?*** No, we don't need to fine-tune the MLLM on the downstream tracking dataset. We use MLLM reflection to narrow the knowledge gap between the VL tracker and the MLLM. This allows ChatTracker to achieve favorable performance without fine-tuning. Thanks for your suggestion. We will investigate the fine-tuning version of our proposed framework in future work. [1] Xing Wei, et al. Autoregressive visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. [2] Yutao Cui, et al. Mixformer: End-to-end tracking with iterative mixed attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. [3] Heng Fan, et al. Lasot: A high-quality large-scale single object tracking benchmark. International Journal of Computer Vision, 2021. [4] Xiao Wang, et al. Towards more flexible and accurate object tracking with natural language: Algorithms and benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021. [5] Zhenyang Li, et al. Tracking by natural language specification.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. --- Rebuttal Comment 1.1: Title: Comment Comment: Thanks to the authors' reply, my concerns were largely addressed. I will maintain my original rating score. --- Rebuttal 2: Title: Thanks to Reviewer Bwy6 Comment: Dear Reviewer, Thanks a lot for your recognition of our work. Best wishes, The Authors
Summary: This paper proposes ChatTracker, a novel framework that leverages MLLMs for visual object tracking. The Reflection-based Prompt Optimization (RPO) module can narrow the knowledge gap between the VL tracker and the MLLM. ChatTracker also achieves SoTA performance on several tracking datasets. Strengths: 1. This paper designs a new framework that leverages MLLMs for visual object tracking. The entire framework's process is clear and easy to follow. 2. The approach performs favorably against the state-of-the-art methods on several datasets. Weaknesses: 1. SOTA trackers are missing in Table 1, such as OVLM (TMM23), MMTrack(TCSVT23), etc. There is a discrepancy between the data from UVLTrack in Table 1 and the data provided by the official source. 2. I want to know the model's performance on MGIT (NeurIPS '23) and OTB99_Lang (CVPR '17) to verify the generalizability of the method in different scenarios (short-term tracking and global instance tracking). 3. Why is it necessary to design ChatTracker-L and ChatTracker-B, but use different backbone networks? Also, I believe comparing the L model in Table 1 is unfair, as UVLTrack provides the L model, and annotations are needed for models of different scales. 4. For analysis of language descriptions generated by ChatTracker, if the natural language description includes background information, is it reasonable to only crop out the target for calculating image-text similarity? 5. I believe the phrase "via Chatting with Multimodal Large Language Model" in the title overstates the contribution of the entire framework. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1: SOTA trackers are missing in Table 1, such as OVLM[1], MMTrack[2].***

Thank you for your valuable suggestion. We will include comparisons with OVLM and MMTrack in the revised manuscript. Additionally, we have found that, compared to their highest-performing variants, our ChatTracker demonstrates better tracking performance. Furthermore, our method does not require manually annotated target text for initialization.

**Table.1** Complete comparison of OVLM and MMTrack with ChatTracker's performance on the LaSOT[3] and TNL2K[4] datasets.

| Method | LaSOT | | TNL2K | |
|-|-|-|-|-|
| | $AUC$ | $P$ | $AUC$ | $P$ |
| MMTrack | 70.0 | 75.7 | 58.6 | 59.4 |
| OVLM-256 | 65.6 | 71.1 | 62.5 | 66.5 |
| OVLM-384 | 67.7 | 74.2 | 64.7 | 69.3 |
| ChatTracker-B | 71.7 | 80.9 | 59.6 | 62.1 |
| ChatTracker-L | 74.1 | 81.2 | 65.4 | 70.2 |

***Q2: I want to know the model's performance on MGIT[5] and OTB99_Lang[6].***

Thank you for your valuable suggestion. We evaluated ChatTracker-B on MGIT and OTB99_Lang.

- **For the MGIT dataset**, we used only the first frame's bounding box (BBOX) for initialization and used ChatTracker's self-generated language descriptions for tracking. For the comparison method JointNLT[7], we selected Mechanism A (i.e., the first frame's BBOX and the annotated text in the dataset) for comparison. For ARTrack-256[8], we used only the first frame's BBOX for initialization.

**Table.2** Results on the MGIT dataset

| Tracker | **Normalized Precision (N-PRE)** | **Precision (PRE)** | **Success Rate (SR_IoU)** |
|-|-|-|-|
| ChatTracker-B | 0.833 | 0.691 | 0.801 |
| JointNLT | 0.786 | 0.445 | 0.610 |
| ARTrack-256 | 0.711 | 0.516 | 0.599 |

- **For the OTB99_Lang dataset**, our ChatTracker used only the first frame's bounding box (BBOX) for initialization and utilized ChatTracker's self-generated language descriptions for tracking. The settings for JointNLT and ARTrack-256 are the same as those for MGIT.
**Table.3** Results on the OTB99_Lang dataset

| Tracker | $AUC$ | $P$ | $P_{Norm}$ |
|-|-|-|-|
| ChatTracker-B | 70.77 | 92.00 | 85.29 |
| JointNLT | 65.52 | 86.23 | 80.41 |
| ARTrack-256 | 69.90 | 91.15 | 84.10 |

The results show that, thanks to more accurate text descriptions and the proposed vision-language tracking framework, our ChatTracker outperforms existing methods. We will include these results in the revised manuscript.

***Q3: Why is it necessary to design ChatTracker-L and ChatTracker-B, but use different backbone networks?***

- For the design of ChatTracker-L and ChatTracker-B: ChatTracker-L is designed for better performance, while ChatTracker-B is designed to achieve a better trade-off between accuracy and speed. Here, L stands for Large and B for Base.
- For the use of different backbone networks: at the time of submission, ARTrack[8] had not released the training code and model weights for ARTrack-L384, so we used Mixformer-L for ChatTracker-L.

***Q4: There is a discrepancy between the data from UVLTrack[9] in Table 1 and the data provided by the official source. Also, I believe comparing the L model in Table 1 is unfair, as UVLTrack provides the L model, and annotations are needed for models of different scales.***

Thank you for pointing this out. In Table 1, we reported the results for UVLTrack-B. As shown in the table below, our AUC scores on LaSOT, TrackingNet, and TNL2K are higher than those of UVLTrack-L. For the other trackers in Table 1, we selected the variant with the highest performance for a fair comparison. We will replace the UVLTrack-B results in Table 1 with the UVLTrack-L version and annotate the scales of the compared methods in the revised manuscript. We have included the revised Table 1 in the uploaded PDF file.
**Table.4 Comparison results of UVLTrack-L and ChatTracker-L on the LaSOT, TrackingNet, and TNL2K datasets**

| | LaSOT | | TrackingNet | | TNL2K | |
|-|-|-|-|-|-|-|
| | $AUC$ | $P$ | $AUC$ | $P$ | $AUC$ | $P$ |
| UVLTrack-L | 71.3 | 78.3 | 84.1 | 82.9 | 64.8 | 68.8 |
| ChatTracker-L | 74.1 | 81.2 | 86.1 | 86.0 | 65.4 | 70.2 |

***Q5: If the natural language description includes background information, is it reasonable to only crop out the target for calculating image-text similarity?***

It is reasonable to crop out only the target for calculating image-text similarity. When background information is present in the language description, the similarity with the cropped target is lower than for descriptions without background information. Therefore, a higher image-text similarity between a description and the cropped target indicates that less background information is included, which in turn indicates higher text quality. For this reason, cropping the target from each frame and calculating its text-to-image similarity is reasonable in our experiments. We will clarify this in the revised manuscript.

***Q6: I believe the phrase "via Chatting with Multimodal Large Language Model" in the title overstates the contribution of the entire framework.***

"Via Chatting with Multimodal Large Language Model" refers to the iterative process in which the MLLM generates language descriptions and the GVLM provides feedback on the quality of these descriptions to the MLLM, helping it generate better language descriptions. This process is akin to chatting with the MLLM. We think this phrasing helps our readers understand our method intuitively. We are open to changing the title if there is a more suitable one, and we would appreciate a more detailed suggestion or further explanation of how the current title overstates the contribution.
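As an aside for readers, the cropping-based check described in Q5 can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: `crop_target` is a hypothetical helper, and the placeholder embedding vectors stand in for a real image/text encoder such as CLIP.

```python
import numpy as np

def crop_target(frame, bbox):
    """Crop the target patch from a frame; bbox = (x, y, w, h)."""
    x, y, w, h = bbox
    return frame[y:y + h, x:x + w]

def cosine_similarity(a, b):
    """Image-text similarity; with a real encoder, a higher score between the
    cropped target and a description suggests less background information."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy demo on a synthetic frame with placeholder embedding vectors.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
patch = crop_target(frame, (10, 20, 30, 40))
print(patch.shape)  # (40, 30, 3)

img_emb, txt_emb = np.ones(8), np.ones(8)
print(round(cosine_similarity(img_emb, txt_emb), 3))  # 1.0
```

In a real setup, one would compare the score of the description against the cropped target versus the full frame: a drop when cropping suggests the description leans on background content.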
Due to the word limit in the rebuttal, we provide the remaining citations in the following comment. --- Rebuttal 2: Title: Response to Reviewer z5a5 (part 2) Comment: Due to the word limit in the rebuttal, we provide the remaining citations here.

[1] OVLM: Huanlong Zhang, Jingchao Wang, Jianwei Zhang, Tianzhu Zhang, Bineng Zhong. One-stream Vision-Language Memory Network for Object Tracking. TMM, 2023.
[2] MMTrack: Yaozong Zheng, Bineng Zhong, Qihua Liang, Guorong Li, Rongrong Ji, Xianxian Li. Towards Unified Token Learning for Vision-Language Tracking. TCSVT, 2023.
[3] Heng Fan, Hexin Bai, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Harshit, Mingzhen Huang, Juehuan Liu, et al. LaSOT: A high-quality large-scale single object tracking benchmark. International Journal of Computer Vision, 129:439–461, 2021.
[4] Xiao Wang, Xiujun Shu, Zhipeng Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, and Feng Wu. Towards more flexible and accurate object tracking with natural language: Algorithms and benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13763–13773, 2021.
[5] Shiyu Hu, Dailing Zhang, Meiqi Wu, Xiaokun Feng, Xuchen Li, Xin Zhao, and Kaiqi Huang. A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship. In the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), Track on Datasets and Benchmarks.
[6] Y. Wu, J. Lim, and M.-H. Yang. Object Tracking Benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1834–1848, Sept. 2015.
[7] Li Zhou, Zikun Zhou, Kaige Mao, and Zhenyu He. Joint visual grounding and tracking with natural language specification. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 23151–23160, 2023.
[8] Xing Wei, Yifan Bai, Yongchao Zheng, Dahu Shi, and Yihong Gong.
Autoregressive visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9697–9706, 2023. [9]Yinchao Ma, Yuyang Tang, Wenfei Yang, Tianzhu Zhang, Jinpeng Zhang, and Mengxue Kang. Unifying visual and vision-language tracking via contrastive learning, 2024 --- Rebuttal Comment 2.1: Title: Official Comment by Reviewer z5a5 Comment: I read the author's experiments and replies to all reviewers. I am grateful for the author's efforts during the rebuttal, which largely answered my doubts. There are still some small details that need to be paid attention to. On the one hand, MGIT provides flexible and diverse semantic labels. This benchmark is somewhat closer to the "chat" feature than current VLT benchmarks. Therefore, I suggest that the author conduct further experiments on this benchmark to analyze the effects and limitations of the method under different granularities and evaluation mechanisms, and consider adding this part of the analysis to the article to highlight the motivation and theme of the article. In addition, please note that the author cited the wrong MGIT literature (Reference 5) in the second reply window for me. I am unsure what the relationship is between the work cited by the author and the rebuttal itself. Finally, NeurIPS officially recommends that the reply to each reviewer during the rebuttal should not exceed 6,000 characters (only one window). I noticed that the author's reply to 2 reviewers had exceeded the limit of one window. In a sense, this is unfair (for other authors of the same period who only use one window for rebuttal). I hope the authors can pay attention to these details. In summary, I would like to temporarily maintain my original score and listen to the opinions of other reviewers on the authors' rebuttal. 
--- Rebuttal 3: Title: Thanks and Response to Review z5a5 Comment: Dear Reviewer: Thank you again for taking the time to review our work and for providing us with valuable feedback. We now address your additional comments. ***Q1. MGIT provides flexible and diverse semantic labels. This benchmark is somewhat closer to the "chat" feature than current VLT benchmarks. Therefore, I suggest that the author conduct further experiments on this benchmark to analyze the effects and limitations of the method under different granularities and evaluation mechanisms, and consider adding this part of the analysis to the article to highlight the motivation and theme of the article.*** We appreciate your valuable suggestion. MGIT is a high-quality vision-language benchmark. The MGIT dataset focuses on the impact of text annotations at different granularities on tracking performance. However, our ChatTracker only uses the initial bounding box as input and does not utilize additional text descriptions as input. Although we agree that further applying our proposed method to MGIT is very interesting, employing different text annotation granularities is beyond the scope of this paper. In the rebuttal, we have provided ChatTracker's results on the MGIT dataset with the initial bounding box and we will include these results in the revised manuscript. Thank you again for your valuable suggestion. We will further investigate these text inputs into our framework in future work. ***Q.2 In addition, please note that the author cited the wrong MGIT literature (Reference 5) in the second reply window for me. I am unsure what the relationship is between the work cited by the author and the rebuttal itself.*** Thank you for your careful reading. We have corrected Reference [5] in the comment. ***Q.3 Finally, NeurIPS officially recommends that the reply to each reviewer during the rebuttal should not exceed 6,000 characters (only one window). 
I noticed that the author's reply to 2 reviewers had exceeded the limit of one window. In a sense, this is unfair (for other authors of the same period who only use one window for rebuttal). I hope the authors can pay attention to these details.*** Thanks again for your advice. For additional comments, we follow the [NeurIPS 2024] Clarification on author rebuttal email: > - Comments to paper and reviews will be fine. Comments can be seen in time. Please set the readers correctly when you post them. Reviewers are not required to take comments into consideration. To better discuss the issues, we responded to the longest review (11 questions) in one additional Comment and provided the relevant references in another Comment. We strictly followed the policy and instructions given in the NeurIPS 2024 PCs' email. This email was sent to all authors, so we do not think there is any fairness issue. We hope our responses have fully addressed your concerns. If you have any other questions or need further clarification, please feel free to let us know. Thank you very much! Best wishes, Authors
Summary: The paper proposes a novel Multimodal Large Language Model framework to improve the vision-language visual tracking performance. By introducing the reflection-based prompt optimization module, the tracking prompt can be iteratively refined via tracking feedback. The proposed method shows state-of-the-art results compared to prior vision-language trackers and visual trackers. Strengths: - The paper investigates the multimodal large language models for the visual tracking problem. - By showing the limitations of prompts created by MLLM and manual annotation for visual tracking, the paper introduces a novel iterative prompt generation via chatting and semantic prompt verification. - The paper shows adequate evaluation comparing the proposed method with both vision-language trackers and visual trackers. Weaknesses: The paper should illustrate the number of chat iterations in the reflection-based prompt optimization module and their performance effect. The example in Figure 5 shows that the module needs 2 iterations to get an accepted prompt but the overall analysis should be considered, especially in the images with complex scenes and objects. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have mentioned the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1: The paper should illustrate the number of chat iterations in the reflection-based prompt optimization module and their performance effect. The example in Figure 5 shows that the module needs 2 iterations to get an accepted prompt but the overall analysis should be considered, especially in the images with complex scenes and objects.***

- The paper should illustrate the number of chat iterations in the reflection-based prompt optimization module.

Thanks for your advice. The iterative refinement process averages **2.3** rounds. We illustrated the number of chat iterations in Section 4.1 (Lines 229-230).

- The example in Figure 5 shows that the module needs 2 iterations to get an accepted prompt but the overall analysis should be considered, especially in the images with complex scenes and objects.

We conducted experiments using ChatTracker-B to study the performance effect of the maximum number of iterations. The results show that when the maximum number of iterations is less than 10, tracker performance improves as the number of iterations increases. When the number of iterations exceeds 10, the improvement becomes less significant. In our experiments, the default maximum number of iterations is set to 20. For complex scenes and objects, we will add more visual examples in the revised manuscript to help our readers understand the impact of the number of iterations on tracker performance.

**Table.1 Impact of the number of iteration rounds on tracker performance on the LaSOT[1] and TNL2K[2] datasets**

| Number of Maximum Iterations | 1 | 10 | 20 | 30 |
| ---------------------------- | ----- | ----- | ----- | ----- |
| LaSOT AUC | 64.87 | 67.24 | 67.89 | 67.91 |
| TNL2K AUC | 53.66 | 55.63 | 56.39 | 56.42 |

[1] Heng Fan, Hexin Bai, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Harshit, Mingzhen Huang, Juehuan Liu, et al. LaSOT: A high-quality large-scale single object tracking benchmark.
International Journal of Computer Vision, 129:439–461, 2021. [2]Xiao Wang, Xiujun Shu, Zhipeng Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, and Feng Wu. Towards more flexible and accurate object tracking with natural language: Algorithms and benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13763–13773, 2021. --- Rebuttal Comment 1.1: Comment: Thank the authors for their response. I am satisfied with the answer. After carefully reading other reviews, I will maintain the score. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer aCWh Comment: Dear Reviewer, Thank you very much for recognizing our work. Best wishes, The Authors
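As a side note for readers, the accept/retry loop behind the iteration counts discussed in the rebuttal above can be sketched as follows. This is our reading of the procedure, not the authors' code: `generate` and `verify` are hypothetical stand-ins for the MLLM and the GVLM, the cap of 20 rounds follows the default stated above, and the acceptance threshold of 0.4 follows the IoU criterion $\epsilon$ reported elsewhere in these rebuttals.

```python
def refine_prompt(generate, verify, max_iters=20, eps=0.4):
    """Iteratively request a new description until the grounding model's
    prediction overlaps the reference box well enough, or the cap is hit."""
    feedback = None
    prompt = None
    for rounds in range(1, max_iters + 1):
        prompt = generate(feedback)     # MLLM proposes a description
        iou, feedback = verify(prompt)  # GVLM grounds it and scores IoU
        if iou > eps:                   # accepted prompt
            return prompt, rounds
    return prompt, max_iters            # fall back to the last attempt

# Toy demo: grounding quality improves over three mock rounds.
scores = iter([0.10, 0.30, 0.55])

def generate(feedback):
    return f"description (feedback: {feedback})"

def verify(prompt):
    iou = next(scores)
    return iou, f"iou={iou:.2f}"

prompt, rounds = refine_prompt(generate, verify)
print(rounds)  # 3
```

The cap explains the diminishing returns in Table.1 above: once most sequences are accepted within a few rounds, raising the maximum has little effect.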
Summary: The paper proposes a new Visual-Language (VL) tracking framework called ChatTracker, which integrates MLLMs into VL tracking through iterative refinement of text prompts for VL trackers. The text prompts optimized by the proposed Reflection-based Prompt Optimization (RPO) module are more accurate than the manual text annotations in the datasets and improve tracking performance by taking into account both foreground and background objects. Strengths: - The paper identifies a clear gap for Visual-Language trackers and shows how manually annotated or naively generated text prompts can provide sub-optimal results for language-assisted visual tracking. - The proposed framework combines the power of a strong MLLM like GPT-4V and a GVLM like Grounding DINO-T to formulate the iterative refinement procedure for generating better text prompts for tracking tasks. It uses a GVLM powered by the optimized prompts in conjunction with a visual tracker to get a set of proposals. It also incorporates GVLM-generated region proposals to train its Foreground Verification module, which not only does foreground classification but scores candidate regions by how they overlap with background proposals. While different parts of the framework are pre-existing and straightforward, the paper uses them in a new way to improve visual tracking performance on relevant benchmarks. - The method is plug-and-play, and any visual tracker (regardless of whether it is a VL tracker or not) can be used as the baseline tracker in the Semantic Tracking Module of the framework. - The experimental results and ablations support the claims in the paper. - The paper is well-written, well-organized and clear. Weaknesses: - The proposed framework does not take into account the potential temporal changes in the video that might require a more optimal foreground/background text prompt to proceed. For example, in Fig.
3 we see the framework landing on ‘hands, bar’ as the positive prompt, which focuses on hands being attached to the bar, which may not be true for the rest of the video. The background text of the video is also never updated. In fact, neither the GVLM that the framework heavily depends on nor the Foreground Verification module has any temporal components. Therefore, the framework would have to rely heavily on the baseline tracker in this aspect. - The quality of the region proposals coming out of the Semantic Tracking Module is unaddressed. How often are GVLM proposals preferred to the visual tracker results (unless the prompt optimization fails, as mentioned in Appendix A)? How dependent is the final performance on how well the Foreground Verification module performs (since it has only been trained on the LaSOT training set)? Technical Quality: 3 Clarity: 3 Questions for Authors: - The potential limitations mentioned in the weaknesses should be addressed or discussed in the paper. Temporal changes to the target and background are the main challenges in visual tracking, and the proposed framework does not add any mechanisms to address these even though it might heavily depend on GVLM results for its answers. - Related to the above point: Have the authors done any experiments where foreground/background prompts were updated while the video is being processed? What were the insights? - As mentioned above, how often does the visual tracker proposal get picked over a GVLM proposal and vice versa when both exist in the pool going into the Foreground module? Or does the visual tracker proposal mostly get used when there are no GVLM proposals? - Related works section can be expanded a bit. An important motivation for this paper is the disadvantages brought by the manual textual annotations in the VOT benchmarks (that have them).
For the broader audience, the authors could add a short section on related benchmarks and if they have manual text annotations, explain how those came about (why are they low quality?). Line [300-301] mentions the prompt “swing swinging above a man in black pants” - which almost sounds like a generated text prompt, having this context in the paper itself would be helpful. - Equation (6) - What’s the purpose of $T^i_{pos}$ in this equation? - Lines [152-153] - What happens if a word matches to no proposals? Can we add that to negative words? Why not? - Lines [204-205] - To confirm, the anchor sample is always the target template, but do you sample positive samples from tracker result or foreground proposal? Visual tracking is mentioned in Line 202, but not in the Appendix. - Line [221] - The IoU threshold for especially foreground seems a little low, what’s the reason for this? Do you have any statistics for what the accepted IoUs are from the foreground verification? Is it generally lower with the proposals? - Please use $P_{norm}$ or NP consistently across tables. Also, for a broader audience you should mention what the metrics are even if it's in a couple sentences. - Some tables report NP and P, others one or the other. It would be better to report consistently even if it’s just across smaller tables. Moreover, baseline tracker results in Table 3 are not matching the same tracker’s results in Table 1. If I’m missing some details, please let me know. - Fig. 3 - not seeing any blue boxes in the image, even though CiteTracker is in the legend. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors did address some limitations of their method in the Appendix. Please see my discussions about other potential limitations in the weaknesses and questions sections above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1. The potential limitations mentioned in the weaknesses should be addressed or discussed in the paper. Have the authors done any experiments where foreground/background prompts were updated while the video is being processed? What were the insights?*** Thanks for pointing this out. Temporal changes to target and background are indeed one of the main challenges in the visual tracking domain. Our ChatTracker relies on a baseline tracker to consider potential temporal changes in the video. However, compared to traditional visual trackers, which rely only on a single image template to locate the target, **natural language provides more temporal-invariant features**, such as object categories and textures. Our ChatTracker incorporates MLLMs and designs a novel Reflection-based Prompt Optimization (RPO) module to obtain accurate target descriptions, providing better robustness when facing temporal changes. Updating foreground and background prompts is not straightforward for two main reasons: 1. **Accuracy of Predictions**: During tracking, the tracker's predictions of the target are not always accurate, and there are no annotations of background objects. This makes it difficult to dynamically generate foreground and background prompts. 2. **Trade-off Between Performance and Efficiency**: If MLLMs are called multiple times during tracking to update foreground and background prompts, it incurs a certain computational overhead (although this overhead is decreasing as technology advances). Achieving a good balance between performance and efficiency requires extensive research. We intend to delve into this challenge as part of our future work, and to address this, we have incorporated a discussion in the limitations section. ***Q2. How often does the visual tracker proposal get picked over a GVLM proposal? 
Or does the visual tracker proposal mostly get used when there are no GVLM proposals?*** A GVLM proposal gets picked 33.27% of the time in our experiment. We analyzed 685,360 frames in the LaSOT dataset **[1]**, of which 228,042 used the GVLM proposal as the result. ***Q3. How dependent is the final performance on how well the Foreground Verification module is performing (since it’s only been trained on the LaSOT training set)?*** Thanks for pointing this out. The Foreground Verification module is crucial to the whole framework and its final performance. To better answer your question, we designed an additional ablation experiment using ChatTracker-L on the OTB-Lang dataset[2]. The experiment involves three versions of our tracker: 1. **Complete ChatTracker-L**: Uses the Foreground Verification module. 2. **ChatTracker-L with Ground Truth**: Does not use the Foreground Verification module but selects the proposal with the highest IoU with the Ground Truth as the tracking result. This entry is used to establish the theoretical upper bound. 3. **ChatTracker-L with Random Selection**: Does not use the Foreground Verification module and randomly selects a proposal as the tracking result. The results are shown in the table below. They show that using the Foreground Verification module significantly outperforms random selection, demonstrating that our Foreground Verification module is effective. Although there may still be room for improvement, our ChatTracker-L result is close to the theoretical upper bound. Thank you for your suggestion. We will add this discussion to the supplementary material.
**Table.1 Performance comparison of the Foreground Verification module and random selection in our framework on the OTB-Lang dataset**

| Method | $AUC$ | $P$ | $P_{Norm}$ |
|-|-|-|-|
| ChatTracker-L | 71.78 | 94.25 | 86.82 |
| ChatTracker-L_random_sample | 42.98 | 58.74 | 52.11 |
| ChatTracker-L_upperbound | 73.91 | 96.17 | 88.61 |

***Q4: Related works section can be expanded a bit.***

Thanks for your suggestions. We will add a section in the revised manuscript introducing related benchmarks so that a broader audience can understand the insights. This section will cover whether LaSOT, TrackingNet, TNL2K, and other benchmarks contain text annotations, the methods used for text annotation, and potential reasons for low quality.

***Q5: What is the purpose of $T_{pos}^i$ in equation (6)?***

In equation (6):

$$T_{neg}^i = \\{\, w_m^i \mid \forall n:\ S_z^{nm} > \theta_2 \wedge \mathrm{IoU}(P_z^n, G) < \theta_3 \,\\} \setminus T_{pos}^i$$

The purpose of $T_{pos}^i$ is to ensure that a word does not appear in both $T_{pos}^i$ and $T_{neg}^i$. $T_{pos}^i$ and $T_{neg}^i$ are used to construct a reflection prompt during the iterative optimization process to help the MLLM generate better language descriptions. If a word were present in both $T_{pos}^i$ and $T_{neg}^i$, it could confuse the MLLM and prevent it from generating an effective response.

***Q6: Lines [152-153] - What happens if a word matches no proposals? Can we add that to negative words? Why not?***

If a word matches no proposals, it is ignored and not added to the negative words. $T_{neg}^i$ is used to inform the LLM that the GVLM has associated a word with objects in the background; the LLM then helps the GVLM identify the correct target based on this information. When a word matches no proposals, it is considered uninformative (i.e., it contains no information that helps the LLM distinguish the target), so we do not add it to $T_{neg}^i$. For example, in the swing-17 example in Fig.
2, when the text description is “Human on a swing seat,” "Human" and "Swing" are categorized as negative words, and "seat" is categorized as positive. They all correspond to image regions in the figure. However, "on" and "a," which match no proposals, do not provide effective feedback to the MLLM. Therefore, we do not add them to negative words. Due to the word limit in the rebuttal, we provide the remaining responses in the following comment. --- Rebuttal 2: Title: Response to Reviewer gPPX (part 2) Comment: Due to the word limit in the rebuttal, we provide the remaining responses here. ***Q7: Lines [204-205] - To confirm, the anchor sample is always the target template, but do you sample positive samples from tracker result or foreground proposal? Visual tracking is mentioned in Line 202, but not in the Appendix.*** No, we do not sample positive samples from the visual tracker's results during training. We only randomly choose a target patch from another frame of the same video as the positive sample. In Line 202, we use the trained $f(\cdot)$ (which is a neural network) to calculate the foreground score of the visual tracker's result. We will clarify this in the revised manuscript. ***Q8: Line [221] - The IoU threshold for especially foreground seems a little low, what's the reason for this? Do you have any statistics for what the accepted IoUs are from the foreground verification? Is it generally lower with the proposals?*** - The IoU threshold especially for foreground seems a little low; what is the reason for this? Is it generally lower with the proposals? For the foreground IoU threshold (i.e., $\theta_1$ in equation (5)), we set $\theta_1$ to 0.3. This ensures there are enough positive words $T_{pos}^i$ as feedback during the early iterations, helping the iterative optimization process converge quickly. For accepted IoUs (i.e., $\epsilon$ in line 157), we chose the threshold based on experiments.
We tested on the LaSOT dataset, and the results are shown in the table below. The results indicate that an IoU threshold (i.e., $\epsilon$ in line 157) of 0.4 leads to the best results.

**Table.2 The performance of ChatTracker on the LaSOT dataset with different IoU thresholds**

| IoU threshold | 0.3 | 0.4 | 0.5 | 0.6 |
| ------------- | ----- | ----- | ----- | ----- |
| AUC | 69.87 | 71.68 | 71.54 | 70.92 |

- Do you have any statistics for what the accepted IoUs are from the foreground verification?

Yes, the average accepted IoU is 0.79. We calculated the distribution of accepted IoUs on the LaSOT dataset, and the results are shown in the table below.

**Table.3 Distribution of IoU values on the LaSOT dataset**

| IoU | [0, 0.4) | [0.4, 0.6) | [0.6, 0.8) | [0.8, 1.0] |
| ------------------- | -------- | ---------- | ---------- | ---------- |
| Number of sequences | 20 | 14 | 52 | 194 |

Thank you for your question. We will add the relevant results and analysis in the revised manuscript.

***Q9: Please use $P_{norm}$ or NP consistently across tables. Some tables report NP and P, others one or the other. It would be better to report consistently even if it's just across smaller tables.***

Thanks for pointing this out. In the revised manuscript, we will use $P_{norm}$ consistently and report the same metrics across tables.

***Q10: Baseline tracker results in Table 3 are not matching the same tracker's results in Table 1.***

Thanks for pointing this out. In Table 1, the baseline trackers (i.e., JointNLT[3] and UVLTrack[4]) are initialized using both natural language and the initial bounding box, to compare with the best-performing models. In Table 3, the baseline trackers are initialized using only natural language. The purpose is to compare the effectiveness of ChatTracker-generated text versus manually annotated text for vision-language trackers without the influence of the initial bounding box. We will clarify this in the revised manuscript.

***Q11: Fig.
3 - not seeing any blue boxes in the image, even though CiteTracker is in the legend.*** Thanks for pointing out this typo. CiteTracker [5] is not included in this comparison, and we mistakenly reused the legend from another figure. We have revised the manuscript accordingly. For clarification, we have placed the revised figure in the .pdf file. --- Rebuttal 3: Title: Response to Reviewer gPPX (part 3) Comment: Due to the word limit, we provide the remaining citations here.

[1] Heng Fan, Hexin Bai, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Harshit, Mingzhen Huang, Juehuan Liu, et al. LaSOT: A high-quality large-scale single object tracking benchmark. International Journal of Computer Vision, 129:439–461, 2021.
[2] Y. Wu, J. Lim, and M.-H. Yang. Object Tracking Benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1834–1848, Sept. 2015, doi: 10.1109/TPAMI.2014.2388226.
[3] Li Zhou, Zikun Zhou, Kaige Mao, and Zhenyu He. Joint visual grounding and tracking with natural language specification. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 23151–23160, 2023.
[4] Yinchao Ma, Yuyang Tang, Wenfei Yang, Tianzhu Zhang, Jinpeng Zhang, and Mengxue Kang. Unifying visual and vision-language tracking via contrastive learning, 2024.
[5] Xin Li, Yuqing Huang, Zhenyu He, Yaowei Wang, Huchuan Lu, and Ming-Hsuan Yang. CiteTracker: Correlating image and text for visual tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9974–9983, 2023.

--- Rebuttal Comment 3.1: Comment: I want to thank the authors for all their efforts in responding to my and other reviewers' questions and comments. The analyses the authors provided were really helpful for clarifying the mechanics of their method for me. After reading the other reviews and the authors' responses to them, I'm more comfortable with my assessment of the paper.
Thanks to the authors for including all this new information in their revised manuscript. I do agree with reviewer z5a5's comment that it would be very valuable to analyze the proposed method on MGIT's multi-granular annotations. --- Rebuttal 4: Title: Thanks to Reviewer gPPX Comment: Dear Reviewer, Thank you once again for your valuable comments. We also agree with reviewer z5a5's advice on this further research direction. It would be very interesting to extend our framework to MGIT's multi-granular annotations and dataset. Since analyzing the effect of multi-granular annotations is non-trivial and beyond the scope of this paper, we will investigate this direction in our future work. Thank you very much! Best wishes, The Authors
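For readers following the IoU-threshold discussion in this thread, here is a generic sketch of the metric for axis-aligned boxes in (x1, y1, x2, y2) form (a standard illustration, not the ChatTracker implementation):

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping on half their width share 50 of 150 units:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333
```

A candidate box passing foreground verification would then be one whose IoU with the reference exceeds the threshold $\epsilon$ discussed above.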
Rebuttal 1: Rebuttal: Dear All Reviewers, Thank you again for taking the time to review our work and for providing us with valuable feedback. We are excited that you found our results impressive (gPPX, z5a5, Bwy6) and our experiments well-designed (gPPX, aCWh, Bwy6), appreciate the innovative uses of multimodal large language models for visual tracking (gPPX, aCWh, z5a5), and found our paper well-written (gPPX, Bwy6). We will open-source the code to facilitate a better understanding of our method. The revised Table 1 and Figure 3 are included in the PDF submitted with this response. We have carefully considered your comments and have provided our responses. If you have any further questions or require additional clarification, please kindly let us know. Thank you again for your valuable input. Best wishes, Authors Pdf: /pdf/738d125b41e988a14cc816d48bf5f06e112470ec.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Reinforcing LLM Agents via Policy Optimization with Action Decomposition
Accept (poster)
Summary: The paper proposes Bellman backup with Action Decomposition (BAD) and its realization on PPO (POAD), which aims to train with a token-level policy for more fine-grained credit assignment on the crucial parts of the language response made by the language agent. To address the issue of Q-values distorted by the intra-sentence discount factor, the authors propose a novel Bellman backup rule that applies the discount factor and reward only at the end of each sentence (high-level action), and provide its realization, POAD, on PPO. Experiments have shown that the proposed method works better than several baselines on decision-making and coding tasks. Strengths: 1. The paper is well-written and easy to follow. The paper does a good job of stating its motivation; Figure 1 clearly shows why token-level training is necessary, while Eqs. 11 and 12 in Sec. 4.2 help much in understanding the inherent problem of the token-level agent to be addressed in the rest of the paper. The appendix provides a good amount of detail that helps the understanding of the paper, including a step-by-step breakdown, hyperparameter tables, wall-clock training time, and pseudocode. 2. The proposed method is tested on a variety of test cases, including decision-making and coding, which shows the wide range of potential applications for the proposed method. There is also an ablation showing the generalizability (i.e., not overfitting) of the trained model, which is a desired property in the LLM community. Weaknesses: While working better than the sentence-level method TWOSOME, the paper is still not convincing enough in showing that a token-level critic is indeed better than a sentence-level critic, for three reasons: 1) there are potential disadvantages such as increased training complexity; 2) the paper did not show results that the "key tokens" identified by the token-level critic actually matter; 3) there are other sentence-level-critic RL methods, and TWOSOME might not be the best sentence-level-critic RL method.
See the questions section for details. **Minor issues:** The colors of the lines are inconsistent throughout the paper. For example, in Fig. 5, POAD is orange in the left subfigure, but purple and blue in the right subfigure. Technical Quality: 3 Clarity: 3 Questions for Authors: I have a question for the authors: in lines 200-202, the authors claim that "another advantage of BAD is ... reducing the complexity of RL problem with language agent to $O(|a|\times|V|)$". While it is true that the action space at each step is reduced, the episode length is greatly increased from a few steps to potentially hundreds or thousands of steps with much sparser reward, which could be potentially harder. Could the authors explain intuitively why the benefit of the reduced action space outweighs the problem of extended episode length with sparse reward? Also, it would be great if the authors could do the following two things (which correspond to points 2) and 3) in the weaknesses): 1. conduct an ablation showing that the intuition in Fig. 2 is actually true, i.e., for the sentence "turn on / off TV", the Q-value actually changes more on the word "on / off"; 2. compare with ArCHer [1], which is a sentence-level RL method, to further show that the token-level Bellman update is better. **Reference** [1] Y. Zhou et al. ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL. arXiv, 2024. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mentioned the problem of acquiring a quantitative reward function as the limitation, which is a valid concern in some LLM tasks such as reasoning with chain-of-thought, and gave reasonable directions for a solution. The paper did not mention any societal impact, for which I would encourage the authors to include a paragraph (the paper is not "theoretical"; it proposes empirical solutions with experiments). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We thank Reviewer 9Wzo for his/her constructive comments that will surely turn our paper into a better shape. > **Q1** Could the authors explain intuitively why the benefit of reduced action space outweighs the problem of extended episode length with sparse reward? **A1** Thank you for your thoughtful question. We believe this discussion is valuable and will significantly help readers better understand our work. **We plan to add a dedicated discussion section on this topic in the revised version.** At first glance, action decomposition indeed extends the episode length, seemingly exacerbating the sparse reward problem. However, we can analyze this issue more deeply from different perspectives: 1. Efficiency of credit backpropagation. The increased episode length due to action decomposition could lead to inefficient training if using 1-step TD, as credit would only backpropagate one step per training epoch. Fortunately, BAD can be combined with n-step TD or GAE to achieve efficient propagation comparable to action-level Bellman backup within a single training epoch. For this reason, our POAD implementation is based on GAE. We appreciate the reviewer's reminder and will explicitly state in future versions that BAD is best used in conjunction with GAE or n-step TD methods to avoid confusion and misuse. 2. Credit decay. While the extended episode length could cause exponential decay of credit strength based on the discount factor, which makes it difficult for sparse reward signals to act on actions/tokens in the early stages of an episode, BAD removes the intra-action discount factor. This theoretically prevents the exponential decay of signal strength caused by action decomposition. Our experimental results, where POAD significantly outperforms NTPO, confirm this point. 3. Credit estimation variance. Even after addressing the efficiency and decay issues, increased horizons often lead to higher variance in credit estimation.
However, in linguistic action environments, the dynamics between intra-action tokens are strictly deterministic. In this case, it is reasonable to expect that the additional variance introduced by action decomposition would be much less severe than that in traditional RL problems with increased episode length. We hope this discussion intuitively explains why the benefit of reduced action space outweighs the problem of extended episode length with sparse reward when using BAD. Besides, our experiments on VirtualHome also empirically confirm this. When the agent only receives a binary reward signal upon task completion or reaching the maximum step limit (50 steps), our POAD still outperforms the action-level method, TWOSOME, despite such a sparse reward setting. > **Q2** Conduct an ablation showing the intuition in Fig. 2 is actually true. **A2** We are very grateful for this insightful suggestion and **we have conducted a case study/ablation for in-depth analysis to validate the effectiveness of BAD**. Please refer to the **Figure 3: Case Study** section in meta-responses for detailed settings and experimental results. By comparing the credit assigned to each token in a symmetric (positive, negative) action pair by the BAD-learned critic in different states, our experimental results confirm that the BAD critic can effectively assign most of the credit to the key tokens while rarely influencing other irrelevant tokens. By investigating the volume of credits assigned to key tokens by POAD and NTPO, compared with the credit assigned to the entire action by TWOSOME, results show that POAD enjoys a far smaller discrepancy than NTPO.
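To make the backup rule in this discussion concrete, the contrast between BAD and a naive token-level backup can be sketched in a few lines of Python (a hypothetical Monte Carlo illustration in our own notation, not the paper's GAE-based implementation):

```python
def returns_bad(actions, rewards, gamma=0.95):
    """BAD-style returns: the discount factor and the environment reward
    are applied only across action boundaries, so every token within an
    action shares the same return (no intra-action decay)."""
    G, out = 0.0, []
    for tokens, r in zip(reversed(actions), reversed(rewards)):
        G = r + gamma * G              # backup only at the action boundary
        out = [G] * len(tokens) + out  # intra-action tokens share credit
    return out

def returns_naive(actions, rewards, gamma=0.95):
    """Naive token-level backup: gamma is applied at every token, so
    credit decays within an action (the distortion BAD removes)."""
    flat = []
    for tokens, r in zip(actions, rewards):
        flat += [0.0] * (len(tokens) - 1) + [r]  # reward on the last token
    G, out = 0.0, []
    for r in reversed(flat):
        G = r + gamma * G
        out = [G] + out
    return out

# For the single action "turn on TV" with terminal reward 1.0,
# BAD gives every token full credit; the naive backup decays it:
print(returns_bad([["turn", "on", "TV"]], [1.0]))    # [1.0, 1.0, 1.0]
print(returns_naive([["turn", "on", "TV"]], [1.0]))  # ≈ [0.9025, 0.95, 1.0]
```

Under the naive rule the early tokens of an action receive exponentially less credit than the final one, which is exactly the intra-action distortion the rebuttal argues against.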
Although ArCHer [1] adopts sentence-level critics, it also aims to provide fine-grained token-level supervision for optimizing language agents in interactive (multi-turn) tasks. Upon further investigation, we found that its key idea for token-level credit assignment plays an intermediate role between NTPO and POAD. Thus, in our revised manuscript, **we have incorporated ArCHer as a new baseline in our experiments with comparative analysis.** Please refer to the **Figures 1 and 2: New Baseline, ArCHer and Corresponding Analysis** section in meta-responses for a detailed comparison between ArCHer and our methods, as well as preliminary experimental results. Thanks again for the reviewer's valuable recommendation; we believe these comparisons and analyses involving ArCHer will better illustrate the advantage of our method as well as the research trajectory in the community toward providing finer-grained supervision for language agents. > **Q4** Minor Issue: The colors of the lines are inconsistent throughout the paper. **A4** Thanks to the reviewer for his/her careful examination; we have adjusted the related figures and unified the colors used in our updated version. > **Q5** Lack of societal impact. **A5** We appreciate the reviewer's suggestion. In our updated version, we have added the following paragraph addressing the societal impact: **Social Impact.** The advancements in RL for language agents can significantly enhance decision-making processes in various domains such as healthcare, finance, and autonomous systems. Improved decision-making can lead to better outcomes, increased efficiency, and reduced errors. However, we acknowledge that when optimizing agents using our method, language agents may potentially resort to unscrupulous means to maximize rewards, which could lead to potentially harmful results.
Thus, we advocate for a more comprehensive consideration when designing the reward function, or combining it with safety-constrained RL methods to mitigate these risks. --- [1] Zhou, Yifei, et al. "ArCHer: Training language model agents via hierarchical multi-turn RL." arXiv preprint arXiv:2402.19446 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I think the authors have addressed most of my concerns, including the empirical comparison with ArCHer, societal impact, and whether Fig. 2 is true. However, there is one problem: is the updated figure for ArCHer in the global pdf correct? Judging from their Fig. 2 in the arXiv version, they never use a token-level critic and thus should not have a discount rate inside a sentence. In fact, on page 9 they wrote: "Informally, Theorem 1 shows that the error in estimating advantages using the token-level critic is $\gamma\sqrt{L}$ larger than the utterance-level critic (in the worst case)". Thus, I think ArCHer's figure should be the same as POAD's. --- Rebuttal 2: Title: Response to Reviewer 9Wzo Comment: Thank you very much for your recognition of our efforts. Yes, ArCHer adopts sentence-level critics exclusively and does not utilize token-level critics. However, the absence of token-level critics does not always imply that there is no discount factor applied within sentences. In Section 3.4 of the original ArCHer paper, in the "Low-level token actor" part, the authors stated: "We update the token-level policy with the policy gradient computed via REINFORCE (also known as the Monte Carlo Policy Gradient)". This indicates that rather than approximating token-level credits with critic networks, they employ Monte Carlo estimation to backpropagate token-level credits from sentence-level advantage values, $A(s_c, a_t)$, which serve as terminal rewards (and thus bypass the need for token-level critic networks).
We also noted that in one instance provided by ArCHer, specifically in Equation (3), they applied a vanilla (relatively old) version [2] of REINFORCE that does not involve a discount factor (they didn't explain why). However, it is important to highlight that in most contemporary practices involving Monte Carlo estimation or REINFORCE, the use of a discount factor is standard for managing horizons and balancing bias and variance (e.g., $G_t=\sum_{k=0}^\infty \gamma^k r_{t+k+1}$, see Equation (13.8) in Section 13.3 and Equation (3.8) in Section 3.3 of [3]). Furthermore, considering that the authors claim ArCHer is a framework that can integrate with other reinforcement learning approaches, we decided to retain the discount factor in the REINFORCE process as depicted in Figure 1 of the global PDF for a more general representation. Nevertheless, we have considered ArCHer without a discount factor, which aligns with the insights presented in our paper. We conducted an ablation study on this aspect, as illustrated in Figure 2 of the global PDF, i.e., the ArCHer-BAD variant in the right panel. In this case, the performance of ArCHer should theoretically match that of POAD, as they share the same theoretical results of credit assignment. Unfortunately, due to the inherent challenges associated with extra network complexity and the difficulties of hyperparameter tuning, ArCHer still falls short of achieving the performance levels of POAD. Thank you so much again for this valuable feedback; we will add this clarification along with Figure 1 of the global PDF in our later version. --- [2] Williams, Ronald J. "Simple statistical gradient-following algorithms for connectionist reinforcement learning." Machine Learning 8 (1992): 229-256. [3] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018. --- Rebuttal Comment 2.1: Comment: Thanks for the detailed response.
I think this is a minor issue anyway, since the proposed method has already been compared with ArCHer-BAD. The ArCHer paper has a GitHub repo available; I suggest the authors double-check the repo to make sure the figure is correct. In conclusion, I decide to keep my score. --- Reply to Comment 2.1.1: Title: Thank you for your deep engagement Comment: We would like to say thank you for your deep engagement with our work, especially since the direction you suggested helps us make the paper more complete and understandable. We will check the ArCHer repo again and continue to refine our statement. Thanks a lot!
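The point of disagreement in this thread, whether an utterance-level advantage reaches earlier tokens with or without intra-utterance discounting, can be sketched as follows (a hypothetical illustration in our own notation, not code from the ArCHer repository):

```python
def token_credits(num_tokens, advantage, gamma=1.0):
    """Credit each token receives when the utterance-level advantage acts
    as a terminal signal for a token-level REINFORCE pass. With gamma < 1
    credit decays toward earlier tokens; with gamma == 1 (the ArCHer-BAD
    reading) every token receives the full advantage."""
    return [advantage * gamma ** (num_tokens - 1 - j)
            for j in range(num_tokens)]

# A 3-token utterance with advantage 1.0:
print(token_credits(3, 1.0, gamma=0.9))  # ≈ [0.81, 0.9, 1.0]
print(token_credits(3, 1.0, gamma=1.0))  # [1.0, 1.0, 1.0]
```

The first line shows the discounted reading the authors retained for generality; the second shows the undiscounted variant the reviewer argues matches the original ArCHer formulation.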
Summary: The paper proposes a novel approach to optimizing language agents in reinforcement learning (RL) environments, addressing the challenges of limited environmental knowledge and vast action spaces. Traditional methods like GLAM and TWOSOME optimize language actions as whole units, leading to inefficiencies in credit assignment and optimization complexity. This paper introduces Policy Optimization with Action Decomposition (POAD), which decomposes actions to the token level, allowing for finer-grained supervision and manageable optimization complexity. The authors derive the Bellman backup with Action Decomposition (BAD) to ensure theoretical consistency and integrate it with the PPO algorithm. They validate POAD across various testbeds, demonstrating its advantages in learning efficiency, generalization, and theoretical correctness. Strengths: - This paper introduces a novel method of decomposing actions to the token level, which provides finer-grained supervision and reduces optimization complexity. - The derivation of the Bellman backup with Action Decomposition (BAD) ensures theoretical consistency, addressing a gap in previous research. - The paper provides extensive empirical validation across different testbeds, demonstrating the effectiveness and efficiency of POAD compared to baseline methods. Weaknesses: 1. This paper lacks a thorough comparison and analysis of some important pioneering works, such as LLM decoding from a control or reinforcement learning perspective and hierarchical reinforcement learning. 2. This paper emphasizes that BAD can perform efficient and reasonable credit assignment at the token level, but the experimental section lacks in-depth analysis, such as case studies. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In lines 201-203, the authors mention that the action space is reduced from exponential to polynomial compared to works like GLAM and TWOSOME.
However, from Equation 16, it can be seen that the algorithm learned by BAD still samples each token from the entire vocabulary, which does not seem to change the action space? 2. Can the authors demonstrate the token-level credit assignment learned by the BAD method through a case study? 3. BAD has a significant correlation with some existing token-level optimization methods [2][3] and hierarchical optimization methods [1]. Can the authors make a more in-depth comparison between BAD and these methods? Is there any equivalence between BAD and these methods? Is it just extending existing methods, such as [2][3], from single-turn to multi-turn? --- [1] Zhou, Yifei, et al. "ArCHer: Training language model agents via hierarchical multi-turn RL." arXiv preprint arXiv:2402.19446 (2024). [2] Rafailov, Rafael, et al. "From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function." arXiv preprint arXiv:2404.12358 (2024). [3] Zeng, Yongcheng, et al. "Token-level Direct Preference Optimization." ICML 2024. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The author has already discussed the limitations of the method and its potential impacts in the paper, and provided possible solutions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We thank Reviewer 671a for his/her constructive comments that will surely turn our paper into a better shape. > **Q1** BAD has a significant correlation with some existing token-level optimization methods [2][3] and hierarchical optimization methods [1]. Can the authors make a more in-depth comparison between BAD and these methods? Is there any equivalence between BAD and these methods? Is it just extending existing methods, such as [2][3], from single-turn to multi-turn? **A1** We thank the reviewer for recommending these relevant works. Firstly, [2] and [3] primarily focus on single-turn RL problems, performing decomposition and token-level optimization on single-step outputs. While these works, including [3] here, emphasize the importance of fine-grained token-level supervision, they didn't take policy consistency into account (which has a minor impact on single-turn tasks). Our work, however, concentrates on multi-turn scenarios, where the discrepancy grows with the number of turns, making it an unavoidable challenge in multi-turn RL. Moreover, one of the baselines in our paper - the naive token-level Bellman backup behind NTPO - can be viewed as a direct extension of [2] and [3] from single-turn to multi-turn. BAD, then, is an improvement that further addresses the inconsistency issue. This highlights both the differences and connections between our method and [2][3]. Our experimental results demonstrate that resolving the inconsistency problem, rather than simply extending, significantly improves the ultimate performance of language agents in multi-turn RL problems. **We will incorporate these comparisons in our Related Work section.** As for ArCHer [1], we are very grateful to the reviewer for recommending this timely work. ArCHer aims to provide fine-grained token-level supervision for optimizing language agents in interactive (multi-turn) tasks.
Upon further investigation, we found that its key idea for token-level credit assignment plays an intermediate role between NTPO and POAD. Thus, **in our revised manuscript, we have incorporated ArCHer as a new baseline in our experiments with comparative analysis.** Please refer to the **Figures 1 and 2: New Baseline, ArCHer and Corresponding Analysis** section in meta-responses for a detailed comparison between ArCHer and our methods, as well as preliminary experimental results. Thanks again for the reviewer's valuable recommendation; we believe these comparisons and analyses involving ArCHer will better illustrate the advantage of our method as well as the research trajectory in the community toward providing finer-grained supervision for language agents. > **Q2** This paper emphasizes that BAD can perform efficient and reasonable credit assignment at the token level, but the experimental section lacks in-depth analysis, such as case studies. Can the authors demonstrate the token-level credit assignment learned by the BAD method through a case study? **A2** We are very grateful for this insightful suggestion and **we have conducted a case study in the updated manuscript for in-depth analysis to validate the effectiveness of BAD**. Please refer to the **Figure 3: Case Study** section in meta-responses for detailed settings and experimental results. By comparing the credit assigned to each token in a symmetric (positive, negative) action pair by the BAD-learned critic in different states, our experimental results confirm that the BAD critic can effectively assign most of the credit to the key tokens while rarely influencing other irrelevant tokens. By investigating the volume of credits assigned to key tokens by POAD and NTPO, compared with the credit assigned to the entire action by TWOSOME, results show that POAD enjoys a far smaller discrepancy than NTPO.
> **Q3** Lines 201-203, the authors mention that the action space is reduced from exponential to polynomial compared to works like GLAM and TWOSOME. However, from Equation 16, it can be seen that the algorithm learned by BAD still samples each token from the entire vocabulary, which does not seem to change the action space? **A3** We apologize for the confusion. BAD does not change the action space. In lines 200-203, we intended to convey that, by using BAD, the critic can provide more fine-grained token-level supervision signals, i.e., the token-level advantage values as shown in Equation 16, for the language agent's policy updates, which reduces the complexity of policy optimization from exponential to polynomial. Specifically, before using BAD, we have one action-level signal, which is used to optimize a policy with an action space of $O(|V|^{|a|})$. After using BAD, we have $|a|$ signals, each guiding an optimization over a token space of $O(|V|)$. Essentially, BAD decomposes a large problem into several smaller problems, thereby reducing the complexity of solving it. Thank you for your comment; we acknowledge that our initial statement might have caused some confusion, and **we will further clarify our statement in lines 200-203 based on this discussion to avoid any unnecessary confusion.** --- [1] Zhou, Yifei, et al. "ArCHer: Training language model agents via hierarchical multi-turn RL." arXiv preprint arXiv:2402.19446 (2024). [2] Rafailov, Rafael, et al. "From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function." arXiv preprint arXiv:2404.12358 (2024). [3] Zeng, Yongcheng, et al. "Token-level Direct Preference Optimization." ICML 2024. --- Rebuttal 2: Title: We kindly inquire if the reviewer has any further comments or concerns regarding our responses Comment: Dear reviewer 671a, We sincerely appreciate the time and effort you dedicated to our work and we have made every effort to address your concerns in our rebuttal submission.
Given that the discussion deadline is less than 24 hours away, we wanted to kindly inquire if you have any further comments or concerns regarding our responses. Your insights are incredibly important to us, and we are eager to ensure that all of your points have been adequately addressed. If you feel that our responses have addressed your concerns, we would greatly appreciate your consideration in reflecting this in the final evaluation of the manuscript. Thank you once again for your time and support throughout this process. We look forward to hearing from you soon. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Many thanks to the authors for thoroughly supplementing the key experiments and discussions. I have raised the score to 6. --- Reply to Comment 2.1.1: Title: Thank you very much for your recognition of our efforts! Comment: Thank you very much for your recognition of our efforts. Your constructive feedback has been invaluable in guiding our revisions and enhancing the quality of our work. Thanks a lot!
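The complexity argument in A3 above can be illustrated with a back-of-the-envelope calculation (the vocabulary size and action length below are made-up numbers for illustration):

```python
V = 32_000  # assumed vocabulary size (hypothetical)
a = 6       # assumed number of tokens per action (hypothetical)

# One action-level signal must discriminate within an O(|V|^|a|) space;
# BAD instead provides |a| token-level signals, each over O(|V|) choices.
joint = V ** a
decomposed = a * V

print(f"joint search space:      {joint:.2e}")
print(f"decomposed search space: {decomposed}")
```

Even for a short six-token action, the joint space is on the order of $10^{27}$ candidate sequences, while the decomposed view involves only a few hundred thousand per-token choices in total, which is the sense in which the supervision problem shrinks from exponential to polynomial.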
Summary: The paper investigates LLM agents: RL agents where sampling actions from a policy means sampling a sequence of tokens mapping to an action from a (suitably conditioned) large language model. The authors notice a problem with previous implementations of this idea: since actions are typically described as a sequence of more than one token, this introduces issues around credit assignment between tokens contributing to an action. However, fixing it naively, treating each token as an individual action, leads to a problem: when time discounting is present in the MDP, the new (intra-token) MDP is no longer equivalent to the old one. The proposed solution is to only discount "fully formed" actions. This idea is then integrated with a standard PPO implementation, and empirically tested against SOTA on three RL environments, showing a meaningful improvement. In addition, the paper presents evidence that another fix of simply removing the discount factor (i.e., setting it to 1) degrades the performance, and through an ablation study proves that it is indeed the intra-token discounting removal that improved the final performance. Strengths: The paper is written clearly, and investigates an important topic. The experiments convinced me that indeed, the method works as described and improves the performance of LLM agents. I also highly appreciated the investigation into removing time discounting completely and the impact of varying the intra-token discount factor, and into the possible degradation of NL capability of the model after fine-tuning. Weaknesses: I am sceptical with regard to the formal treatment of POMDPs in the paper. First, the definition is missing the initial state (/initial distribution). Then, there are numerous issues: the reward function is defined as operating on states and actions, but is then used as taking an observation and an action.
Expectations in equations (1) and (2) are ill-defined if conditioned only on observations: they either have to be conditioned on states, or the observation has to somehow induce a probability distribution over states. The definition of Bellman updates for the value function update in eqs. (5) and (10) refers to the next action/next token, which haven't been defined, and also uses $|a_t|$, which is ambiguous for a given sequence $w_t^{1:j}$. Equations (11) and (12) refer to the next observation $o_{t+1}$, which is a random variable (first, because POMDP; second, because the transition function is not assumed to be deterministic). I understand that the method was implemented and works fine, and the intuition behind it is clear, so I believe that these mistakes are fundamentally not that serious. However, they confuse and frustrate a careful reader, and have to be fixed. (In comparison, the TWOSOME paper uses a simple MDP formalism, while the GLAM paper sweeps the formalism of POMDPs under the rug by informally talking about "approximations".) --- Small notes: - typo in "policy" in line 173 - Figure 2 was very hard to read, and referring the reader to the appendix to understand it - which is, I think, necessary - seems to warrant additional work to make it more readable - "NTPO" typo in the description of Figure 4 - LHS and RHS subfigures are swapped in Figure 5 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. There is a well-known equivalence between a time-discounted MDP and adding an auxiliary state $s_0$ with zero reward and transition probabilities $T(s, s_0) = \gamma$ for every other state $s$. As far as I understand, it fits into the paradigm in the paper by postulating that additional transitions are added from completed actions only - do the authors have any additional insight into this? 2. Did the authors think about incorporating the discount factor trick into ReAct?
Since the impact of $\gamma$ is rather large (from experimental evidence), it seems that distinguishing between thoughts (potentially incurring low discount penalty) and true env actions might be a potentially useful direction. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We thank Reviewer JxkT for his/her constructive comments that will surely turn our paper into a better shape. > **Q1** The unclear formalism and notation errors. **A1** We apologize for the confusion caused to the reviewers and readers. Indeed, we recognize that the use of a POMDP in our formulation was unnecessary, as our paper does not delve into discussions related to partial observability. The inaccurate use of the POMDP and the resulting omissions have inadvertently introduced additional confusion and distraction for the readers. To address this issue, in the updated version of our manuscript, we have: 1. following TWOSOME, replaced the POMDP formulation with an MDP formulation and adjusted all related notations accordingly. We believe this change will make our formulation and derivation clearer and more concise. 2. after these modifications, thoroughly reviewed all notations and equations to ensure that all relevant variables are properly defined and clearly explained. > **Q2** There is a well-known equivalence between a time-discounted MDP and adding an auxiliary state $s_0$ with zero reward and transition probabilities $T(s,s_0)=\gamma$ for every other state $s$. As far as I understand, it fits into the paradigm in the paper by postulating that additional transitions are added from completed actions only. Do the authors have any additional insight into this? **A2** Yes, we agree with the reviewer's insight about the equivalence between adding the additional transitions for only completed actions, i.e., at the action level, and our BAD paradigm. Meanwhile, we appreciate the reviewer providing us with another perspective to understand why removing the intra-action discount is necessary. Specifically, in environments with linguistic actions, the dynamics between intra-action tokens within an action are strictly deterministic. That is, given a token $w^j$ for $(s,w^{1:j-1})$, we can only transition to $(s,w^{1:j})$.
In this case, if we do not remove the intra-action discount, it would be equivalent to adding an extra transition for intra-action tokens, which contradicts the deterministic nature of linguistic action generation. Additionally, we can consider the implications of setting different discount factors for intra-action and inter-action tokens from another perspective. Originally, the discount factor was used to balance short-term and long-term rewards, with a large discount factor typically applied in scenarios where actions within the horizon are strongly correlated. In our work, given the particularity of intra-action tokens (meaningful environmental feedback can only be obtained if they can be combined into a valid action), it is crucial to encourage agents to generate highly correlated intra-action tokens. Therefore, it is reasonable to set the intra-action discount factor to 1, so that agents evaluate each of their intra-action tokens based on the action-level feedback. This approach aligns with the core idea of solving specific MDPs using time-varying discount factors, as seen in works like [1]. > **Q3** Did the authors think about incorporating the discount factor trick into ReAct? **A3** Yes, we recognize the potential benefits of integrating different discount factors into distinct components of the language agent's outputs, such as actions and thoughts. This view is based on the following intuitions: 1. Thoughts do not directly affect the final state-action transition (only the final action does); instead, thoughts guide the generation to converge to specific low-level actions (they play different roles in interactive tasks). 2. We can remove the gamma for intra-action tokens since words later in an action may not be more expressive than earlier ones. However, thoughts usually present a progressive structure, i.e.
content closer to the final action may be more closely related to the final action, which makes it reasonable to distinguish them from final actions, e.g. with a low discount factor. For now, we have observed that discount factors significantly influence the final policy of language agents learned from RL algorithms. Thus, considering separate outputs with varying intentions in a ReAct-like agent, we believe applying different discount factors to these components could be beneficial, whether through heuristic methods or learning-based approaches. For instance, [2] proposed a gradient-based meta-learning approach to optimize the discounting ratio of future rewards, while [1] solves specific MDPs using time-varying discount factors. Both could be promising directions to explore in the future. > **Q4** Figure 2 should be more readable. **A4** Thanks very much for the reviewer's suggestion. We are working hard to improve the readability of this figure. **Figure 4 (b) in the meta-responses** is a current version of this diagram, hopefully more readable than the previous one. Besides, to facilitate understanding the effects of BAD shown in this figure, we also conducted a case study (See the **Figure 3: Case Study** section in meta-response) to visually demonstrate the token-level credit assignment results learned by the BAD. > **Q5** Typos and subfigures swapping issue. **A5** Thanks to the reviewer’s careful examination, we have addressed these typos and adjusted the figures in our updated version. --- [1] Gan, Jiarui, et al. "Markov decision processes with time-varying geometric discounting." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 10. 2023. [2] Xu, Zhongwen, Hado P. van Hasselt, and David Silver. "Meta-gradient reinforcement learning." Advances in neural information processing systems 31 (2018). 
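The intra-action discounting point in A2 and A3 above can be sketched in code. The snippet below is our own minimal illustration, not the authors' implementation: `token_returns` and its arguments are hypothetical names, and rewards are assumed to arrive only when an action completes. Setting `intra_gamma = 1` mirrors the BAD idea of propagating the action-level return to every intra-action token undiscounted, while `intra_gamma < 1` mimics a naive token-level backup.

```python
# Minimal sketch (our own, not the authors' code) of removing the
# intra-action discount: gamma < 1 is applied only at action boundaries,
# while tokens within an action share the undiscounted action-level return.

def token_returns(actions, rewards, gamma=0.95, intra_gamma=1.0):
    """actions: list of token lists; rewards: one reward per completed action.
    Returns a per-token return for each token, computed backwards in time."""
    returns = []
    g = 0.0  # running return, propagated backwards across actions
    for tokens, r in zip(reversed(actions), reversed(rewards)):
        g = r + gamma * g            # inter-action (action-level) backup
        action_returns = []
        tg = g
        for _ in reversed(tokens):   # intra-action backup
            action_returns.append(tg)
            tg = intra_gamma * tg    # with intra_gamma = 1, credit is preserved
        returns.append(list(reversed(action_returns)))
    return list(reversed(returns))

acts = [["open", "the", "microwave"], ["close", "the", "microwave"]]
rews = [0.0, 1.0]
bad = token_returns(acts, rews, gamma=0.95, intra_gamma=1.0)    # BAD-style
naive = token_returns(acts, rews, gamma=0.95, intra_gamma=0.95)  # naive
```

With `intra_gamma = 1`, every token of the rewarded final action receives the full return, whereas the naive backup leaks extra discount onto earlier tokens of the same action.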
--- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for providing a thorough response, and for committing to fixing the issues pointed out in the review. Based on this, but also on other reviewers' feedback regarding the baseline comparison and the additional experiments the authors provided, I decided to increase my score by one point. --- Reply to Comment 1.1.1: Title: Thank you for the time and effort you dedicated to re-evaluating our work! Comment: We sincerely appreciate the time and effort you dedicated to re-evaluating our work. Your constructive feedback has been invaluable in guiding our revisions and enhancing the quality of our work. Thanks a lot!
Summary: This paper introduces Policy Optimization with Action Decomposition (POAD), a novel method for reinforcing language agents by optimizing at the token level rather than the action level. The authors derive a theoretical framework called Bellman backup with Action Decomposition (BAD) to address discrepancies between action-level and token-level optimization. Implementing BAD within PPO, POAD offers finer-grained credit assignment and lower optimization complexity. Experiments across various environments demonstrate POAD's improved learning efficiency and generalization compared to baseline methods, supporting the theoretical analysis and advancing language agent optimization in interactive settings. Strengths: 1. The paper seems to provide a theoretical foundation for its approach, analyzing the discrepancies between action-level and token-level optimization and deriving the Bellman backup with Action Decomposition (BAD) to address these issues. This theoretical grounding lends good analysis to the proposed method. 2. The authors implement their theoretical insights into a practical algorithm (POAD) and demonstrate its effectiveness across diverse testbeds, including environments with both restricted and unrestricted action spaces. The empirical results show improved learning efficiency and generalization abilities compared to baseline methods, validating the practical utility of the theoretical contributions. Weaknesses: Limited testing environments and lack of baselines. The authors only show two baselines for comparison; more baselines should be included for comprehensive analysis. Another main concern is that the fine-tuning is done only with LoRA; when fine-tuning the model at full capacity, the effect is unknown. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How did you tune the baselines? 2. If you use full-scale fine-tuning instead of LoRA, what do you expect? Could you run an experiment on this? 3.
How would this method work on standard NLP tasks? Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes, the authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We thank Reviewer H4uQ for the constructive comments that will surely help improve our paper. > **Q1** More baselines should be included for comprehensive analysis. **A1** We appreciate the feedback from Reviewer H4uQ and also extend our gratitude to Reviewers 671a and 9Wzo for recommending the recent relevant method, ArCHer [1]. ArCHer aims to provide fine-grained token-level supervision for optimizing language agents on interactive (multi-turn) tasks. Upon further investigation, we found that its key idea for token-level credit assignment plays an intermediate role between NTPO and POAD. Thus, **in our revised manuscript, we have incorporated ArCHer as a new baseline in our experiments with comparative analysis.** Please refer to the **Figures 1 and 2: New Baseline, ArCHer and Corresponding Analysis** section in the meta-responses for a detailed comparison between ArCHer and our methods, as well as preliminary experimental results. We believe the extra comparisons and analyses involving ArCHer will better illustrate the advantage of our method and the research trajectory in the community toward providing finer-grained supervision for language agents. > **Q2** If you use full-scale fine-tuning instead of LoRA, what do you expect? Could you experiment with this? **A2** We conducted an ablation comparing our method with the action-level baseline, TWOSOME, in both full-scale and LoRA fine-tuning settings, with results shown in **Figure 4 (a) in our meta-responses**. 1. Comparing POAD's performance under full-scale fine-tuning and LoRA fine-tuning, we observed consistent performance, indicating that POAD remains effective under full-scale fine-tuning. However, due to the increased memory consumption of full-scale fine-tuning, we significantly reduced the batch size during training, leading to slightly reduced stability in the training process.
This accounts for the observed larger fluctuations in the curves during full-scale fine-tuning compared to LoRA fine-tuning. 2. Contrasting the two methods under full-scale fine-tuning, POAD demonstrated greater advantages over the action-level baseline method than under LoRA fine-tuning. Potential reasons for this include: a) full-scale fine-tuning's heightened sensitivity, requiring higher precision and granularity of supervision signals compared to LoRA; b) the further increase in demand for precision of supervision signals due to the drastically reduced batch size. We greatly appreciate the reviewer for this valuable suggestion. These experimental results further underscore the importance of our approach, highlighting that in scenarios involving full-scale fine-tuning, fine-grained and precise supervision signals are even more critical than with LoRA fine-tuning. > **Q3** How did you tune the baselines? **A3** Regarding baselines such as TWOSOME, we integrated their implementations into our code framework based on the official repositories. Throughout this process, for fair experimental comparisons, we ensured the following: 1. Key components of the baseline method aligned with those in the original repository. 2. We were able to replicate the performance reported in the paper using the original hyper-parameters listed in their papers. 3. Regarding the code framework, apart from necessary adaptations specific to each method, we maintained consistency in all other aspects. 4. When tuning hyper-parameters, we utilized the same search ranges to maintain consistency (we reused the search ranges defined in TWOSOME). > **Q4** How would this method work on standard NLP tasks? **A4** Theoretically, our method can also be applied to standard NLP tasks such as QA, writing, coding, etc. (DataSciCoding in our experiments is a coding environment where agents can generate code freely).
Notably, the most suitable application scenarios would be language tasks involving multi-turn interactions (which language agents usually target). In multi-turn situations, our method's advantage in decomposing inter-action and intra-action credit assignment can be fully leveraged. For other, single-turn tasks, in the worst-case scenario our method would be similar to standard RL approaches with terminal rewards derived from environmental feedback or a reward model. --- [1] Zhou, Yifei, et al. "Archer: Training language model agents via hierarchical multi-turn rl." arXiv preprint arXiv:2402.19446 (2024). --- Rebuttal 2: Title: We kindly inquire whether the reviewer has any further comments or concerns regarding our responses Comment: Dear reviewer H4uQ, We sincerely appreciate the time and effort you dedicated to our work, and we have made every effort to address your concerns in our rebuttal submission. Given that the discussion deadline is less than 24 hours away, we wanted to kindly inquire whether you have any further comments or concerns regarding our responses. Your insights are incredibly important to us, and we are eager to ensure that all of your points have been adequately addressed. If you feel that our responses have addressed your concerns, we would greatly appreciate your consideration in reflecting this in the final evaluation of the manuscript. Thank you once again for your time and support throughout this process. We look forward to hearing from you soon. Best regards, Authors --- Rebuttal Comment 2.1: Comment: thanks the author, I have increased my score --- Reply to Comment 2.1.1: Title: Thanks for upgrading the score! Comment: We would like to sincerely thank you for your recognition of our efforts. Your constructive feedback has been invaluable in guiding our revisions and enhancing the quality of our work. Thanks a lot!
Rebuttal 1: Rebuttal: # Meta Responses We are delighted to receive positive feedback from all the reviewers, and thank the reviewers for their valuable suggestions. ## Answers to some common questions and New results (see the PDF attached here) For convenience, we provide detailed answers to some of the common questions here. ### Figures 1 and 2: New Baseline, ArCHer and Corresponding Analysis We appreciate the feedback from reviewer H4uQ about involving extra baselines and also extend our gratitude to reviewers 671a and 9Wzo for recommending the recent relevant method, ArCHer [1]. Upon further investigation, we found that its key idea for token-level credit assignment plays an intermediate role between NTPO and POAD. Thus, **we have incorporated ArCHer as a new baseline in our experiments with comparative analysis.** Specifically, from a theoretical perspective: Figure 1 demonstrates the key processes of TWOSOME, NTPO, ArCHer, and POAD, in terms of credit assignment. ArCHer, positioned as an intermediate between NTPO and POAD, theoretically enjoys reduced discrepancy compared to NTPO (although the discrepancy issue is not explicitly discussed in its literature). Importantly, our theoretical analysis regarding BAD is compatible with ArCHer, allowing us to apply insights gained from our theoretical analysis to optimize ArCHer. From a practical perspective: ArCHer is implemented based on a hierarchical RL framework, utilizing an off-policy Q-network for credit backpropagation at the action-level (referred to as utterance-level in the original text, unified as action-level here for clarity). This off-policy Q-network enhances sample efficiency, especially when pre-collected data is available. At the token-level, ArCHer directly backpropagates action-level credits to each intra-action token using REINFORCE. 
Notably, in practice, ArCHer introduces a V-network at the action-level to facilitate computing action-level advantage values, using these advantage values (rather than Q-values) as terminal rewards at token-level, which may introduce additional inconsistency. Moreover, the use of multiple value estimation networks (including an optional token-level baseline value model besides the Q and V network) increases manual tuning efforts and may lead to cumulative bias and variance, potentially impacting stability. We have made every effort to test ArCHer's performance in the VirtualHome environment, yielding preliminary experimental results as shown in Figure 2. In complex Entertainment tasks, ArCHer is second only to POAD, aligning with our theoretical expectations. However, in tasks such as Food Preparation where the methods’ performance gap was less pronounced, ArCHer performed poorly, likely due to instability in its system. Furthermore, we applied insights from our theoretical analysis to enhance ArCHer's performance, i.e. removing the intra-action gamma, as depicted in ArCHer-BAD in Figure 2 (right one), showing improved results that validate the effectiveness of our insights. Nevertheless, due to the inherent challenges of excessive network complexity and difficult hyper-parameter tuning, ArCHer still falls short of matching POAD's performance. ### Figure 3: Case Study Thanks for the insightful suggestions from reviewers 671a and 9Wzo, **we have conducted a case study for in-depth analysis to validate the effectiveness of BAD in terms of token-level credit assignment.** **Case study settings.** We selected the last three states from the Food Preparation task, which form a complete subtask: "Heat the Pancake with Microwave". This serves as a simple yet practical case for our analysis. 
The corresponding transitions are as follows: $$\mathcal{T}(s_0,\text{``open the microwave"})\rightarrow s_1$$ $$\mathcal{T}(s_0,\text{``close the microwave"})\rightarrow s_0$$ $$\mathcal{T}(s_1,\text{``put the pancake into the microwave"})\rightarrow s_2$$ $$\mathcal{T}(s_2,\text{``open the microwave"})\rightarrow s_2$$ $$\mathcal{T}(s_2,\text{``close the microwave"})\rightarrow \text{success with reward 1.0}$$ According to these transitions, the optimal trajectory to complete this subtask is (open the microwave, put the pancake into the microwave, close the microwave). Additionally, the maximum step length for the task is 5; if the task is not completed within 5 steps, it is considered a failure, resulting in a reward signal of -1.0. Based on this, we first sampled 1,000 examples using a random policy. We then trained the critic to convergence using three different backup methods: TWOSOME (action-level Bellman backup), NTPO (naive token-level Bellman backup), and POAD (BAD). In Figure 3, we recorded the credit assignment results for each token in positive and negative actions under the First State and Last State conditions for the critics trained with these three methods. The first two subfigures in Figure 3 show the credit assigned to each token in a symmetric (positive, negative) action pair by the BAD-learned critic in two different states; the results confirm that the BAD critic can effectively assign most of the credit to the key tokens while rarely influencing other irrelevant tokens. The right subfigure illustrates the volume of credit assigned to key tokens by POAD and NTPO, compared with the credit assigned to the entire action by TWOSOME, showing that POAD enjoys a far smaller discrepancy than NTPO.
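The transition table above can be encoded as a toy script to sanity-check the stated optimal trajectory. This is our own sketch (the state names `s0`, `s1`, `s2` and the `rollout` helper are hypothetical), not the environment's actual code:

```python
# Toy encoding (our sketch) of the "Heat the Pancake with Microwave"
# transitions listed above, checking that the stated optimal trajectory
# succeeds within the 5-step limit described in the case study.

TRANSITIONS = {
    ("s0", "open the microwave"): "s1",
    ("s0", "close the microwave"): "s0",
    ("s1", "put the pancake into the microwave"): "s2",
    ("s2", "open the microwave"): "s2",
    ("s2", "close the microwave"): "success",
}

def rollout(actions, max_steps=5):
    state = "s0"
    for t, a in enumerate(actions[:max_steps]):
        state = TRANSITIONS.get((state, a), state)  # undefined actions are no-ops
        if state == "success":
            return 1.0, t + 1
    return -1.0, max_steps  # failure if not completed within the step limit

optimal = ["open the microwave",
           "put the pancake into the microwave",
           "close the microwave"]
reward, steps = rollout(optimal)
```

The three-step optimal trajectory yields reward 1.0, while any trajectory that fails to reach `success` within 5 steps receives -1.0, matching the description above.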
### Figure 4 (a): Full-scale Fine-Tuning We greatly appreciate Reviewer H4uQ's valuable suggestion; we conducted an ablation comparing our method with the action-level baseline, TWOSOME, in both full-scale and LoRA fine-tuning settings, with results shown in Figure 4 (a). ### Figure 4 (b) Thanks for the suggestion from Reviewer JxkT. We are working hard to improve the readability of Figure 2 in our original paper. Figure 4 (b) here is a current version of this diagram. --- [1] Zhou, Yifei, et al. "Archer: Training language model agents via hierarchical multi-turn rl." arXiv preprint arXiv:2402.19446 (2024). Pdf: /pdf/36c91b48af2d4944c80e3ca87ca12ee2a115c888.pdf
NeurIPS_2024_submissions_huggingface
2024
Communication Bounds for the Distributed Experts Problem
Accept (poster)
Summary: The paper considers a distributed variant of the classical problem of learning with experts, where the cost of each expert needs to be aggregated across different servers. Based on three different aggregation models, i.e., sum, maximum, and $\ell_p$ norm, the authors propose three different algorithms based on the classical exponential weights algorithm. The authors derive bounds on the regret and communication costs of the three proposed algorithms. They also derive matching lower bounds to demonstrate the optimality of their results. The theoretical results are supplemented by empirical results. Strengths: It is an interesting idea to consider the distributed version of this classical problem, especially as applications of distributed computing continue to grow. The use of randomization in max aggregation and $\ell_p$ norms (through exponential random variables) is interesting. Weaknesses: The novelty of the paper, especially in terms of analysis, is limited. The analysis of all the algorithms is largely based on existing results. Please refer to the next section for additional questions. Technical Quality: 2 Clarity: 2 Questions for Authors: I was a reviewer of this paper during ICML 2024, where I had raised some questions which the authors did not answer. Since the content of the paper is largely similar to the ICML submission of this work, my following questions continue to remain. 1. Why does the coordinator need to initiate communication? In both models, it is assumed that the coordinator needs to initiate the communication in each round. This is not a very typical consideration in a distributed setting. Usually, there is an uplink channel that the clients can use to communicate whenever they want and a downlink channel to broadcast messages to the clients/servers. While it might seem that this is a minor issue, I am not convinced that is the case. Firstly, I think it is the reason why the communication costs have the $Ts$ term.
If the clients can themselves initiate the communication, then these terms go away in both the upper and lower bounds. However, it is important to note that if such coordination does not exist, then the randomization scheme used for aggregation via maximization does not work, or equivalently incurs a cost of $\mathcal{O}(s + b_e\log(s/\delta))$ while the lower bound would be independent of $s$. This rather unnatural setting is somehow implicitly important for the algorithm to achieve optimal communication cost in the maximum aggregation model. 2. I think there is a misrepresentation of results/an implicit assumption in the sum aggregation model, which the authors should highlight. There is an implicit assumption that the loss vectors are sparse, in the sense that the loss for any expert has a high value only for a small set of servers. In the experimental section, the authors claim that their results hold "even under extreme settings of sparse loss vectors" while the truth is that their theoretical results hold _only_ in the sparse regime. Note that the authors claim that in the sum aggregation model the loss for the $i^{\text{th}}$ expert at time $t$ is given as $l_{i}^t = \sum_{j = 1}^{s} l_{i,j}^t$ and that the loss is normalized to ensure $l_{i}^t \in [0,1]$. This implicitly places a constraint that $\sum_{j = 1}^{s} l_{i,j}^t \leq 1$, forcing sparser loss vectors. The sparsity is crucial to obtain the improved communication guarantees. A natural choice for "normalization" (as the authors mention) is to average the total loss, i.e., $l_{i}^t = \frac{1}{s} \sum_{j = 1}^{s} l_{i,j}^t$ to ensure $l_{i}^t \in [0,1]$. This is also something the authors have done in the experiments themselves. While such an averaging step does not change the regret guarantees, it significantly impacts the communication cost, as the communication cost is proportional to $b_e \cdot T \cdot \sum_{j = 1}^{s} l_{i,j}^t$.
If $l_{i}^t = \frac{1}{s} \sum_{j = 1}^{s} l_{i,j}^t \in [0,1]$, then $\sum_{j = 1}^{s} l_{i,j}^t \leq s$, implying that the worst-case communication cost is $\mathcal{O}(b_e T s)$ as opposed to $\mathcal{O}(T(b_e + s))$. I think this is a very important point that the authors have not elaborated upon, and it can change the significance of the results. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
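The communication-cost arithmetic in this point can be illustrated numerically. The sketch below is our own, with hypothetical loss values; it assumes a protocol in which each server reports an expert's cost with probability proportional to that cost, normalized by a bound $\rho$ on the aggregated loss, so the expected number of reports per expert per round is $\sum_j l_{i,j}^t / \rho$:

```python
import random

# Numeric sketch (our own, hypothetical values) of the communication-cost
# point above. If server j reports expert i's cost with probability
# l[j] / rho, where rho bounds the aggregated loss, the expected number of
# reports per expert per round is sum(l) / rho. Under the paper's
# normalization (sum_j l_j <= 1, rho = 1) this stays at most 1, independent
# of the number of servers s; under per-server averaging, rho must be
# rescaled to s to keep it bounded.

rng = random.Random(0)
s = 100

def expected_reports(losses, rho):
    return sum(losses) / rho

# normalization with sum_j l_j <= 1 (total loss bounded by 1)
summed_norm = [rng.uniform(0, 1.0 / s) for _ in range(s)]
# per-server losses in [0, 1] (averaging normalization), so sum_j l_j can be ~s
averaged_norm = [rng.uniform(0, 1) for _ in range(s)]

r_summed = expected_reports(summed_norm, rho=1.0)    # at most 1
r_naive = expected_reports(averaged_norm, rho=1.0)   # can grow like s
r_rescaled = expected_reports(averaged_norm, rho=s)  # rescaling rho restores <= 1
```

This mirrors the distinction the reviewer draws: with averaging normalization and an unadjusted sampling probability, the expected per-round communication grows with $s$, while rescaling by $\rho$ keeps it bounded.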
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments and address the main concerns below. **Q1. The coordinator assumption** We agree with the reviewer that in some scenarios there does not exist a coordinator to coordinate the communications among downstream servers. The motivation for the coordinator-initiation assumption is: 1. to avoid the complexities of asynchronous communication, e.g., handshake protocols or extra buffers to guarantee ordering; 2. the coordinator is commonly seen in both message-passing and broadcast models in practice as well as theory [1, 2]. Additionally, for the case where the coordinator does not need to initiate communication, we can achieve an $O(b_e \log{(s/\delta)})$ communication cost per time step with the following protocol: Initialization: each individual server initializes a variable $\hat{h}_i^t$ to record the maximum cost for each expert. 1. Each server whose cost is larger than the current maximum sends its value to the broadcast channel after a time delay $\delta_{i, j}$, where $\delta_{i, j}$ is sampled uniformly at random from $[0, 1]$. 2. Once the broadcast channel has been occupied, all other servers stop sending and instead update their corresponding $\hat{h}_i^t, \delta_{i, j}$. We then repeat this process and use the maximum value collected after $s$ unit time steps as an estimate of the maximum value. In this protocol, we assume that the broadcast channel can only be occupied by one server. The random ordering is guaranteed by the random delays, and the expected number of communication rounds to get the maximum value is given in Lemma B.1. Additionally, notice that for each time step the protocol is guaranteed to end within $s$ unit time steps, as the worst-case delay is 1 unit time step for each server. By using this protocol, we can still obtain a near-optimal communication cost of $O(b_e \log{(s/\delta)})$.
We thank the reviewer for bringing this up and inspiring us to design a new protocol under different assumptions. We will add the new results and setups in the next version. **Q2. Sparsity for the summation aggregation function** First, we want to point out that normalizing a cost vector so that $l_i^t \in [0, 1], l_{i, j}^t \geq 0$ is common practice in the experts-problem literature. The way we distribute $l_i^t$ to different servers is purely random, which does not impose any sparsity assumption. In fact, even if $l_i^t \in [0, \rho], l_{i, j}^t \geq 0, \forall \rho > 0$, the communication upper bound for our algorithm will not change, as our sampling probability (more specifically $\beta_{i, j}^t$) is designed to be $\frac{l_{i, j}^t}{\rho}$, and then the communication cost is proportional to $b_e \cdot T \cdot \frac{\sum_{j=1}^s l_{i, j}^t}{\rho}$, where $\frac{\sum_{j=1}^s l_{i, j}^t}{\rho} \in [0, 1]$. As we have assumed that $\rho=1$ in our setup without affecting the optimality, we have ignored the $\rho$ factor during our derivation, which might have caused the confusion. We thank the reviewer for pointing this out and will make our definitions and explanations clear to avoid potential confusion. **References:** 1. Kanade, Varun, Zhenming Liu, and Bozidar Radunovic. "Distributed non-stochastic experts." Advances in Neural Information Processing Systems 25 (2012). 2. Braverman, Mark, and Rotem Oshman. "On information complexity in the broadcast model." Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing. 2015. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for your response. Response to coordinator assumption: The proposed fix seems interesting and should fix the issue. Can the authors please elaborate a bit on why they need the additional random delay? Response to sparsity of aggregation function: I think I misunderstood that $\beta$ will need to be updated accordingly. That should resolve the concern.
--- Reply to Comment 1.1.1: Title: Clarifications by Authors Comment: We thank the reviewer for additional feedback. The random delay provides randomness to the protocol, so that in each round, there can only be a single random server who can occupy the broadcast channel. Randomness is required to guarantee this upper bound against the strongest adversary. Indeed, otherwise, the adversary can exploit the communication pattern and compromise the correctness of the protocol. For instance, if we fix the server communication order to be from 1 to s, then the adversary can exploit the pattern by assigning $s$ monotonically increasing cost values to the $s$ downstream servers, which, because of our fixed ordering, requires $s$ rounds of communications per expert. Thus, the overall communication cost would be $\tilde{\Omega}{(Tns)}$, which is much worse than the optimal communication cost we have if we incorporate randomness in our algorithm by using random delays.
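The role of the random delay described in this exchange can be illustrated with a toy simulation. This is our own sketch of the protocol, not the paper's code (`broadcast_max` is a hypothetical name): each round, every server whose cost exceeds the running maximum draws an independent uniform delay, the earliest sender wins the single-occupancy channel, and all servers update the running maximum. Because the winner is a uniformly random contender, a fixed adversarial ordering of costs cannot force one broadcast per server.

```python
import random

# Toy simulation (our sketch) of the random-delay broadcast protocol for
# estimating the maximum cost of a single expert across servers.

def broadcast_max(costs, rng):
    """Run rounds of the protocol; return the exact maximum and the number
    of broadcasts (channel occupations) used."""
    current_max = float("-inf")
    rounds = 0
    while True:
        contenders = [c for c in costs if c > current_max]
        if not contenders:
            return current_max, rounds
        # the smallest i.i.d. uniform delay wins the channel, so the winner
        # is a uniformly random contender
        current_max = min(contenders, key=lambda c: rng.random())
        rounds += 1

rng = random.Random(1)
costs = [0.2, 0.9, 0.5, 0.7]
est, rounds = broadcast_max(costs, rng)  # est equals max(costs)
```

With a random winner each round, the number of broadcasts behaves like the number of running-maximum records in a random permutation, which is logarithmic in expectation, whereas the fixed-order adversarial example in the reply forces one broadcast per server.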
Summary: The paper investigates the experts' problem in a distributed context, where the costs associated with experts are distributed across multiple servers. The authors present results for two communication models: the message-passing model and the blackboard model. They explore two aggregation functions: the sum and max of an expert's cost across servers. The paper introduces communication-efficient protocols designed to achieve near-optimal regret in these settings. The paper considers two communication models: the message-passing model, which involves two-way communication channels, and the blackboard model, which utilizes a broadcast channel. The objective is to balance near-optimal regret and communication efficiency. The proposed algorithms leverage sampling techniques to approximate the aggregated functions and reduce communication overhead. Strengths: The paper examines an important problem, thoroughly covering upper bounds, lower bounds, and empirical evaluations. Weaknesses: The paper acknowledges the contributions of Kanade et al. in related works. However, it relegates the discussion of the connection with this work to the appendix. This is somewhat unexpected, as the mentioned work shares a significant relation to the current study, probably more so than other cited literature. The paper considers two aggregation functions: sum and max. The selection of max, however, seems a bit odd and lacks clear motivation. The lower bound in this paper requires a limit on the memory the central server can utilize. Such a situation is not common in communication lower bounds. This condition seems to be more characteristic of the lower bounds for streaming algorithms rather than for communication models. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you move the discussion comparing your work with the Kanade et al. paper into the main text? 
Could you provide more justification for selecting the 'max' function as one of the aggregation functions in your study? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and address their main concerns below. **Q1. Comparison with Kanade et al.** We thank the reviewer for the suggestion. We will move the comparison with Kanade et al. to the main text in a revision, as we indeed consider it very relevant to our work. **Q2. Max aggregation function** The maximum aggregation function is usually used when the objective depends on the worst cost across different servers, e.g., if the objective is to lower the worst-case serving latency for some downstream tasks to satisfy certain SLOs (Service Level Objectives), or if we want to bound the maximum drawdown of a diversified investment across different regions. Notice that besides the summation and maximum aggregation functions, we also support the $l_p$ aggregation function, which is much more general; the maximum function is the special case $l_\infty$, as we assume all costs are positive. We believe the $l_p$ aggregation protocols also introduce several interesting technical ideas. **Q3. Memory bound assumption** We thank the reviewer for pointing out our memory assumption in the lower bound. First, we note that our memory bound is only imposed on downstream servers, with no memory bound assumed for the central coordinator. We also note that our lower bound is against the weakest possible adversary (an oblivious adversary) and thus holds in the most general setting. On the other hand, notice that the overall memory requirement across all downstream servers is $O(\frac{n}{TR^2}+s)$, which increases linearly as we have more servers. This does model situations in which the memory budget is constant for each server. --- Rebuttal Comment 1.1: Comment: Thank you for the response. It is satisfactory. I will maintain my score.
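The claim in A2 that the maximum is the $\ell_\infty$ member of the $\ell_p$ family can be checked numerically for nonnegative costs. The snippet below is our own illustration, not the paper's protocol:

```python
# Quick numeric check (our illustration, not the paper's protocol) that for
# nonnegative costs the l_p aggregate decreases toward the maximum as p
# grows, so max aggregation is the l_infinity special case of the l_p family.

def lp_aggregate(costs, p):
    return sum(c ** p for c in costs) ** (1.0 / p)

costs = [0.3, 0.8, 0.5, 0.1]
vals = [lp_aggregate(costs, p) for p in (1, 2, 8, 64)]
# vals is nonincreasing in p and approaches max(costs) = 0.8
```

Already at moderate $p$ the $\ell_p$ aggregate is dominated by the largest cost, which is why a single family of $\ell_p$ protocols can cover both sum-like and max-like objectives.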
Summary: I reviewed this paper a few years back a few times and declined to review this paper for a while to be fair to the authors. I found the paper's quality has not improved much (as opposed to the average quality improvement for many recycled theory papers), so I can only give somewhere between a borderline and an accept vote again this time. I would like to emphasize that it contains interesting insights into distributed expert problems and could easily be a solid NeurIPS work. For example, it considers a very generic setting where the aggregation function is $\ell_p$ for all $p$, and its lower bound that utilizes communication complexity also appears to be quite interesting (as opposed to the standard expert lower bound that relies on anti-concentration). But the writing really makes it difficult to champion the paper. Strengths: It performs a set of interesting theoretical exercises on a fundamental expert problem in the distributed setting. Techniques developed in this work are a useful addition to this area. Weaknesses: I feel more effort is needed to make this paper a smoother read for both theoreticians and a general audience. Some (writing) issues I found: 1. It was not even completely clear from Sec. 3.1 whether this is a multi-arm bandit problem or an experts problem. It looks like an experts problem, but the authors mention the multi-arm setting quite early on. For the broadcasting setting, how is cost counted? Number of broadcasts? 2. Tables 1 to 3 are not really very comprehensible. $R$ is left unexplained, but it looks like a tunable parameter related to regret; Table 3 has a bound for $R$ whereas Tables 1 and 2 do not. 3. Notation like $R \in [O(\text{something}), O(\text{something})]$ is not really rigorous, as the authors are probably aware themselves. Maybe just be explicit and write down constants, or at least do $O(\text{something}) \cap \Omega(\text{something})$. 4. The lower bound on the space constraint $M$ also seems not carefully thought out, or parameterized in a quite awkward manner.
The space has to shrink as the number of servers grows? These are all quite minor issues, so I wish the authors could make some effort to clear them up. Technical Quality: 3 Clarity: 3 Questions for Authors: I do not have any questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and address their main concerns below. **Q1. Presentation** We thank the reviewer for pointing out the issues in our presentation. The problem is indeed an experts problem, as we assume each server can observe the full cost vector. The reason we introduced the multi-armed bandit problem in the related work is that the optimal algorithm (Exp3) proposed for the multi-armed bandit problem can serve as one possible solution for our problem, and thus a baseline that we can compare with. We thank the reviewer for pointing out the inconsistency of assumptions on $R$ between the different tables. In the last paragraph of Section 1 we mention the assumptions: we assume $R \in [ \tilde{O} ( (\frac{\log n}{T} )^{\frac{\varepsilon}{1+\varepsilon}} ), \tilde{O} ( (\frac{n\log n}{T} )^{\frac{\varepsilon}{1+\varepsilon}} ) ]$ for DEWA-L as well as DEWA-L-P when $1+\varepsilon < p \leq 2$, and $R \in [\tilde{O}(\sqrt{\frac{\log{n}}{T}}), \tilde{O}(\sqrt{\frac{n\log{n}}{T}}) ]$ for the others. We will make the regret bound clearer and more consistent across all tables. We thank the reviewer for pointing out the notational confusion. The reason we write the regret $R$ as a range is that our algorithm allows for a hyper-parameter $b_e \in [n]$, which can be specified to give a tradeoff between the regret and the communication cost. Depending on the choice of $b_e$, our algorithm can achieve a corresponding optimal regret $R=O(\sqrt{\frac{n\log{n}}{T b_e}})$. Thus, given that $b_e \in [n]$, the optimal regret $R$ will lie in $[O(\sqrt{\frac{\log{n}}{T}}), O(\sqrt{\frac{n\log{n}}{T}})]$ accordingly. To avoid confusion, we will make this point clearer in the preliminaries. **Q2. Memory bound** We thank the reviewer for pointing out our memory assumption on the lower bound. First, we note that our memory bound is only imposed on downstream servers.
We also note that our lower bound is against the weakest possible adversary (an oblivious adversary) and thus holds in the most general setting. Here we obtain an $O(\frac{n}{sTR^2}+1)$ bound; for the near-optimal regrets $R \in [O(\sqrt{\frac{\log{n}}{T}}), O(\sqrt{\frac{n\log{n}}{T}})]$ that we consider, the $\frac{1}{\sqrt{T}}$ factor in $R$ cancels with the $T$ in the $TR^2$ term. We will make the memory bound clearer in our revision to avoid unnecessary confusion. The memory constraint on each server will indeed become stronger as the number of servers increases, due to the additional communication required to achieve the optimal regret. It is a challenging open question to completely remove this assumption. On the other hand, notice that the overall memory requirement across all servers is $O(\frac{n}{TR^2}+s)$, which increases linearly as we have more servers. This does model situations in which the memory budget is constant for each server.
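The $b_e$ tradeoff quoted in this rebuttal (regret $R=O(\sqrt{\frac{n\log n}{T b_e}})$ for $b_e \in [n]$) can be made concrete with a schematic calculation. This is purely illustrative: constants and the log factors hidden by $\tilde{O}$ are dropped, and the values of `n` and `T` are arbitrary.

```python
import math

def regret_bound(n, T, b_e):
    """Schematic regret O(sqrt(n log n / (T * b_e))), constants dropped."""
    return math.sqrt(n * math.log(n) / (T * b_e))

# Endpoints of the quoted range, for hypothetical n and T:
n, T = 1024, 10_000
r_expert_like = regret_bound(n, T, b_e=n)  # ~ sqrt(log n / T), experts-style
r_bandit_like = regret_bound(n, T, b_e=1)  # ~ sqrt(n log n / T), bandit-style
```

Varying `b_e` between these endpoints trades regret against communication, which is why the tables state $R$ as a range rather than a single bound.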
Summary: This paper studies the classical experts setting in a new communication-focused model that is motivated by evaluating models when the data points are stored across many different servers. At a high level, the model is the following. There are $n$ experts (think of them as different models). There are $s$ servers, each of which is presumably storing some data. At time $t$, there is some $l_{i,j}^t$ for the $i$'th expert on server $j$. The overall loss for expert $i$ at time $t$ is $f(l_{i,1}^t, \dots, l_{i,s}^t)$ for some aggregation function $f$. Natural choices of the aggregation function, which this paper studies, are the sum, the max, and more generally $\ell_p$-norms. The distributed nature comes from the fact that we, the coordinator, do not have access to the server-specific losses unless we ask them and so incur a communication cost. So we might have only partial information about each expert, unless we query all servers for all experts. And since communication is an important bottleneck, we naturally want to minimize it. So we have an obvious question: what is the tradeoff between communication and achievable regret? As the authors point out, there are two obvious algorithms. First, we could simply ask every server for the loss of every expert in every round. This would incur $O(ns)$ communication per round, but would allow us to run any standard experts algorithm to get regret $O(\sqrt{\frac{\log n}{T}})$. On the other hand, we could use a *bandit* algorithm like Exp3 to select a single expert at each time, and query every server for that one expert. This would incur $O(s)$ communication in each round, but would give regret bounds of $O(\sqrt{\frac{n\log n}{T}})$, i.e., the standard bandit regret bound. These algorithms would work for any aggregation function, since they collect full information for whatever experts they are considering. 
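The aggregation step described above, where expert $i$'s overall loss is $f(l_{i,1}^t, \dots, l_{i,s}^t)$ for $f$ the sum, max, or an $\ell_p$-norm, can be sketched in a few lines. This is an illustrative helper, not code from the paper; for non-negative costs the max is the $p \to \infty$ limit of the $\ell_p$ aggregation.

```python
import numpy as np

def aggregate_losses(per_server_losses, kind="sum", p=2.0):
    """Aggregate one expert's per-server losses (l_{i,1}, ..., l_{i,s})
    into the scalar f(l_{i,1}, ..., l_{i,s}) used for regret."""
    l = np.asarray(per_server_losses, dtype=float)
    if kind == "sum":
        return float(l.sum())
    if kind == "max":
        return float(l.max())
    if kind == "lp":  # l_p norm; approaches max as p grows (non-negative costs)
        return float((l ** p).sum() ** (1.0 / p))
    raise ValueError(f"unknown aggregation: {kind}")
```

The catch, of course, is that computing any of these exactly requires the coordinator to learn every $l_{i,j}^t$, which is exactly the communication cost the paper tries to avoid.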
So the natural question is whether it is possible to do better: for example, can we get the communication of the bandit algorithm ($O(s)$ per round) with the regret bound of the expert algorithm ($O(\sqrt{\frac{\log n}{T}})$)? This paper also distinguishes between two communication models, message-passing and broadcast. In message-passing, sending a message from the coordinator to a server (or vice versa) has a cost of 1 (or the number of words). In broadcast, by contrast, we assume that the coordinator and the receivers are on a shared channel, so sending one word to *everyone* costs $1$. This paper gives algorithms and some lower bounds for the natural aggregation functions in this model. At a very high level, in message-passing they can only handle the sum aggregation function, but achieve regret like the experts setting and communication $O(n+s)$ per round. So if $n \leq s$, they get the communication of the bandit setting but the regret of the expert setting, getting the best of both worlds. They also get a similar bound for the max aggregation function, but only in the broadcast communication model. Strengths: Overall, I like this paper and would advocate acceptance, although it has a few weaknesses. But fundamentally, it's an interesting problem and they give reasonable results. - The motivating example of distributed online optimization seems quite reasonable to me, and more generally I like the idea of "evaluating different experts requires talking to many different servers". While there has been previous work on "distributed experts", this paper is (to the best of my knowledge) the first to study this particular setting. - The given algorithms are reasonably simple, which is great, but are not obvious. The main idea is that we need to evaluate the quality of an expert without actually querying every server about them. They do this probabilistically, getting an unbiased estimator of the cost of each expert very cheaply and then using this estimator.
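The "cheap unbiased estimator" idea at the end of the paragraph above can be illustrated with a generic inverse-probability-weighted subsampling sketch. This is an assumption-laden stand-in, not the paper's actual estimator: `estimate_sum_loss` and the sampling probability `q` are hypothetical names, and the sketch covers only the sum aggregation.

```python
import random

def estimate_sum_loss(server_losses, q, rng=random):
    """Unbiased estimate of sum_j l_j while querying each server only
    with probability q (expected communication ~ q*s instead of s).

    Each queried loss is rescaled by 1/q, so
    E[estimate] = sum_j q * (l_j / q) = sum_j l_j.
    """
    est = 0.0
    queried = 0
    for l in server_losses:
        if rng.random() < q:   # pay one unit of communication
            est += l / q       # inverse-probability weighting
            queried += 1
    return est, queried
```

Smaller `q` means less communication but higher variance, which is the kind of tradeoff the regret analysis has to control.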
This has some of my favorite type of analysis, where the math isn't "hard", but there are clever and non-obvious ideas. Weaknesses: In my opinion, there are two main weaknesses of this paper: some of the motivation and assumptions of the results, and some of the writing. - The most interesting (and first) result is for the sum aggregation and the message-passing model, but for the other aggregation functions they need the broadcast model. And I don't understand the motivation for this model. They discuss how it is a standard model of a one-hop wireless network, which is totally true, but they don't talk at all about why we might have this kind of distributed expert problem in a one-hop wireless network. And I can't think of any plausible story for this. Instead, the obvious motivation is their first one on distributed online optimization, which presumably is happening in a datacenter and so is a natural fit for the message-passing model. So I found it a bit disappointing that most of their results need a communication model that they do not really justify as being plausible for the problem that they're trying to solve. - The lower bound makes very strange assumptions. In particular, it assumes that the memory at each server is bounded by a function that depends *inversely* on both $T$ and the regret $R$. Why would this be? I don't understand why there would be any such dependence, particularly as a function of $T$. If anything, one might imagine the memory *growing* with $T$ (as the stream gets longer we buy more and more space on the server); I certainly don't understand why it would be *shrinking*. - The writing in this paper is quite poor, in a few different ways. - At a high level, the authors do a poor job of explaining what they're trying to do (get expert-style regret bounds with bandit-style communication bounds). I spent a long time thinking that they were trying to get improved dependence on $T$, rather than changing the dependence on $n$. 
I'll note that I usually think of $n$ (the number of experts) as being fixed and $T$ going to infinity, so I personally have never cared too much about the difference in the $n$ dependence between experts and bandits. It makes sense for $n$ to be large in the distributed online optimization setting, though, so I actually do believe that this is an interesting result. They just don't explain it well at all. - At a more detailed level related to the above point, they basically do not explain or give context to their results at all. They discuss their results starting on line 73, but basically just say "our results are in Tables 1, 2, and 3". They don't discuss how we should interpret these bounds at all -- are they good? bad? Is there room for improvement? What's tight and what's not? How do they relate to non-distributed classical settings? This is just crazy writing -- I've never seen such a lack of discussion about the results in any paper. This is one of the main reasons why I found it so hard to understand the importance of the results (see above). - At an even more detailed level, even the results tables are strange. First, they don't seem to actually prove these bounds anywhere. Look, for example, at the communication bound in Table 1 for DEWA-S. As far as I can tell, this bound does not appear in any theorem, corollary, or lemma in the rest of the paper. Instead, the bound for DEWA-S that they actually prove is Theorem 5.1 for the regret (they don't have a theorem statement about the communication anywhere, although they argue it in Section 4.2). So there is a mismatch between the bounds in the intro and the bounds in the paper. Of course, the bounds in the intro can be derived from the bounds that they actually prove, but for some reason they do not actually do this, or discuss why they phrase the bounds in these two different ways. 
- Similarly, if you look at the results table or the theorem statements, since they are hiding logarithmic terms the bounds for the high probability results are identical to the constant probability results. So what's the point of the constant probability results? Why not just give the high probability results as your results, and the constant probability algorithms as useful subroutines for the high probability case? Technical Quality: 3 Clarity: 2 Questions for Authors: - What is the argument for why the broadcast communication model is reasonable for this particular problem (not other distributed problems)? - Why does the lower bound assume memory that decreases with $R$ and $T$? What's the motivation for this, and why is it a plausible and interesting scenario? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: This is fine. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and address their main concerns below. **Q1. Motivation for the Broadcast Model** We thank the reviewer for acknowledging the setup for the message-passing model. For the broadcast model, there are fewer scenarios for the distributed experts problem than in the message-passing model, but there are still several important ones. Imagine a multi-level distributed online learning setting, where the upper levels use message passing while the lower level of the hierarchy is a one-hop wireless model. For example, consider an online federated learning setting in which the end servers are edge devices such as cell phones. Our algorithm can be used at different levels within the hierarchy, based on the different communication models, to achieve optimal performance. Additionally, satellite communication is another scenario in which the broadcast model is important. Beyond practical motivations, the broadcast model is a well-studied model [1, 2] across different domains including streaming [3, 4, 7], cryptography [5], and mechanism design [6]. We therefore introduce the broadcast model for the distributed experts problem and provide optimal communication and regret tradeoffs under various aggregation functions such as the maximum and the $l_p$ norm. **Q2. Lower Bound** We thank the reviewer for pointing out our memory assumption on the lower bound. First, we note that our memory bound is only imposed on downstream servers. We also note that our lower bound is against the weakest possible adversary (an oblivious adversary) and thus holds in the most general setting. Here we obtain an $O(\frac{n}{sTR^2}+1)$ bound; for the near-optimal regrets $R \in [O(\sqrt{\frac{\log{n}}{T}}), O(\sqrt{\frac{n\log{n}}{T}})]$ that we consider, the $\frac{1}{\sqrt{T}}$ factor in $R$ cancels with the $T$ in the $TR^2$ term. We will make the memory bound clearer in our revision to avoid unnecessary confusion.
**Q3. Writing** We thank the reviewer for pointing out the issues in our presentation. The communication bound for DEWA-S is derived in Section 4.2, but we will also include the formal theorem and proof of the communication bound for DEWA-S to align with DEWA-M and DEWA-L. We will also make the notation of the paper clearer and more consistent. **References:** 1. Braverman, Mark, and Rotem Oshman. "On information complexity in the broadcast model." Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing. 2015. 2. Kushilevitz, Eyal. "Communication complexity." Advances in Computers. Vol. 44. Elsevier, 1997. 331-360. 3. Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences, 58(1):137–147, 1999. 4. Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, and D. Sivakumar. An information statistics approach to data stream and communication complexity. J. Comput. Syst. Sci., 68(4):702–732, 2004. 5. Shahar Dobzinski, Noam Nisan, and Sigal Oren. Economic efficiency requires interaction. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, STOC ’14, pages 233–242, 2014. 6. Oded Goldreich. Secure multi-party computation. Unpublished working draft, 1998. 7. André Gronemeier. Asymptotically optimal lower bounds on the NIH-multi-party information complexity of the AND-function and disjointness. In Proc. 26th Symp. on Theor. Aspects of Comp. Sc. (STACS), pages 505–516, 2009. --- Rebuttal Comment 1.1: Comment: I've read the rebuttal, and have the following comments and questions. **Q1. Motivation for Broadcast** I totally agree with the authors that broadcast is a standard model in theoretical computer science; I have certainly written papers in a variety of different broadcast models. But I still don't really buy it for this particular setting.
In your example of a federated hierarchical system where only the bottom level is broadcast, it seems to me like if that's the actual setting that is motivating this, you should explicitly model it and argue that doing well in the bottom layer is sufficient (maybe because the cost incurred in upper layers is negligible)? My intuition is that all of the settings discussed in the rebuttal (and the paper) for broadcast "feel" like the stories that theorists tell ourselves in order to focus on the math without thinking too hard about the applications, rather than actual applications or motivations. And that is, to me, a weakness (although note I still like the paper and think it should be accepted). **Q2. Lower Bound** If I understand your rebuttal correctly, you're saying that due to your assumption on $R$ the memory upper bound is really something like $O(n / (s \log n))$ (for the smallest regret) or $O(1/(s\log n) + 1)$ (for the largest regret). Is that what you're saying? That makes a little more sense than my initial reaction. But this still seems to me to be a very strong assumption on the memory (particularly in the large regret case). Is there some justification for it? Why should I think that the servers are limited to this much memory? --- Reply to Comment 1.1.1: Title: Clarifications by Authors Comment: We thank the reviewer for additional feedback. For **Q1**, we thank the reviewer for acknowledging our theoretical contribution and agree that the broadcast model is mostly brought up in the field of theoretical computer science as a natural, though arguably less practical model. Nevertheless, we hope our derived algorithms can provide insight for settings such as distributed online learning in the satellite communication model, or for abstracting the communication costs that we may be interested in, e.g., the coordinator sends a single message to a router which lists multiple destinations, and we only count the cost incurred at the coordinator. 
For **Q2**, yes, the memory bound is $O(\frac{n}{s\log{n}}+1)$ for the smallest regret and $O(\frac{1}{s\log{n}}+1)$ for the largest regret, but note that the memory assumption is only required for our lower bound argument and is only needed for the downstream servers. The reason we pose such a memory bound in our lower bound proof is that the general case (without the memory bound) is a hard communication complexity problem, and we currently do not have a solution for the most general case. On the other hand, our memory bound always allows at least a constant amount of memory for each downstream server, which is practical in scenarios in which the memory budget for each server is limited. We hope that initiating the study of this problem in this model can inspire further work to close the gap in the fully general setting.
NeurIPS_2024_submissions_huggingface
2024
Mixture of Adversarial LoRAs: Boosting Robust Generalization in Meta-Tuning
Accept (poster)
Summary: The paper introduces Adversarial Meta-Tuning (AMT) to enhance the robust generalization of pre-trained models for out-of-domain few-shot learning by constructing a robust LoRAPool through meta-tuning Low-rank Adapters (LoRAs). The approach significantly outperforms previous methods in both clean and adversarial generalization across various benchmarks. Strengths: - This study is novel and significant. - The proposed method AMT achieves superior performance. - The writing and presentation are good and easy to follow. Weaknesses: - Lack of evaluation on unseen adversarial attacks. The authors claimed that they boosted adversarial generalization. Testing the generalization of robustness to unseen attacks would be necessary. - Evaluation of adversarial robustness based on PGD-10 is insufficient. [2] points out that PGD-10 is a weak attack and may suffer from a 'gradient masking' problem. It is necessary to present robustness under the AutoAttack (AA) [1]. [1] Croce, Francesco, and Matthias Hein. "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks." International conference on machine learning. PMLR, 2020. [2] Athalye, Anish, Nicholas Carlini, and David Wagner. "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples." International conference on machine learning. PMLR, 2018. Technical Quality: 3 Clarity: 3 Questions for Authors: - Could you show whether the robustness achieved by AMT generalizes to unseen attacks? - Could you show the robustness under the AA attack? - How does the performance change with varying the rank $R$ of each LoRA adapter, and with varying the number of LoRA adapters? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our appreciation for your constructive feedback on our manuscript. Below, we address each of your points comprehensively. Should there be any additional queries or clarifications needed, please feel free to let us know. #### Q1. Adversarial robustness evaluation of unseen attacks > - We appreciate the reviewer's invaluable suggestion. > - Our claim that "AMT boosts adversarial generalization" is mainly grounded in our results in Table 3, where adversarial robustness is evaluated under distribution shifts using the same meta-tuned LoRAPool on the source domain ImageNet. > - We conducted additional experiments to measure the generalization of adversarial robustness to unseen threat models, including $\ell_{\infty}$- and unseen $\ell_2$-bounded attacks with different perturbation budgets $\epsilon$. For each dataset, we sample 600 $5$-way $1$-shot tasks and generate adversarial examples using 10 steps of PGD with step size $\epsilon/10$ for $\ell_{\infty}$ and $\epsilon/8.5$ for $\ell_2$ attacks, respectively. The results, shown in Table 10 of the rebuttal PDF, demonstrate that our method AMT **significantly enhances adversarial robustness against unseen attacks under distribution shifts** for the pre-trained vision transformer. Also, compared to the previous adversarial few-shot learning method StyleAdv, our AMT does **not sacrifice in-domain generalization**. #### Q2. Adversarial robustness evaluation of AutoAttack under distribution shifts > - We appreciate this great suggestion, and in response, we have incorporated additional experiments to measure adversarial robustness against AutoAttack [1] under distribution shifts. Specifically, we ground our method and the baseline on the adversarially pre-trained ViT-Small [2] and use APGD with cross-entropy and targeted DLR loss, FAB-attack, and the Square Attack to generate adversarial examples on the 100 sampled $5$-way $1$-shot tasks on each dataset.
The $\ell_{\infty}$-bounded perturbation at the radius $\epsilon_{\infty}=4 / 255$ is adopted. The results, shown in the table below, demonstrate that our method **AMT consistently boosts adversarial generalization even under the stronger AutoAttack** in terms of both in-domain and out-of-domain robust accuracy. > > | Method | ImageNet | Omniglot | Aircraft | CUB | DTD | QuickDraw | Fungi | Flower | Traffic Sign | MSCOCO | Avg. | > | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | > | PM | 29.36 | 36.52 | 3.88 | 15.06 | 14.20 | 29.80 | 3.61 | 20.48 | 10.26 | 8.26 | 17.14 | > | AMT | 39.96 | 61.48 | 8.88 | 24.04 | 23.12 | 51.48 | 11.09 | 44.76 | 23.20 | 22.00 | 31.00 | > > [1] Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In ICML, 2020 > > [2] Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models. In NeurIPS, 2023. #### Q3: Hyper-parameter Study of the LoRA rank and the pool size > - We conduct the ablation analysis following the reviewer's valuable suggestion to investigate the impact of the rank of LoRA and the size of the LoRAPool. We also report the mean and variance of perturbation budget candidates $\epsilon$ during adversarial meta-tuning. The results, as illustrated in Tables 5 and 7 of the rebuttal PDF, indicate that (1) our model is not very sensitive to the rank of LoRA; (2) **a sufficiently diverse pool, rather than simply the largest one, leads to improvements in performance**.
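The $\ell_\infty$ PGD protocol quoted in this rebuttal (10 steps with step size $\epsilon/10$) can be sketched generically. This is a minimal illustration, not the paper's implementation: `loss_grad` is a hypothetical stand-in for the gradient of the attacked model's loss with respect to the input, and the toy quadratic loss in the usage line is not the paper's setup.

```python
import numpy as np

def pgd_linf(x, loss_grad, eps, steps=10):
    """Projected gradient ascent inside an l_inf ball of radius eps,
    with step size eps/steps (the schedule quoted in the rebuttal).

    loss_grad(x_adv) returns d(loss)/d(input) at x_adv; in a real attack
    this would come from backpropagation through the model.
    """
    x = np.asarray(x, dtype=float)
    x_adv = x.copy()
    alpha = eps / steps
    for _ in range(steps):
        g = loss_grad(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the l_inf ball
    return x_adv

# Toy usage: maximize 0.5*||x||^2, whose gradient is x itself.
adv = pgd_linf(np.array([0.1, -0.2]), lambda z: z, eps=0.05)
```

In the toy run, each coordinate moves by $\epsilon/\text{steps}$ per iteration away from zero until it reaches the $\pm\epsilon$ boundary of the ball.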
Summary: This paper proposes to tackle the problem of improving the generalization of pre-trained models to data drawn from a different distribution than the training data. To achieve this, the authors propose using meta-learning to train the models. Since they consider a single source domain, they adversarially generate query samples while the support samples directly come from the source domain data. The inner gradient is computed on the source, while the meta-gradients are computed on adversarial examples. They ground their approach by citing references that suggest an adversarially robust model also generalizes better on OOD data. Unlike conventional meta-learning approaches, which update the entire model parameters, the authors propose updating models through multiple LoRA steps, similar to FLUTE, which only updates the batch norm parameters. Each LoRA is computed using a query set of adversarial examples generated through a fixed attack budget. At test time, they identify the right combination of LoRAs based on simple cosine similarity measures between features of prototypes for each class (computed during training) and the OOD sample and its corresponding labels. They propose to look at intra-class similarity and inter-class diversity to this end. Furthermore, instead of perturbing only the images, they also perturb the singular values/vectors of the model weights (gradients) to strengthen the principal components (based on the observation that singular vectors undergo significant change during training). They evaluated their method on multiple datasets and demonstrated good performance gains compared to other methods such as StyleAdv, which alters the style of query samples using AdaIn instead of adversarial attacks. They also conducted ablation studies and demonstrated the benefits of various design choices. Strengths: The paper is very well motivated, and the authors ground their approach appropriately, citing a large collection of work. 
Each piece is well-motivated and clearly written. Perturbing singular vectors when performing LoRA is a novel idea, and the results indicate the benefits of this approach. Weaknesses: A primary weakness of the approach is its lack of comparison to another popular method for fine-tuning adaptation of foundation models—adapters. Adapters introduce new learnable parameters in each transformer block and update only them using the new data while freezing everything else. Some references include [MiMi](https://openaccess.thecvf.com/content/WACV2024/papers/Marouf_Mini_but_Mighty_Finetuning_ViTs_With_Mini_Adapters_WACV_2024_paper.pdf) and [AdaptFormer](https://openaccess.thecvf.com/content/WACV2024/papers/Marouf_Mini_but_Mighty_Finetuning_ViTs_With_Mini_Adapters_WACV_2024_paper.pdf). There are many more references within these papers and in the [scholar link](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=ViT+adapter&btnG=). The current paper does not consider this PEFT technique at all. It would be beneficial to see a comparison or discussion section with adapters. Importantly, adapters are closely related to the approach in FLUTE too and instead of a LoRA pool, one could have an adapter pool. Secondly, the adversarial attack for images is the standard attack model. If all that is needed is better and harder augmentations, papers like [Improving Diversity with Adversarially Learned Transformations for Domain Generalization](https://arxiv.org/abs/2206.07736) offer a better alternative and they also work with a single source domain. Other options to consider include augmentation techniques such as PixMix and Rand Conv, where the tradeoff between ID and OOD accuracies is well established, and they don’t require additional steps such as adversarial attacks. Additionally, as an improvement, from the OOD detection literature this paper could use virtual outlier synthesis (VOS). VOS uses a per-class GMM in the latent space to synthesize feature space outliers. 
These outliers can then be used in training instead of relying solely on adversarial attacks. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see weaknesses section Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Provided limitations are appropriate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
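The test-time selection idea summarized in the review above, picking LoRAs via cosine similarity between class-prototype features, scored by intra-class similarity and inter-class diversity, might be sketched as follows. The function names and the exact `intra - inter` scoring rule are illustrative assumptions, not the paper's precise criteria.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def select_lora(support_labels, lora_features):
    """Pick the LoRA whose feature space is most discriminative on the
    support set: high similarity of samples to their class prototype
    (intra-class), low similarity between prototypes (inter-class).

    lora_features maps a LoRA id to that LoRA's support features [n, d].
    """
    labels = np.asarray(support_labels)
    best_id, best_score = None, -np.inf
    for lora_id, feats in lora_features.items():
        protos = {c: feats[labels == c].mean(axis=0) for c in set(support_labels)}
        intra = float(np.mean([cosine(f, protos[int(c)])
                               for f, c in zip(feats, labels)]))
        classes = sorted(protos)
        pairs = [cosine(protos[a], protos[b])
                 for i, a in enumerate(classes) for b in classes[i + 1:]]
        inter = float(np.mean(pairs)) if pairs else 0.0
        score = intra - inter  # discriminative feature spaces score high
        if score > best_score:
            best_id, best_score = lora_id, score
    return best_id
```

A LoRA whose features collapse the classes together gets a low score, so the well-separated candidate is preferred.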
Rebuttal 1: Rebuttal: We appreciate your constructive comments on our paper. Please kindly find our response to your comments below. We hope that our response satisfactorily addresses the issues you raised. Please feel free to let us know if you have any additional concerns or questions. #### Q1. More discussions concerning other PEFT techniques during meta-tuning > - We sincerely thank the reviewer for the constructive comments and will incorporate the following discussion into our revised manuscript. > - We have conducted additional experiments to adversarially meta-tune full parameters [3], FiLM [1] (after LN layers, since there are no BN layers in ViT), Adapter [2], and LoRA. The attack budget is randomly sampled for each training task from the candidate pool, which is the same as in AMT. The results, shown in the table below, demonstrate that the performance of Adapter and LoRA is comparable in the context of adversarial meta-tuning and outperforms full or FiLM-based meta-tuning. > - We would like to highlight that, compared with FLUTE, which combines multiple FiLMs with a parametric classifier as the initialization for FiLM-based test-time fine-tuning, our LoRAPool with its non-parametric merging mechanism adaptively integrates the LoRAPool into pre-trained weights, and thus provides better flexibility and compatibility with advanced test-time fine-tuning techniques to further improve the few-shot learning performances. For example, Table 1 in the work [4] has demonstrated that both LoRA- and Adapter-based test-time fine-tuning are not optimal choices for vision transformers. > - We are still awaiting the results of FLUTE-style test-time fine-tuning, which we will supplement as soon as possible during the discussion period. > > | Adversarial Meta-tuning | Test-time merge | Test-time fine-tuning | ImageNet | Omniglot | Aircraft | CUB | DTD | QuickDraw | Fungi | Flower | Traffic Sign | MSCOCO | Avg. 
| > | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | > | Full | - | - | 64.31 | 62.81 | 38.46 | 76.23 | 60.42 | 57.99 | 56.31 | 81.80 | 57.31 | 54.22 | 60.98 | > | Single FiLM | - | - | 63.23 | 63.41 | 37.67 | 74.41 | 59.29 | 57.60 | 55.23 | 80.05 | 58.86 | 54.57 | 60.43 | > | Single Adapter | - | - | 64.68 | 65.32 | 38.43 | 75.37 | 59.68 | 58.35 | 55.90 | 81.69 | 58.31 | 54.05 | 61.18 | > | Single LoRA | - | - | 63.91 | 65.05 | 39.44 | 76.95 | 58.46 | 58.35 | 56.39 | 82.29 | 59.56 | 53.69 | 61.41 | > | FiLM Pool | classifier | FiLM | | | | | | | | | | | | > | Adapter Pool | classifier | LoRA | | | | | | | | | | | | > | LoRAPool | classifier | - | 67.22 | 64.60 | 37.99 | 77.96 | 62.65 | 57.11 | 56.62 | 80.23 | 58.36 | 56.10 | 61.89 | > | LoRAPool | criteria | - | 68.80 | 71.95 | 42.90 | 79.95 | 62.99 | 59.62 | 59.06 | 85.37 | 63.78 | 57.14 | 65.16 | > | LoRAPool | criteria | LoRA | | | | | | | | | | | | > | LoRAPool | criteria | PMF [3] | 68.80 | 77.83 | 42.90 | 79.95 | 63.77 | 63.72 | 59.06 | 85.37 | 63.87 | 57.37 | 66.26| > | LoRAPool | criteria | ATTNSCALE [4] | 68.80 | 79.43 | 42.90 | 79.95 | 63.08 | 65.66 | 59.06 | 85.37 | 64.13 | 58.24 | 66.66| > > [1] Learning a Universal Template for Few-shot Dataset Generalization. In ICML, 2021 > > [2] Adaptformer: Adapting vision transformers for scalable visual recognition. In NeurIPS, 2022 > > [3] Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference. In CVPR, 2022 > > [4] Strong baselines for parameter-efficient few-shot fine-tuning. In AAAI, 2024 #### Q2. More discussions concerning other data augmentation techniques > - Thank you for your insightful suggestion. We have conducted additional comparisons for AMT against other data augmentation methods, including ALT [1] and Rand Conv [2], as suggested. 
> - Our experiments were conducted under fair conditions using a single LoRA (pool size $P=1$) during meta-tuning for all methods. Specifically, following ALT [1], we employed a learnable adversarial transformation network consisting of 5 convolutional layers with a kernel size of 3 and LeakyReLU activation. The adversarial learning rate was set to $5 \times 10^{-5}$, with 10 adversarial steps. For the methods employing an attack candidate pool, we randomly select the attack budget from the candidates for each training task, with $\epsilon$ values of {8/255, 6/255, 0.1/255, 0.01/255} for our method, and step numbers of {1, 3, 5, 10} for ALT. > - The results in Table 9 of the rebuttal PDF demonstrate that static data augmentation cannot effectively simulate the large domain shift required for robust generalization across diverse datasets (e.g., Omniglot). Our AMT with standard pixel-level adversarial attacks achieves comparable or superior generalization improvements for pre-trained vision transformers across OOD tasks. #### Q3. Other data augmentation techniques deserve future work > We sincerely appreciate the reviewer's invaluable suggestion. We would like to highlight that we present a framework that constructs a robust LoRAPool with test-time merging to significantly boost the robust generalization of the pre-trained vision transformer. In this context, we use adversarial attacks, characterized by the size of the perturbation budget, as an example to mimic different distributional shifts and construct different LoRAs. Our experiments demonstrate the effectiveness of using adversarial training. The focus and contribution of this paper are the overall framework rather than finding the optimal data augmentation. In addition, we believe the optimal data augmentation also depends on the OOD test set used. 
Nevertheless, we agree that data augmentation is important for this problem and leave the investigation of better training-time augmentation techniques (e.g., Virtual Outlier Synthesis (VOS)) as future work. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns. I wanted to know whether the experiments with the adapter pool have finished. I commend the authors for running experiments in such a short time. I would also recommend discussing other PEFTs in the main paper. Given that most of my concerns are addressed, I am going to increase the score. Please do comment when you have the adapter pool results too. --- Reply to Comment 1.1.1: Title: Q: Gratitude for Reviewer VJpc's prompt response, and updated results of FLUTE-style test-time fine-tuning. Comment: > - We sincerely thank the reviewer for the prompt response and for raising the score. > - We will definitely follow the reviewer's suggestion to include discussions about other PEFTs in our related work. > - Besides, please find the FiLM/Adapter pool results below, which we will also include in our revision. > - Regarding the FiLM pool [1] and Adapter pool [2], we have conducted additional experiments by setting the pool size to 4 and adopting the same attack candidate pool used in AMT during adversarial meta-tuning. To estimate the combination coefficients, we follow the method outlined in FLUTE [1]. Specifically, a classifier is trained in a separate stage to predict which FiLM or Adapter the input belongs to, taking as input a batch of adversarial examples generated by attacking the different FiLMs or Adapters in the pool. > - In the following table: > - The superiority of the FiLM/Adapter Pool over a single FiLM/Adapter signifies that our adversarial pool design indeed contributes to the out-of-distribution performance without compromising in-domain accuracy. 
> - Ours with the additional (1) perturbation in singular values/vectors and (2) non-parametric test-time merging mechanism utilizing the criteria (i.e., Line 204-208) enjoys significant performance improvement over FiLM/Adapter Pool. > - Compared with the FLUTE-style test-time fine-tuning strategy that requires further tuning of pool components (either a FiLM or an adapter), our framework shows better compatibility with different test-time fine-tuning approaches, including LoRA tuning, full fine-tuning [3], and attention scaling [4]. > > | Adversarial Meta-tuning | Test-time merge| Test-time fine-tuning | ImageNet | Omniglot | Aircraft | CUB | DTD | QuickDraw | Fungi | Flower | Traffic Sign | MSCOCO | Avg. | > | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | > | Full | - | - | 64.31 | 62.81 | 38.46 | 76.23 | 60.42 | 57.99 | 56.31 | 81.80 | 57.31 | 54.22 | 60.98 | > | Single FiLM [1] | - | - | 63.23 | 63.41 | 37.67 | 74.41 | 59.29 | 57.60 | 55.23 | 80.05 | 58.86 | 54.57 | 60.43 | > | Single Adapter [2] | - | - | 64.68 | 65.32 | 38.43 | 75.37 | 59.68 | 58.35 | 55.90 | 81.69 | 58.31 | 54.05 | 61.18 | > | Single LoRA | - | - | 63.91 | 65.05 | 39.44 | 76.95 | 58.46 | 58.35 | 56.39 | 82.29 | 59.56 | 53.69 | 61.41 | > | FiLM Pool | classifier | FiLM | 67.45 | 65.42 | 37.58 | 75.02 | 62.63 | 59.22 | 55.09 | 79.00 | 60.40 | 55.69 | 61.75 | > | Adapter Pool | classifier | Adapter | 67.48 | 65.33 | 38.58 | 80.16 | 62.76 | 58.09 | 57.63 | 75.23 | 57.41 | 54.32 | 61.70| > | LoRAPool | criteria | - | 68.80 | 71.95 | 42.90 | 79.95 | 62.99 | 59.62 | 59.06 | 85.37 | 63.78 | 57.14 | 65.16 | > | LoRAPool | criteria | LoRA | 68.80 | 80.00 | 43.49 | 79.95 | 62.99 | 59.62 | 59.06 | 85.37 | 66.42 | 57.14 | 66.28 | > | LoRAPool | criteria | PMF [3] | 68.80 | 77.83 | 42.90 | 79.95 | 63.77 | 63.72 | 59.06 | 85.37 | 63.87 | 57.37 | 66.26| > | LoRAPool | criteria | ATTNSCALE [4] | 68.80 | 79.43 | 42.90 | 79.95 | 63.08 | 65.66 | 59.06 | 85.37 | 64.13 | 
58.24 | 66.66| > > [1] Learning a Universal Template for Few-shot Dataset Generalization. In ICML, 2021 > > [2] Adaptformer: Adapting vision transformers for scalable visual recognition. In NeurIPS, 2022 > > [3] Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference. In CVPR, 2022 > > [4] Strong baselines for parameter-efficient few-shot fine-tuning. In AAAI, 2024
Summary: This paper deals with how to effectively adapt a pretrained model to cross-domain few-shot learning tasks. It focuses on both the adversarial robustness and the clean accuracy of the trained model. To realize this goal, it utilizes adversarially trained LoRA to adapt the pretrained model. Specifically, it utilizes SAM to determine the worst-case perturbations used to update the matrices A and B of LoRA, which are initialized to approximate the modification of the principal singular values and vectors of the original weight matrix in the pretrained model. Further, it designs LoRAPool. LoRAPool consists of a series of LoRA modules, each corresponding to a different robustness level controlled by the size of the adversarial budget. Based on that, the authors design a test-time merging strategy to adaptively combine all the LoRAs to optimize test-time task performance. Extensive experiments verify the effectiveness of the proposed method. Strengths: - This paper is generally easy to follow and understand. - The idea is reasonable. - The experimental results are good. Weaknesses: - It is unclear what key factors make the proposed framework outperform previous works. At the technical level, the operations and strategies used in the paper are not that new, e.g., using LoRA to finetune the pretrained model, using SAM to improve the generalization ability of the model, test-time ensembling, etc. I just wonder what makes the proposed framework most distinct from previous works. The authors should discuss this more clearly to give readers some insights. - Some technical details are not clear. For example, in Eq. (7), what does top_k mean? That is, what is the operational detail of top_k? In the experiment part, the authors compare their method with previous works under two settings, i.e., the tuning-free setting and the test-time fine-tuning setting. What is the difference between those two settings? 
In Lines 259-260, what is the exact form of the variant that removes the adversarial perturbations on singular values and vectors? Does it mean not using SAM to update A and B of LoRA? - The authors compare the adversarial robustness and the clean accuracy separately in two tables. I just wonder whether those two numbers are achieved simultaneously by the same model, or whether they are evaluated with two models, where one tends to be more robust and the other more accurate (this can be done by employing different $\lambda_{adv}$). I would like to see how the proposed method balances these two evaluations. - Missing key ablations. The authors design a special initialization so that the update of A and B approximately represents modifications of the singular values and vectors of the original weight matrix. I just wonder how the performance would be without such an initialization, i.e., if we simply add to the pretrained model a LoRA initialized as a zero matrix. I would like to see the comparison between the plain LoRA initialization and the proposed initialization. Otherwise, the effectiveness of the proposed initialization cannot be demonstrated. - Lacks a hyper-parameter study. Technical Quality: 3 Clarity: 2 Questions for Authors: See the weakness part. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you sincerely for your thoughtful feedback on our work. Below, we provide detailed explanations for your concerns. Please do not hesitate to let us know if you have any further questions. #### Q1. Technical contributions concerning adversarial singular value and vector perturbation and robust LoRAPool > - We acknowledge the reviewer's scrutiny of our technical contributions and would like to clarify that our work is definitely not a simple summation of existing techniques. The different components of our method are closely related and tailored to our goal: to boost the robust generalization of pre-trained models in out-of-domain few-shot learning. > - Specifically, inspired by low-rank strategies for training transformers, we do not directly perturb the weight matrices as done in SAM; instead, **our perturbations are in the spectral space**: we perturb the singular values and the singular vectors to boost the performance. > - In addition, we **use different adversarial perturbations (of either different types or different magnitudes) to generate a LoRA pool** consisting of multiple low-rank structures, and use adaptive merging to construct the best-adapted parameters for different out-of-distribution tasks at test time. We acknowledge that both SAM and LoRA motivate our solution, but we need to clarify that our algorithm is significantly different from vanilla SAM or LoRA and much better adapted to the problem we aim to solve. Below are additional comparisons to validate the effectiveness of our novel solutions. > - The results, reported in Table 2 of the rebuttal PDF, demonstrate that **our method outperforms SAM by 1.56% on average**. > - As shown in Table 1 of the rebuttal PDF, the **robust LoRAPool with perturbation-specific parameters effectively avoids interference between attacks** and significantly enhances OOD generalization without compromising ID performance. 
For the uniform strategy, we adopt the average attack strength ($\epsilon=3.5$) of the candidate configurations and meta-tune 4 LoRAs with different seeds. The random strategy means we randomly sample one attack budget $\epsilon$ for each training task from the same attack candidate configurations. > - We also humbly highlight that our contribution of **singular value trimming and a non-parametric test-time merging mechanism is also novel and effective for few-shot learning**, as supported by the results in Tables 4 and 5 of the main paper. #### Q2. Clarification of the top_k operation > We apologize for any potential misunderstanding. The operation top_k before the softmax in Eq. (7) refers to selecting the top $k$ LoRA modules with the largest score $-\beta (1- (\lambda C-(1-\lambda)V))$; the remaining LoRAs are deactivated for the current task. #### Q3. Clarification of the tuning-free setting and test-time fine-tuning setting > We apologize for any potential misunderstanding. > - The tuning-free setting does not involve additional training on the support set. We adaptively merge the meta-tuned LoRAs into the pre-trained model via the formula in line 206 and perform prototype-based classification. In contrast, the test-time fine-tuning setting allows training on the support set with different fine-tuning methods, such as fine-tuning full parameters (PMF) or partial parameters (ATTNSCALE). > - We evaluate under both settings to demonstrate AMT's (1) effectiveness in learning a well-generalized initialization for pre-trained models even without time-consuming fine-tuning, and (2) compatibility with advanced fine-tuning techniques to further improve few-shot learning performance. #### Q4. Clarification of the variant removing the adversarial perturbations on singular values and vectors > We appreciate the reviewer's feedback. 
When removing the adversarial perturbations on singular values and vectors, we inject the worst-case perturbations into the input only, without perturbing the singular values and vectors of LoRA. Other configurations remain the same as in the final version of AMT. We would like to humbly clarify that this adjustment does not mean SAM is dropped altogether. #### Q5. Clarification of the evaluated model > We appreciate the reviewer's invaluable feedback and clarify that all results presented in Tables 1, 2, and 3 are achieved simultaneously by the same model, using a robust LoRAPool meta-tuned on the source domain ImageNet. > - We would like to highlight that, thanks to the diverse design of the adversarial LoRAPool, our **AMT improves the trade-offs between adversarial robustness and clean accuracy, as well as between ID and OOD generalization**, as supported by the results compared to the clean meta-tuning method (PMF) and the adversarial few-shot learning method (StyleAdv) in Tables 1, 2, and 3. > - We conducted additional experiments employing different $\lambda_{adv}$. The results, in Table 3 of the rebuttal PDF, demonstrate that $\lambda_{adv}$ can be used to adjust the preference of our robust LoRAPool toward either clean or adversarial environments. #### Q6. Ablation studies to support the improvement from perturbations on singular values and vectors > We appreciate the reviewer's insightful suggestion and conducted additional comparisons pitting AMT against the original LoRA initialization, for which we incorporated adversarial perturbations in the weight space. The results in Table 4 of the rebuttal PDF demonstrate that AMT achieves superior performance, highlighting the effectiveness of our adversarial singular value and vector perturbation in boosting the model's generalization capability. #### Q7. Hyper-parameter study > - We have conducted supplementary experiments. 
The results, as illustrated in Tables 5, 6, 7, and 8 of the rebuttal PDF, indicate that (1) our model is not very sensitive to the rank of LoRA or the number of attack steps; and (2) **a sufficiently diverse but large pool leads to improvements in performance**. The results also justify our choice of top-2. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: Thanks for the authors' detailed response. The response clarifies most of my concerns. I would like to raise my score. I hope the authors can include all the necessary details and results and polish their paper to make it clearer.
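The top_k-then-softmax merging clarified in Q2 above can be sketched as follows. This is an illustrative reading, not the paper's exact Eq. (7): `scores` stands in for the per-LoRA score $-\beta (1- (\lambda C-(1-\lambda)V))$, and the merge of low-rank deltas into the frozen weights is an assumed formulation.

```python
import numpy as np

def merge_lorapool(W, loras, scores, k=2):
    """Merge the top-k scoring LoRA deltas into a frozen weight matrix.

    W      : (d_out, d_in) pre-trained weight, kept frozen
    loras  : list of (A, B) pairs; each contributes a low-rank delta B @ A
    scores : per-LoRA suitability scores for the current task
    k      : number of LoRAs kept active; the rest are deactivated (the top_k step)
    """
    scores = np.asarray(scores, dtype=float)
    top = np.argsort(scores)[-k:]                 # indices of the k largest scores
    w = np.exp(scores[top] - scores[top].max())   # softmax over the kept scores only
    w /= w.sum()
    delta = sum(wi * (loras[i][1] @ loras[i][0]) for wi, i in zip(w, top))
    return W + delta                              # task-adapted weights

# A LoRA outside the top-k contributes nothing, however large its delta:
W = np.zeros((2, 2))
loras = [
    (np.array([[100.0, 0.0]]), np.array([[1.0], [0.0]])),  # huge delta, low score
    (np.array([[1.0, 0.0]]), np.array([[1.0], [0.0]])),
    (np.array([[0.0, 1.0]]), np.array([[0.0], [1.0]])),
]
merged = merge_lorapool(W, loras, scores=[0.0, 2.0, 2.0], k=2)
# merged == [[0.5, 0.0], [0.0, 0.5]]: equal softmax weights over the two kept LoRAs
```

Note how deactivating all but the top-k LoRAs keeps the merge a single dense matrix, so test-time inference pays no extra cost per deactivated pool member.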
Summary: The paper proposes a method for training LoRAs for vision transformer models such that the model easily adapts to an unseen few-shot classification task. The goal is to make these LoRAs robust to adversarial noise. Strengths: The experiments seem comprehensive and show impressive performance. The idea of perturbing eigenvectors of weights instead of the weights themselves seems interesting to me. Further, the idea of training and merging multiple LoRAs that are robust to different levels of adversarial noise is interesting. Weaknesses: - What are the instances in the medical and self-driving domains "where encountering novel and adversarial environments is common" (line 26)? - The writing can be improved a lot. a. I struggled to understand how the start of the second sentence of the introduction (line 15) is connected to the previous sentence. b. Same with the first two sentences of the next paragraph (line 21). I don't see why "However" is necessary on line 23. c. The third, fourth, and fifth sentences of the third paragraph (lines 29, 31, and 33) are not connected at all. d. What is "double strongest perturbation"? (line 45) e. This is a prime example of a sentence that needs to be simplified: "To robustify the learned meta-knowledge, adversarial meta-tuning adopts the worst-case optimization by injecting the adversarial perturbation δ to the input x through the minimax strategy" (lines 130-132). I consider myself a decently well-educated researcher, and the sentence doesn't tell me anything. The paper seemed promising to me, but I had to parse out a lot of stuff to get to the interesting part. Hence, I couldn't spend time properly understanding the method and results. For now, I am keeping accept as the results seem promising. Technical Quality: 2 Clarity: 1 Questions for Authors: See above. Confidence: 2 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The limitations section in the paper seems adequate. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for providing valuable feedback. We detail our response below point by point. Please kindly let us know whether you have any further concerns. #### Q1. Instances of novel and adversarial environments > We appreciate the reviewer's great feedback and agree that a discussion of instances of adversarial environments in real-life applications will strengthen the motivation of this work. We will incorporate the discussion and the related literature into our revised manuscript. > - **Instances of novel environments**: In real-world deployments, deep learning models in both medical and self-driving domains often encounter novel environments and suffer from distribution shifts between training and deployment data, including unseen pathologies [1], variations in hospital equipment and protocols [1], and diverse urban road scenarios [2]. > - **Instances of vulnerability to adversarial attacks**: Furthermore, deep neural networks are vulnerable to physical-world adversarial attacks, which can lead to harmful diagnoses or unsafe driving decisions. For instance, adversaries can perturb sensor signals to deceive 2D or 3D medical imaging models [3, 4], manipulate traffic signs with malicious stickers to mislead autopilot systems [5], or fool the autopilot into following fake lane lines [6] or making unsafe trajectories [7]. > > [1] Understanding silent failures in medical image classification. In MICCAI, 2023. > > [2] Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR, 2012. > > [3] Self-adaptive adversarial training for robust medical segmentation. In MICCAI, 2023. > > [4] Adversarial attacks and defenses on AI in medical imaging informatics: A survey. In Expert Systems with Applications, 2022. > > [5] Robust Physical-World Attacks on Deep Learning Visual Classification. In CVPR, 2018. > > [6] Dirty road can attack: Security of deep learning based automated lane centering under physical-world attack. 
In USENIX Security, 2021 > > [7] On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles. In CVPR, 2022. #### Q2. Clarification of writing > We apologize for the ambiguity and thank the reviewer for valuable feedback on our manuscript. We have carefully considered each of the points and propose the following revisions to enhance the coherence and readability of our paper: > > - a) Connection with the previous sentence (line 15): We will clarify the transition between sentences. Specifically, we will revise the first two sentences as follows: "Building upon the capabilities demonstrated by large-scale pre-trained vision transformers in zero-shot scenarios, a few annotated examples can be leveraged to further enhance generalization capability, achieving impressive performance across a broad spectrum of downstream tasks." > > - b) Transition in Paragraph (line 21): We acknowledge the abruptness of the transition and the unnecessary use of "However." We will revise as follows: "While few studies have explored how meta-tuning can maintain high performance under these conditions at the same time, it is crucial for real-world applications..." > > - c) Coherence in Third Paragraph (lines 29, 31, 33): We will revise these sentences to ensure a more cohesive flow of ideas. > > - d) Clarification of "double strongest perturbation" (line 45): We inject the strongest perturbations twice by initially attacking the input and subsequently perturbing the singular values and vectors with adversarial examples. > > - e) Simplification of the sentence (lines 130-132): We will simplify the sentence to "To robustify the learned meta-knowledge, adversarial meta-tuning injects the worst-case adversarial perturbation $\delta$ to the input $x$."
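The initialization discussed in this thread (LoRA factors A and B set so that updates to B·A approximately modify the principal singular values and vectors of the pre-trained weight, the target of the paper's "double" perturbation) can be sketched as follows. This assumes a PiSSA-style SVD construction; the paper's exact formula may differ.

```python
import numpy as np

def spectral_lora_init(W, rank):
    """Initialize LoRA factors A, B from the top singular triplets of W,
    so that updates to B @ A act on the principal spectral components.
    (A sketch under an assumed construction, not the paper's exact one.)"""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_s = np.sqrt(S[:rank])
    B = U[:, :rank] * sqrt_s          # (d_out, rank): scaled left singular vectors
    A = sqrt_s[:, None] * Vt[:rank]   # (rank, d_in): scaled right singular vectors
    return A, B

# Sanity check: B @ A is exactly the rank-r truncated SVD of W.
W = np.random.default_rng(0).standard_normal((8, 6))
A, B = spectral_lora_init(W, rank=6)
assert np.allclose(B @ A, W)   # full rank recovers W itself
```

With `rank < min(W.shape)`, `B @ A` is the best rank-`rank` approximation of `W`, so adversarially perturbing A and B during meta-tuning amounts to perturbing the dominant spectral directions rather than raw weights.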
Rebuttal 1: Rebuttal: # Summary of changes We extend our sincere thanks to the reviewers for their constructive feedback. We have summarized additional experiments and clarification made during the rebuttal period as follows. **Clarification:** 1. Illustrated our technical contributions concerning adversarial singular value and vector perturbation, robust LoRAPool, and test-time merging mechanism. (Reviewer cLZu Q1 and Q6) 2. Clarified the writing and added instances of novel and adversarial environments. (Reviewer ek5A Q1 and Q2) 3. Clarified the top-k operation. (Reviewer cLZu Q2) 4. Clarified the experiment details and evaluation settings. (Reviewer cLZu Q3, Q4, Q5) 5. Added discussion of other parameter-efficient tuning methods and data augmentation techniques. (Reviewer VJpc Q1 and Q2) **Additional Experiments:** 1. Conducted a comparative analysis between our different design choices of proposed AMT. (Reviewer cLZu Q1 and Q6) 2. Conducted the hyper-parameter study of LoRA rank, pool size, and loss coefficient (Reviewer cLZu Q5, Q7, Reviewer cdgf Q3) 3. Analyzed the robust generalization for unseen attacks and AutoAttack under distribution shifts. (Reviewer cdgf Q1 and Q2) 4. Conducted a comparative analysis with other parameter-efficient tuning methods and data augmentation techniques. (Reviewer VJpc Q1 and Q2) Pdf: /pdf/2a4a23e85c1466ad5990826d5027335f0bd0b221.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Mitigating Biases in Blackbox Feature Extractors for Image Classification Tasks
Accept (poster)
Summary: The paper addresses the critical issue of biases in blackbox feature extractors used for image classification tasks. These biases can impact the performance of models when adapted for downstream tasks. The authors investigate existing debiasing techniques and propose a novel method using a clustering-based adaptive margin loss, which does not require prior knowledge of bias attributes. Their experiments demonstrate the effectiveness of this approach across multiple benchmarks, highlighting its practical applicability in scenarios where feature extractor weights are not accessible. Strengths: (1) The paper shows originality and practical relevance. It tackles a challenging problem with a novel approach that is both effective and efficient. The clustering-based adaptive margin loss is a creative solution that demonstrates substantial improvements over existing methods. The thorough experimental validation across multiple benchmarks further strengthens the paper's contributions. (2) The paper is well-written and clearly presented. The introduction effectively sets the stage for the problem being addressed, and the methodology is detailed comprehensively. Weaknesses: (1) The paper could benefit from a deeper theoretical analysis of why the proposed method works, which would enhance its scientific rigor. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) How do the clustering-based adaptive margin losses compare to other adaptive loss functions in terms of computational complexity? (2) Could the proposed method be applied to other types of pre-trained models beyond image classification tasks? If so, what adjustments would be necessary? (3) How sensitive is the performance of the proposed method to the choice of clustering algorithm and the number of clusters? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: (1) A more detailed discussion on the scalability of their approach to extremely large datasets and models would be beneficial. (2) While the method shows effectiveness in mitigating biases, exploring its impact on model interpretability could provide valuable insights. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. We address the concerns below: **Deeper analysis of the approach.** We refer the reviewer to the ArcFace paper [6] for details on the margin loss. Here, we show how the adaptive nature of the margin loss aids the learning of the bias-conflicting samples. As shown in [6], the margin loss increases the angle between a sample's MLP feature ($\hat{f}$) and its ground truth weight vector. The degree of increase depends on which cluster the sample belongs to, and on the inverse frequency of the ground truth class in that cluster. Let $x$ be an image in the bias-conflicting group, $m$ be its corresponding margin penalty (eq 2 and L248 in the main paper), and $\theta$ be the angle between the weight vector corresponding to the ground truth class and the feature $\hat{f}$. As $x$ belongs to a bias-conflicting group, the penalty $m$ is high (assuming that the clusters obtained from the features are good approximations of the different groups in the training set), which means that the angle between the feature vector and its weight vector increases considerably in the softmax term inside the margin loss in eq 4 (main paper) (recall that $\cos(\theta+m) < \cos(\theta)$ for $m>0$). Thus the loss value rises for the bias-conflicting samples (owing to the logarithm and the negative sign in eq 4). For the bias-aligned samples, as $m$ is smaller (since such samples belong to the majority class in the cluster), the degree of increase in the loss is lower than that for the bias-conflicting samples. This is how the adaptive margins help mitigate biases. For further evidence of the usefulness of the adaptive margin loss, we plot the training loss for each group (summed across all epochs and normalized by group size) in Waterbirds and CelebA for an ERM model and our method in Figure 2 in the rebuttal pdf. 
While both models have higher losses for the minority groups (for the majority groups, the loss values are negligible and similar), the proposed method's minority loss values are much higher than those of the ERM model.

**Computational complexity of competing methods** See the tables below for the analysis of computational complexity.

| Time | ERM | BPA | LfF | CoAda | Ours |
|:---:|:---:|:---:|:---:|:---:|:---:|
| WaterBirds | 22s | 48s | 44s | 615s | 65s |
| CelebA | 114s | 400s | 832s | 127 mins | 803s |

| RAM | ERM | BPA | LfF | CoAda | Ours |
|:---:|:---:|:---:|:---:|:---:|:---:|
| WaterBirds | 1.28G | 1.34G | 0.890G | 7.5G | 1.3G |
| CelebA | 1.71G | 3.87G | 1.28G | 112G | 1.82G |

| VRAM | ERM | BPA | LfF | CoAda | Ours |
|:---:|:---:|:---:|:---:|:---:|:---:|
| WaterBirds | 25M | 28M | 20M | 44M | 20M |
| CelebA | 20M | 752M | 20M | 34M | 20M |

**Application to other pretrained models** In the main paper, we show the performance of our model on self-supervised pretrained models like CLIP, and demonstrate the effectiveness of our method in Table 4. Our method can further be applied to any classification-based task if a pretrained feature encoder is present. For example, if one wants to run it for a text-based classification task, they only need the features from a pretrained model (e.g., BERT). No other adjustment is necessary. Note that the method only relies on the angle between the feature vector and the ground truth class's weight vector. As an application, we run our method on the CivilComments dataset [a] using the BERT model pretrained on the BookCorpus and Wikipedia datasets (https://huggingface.co/google-bert/bert-base-uncased), and report the results below. We find that both Contrastive Adapter and our method far outperform the ERM scores. 
| | Average | Worst |
|:---:|:---:|:---:|
| ERM | 58.14 | 13.5 |
| CoAda | 78.19 | 67.96 |
| Ours | 76.43 | 68.06 |

**Clarification for the clustering task** Kindly see the general response section.

**Scalability of the approach to extremely large datasets and models** Since the method involves clustering the features, if the dataset is extremely large, one can randomly sample a small percentage of it for clustering. For example, on clustering only 10% of randomly sampled images from CelebA, we find that the worst-group accuracy is $81.11$%, whereas the average-group accuracy is $85.6$%. The scores become $80.55$% and $85.6$%, respectively, when the clustering is performed on only 1% of the images (note that the scores reported in the paper are $81.61$% and $86.04$%, respectively). On the other hand, for extremely large models, the requirement is to be able to load the model into GPU memory. Our method adds negligible overhead, owing to the addition of only an adapter and a classifier layer. We will add these points to the limitations section in the camera-ready version.

**Exploring the method's impact on model interpretability** We thank the reviewer for this valuable suggestion. We intend to investigate this as future work.

[a] Borkan, Daniel, et al. "Nuanced metrics for measuring unintended bias with real data for text classification." Companion Proceedings of the 2019 World Wide Web Conference. 2019.

--- Rebuttal Comment 1.1: Title: Official Comment by the Authors Comment: We once again appreciate the thoughtful feedback provided by the reviewer. We hope our responses have adequately addressed the reviewer's concerns. --- Rebuttal 2: Title: Please read the rebuttal to check if the authors addressed your concerns Comment: Dear Reviewer oZ3K, Can you have a look at the rebuttal and see if your concerns have been addressed? Best regards, Your AC.
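The cluster-based adaptive margin explained in the "Deeper analysis" response above can be sketched in numpy as follows. This is a hedged illustration: `base_m` and the scale `s` are assumed hyper-parameters, and the exact margin formula is eq. 2 of the paper, which this only paraphrases (margin grows with the inverse frequency of the ground-truth class inside the sample's cluster, then enters an ArcFace-style $\cos(\theta+m)$ loss).

```python
import numpy as np

def adaptive_margins(labels, clusters, base_m=0.5):
    """Per-sample margin from the inverse frequency of the ground-truth class
    inside the sample's cluster: rare (bias-conflicting) class/cluster
    combinations get larger margins. Paraphrase of eq. 2, not the exact form."""
    labels, clusters = np.asarray(labels), np.asarray(clusters)
    m = np.empty(len(labels))
    for i, (y, c) in enumerate(zip(labels, clusters)):
        in_c = labels[clusters == c]                # labels of this sample's cluster
        m[i] = base_m * (1.0 - np.mean(in_c == y))  # inverse-frequency weighting
    return m

def margin_loss(cos_theta, labels, margins, s=30.0):
    """ArcFace-style loss: add margin m to the ground-truth angle theta,
    then scaled softmax cross-entropy. cos_theta: (N, C) cosine logits."""
    n = len(labels)
    tgt = np.clip(cos_theta[np.arange(n), labels], -1 + 1e-7, 1 - 1e-7)
    theta = np.arccos(tgt)
    logits = s * cos_theta.copy()
    logits[np.arange(n), labels] = s * np.cos(theta + margins)  # cos(theta+m) < cos(theta)
    logits -= logits.max(axis=1, keepdims=True)                 # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n), labels].mean()
```

As the rebuttal argues, a larger margin shrinks the ground-truth logit via $\cos(\theta+m)$, so the loss (and hence the gradient signal) is amplified exactly for the samples whose class is rare within their cluster.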
Summary: In this paper, the authors propose a simple method with a clustering-based adaptive margin loss for debiasing blackbox pretrained models. Whereas prior works have explored settings where pretrained models are tunable, the authors instead explore a more constrained and realistic setting, where a black-box network is frozen and only a classifier is trained on the fine-tuning dataset. The proposed approach involves training an adapter to amplify biases in the frozen encoder; then, a mitigation procedure is executed using a novel cluster-based margin loss. The proposed approach appears to outperform state-of-the-art methods across three datasets. Strengths: 1. The paper introduces a creative approach for addressing a high-impact problem that has been previously under-researched. As large-scale pretrained encoders become more commonplace, it is important to create debiasing methods that can operate in settings where the model weights are frozen. 2. The proposed approach appears to demonstrate performance improvements over existing approaches while also demonstrating high efficiency. Weaknesses: 1. **Applicability**: One important weakness of this approach is that it is difficult to know when it can be used. The authors claim that their "method is specifically targeted towards cases where the bias in the downstream dataset is already encoded in the pretrained model". However, this is difficult (and in some cases, impossible) to know a priori. - In their evaluations in Table 1, the authors assume that spurious attribute labels are available in order to determine whether a pretrained model encodes the same bias as the downstream dataset. However, these subgroup labels are not likely to be available in real-world settings (which the authors claim as well later in the paper), making it difficult to know when the authors' approach is useful. 
- The authors also make a note about how their adapter-based bias amplification procedure will consistently work because they assume that the bias in the downstream dataset aligns with that of the pretrained model. As stated above, this is a strong assumption that is difficult to verify a priori, especially since the authors assume that bias annotations are not available for their downstream dataset. - As a result, I am unsure about the real-world applicability of this approach. 2. Presentation: The organization of this manuscript is somewhat confusing, particularly in section 3. 3. Additional clarification on design decisions: Some critical design decisions are somewhat vague in the main text. - How was the value of $\lambda$ selected in practice for each dataset? The paper states that $\lambda$ was selected as a high value while ensuring that the training accuracy did not drop drastically. Are there specific thresholds at which the training accuracy drop was viewed as "drastic"? - The paper could benefit from additional details on the clustering approach. How was the number of clusters selected? In Table 11, it appears that the number of clusters K selected by the authors varies significantly across encoders and datasets. Is this a hyperparameter that needs to be set by the user? How does performance vary depending on the number of clusters selected? It seems that this is a critical design decision that can affect the efficacy of the proposed approach. Technical Quality: 3 Clarity: 2 Questions for Authors: I have listed my questions above in the “weaknesses” section. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes, the authors have adequately described the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. We address the concerns below: **Clarifications about Table 1:** Table 1 is solely meant for analyzing the nature of the different feature encoders, wherein we can see that different feature encoders may have different levels of awareness of the target labels. If a feature encoder is highly target-aware, even removing all bias-conflicting samples from the training data does not hamper the worst group accuracies to a large extent. On the other hand, if the feature encoder is bias-aware, the worst group accuracies are low even for the original dataset -- these are the scenarios that require explicit bias mitigation. Note that the scores in this table are those of the ERM method, obtained by varying the proportion of the bias-conflicting samples, and are not related to the proposed approach. **Assumption on alignment of the bias in the pretrained features and downstream dataset:** Please see the general response section. **Clarifications on the presentation in Section 3:** We apologize if the presentation of this section was not clear. We explain the main highlights of the section below and will make the presentation clearer in the camera-ready version upon acceptance. - We begin by formally defining the existence of biased groups in deep learning datasets, explaining how model behavior differs for different groups (Section 3.1). - As mentioned above, Section 3.2 analyzes the nature of different feature encoders, and highlights that the bias-aware feature encoders require explicit mitigation. - In Section 3.3, we show how existing methods (designed for full finetuning) perform inconsistently for different datasets in the proposed problem setting, while Contrastive Adapter [50] (designed for such frozen encoders) performs decently, but is highly time consuming. This motivates the requirement of dedicated research in the direction of the proposed problem setting.
- Section 3.4 describes our method, where we first motivate amplifying the bias, and then mitigating it through the clustering-based margin loss approach, after exploring a number of simple baselines involving weighted losses. We can clarify further if the reviewer still has doubts.

**Clarifications on selection of weight decay value:** In the paper, we have shown that high weight decay leads to a reduction in the bias-conflicting group performances in the training set, thereby helping us detect the bias easily. In practice, the value of $\lambda$ is chosen from a range of high values (0.01-1) using the validation accuracy in the mitigation stage. Details are present in Appendix A.4. When $\lambda$ is sufficiently high, we find that the training accuracy collapses (~50% for Waterbirds and CelebA, <30% for ColorMNIST-0.995). This fall can be clearly seen in Figure 2c in the main paper for ColorMNIST-0.995 for $\lambda \geq 0.5$. This helps us eliminate a few values of $\lambda$ in our specified range during the amplification stage itself. For Waterbirds and CelebA, the fall is typically seen at $\lambda \geq 3$ for the ResNet-18 backbone. This happens as the model's learning is impeded with extremely high weight decay, which is not the goal of our approach. Recall that the purpose of this step is to learn the bias-aligned samples well, not the bias-conflicting samples. Hence, in the paper we suggest picking $\lambda$ lower than this threshold. We stick to the search space of (0.01-1) as we find that the scores remain similar for $\lambda \geq 1$ for both datasets (see below).

| $\lambda$ | Worst - Waterbirds | Avg - Waterbirds | Worst - CelebA | Avg - CelebA |
|:---------:|:------------------:|:----------------:|:-------------:|:------------:|
| 1 | 80.29 | 84.56 | 81.61 | 86.04 |
| 1.5 | 79.44 | 84.52 | 81.87 | 85.35 |
| 2 | 81.46 | 84.59 | 80.51 | 85.21 |

**Clarifications on the clustering algorithm:** Please see the general response section.
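The $\lambda$-selection rule described above (discard values whose training accuracy has collapsed, then pick by mitigation-stage validation accuracy) can be written out schematically. The accuracy values below are hypothetical stand-ins for measured runs, and `select_weight_decay` is our naming, not the authors' code:

```python
def select_weight_decay(candidates, train_acc, val_acc, collapse_threshold=0.5):
    """Pick the amplification-stage weight decay: discard candidates whose
    training accuracy has collapsed (the model can no longer fit even the
    bias-aligned samples), then keep the survivor with the best
    mitigation-stage validation accuracy."""
    viable = [lam for lam in candidates if train_acc[lam] >= collapse_threshold]
    if not viable:
        raise ValueError("every candidate collapsed; widen the search range")
    return max(viable, key=lambda lam: val_acc[lam])

# Hypothetical accuracies over the paper's (0.01-1) search range:
candidates = [0.01, 0.1, 0.5, 1.0]
train_acc = {0.01: 0.99, 0.1: 0.97, 0.5: 0.93, 1.0: 0.40}  # 1.0 has collapsed
val_acc = {0.01: 0.72, 0.1: 0.78, 0.5: 0.81, 1.0: 0.99}
best = select_weight_decay(candidates, train_acc, val_acc)  # -> 0.5
```

Note that the collapsed candidate is excluded even though its validation score happens to be highest, mirroring the paper's advice to pick $\lambda$ below the collapse threshold.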
--- Rebuttal 2: Title: Please read the rebuttal to check if the authors addressed your concerns Comment: Dear Reviewer wKPv, Can you have a look at the rebuttal and see if your concerns have been addressed? Best regards Your AC. --- Rebuttal Comment 2.1: Title: Response to authors Comment: I thank the authors for their detailed responses, which have addressed most of my concerns. However, my concerns on the applicability of this approach still stand, since it is difficult to know a priori whether the method will be effective. I will maintain my rating. --- Reply to Comment 2.1.1: Title: Response to the reviewer Comment: We thank the reviewer for the response. As the reviewer pointed out in the original review, we agree that identifying if the bias in the pretrained encoder aligns with that in the downstream dataset is highly difficult. This challenge is posed due to the nature of the proposed problem setting, since no finetuning/backpropagation is allowed into the pretrained encoder. In this regard, during our experiments, we find that the features of the ViT-B backbone do not align very strongly with the bias in Waterbirds (we refer the reviewer to the general responses section for more details). As a consequence, we find that most competing methods are unable to outperform the ERM method. Our method is able to surpass the ERM scores by a significant margin, though the scores are not higher than that of the ResNet-18 backbone for the same dataset. This finding highlights the difficulty of this problem setting, and we firmly believe that answering whether it is possible to mitigate biases when they are not encoded in the pretrained model is a very vital research direction.
Summary: This work explores a problem setting where one wants to train a classifier on top of a large and frozen pre-trained model, while avoiding bias and improving fairness. It proposes a computationally efficient methodology to accomplish this objective, centered around a novel loss function, the Adaptive Margin Loss. Strengths: -The manuscript is very clear and well-written. -Multiple experiments were performed. -The proposed methodology seems computationally efficient and effective at avoiding bias. -Results are promising, and the objective of adapting large pre-trained models in an unbiased manner will probably be increasingly more important in the future, with the rise of such pre-trained models. Weaknesses: My comments here are minor suggestions, as I have no major concerns. 1- Figure 2: this analysis is very interesting. It would be nice to start the weight decay at 0, not 5e-2, in the plots. 2- A question I have is, how many unbiased samples do we need for this methodology (and the alternative ones) to work well? Table 8 shows some results that help us answer this question, as we can see how performances vary from CMNIST-0.9 to CMNIST-0.995. What would happen for CMNIST-0.999? Where is the limit at which the debiasing methodologies fail? In summary, it would be great to expand these experiments, varying the proportion of the majority group in all datasets (not only CMNIST, but also CelebA and Waterbirds), until the point where all methods fail. This experiment would show the reliance of each methodology on the number of samples in the minority group. 3- In the tables, we have standard deviations only for the proposed methodology. They should also be reported for the competing methods as well. Moreover, although the authors say that “All experiments have been done over 3 seeds”, multiple tables show no standard deviation.
Technical Quality: 4 Clarity: 4 Questions for Authors: My main question is why we do not see standard deviations in most tables, if all experiments have been done over 3 seeds? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Authors have correctly stated the method's limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and feedback and address the specific concerns below. **Effects of starting from weight decay $\lambda=0$ in Figure 2 of main paper**: We have shown the effects starting from weight decay = 0 on the different group accuracies for Waterbirds, CelebA and ColorMNIST-0.995 in Figure 1 in the rebuttal pdf, where we see that the lower the weight decay, the higher the training accuracy, because of overfitting to all the groups. **Performance of different models for varying degree of bias-conflicting samples** This is a great suggestion, and accordingly we train our method on ColorMNIST-0.999 (Table 1, rebuttal pdf), and two versions of Waterbirds and CelebA. For Waterbirds, we show the performance of different methods with a) no bias-conflicting samples, b) 50% bias-conflicting samples, where the bias-conflicting samples constitute two groups: waterbirds on land and landbirds on water (Table 2, rebuttal pdf). We show the scores for similar versions of CelebA, where the bias-conflicting samples constitute the blond males (Table 3, rebuttal pdf). As predicted by the reviewer, with a decrease in the percentage of bias-conflicting samples, the scores of all methods drop. In Waterbirds, for the case of no bias-conflicting samples, the drop in performance is substantial, as it is for ColorMNIST-0.999 (see Table 1, rebuttal pdf). For all other cases, we find Co-Ada and our method to perform decently. It is to be noted that our method outperforms all others in all the explored cases. **Standard Deviation for competing methods and other tables** We will rectify this in the camera-ready version, upon acceptance. We apologize; we had omitted the standard deviation values in some cases to ease readability. We show the values for some of the competing methods below (for the ResNet-18 backbone).
| Worst Group Accuracy | ERM | DebiAN | BPA | LfF | CoAda | Ours |
|:------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|
| WaterBirds | $38.9^{\pm1.4}$ | $58.94^{\pm0.97}$ | $58.7^{\pm1.67}$ | $66.09^{\pm0.62}$ | $67.57^{\pm1.29}$ | $80.29^{\pm2.5}$ |
| CelebA | $27.20^{\pm0.89}$ | $26.10^{\pm1.12}$ | $66.71^{\pm0.44}$ | $13.26^{\pm0.67}$ | $78.37^{\pm1.36}$ | $81.61^{\pm1.02}$ |

| Average Group Accuracy | ERM | DebiAN | BPA | LfF | CoAda | Ours |
|:------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|
| WaterBirds | $76.22^{\pm1.04}$ | $80.47^{\pm0.97}$ | $80.83^{\pm1.12}$ | $81.39^{\pm0.47}$ | $80.10^{\pm0.14}$ | $84.56^{\pm1.2}$ |
| CelebA | $75.43^{\pm0.67}$ | $75.41^{\pm0.44}$ | $84.14^{\pm0.22}$ | $69.42^{\pm0.36}$ | $85.79^{\pm0.78}$ | $86.04^{\pm0.26}$ |

---

Rebuttal Comment 1.1: Comment: I appreciate the authors responses, and all my concerns were addressed very well. I am increasing my score to 8.

---

Reply to Comment 1.1.1: Title: Thank you for your response Comment: We thank the reviewer for appreciating our rebuttal and updating the score.

---

Rebuttal 2: Title: Please read the rebuttal to check if the authors addressed your concerns Comment: Dear Reviewer qb1H, Can you have a look at the rebuttal and see if your concerns have been addressed? Best regards Your AC.
Summary: The paper addresses the debiasing problem using pretrained but frozen feature extractors on downstream applications. They propose a clustering-based method that relies on bias-amplified training through a cross-entropy loss. After training, they cluster the biased features and mitigate the biases using the resultant clusters. The experimental protocol considers CIFAR, Waterbirds, and CelebA as evaluation datasets. There are also several ablations on distinct loss functions and comparisons with literature methods under two scenarios. Strengths: The authors build upon the debiasing literature, emphasizing the use of a frozen feature extractor. This approach requires that debiasing interventions are applied post-feature extraction, a strategy that forms the core of their methodology. Their experimental design is iterative, with each step informed by empirical results. Despite its simplicity, the authors' contributions are clear and understandable. The experimental protocol is comprehensive, utilizing diverse datasets, including CIFAR, Waterbirds, CelebA, BAR, and UTKFace, for evaluation. This selection allows for a broad assessment of the proposed method's effectiveness across various contexts. The authors also conduct extensive ablation studies on different loss functions, components of their proposed method, and comparisons with existing literature methods. These studies demonstrate the superior performance of the new ideas in multiple scenarios. Additionally, the authors provide extra details on reproducing the main results in the appendix. Weaknesses: In my opinion, the main weakness lies in assuming that the bias aligns with the feature encoder. There are many cases where the practitioner does not know the model's or the dataset's biases. Also, the paper lacks instructions on how to identify scenarios that fit the previous assumption.
Technical Quality: 2 Clarity: 2 Questions for Authors: - Regarding Table 8, is there any explanation for ResNet-18 achieving better performance than ViT-B on Waterbirds? I would expect ViT to achieve higher performance on this dataset. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Despite the primary focus being on debiasing, one step of the proposed method involves amplifying existing biases. This introduces a significant risk: if the following step, designed to compensate for this amplification, fails to function as intended (which is a possibility in specific scenarios), the method may ultimately exacerbate the very biases it aims to mitigate. This could potentially allow malicious actors to exploit the technique, leading to harmful outcomes. Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Discrimination, bias, and fairness'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our contributions and appreciating the comprehensive experiments. We address the concerns below: **Assumption on alignment of bias of the downstream dataset with that in the pretrained model**. Please see the general responses. **Table 8: Scores of ViT-B on Waterbirds**: Please see the general responses. **Limitation on exacerbation of biases**. We thank the reviewer for pointing out this limitation: after bias amplification, if the debiasing module fails, the method may end up exacerbating the bias instead of mitigating it. We agree that this is a limitation in general for most of the existing debiasing methods, and attackers can leverage this for all methods that involve a bias amplification stage. We will mention this in the camera-ready version. --- Rebuttal 2: Title: Please read the rebuttal to check if the authors addressed your concerns Comment: Dear Reviewer pwPo, Can you have a look at the rebuttal and see if your concerns have been addressed? Best regards Your AC. --- Rebuttal 3: Comment: Thank you for your response. The authors have adequately addressed my primary concerns, and I have no further questions. I will maintain my previous rating, conditioned on the 'bias exacerbation' revision. --- Rebuttal Comment 3.1: Title: Response to the Reviewer Comment: We thank the reviewer for the response. We will definitely revise our manuscript to include the 'bias exacerbation' issue upon acceptance.
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful comments, questions, and suggestions. We are pleased that our work has been appreciated and positively rated. The reviewers recognized the significance and under-researched nature of our problem statement (Reviewer *wKPv*), noting its growing importance (Reviewer *qb1H*). They also appreciated the novelty and creativity of our solution (Reviewers *qb1H*, *wKPv*, *oZ3K*), found our experiments to be comprehensive (Reviewer *pwPo*), and considered the paper well-written (Reviewers *oZ3K*, *qb1H*). We address the common concerns below: **Assumption on the alignment of the bias in the pretrained model and the downstream task (*pwPo*, *wKPv*)**: While we agree that our method’s efficacy is dependent on this assumption, we would like to highlight that since the feature encoder is frozen and cannot be finetuned based on the downstream dataset, detecting and mitigating biases in such a scenario becomes challenging. As an example, we discuss the case of ViT-B for Waterbirds here (we appreciate Reviewer *pwPo* for raising this question on *Table 8*). We find the ViT-B features to depict the target class more strongly than the bias class. We verify this by clustering the ViT-B features and comparing the Normalized Mutual Information (NMI: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.normalized_mutual_info_score.html) of the cluster labels with the bias (NMI=0.6) and target labels (NMI=0.62) respectively. The worst group accuracy is still low (59%), indicating a weak yet definitive presence of the bias in the model. As a result, all methods suffer, with most methods scoring close to the ERM values, demonstrating this to be a limitation for the compared methods as well. Co-Ada has a worst group score of 63.91%, which is higher than the other competing methods and the ERM. In contrast, our method achieves 74.92%, compared to 80.29% that we obtain from the ResNet-18 backbone.
This highlights the difficulties in mitigating biases in different scenarios in the given setup. We find this problem to be interesting, and mention solving this as future work in the paper. We believe that detecting if our assumption holds a priori is highly challenging in the absence of the bias labels. We put forward a few suggestions to identify the scenarios that fit this assumption (*pwPo*, *wKPv*):

- Obtain bias annotations for the small validation set. If the worst group accuracy of the validation set does not reduce substantially with increasing weight decay, it indicates that the features have stronger signals of the target class than of the bias, making it harder to capture the bias.
- If the bias annotations of the validation set cannot be obtained due to privacy concerns, the overall validation accuracy can indicate the strength of the bias. For example, the difference between the validation and training accuracy is 25% for an ERM-trained method for the Waterbirds dataset on the ResNet-18 backbone. The higher this difference, the stronger the indication that the model is overfitting. Such overfitting can indicate that the model is learning the bias in the dataset, thus not generalizing on the bias-conflicting samples.

**Details on the Clustering Approach (*wKPv*, *oZ3K*)**: A common question that has been raised is on the selection of the number of clusters, K. In the paper, we mention K to be a hyperparameter within a fixed set of values (Appendix A.4), selected based on the validation accuracy. We demonstrate in the table below that our method is not highly sensitive to the value of K (*wKPv*, *oZ3K*). Below we show the performance variation for *CelebA* over different values of K. The bold values denote the numbers reported in the paper.
| $K$ | Worst | Avg |
|:---:|:--------------:|:------------:|
| 2 | 81.70 | 86.07 |
| 4 | **81.61** | **86.04** |
| 6 | 81.36 | 85.96 |
| 8 | 81.08 | 85.93 |

For ColorMNIST-0.995, we show the effect of different K's below (the bold values denote the numbers reported in the paper):

| $K$ | Bias-Conflicting | Avg |
|:---:|:--------------:|:------------:|
| 10 | 73.34 | 84.21 |
| 20 | **72.56** | **84.42** |
| 30 | 72.17 | 83.59 |
| 40 | 72.54 | 83.49 |

Next, we change the clustering algorithm from KMeans to GMM, and show that the underlying algorithm does not alter the outcome of the method significantly.

| Dataset | Worst/Bias-Conflicting (GMM) | Avg (GMM) | Worst/Bias-Conflicting (KMeans) | Avg (KMeans) |
|:---:|:--------------:|:------------:|:--------------:|:-----------:|
| CelebA | 81.11 | 85.19 | **81.61** | **86.04** |
| CMNIST-0.995 | 73.08 | 84.84 | **72.56** | **84.42** |

**Attached pdf**. We have attached a pdf containing a few tables and plots, based on the questions and comments of Reviewers *qb1H* and *oZ3K*. Pdf: /pdf/9696e0119e133ecb615d5502877b48d12a30665f.pdf
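The NMI diagnostic used in the global response (comparing cluster labels against bias and target labels) can be reproduced in a few lines. The sketch below is a self-contained version of the arithmetic-mean-normalized score that recent versions of sklearn's `normalized_mutual_info_score` compute by default; the function name and toy labels are ours:

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two label assignments,
    with arithmetic-mean normalization: I(A;B) / ((H(A) + H(B)) / 2)."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    cab = Counter(zip(labels_a, labels_b))
    # Mutual information I(A; B) from the joint and marginal counts.
    mi = sum(c / n * math.log((c * n) / (ca[a] * cb[b]))
             for (a, b), c in cab.items())
    # Marginal entropies H(A) and H(B).
    ha = -sum(c / n * math.log(c / n) for c in ca.values())
    hb = -sum(c / n * math.log(c / n) for c in cb.values())
    if ha == 0.0 and hb == 0.0:
        return 1.0  # both clusterings are trivial and identical
    return mi / ((ha + hb) / 2)
```

An NMI near 1 between cluster labels and the bias labels would indicate strongly bias-aware features, while the roughly equal values reported above (0.6 vs. 0.62) are what signals the weaker, mixed encoding of bias and target in the ViT-B features.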
NeurIPS_2024_submissions_huggingface
2024
Variance estimation in compound decision theory under boundedness
Accept (poster)
Summary: This submission studies the variance estimation problem for Gaussians under the compound decision/empirical Bayes settings, assuming bounded means and variance. The main results are up-to-constants matching upper and lower bounds on the estimation rate, for mean squared error. For the upper bound, the proposed algorithm is based on estimating cumulants and using Bell polynomials to find the cumulant $\gamma_2$, which is almost directly related to the desired variance parameter. For the lower bound, the authors use a moment matching and $\chi^2$-divergence approach to show indistinguishability results. Strengths: The paper is very, very well-written, with intuition properly explained and motivated. It is refreshing to see a properly-written paper being submitted to NeurIPS/ICML. Technically, it is also nice that the authors settle the estimation rate up to constants for this problem. I also find the cumulant-based approach interesting --- I don't think I've seen many papers (at least in recent memory) that forgo moments and deal with cumulants instead. Weaknesses: Here are some constructive criticisms that I hope the authors will find useful to improve the paper. - The title: I think it is a little too general and perhaps slightly misleading, given that there has been a lot of other work studying the Gaussian variance estimation problem in the compound decisions setting, under other assumptions. The authors should consider including something to the effect of "under bounded means/variance assumption". - In Section 3, consider re-ordering the explanation. Lines 340-341 say "This can be achieved by constructing $G$ to share a large number of moments with $\mathcal{N}(0,\tau^2)$." without much explanation. For readers who haven't seen this kind of approach, it would be helpful to explain immediately to them why this moment matching is relevant. The prose on $\chi^2$-divergence etc.
later on should be moved up in my opinion, instead of diving straight into a discussion on how many moments one can match. - My main criticism is actually that I don't find the problem setting super well-motivated, although I don't necessarily hold this against this submission in particular. I understand that this problem is very well-studied in the literature under different assumptions. However, for one, I wish the motivation were more explicit in this submission, since I wasn't familiar with the model before reading this paper. On top of that, perhaps more importantly, when I first saw the problem definition on the first page, my immediate thought was "ah, this is *really* going to depend on the Gaussian noise isn't it", and I had pretty much the same identifiability doubts as what turned out to be discussed in Section 4. Given that information-theoretically one pretty much needs to have quite a bit of information about the noise distribution, I'm not too convinced about the applicability of the results. Having said that, I again understand that the problem has been well-studied in the literature, so clearly there are others who care about the problem, so I won't hold this against this paper in particular (as evidenced by my "Accept" score). Technical Quality: 4 Clarity: 4 Questions for Authors: - The parameter space $\Theta(L)$, I presume the estimator (through the many hidden constants in propositions and theorems) depends on the knowledge of what $L$ is? My next question is, how does the estimation rate depend on $L$? Similarly, how does the estimation rate depend on the bound on the $\ell_\infty$ norm of $\mu$? - What is the concentration behavior/tails of the estimator? Theorem 1 is in mean squared error only. - Line 283 mentions that $\hat{M}_r$ concentrates around the expectation $M_r$ uniformly over the input $\gamma$. Can you comment roughly on the technique to show that, or at least why it deserves to be true?
I think it's useful to comment on that even in the main paper, since uniform concentration is usually an annoying thing to show, if not sometimes a technical gist of a problem. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very helpful comments and thorough feedback! Our responses below address the points raised in your review. [Weakness 1] We agree the title can be revised for clarity. In the revision, we will revise the title to ``Variance estimation in compound decision theory under boundedness''. [Weakness 2] We agree the explanation would improve after the suggested re-ordering. We will do this in the revision. [Weakness 3] This comment is acknowledged and well-taken. Indeed, we are leaning heavily on statistical tradition and the existence of a substantial literature to motivate consideration of the problem studied in our article, and especially so to justify the Gaussian assumption. [Question 1] Yes, our estimator relies on knowledge of $L$. Please see the global response to all reviewers regarding the dependence of the rate on $L$. The question regarding the rate's dependence on the bound on the $\ell_\infty$ norm of $\mu$ is very related. Suppose we are in the setting where we know $||\mu||\_\infty \leq B$ and $\sigma \leq L$. We can transform the data without loss of any information as $Y_i = X_i/B$, and so we have $Y_i \overset{ind}{\sim} N(\theta_i, \tau^2)$ where $\theta_i = \mu_i/B$ and $\tau = \sigma/B$. Now note $||\theta||_\infty \leq 1$ and $\tau \leq L/B$, and so we are in the original setting of our paper. Estimation of $\tau^2$ from $Y$ is equivalent to estimation of $\sigma^2$ from $X$ (up to an appropriate scaling). For the same reasons outlined in the global response, estimation of $\tau^2$ is essentially only interesting in the case $L/B \asymp 1$. In this case, the minimax rate of estimation of $\tau^2$ is $(\log\log n / \log n)^2$, which thus implies the minimax rate of estimating $\sigma^2$ is $B^4 (\log\log n/\log n)^2$. [Question 2] The question regarding the concentration behavior of the estimator appears to us to be quite tricky. 
As can be surmised from Proposition 3, the fluctuations of the error $|\hat{\gamma}\_2(r) - \gamma\_2|$ can be related to the fluctuations of $\sup\_{\gamma \in [0, 1]} |\hat{M}\_r(\gamma) - M\_r(\gamma)|$. The latter random variable involves polynomials of growing degree (since we choose $r$ to grow), and so it appears difficult to us to obtain concentration results beyond that obtained by a simplistic analysis using very crude tools. [Question 3] It turns out the structure of the Bell polynomials makes showing uniform concentration very painless. The relevant analysis occurs in the proof of Proposition 7. In particular, recalling definitions from our paper, consider $|\hat{M}\_r(\gamma) - M\_r(\gamma)| \leq \sum_{l=1}^{r} |B\_{r, l}(\hat{\gamma}\_1, \gamma, \hat{\gamma}\_3,...,\hat{\gamma}\_{r-l+1}) - B_{r, l}(\gamma\_1, \gamma, \gamma\_3,...,\gamma\_{r-l+1})| \leq \sum_{l=1}^{r} \sum \frac{r!}{j\_1! j\_2! \cdot\cdot\cdot j\_{r-l+1}!} \left|\frac{\gamma}{2!}\right|^{j\_2} \cdot W\_{\mathbf{j}}$ where $W\_{\mathbf{j}}$ is a random variable which does not depend on $\gamma$. Here, the inner sum is taken over appropriate sequences $\mathbf{j} = \{j\_1,j\_2,...,j\_{r-l+1}\}$ as in the definition of Bell polynomials (Definition 1). Therefore, it is immediate to obtain the deterministic inequality $\sup\_{\gamma \in [0, 1]} |\hat{M}\_r(\gamma) - M\_r(\gamma)| \leq \sum\_{l=1}^{r} \sum \frac{r!}{j\_1! j\_2! \cdot\cdot\cdot j\_{r-l+1}!}\cdot W\_{\mathbf{j}}$, and now the right hand side no longer involves a supremum. The Bell polynomial structure is very convenient. In the revision, we will make a comment about it in the main text. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. I'll maintain my current score, and I hope the authors will incorporate the promised revisions, including the explanation of the scaling with $L$ and $B$ at least in the appendix (fleshed out, even if it's a straightforward rescaling argument). 
By the way, for the dependence on $B$, do you mean $B^2$ instead of $B^4$ or am I misreading/misunderstanding? It's just a linear rescaling of the variable with a factor of $B$, so it should change the variance by $B^2$? --- Reply to Comment 1.1.1: Comment: Thank you very much for reviewing our responses. As you correctly note, it is a linear rescaling with a factor of $B$, which scales the variance by $B^2$. However, when speaking of the "rate", it is referring to squared error, so squaring yields a factor of $B^4$.
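The exchange above can be condensed into a single display (restating the rebuttal's rescaling argument in its own notation, with $\hat{\tau}^2$ the estimator for the rescaled problem):

```latex
\hat{\sigma}^2 = B^2 \hat{\tau}^2
\;\Longrightarrow\;
\mathbb{E}\bigl(\hat{\sigma}^2 - \sigma^2\bigr)^2
= B^4 \, \mathbb{E}\bigl(\hat{\tau}^2 - \tau^2\bigr)^2
\asymp B^4 \left(\frac{\log\log n}{\log n}\right)^2,
```

so the linear rescaling by $B$ changes the variance by a factor of $B^2$, but the squared-error rate by $B^4$.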
Summary: This paper gives a sharp minimax rate for variance estimation under mild assumptions on a Gaussian-based model. Strengths: The theoretical results are solid and complete, and the related references are discussed and linked with the authors' own work. Weaknesses: The sign in equation (5) and also in line 94 is not clear to me; does it mean proportional to? This paper is a pretty theoretical one, and it would be better if there were some experiments. Technical Quality: 4 Clarity: 3 Questions for Authors: N/A Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback! Our response to your review is below. [Weakness 1] We have conducted an experiment in the attached pdf, and the details are in the global response to all reviewers. Yes, the symbols $\asymp$ and $\lesssim$ mean ``up to universal constants''. The notation we use is defined in Appendix E, and in the revision we will note this in the main text to make the reader aware. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their rebuttal. It is good to see that the experiment was done, but a larger/more complex problem with real-world data would be preferable. Thanks for taking my suggestions on the symbols and moving the definition to the main text. Overall, I will keep my score.
Summary: This paper studies the problem of variance estimation in the compound decision setting, under the assumption that the means are bounded. Main contributions: - The authors prove the minimax rate of variance estimation in the setting of the paper, with a proof of the lower bound and a proposed estimator achieving the convergence rate. - The authors discuss the importance of the Gaussian assumption for the result. Post rebuttal: I acknowledge that I have read the authors' responses and other reviewers' comments. Strengths: Strengths: - This paper is well-written and well-organized. - The authors provide the minimax rate of variance estimation in the setting where the means are bounded and the noise follows a Gaussian distribution. - The authors discuss the noise agnosticism, i.e., that the estimator is designed for Gaussian noise. Weaknesses: Weaknesses: - There is no discussion of the computational complexity of the proposed estimator, and no experiments. Technical Quality: 3 Clarity: 3 Questions for Authors: Questions: - In real applications (where the $\mu$'s are not chosen to maximize the errors), how does the proposed method compare to other variance estimation methods, i.e., what are the advantages and disadvantages? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! Below, we address some of the points raised in your review. [Weakness 1] We have conducted an experiment in the attached pdf, and the details are in the global response to all reviewers. [Question 1] Most other variance estimation procedures in the literature (beyond the simple maximum-based estimator and the naive second moment estimator discussed in the experiment) require assumptions about the structure of the means; for example, the means are distributed according to a $k$-atomic distribution, spike-and-slab, etc. However, these structural assumptions can be unrealistic in some applications. The advantage of our method is that the means need only be assumed bounded. The disadvantage of our cumulant-based method is that it cannot, in its current form, adapt to any underlying structure of the $\mu$'s. If the means are truly structured, previous estimators can achieve faster convergence rates than the proposed cumulant-based estimator. As you point out, the choice $r \asymp \frac{\log n}{\log \log n}$ in our estimator is really made to protect against the worst case choice of $\mu$'s. It is an interesting question whether $r$ could be chosen in a data-driven way to adapt to the underlying structure of the $\mu$'s; this way, nothing but boundedness is assumed, yet existing structure could be exploited. At the current moment, it is not clear to us how to construct an adaptive version of our estimator; it is a nice direction for future work.
Summary: The paper studies the variance estimation of the normal means model and establishes the minimax squared error in terms of $n$, the number of observations. The results assume a bounded parameter space, where the absolute means and the variance are at most $1$ and $L^2$ (treated as a universal constant), respectively. The estimator achieves the optimal rate through cumulant estimation. Strengths: - The major strength is determining the exact minimax rate with respect to the sample size, which requires both deriving a concrete estimator and establishing a lower bound (through moment matching). - The results hold under relatively weak conditions, i.e., the boundedness of the parameter space. In comparison, prior research makes assumptions like smoothness, which might be less practical. - The use of cumulants on top of the Gaussian character of the noise is innovative and might be of independent interest. Weaknesses: - The results show only the dependence on $n$ but not $L$, the upper bound on $\sigma$. Given that the optimal rate in $n$ is only inverse logarithmic, it seems important to understand if there are any gaps in $L$ between the lower and upper bounds (though it's treated as a universal constant). - The boundedness assumptions are weaker, but Gaussianity is critical for establishing the paper's results. From this perspective, the methods may not be as widely applicable as some prior ones. - The paper lacks numerical experiments. It might be helpful to run the estimator on synthetic data to understand the hidden constants and the estimator's actual performance. Technical Quality: 3 Clarity: 3 Questions for Authors: - Please feel free to clarify/respond regarding the above comments. - Would it be possible to go slightly beyond Gaussianity and make the results more broadly applicable? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The comparisons are detailed, and the authors explain the limitations of their results.
The paper is mostly theoretical, so potential negative societal impact may not apply. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback! Our responses below address the points raised in your review. [Weakness 1] Please see the global response to all reviewers regarding the dependence of the rate on $L$. [Weakness 2 and Question 2] We completely agree with your comment that Gaussianity is critical to the results. Indeed, the applicability is consequently limited. It is not at all clear to us how to generalize the cumulant methodology, and it would be an interesting, challenging direction for future work. The key property we use is that the variance of Gaussian noise only shows up in the second cumulant; this enables us to state a result like Proposition 1 to identify the noise variance from the marginal cumulants. For a different noise distribution, a different/analogous version of Proposition 1 needs to be established as the noise variance can now appear in other cumulants. The difficulty is reminiscent of the challenges of moment-based approaches discussed in Remark 2. [Weakness 3] We have conducted an experiment in the attached pdf, and the details are in the global response to all reviewers.
Rebuttal 1: Rebuttal: We thank all of the reviewers for their helpful comments. Below, we address those points which were raised by multiple reviewers. Two reviewers asked about the dependence on $L$ of the minimax rate. Thank you both for the comment; it is well-received. We would like to make the case that the regime $L \asymp 1$ addressed in the article is essentially the only interesting setting with a gap in the literature. In the small $L$ case where $L \lesssim \frac{1}{\sqrt{\log n}}$, it can be shown quite easily that the minimax rate is $L^4$ and is achieved by the trivial estimator $\hat{\sigma}^2 = 0$. Of course, one can then ask to capture the rate dependence on $L$ in the intermediate setting $\frac{1}{\sqrt{\log n}} \lesssim L \lesssim 1$, but we feel this case is of very limited interest and relevance. On the other side, the large $L \gtrsim 1$ case can be essentially addressed by existing literature. To elaborate, note one can transform the data by $Y_i = X_i/L$, which then is distributed as $Y_i \sim N(\theta_i, \tau^2)$ where $\theta_i = \mu_i/L$ and $\tau^2 = \sigma^2/L^2$. It is clear then that $\tau^2 \leq 1$ and $||\theta||\_\infty \leq L^{-1}$. To estimate $\sigma^2$, we can equivalently estimate $\tau^2$ and then rescale. Since $L$ is large, intuitively the nuisance $\theta_i$ should not have much effect, and so we can directly use $\hat{\tau}^2 = \frac{1}{n} \sum_{i=1}^{n} Y_i^2$. It is straightforward to show $E(|\hat{\tau}^2 - \tau^2|^2) \lesssim n^{-1} + L^{-4}$. It turns out this simple estimator can actually be optimal for estimating $\tau^2$. For example, in the scaling $L = n^\alpha$ where $\alpha > 0$ is fixed, $\hat{\tau}^2$ achieves the rate $n^{-1} + n^{-4\alpha}$, which is well known to be the minimax rate for estimating $\tau^2$, e.g. it can be seen as a consequence of reference [61]. Consequently, the rescaled estimator $\hat{\sigma}^2 = L^2\hat{\tau}^2$ must also be minimax rate optimal for $\sigma^2$.
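The rescaling argument above is easy to verify in simulation. The following is our own illustrative sketch (the prior, seed, and parameter values are our choices, not from the paper) of the large-$L$ regime, where the naive second-moment estimator of $\tau^2$, rescaled by $L^2$, recovers $\sigma^2$ up to the small bias contributed by the bounded means:

```python
import random

random.seed(1)
n, L, sigma = 50_000, 100.0, 40.0        # large-L regime: sigma^2 <= L^2

# bounded means |mu_i| <= 1 with Gaussian noise, X_i ~ N(mu_i, sigma^2)
X = [random.uniform(-1.0, 1.0) + random.gauss(0.0, sigma) for _ in range(n)]

# rescale: Y_i = X_i / L ~ N(mu_i / L, tau^2) with tau^2 = sigma^2 / L^2 <= 1
tau2_hat = sum((x / L) ** 2 for x in X) / n   # naive second-moment estimator
sigma2_hat = L ** 2 * tau2_hat                # rescale back to estimate sigma^2

# the bias contributed by the means is E(mu^2) = 1/3 here,
# negligible next to sigma^2 = 1600
print(sigma2_hat)
```

The printed value lands close to $\sigma^2 = 1600$, consistent with the $n^{-1} + L^{-4}$ error bound stated above.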
Therefore, in our view, essentially the only interesting case where new methodology is needed is $L \asymp 1$. We had thus focused our article on this setting. Three reviewers pointed out that an experiment would be helpful. We have conducted an experiment in the attached pdf. The implementation details are as follows. We considered $20$ different sample sizes $n$ spread evenly on the log-scale between $e^{7}$ and $e^{10}$. For each sample size $n$, we ran $100$ simulations each for four different data-generating processes. In particular, we chose four different priors $\mu \sim G$: the Rademacher distribution ($G = \frac{1}{2}\delta_{-1} + \frac{1}{2}\delta_1$), a rescaled Beta distribution ($\mu = 2q - 1$ where $q \sim \text{Beta}(5, 1)$), the uniform distribution ($G = \text{Uniform}[-1, 1]$), and a Gaussian prior ($G = N\left(0, \frac{\log\log n}{\log n}\right)$). Note that although the Gaussian prior is not supported on $[-1, 1]$, it is informative to consider since it is quite close to (more precisely, matches many moments with) the prior constructed in the lower bound argument of Section 3. For each choice of prior $G$, we sampled $\mu_1,...,\mu_n \overset{iid}{\sim} G$, and generated data $X_i \,|\, \mu_i \sim N(\mu_i, \sigma^2)$ with $\sigma = 2.25$. We computed the maximum-based estimator $\hat{\sigma}^2\_{\max} = \left(\max\_{1 \leq i \leq n} \frac{X_i}{\sqrt{2\log n}}\right)^2$, the naive second-moment estimator $\hat{\sigma}\_{\text{mom}}^2 = \frac{1}{n} \sum\_{i=1}^{n} X\_i^2$, and our cumulant-based estimator $\hat{\sigma}^2$ with the choice of tuning parameters $r = \left\lfloor \frac{\log n}{\log\log n}\right\rfloor$ and $\varepsilon = n^{-0.45}$. We computed the squared error for each estimator, then averaged the squared errors across the $100$ simulations; these are the dots plotted in the figure. The error bars in the plot are standard errors.
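A minimal version of this simulation for the two baseline estimators can be sketched as follows (our own illustrative code, not the authors' implementation; the cumulant-based estimator requires the Bell polynomial machinery and is omitted). It uses the Rademacher prior from the experiment, under which the naive second-moment estimator carries a bias of $E(\mu^2) = 1$:

```python
import math
import random

random.seed(0)
n, sigma = 100_000, 2.25

# Rademacher prior on the means, Gaussian noise: X_i ~ N(mu_i, sigma^2)
X = [random.choice((-1.0, 1.0)) + random.gauss(0.0, sigma) for _ in range(n)]

# maximum-based estimator (achieves the 1/log n rate upper bound)
sigma2_max = (max(X) / math.sqrt(2 * math.log(n))) ** 2

# naive second-moment estimator; under this prior it targets sigma^2 + E(mu^2)
sigma2_mom = sum(x * x for x in X) / n

print(sigma2_max, sigma2_mom)   # compare to sigma^2 = 5.0625
```

Repeating this over many seeds and priors, and averaging the squared errors, reproduces the qualitative comparison described below.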
From the experimental results, we can see that our cumulant-based estimator performs consistently well regardless of the prior specification. In contrast, the maximum-based estimator (which achieves the upper bound $\frac{1}{\log n}$ as pointed out by reference [63]) performs consistently worse than the cumulant-based estimator across all priors. The behavior of the mean squared error of the naive second-moment estimator is unsurprising, namely it is related to $(E(\mu^2))^2$. Consequently, it does well in the Gaussian and Uniform prior settings where this quantity is low, but worse in the other settings where it is higher. Pdf: /pdf/a99648f07d481634cd7091fa14809467a549656f.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction
Accept (poster)
Summary: 1. The paper introduces a novel self-supervised method for reconstructing high-quality image sequences from sparse binary quanta image data. 2. The paper mainly adapts a self-supervised denoising algorithm called GAP. Instead of directly adopting the GAP method, the authors extended it to spatiotemporal structures and proposed a novel masked loss to address the correlation issue between input and target images. 3. Experimental results demonstrate that the proposed method substantially outperforms state-of-the-art techniques such as Quanta Burst Photography (QBP) in both reconstruction quality and throughput. 4. Additionally, the paper presents a new dataset and discusses the potential of the method for generalizing to other spatial event point processes beyond the specific application of quanta image sensors. Strengths: 1. The strengths of this approach mainly lie in its ability to effectively utilize spatiotemporal information, its novel masked loss function that addresses the correlation between input and target images, and its demonstrated superiority over existing methods like Quanta Burst Photography in terms of reconstruction quality and throughput efficiency. 2. In addition, as a self-supervised algorithm, the training of this scheme does not require the construction of large-scale synthetic datasets, making it more flexible and convenient. Weaknesses: 1. In Equation 1, there is a confusion between x_{in} and x_{inp}. Moreover, the right side of Equation 1 seems to have overlooked x_{tar}. 2. To illustrate the application of QIS in high-speed and low-light scenarios, it would be beneficial for the authors to provide specific data on the rotation speed of the fan and the light intensity of the scene. 3. The authors used a 10-shot approach to calculate the average results. However, methods similar to GAP require many iterations during inference, and the combination of these two factors could lead to significant computational time.
The authors should provide more detailed explanations, such as the number of iterations and the computational workload involved. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I still have doubts about the role of the masked loss. The authors state that this loss is intended to address the correlation issue between the input and target images. However, after convolution, the information from locations that were originally 1 in the input image has already spread to other pixel locations. Simply masking the original 1 positions may not fully conceal the information from those positions. 2. I am not quite clear about what the network's output is, given that the input is a t*h*w matrix. Is the output also t*h*w? If so, in the supervised loss comparison experiments, are there 32 grayscale images gt? For the N2N comparison experiment, is the supervision using QIS data with a shape of t*h*w? How was the GAP-2D experiment designed—does the network input consist of the first frame of QIS data from t*h*w? 3. Why does N2N lead to a granular pattern? Why can GAP-like method resolve the granular pattern caused by the N2N method? A good response to the Weaknesses and Questions will improve my initial rating. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: There is no discussion of potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments. Here, we answer the reviewer’s questions one by one: 1. **In Equation 1, there is a confusion between x_{in} and x_{inp}. Moreover, the right side of Equation 1 seems to have overlooked x_{tar}.**\ Thank you for pointing out the mistake. We will correct the typo in our revision. $x_{in}$ should be $x_{inp}$. There should be a $x_{tar}$ before the $\ln$ in Equation 1. 2. **To illustrate the application of QIS in high-speed and low-light scenarios, it would be beneficial for the authors to provide specific data on the rotation speed of the fan and the light intensity of the scene.**\ Agree and we will include the information in the revision. The rotation speed of the fan is 1500 rpm. The light intensity of the scene is in the order of 1-10 lux. This is a scene in a dark room with the room light turned off, with the only light source the computer monitor pointing towards the wall and some small LEDs on the motherboard. It is worth noting that even at this low light condition, the imaging is not photon-limited because the hardware is highly sensitive. We had to reduce the aperture of the camera lens to ensure the sensor was not over-saturated. 3. **The authors used a 10-shot approach to calculate the average results. However, methods similar to GAP require many iterations during inference, and the combination of these two factors could lead to significant computational time. The authors should provide more detailed explanations, such as the number of iterations and the computational workload involved.**\ Thank you for this comment. We notice the 10-shot approach is not clearly defined in the original manuscript. We will clarify this in L197 when the term is first introduced in the revision. To clarify, the 10-shot approach is unrelated to the iterative sampling in GAP. GAP uses multiple iterations to achieve posterior sampling and function as a generative model. 
In this work, we only use the network for MMSE denoising, which requires only a single iteration. The 10-shot refers to running the inference 10 times using data Bernoulli-resampled from the raw data (L246). This takes 10 times the inference time but is not a major time-limiting factor. If the data throughput is a concern, we also provide a practical one-shot inference solution (L166, L195, L243). 4. **I still have doubts about the role of the masked loss. The authors state that this loss is intended to address the correlation issue between the input and target images. However, after convolution, the information from locations that were originally 1 in the input image has already spread to other pixel locations. Simply masking the original 1 positions may not fully conceal the information from those positions.**\ The purpose of the masking is not to fully conceal the information from these positions. The only input information to the network is photon positions, so it is necessary for the network to spread this information to other pixels to make predictions about the clean signal. Due to the binary nature of the sensor and the splitting of the truncated Poisson (Bernoulli) distributed data, pixels that have the value 1 in the input will always have the value 0 in the target. Training a network without masking will lead to the network predicting dark pixels at the locations where there is a photon in the input (see figure in the rebuttal PDF). The mask prevents the network from learning this deterministic relationship by zeroing the loss computation for pixel locations that have value 1 in the input. Other (neighboring) locations are not affected by this issue and do not need to be masked. The mask prevents the network from learning from the target pixels that are determined to be 0 because the input is 1. 5. **I am not quite clear about what the network's output is, given that the input is a thw matrix. Is the output also *thw*? 
If so, in the supervised loss comparison experiments, are there 32 grayscale images gt? For the N2N comparison experiment, is the supervision using QIS data with a shape of *thw*? How was the GAP-2D experiment designed—does the network input consist of the first frame of QIS data from *thw*?**\ Correct. The network's input and output have the same *thw* dimension (3D to 3D). There are 32 grayscale ground truth frames in the supervised loss comparison experiments. The shape of the training data was always the same *thw* for all 3D network experiments, including the N2N comparison experiments. The GAP-2D experiment is designed so that individual 2D frames are used in training and inference. The network input consists of single frames of the QIS data, randomly selected during training (random *t* from *thw*). 6. **Why does N2N lead to a granular pattern? Why can GAP-like method resolve the granular pattern caused by the N2N method?**\ We believe the granular pattern is caused by overfitting due to limited training data. In N2N, the same binary data pairs are repeatedly used for training. In our method, random splitting alleviates this problem by creating a different binary pair each time. We see this as a type of data augmentation that effectively increases the amount of available training data. L266 has some discussion regarding this. L275 and Fig. S4 demonstrate some aspects of the problem with an experiment using random Poisson noise. 7. **There is no discussion of potential negative societal impacts.**\ We will clarify potential negative societal impacts in our revised conclusion. The method indeed can be misused in research (e.g., using inappropriate data, making wrong assumptions about the noise, etc.) and the prediction results can be misinterpreted, leading to incorrect scientific conclusions and potential negative societal impact. --- Rebuttal Comment 1.1: Comment: Thank you for the kind reply, most of my concerns have been addressed. 
I am willing to raise the score to match the contribution of this paper.
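The masked-loss mechanism described in point 4 of the rebuttal above (zeroing the loss at every pixel where the binary input is 1, since splitting forces the target to 0 there) can be sketched in a few lines. This is our own illustrative toy version with hypothetical names and a generic Poisson-style per-pixel loss, not the authors' implementation:

```python
import math

def masked_loss(pred, target, inp):
    """Mean per-pixel loss over flattened frames, zeroed wherever inp == 1.

    Splitting binary (truncated-Poisson) data forces target == 0 wherever
    inp == 1; without the mask the network would learn this deterministic
    relationship and predict dark pixels at photon locations."""
    total, count = 0.0, 0
    for p, t, i in zip(pred, target, inp):
        if i == 1:        # masked: a photon landed in the input half here
            continue
        total += p - t * math.log(p + 1e-8)   # toy Poisson-style pixel loss
        count += 1
    return total / count

pred = [0.5, 0.5, 0.5, 0.5]     # network prediction (flattened)
inp  = [1, 0, 0, 1]             # input half of a Bernoulli split
tar  = [0, 1, 0, 0]             # target half: always 0 where inp is 1
loss = masked_loss(pred, tar, inp)
```

By construction, changing the target at a masked position leaves the loss unchanged, which is exactly the point made in the rebuttal: those positions carry no usable training signal.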
Summary: This paper presents a self-supervised method for reconstructing high-quality video from sparse binary quanta image data produced by single-photon avalanche diode (SPAD) arrays. The authors propose a novel masking strategy to handle the binary nature of the data and extend their method to 3D to leverage spatiotemporal information. They evaluate their approach on both simulated and real SPAD data, demonstrating improved reconstruction quality and throughput compared to existing methods like Quanta Burst Photography (QBP). The paper also introduces a new dataset of real SPAD high-speed videos under various challenging imaging conditions. Strengths: 1. The paper addresses an important problem in computational imaging, proposing a novel self-supervised approach for reconstructing high-quality video from sparse binary quanta data. 2. The authors' masking strategy to handle binary data is innovative and appears effective based on the presented results. 3. The introduction of a new real SPAD high-speed video dataset is valuable for the research community. Weaknesses: The paper fails to convincingly demonstrate the effectiveness and advantages of the proposed method over existing approaches. The theoretical foundations are not well-developed, and the experimental results are not sufficiently rigorous or comprehensive to support the claims made. The writing lacks clarity in many sections, making it difficult to fully understand the proposed method and its implications. 1. The discussion on selecting the photon splitting variable p is confusing and lacks practical considerations. The paper doesn't adequately address how to ensure correct signal reconstruction when the signal level is unknown before capture. 2. Many crucial implementation details are absent: - The composition and size of the training dataset are not specified. - It's unclear if the quantitative results for simulated data are computed from only one video (L204). 
- The number of videos in the real test data is not mentioned. 3. The comparison with existing quanta video reconstruction methods is limited to only QBP (proposed in 2020). There is a lack of quantitative comparisons with other existing methods. 4. The paper lacks a detailed analysis of the method's runtime and memory requirements compared to existing approaches. 5. More extensive ablation studies exploring the impact of various components of the method (e.g., network architecture choices, hyperparameters) are needed to provide deeper insights into the method's performance. 6. Several technical terms and abbreviations are not properly defined or explained: - L1: "SPAD" is not defined on first use. - L15: The unit for "0.06 photons per pixel" is not specified (e.g., per frame or per second). - L22: "QBP" is not defined on first use. - L98: The meaning of s_i and i is unclear (intensity or pixel position?). - L112: The probability of zero photons hitting a pixel being e^{-s_i} needs a detailed explanation. - L197: The term "10-shot inference" is not explained. 7. Some experimental details are missing: - L205: More information on the iPhone 15 slow motion mode (e.g., resolution, exact frame rate) would be helpful. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How do you propose to select the photon splitting variable p in practice when the signal level is unknown before capture? Can you provide guidelines for selecting an optimal value? 2. Can you provide a more comprehensive comparison with other existing quanta video reconstruction methods beyond QBP, including quantitative results? 3. Can you provide a detailed analysis of the computational complexity and how it scales with dataset size? 4. Could you elaborate on the runtime and memory requirements of your method compared to existing approaches? 5. Can you provide more extensive ablation studies on the impact of various components of your method, such as network architecture choices and hyperparameters?
6. Can you provide more details on the composition and size of the training dataset used? How many videos were used in both simulated and real tests? 7. How does the method perform on extremely low photon count data, and what is the lower limit of photon count where the method remains effective? 8. How does the method perform on different types of scenes or motion patterns? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed limitations, discussing the scope of applicability to Poisson noise and computational considerations. They have also provided a thorough discussion of assumptions and potential limitations in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. We noticed several factual errors in this review: 1. **L1: "SPAD" is not defined on first use.**\ SPAD is defined in L1 on first use. 2. **L22: "QBP" is not defined on first use.**\ QBP is defined in L21-22 on first use. 3. **L205: More information on the iPhone 15 slow motion mode (e.g., resolution, exact frame rate) would be helpful.**\ The frame rate 240 fps is reported in L205. The video is cropped/resampled to 3990x512x512. 4. **The composition and size of the training dataset are not specified.**\ Data size is provided in L208. Each dataset is shown and described in Fig. S5-S12. The intended meaning of the term "composition" is unclear. Some questions and comments are vague and not specific. We tried our best to address them. We grouped related questions and comments: 1. **The paper fails to convincingly demonstrate the effectiveness and advantages of the proposed method over existing approaches. The comparison with existing quanta video reconstruction methods is limited to only QBP (proposed in 2020). There is a lack of quantitative comparisons with other existing methods. Can you provide a more comprehensive comparison with other existing quanta video reconstruction methods beyond QBP, including quantitative results?**\ To our knowledge, our self-supervised 3D binary to 3D grayscale task is novel. We are not aware of any existing work that solves this task. We hope the reviewer can specify. Although similar, QBP is a substantially different task, which constructs a single 2D grayscale image from many binary frames. QBP also did not show any comparable quantitative results. We compared our method to GAP, N2N, and N2V quantitatively. 2. **The theoretical foundations are not well-developed, and the experimental results are not sufficiently rigorous or comprehensive to support the claims made.**\ It is unclear which claims are not supported. 3.
**The writing lacks clarity in many sections, making it difficult to fully understand the proposed method and its implications.**\ It is unclear which sections and specific parts of the explanation lack clarity. 4. **The discussion on selecting the photon splitting variable p is confusing and lacks practical considerations. Can you provide guidelines for selecting an optimal value [for p]?**\ We extensively discussed methods for selecting p in L157, L266 and L465, plus Fig. 3, 4, S3, S4, S6, S8, including two best practices. The simple method is using a fixed p (L160). The more robust method is selecting p randomly between 0-1 for each training pair (L166). Both methods are easy to implement in practice. 5. **Not adequately address how to ensure correct signal reconstruction when the signal level is unknown before capture. How to select the photon splitting variable p in practice when the signal level is unknown before capture?**\ Our method is self-supervised. The model is trained on the very data to be denoised at the detected signal level. The selection of p is unrelated to the signal level, as it corresponds to the fraction of photons to split for already-known measurements. 6. **It's unclear if the quantitative results for simulated data are computed from only one video (L204).**\ We stated in L204 that the simulated data are computed from a ground truth reference video. 7. **The number of videos in the real test data is not mentioned. Can you provide more details on the composition and size of the training dataset used? How many videos were used in both simulated and real tests?**\ We reference all 7 real test datasets in L211. Each test is a video. 8. **The paper lacks a detailed analysis of the method's runtime and memory requirements compared to existing approaches.
Can you provide an analysis of the computational complexity and how it scales with dataset size?**\ We discussed practical computational requirements, optimization, performance, runtime, and VRAM in L185-L198. We compared our runtime to QBP (L294). The computational complexity is more relevant to the architecture rather than the data size. 9. **More extensive ablation studies exploring the impact of various components of the method (e.g., architecture, hyperparams) are needed to provide deeper insights into the method's performance. Can you provide more extensive ablation studies on the impact of various components of your method?**\ It is unclear which specific ablation studies and hyperparameters are overlooked, as we provided substantial details (Table S1-8, Fig. 4, Fig. S1, 3, supp hp.CSV) relevant to the core concept of the method. Besides, our contribution is not about the model architecture, but the novel task, a theoretical framework for the solution, and its practical implementation. 10. **L15: The unit for "0.06 photons per pixel" is not specified (e.g., per frame or per second).**\ We will clarify this to 0.06 photons per pixel per frame. 11. **L98: The meaning of s_i and i is unclear (intensity or pixel position?).**\ We will clarify that i indicates the pixel position, and s_i indicates the Poisson rate at i. 12. **L112: The probability of zero photons hitting a pixel being e^{-s_i} needs a detailed explanation.**\ We explained this in A1 L445-463 in detail. We will clarify that in our revision. 13. **L197: The term "10-shot inference" is not explained.**\ The 10-shot inference refers to the inference strategies in L243. This will be clarified in the revision. 14. **How does the method perform on extremely low photon count data, and what is the lower limit of photon count where the method remains effective?**\ We simulated low photon count data with an average of 0.06 photons/pixel. Some data have even lower photon count (e.g., plasma data Fig.
2, S7, with photon count in the order of 0.001). It is unclear what is considered extremely low or effective. 15. **How does the method perform on different types of scenes or motion patterns?**\ Results covering a wide range of scenes and motion patterns are provided in Fig. 1, 2, 5, S4-S14. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. While the reorganization of the response clarifies some aspects, it makes it difficult to follow my original concerns. Therefore, I've outlined my remaining comments below, following the original numbering for clarity. 1. [Comments] I am satisfied with the explanation provided for selecting the photon splitting variable p, which significantly impacts the method's applicability. 2. Many crucial implementation details about the dataset are absent. [Comments] The response lacks crucial implementation details, particularly regarding dataset statistics. In lines 208-211, more specific information about the dataset size is needed, such as: "The proposed dataset contains XX real videos and XX synthetic videos, each with XX frames. Synthetic sequences were simulated from XX dataset videos using XX method." While Figures S5-S12 provide examples of real SPAD data and reconstruction, I am seeking a comprehensive statistical profile, not just illustrative examples. Without this information, the evaluation may be biased if it is based on a single video. 3. The comparison with existing quanta video reconstruction methods is limited to only QBP (proposed in 2020). There is a lack of quantitative comparisons with other existing methods. Can you provide a more comprehensive comparison with other existing quanta video reconstruction methods beyond QBP, including quantitative results? [Comments] The theoretical foundations of N2N, N2V, and GAP do not restrict their application to specific data dimensions. If I understand the proposed method correctly, it shares this flexibility.
Therefore, the applicability to 3D data is not a unique contribution of the proposed method. Additionally, QBP can be adapted for 3D data by applying a densely sampled sliding window, which further emphasizes the need for a broader comparison. 4. The paper lacks a detailed analysis of the method's runtime and memory requirements compared to existing approaches. Can you provide a detailed analysis of the computational complexity and how it scales with dataset size? Could you elaborate on the runtime and memory requirements of your method compared to existing approaches? [Comments] To clarify, I am seeking a fair quantitative comparison of inference computational complexity between different methods under the same input data size and hardware requirements, as commonly found in the literature. This typically involves evaluating the minimal memory and runtime required for inference. The current discussion only mentions the maximum GPU memory, total training time, and a rough estimate of minutes used without any constraints. 5. More extensive ablation studies exploring the impact of various components of the method (e.g., network architecture choices, hyperparameters) are needed to provide deeper insights into the method's performance. Can you provide more extensive ablation studies on the impact of various components of your method, such as network architecture choices and hyperparameters? [Comments] As stated in line 57, one of the paper's main contributions is providing insights into network design. However, Table S1 only lists the selected hyperparameters, not the results of ablation studies. The only relevant ablation study is Table S7, which examines the effect of model size. 6. Several technical terms and abbreviations are not properly defined or explained: - L1: "SPAD" is not defined on first use. - L22: "QBP" is not defined on first use. [Comments] I mean the first use in the main text, not the abstract. 
My suggestion is that the terminologies should be defined in the main text when they are used for the first time, even if they appear in the abstract. 7. Some experimental details are missing. - How does the method perform on extremely low photon count data, and what is the lower limit of photon count where the method remains effective? [Comments] The statistics provided are about the simulation. By effectiveness, I mean quantitative results demonstrating the method's performance under different illumination levels measured in lux. - How does the method perform on different types of scenes or motion patterns? [Comments] Can you provide a related analysis, including insights into potential limitations or guidelines for practical applications? --- Reply to Comment 1.1.1: Title: Response to the reviewer iaWM's new comments Comment: We appreciate the reviewer’s last-minute response. We want to note another factual error in this comment: 6. **I mean the first use in the main text, not the abstract. My suggestion is that the terminologies should be defined in the main text when they are used for the first time, even if they appear in the abstract.**\ QBP is also defined on first use in the main text (L291).\ We acknowledge that SPAD is not defined in L25. The initial definition is on the same page. We will fix it in the revision. Please find other responses below: 2. **The response lacks crucial implementation details ...**\ We stated that each real SPAD dataset has 100-130k frames in L208. Each dataset has a single video. We can provide other required information in the revision. We are not sure what the term “comprehensive statistical profile” means in this context. 3. **The theoretical foundations of N2N, N2V, and GAP ...**\ We acknowledge that most convolutional neural networks can be applied to multidimensional data. In fact, we tested 3D N2N, N2V, and GAP with the same network architecture as shown in Table 1. 
However, directly applying these methods in 3D to 1-bit SPAD data produced suboptimal results. We also applied masked loss to 2D GAP (Table 1). It did not produce comparable results either. The combination of the 3D implementation, masked loss, and p-thinning-based sampling strategy, together with important architectural design choices such as group normalization, is the key to high-quality 1-bit quanta image reconstruction. The main contribution of this work is the combined concept and approach. While QBP can be adapted for 3D data by applying a densely sampled sliding window, the original manuscript indicates that speed (30 minutes/frame) is a major limitation as noted in L295. This is not a practically viable approach for reconstructing thousands of densely sampled sliding windows. Additionally, QBP works in a fundamentally different way than our method and we are not aware of how to make fair quantitative comparisons. It registers and bins adjacent frames, therefore the output is an accumulation of aligned images instead of a direct volume prediction. Despite that, we tested real SPAD data from QBP with our method and reported the result in Fig. 5. We are not aware of any method beyond QBP for a broader comparison. 4. **To clarify, I am seeking a fair quantitative comparison ...**\ There are no differences in inference computational complexity for the 3D methods compared in Table 1. The only difference is the self-supervised training strategy. In L214, we mentioned “to ensure fair comparisons, we incorporated the same network architectures, hyperparameters, and training steps into individual baselines…” Therefore, we believe we provided a fair quantitative comparison of inference computational complexity between different methods under the same input data size and hardware requirements. Also, this work highlights a new self-supervised concept for Bernoulli distributed data. 
Still, we provide a network design that is readily useful within the defined constraints. We acknowledge the network performance can be further improved using optimal architectures and hyperparameters, but it is beyond the scope of this work. The numerical information we provided in Table S1, L192, and L298 can be used to estimate the general runtime requirements in different situations by scaling the parameters. 5. **As stated in line 57, one of the paper's main contributions ...**\ We conducted extensive ablation studies besides Table S7. Table S6 and Fig. 4d compare the initial filter size and indicate both large and small sizes can degrade the performance of the network. Fig. 4a and L261 compare the group normalization size, which is key for quality reconstruction. We also consider p as a highly relevant parameter in our method, and we conducted thorough ablation studies with different p ranges in Fig. 4b, c and Table S2-5, 8. We also included results with many different experiments in hp.CSV as noted in the Table S1 caption. All of the studies include numerical results. It is unclear which specific ablation studies the reviewer wants. \ 6. **See above** 7. **The statistics provided are about the simulation ...**\ We will clarify that the pixel values indicate the absolute illumination level (photon count). SPADs are very sensitive devices and work at the single photon level. Lux is a perceptual measurement of visible illuminance, which is not an appropriate measurement. For example, SPAD measures the spectrum through UV-NIR. Lux only accounts for visible light.\ The requirements/assumptions for applying this method are discussed in L49-51. In extreme cases, as shown in Fig. S4, we applied the method to a random binary array with 1% of the pixels occupied with ones. The result is an expected output of a blank image. One limitation of the method is that if the pixels are not independent, the predictions can be erroneous. 
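As a supplement to the masked-loss discussion in the responses above, the following is a minimal numerical sketch of one plausible masked per-pixel binary cross-entropy for split 1-bit data. The masking rule (evaluate the loss only at pixels whose input split carries no photon) and all names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def masked_bce(pred, target, inp, eps=1e-7):
    """Binary cross-entropy evaluated only where the input split has no photon.

    Assumption for illustration: excluding pixels that received an input photon
    avoids the splitting-correlation artifact discussed in the rebuttal.
    """
    mask = inp == 0                       # exclude pixels with an input photon
    p = np.clip(pred, eps, 1.0 - eps)     # numerical safety for log()
    bce = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    return (bce * mask).sum() / mask.sum()

rng = np.random.default_rng(0)
inp = (rng.random((8, 16, 16)) < 0.06).astype(float)     # binary input split
target = (rng.random((8, 16, 16)) < 0.06).astype(float)  # binary target split
pred = np.full_like(target, 0.5)                          # uninformative prediction
print(masked_bce(pred, target, inp))                      # log(2) ≈ 0.693 for p = 0.5
```

An uninformative prediction of 0.5 yields a loss of log 2 regardless of the target, which is a quick sanity check that the masked average is computed correctly.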
--- Rebuttal 2: Comment: We thank the reviewer for their comments. Regarding our used datasets, please note that we commit to publishing our datasets. We agree that the suggested clear and to-the-point summary of the used data can be helpful. We will add a sentence, in the form suggested by the reviewer, stating the number of videos and frames in the main paper: "To evaluate our method, we use a total of 7 real SPAD videos, each containing 100k-130k frames, and a synthetic video with simulated noise consisting of 3990 frames. Additionally, we use a real video with 100k frames published in [8]". We additionally will point to the description of the simulation in L199 “Simulated data” section. We will also add a table to the supplementary material describing the characteristics of each video including frame rate, number of frames, frame size (resolution), and subject content. However, we would like to stress that our manuscript includes experiments on a total of 9 (7 real + 1 synthetic + the video from [8]) videos, each showing vastly different content including low-signal, high-signal, high-contrast, high-ambient-light, moving camera, moving object, linear movement, random movement, combined movement, ultra-fast events, and stochastic events. The real data include dynamically moving objects, plasma balls, high-frequency bubble dynamics, histology images, and fluorescence microscopy data. The simulated data was carefully taken to cover a large range of image contrast and structures. We will include results on additional simulated data from other domains, including microscopy, in the supplementary material of the final version.
Summary: The paper proposes to extend Generative Accumulation of Photons, which was proposed for Poisson noise, to 1-bit quanta image sensors. Strengths: The proposed method is novel and mathematically interesting. The authors have put in a lot of effort to fit GAP to the problem of 1-bit QIS reconstruction. Weaknesses: The proposed method does not seem to perform better than a supervised method. It is not clear what supervised method was used in the comparisons. It is not clear what the unique advantages of this method are compared to standard techniques like supervised learning or data simulation-based training. Technical Quality: 3 Clarity: 3 Questions for Authors: Wouldn't randomly assigning each photon event to one of the two bins lead to loss of temporal information to some extent in both the input and target? The supervised method seems to perform better than the proposed method on static simulated scenes? Would it achieve similar fps as the proposed method on the video data too? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. The authors have made the limitations of the proposed method clear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments and we want to answer the reviewer’s questions one by one: 1. **The proposed method does not seem to be performing better than a supervised method. It is not clear what is the supervised method that was used in the comparisons.**\ We apologize for the confusion caused by the insufficiently detailed description of the supervised method. The purpose of the experiment was to characterize the theoretical upper limit of the network performance if clean ground truth was available. The supervised method was trained using pairs of clean ground truth data and simulated noisy data. The supervised method used the identical network architecture as our self-supervised method and was trained with cross-entropy loss. It ensured that the selected network architectures and hyperparameters could effectively represent the ground truth distribution. We will include that in the revised manuscript. 2. **It is not clear what the unique advantages of this method are compared to standard techniques like supervised learning or data simulation-based training.**\ Supervised training has many limitations in practice, especially in scientific experiments (e.g., microscopy, ultra-high-speed imaging, etc.). Supervised training requires ground truth data that covers the distribution of the noisy measurements. Unlike natural image processing, very often we cannot acquire such a large amount of ground truth data, cannot acquire ground truth data at all, or cannot ensure the distribution of the measurements matches the distribution of the training data. The mismatch between the training data and the measurements can cause erroneous predictions. 
Our self-supervised method can restore the image using solely the information from the noisy dataset itself, making it appealing for scientific data processing.\ Regarding data simulation-based training, if noise-free ground truth data is available, we can use simulated noise to produce the required training data. We believe the truncated Poisson noise model is sufficiently accurate for training the model. It may be possible to simulate the ground truth data given the knowledge of the underlying ground truth signal distribution. However, this knowledge is often unavailable in non-natural imaging applications. \ *In summary, the self-supervised method does not require ground truth data or prior knowledge about the signal distribution, making it an effective solution for many scientific applications.* 3. **Wouldn't randomly assigning each photon event to one of the two bins lead to loss of temporal information to some extent in both the input and target?**\ Each photon event has its own spatial-temporal index (pixel location and frame number). Assigning a photon event to one of the two bins (input and target) does not affect its frame number and pixel location, so it does not lead to loss of temporal information. 4. **The supervised method seems to be performing better than the proposed method on static simulated scenes? Would it achieve similar fps as the proposed method on the video data too?**\ We apologize for the confusion. The simulated scenes presented in the work are also dynamic video data (see L204, Fig. S2). There are 3990 frames in the ground truth data with motion, recorded at 240 fps. See the response #1 above regarding the comparison of performance. Regarding the inference performance, the supervised method has the same network architecture as our method and outputs the same temporal resolution. Therefore the fps of inference is identical. We are happy to provide further clarification in the discussion phase of the rebuttal process.
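The argument in point 3 (splitting does not destroy temporal information) can be checked with a minimal sketch, assuming a Bernoulli split that routes each photon event to the input bin with probability p; the array sizes and photon rate below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Binary SPAD-like stack (frames, rows, cols), ~0.06 photon events per pixel per frame
frames = rng.random((16, 32, 32)) < 0.06
p = 0.5
route = rng.random(frames.shape) < p
inp = frames & route     # photon events routed to the input bin
tgt = frames & ~route    # remaining events form the target bin

# Every event keeps its (frame, row, col) index, so no temporal information is lost:
# the two bins are disjoint and their union restores the original stack exactly.
assert not (inp & tgt).any()
assert ((inp | tgt) == frames).all()
```

Because each event retains its frame number and pixel location, summing the two bins recovers the original binary stack, which is the sense in which the split is lossless in time.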
Summary: This paper introduces a method to reconstruct/denoise high-resolution high-frame-rate videos captured by 1-bit quanta imaging sensors (e.g., SPAD arrays) without heavy spatio-temporal binning. The paper also captures and will release a new SPAD dataset. The proposed method is loosely based on Generative Accumulation of Photons [9], which trains a CNN to reconstruct images in a self-supervised fashion by reconstructing the pdf of the photon arrivals. This paper makes several significant modifications to [9] so that it can work with quanta image sensors: First, in order to account for 1-bit sensors, the proposed method models the photon arrival pdf as a Bernoulli (rather than Poisson) distribution to account for the binary nature of the measurements. Second, in order to improve reconstruction accuracy it incorporates temporal information regularization. The proposed method is tested on experimentally captured data and produces visually compelling results, including in the presence of non-rigid motion (e.g., guitar string). Strengths: A SPAD dataset would be extremely valuable to the research community. Proposed method is novel and works well. Paper is well written. Denoising quanta image sensor data is an important problem with many scientific imaging applications. Weaknesses: Proposed technique largely follows from GAP. Unclear if the Poisson distribution assumption is valid in the photon-starved regime, i.e., would GAP work for dimmer scenes? Figure/table captions could use more info. E.g., state whether Table 1 results are experimental or simulated. Underperforms supervised methods. Technical Quality: 4 Clarity: 4 Questions for Authors: Does a Poisson signal model work in the low-flux regime (where it would be rare for more than one photon to arrive in a pixel)? Can one bin a large # of frames in overlapping windows to ensure Poisson statistics while still recovering high temporal resolution? How important is self-supervised training? 
Is the simulated noise model accurate enough that one could train an effective QIS denoiser using only simulated data? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Well discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments and we want to answer the reviewer’s questions one by one. For conciseness, we grouped similar and relevant questions and comments. 1. **Is the Poisson distribution valid in the photon-starved regime (fewer than one photon per pixel on average)? How does GAP behave in this regime? How does the proposed method differ?**\ It depends on the physics of the photon-counting process. If the sensor produces true Poisson photon counts, GAP should be applicable even for very low photon counts. Pixels can have more than one photon even if the Poisson rate is low. However, single photon detectors (like SPAD) are binary, leading to a truncated Poisson distribution / Bernoulli distribution (see L107). In this case, the Poisson distribution assumption becomes invalid at the pixel level leading to the artifacts shown in the main paper Page 7 Table 1 (Ours / No Mask, which is equivalent to a 3D version of GAP), where every pixel that has a photon in the input image leads to a darker result in the corresponding pixel in the output image (also see the figure in the rebuttal PDF). This is a consequence of correlations introduced by the splitting operation. Reducing the amount of light (and photons) will not remedy this problem, and the output images will still exhibit the same artifacts, albeit for fewer pixels, as there are fewer photons in the input. Addressing this issue is a major contribution of this manuscript. Our masking scheme can reliably avoid these artifacts even in low-light conditions. 2. **Can one bin a large # of frames in overlapping windows to ensure Poisson statistics while still recovering high temporal resolution?**\ Binning a large number of binary frames will make the data behave more Poisson, but the temporal resolution cannot be recovered. There is a trade-off. As soon as we sum multiple frames, we can no longer distinguish which frame each photon is coming from. 
The temporal information is lost in the process. In our experiments, each frame has a 6 ns exposure time, but the frame rate is 100k fps (10 us/frame). There is a relatively long period between frames when the camera is not collecting light. If there is fast movement, the movement between frames will be integrated after binning, and binned frames will represent a much longer time scale, causing motion blur. 3. **Comparison to supervised methods and training using simulated data.**\ We acknowledge that the supervised method is the upper limit of the network performance under ideal conditions. However, supervised training requires high-quality ground truth data that covers the distribution of the noisy measurements, leading to limited practical applicability. In many cases, we cannot acquire such a large amount of ground truth data, cannot acquire ground truth data at all, or cannot ensure the distribution of the measurements matches the distribution of the training data. The mismatch between the training data and the measurements can cause erroneous predictions.\ If noise-free ground truth data is available, we can use simulated noise to produce the required training data. We believe the truncated Poisson noise model is sufficiently accurate to generate training data. In many applications, ground truth data is unavailable. In some cases, it may be possible to simulate the ground truth data, given the knowledge of the underlying ground truth signal distribution. This knowledge is often not available in non-natural imaging applications (e.g. scientific imaging).\ *The ability to restore noisy data without ground truth data or prior knowledge about the signal distribution is a necessity for many scientific applications. Using a self-supervised method is important in this situation.* 4. **Figure/table captions could use more info. 
E.g., state whether Table 1 results are experimental or simulated.**\ We will include more information in the Figure/Table captions in the revision. Table 1 results are from simulated data to have a ground truth for evaluation. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions.
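The truncated-Poisson point in the rebuttal above (point 1) can be verified numerically: a 1-bit pixel with Poisson rate s fires with probability 1 − e^{−s}, i.e. it is Bernoulli rather than Poisson. The rate and sample count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.06                                   # photons per pixel per frame (illustrative)
counts = rng.poisson(s, size=1_000_000)    # ideal Poisson photon counts
binary = np.minimum(counts, 1)             # 1-bit sensor clips multi-photon events
# Empirical detection rate matches the analytic Bernoulli parameter 1 - exp(-s)
print(binary.mean(), 1.0 - np.exp(-s))
```

Even at this low rate, a small fraction of pixels receive two or more photons, which is exactly the information the binary sensor discards and the reason the Poisson assumption breaks at the pixel level.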
Rebuttal 1: Rebuttal: We have addressed each reviewer's comments and questions in detail in reviewer-specific rebuttals underneath each review. A new figure demonstrating the photon splitting process is presented in the attached PDF. The figure also more clearly indicates the artifact resolved by the masked loss. This figure is cited in abBE Response #1 and VmGM Response #4. We plan to add this figure to our revised manuscript. Pdf: /pdf/0d9ce767f7c29146b3f8e25a2aca38bb41c06447.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Beyond Single Stationary Policies: Meta-Task Players as Naturally Superior Collaborators
Accept (poster)
Summary: The authors propose a Bayesian policy reuse-based framework referred to as CBPR, which allows a collaborative artificial agent to adaptively select optimal collaborative policies from multiple policy networks. They did so by extending intra-episode belief to collaborative scenarios and incorporating this extension into the vanilla BPR. Their novel framework bypasses the need to model human behavior, and is suitable for human-AI collaborative tasks. The authors also provide theoretical proofs that their framework converges in cooperation with non-stationary humans, and that it can also establish the optimal collaboration policy. Finally, the authors evaluated this CBPR framework with Overcooked, a collaborative cooking game. Through experiments with this game, they demonstrated that CBPR outperforms baselines, showing that it is more effective to design multiple agents rather than a single agent. Strengths: The authors' incorporation of intra-episode belief into traditional BPR to develop CBPR was smart and original. According to their experiment with Overcooked, this novel combination of existing concepts made significant advancements in the field of human-AI collaboration. The authors went above and beyond to present the technical soundness of their work by proving their framework's convergence and optimality. Additionally, the paper is well-structured and clearly written, with meticulous attention to the setup of their experiments, enhancing the overall clarity and impact of their findings. Weaknesses: I couldn't identify any area of weakness for this manuscript. Technical Quality: 4 Clarity: 4 Questions for Authors: In theorem 1, the statement first mentions a stationary human policy as a given. At the end, the theorem states that convergence is also guaranteed for a non-stationary policy. It's not clear why the theorem specifically starts with a stationary policy if it could be applied to both cases. 
Moreover, the proof does not clarify why the convergence guarantee for a stationary policy also applies to a non-stationary policy. Additional clarification to point out this gap would be helpful for readability. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors pointed out that their meta-tasks are manually-designed, rule-based policies, meaning that this method would not scale in real-world applications such as power grid dispatching and autonomous driving. While CBPR offers a strategy to model meta-tasks for human-AI collaboration, the challenge of summarizing domain experts' meta-tasks still remains. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the time and effort that reviewer QDdz has invested in reviewing our paper, and we appreciate the recognition of the main advantages of our method. Additionally, thank you for your insightful comments regarding Theorem 1 and its proof. **Clarification on the Relationship Between Stationary and Non-Stationary Policies** Thank you for your thoughtful and perceptive question. Your question highlights an important aspect of our theorem formulation, and we acknowledge the need for a clearer explanation in our original manuscript. Our approach is developed on the key insight that, despite the inherent diversity in human behaviors, the underlying meta-tasks within specific collaborative contexts are quite similar. This observation facilitates the transformation of a non-stationary cooperative problem into one involving a series of *stationary* policies addressing distinct meta-tasks. Theorem 1 outlines this transformation by situating the collaborative process between humans and AI within a Non-Stationary MDP framework. To manage the inherent non-stationarity, the broader non-stationary decision-making process is methodically decomposed into *a series of temporally contiguous stationary segments, each representing the stationary process of accomplishing a meta-task*. Initially, Theorem 1 defines trajectories collected from interactions with what we referred to as a 'stationary human policy.' To clarify, this term was intended to denote a stationary policy that humans use to accomplish a specific meta-task. It was not meant to imply that our theorem begins by establishing convergence for a stationary human policy and then extends this to a non-stationary human policy. We acknowledge that the term 'stationary human policy' may have led to some confusion. Thank you for your insightful query. In response, we have revised the terminology to 'stationary meta-task performing policy' in our manuscript to improve clarity and readability. 
**Clarification of Convergence in Stationary and Non-Stationary Contexts** Your question regarding convergence is astute. We believe our CBPR framework is innovative, offering a straightforward approach to achieving convergence in non-stationary human-AI cooperative settings. To ensure convergence while interacting with a non-stationary human, a cooperative policy must: 1) recognize changes in human behavior policies, and 2) adapt a consistent response policy that aligns with these changes. In the proof of Theorem 1, we first establish the convergence properties of the Bayesian update mechanism. This ensures that, with the convergence provided by Bayesian updates, the belief on the current meta-task will consistently converge. This convergence is crucial for the CBPR framework to effectively track changes in human behavior. Subsequently, we demonstrate that when utilizing Bayesian updates, CBPR algorithms consistently converge to a specific meta-task cooperative policy, provided that the meta-task being performed by the human remains unchanged until this fixed response policy is achieved. A key advantage of our CBPR method is the use of Bayesian updates to dynamically update beliefs on human behavior. This approach is effective for both stationary and non-stationary policies. The belief about the meta-task will consistently converge to match the behavior that the human is exhibiting. The primary distinction between stationary and non-stationary human policies is that the meta-task in the former remains constant, while in the latter, it evolves over time. We hope these clarifications address your concerns and enhance the readability and comprehension of our work. We thank you again for your valuable input, which has significantly contributed to improving our paper.
Summary: This work explores how to address the challenges of non-stationary human behavior in human-AI collaboration. The authors propose a Bayesian framework that adaptively selects optimal models during training episodes to capture the underlying consistent human behavior in solving meta-tasks. Theoretical analysis shows that their proposed method can converge to optimal solutions even when the human‘s policy is non-stationary. Both simulations and real human-subject evaluations show that the proposed method outperforms other baselines in a cooperative game Overcooked. Strengths: The motivation is realistic, and the introduction section is very clear. The proposed solution is solid, with both theoretical analysis and empirical evaluations. Sufficient details are provided for reproducibility. The idea to include both intra-episode belief and inter-episode belief is innovative. Weaknesses: The problem formulation and notations are unclear (see more discussion in Questions 1 and 2). Empirical results seem to suggest that the proposed method only makes minor improvements, and ablation studies imply that $\rho$ has almost no impact (which I believe is one of the innovations of this work). The authors do not thoroughly discuss these results. Instead, they directly conclude that the proposed method is effective. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In line 113, $\sigma$ is defined as “any signal aiding cooperation, such as reward or interaction trajectory”. But since reward and trajectory could be derived from the given task $\tau$ and policy $\pi$, why is there uncertainty? Is the problem setup stochastic, or is the policy stochastic? 2. In Equation (1), the belief is updated via Bayes’ rule, so the task with higher expected utility (if $\sigma$ represents reward) given the current policy will be assigned a higher belief probability. 
Is this under the assumption that the game is cooperative and both human and AI decision-makers are acting to maximize expected utility? If not, why is such belief updating reasonable? 3. The intra-episode belief should be more stable and accurate than the inter-episode belief, and convey different information about human behavior. But from the ablation studies in Section 4.2, $\rho$ seems to make no difference in mean reward; why is this the case? 4. In the simulation results (Figure 4), SP outperforms CBPR in asymmetric advantages, and FCP outperforms CBPR in soup coordination. Could the authors discuss this further? Similar results appear in the human experiment (Figure 6) as well. Detailed discussions about these results should be added to the paper. 5. In the meta-task of Overcooked, if the player is holding an onion, then their only task is to put it in a pot; if holding a pot, the only task is to deliver it. Why is it necessary to identify the task? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: No further limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really thank you for your valuable comments on improving our work. We sincerely hope the reviewer can raise the assessment score of our paper if the following responses have successfully addressed the concerns. > Q1: The uncertainty of σ? The human-AI collaboration problem we address is inherently non-stationary. We utilized a rule-based agent with noise (i.e., τ) and a stochastic policy (π); thus, the reward is uncertain. The signal model P(σ | τ, π), also known as the observation model, represents the distribution of the observed signal σ when the AI performs the meta-task τ with policy π in vanilla BPR. Note that in CBPR, we choose the episodic reward as the signal σ of the signal model. During the offline stage of CBPR, we construct each signal model (i.e., performance model) by fitting a Gaussian distribution given a stochastic AI policy π and a noisy rule-based agent τ (see l124-128 of the main text and l533-534 of the Appendix). To further clarify the definition of σ, we will add an additional explanation of the signal model in the revision. > Q2: The assumption of Equation 1? Why is the belief updating reasonable? Thanks for the question. We would like to clarify that only the AI decision makers are acting to maximize the expected utility, whereas humans may exhibit varying levels of initiative to cooperate in human-AI collaboration. The rationale for updating the belief in Eq. 1 adheres to vanilla BPR, which reuses prior experience from the class of tasks to cope with unknown task variations. Note that following works primarily focused on extending BPR to competitive tasks (lines 93-95 in the original paper). In this work, we first extend BPR to the human-AI collaboration scenario. Human-AI collaboration is essentially an incomplete-information game since human players may exhibit varying levels of initiative to cooperate and AI players do not have complete knowledge of the payoff functions of human players. 
Rather, AI players need to adapt to the non-stationarity of human players while acting to maximize expected utility. > Q3: ρ seems to make no difference in mean reward; why is this the case? We examined the setting of ρ used in the evaluation of our experiments. According to Eq 4, when ρ=0.1 the weight of the inter-episode belief decays to 0.01 after just 2 time steps, while when ρ=0.9 the inter-episode belief decays to nearly 0.01 within 40 time steps. Therefore, for a game with 600 time steps in our evaluation, only the first 40 steps of the integrated belief are influenced by the inter-episode belief, leading to the observation that ρ makes no difference in mean reward. In other words, to demonstrate the effectiveness of ρ, the inter-episode belief should decay to below 0.01 only after a greater number of time steps in an episode. To evaluate this new setting, we set ρ = 0.95 and ρ = 0.99 and conducted a further ablation study of ρ (Fig 3 in the rebuttal PDF). The results indicate that in a simple layout (i.e., Cramped Rm.), since the partner's policy is simple, variations in ρ have little impact on the reward. However, in complex scenarios (i.e., Soup Coord.), adjusting ρ can enhance cooperative performance to a certain extent. > Q4: Results of Figure 4 and Figure 6. We provide a more detailed analysis of our results in Fig 4 as follows and include it in our updated paper: **The inherent advantage of SP and FCP agents.** Partners with low, medium, and high skill levels are represented by checkpoints (essentially SP agents) taken at the initial, middle, and final stages of FCP training. Therefore, SP and FCP agents have an inherent advantage in the evaluation presented in Figure 4. Despite this, CBPR performs better when faced with partners of a lower skill level. When collaborating with real humans (Fig 6), FCP and SP no longer have such an advantage.
This leads to almost all FCP agents and some SP agents performing well against agents of various skill levels (Fig 4), but falling short when facing human players (Fig 6). **The cooperative advantage of CBPR in non-separated layouts.** In separated layouts (i.e., Asymm. Adv. and Soup Coord.), agents can usually complete tasks independently without considering the hindrance of their partner's moves. However, in the non-separated layouts, a player's own position (e.g., standing still in front of the serving areas) can obstruct their partner from completing the task. Therefore, non-separated layouts require more cooperation between players than separated layouts. As shown in Fig 4, CBPR's better performance in Cramped Rm. and Coord. Ring suggests its advantage in collaborative tasks. **The double-edged sword of SP's simple policy.** In Asymm. Adv., the SP agent exhibits outstanding performance when it cooperates with the agent of high skill level (Fig 4c). We replayed the game and found that the SP agent learned the simplest and most effective policy (i.e., in the right room, simply picking an onion from the onion dispenser and placing it in a pot via the shortest path) during training, whereas other agents exhibit some superfluous actions due to their own complexity. However, when SP cooperates with the agent of low skill level, it performs poorly: the SP agent on the right only learned the simplest policy (putting onions in the pot), and when the low-skill agent on the left does not deliver the cooked soup, SP waits in place rather than delivering the cooked soup itself. Thus, cooperating with SP agents results in low performance (Fig 4d). > Q5: Why is it necessary to identify the task? We would like to clarify the process of "identifying" the meta-task in the online stage of CBPR. Identifying the task means calculating the probability of generating a queue Q of human behaviors given each meta-task τ.
During online collaboration, we are unable to directly ascertain humans' cooperative strategies and initiative; however, human behaviors reveal the meta-tasks they are currently undertaking. Therefore, it is necessary to identify the task. --- Rebuttal Comment 1.1: Title: Thank you for the responses Comment: I thank the authors for their detailed responses. The clarifications provided regarding the meta-task (Q1 by reviewer 3rF3) and the experimental results (Q4) have addressed most of my concerns. I would be willing to raise my rating if these discussions are incorporated into the revised manuscript. --- Reply to Comment 1.1.1: Comment: Thank you so much for your reply. We will certainly include the additional experiments and these discussions in the updated version of our manuscript.
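As an aside, the signal-model and belief-update machinery described in the Q1 and Q2 responses above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' code: the meta-task names and Gaussian parameters are invented, and Eq 1 is assumed to be the standard BPR posterior update, β'(τ) ∝ P(σ | τ, π) β(τ).

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of N(mean, std^2) at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))

# Hypothetical performance models, one per meta-task: episodic reward
# sigma ~ N(mean, std^2), fitted offline. Names and numbers are invented.
performance_models = {
    "place_onion_in_pot": (200.0, 30.0),
    "deliver_soup":       (120.0, 25.0),
    "others":             (80.0,  40.0),
}

def bpr_update(belief, observed_reward):
    """One BPR belief update: beta'(tau) proportional to P(sigma | tau) * beta(tau)."""
    posterior = {tau: gaussian_pdf(observed_reward, m, s) * belief[tau]
                 for tau, (m, s) in performance_models.items()}
    z = sum(posterior.values())
    return {tau: p / z for tau, p in posterior.items()}

# Uniform prior; after observing a reward near 200, the belief should
# concentrate on the meta-task whose performance model explains it best.
belief = {tau: 1.0 / len(performance_models) for tau in performance_models}
belief = bpr_update(belief, observed_reward=195.0)
best = max(belief, key=belief.get)
```

The same likelihood machinery underlies the Q5 answer: scoring a queue of observed behaviors under each meta-task's model and renormalizing is what "identifying" the meta-task amounts to.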
Summary: This paper introduces Collaborative Bayesian Policy Reuse (CBPR), a framework that addresses the challenge of collaborating with non-stationary human behavior by adaptively selecting optimal collaborative policies based on the current meta-task. CBPR identifies meta-tasks underlying human decision-making and trains specialized meta-task playing (MTP) agents to enhance collaboration. Evaluations in the Overcooked simulator demonstrate CBPR's superior performance compared to existing baselines. Strengths: Overall, I found this paper interesting. It introduces the CBPR framework that addresses collaborating with non-stationary humans by matching meta-tasks rather than directly modeling complex human dynamics. The presentation of the paper could be improved. For instance, I found Figure 2 confusing; I encourage the authors to simplify the diagram and improve the presentation style. The strengths are: 1. The paper provides strong theoretical proofs and extensive empirical results across various conditions that compellingly validate the effectiveness of CBPR over baseline approaches. 2. The paper presents an approach that differs from the mainstream: it avoids modeling human behavior, instead "focusing on constructing meta-tasks that underpin human decision making". Weaknesses: 1. The effectiveness of the CBPR framework relies heavily on accurately predefined meta-tasks, which might limit its application in environments where meta-tasks are not clearly defined or are too complex to categorize effectively. 2. While the authors extensively evaluate CBPR in the Overcooked environment across various conditions, the paper lacks experiments in other domains beyond gaming. Overcooked, being a relatively simplistic and controlled environment, may not fully capture the complexities and challenges of human-AI collaboration in real-world applications such as autonomous driving, robotics, or complex decision-making systems.
Testing CBPR's performance in more diverse and realistic domains would strengthen the paper's claims and demonstrate its broader applicability. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How does the CBPR framework handle environments where human behaviors and tasks are not only non-stationary but also highly unpredictable or undefined? 2. The authors used 5 different random seeds, but from the experimentation sections it looks like the variance across runs isn't that large; does this mean that the learning process is largely deterministic? 3. What would the performance be like if the episode had a longer horizon? 4. How would the algorithm scale with LLMs or other large models? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes, the authors discussed the limitations and societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 3rF3 for the time and effort invested in reviewing our paper. For Weakness 1, Weakness 2, Q1, and Q3, we have added experiments to support our insights. We apologize for the confusion caused by Figure 2. We have simplified it in the rebuttal PDF to make the framework of CBPR clearer. We provide detailed explanations for the other questions and hope this can address your concerns. > Weakness 1 and Q1: The difficulty of predefining meta-tasks and the unpredictability of human behavior. For many tasks, defining meta-tasks based on human knowledge is not so difficult. For instance, in the context of autonomous driving, proactive lane changing and forward driving can serve as meta-tasks. In robotic scenarios, navigation and robotic-arm grasping can also be considered meta-tasks. To handle leftover tasks not included in the predefined meta-tasks, we introduce an "others" meta-task category (Fig 1, bottom-Left). Undefined and unpredictable tasks can, to some extent, be addressed by the "others" category by learning a general model for them in our current framework. To demonstrate this, we added an ablation study based on Section 4.2 to examine the impact of the number of predefined meta-tasks in the Soup Coord. scenario, in which we defined 4 additional meta-tasks (\textit{place_onion_and_deliver_soup}, \textit{place_tomato_and_deliver_soup}, \textit{pickup_tomato_and_place_mix}, \textit{pickup_ingredient_and_place_mix}). The results show that without the "others" category, the performance deteriorates significantly, while performance degrades relatively gracefully with fewer meta-tasks defined and more of them absorbed into the "others" category.
| | 7 predefined+"others" | 5 predefined+"others" | 3 predefined+"others" (original paper) | 3 predefined w/o "others" |
| :---- | :----: | :----: | :----: | :----: |
| High | 620.3 (193.3) | 600.7 (234.0) | 647.7 (159.3) | 622.8 (205.8) |
| Medium | 757.8 (100.3) | 735.8 (98.7) | 717.1 (148.1) | 607.3 (278.5) |
| Low | 689.8 (43.9) | 680.5 (51.6) | 668.9 (49.0) | 40.0 (59.1) |

In some extremely complex tasks, it may be difficult to define what a meta-task is; this will be a direction of our future research. We will explore approaches to decompose tasks into finer-grained pieces so that definitive meta-tasks emerge and become definable. We will add these discussions to the limitations and future work in our revised manuscript. > Weakness 2: Lacking experiments in other real-world applications. Following your recommendation, we conducted additional experiments using the CARLA autonomous driving simulator. To acquire driving behavior data, we employed human driving models from the SUMO traffic simulator, selecting random origins and destinations for each vehicle to collect driving data. This data was used to train a Behavioral Cloning (BC) agent (Agent 1) to autonomously control the vehicle. For the CBPR agent, we designed two meta-tasks: the "line up" meta-task, developed by selecting vehicle queuing data and training a second BC agent (Agent 2), and the "others" meta-task, utilizing BC Agent 1. We evaluated both the BC and CBPR agents in a 'Queue for a Left Turn' scenario, employing a rule-based simulated human. This simulated human's intervention ensured the vehicle remained in line whenever the AI attempted to change lanes. Initially, as vehicles began to queue, the AI chose to turn right and overtake. At this point, the human would redirect the vehicle to maintain its position in the queue through a left turn.
After a single instance of human intervention, the CBPR agent switched to the "line up" meta-task and continued to wait in the queue, whereas the BC agent repeatedly attempted to overtake by turning right, necessitating continuous human intervention. Detailed results are shown in Figure 4 of the rebuttal material. > Q2: Is the learning process more deterministic? In Appendix D, we provide the training curves of all agents (Fig 9-12) over five random seeds, which indicate the learning process is not that deterministic. The shaded area in Fig 3 indicates the standard error, which is the standard deviation divided by the square root of the number of seeds, representing the randomness across multiple experiments. To make Fig 3 easier to read, we scaled down the errors in the original paper. We have replotted Fig 3 using the standard deviation, as shown in Fig 1 of the rebuttal PDF file. > Q3: CBPR performance in episodes with a longer horizon? Thanks for this interesting question. Following your suggestion, we increased the original horizon setting from 600 to 3000, and then had the CBPR agents cooperate with the policy-switching agents, recording the cooperative rewards under five different frequencies of policy switching. The experimental results are shown in Fig 2 of the rebuttal PDF, which implies that CBPR remains highly efficient in long-horizon tasks. > Q4: CBPR scales with LLMs? Thank you for your question, which has inspired us to undertake future work combining large models with CBPR. We consider CBPR scaling with LLMs or large models in the following two aspects: \textbf{Meta-task identification} \ In the \textit{Overcooked} simulator, meta-tasks are easy to define. However, in environments where meta-tasks are not clearly defined or are too complex to categorize, the context-comprehension ability of large models can be used to deconstruct multiple meta-tasks from behavior data demonstrated by humans.
\textbf{Acceleration of belief $\beta_k(\tau)$ convergence} \ In an environment where two meta-tasks do not have clear boundaries, LLMs or large models can accelerate the convergence of $\beta_k(\tau)$ by adding an additional factor $\lambda$ to the numerator of Eq 1 or Eq 3, where $\lambda$ can be determined by self-reflection [1] of the LLM. [1] Shinn et al. Reflexion: Language Agents with Verbal Reinforcement Learning. 2023. --- Rebuttal Comment 1.1: Title: Clarification of errors in the rebuttal above Comment: We would like to clarify two errors in the above rebuttal: 1. Due to the page limit of the rebuttal PDF file, we removed the simplified version of Figure 2, which is inconsistent with the description in the text above; 2. The meta-task category "others" is shown in Fig 1 bottom-Right of the original paper, rather than bottom-Left as stated in the text above. We sincerely apologize for these errors and hope they have not caused any unnecessary confusion or affected your assessment of our paper.
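A small sketch of the standard-error computation referenced in the Q2 response above (SE = SD/√n over n seeds). The per-seed rewards here are invented for illustration; only the formula is taken from the rebuttal.

```python
import math
import statistics

# Hypothetical episodic rewards over five random seeds (numbers invented).
seed_rewards = [647.0, 612.0, 655.0, 630.0, 641.0]

sd = statistics.stdev(seed_rewards)        # sample standard deviation
se = sd / math.sqrt(len(seed_rewards))     # standard error of the mean

# A band drawn with SE is sqrt(n) times narrower than one drawn with SD,
# which is why replotting with SD visibly widens the shaded area.
```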
Rebuttal 1: Rebuttal: We thank the reviewers for the time and effort invested in reviewing our work. We have provided detailed explanations and clarifications to address your concerns regarding the problem definition, experiments, and discussions. During this stage, we have supplemented the experiments to address reviewers' concerns regarding (1) a clarification of the meta-task; (2) CBPR's application in complex scenarios such as autonomous driving; (3) the performance of CBPR in long-horizon tasks; and (4) the impact of inter-episode beliefs on the experimental results. Specific details can be found in the responses below and in the uploaded PDF file. For other questions, we have provided point-by-point responses and have updated our paper accordingly. We believe that incorporating your feedback has greatly strengthened our paper, and we hope you will agree with our improvements. We again express our sincere gratitude to all the reviewers for their valuable comments on our manuscript. Pdf: /pdf/bb09e1cc6f844c4a82bad2b7f45dd37a46208de8.pdf
NeurIPS_2024_submissions_huggingface
2024
Periodic agent-state based Q-learning for POMDPs
Accept (poster)
Summary: The paper proposes a new RL algorithm for POMDPs. The standard approach of converting the POMDP into a belief MDP is not possible in the RL setting, where there is no knowledge of the system model. The alternative is to use an agent-state, which is a function of the observation history. Standard RL algorithms can be adapted to the agent-state to learn policies for POMDPs. However, these standard algorithms lead to stationary policies, which might not be optimal, as the agent-state may not satisfy the Markov property. As a consequence, the authors propose a special form of nonstationary policies that may be better suited to non-Markovian agent-states. Strengths: 1. The paper has an interesting problem setting and describes well the limitations of stationary policies applied to non-Markovian agent-states. 2. The authors provide an extensive discussion of the related work. Weaknesses: 1. The proposed algorithm leads to a very limited class of policies. Periodic policies might only help because several policies mixed might lead to a better strategy than having only one deterministic policy. Apart from that, is there an inductive bias to repeat the policies? 2. Another weakness is that the paper mainly discusses deterministic policies. It would be more interesting to enlarge the set of policies to stochastic policies. It could be that a single stochastic policy has the same effect as mixing the policies, and therefore the periodic policy has no advantage. 3. The numerical experiments consider only small examples. Technical Quality: 2 Clarity: 2 Questions for Authors: Have you considered other non-stationary approaches? Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors addressed the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. ### **1. Considering other non-stationary approaches** As we mention in the related work section, some other approaches to non-stationarity have been taken in the literature, e.g., continual learning and hierarchical learning, including options. However, for the most part, their theoretical analysis is limited and, when available, is largely restricted to the MDP setting. Given the insights obtained from our current analysis, it may be possible to analyze some of these more general algorithms for POMDPs using the tools developed in our paper. ### **2. The numerical experiments consider only small examples.** The purpose of our numerical experiments is two-fold: first, to show that the convergence results agree with what is predicted by the theory; second, to show that PASQL can outperform ASQL. To make the first point, we need to restrict ourselves to simple models, as presented in the paper. ### **3. Deterministic vs stochastic policies.** Please see our response in the global rebuttal. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers. However, my concern that periodic policies might only help because several policies mixed might lead to a better strategy than having only one deterministic policy still remains. I keep my original score, but I would not object if other reviewers championed the paper. --- Rebuttal 2: Comment: Thank you for your reply. We are not completely sure what you mean by "several policies mixed". Note that we have already shown via the example on line 959 (page 22) that periodic policies can do better than stochastic policies, so we believe that you mean something other than stochastic policies. Another interpretation of "mixed policies" follows the terminology used in game theory. In this setting, the agent has a set of $m$ deterministic policies $(π_1, \dots, π_m)$.
Before the system starts running, the agent picks a random variable $M$ in the set $\lbrace 1,\dots, m\rbrace$ with probability $(p_1, \dots, p_m)$ and then plays the policy $π_M$ forever. This is called a "mixed policy" in game theory and we will denote it by $μ$. Let $J(π_i)$ denote the performance of policy $π_i$. Then, by linearity of expectation $ J(μ) = p_1 J(π_1) + \cdots + p_m J(π_m). $ In particular, this means that $\displaystyle J(μ) \le \max_{i \in \lbrace 1, \dots, m\rbrace} J(π_i). $ Thus, **mixing cannot improve performance.** In the above argument, there was no restriction on the class of policies. So, if we take a policy that mixes between stationary deterministic policies, then it cannot do better than the best stationary deterministic policy. So, a periodic deterministic policy will perform better than mixed policies that mix between stationary deterministic (or even stationary stochastic) policies. --- Rebuttal Comment 2.1: Comment: Thank you for your response. After reconsidering your discussion of deterministic vs. stochastic policies, I have decided to increase the score by one. --- Reply to Comment 2.1.1: Comment: Thank you for the discussion and for raising your score.
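The linearity-of-expectation argument in the rebuttal above can be checked numerically. The policy values and mixing weights below are invented for illustration; the point is only that the mixed value is the weighted sum and therefore can never exceed the best pure policy in the mix.

```python
import random

# Hypothetical values J(pi_i) of three deterministic policies and a
# mixing distribution (p_1, p_2, p_3) over them (numbers invented).
values = [3.0, 7.0, 5.0]
probs = [0.2, 0.5, 0.3]

# Linearity of expectation: the mixed policy's value is the weighted sum.
j_mixed = sum(p * j for p, j in zip(probs, values))

# A Monte Carlo estimate (draw M once, then play pi_M forever) agrees.
random.seed(0)
draws = random.choices(values, weights=probs, k=100_000)
mc_estimate = sum(draws) / len(draws)

# Mixing can never beat the best pure policy in the mix.
assert j_mixed <= max(values)
```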
Summary: This work proposes a type of non-stationary policy for POMDPs that is periodic. The authors argue that typical agent states in partially observable RL do not satisfy the Markov property and illustrate why introducing non-stationarity can improve the optimal policy within the policy class (vs considering only stationary policies), even if this policy cannot achieve global optimality across all history-dependent policies. The claims are supported by several well chosen examples, rigorous theory, and numerical experiments. I'm happy to strongly recommend acceptance. Strengths: **Originality**: To my knowledge, the proposal of periodic policies for POMDPs is novel (though I am unfamiliar with theoretical papers in this area). The authors do a good job of presenting the relevant prior literature and what contributions in the work at hand are novel. Since the work generalizes stationary policies (using a period of $L=1$), I appreciated the remarks on how previous theoretical results can be obtained using the new theorems in this work. **Quality**: The approach is rigorously supported by theory, and some numerical experiments. While I did not verify all the proofs, the theorems seem correct. Throughout the paper, the discussion presented by the authors was extremely insightful. **Clarity**: The paper is well written and the ideas are clearly communicated. While the writing can be technically dense, this should be expected of a work of this nature. I particularly commend the authors for the example in Figure 1, since it immediately conveys a number of important features of this work in a concise manner. **Significance**: While it is not surprising that performance in POMDPs can be improved by leveraging non-stationarity (in the absence of belief states), it can be challenging to implement non-stationarity due to the infinite parameter requirement the authors highlight. 
This work is an interesting proposal of how this can be done in principle with only finite parameters. While it may still not necessarily be practical, I think the ideas are nonetheless valuable. Weaknesses: **Originality**: No weaknesses that are not already apparent. **Quality**: My only major concern is with the numerical experiments. First, the agent models used are rather weak (the agent only considers the current observation when deciding an action). Perhaps some stronger choices that would be more convincing are frame stacking or an approximation of the belief state with added noise. The following is a list of limitations. These are mostly stated clearly in the paper and I don't think they constitute reasons to reject. - Choosing an appropriate period $L$ is difficult. While it is shown that choosing $L = n!$ monotonically improves, large values of $L$ have large memory requirements. - For the above reason, I'm not sure this particular algorithm will see practical use. - The choice of behavioural policy is critical. - The framing of the problem is limited to deterministic Q-learning. It does not consider stochastic or policy gradient methods. **Clarity**: A couple of minor issues: - In the introductory example, explicitly writing the values of $J^*_{BD}$, $J^*_{SD}$ under $\gamma = 0.9$ would aid in the comparison with $J^*_L$. - In the numerical examples, is there an explanation for why $u_1$ is a better choice than $u_2$ and $u_3$? **Significance**: See prior comments on practicality. Technical Quality: 4 Clarity: 4 Questions for Authors: (As mentioned above:) 1. Could you please comment on the relevance of the insights towards larger POMDPs, like those common in deep RL? 2. In the numerical examples, is there an explanation for why $u_1$ is a better choice than $u_2$ and $u_3$? 3. I can see why most agent states that aren't the belief state will not satisfy the Markov property. Could you elaborate on some other cases to help clarify this (in the paper)? 
For example, when the agent state is uninformative (e.g. constant), or when the agent state is an RNN encoding of the history, do these satisfy the Markov property? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and the positive endorsement. We address your questions and concerns below. ### **1. Could you please comment on the relevance of the insights towards larger POMDPs, like those common in deep RL?** The current state-of-the-art $Q$-learning based deep RL algorithms for POMDPs always converge to a stationary policy. Our main insight is that non-stationary deterministic policies perform better than stationary deterministic and stochastic policies. As far as we are aware, none of the current deep RL algorithms for POMDPs exploit this insight. In this paper, we present a simple way to exploit this insight via periodic policies. We agree with your point that this introduces additional memory requirements, but as we argue in the global response, the additional memory requirements are not too severe. At the same time, for a practical implementation of PASQL, one would leverage the insights of deep RL algorithms, especially when it comes to the choice of behavioral policy. The convergence of both ASQL (which is an idealized version of practical Q-learning based RL algorithms for POMDPs) and PASQL depends on the behavioral policy. However, practical algorithms such as R2D2 and Dreamer use distributed actors, which help with exploration. Similar ideas can be used in a practical implementation of PASQL. ### **2. In the numerical examples, is there an explanation for why $u_1$ is a better choice than $u_2$ and $u_3$ ?** It is difficult to provide an intuitive explanation for why policy $\mu_1$ is a better choice than $\mu_2$ or $\mu_3$. The main reason is that under policy $\mu_1$, the agent executes actions with positive reward (the blue and the green arcs in Fig. 2) more often. The exact reason for this is hard to explain verbally, but we can "see" it easily via a Monte Carlo simulation. In general, it is difficult to provide a good intuition behind what constitutes a good behavioral policy in POMDPs (this is also true for ASQL).
We did perform some numerical experiments on small models where we computed the performance of all behavioral policies (by discretizing the behavioral policies, computing the converged policy $\pi_{\mu}$, and evaluating its performance via Monte Carlo). We found that in several models, there are behavioral policies that lead to a converged policy $\pi_{\mu}$ that is optimal, but they were not always intuitively obvious choices. ### **3. I can see why most agent states that aren't the belief state will not satisfy the Markov property. Could you elaborate on some other cases to help clarify this (in the paper)? For example, when the agent state is uninformative (e.g. constant), or when the agent state is an RNN encoding of the history, do these satisfy the Markov property?** To address this, we will adapt the paragraph starting on line 42 as follows: When the agent state is an information state, i.e., satisfies the Markov property, i.e., $\mathbb{P}(z_{t+1} | z_{1:t}, a_{1:t}) = \mathbb{P}(z_{t+1} | z_t, a_t)$ and is sufficient for reward prediction, i.e., $\mathbb{E}[R_t | y_{1:t}, a_{1:t}] = \mathbb{E}[R_t | z_t, a_t]$, where $R_t$ is the per-step reward, $a_t$ is the action and $y_t$ is the observation, the optimal agent-state based policy can be obtained via a dynamic program (DP) [Sub+22]. An example of such an agent state is the belief state. But, in general, the agent state is not an information state. For example, frame stacking and RNNs do not satisfy the Markov property, in general. It is also possible to have agent-states that satisfy the Markov property but are not sufficient for reward prediction (e.g., when the agent state is always a constant). In all such settings, the best agent-state policy cannot be obtained via a DP. Nonetheless, there has been considerable interest in using RL to find a good agent-state based policy ... ### **4. My only major concern is with the numerical experiments.
First, the agent models used are rather weak (the agent only considers the current observation when deciding an action). Perhaps some stronger choices that would be more convincing are frame stacking or an approximation of the belief state with added noise.** The purpose of our numerical experiments is two-fold. First, to show that the convergence results do agree with what is predicted by the theory. Second, to show that PASQL can outperform ASQL. To make the first point, we need to restrict ourselves to simple models, as we have presented in the paper. In principle, we could have used more sophisticated agents (e.g., by using frame stacking), but we did not do so in the paper because it is more difficult to visualize the theoretical results in that case. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed response to my questions. After reading all of the other reviews I continue to endorse this paper. As a sidebar, I don't necessarily think the method proposed here is practical but that's beside the point. The paper introduces insightful and interesting new ideas that fly under the radar in the existing deep RL literature and that alone is significant.
Summary: This paper presents the problem of learning a policy in a partially observable environment in a model-free way. The authors address this problem by proposing to learn a non-stationary periodic policy that can outperform stationary ones. This aspect is motivated by the fact that the policy can be constructed using either the last observation or a fixed set of last observations as a state, which in general does not satisfy the Markov property. An approach based on learning periodic policies using Q-learning is presented, and convergence guarantees for this type of policy are given under some assumptions. Finally, a numerical experiment validates the claims and the benefits of the devised approach. Strengths: The writing of the paper is clear. The authors consider the approach of learning periodic policies and are able to extend the convergence guarantees that hold in the stationary case to periodic policies. Weaknesses: I will highlight here some weaknesses of the presented approach. The authors claim in lines 71-73 that "a non-stationary deterministic agent-state based policy performs better than stationary deterministic agent-state based policies". This claim is supported only by the results of Example 2 and thus lacks a more formal characterization. As a second note, if we formally consider the set of non-stationary policies as containing the set of stationary ones, the claim is trivially true, since the set of non-stationary policies is more expressive and thus more powerful than the second set. Theorem 1 addresses the main challenge of the convergence analysis of PASQL, namely the non-Markovianity of the agent state. However, the same challenge is already faced, under stationary policies (for ASQL), in [1]. Since the same result was also proved in [1], this reduces the contribution of the presented statement, which seems to extend it to the set of periodic policies.
It is not clear what, in general, the benefits of using periodic policies are and, even though the authors show some examples where they are effective, it is in general not easy to choose the right value of the period $L$. Nonetheless, as stated by the authors, an increase in $L$ does not necessarily reduce the sub-optimality of the policy. Its increase, however, leads to an increase in the complexity of finding a good estimate of the Q-function for each $l \in L$. Furthermore, no indication is given on the choice of the behavioral policies that lead to convergence to good policies. [1] Erfan SeyedSalehi, Nima Akbarzadeh, Amit Sinha, and Aditya Mahajan. "Approximate information state based convergence analysis of recurrent Q-learning". In: European Conference on Reinforcement Learning. 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: Point (ii) in line 80 is not clear to me. In particular, what does it mean that "the set of all periodic deterministic agent-state based policies of period L is not monotonically increasing"? It is also not clear to me why periodic policies with period specified as in line 83 have monotonically increasing performance. It is not clear why the authors state that the main challenge in this setting is the absence of the Markov property of the agent state $[Z_t]_{t\ge 1}$; however, Lemma 1 defines the process (including the agent state $\{Z_t\}$) as a Markov chain. Another question is: how is Lemma 1 derived? Is it an assumption? The authors claim that previous results hold under a restrictive assumption on the learning rates, but they do not state what these assumptions are. Since this aspect is claimed as an improvement over previous work, it would be useful to describe the previous assumptions. The convergence guarantees of PASQL hold under the assumption that the Markov chain is periodic. How difficult is this assumption in practice?
Would the period of the optimal policy in this case coincide with the periodicity of the chain? Furthermore, in Section 5, the authors claim that, differently from previous results which assume the irreducibility and aperiodicity of the Markov chain, the current work assumes that the chain is periodic, which is "a weaker assumption". I do not see why this assumption is weaker, since it appears that the irreducibility and aperiodicity assumption should also hold for each $l \in L$, as in Assumption 3 in the appendix. Could the authors clarify this aspect? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
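The reviewer's question about the "weaker assumption" can be illustrated numerically. The 2-state matrices below are our own toy example, not taken from the paper: each per-phase kernel may itself be periodic, yet the cyclic products are irreducible and aperiodic.

```python
import numpy as np

# Toy 2-state kernels (our own illustrative example, not from the paper):
# P0 is a deterministic swap, hence irreducible but periodic with period 2.
P0 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
P1 = np.array([[0.5, 0.5],
               [0.5, 0.5]])

def irreducible_aperiodic(P, max_power=16):
    """A finite stochastic matrix is irreducible and aperiodic iff some
    power of it is entrywise positive (primitivity)."""
    Q = np.eye(len(P))
    for _ in range(max_power):
        Q = Q @ P
        if (Q > 0).all():
            return True
    return False

print(irreducible_aperiodic(P0))       # False: P0 is periodic
print(irreducible_aperiodic(P0 @ P1))  # True
print(irreducible_aperiodic(P1 @ P0))  # True
```

So requiring the cyclic products to be irreducible and aperiodic does not require each per-phase kernel to be, which is one sense in which such an assumption can be weaker.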
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We address your questions and concerns. ### **1. Monotonically increasing performance of periodic policies** We illustrate this via an example. Let $Π_L$ denote the set of all agent-state based policies with period $L$. Consider a policy $(π_1, π_2, π_1, π_2, \dots) \in Π_2$, where $π_1 \neq π_2$. This policy does not belong to $Π_3$ because the policy at time 1 ($π_1$) is not the same as the policy at time 4 ($π_2$). However, this policy does belong to $Π_4$ because any sequence that is periodic with period 2 is also periodic with period 4. Since $Π_2 \not\subset Π_3$, the set of periodic policies is not monotonically increasing. However, if we take any integer sequence $\lbrace L_n \rbrace_{n\ge 1}$ that has the property that $L_n$ divides $L_{n+1}$, then the performance of the policies with period $L_n$ is monotonically increasing in $n$. The sequence $L_n = n!$ mentioned on line 83 is just one such sequence. We will adapt the discussion around line 83 to make this clear. ### **2. Non-Markov agent state $Z_t$** The standard analysis of Q-learning for MDPs relies on two facts: (i) the agent-state (which is just the state in MDPs) is a controlled Markov process and (ii) the Q-learning iteration is a stochastic approximation of the Bellman update. The first fact is needed in order to define a Bellman operator. In POMDPs, if the agent state is not a controlled Markov process, we cannot define a Bellman operator and therefore need a different technique to analyze the convergence of Q-learning. Lemma 1 is different from the fact that agent state is a controlled Markov process. The controlled Markov property states that $\def\PR{\mathbb{P}}\PR(z_{t+1}|z_{1:t},a_{1:t})=\PR(z_{t+1}|z_t,a_t)$. 
while Lemma 1 states that $\mathbb{P}(s_{t+1},z_{t+1},y_{t+1},a_{t+1}|s_{1:t},z_{1:t},y_{1:t},a_{1:t})=\mathbb{P}(s_{t+1},z_{t+1},y_{t+1},a_{t+1}|s_t,z_t,y_t,a_t)$. These two are different, and the latter does not imply the former because functions of Markov chains are not necessarily Markov. **Proof of Lemma 1**: Lemma 1 is an immediate consequence of the law of total probability, which implies that $\mathbb{P}(s_{t+1},z_{t+1},y_{t+1},a_{t+1}|s_{1:t},z_{1:t},y_{1:t},a_{1:t})=\mathbb{P}(a_{t+1}|s_{1:t+1},z_{1:t+1},y_{1:t+1},a_{1:t})\,\mathbb{P}(z_{t+1}|s_{1:t+1},z_{1:t},y_{1:t+1},a_{1:t})\,\mathbb{P}(s_{t+1},y_{t+1}|s_{1:t}, z_{1:t},y_{1:t},a_{1:t})$ $=μ(a_{t+1}|z_{t+1})\, \mathbb{1}\lbrace z_{t+1} = \phi(z_t, y_{t+1}, a_t) \rbrace\, \mathbb{P}(s_{t+1},y_{t+1}| s_t, a_t)$ $=\mathbb{P}(s_{t+1},z_{t+1},y_{t+1},a_{t+1} | s_{t}, z_{t}, y_{t}, a_{t})$. ### **3. Improvement over previous learning rates** We compare our assumptions on learning rates with those in the literature in App. E. To reiterate, the learning rates considered previously were of the form $\displaystyle α_t(z, a) = \frac{\mathbb{1}\lbrace Z_t = z, A_t = a\rbrace}{1+\sum_{n=0}^t \mathbb{1}\lbrace Z_n = z, A_n = a\rbrace},$ whereas Assumption 1 is weaker and only requires $\sum_{t \ge 1} α^\ell_t(z,a) = \infty$ and $\sum_{t \ge 1} (α^{\ell}_t(z,a))^2 < \infty$. ### **4. Periodic Markov chains and optimal policies** It is relatively easy to ensure that the Markov chain $\def\M{\lbrace S_t,Y_t,Z_t,A_t\rbrace_{t \ge 1}}\M$ is periodic, simply by choosing a behavior policy that is periodic. The periodicity of the converged policy $π_μ$ is the same as the periodicity of the MC $\M$. However, this is not related to the periodicity of the optimal policy. In fact, as shown in the example in the introduction, the optimal policy in general need not be periodic. ### **5. Weaker assumption of irreducibility and aperiodicity in periodic MC** Let $P_t$ denote the transition probability of the Markov chain $\M$. 
The previous results assume that $P_t$ is time-homogeneous, say equal to $P$, and that $P$ is irreducible and aperiodic. We assume that $P_t$ is time-periodic, say equal to $(P^0, \dots, P^{L-1})$, and that the cyclic products $P^0 P^1 \cdots P^{L-1}$, $P^1 \cdots P^{L-1} P^0$, etc. are irreducible and aperiodic. Note that we are not assuming that $P^0, \dots, P^{L-1}$ are individually irreducible and aperiodic. In fact, by construction, these matrices are periodic. When $L=1$, the two assumptions coincide. For $L \neq 1$, if the previous assumption holds, then so does ours, but our assumption does not imply the previous one. In that sense, our assumption is mathematically weaker. ### **6. Comparison with [Sey+23]** We strongly disagree with this characterization. The key point that we emphasize in the paper is that non-stationary policies can perform better than stationary policies in POMDPs with an agent state. Previous papers that analyze the convergence of Q-learning for POMDPs, such as [Sey+23], do so under very restrictive assumptions which do not hold in the non-stationary setting. As mentioned in point 3 above, they impose a strong assumption on the learning rates. More importantly, [Sey+23] assume that the Markov chain $\lbrace S_t,Y_t,Z_t,A_t\rbrace_{t \ge 1}$ converges to a stationary limiting distribution. Under this assumption, Q-learning would converge to a stationary limit and therefore the greedy policy would be stationary. Removing this assumption is technically non-trivial, as there is no existing literature on time-periodic Markov chains (the existing literature is on "state-periodic" but time-homogeneous Markov chains). Please see App. B.2, in particular Prop. 2 and 3, which are both new. These results enable us to generalize stochastic approximation results to time-periodic Markov chains (Prop. 4), which is then a key tool used in the convergence analysis presented in this paper. None of these results follow from the analysis presented in [Sey+23]. ### **7. 
Claim regarding non-stationarity is confusing** Please see the official comment below. --- Rebuttal 2: Title: Regarding the comment on the claim "a non-stationary deterministic agent-state based policy performs better than stationary deterministic agent-state based policies". Comment: In general, the set of non-stationary policies is a superset of the set of stationary policies. So, one would expect the performance of the best non-stationary policy to be greater than or equal to the performance of the best stationary policy. We are interested in whether this relationship is strict or not. In MDPs, the relationship is an equality: it is well known that stationary policies perform as well as non-stationary ones. The same is true for POMDPs when using a belief state. All RL algorithms for MDPs leverage this fact and only learn stationary policies. When these algorithms are adapted to POMDPs, one typically replaces the "state of an MDP" with an "agent state for a POMDP". Our main point is that such a naive replacement is not sufficient and more drastic structural changes in the algorithms may be needed, because, in general, non-stationary policies can strictly outperform stationary policies in POMDPs. We illustrate this by providing examples that disprove the statement "stationary policies have the same performance as non-stationary policies". --- Rebuttal Comment 2.1: Comment: I thank the authors for the detailed answers and for clarifying my doubts. I will raise my score. --- Reply to Comment 2.1.1: Comment: Thank you for your reply and for raising the score!
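The strict gap discussed in this thread can be reproduced on a minimal toy problem. The construction below is our own, not the paper's Example 2: the hidden state alternates deterministically, the observation is uninformative (so the agent state is trivial), and reward is earned by matching the action to the hidden state.

```python
# Toy POMDP (our own construction): s_{t+1} = 1 - s_t with s_0 = 0, the
# observation carries no information, and r_t = 1{a_t == s_t}.  Any
# stationary deterministic agent-state policy earns average reward 1/2,
# while the period-2 policy a_t = t mod 2 stays synchronised with the
# hidden state and earns 1.

def avg_reward(policy, horizon=1000):
    """policy(t) -> action; average reward over `horizon` steps."""
    s, total = 0, 0
    for t in range(horizon):
        total += 1 if policy(t) == s else 0
        s = 1 - s  # hidden state alternates regardless of the action
    return total / horizon

stationary_0 = lambda t: 0      # always play 0
stationary_1 = lambda t: 1      # always play 1
periodic_2   = lambda t: t % 2  # non-stationary policy of period 2

print(avg_reward(stationary_0))  # 0.5
print(avg_reward(stationary_1))  # 0.5
print(avg_reward(periodic_2))    # 1.0
```

Here the inclusion of stationary policies in non-stationary ones is strict in performance: the best stationary deterministic policy achieves 1/2 while a periodic policy achieves 1.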
Summary: The paper introduces PASQL (Periodic Agent-State Based Q-Learning), a novel reinforcement learning approach tailored for Partially Observable Markov Decision Processes (POMDPs). Traditional methods often rely on transforming POMDPs into fully observable MDPs by employing belief states. However, belief states require knowledge of the system model, limiting their practicality in model-free settings like RL. PASQL addresses this by learning non-stationary, periodic policies using agent states (compressed representations of history), which need not satisfy the Markov property. This method leverages the theory of periodic Markov chains and stochastic approximation to prove that PASQL converges to a cyclic limit and can outperform traditional stationary policies in terms of approximation error. Strengths: 1. PASQL creatively combines concepts from periodic Markov processes and Q-learning to handle non-Markovian dynamics, a significant departure from traditional RL methods. 2. The paper provides a rigorous theoretical framework, including convergence proofs and bounds on sub-optimality, which substantiates the claims made about PASQL's efficacy. 3. Through numerical experiments, the paper effectively demonstrates that PASQL can outperform standard Q-learning in POMDP settings, offering practical relevance to the theoretical claims. 4. The paper offers a comprehensive comparison with existing methods, highlighting PASQL's advantages in handling agent states that do not satisfy the Markov property. Weaknesses: 1. The introduction of periodic policies increases complexity and may require more computational resources, which could limit practical applications. 2. The current study focuses on tabular representations, which might not scale well to high-dimensional problems or those requiring function approximation. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does PASQL scale with the dimensionality of the state and action spaces in practical scenarios? 2. 
On what basis were the specific periods chosen for the policies in the experiments, and how sensitive is PASQL's performance to these choices? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive endorsement. We address your questions and concerns. ### **1. How does PASQL scale with the dimensionality of the state and action spaces in practical scenarios?** In the paper, we have focused on the tabular setting to analyze the simplest form of the algorithm. In a practical scenario, one would use function approximation with deep neural networks. Current state-of-the-art ASQL implementations handle the agent state in two ways: R2D2 uses LSTMs, whose recurrent layer stores an "internal state" that acts as a form of memory of the history, while Dreamer uses recurrent state-space models (RSSMs), which consist of a self-predictive model that can predict forward in a latent space, similar to a sequential variational autoencoder. So, the natural approach to implement PASQL is to use a similar architecture, with a shared RSSM or LSTM base and $L$ different heads for the different $Q^{\ell}$. The scaling with respect to state and action spaces will remain similar to that of ASQL, with a worst-case additional scaling factor of $L$. There are two ways in which we can think about scaling in this setting: memory and sample complexity. For memory, the RSSM/LSTM part would remain the same as in ASQL, but we would need a larger replay buffer, because a sample in the replay buffer is now associated with a particular value $\ell$. For sample complexity, in the worst case, we would require $L$ times more samples, but in practice, the scaling is much better because each $Q^{\ell}$ is bootstrapped from $Q^{[[\ell + 1]]}$, which leads to improved sample efficiency. We can see this in the experiments included in the paper, where PASQL (Fig 3) and ASQL (Fig 5) have roughly the same sample complexity. ### **2. 
On what basis were the specific periods chosen for the policies in the experiments, and how sensitive is PASQL's performance to these choices?** In general, one can view the period $L$ as a hyper-parameter. In our experiments, we only experimented with $L=2$. The main point we wanted to make was that PASQL can give better performance than ASQL, and the experiment with $L=2$ demonstrates that; so we did not experiment with larger values of $L$. It is difficult to give a precise characterization of the sensitivity of PASQL to the choice of $L$, mainly because the performance depends on the choice of behavioral policy and there is no obvious way to "lift" a behavioral policy for period $L$ to a behavioral policy for a larger period. In theory, as we argue on line 83, the performance of periodic policies with period $L \in \lbrace n! : n \in \mathbb{N} \rbrace$ increases monotonically. In fact, if we take any integer sequence $\lbrace L_n\rbrace_{n\ge1}$ with the property that $L_n$ divides $L_{n+1}$, then the performance of the policies with period $L_n$ is monotonically increasing in $n$. We will adapt the discussion around line 83 to clarify this point.
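The update described in the rebuttals, with $L$ Q-tables where $Q^{\ell}$ is bootstrapped from $Q^{(\ell+1) \bmod L}$, can be sketched in a few lines. The code below is our own illustrative tabular sketch, not the authors' implementation; it drives the update on a toy problem where the hidden state flips every step and the reward is $1\{a = s\}$, so the greedy policy that emerges is periodic.

```python
import random

# Minimal tabular PASQL-style sketch (our own illustrative code):
#   Q^l(z,a) += alpha * (r + gamma * max_a' Q^{(l+1) mod L}(z',a') - Q^l(z,a))
L, GAMMA, ALPHA = 2, 0.9, 0.1
Z, A = [0], [0, 1]                       # trivial agent state, two actions
Q = [{(z, a): 0.0 for z in Z for a in A} for _ in range(L)]

def pasql_step(Q, l, z, a, r, z_next):
    l_next = (l + 1) % L                 # bootstrap from the next phase's table
    target = r + GAMMA * max(Q[l_next][(z_next, a2)] for a2 in A)
    Q[l][(z, a)] += ALPHA * (target - Q[l][(z, a)])

# Toy dynamics: hidden state flips every step (s_0 = 0), r = 1{a == s},
# the agent state stays 0, and the behaviour policy is uniform.
random.seed(0)
s = 0
for t in range(20000):
    l = t % L
    a = random.choice(A)
    r = 1.0 if a == s else 0.0
    s = 1 - s
    pasql_step(Q, l, 0, a, r, 0)

# Phase l is synchronised with hidden state s = l here, so the converged
# greedy policy is periodic: head l prefers action l.
greedy = [max(A, key=lambda a: Q[l][(0, a)]) for l in range(L)]
print(greedy)  # [0, 1]
```

The phase-indexed bootstrapping is what lets a single pass of Q-learning produce a non-stationary (periodic) greedy policy.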
Rebuttal 1: Rebuttal: We thank the reviewers for the comments and feedback. We address some of the common issues raised by the reviewers below. ### **1. Practical implementation of the algorithm** We want to clarify certain points regarding the practical implementation of the algorithm. - In terms of implementation complexity, the deep RL version of the algorithm can be implemented with a shared LSTM/RSSM base (similar to R2D2 [Kap+19] / Dreamer [Haf+20]) and multiple heads, one for each $\ell \in \lbrace 0, \dots, L-1 \rbrace$. So, the additional memory overhead of implementing the algorithm is not too restrictive. - In terms of sample complexity, in the worst case the sample complexity will increase by a factor of $L$, but in practice the increase will not be so large because the function $Q^{\ell}$ is bootstrapped from $Q^{[[\ell+1]]}$. This is indeed the case in our current experiments, where the sample complexity for $L=2$ (Fig 3) is roughly the same as for $L=1$ (Fig 5). - In terms of performance, as we mention in the introduction, PASQL with period $m$ will perform at least as well as PASQL with period $n$ if $m$ is a multiple of $n$. Therefore, PASQL with any $L > 1$ will perform at least as well as ASQL (which corresponds to $L=1$). Hence, one can think of $L$ as a hyper-parameter which is tuned with other hyper-parameters. ### **2. Deterministic vs Stochastic policies** The main thesis of the paper is that non-stationary policies perform better than stationary policies in POMDPs with agent state. This is true for both deterministic and stochastic policies. For instance, as we point out on line 959 (page 22), non-stationary deterministic policies (and therefore non-stationary stochastic policies) can in general perform better than stationary stochastic policies. Thus, adding non-stationarity is beneficial for both deterministic and stochastic policies. 
In this paper, we propose and analyze a variant of Q-learning that converges to a non-stationary policy. The reason we focus on Q-learning is that the current state-of-the-art RL algorithms for POMDPs, such as R2D2 and Dreamer, are variants of Q-learning. In principle, the same idea would work for actor-critic algorithms as well. In fact, our results provide a strong motivation to investigate periodic variants of actor-critic algorithms for POMDPs. ### **3. Choice of the behavioral policy** The converged value of any agent-state based Q-learning (ASQL) algorithm depends on the behavioral policy. This is in contrast to MDPs, where, at least in theory, any behavioral policy works as long as all state-action pairs are visited infinitely often. Even then, the behavior policy does impact the sample complexity of Q-learning, and there is a rich literature on choosing the behavior/exploration policy. Such an understanding does not exist for ASQL. Since our algorithm generalizes ASQL to learn non-stationary policies, it shares the same limitations as every Q-learning algorithm for POMDPs.
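The learning-rate comparison from point 3 of the first rebuttal can be made concrete. The helpers below are our own illustrative sketch (names are ours): the visit-count schedule used in prior work, and a polynomial schedule that satisfies the weaker summability conditions $\sum_t \alpha_t = \infty$, $\sum_t \alpha_t^2 < \infty$ without being of the visit-count form.

```python
from collections import defaultdict

visit_counts = defaultdict(int)

def visit_count_rate(z, a):
    # The restrictive schedule from prior work:
    # alpha_t(z, a) = 1 / (1 + N_t(z, a)) on steps where (z, a) is visited,
    # where N_t counts the visits to (z, a) up to and including time t.
    visit_counts[(z, a)] += 1
    return 1.0 / (1 + visit_counts[(z, a)])

def polynomial_rate(t, p=0.75):
    # Any alpha_t = 1 / t^p with 1/2 < p <= 1 also satisfies the
    # Robbins-Monro-type summability conditions, so it is admissible
    # under the weaker assumption but not of the visit-count form above.
    return 1.0 / t ** p

print(visit_count_rate("z0", "a0"))  # 0.5 on the first visit
print(polynomial_rate(16))           # 0.125
```

Both schedules decay to zero, but the weaker assumption admits many schedules (and per-phase rates $\alpha^{\ell}_t$) that the visit-count form excludes.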
NeurIPS_2024_submissions_huggingface
2024
SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation
Accept (poster)
Summary: This paper presents a new framework for zero-shot object navigation. Unlike previous methods that only provide objects in close proximity, this paper constructs a scene graph that captures the relationships between objects, groups, and rooms. This scene graph allows for a more comprehensive understanding of the environment by the navigation system, enabling it to make decisions based on a wider context rather than immediate surroundings. This method significantly improves the robustness and versatility of the navigation system, making it more effective in a variety of settings. Strengths: 1. The paper tries to solve the problem of zero-shot navigation with an LLM and a scene graph, and experimental results show that this combination does improve navigation performance. 2. The way the scene graph is updated is interesting. The authors combine the capabilities of LLM and VLM to create connections between nodes and reduce irrelevant edges to make the graph more precise. 3. The experimental results show that there is no significant difference between the results of LLaMA-7B and GPT-4, indicating that the method can work with small LLMs, which is friendly to the demand for computational resources. Weaknesses: 1. The use of scene graphs is not very new in the field of navigation. I think it is important to discuss the use of scene graphs in navigation in the related work. 2. I think the authors should take some LLM-based object navigation methods as baselines in the experiments to compare with the proposed method. The authors mention in the abstract that previous LLM-based navigation methods prompt the LLM with the text of spatially close objects. Yet no comparison is made in the experimental section, which hardly demonstrates the superiority of the scene context. For example, "L3MVN: Leveraging Large Language Models for Visual Target Navigation" shows a comparable success rate to the proposed method on the HM3D benchmark, and is also mentioned in Line 77. 3. 
The way the LLM utilizes the scene graph is still through text prompts, which makes me think the authors have somewhat oversold it as an "LLM-friendly structure". I think the usage of the scene graph here is just a prompt design and does not reflect the hierarchy of the scene graph very well. I consider this reference to be the basis on which I made this judgment: "SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning". Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Line 198, I think a hint that $a$ is an action would be clearer. In addition, why is it said that direct prediction of actions by the LLM is difficult? Are there any references or experiments to prove this? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors discuss limitations very well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Discussion of scene-graph-based navigation** We thank the reviewer for the constructive advice. We will add more literature on scene-graph-based navigation in the final version of the paper. 3D scene graphs are widely utilized in various embodied tasks, such as grounding [1] and task planning [2]. In the field of navigation, we find [3,4] most relevant to our work, as they focus on graph-based ObjectNav. StructNav [3] constructs a scene graph to represent the observed scene. It adopts BERT to predict the similarity between each object node and the goal. The scene graph is then used to propagate semantics to the geometric frontiers for action planning. Here the edges of the scene graph only represent connectivity, rather than relationships between objects. VoroNav [4] constructs a Voronoi graph, where nodes represent regions and edges represent whether two regions are connected. VoroNav directly prompts the LLM with the object categories contained in each node to score regions for navigation. The graph is only used for determining the explorable area. These two methods do not make use of the relationships between objects. On the contrary, SG-Nav constructs a hierarchical 3D scene graph with rich node and edge information. We also propose a Chain-of-Thought Prompting method to exploit the nodes and edges contained in the scene graph with the LLM. In this way, the LLM can understand the hierarchical and structural information in the 3D scene graph for better reasoning. [1] Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs, CoRL 2023 [2] SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning, CoRL 2023 [3] How To Not Train Your Dragon: Training-free Embodied Object Goal Navigation with Semantic Frontiers, RSS 2023 [4] VoroNav: Voronoi-based Zero-shot Object Navigation with Large Language Model, ICML 2024 **2. 
Comparison with more LLM-based navigation methods** We have compared against ESC, an LLM-based ObjectNav method, in our experiments. Following the reviewers' advice, we further compare with more recent LLM-based methods, including L3MVN, VoroNav and OpenFMNav, on MP3D, HM3D and RoboTHOR. We uniformly set LLaMA-7B as the LLM for a fair comparison. The results are shown in the table below. SG-Nav achieves state-of-the-art performance on the challenging MP3D and RoboTHOR benchmarks. On HM3D, SG-Nav also shows comparable performance with the most recent methods. We observe SG-Nav works better on MP3D and RoboTHOR than on HM3D. This is because HM3D is the relatively simplest benchmark among the three, where SG-Nav's ability to reason and plan in complex and large scenes cannot be fully utilized. Considering the overall performance on the three benchmarks, SG-Nav shows a leading advantage, especially in complicated and challenging scenarios.

| **Method** | **MP3D SR** | **MP3D SPL** | **HM3D SR** | **HM3D SPL** | **RoboTHOR SR** | **RoboTHOR SPL** |
| -- | :--: | :--: | :--: | :--: | :--: | :--: |
| L3MVN | 34.9 | 14.5 | 48.7 | 23.0 | 41.2 | 22.5 |
| VoroNav | 31.2 | 14.2 | 42.0 | **26.0** | 38.4 | 22.2 |
| OpenFMNav | 37.2 | 15.7 | **52.5** | 24.1 | 44.1 | 23.3 |
| **SG-Nav** | **40.2** | **16.0** | 49.1 | 24.3 | **47.5** | **24.0** |

[1] L3MVN: Leveraging Large Language Models for Visual Target Navigation, IROS 2023 [2] VoroNav: Voronoi-based Zero-shot Object Navigation with Large Language Model, ICML 2024 [3] OpenFMNav: Towards Open-Set Zero-Shot Object Navigation via Vision-Language Foundation Models, NAACL 2024 **3. The usage of hierarchy in SG-Nav** We have considered the hierarchical graph structure in our Chain-of-Thought Prompting. Although we prompt the LLM with text only, the LLM is required to exploit the graph through nodes and their connected edges. 
For example, the prompt “Given [nodes] and [edges] of subgraph, answer above [questions].” makes the reasoning of the LLM aware of the graph structure. The ablation study in Table 4 of our paper also shows that there is a performance drop if we remove all edges and keep only nodes. This means our method really makes use of the graph structure. We further conduct an ablation experiment by removing the edges between nodes at different levels. We observe the performance on MP3D changes from **40.2/16.0** to **38.2/15.5** (SR/SPL), which shows that our CoT-Prompting effectively utilizes the hierarchical information of the 3D scene graph. We agree with the reviewer that how to better exploit the structural information contained in a 3D scene graph with an LLM is a very promising direction. The current prompting strategy could be further improved (e.g., by designing a more graph-related workflow for the LLM), which we leave for future work.

| **Method** | **SR** | **SPL** |
| -- | :--: | :--: |
| Removing Inter-level Edges | 38.2 | 15.5 |
| **Raw** | **40.2** | **16.0** |

**4. Explanation of $a$ in Line 198** We have defined $a$ in L105: it denotes the action of the agent. It is very difficult to directly predict actions for navigation. In most state-of-the-art object navigation methods [1,2,3], the prediction is divided into a global and a local policy. First, a global policy is adopted to predict the position to be explored. Then a local policy is used to plan actions based on the predicted position and the occupancy map. As for the LLM, its advantage mainly lies in reasoning, so it is more suitable for the LLM to serve as the global policy, while action planning can be solved by traditional methods such as the Fast Marching Method and A*. 
[1] PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning, CVPR 2022 [2] L3MVN: Leveraging Large Language Models for Visual Target Navigation, IROS 2023 [3] ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation, ICML 2023 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the reply. I appreciate the authors' detailed discussion of scene-graph-based navigation approaches. Here are my questions. On the HM3D website (https://aihabitat.org/datasets/hm3d/), the authors of HM3D compare in detail the differences between MP3D, RoboTHOR and HM3D. As we can see from that table, HM3D is not the "simplest", as the authors claimed. I am grateful to the authors for their detailed comparison with other LLM-based approaches. But this raises more questions. For a fair comparison, the authors ran all methods with LLaMA-7B. **I think this is fair but not reasonable.** As seen in the paper, the performance of the authors' method under GPT-4 and under LLaMA-7B is very similar (almost identical). This means that an LLM's improved reasoning is hardly a gain for the authors' method. However, GPT-4 and other stronger open-source LLMs are not inaccessible in the real world. In a real deployment, it is unlikely that users would forcibly restrict the model used for navigation to LLaMA-7B, and it would be perfectly reasonable for them to use a stronger model for stronger performance; but the authors' method is unable to show that stronger LLMs result in stronger performance. For the baselines, by contrast, OpenFMNav achieves better results with GPT-4 and is superior to SG-Nav-GPT-4 on HM3D. Moreover, the authors of L3MVN used the RoBERTa-large model and achieved better results than those in the table shown by the authors, results which are also superior to SG-Nav-GPT-4 on HM3D. 
I think it is unreasonable for the authors to use LLaMA-7B instead of RoBERTa-large, because the two are not pre-trained in the same way, which is probably not in line with the motivation of the authors of L3MVN. And the number of RoBERTa-large parameters is much smaller than that of LLaMA-7B. In summary, the authors did not adequately test the potential of all methods under reasonable conditions, so I do not think the authors' comparison is reasonable. I thank the authors for the additional experiments on the usage of hierarchy in SG-Nav. One of my concerns is that building the scene graph relies heavily on the LLM's predictions (Lines 172-176), and using the scene graph also relies heavily on the LLM's predictions (Lines 213-223). With the LLM highly involved in navigation, the navigation performance should in theory be affected by the reasoning ability of the LLM; however, the experiments with GPT-4 and LLaMA-7B show that navigation performance is almost independent of the LLM's reasoning ability. Can the authors provide some analysis of this? Why do GPT-4 and LLaMA behave almost indistinguishably in navigating this graph structure? My other big concern is that, even if the scene structure is used as part of the prompts, the novelty is limited, as the method is largely prompt engineering. I consider the authors' method lacking in novelty, in large part because of recent developments in the integration of LLMs and scene graphs. For example, in SayPlan, the LLM can choose to collapse nodes in the scene graph that are irrelevant to the task and unfold nodes that are relevant. In short, the authors' use of the graph structure as prompts for the LLM lacks novelty and has some limitations. These limitations suggest that the authors' method is unable to obtain navigation performance improvements from LLM performance improvements. Therefore, the authors' method cannot exceed the performance of methods of the same type when both use GPT-4. 
I think these limitations are a big disadvantage, because nowadays, with the development of LLMs, there are more and more models with smaller sizes and more reasoning power, which other methods can benefit from but the authors' method cannot. Taking all these considerations into account, I keep my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the reply. The reviewer still has two concerns, for which we provide our answers below. (With less than two days to go, we are not providing new experimental results.) **1. SG-Nav is unable to obtain a performance boost when using a stronger LLM** The reviewer may think that the reason SG-Nav achieves similar performance with Llama-7B and GPT-4, respectively, is that our method cannot fully exploit the reasoning ability of the LLM. However, we should point out that this view is not correct. Other state-of-the-art LLM-based methods, like OpenFMNav, prompt the LLM with very long and complex scene context. In that case, the LLM needs to analyze complex and unstructured text, which requires very strong reasoning and analyzing ability, so using GPT-4 leads to better performance than Llama-7B. In our SG-Nav, by contrast, we prompt the LLM with the subgraph using chain-of-thought. In this way, the LLM only needs to understand the structural information contained in the 3D scene graph and follow the proposed sub-reasoning processes, which is a much easier task with a more structured input. With our chain-of-thought prompting, even Llama-7B is able to fully understand the scene context and achieve comparable performance with GPT-4. In future work, **we can construct more detailed and larger 3D scene graphs to prompt the LLM**. In that case, smaller LLMs may not have the ability to fully exploit the large 3D scene graph, and LLMs with stronger reasoning ability will achieve better performance. Moreover, we should highlight that **the ability to achieve high performance with only Llama-7B is a huge advantage of our method**. 
Just as the reviewer comments in the strengths part: “The experimental results show that there is no significant difference between the results of LLaMA-7B and GPT-4, indicating that the method can work in small-volume LLMs, which is friendly to the demand of computational resources.” Note that in real-world applications, the navigation framework may need to be deployed on the robot agent. In this case, using GPT-4 as the LLM would be very slow and expensive. On the contrary, Llama is open-sourced, and there are many works focusing on improving the efficiency of Llama for better deployment. So we think Llama-7B is a good choice for real robots. **2. Lack of novelty** The reviewer thinks SG-Nav lacks novelty because recent works such as SayPlan conduct in-depth studies on the integration of LLMs and scene graphs. However, we think this judgement is unfair. First, **SayPlan studies a totally different task from ours**. In their work, they assume a pre-constructed 3D scene graph is available and mainly focus on task planning. In SG-Nav, we propose to build an online 3D scene graph to prompt the LLM for navigation reasoning. Second, **the novelty of SG-Nav includes an efficient online 3D scene graph construction method, a hierarchical chain-of-thought prompting method and a re-perception mechanism**. We propose an online 3D scene-graph-based navigation framework, rather than just focusing on how to make an LLM understand a given 3D scene graph. In terms of the chain-of-thought prompting, we have explained its novelty in the rebuttal. We also notice that reviewer zzdn finds our chain-of-thought prompting interesting.
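The kind of structured subgraph-to-text prompting discussed in this thread can be sketched in a few lines. The node/edge contents and the helper `cot_prompt` below are our own invented placeholders for illustration, not the actual SG-Nav prompts.

```python
# Minimal sketch (our own placeholders, not SG-Nav's real prompts) of
# serialising a scene-graph neighbourhood into a chain-of-thought-style
# text prompt with explicit [nodes], [edges] and [questions] slots.
subgraph = {
    "nodes": ["bed (object)", "nightstand (object)", "bedroom (room)"],
    "edges": [("bed", "next to", "nightstand"),
              ("bed", "inside", "bedroom")],
}

def cot_prompt(subgraph, goal):
    nodes = "; ".join(subgraph["nodes"])
    edges = "; ".join(f"{a} {rel} {b}" for a, rel, b in subgraph["edges"])
    questions = (
        f"1. Which objects or rooms suggest a {goal} is nearby? "
        f"2. Estimate the probability that the {goal} is in this region."
    )
    return (f"Given nodes [{nodes}] and edges [{edges}] of the subgraph, "
            f"answer the questions: {questions}")

print(cot_prompt(subgraph, "pillow"))
```

Because the nodes and edges are serialised into fixed, labelled slots, even a small LLM only has to follow a structured template rather than parse free-form scene descriptions, which is consistent with the rebuttal's explanation of the Llama-7B results.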
Summary: The paper proposes a 3D scene graph prompting strategy and designs a hierarchical chain-of-thought prompt for improving LLM-based zero-shot object navigation. The 3D scene graph is incrementally updated and pruned to reduce the computational complexity. A re-perception mechanism is also introduced to correct perception errors. Experimental results on MP3D, HM3D, and RoboTHOR environments show the superiority of the proposed method over the competitors. Strengths: 1 The proposal of 3D scene graph prompting and hierarchical chain-of-thought is well-motivated and effective for encouraging scene context reasoning and improving decision interpretability. 2 The authors provide a detailed time complexity analysis for the proposed edge updating method. 3 Extensive experiments are conducted on three different datasets and the results show the effectiveness of the proposed method. 4 The paper is well-written with figures and tables nicely presented. Weaknesses: 1 The approach utilizes the LLM to predict the possible relationships between objects, and possible distances between object and goal. However, this information may vary across different scenes. What if the prediction is inconsistent with the real scene? Would it largely impact the navigation success? I suggest the authors give more results and analyses on this. 2 Although Figure 6 gives a visualization of the navigation process, the authors did not provide visualizations of the sequential frontier scoring and the reasoning output of the LLM. It would be better to present these visualizations to help further understand the advantages of the approach. 3 There are some recent works that surpass the reported compared methods, e.g., L3MVN (IROS 2023), VoroNav (ICML 2024), OpenFMNav (NAACL 2024), and VLFM (ICRA 2024). The authors should add them to the table for a more complete comparison and analysis. Technical Quality: 3 Clarity: 4 Questions for Authors: See details in the weaknesses. 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Error in the prediction of relationship and distance** We measure the accuracy of relationship and distance prediction on episodes of the MP3D validation set. For relationship prediction, we conduct a human study to annotate the correctness of each relationship predicted by SG-Nav. For each episode, we let the annotators evaluate the scene graph of the final navigation step. If all annotators think more than 80% of the edges in this scene graph correctly describe the real relationships, we annotate this episode with correct relationship. For distance prediction, we consider a prediction to be correct if the error between the predicted distance and the actual distance is less than 20%. For each episode, we compute the success rate of distance predictions on the final prompt, because the scene graph at this time is the most complete and the number of subgraphs is sufficient. As shown below, the overall accuracy of relationship prediction and distance prediction is **74.7%** and **68.4%** respectively, which is reasonably accurate. We observe that correct relationship and distance predictions increase the success rate of navigation. However, even if the predicted relationship and distance are incorrect, the agent still has a relatively high probability of successfully navigating to the goal (**33.2%** navigation success rate for incorrect relationships, and **35.4%** for incorrect distances). This validates the robustness of our framework. 
SR of navigation:

| Condition | SR |
| -- | :--: |
| Correct Relationship | 42.6 |
| Incorrect Relationship | **33.2** |
| Correct Distance | 42.8 |
| Incorrect Distance | **35.4** |
| Unconditional | 40.2 |

Accuracy of relationship:

| Result/Relation | Correct Relationship | Incorrect Relationship | Total |
| -- | :--: | :--: | :--: |
| Navigation Success | 31.8% | 8.4% | 40.2% |
| Navigation Failure | 42.9% | 16.9% | 59.8% |
| Total | **74.7%** | 25.3% | 100% |

Accuracy of distance:

| Result/Distance | Correct Distance | Incorrect Distance | Total |
| -- | :--: | :--: | :--: |
| Navigation Success | 29.3% | 11.2% | 40.5% |
| Navigation Failure | 39.1% | 20.4% | 59.5% |
| Total | **68.4%** | 31.6% | 100% |

**2. Visualization of the sequential frontier scoring and reasoning output of the LLM** We provide the visualization in the attached PDF file. **3. Comparison with more methods** We further compare SG-Nav with more recent works, including L3MVN, VoroNav, VLFM, and OpenFMNav, on MP3D, HM3D, and RoboTHOR. For LLM-based methods (SG-Nav, L3MVN, VoroNav, OpenFMNav), we uniformly set LLaMA-7B as the LLM for fair comparison. The results are shown in the table below. SG-Nav achieves state-of-the-art performance on the challenging MP3D and RoboTHOR benchmarks. On HM3D, SG-Nav also outperforms recent methods like L3MVN and VoroNav. However, we observe that SG-Nav falls slightly behind VLFM and OpenFMNav on HM3D. This is because HM3D is the simplest of the three benchmarks, so SG-Nav's ability to reason and plan in complex and large scenes cannot be fully utilized. Considering the overall performance on the three benchmarks, SG-Nav shows a leading advantage, especially in complicated and challenging scenarios. 
| **Method** | **MP3D SR** | **MP3D SPL** | **HM3D SR** | **HM3D SPL** | **RoboTHOR SR** | **RoboTHOR SPL** |
| -- | :--: | :--: | :--: | :--: | :--: | :--: |
| L3MVN | 34.9 | 14.5 | 48.7 | 23.0 | 41.2 | 22.5 |
| VoroNav | 31.2 | 14.2 | 42.0 | 26.0 | 38.4 | 22.2 |
| VLFM | 36.2 | 15.9 | 52.4 | **30.3** | 42.3 | 23.0 |
| OpenFMNav | 37.2 | 15.7 | **52.5** | 24.1 | 44.1 | 23.3 |
| **SG-Nav** | **40.2** | **16.0** | 49.1 | 24.3 | **47.5** | **24.0** |

--- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing additional quantitative and qualitative results. Most of my concerns are addressed and I am happy to raise my score. I sincerely suggest the authors add these results and analysis to their revised versions, which are helpful for improving the paper further. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer for the positive feedback. Your constructive comments and suggestions are indeed helpful for improving the paper. Also, many thanks for raising the score. We will continue to improve our work and release the code. If the reviewer has any follow-up questions, we are happy to discuss!
Summary: This paper proposes to use a 3D scene graph prompt in LLM-based zero-shot object navigation, which fully uses the information of the whole scene and is explainable. It also proposes a prune-based method to accelerate the construction of the graph. Extensive experiments show the superiority of the method and the effectiveness of each module. Strengths: 1. This paper solves the problem of insufficient use of scene information in past LLM-based zero-shot object navigation. 2. A fast way to establish a scene graph is proposed to ensure fast speed, and a theoretical proof is provided. 3. The experiments achieve good results, with SOTA results on multiple datasets. 4. The language of the paper is fluent and easy to understand, and there are few grammar errors. Weaknesses: 1. A VLM is needed for Short Edge Pruning, which may increase the time the model takes to construct the graph. 2. The Incremental Updating and Pruning may have some issues. (See Questions.) Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In lines 39-41, the authors mention that previous methods are abstract and unexplainable. Please explain this viewpoint in more detail and show how your method solves this problem. 2. Will the already constructed scene graph vary based on changes in perspective? (For example, at a previous timestamp when objects A and B did not appear in the same image, Long Edge Pruning is required, while at the next timestamp when A and B appear in the same image, Short Edge Pruning is required; the scene graph may then be different.) Does using Incremental Updating and Pruning ignore this change and result in cumulative errors? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Inference latency of SG-Nav** In each step of navigation, the time cost of SG-Nav can be divided into perception, graph construction, and reasoning, which take 0.3 s, 1.3 s, and 0.14 s on average, accounting for 17.3%, 74.8%, and 7.9% of the total. Edge pruning belongs to graph construction and only takes **11.5%** of the overall time.

| **Component** | **Average Time (s)** | **Percentage (%)** |
|---------------------------|:----------------------:|:--------------------:|
| Perception | 0.3 | 17.3 |
| Graph Construction | 1.3 | 74.8 |
| &emsp;&emsp;**Edge Pruning** | **0.2** (part of GC) | **11.5** (part of GC) |
| Reasoning | 0.14 | 7.9 |

**2. Why SG-Nav is explainable compared with previous methods like L3MVN and ESC** Previous LLM-based ObjNav methods like L3MVN and ESC directly score each frontier based on the nearby object categories. For example, if there are a table, a computer, and a chair near a frontier, these methods directly prompt the language model with the text of these three categories to predict the probability of this frontier, without considering the relationships between objects. **3. Inconsistency of Long Edge and Short Edge** We thank the reviewer for the constructive advice. Currently, we do not consider the change of perspective in our paper. We further add this verification mechanism to our system. At each timestamp, we not only validate newly generated edges, but also check whether there are existing object nodes that have been updated (merged with newly detected objects). For edges that have at least one updated node, we also perform validation. With this modification, the performance of SG-Nav changes from 40.2/16.0 to 40.2/16.1 (SR/SPL), which shows that the proportion of inconsistencies between long and short edges in actual samples is relatively small. 
| **Method** | **SR** | **SPL** |
| -- | :--: | :--: |
| w/o Verification | 40.2 | 16.0 |
| **w Verification** | **40.2** | **16.1** |

--- Rebuttal Comment 1.1: Comment: Thanks for the authors' further clarification. The response has resolved my confusion, so I decide to keep the score of 6. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer for the positive feedback. Your constructive comments and suggestions are indeed helpful for improving the paper. We will continue to improve our work and release the code. If the reviewer has any follow-up questions, we are happy to discuss!
Summary: This paper introduces a zero-shot object navigation framework using a 3D scene graph to represent the environment. It employs a hierarchical chain-of-thought prompt to LLMs for goal location navigation and includes a re-perception mechanism to correct errors. Experiments are conducted on MP3D, HM3D, and RoboTHOR. Strengths: - The task is aimed at open vocabulary ObjectNav, which is a more general task. - Edge updating considers computational complexity. - The idea of a hierarchical chain-of-thought method to prompt LLM is interesting. - The paper is well-structured and easy to understand. Weaknesses: - Although the paper claims to target open vocabulary (zero-shot) object navigation, the method uses some closed-set settings, such as a predefined relationship dictionary, which contradicts the open vocabulary premise. - In the experiments, the target objects chosen (on MP3D and HM3D) are the same as those in regular ObjectNav settings, where the objects are common. This seems to undermine the method's capability in handling open vocabulary objects. - The proposed 3D scene graph is defined with objects, groups, and rooms. There is a related work for ObjectNav that also constructs a similar graph with objects, zones, and scene nodes. I suggest that authors can compare their graph construction with this work and include it in the references to enrich the paper. [1] Hierarchical object-to-zone graph for object navigation, ICCV 21. Technical Quality: 3 Clarity: 3 Questions for Authors: - How is the accuracy of the 3D scene graph validated? - The chain-of-thought method to prompt LLM is interesting. Are there other methods to prompt LLM? Comparing different prompt methods could make the experimental results more solid. - Is the zero-shot object definition meant to be training-free, or does it mean that the target is not predefined? 
While the problem definition mentions open-vocabulary, Figure 1 and the selected object categories in the experiments are still common objects. - I would like to see some failure cases. What are the reasons behind these failures? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss potential limitations in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. About the open-vocabulary setting; is it training-free** Our method is training-free and open-vocabulary. Since most objects occurring in a scene belong to common categories, we can pre-define the relationships among these categories and save them to a dictionary, which accelerates inference. Note that the relationships are acquired by prompting the LLM, so this process is also training-free. When SG-Nav receives object categories that have not been saved before, we simply prompt the LLM to generate the relationship and update the dictionary. **2. The choice of object categories** We select the same categories as those in regular ObjectNav settings in order to compare with supervised and closed-vocabulary methods, following the experimental setting of ESC. But SG-Nav is not limited to any category set. **3. Comparison with “Hierarchical object-to-zone graph for object navigation”** This paper proposes a hierarchical object-to-zone (HOZ) graph for ObjectNav. There are several differences between SG-Nav and this work. First, SG-Nav is completely training-free, but HOZ needs to finetune a Fast-RCNN for graph construction and train an LSTM policy module via reinforcement learning. Second, the 3D scene graph in SG-Nav contains more hierarchy and relationships between objects, while HOZ mainly consists of zone nodes and uses adjacency probabilities as the edges between nodes. Third, SG-Nav adopts an LLM to exploit the hierarchical and structural information in the 3D scene graph in a zero-shot manner, while HOZ trains a network to parse the graph. We will cite this paper and compare with it in the final version of the paper. **4. The accuracy of the 3D scene graph** Since our method mainly focuses on how to leverage the 3D scene graph to enable context-aware reasoning for ObjectNav, we do not quantitatively evaluate the accuracy of the 3D scene graph. To better demonstrate the intermediate process of SG-Nav, we provide some visualization results in the attached PDF file. **5. 
Ablation on CoT-Prompting** We further design an ablation study on the chain-of-thought prompting method. The raw performance on MP3D is 40.2/16.0 (SR/SPL). If we directly convert the 3D scene graph into text and prompt the LLM, the performance on MP3D is 36.5/14.9 (SR/SPL). By converting the nodes and edges into text separately, the performance on MP3D is 37.0/15.0 (SR/SPL), which is better than the previous variant but still worse than our CoT-prompting. The results show that fully exploiting the hierarchical graph structure of the scene graph helps the LLM understand the scene context and make better decisions.

| **Method** | **SR** | **SPL** |
| -- | :--: | :--: |
| Text prompting | 36.5 | 14.9 |
| Text prompting separately | 37.0 | 15.0 |
| **Raw** | **40.2** | **16.0** |

**6. Failure case** We demonstrate failure cases in the attached PDF file. Please refer to the global rebuttal. --- Rebuttal Comment 1.1: Comment: Many thanks for the detailed rebuttal. The responses have addressed some of the concerns. In the response, the authors mentioned, "When SG-Nav receives object categories that have not been saved before, we simply prompt LLM to generate the relationship and update the dictionary." Does this imply that the proposed method mainly benefits pre-defined categories? If so, although the method can accommodate open vocabulary, the actual focus of the paper still revolves around pre-defined categories. This limits the method's generalizability, especially since it aims at zero-shot object navigation. --- Reply to Comment 1.1.1: Comment: Thanks for the reply. We explain the question as below: The purpose of pre-defining a dictionary is **only to improve navigation efficiency**, which is **irrelevant to the performance**, because we can save the relationships between common objects into the dictionary to save the time for LLM prediction. Even if we do not pre-define any category and start with an empty dictionary, the performance of SG-Nav will not change. 
In this way, it will be slower in the early episodes. But as more and more scenes are explored, the dictionary will gradually accumulate and cover common categories, and then SG-Nav will become faster. So the role of the pre-defined dictionary is to skip the early episodes and make SG-Nav efficient even at the beginning. It **does not affect the zero-shot ability** of SG-Nav.
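The dictionary mechanism described in this thread can be sketched as follows (a minimal Python sketch under stated assumptions; `RelationCache`, `query_llm`, and the category names are hypothetical stand-ins, not the SG-Nav code):

```python
class RelationCache:
    """Cache of LLM-predicted object-object relationships.

    Common category pairs can be pre-filled offline; unseen pairs fall
    back to an LLM query and are then cached, so later episodes skip the
    slow call. Only speed is affected, never the returned relationship.
    """

    def __init__(self, query_llm, prefilled=None):
        self._query_llm = query_llm            # stand-in for the real LLM call
        self._cache = dict(prefilled or {})
        self.llm_calls = 0                     # count of slow fallback queries

    def relation(self, cat_a, cat_b):
        key = tuple(sorted((cat_a, cat_b)))    # treat the pair as unordered
        if key not in self._cache:             # unseen pair -> ask the LLM once
            self._cache[key] = self._query_llm(*key)
            self.llm_calls += 1
        return self._cache[key]


# Usage: starting from an empty cache only affects latency, not answers.
fake_llm = lambda a, b: f"{a} is near {b}"     # hypothetical LLM stub
cache = RelationCache(fake_llm)
r1 = cache.relation("chair", "table")          # triggers one LLM call
r2 = cache.relation("table", "chair")          # served from the cache
```

Starting empty versus pre-filled changes only how often the fallback fires, which matches the authors' claim that the dictionary is an efficiency device rather than a closed-set assumption.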
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their valuable and constructive comments. We provide detailed answers to these questions and will revise the paper accordingly. Pdf: /pdf/6ab6cb986782b821085d5f8db742971928c99131.pdf
NeurIPS_2024_submissions_huggingface
2024
Neural Embeddings Rank: Aligning 3D latent dynamics with movements
Accept (poster)
Summary: The authors propose a novel method for reducing the dimensionality of neural dynamics and aligning them with movements. This method is compared with six existing methods across different brain regions. Experiments demonstrate that all movement parameters can be accurately decoded from neural dynamics using this approach. Strengths: The paper offers an extensive comparison to prior works on the dimensionality reduction task. It is important to note that baselines represent diverse techniques in the field. Additionally, all code used in the study is provided, ensuring transparency and reproducibility. The paper also includes numerous visualizations and is well-structured. Weaknesses: The methodology section is unclear, with equations lacking necessary explanations and annotations. Additionally, there appear to be typographical errors, such as the inconsistent notation where temperature $\tau$ is sometimes mistakenly written as $\textit{r}$. Moreover, all the experiments lack an assessment of statistical significance, which is crucial for validating the results. Technical Quality: 2 Clarity: 3 Questions for Authors: - Could you please provide a statistical analysis of your results? - Could you also provide some quantitative comparisons? Confidence: 1 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have mentioned limitations of their work and I do not perceive any specific negative societal impacts that would need to be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback from R#3. We thank R#3 for recognizing the contribution of our work. **Strengths** Extensive comparison to prior works\ Thanks! **Weaknesses1** Unclear methodology\ We briefly described our method in the global rebuttal with an intuitive diagram of our model NER vs. CEBRA in Rebuttal Fig. 7. A detailed explanation is provided in the Rebuttal to R#2. **W2** Typographical errors in formulas\ We apologize for this mistake and have fixed it now (Rebuttal to R#2). **W3** Lack of statistical significance\ This is a great suggestion! From the Rebuttal Table, we can validate that: 1. NER reveals significantly more consistent 3D latent dynamics than CEBRA and pi-VAE. 2. NER always significantly outperforms CEBRA and pi-VAE when using a linear decoder for both within- and cross-session decoding. 3. NER always significantly outperforms CEBRA and pi-VAE when using a k-NN decoder for cross-session decoding. **Rebuttal Table**: statistical analysis and quantitative comparisons of presented results. Figs. 2 and 5a measure the correlation coefficient between each pair of averaged latent dynamics. $\mu$, mean; $\sigma$, standard deviation; *t*, t-statistic (a larger value indicates a larger difference between two groups); **N**, NER; **C**, CEBRA; **pi**, pi-VAE; *F*, F-statistic (a higher F indicates a greater disparity between groups); *p*, p-value; the Friedman test is a non-parametric statistical test similar to the ANOVA but does not assume the data is normally distributed; *s*, overall differences among the three models; d., values on the diagonal; o., values off the diagonal; L, left panel; R, right panel.

| Fig. | NER $\mu$ $\sigma$ | CEBRA $\mu$ $\sigma$ | piVAE $\mu$ $\sigma$ | N vs C $t$ $p$ | N vs pi $t$ $p$ | C vs pi $t$ $p$ | ANOVA $F$ $p$ | Friedman $s$ $p$ |
|-----------|---------------------|----------------------|----------------------|----------------|-----------------|-----------------|---------------|------------------|
| 2 | 0.95, 0.03 | 0.92, 0.05 | 0.43, 0.25 | 7.9, 5.5e-10 | 15.3, 2.9e-19 | 14.8, 1.0e-18 | 226.0, 2.2e-35 | 86.2, 1.9e-19 |
| 4a-d. | 0.71, 0.1 | -3.08, 0.6 | -1.21, 1.4 | 16.8, 4.3e-08 | 4.1, 2.5e-03 | -5.8, 2.8e-04 | 57.9, 1.4e-08 | 18.2, 1.1e-04 |
| 4a-o. | 0.64, 0.1 | -3.22, 1.0 | -2.05, 1.6 | 40.9, 1.7e-59 | 16.3, 1.7e-28 | -9.5, 3.9e-15 | 458.7, 5.8e-71 | 156.4, 1.1e-34 |
| 4b-d. | 0.76, 0.1 | 0.75, 0.1 | 0.87, 0.1 | 0.4, 7.0e-01 | -3.5, 7.2e-03 | -3.2, 1.1e-02 | 9.5, 1.5e-03 | 9.8, 7.4e-03 |
| 4b-o. | 0.70, 0.1 | -1.07, 1.5 | -1.05, 1.1 | 11.5, 3.3e-19 | 15.4, 6.4e-27 | -0.1, 8.9e-01 | 95.2, 7.6e-29 | 135.2, 4.4e-30 |
| 4c-d. | 0.94, 0.0 | 0.74, 0.0 | 0.67, 0.2 | 11.6, 9.9e-07 | 4.4, 1.8e-03 | 1.4, 1.9e-01 | 17.4, 6.3e-05 | 12.6, 1.8e-03 |
| 4c-o. | 0.88, 0.1 | 0.71, 0.1 | 0.23, 0.4 | 19.2, 2.5e-33 | 17.9, 3.0e-31 | 12.6, 1.4e-21 | 240.0, 2.9e-51 | 166.4, 7.3e-37 |
| 4d-d. | 0.93, 0.0 | 0.93, 0.0 | 0.96, 0.0 | 0.5, 6.6e-01 | -3.7, 5.0e-03 | -3.1, 1.2e-02 | 9.5, 1.6e-03 | 8.6, 1.4e-02 |
| 4d-o. | 0.89, 0.0 | 0.18, 0.5 | 0.13, 0.4 | 13.1, 2.0e-22 | 16.5, 8.0e-29 | 0.7, 4.8e-01 | 115.7, 6.3e-33 | 135.8, 3.2e-30 |
| 4e-d. | 93, 4.4 | 90, 4.6 | 81, 11.9 | 2.5, 3.6e-02 | 2.4, 4.0e-02 | 1.9, 9.5e-02 | 4.7, 2.3e-02 | 3.2, 2.0e-01 |
| 4e-o. | 90, 5.9 | 79, 9.0 | 43, 15.2 | 13.6, 1.7e-23 | 26.0, 2.6e-43 | 17.5, 1.2e-30 | 445.6, 5.0e-70 | 160.2, 1.7e-35 |
| 4f-d. | 92, 4.1 | 93, 4.2 | 98, 1.5 | -0.3, 7.5e-01 | -4.9, 8.3e-04 | -3.4, 7.5e-03 | 10.1, 1.1e-03 | 11.5, 3.1e-03 |
| 4f-o. | 89, 5.8 | 57, 16.3 | 44, 15.9 | 19.1, 3.2e-33 | 25.7, 5.4e-43 | 5.6, 2.5e-07 | 297.7, 1.7e-57 | 138.8, 7.2e-31 |
| 5a | 0.87, 0.08 | 0.81, 0.10 | 0.56, 0.09 | 14.8, 1.1e-18 | 29.3, 1.6e-30 | 20.7, 2.8e-24 | 590.5, 1.0e-51 | 90.0, 2.9e-20 |
| 5bL-d. | 0.75, 0.0 | -3.06, 0.4 | -1.56, 1.1 | 35.2, 1.2e-12 | 7.2, 1.8e-05 | -4.8, 5.8e-04 | 103.2, 6.6e-12 | 22.2, 1.5e-05 |
| 5bL-o. | 0.74, 0.0 | -3.10, 0.3 | -2.78, 2.2 | 124.6, 5.7e-138 | 18.2, 9.7e-38 | -1.6, 1.1e-01 | 355.4, 2.3e-75 | 205.3, 2.6e-45 |
| 5bR-d. | 0.78, 0.1 | 0.78, 0.1 | 0.84, 0.1 | -0.0, 9.8e-01 | -2.4, 3.3e-02 | -1.9, 7.7e-02 | 3.5, 4.8e-02 | 6.2, 4.6e-02 |
| 5bR-o. | 0.76, 0.1 | -0.72, 1.0 | -0.62, 0.8 | 17.7, 1.8e-36 | 19.8, 3.5e-41 | -0.9, 3.7e-01 | 177.6, 1.8e-49 | 192.2, 1.9e-42 |
| 5cL-d. | 0.94, 0.0 | 0.75, 0.0 | 0.58, 0.1 | 36.3, 8.4e-13 | 9.5, 1.3e-06 | 4.5, 9.7e-04 | 67.2, 4.3e-10 | 22.2, 1.5e-05 |
| 5cL-o. | 0.93, 0.0 | 0.74, 0.0 | 0.21, 0.3 | 77.2, 4.3e-111 | 24.3, 2.6e-50 | 18.2, 1.1e-37 | 482.5, 1.5e-88 | 262.0, 1.3e-57 |
| 5cR-d. | 0.93, 0.0 | 0.94, 0.0 | 0.95, 0.0 | -0.1, 9.3e-01 | -1.2, 2.4e-01 | -1.0, 3.4e-01 | 1.0, 3.9e-01 | 4.7, 9.7e-02 |
| 5cR-o. | 0.92, 0.0 | 0.25, 0.5 | 0.18, 0.5 | 15.3, 5.6e-31 | 17.2, 2.6e-35 | 1.1, 2.8e-01 | 140.5, 3.4e-42 | 195.1, 4.3e-43 |
| 6d-d. | 0.67, 0.0 | -4.59, 1.3 | -0.38, 0.3 | 5.6, 3.0e-02 | 5.9, 2.8e-02 | -5.5, 3.2e-02 | 31.1, 3.6e-03 | 6.0, 5.0e-02 |
| 6d-o. | 0.56, 0.1 | -4.79, 1.1 | -0.75, 0.1 | 11.4, 9.0e-05 | 19.4, 6.7e-06 | -8.3, 4.1e-04 | 101.3, 2.3e-07 | 12.0, 2.5e-03 |
| 6e-d. | 0.90, 0.0 | 0.60, 0.0 | 0.73, 0.1 | 18.5, 2.9e-03 | 3.1, 8.9e-02 | -1.8, 2.2e-01 | 16.1, 1.2e-02 | 4.7, 9.7e-02 |
| 6e-o. | 0.77, 0.1 | 0.45, 0.1 | 0.51, 0.1 | 5.9, 1.9e-03 | 3.3, 2.1e-02 | -0.9, 4.1e-01 | 12.5, 1.9e-03 | 9.0, 1.1e-02 |

**Q1** Statistical analysis\ See answers to W3. **Q2** Quantitative comparisons\ In the above Rebuttal Table, we provide the mean and standard deviation of all comparisons we have made for Figs. 2, 4, 5, and 6. 
The explained variance of Fig. 3a is in the caption, and the correlation coefficient of the six curves shown in Fig. 3b is in Fig. 12b. Fig. 3c, d shows the explained R² among 19 sessions for 6 models. The explained R² between NER and CEBRA in the second motor task is shown in the caption of Fig. 16. Please let us know if you find we missed quantification for any figures or claims, or if there are other useful quantitative methods you suggest. We would be more than happy to add them. --- Rebuttal Comment 1.1: Title: Feedback Comment: Dear Authors, Thank you for taking the time to address my concerns. I appreciate your thorough examination and the quantitative results you provided. However, I still find the large discrepancies in values across different sessions puzzling. For instance, the mean value for 4d-d is 0.93, while for 4e-d, it jumps to 93 for the same method. Could you clarify the reason for such significant differences? Additionally, could you explain why there are substantial differences in the mean latent dynamics between your method, CEBRA, and pi-VAE on certain sequences, such as 6d-d? Thank you in advance! --- Reply to Comment 1.1.1: Title: R2 vs. Accu.% and Velocity decoding using Linear Regression Comment: Dear Reviewer rR7L, Thank you for your careful review and attention to detail in the Rebuttal Table. Regarding your first question, in Fig. 4d, the value 0.93 represents the **R²** of position decoding, while in Fig. 4e, the 93 refers to **direction classification accuracy (%)**. We understand this could be confusing, so we will add **%** for all values in Fig. 4e and 4f in the revised paper (though we couldn’t update our first rebuttal). For your second question about Fig. 6d (velocities decoding using a linear regression decoder for NER, CEBRA, and pi-VAE), we agree that the R² range from 0.67 to -4.59 is confusing. Reviewer VFtX (R#1) also asked about large negative R² values, indicating the model is predicting in the opposite direction. 
We addressed this in point #2 of the general rebuttal letter, and we copied the answer here for your convenience: *R#1 asked about CEBRA generating an R² of -3. Negative R² values occurred only when using linear decoders to predict velocities (Fig. 4a). This is because velocity changes faster than position (Rebuttal Fig. 4a), and CEBRA struggles to capture the infrequent large amplitude velocities (Rebuttal Fig. 4b). Consequently, large negative velocities are decoded as small positive velocities, and vice versa (Rebuttal Fig. 4c). This leads to decoded velocities and positions being smaller than actual ones (Rebuttal Fig. 4d).* As you noticed, substantial differences occur in velocity decoding using a linear regression decoder in Fig. 4a, 5bL, and 6d. We addressed this in point #3 of the general rebuttal letter. The linear regression decoder has no hyperparameters, unlike more complex linear decoders such as Ridge regression (used in the Neural Latents Benchmark paper, Pei et al., *NeurIPS*, 2021), which includes an alpha hyperparameter. **We chose the simplest linear mapping for all models to prevent more complex decoders from compensating for poor alignment between latent dynamics and behaviors.** Under this challenging condition, only NER works (R² > 0.67 in M1/PMd/S1), while CEBRA and pi-VAE predict in the opposite directions. Sincerely, The Authors --- Rebuttal 2: Title: Thank you Comment: Dear Reviewer rR7L, Thank you once again for reviewing our manuscript and rebuttal, and for supporting the publication of our work. Sincerely, The Authors
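The decoder discussion above can be illustrated numerically (a toy NumPy sketch, not the authors' pipeline): an ordinary least-squares decoder has no hyperparameter to tune, and R² goes strongly negative as soon as predictions point in the opposite direction of the targets.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    Negative whenever predictions are worse than the constant
    mean predictor, e.g. when they point the wrong way."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 3))              # toy 3D latent dynamics
w_true = np.array([1.0, -2.0, 0.5])
velocity = latents @ w_true + 0.1 * rng.normal(size=200)

# Plain least squares: no hyperparameters, unlike Ridge regression.
w_hat, *_ = np.linalg.lstsq(latents, velocity, rcond=None)
pred = latents @ w_hat

r2_aligned = r2_score(velocity, pred)    # near 1: latents track velocity
r2_flipped = r2_score(velocity, -pred)   # strongly negative: opposite direction
```

This mirrors the rebuttal's point: a hyperparameter-free linear mapping cannot compensate for poorly aligned latents, so sign-flipped or attenuated decodes surface directly as large negative R² values.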
Summary: The paper introduces a novel dimensionality reduction method called Neural Embeddings Rank (NER) for aligning neural dynamics with movement in brain-computer interfaces. NER uses a ranking loss to embed neural dynamics into a 3D latent space that aligns with continuous movement parameters. The authors apply NER to neural recordings from primary motor cortex (M1), dorsal premotor cortex (PMd), and primary somatosensory cortex (S1) in monkeys performing reaching tasks, comparing it with six other dimensionality reduction techniques. The paper finds that NER outperforms all other methods in aligning latent dynamics with both hand position and direction. Strengths: 1. The experimental design is comprehensive, covering multiple brain areas (M1, PMd, S1), multiple monkeys, and a variety of task conditions including curved reaching movements, and the statistical rigor is evident in the consistent performance across multiple sessions and conditions. 2. The benchmarking against six other dimensionality reduction techniques provides a thorough evaluation of NER's performance. 3. Extensive figures and graphs were used to illustrate the statistical results of the paper, most of which were informative and well-designed. 4. The effectiveness of NER is significant and proven to exceed the performance of multiple other methods under different tasks. Weaknesses: 1. The explanation of the method is too limited. Only a very small portion of the paper is dedicated to the Model/Method section, which makes it difficult to understand the fundamental algorithm of NER. 2. It is also reasonable to suspect the main method of the paper is the application of the existing method RNC, which limits the contribution of the paper. 3. 
It is empirically evident that the method works well in the setting of the paper, but there is limited discussion on what could be the cause of such an improvement over other methods (such as CEBRA), which by the formulation of the loss function, is very similar in nature. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. Why is the Rank and Contrast loss effective in this task? Specifically, how does it differ from other contrastive-loss-based methods like CEBRA? 2. I believe the paper would benefit if a more detailed description were given to the method section, especially a description of RNC and how your method adapted it and could be different from it. Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate R#2 for recognizing the contribution and strength of our work. **Weaknesses1** Limited explanation of method\ Agree. We will add a detailed explanation (see below) later. **W2** Application of existing RNC method\ We came up with the idea of modifying the loss in the CEBRA paper (2023 May) using regression before the publication of the Rank and Contrast (RNC) paper (Zha et al., NeurIPS, 2023 Nov). *In the neuroscience field, ramped neural responses to continuously increasing sensory features are widely observed (as mentioned in the Introduction). It is unnatural to classify sound levels, light intensities, inter-eye distances, etc*. Before modifying it, we searched "regression AND contrastive learning" and found "Supervised Contrastive Regression" (SupCR, ICLR Rejection), which is the same paper as RNC. So we borrowed their loss. The CEBRA paper is conceptually similar to supervised contrastive learning for classification tasks (SuperCon, Khosla et al., NeurIPS, 2020), and SuperCon has been cited over 4400 times. In contrast, RNC+SupCR has only 30 citations now (9 in 2023), so it is almost impossible for neuroscientists to have heard of the RNC paper unless they purposely search for it. Even though we applied the RNC loss in NER, there are significant differences between NER and the RNC paper: RNC encodes features into a 512D space for prediction, whereas NER reduces neural dynamics to 3D latent dynamics. We believe **NER is a neural-inspired method for neural-behavioral analysis**. **W3** Loss in CEBRA and NER\ The two losses are indeed very similar! We briefly described the two models in the global rebuttal with an intuitive diagram of NER vs. CEBRA in Rebuttal Fig. 7.\ We will use $x$ as high-dimensional neural dynamics, $f$ as the feature extractor, and $v=f(x)$ as low-dimensional neural embeddings. The batch size $N$ is 512 and the temperature $\tau$ is fixed to 1. 
Unlike common contrastive learning methods such as SupCon and RNC, data augmentation in CEBRA and NER is achieved by selecting the distribution of embeddings whose labels fall within a $\Delta t$ (e.g., 10 ms) offset of the anchor's label.\ In each iteration, CEBRA gets one batch of anchors $v_i$, one batch of embeddings $v_j$ that will positively contrast with the anchors, and a third batch of randomly sampled embeddings $v_n$ that will negatively contrast with the anchors. The anchor loss $l$ in CEBRA is:\ $$l_{CEBRA}^{(i)} = -\log \frac{\exp(\text{sim}(v_i, v_j)/\tau)}{\sum_{n=1}^{N} \exp(\text{sim}(v_i, v_n)/\tau)}$$ where $\text{sim}(\cdot, \cdot)$ is the similarity between two neural embeddings.\ In each iteration, NER gets one batch of anchors $v_i$, one batch of augmented embeddings $v_j, v_k$ that will either positively or negatively contrast with an anchor, and a third batch of labels $y$ for $v_i$ and $v_j, v_k$. The anchor loss $l$ in NER is:\ $$l_{NER}^{(i)} = \frac{1}{2N-1} \sum_{j=1, j \neq i}^{2N} -\log \frac{\exp(\text{sim}(v_i, v_j)/\tau)}{\sum_{v_k \in S_{i,j}} \exp(\text{sim}(v_i, v_k)/\tau)}$$ There are two differences from CEBRA. First, the batches of $v_i$ and $v_j, v_k$ are merged and the labels $y$ are duplicated, resulting in a batch of $2N$ for both embeddings and labels. The purpose is to ensure that each anchor and its augmented embedding exist in the same batch. Second, we introduce $S_{i,j} := \{v_k \mid k \neq i,\ d(y_i, y_k) \geq d(y_i, y_j)\}$ to denote the set of embeddings $v_k$ that are of higher rank than $v_j$ in terms of label distance with respect to $v_i$, where $d(\cdot, \cdot)$ is the distance measure between two labels. Intuitively, for an anchor $v_i$, any other embedding $v_j$ in the batch is positively contrasted with it, enforcing the similarity between $v_i$ and $v_j$ to be larger than that between $v_i$ and any other embedding $v_k$ in the batch whose label distance $d(y_i, y_k)$ is larger than $d(y_i, y_j)$.
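For concreteness, the NER anchor loss above can be sketched in plain NumPy. This is an illustrative sketch only: the function name and the choice of cosine similarity for $\text{sim}(\cdot,\cdot)$ are our assumptions, not the released NER code.

```python
import numpy as np

def ner_anchor_loss(v, y, tau=1.0):
    """Sketch of the NER (RNC-style) loss over a merged batch.

    v : (2N, d) array of embeddings; y : (2N,) array of scalar labels.
    sim(., .) is taken to be cosine similarity (an assumption).
    """
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    sim = v @ v.T / tau                      # pairwise similarities
    n = v.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            # S_{i,j}: embeddings k != i whose label distance to y_i is
            # at least d(y_i, y_j); note j itself always belongs to S_{i,j}
            in_S = [k for k in range(n)
                    if k != i and abs(y[k] - y[i]) >= abs(y[j] - y[i])]
            denom = np.exp(sim[i, in_S]).sum()
            total += -np.log(np.exp(sim[i, j]) / denom)
    # average over anchors i and over the 2N-1 contrast terms per anchor
    return total / (n * (n - 1))
```

Because every anchor contrasts against all $2N-1$ other embeddings, each anchor's loss term sees the infrequent labels in the batch, unlike the single positive pair in the CEBRA loss.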
Minimizing $l_{NER}^{(i)}$ will align the order of the embeddings with the corresponding order of their labels with respect to anchor $v_i$. **Questions1** Why the RNC loss is effective\ NER with the RNC loss aims to solve the **high dimensionality and class imbalance** issues in CEBRA. 1. CEBRA is an extension and generalization of the standard InfoNCE loss for classification tasks. By treating continuous labels as many discrete classes, it faces difficulty in representing them with low-dimensional latent spaces. To compensate for this drawback, one solution is to increase its dimensionality. For example, in simple tasks like rats running on a rail, CEBRA can achieve behavior-aligned latent dynamics in 3D, but these simple tasks can be represented in 2D using NER. For relatively complex tasks like reaching in 8 directions, CEBRA requires 4D for representation and 16D for decoding, whereas NER can handle these tasks in 3D. For complex behaviors like curved reaching, CEBRA struggles in 3D, but NER works well. 2. Representation learning of CEBRA in 16D is still worse than NER in 3D. In each batch, a very low percentage (say 5%) of embeddings come from infrequent classes, such as large velocities. In CEBRA, only a single augmented embedding is positively contrasted with its anchor, so only 5% of the $l_{CEBRA}$ terms have access to the 5% infrequent embeddings. In NER, all $2N-1$ embeddings can contrast with an anchor, so 100% of the $l_{NER}$ terms have access to the 5% infrequent embeddings. Through the RNC loss, NER resolves the class imbalance issue and can effectively represent infrequent classes. **Q2** RNC description\ We will add our answers to the revised paper. **Q2** How we adapted RNC\ One only needs to replace the embeddings that will be negatively contrasted with the anchors (selected using the labels of the anchors) and replace the InfoNCE loss with the RNC loss. **Q2** How the NER paper differs from the RNC paper 1. NER is for dimensionality reduction, while RNC is for prediction. 2.
NER operates in 3D, whereas RNC operates in 512D. 3. For augmentation, NER uses existing nearby data, while RNC creates new data. --- Rebuttal Comment 1.1: Title: Last day of discussion Comment: Dear Reviewer f3GE, This is a kind reminder that the author-reviewer discussion will close today. Please let us know if you have any questions or need clarification on our rebuttal. Sincerely, The Authors
Summary: The authors propose a dimensionality reduction method, specifically to learn latent neural dynamics in a 3-dimensional space. The authors perform experiments that test the transfer of their model from one hemisphere to another, from one year to another, from one brain region to another, and how well dimensionality reduction methods separate straight and curved hand movements. Each experiment is at least compared with CEBRA and Pi-VAE, which are both methods that also (potentially) have access to the target variables during training. Strengths: The authors perform a variety of interesting generalization tasks and generally find their model to work very well. Some of the experiments the authors perform are fairly novel in the specific type of generalization they are testing, e.g. wide vs straight reaching in Section 4.6, which is interesting. The paper is relatively clear, but does require some rewriting, especially grammatically, and Section 4.6 requires a rewrite to clarify exactly how the authors are training the models. Weaknesses: 1. The authors do not compare their model with AutoLFADs [1] or a target-based model like CTRL-TNDM [2] that extends LFADs with labels and currently has the highest performance on the Neural Latents Benchmark [3]. Moreover, especially given that many of the tasks tested in the paper are related to generalization across a number of different variables, I believe it is important to compare the results with POYO [4], which specifically focuses on generalization. 2. I think the authors should submit the exact code they used for these experiments, not just the code that is required to reproduce the figures once the code has been run. I am surprised by the low performance of both Pi-VAE and CEBRA and am curious how the authors specifically performed their experiments, e.g. did both methods also have access to the continuous labels like the authors’ model.
The results do not instill confidence in me, even though the authors provide many specifics in their Appendix. Specifically, in the Neural Latent Benchmark [3], CEBRA obtains the second best R^2 performance on a completely held out test set of velocities. However, and I understand this is a different task, on inter-hemispheric generalization it obtains a -3 R^2 (Figure 4), for example. This would indicate that to some extent the model is essentially predicting the opposite of the signal. Given that the authors use the exact same neural feature encoder as CEBRA, I find it hard to believe that such a dramatic change comes from changing the loss function. Especially given that CEBRA-behavior would also contrastively make latent embeddings of the same velocity/direction more similar. I do believe there is potentially a benefit to the authors’ loss function, but I find the baseline results very surprising and believe a full submission of the code would help clear up this concern. Note: I am absolutely willing to be proven wrong. 3. The authors only use one dataset to verify their results, I believe it is imperative for the authors to verify their results in another dataset as well. 4. The authors fail to look at animal generalization, which I would say is probably more interesting even than generalization from one brain region to another. 5. The figures in the paper are not aligned in scale (per row), and text sometimes overlaps (Figure 5). Generally the colorbars are unclear because they have different scales and are not colored. Please add one colorbar per row of figures. Moreover, Figures 9 and 10 in the Appendix have very small 3D plots that are hard to view. 6. The authors do not provide a lot of motivation for their method and do not explain the intuitions or implications of their results. Why is their loss function better? Why does their model generalize better? Why does generalization between brain regions make sense? Etc. [1] Keshtkaran, M. R., Sedler, A. 
R., Chowdhury, R. H., Tandon, R., Basrai, D., Nguyen, S. L., ... & Pandarinath, C. (2022). A large-scale neural network training framework for generalized estimation of single-trial population dynamics. Nature Methods, 19(12), 1572-1577. \ [2] Kudryashova, N., Perich, M. G., Miller, L. E., & Hennig, M. H. (2023, March). Ctrl-TNDM: Decoding feedback-driven movement corrections from motor cortex neurons. In Computational and Systems Neuroscience (Cosyne) 2023. \ [3] https://eval.ai/web/challenges/challenge-page/1256/leaderboard/3183 \ [4] https://poyo-brain.github.io Grammatical and language weaknesses (some of these are personal preference, so feel free to ignore ones you deem unimportant): - L83: “Fourth, label-guided generative methods using VAE…” -> using VAEs - L86: “It reveals … but cannot align with movement trajectories, …” -> The revealed embeddings … but do not align well with movement trajectories. - L87: “… for learning robust… “ -> to learn robust - L98: “… Fig 1 gives the pipeline of this studies.” -> Fig 1 visualizes the pipeline used in this study. 
- L128: “… and the its less … “ -> and is less - L131: “Both models fails…” -> fail - L133: “A major limitation of both method…” -> methods - L134: “… for revealing movements aligned …” -> movement-aligned - Figure 3 caption: “b Hand directions” -> hand direction - Figure 3 caption: “X-axis indicate…” -> indicates - Figure 4 caption: “… b, d, f are trained on the 80% … “ -> on 80% - Figure 4 caption: “Brighter color indicate” -> colors indicate/color indicates - L142: “A linear regression decoder could explain…” -> can - L145: “This tuning curve is highly correlate with…” -> correlated - L150: “Together, combined with with …” -> with - L154: “… and position in the 80% of training datasets …” -> in the 80% training data - L156: “We will performance the … “ -> perform - L159: “… we also tested the nonlinear decoder …” -> a nonlinear decoder - L165: “NER outperforms the CEBRA …” -> outperforms CEBRA - L165: “… over large margin …” -> by a large margin - L166: “A kNN decoder has no improvement over the NER, but …” -> The kNN decoder does not improve NER performance compared to the linear decoder, but for CEBRA and Pi-VAE, the kNN decoder improves within session data performance over NER. - L168: “However, this sacrificed …” -> this sacrifices - L174: “… NER revealed same …” -> the same - L177: “… using linear regression decoder … “ -> using a linear regression decoder - L180: “… NER is robust to different decoder …” -> robust to different decoders - L180: “… CEBRA and Pi-VAE change from failed state to outperform NER …” -> the performance of CEBRA and Pi-VAE changes from very low performance to outperforming NER on within-session decoding with the kNN decoder. - L198: “… when monkey perform …” -> the monkey performs/the monkeys perform - L205: “… NER revealed narrower latent dynamics for straight movements …” -> I am not sure what the authors mean by this sentence. 
- L210: “… CEBRA achieves compatible performance …” -> comparable performance - Figure 6 caption: “The reference target is same sessions used in …” -> is the same session as used in - Figure 6 caption: “… is first session in S1.” -> is the first session - Figure 7 caption: “The cartoon from [18].” -> Subfigure a is reproduced from [18]. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Why is the generalization from one brain region to another in the same monkey with the exact same trained model important? And why should it work? 2. I would also be interested as to why the authors believe the model performs relatively well in generalizing from one brain region to another. It is not entirely intuitive to me directly why a model trained on one region should produce latent embeddings for another region that, when rotated with the Procrustes method, obtains latent embeddings that are still linearly related to the task. My question arises because different brain regions (especially S1 compared to PMd) are entirely new neurons, with a new level of abstraction, which would intuitively require at least some level of fine-tuning of the model. 3. Do you specifically use the CEBRA model that has access to targets during the self-supervised learning (CEBRA-behavior)? This is unclear in the text, and I could not find what type of CEBRA models the authors use given that in the original CEBRA paper [1] they propose both CEBRA time and CEBRA behavior. The same question arises for the Pi-VAE, which is a method that can be used for both discrete and continuous labels. The proposed method in the paper uses continuous labels, but from L86 it almost sounds like the authors trained the Pi-VAE with discrete labels. If so, this comparison is unfair. 4. Why is it important that X-Y velocity prediction accuracy is aligned with the direction tuning curve in PMd? 5. Why are the CEBRA models 16 dimensions instead of 3 for the k-NN decoders? [1] Schneider, S., Lee, J. H., & Mathis, M. W.
(2023). Learnable latent embeddings for joint behavioural and neural analysis. Nature, 617(7960), 360-368. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I appreciate the last sentences in the authors’ discussion that indicate where their model fails. I think it would be great if the authors included these results in the Appendix as well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the detailed and constructive feedback from R#1. We thank R#1 for recognizing the contribution of our work. R#1's concern about the technical flaws in evaluating our models against CEBRA and pi-VAE is valid and deserves detailed explanations. **Strengths** Generalization across two different tasks \ Thanks! We need to emphasize that NER is a **dimensionality reduction method** that reveals behavior-aligned latent dynamics in 3D. We think it is unfair to compare NER on generalization with POYO or other methods that are not dimensionality reduction methods, because POYO has a 128D latent space. **Weaknesses1** Compare with more models\ We compared NER with AutoLFADS using the same datasets (Rebuttal Figs. 2, 5). The performance of AutoLFADS is much worse than NER. We could not test CTRL-TNDM as there is no public code. POYO is not a dimensionality reduction method (128D) and requires several days of training using eight A50 GPUs. **W2** Codes, low performance, and -3 R² of CEBRA\ See the general rebuttal. We thank R#1 for raising this question about the -3 R², which is indeed confusing. **W3** More datasets\ We validated NER on the rat's hippocampus data that has been used by the CEBRA and pi-VAE papers (Rebuttal Fig. 5). The performance of NER is similar to CEBRA but only requires 2D, and both are better than pi-VAE and AutoLFADS. **W4** Animal generalization\ We analyzed and uploaded the datasets from Monkey Mihili during the initial submission <Fig2_NER/Cebra/piVAE_embeddings_align>. We did not report this data due to the nine-page limit. We are glad R#1 raised this question so that we could discuss animal generalization (Rebuttal Fig. 6), which we think is also very important. **W5** Figure alignment and colorbar\ We believe R#1 is referring to the different scales we used in Fig. 4a-b, 5b, and 6d. We agree with R#1 and will use the same color scale for all panels (Rebuttal Fig.
6) and show only one colorbar on the right side of each row. We will enlarge the 2D/3D latent dynamics shown in Figs. 9 and 10. **W6** Why our loss is better\ See the general rebuttal and rebuttal to R#2. **W6** Why NER generalizes better\ A well-trained monkey performs almost the same stereotyped hand movements, and only NER could align (almost perfectly) the latent dynamics with the movement trajectories. The explained variance of velocity and position in NER is >80% for all 19 sessions (Fig. 3c, d). It outperformed all four other models in every session. **W6** Why generalization between brain regions makes sense\ See the general rebuttal. **W7** Grammar and language\ We sincerely thank this reviewer for carefully examining our writing in the main text and figure captions. We will incorporate those suggestions in our revised paper. **Questions 1** Why the generalization is important and works\ See the general rebuttal. **Q2** Why generalization works\ See the general rebuttal. Note that NER/CEBRA models are trained on each session or dataset separately. **Q3** Whether the comparison is fair\ For CEBRA, we never used CEBRA-time and always used velocities and directions as targets or labels. In our uploaded code like <Batch_M1_0510>, it is used for both CEBRA and NER (identical data preparation and inputs). The only difference is the "cebra" folder, which could be the original CEBRA downloaded on 2024-01-10 or our NER_0517. For pi-VAE, in our uploaded <Batch_piVAE_0518>, "dim_u = 3" indicates the labels. They are either location+0+1 for the rat's hippocampus data (01 for left and 10 for right) or X+Y+direction for the monkey's reaching data. "continuous_index" includes both continuous velocities and directions. Notice that directions also belong to continuous labels because their values (i.e., angles) can be ranked and pairwise distances can be calculated. We apologize for this confusion and will include this critical information in the revised paper.
**Q4** Velocity prediction aligns with direction tuning in PMd\ A model should not classify the movement direction when there is no hand movement. However, CEBRA achieves over 80% classification accuracy when there are no movements (Fig. 3b and 12a). Neurons in the PMd are tuned to reach direction (Chandrasekaran et al., 2017, *Nature Comm.*; Glaser et al., 2018, *Nature Comm.*). After the "go" cue, they increased their responses, reached a peak at 500 ms, and decreased (Jerjian et al., 2020, *Elife*). The X-Y velocity prediction accuracy should be aligned with the X-Y velocity and neural responses. This is exactly what we observed in NER and pi-VAE. Note that the pi-VAE paper observed the same X-Y velocity correlated prediction accuracy using the same datasets (Fig. 3d, e in pi-VAE). **Q5** CEBRA 16D for k-NN\ A comparison of NER 3D with CEBRA 16D is unfair to NER. We chose 16D for k-NN because this is the exact setting used in the CEBRA paper (Fig. 3h). CEBRA has >0.9 R² for simple rat left and right running tasks under 3D (Fig. 1e in CEBRA). However, it has a much lower R² in eight-direction hand movement tasks within 8D (Supplementary Fig. 8f in CEBRA). With the nonlinear k-NN decoder, the decoding accuracy of CEBRA 16D is no better than NER 3D for the same session data but is much worse than NER 3D for cross-session data (diagonal vs. non-diagonal; Fig. 4b, d, f, and 5b, c). We will explain this in the revised paper. **Limitations** our model fails\ Our model fails to reveal identifiable and interpretable 3D latent dynamics when there are more than 20 conditions of movement. However, it is far better than CEBRA. Under three pairs of straight-curve movements (six conditions), the 3D latent dynamics are well separated in NER but are collapsed in CEBRA (Fig. 7f vs. Fig. 16b). The difference is even stronger under three pairs of curve-curve movements (Fig. 7h vs. Fig. 16e). 
Again, both models have full access to movement labels (velocity and direction).\ **Limitations** include results\ Agree and will do that. --- Rebuttal 2: Title: A few questions Comment: Dear authors, Great work and thank you for thoroughly answering my questions. I am still going through your rebuttal, but I have two quick questions: where did you upload the code you used to fit CEBRA and Pi-VAE? I re-downloaded the zip attached to your submission but didn’t see it, I am probably just looking in the wrong place. Did you perform a hyperparameter search for CEBRA? I mention the hyperparameter search because it is thoroughly recommended by the original authors of that paper (for example, I think having a better time offset for the positive/negative sampling may improve the CEBRA results). Lastly do you know what’s causing the seemingly tanh-type of behavior for CEBRA in Fig 4B of your rebuttal PDF? Thank you in advance, I’ll keep going through your rebuttal in the meantime. --- Rebuttal 3: Title: Code links, hyperparameter settings, and velocity prediction Comment: Dear Reviewer VFtX, Thank you for reviewing our rebuttal. The code is not included in the uploaded zip file but was privately shared with the Area Chair (AC). On August 6, 2024, we provided the AC with three anonymous links: one for the training code before the rebuttal, one for the raw and intermediate data used before the rebuttal, and one for the training and figure generation code during the rebuttal. Per the AC’s suggestion, we used “Anyone with the link can view” sharing, which ensures that we do not know who accesses the files. We are prohibited from sharing these links directly with the reviewers but will consult with the AC to see if sharing can be facilitated. We employed the same hyperparameters for both our NER and CEBRA models when reporting comparison results. 
The only differences between the models are the batches used to calculate the loss and the loss function itself (please refer to our rebuttal to Reviewer f3GE for more details). Since both models utilize the same neural feature encoder, the choice of hyperparameters is likely to have a consistent effect. We conducted hyperparameter searches for both dimensionality reduction and decoding. The four key hyperparameters considered were the time offset, batch size, temperatures, and k in the k-NN decoder: 1. Time Offset: For Rebuttal Fig. 2 and Fig. 3 of the CEBRA paper (1 session of center-out reaching in S1), we used a bin size of 1ms (with raw data at 1ms) and the 'offset10-model,' as in the CEBRA paper. For Fig. 7 and 16 of the NER paper (1 session of straight and curve reaching in M1), we used a bin size of 5ms (with raw data at 1ms) and the 'offset1-model.' For all other figures in the NER paper (19 sessions of center-out reaching in M1, PMd, and S1) and Rebuttal Fig. 6, we used a bin size of 30ms (with raw data at 30ms) and the 'offset1-model.' 2. Batch Size: Following the CEBRA paper’s recommendation, we used the largest batch size our GPU (NVIDIA A5000, 24GB) could handle, which was 512—this is also the batch size used by CEBRA. 3. Temperature: During pilot testing, we evaluated three temperatures (1, 2, and 3). Higher temperatures compress the single-trial latent dynamics. We fixed the temperature at 1, consistent with CEBRA. 4. Number of Neighbors (k) in k-NN Decoder: We searched the number of neighbors ([2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]) using the same settings as CEBRA. The tanh-type behavior in CEBRA is observed when predicting velocities. The distribution of velocities is highly imbalanced, with many more instances of near-zero velocities. Class imbalance poses a significant challenge for classification models like CEBRA but is less problematic for regression models like NER (see our motivation and rebuttal to Reviewer f3GE for more details). 
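The k-NN decoding referenced in point 4 above can be sketched as a simple regression (an illustrative sketch only: the function name, the Euclidean metric, and mean-aggregation over neighbors are our assumptions; the actual CEBRA/NER decoding pipeline may differ in details):

```python
import numpy as np

def knn_decode(train_emb, train_y, test_emb, k=16):
    """Predict each test label as the mean label of the k nearest
    training embeddings under Euclidean distance."""
    preds = []
    for e in test_emb:
        dist = np.linalg.norm(train_emb - e, axis=1)
        nearest = np.argsort(dist)[:k]       # indices of k closest points
        preds.append(train_y[nearest].mean(axis=0))
    return np.array(preds)
```

The hyperparameter search over k ([2, 4, ..., 1024]) then amounts to rerunning this decoder with different neighbor counts and keeping the best held-out R².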
CEBRA performs relatively well, though still worse than NER, when continuous labels are uniformly distributed, such as in the positions of rats (Fig. 1 and 2 of the CEBRA paper) or video features (Fig. 4 and 5 of the CEBRA paper). In the shared link with the AC titled "code_during_rebuttal," the Jupyter Notebook "Demo_NER_Cebra_Lin_Reg_0801" includes the model training and figure plotting code used in Rebuttal Fig. 4B. Sincerely, The Authors --- Rebuttal Comment 3.1: Title: Time offset and regression Comment: Dear Authors, Thank you for the quick and thorough response. The reason I asked about hyperparameters is because intuitively decreasing the time_offset parameter in the CEBRA hybrid model could at least improve some of the issues you reported with CEBRA in terms of not capturing large spikes in the velocity. The reason I am bringing this up is because I wonder how much of the improvement comes from the loss function or can potentially be explained by some of the self-supervised parameters like time_offset being mismatched with the underlying behavioral measures. Just to make sure I understand this correctly: do you use both the position and velocity in the hybrid CEBRA model to mine positive and negative samples? I am trying to understand response point 2 in the rebuttal. Thanks! --- Rebuttal 4: Title: Time offset (1ms) vs. kernel width (50ms) Comment: Dear Reviewer VFtX, Thank you once again for carefully examining the details of our model comparison. If we understand your question correctly, you are asking whether decreasing the time offset would improve the performance of CEBRA by allowing it to capture larger spikes in velocity due to higher temporal resolution. The answer is no. The CEBRA paper used a 10ms time offset in both Fig. 3 and in their submitted results to the Neural Latents Benchmark (NLB). We tested both 1ms and 10ms time offsets and observed no significant differences in performance.
The main limiting factor is the width of the Gaussian kernel used for smoothing spike counts to obtain spike rates, which is 40ms or 50ms. As described in the CEBRA paper, they convolved the data with a Gaussian kernel with a standard deviation of 40ms for their Fig. 3, and 50ms for the NLB results ("We performed smoothing of input neural data using a Gaussian kernel with 50 ms s.d."). Therefore, regardless of the bin size (whether 1ms, 5ms, or 30ms) or time offset (1ms or 10ms), the firing rate for each neuron (or neural embedding) inherently has low temporal resolution. To increase temporal resolution, the Gaussian kernel width would need to be reduced. However, this would introduce more noise into the firing rate, negatively impacting overall performance. Regarding your other inquiry, yes, we used the hybrid CEBRA. For the rat’s left and right running, the target labels were 1-dimensional positions (which the hippocampus encodes) and two directions. For monkey center-out reaching, the target labels were 2-dimensional velocities (which M1/PMd/S1 encode) and eight directions. Sincerely, The Authors --- Rebuttal 5: Title: Last question regarding time offset Comment: Dear authors, Thank you again for the quick response and thorough explanation. I have one last question regarding this topic: do you have any specific intuition in the context of the spike kernel and time offset why your method performs this much better without having a higher temporal resolution for the spiking data? --- Rebuttal 6: Title: Do time offset and spike kernel contribute? Comment: Dear Reviewer VFtX, Thank you for your insightful question. Regarding the time offset: We have tested both 1ms and 10ms offsets in CEBRA and NER, but found no significant differences. Given that this time offset effect may be masked by the 50ms Gaussian kernel, we do not believe that time offset plays a critical role in this study. 
Regarding the spike kernel: A more fundamental question, beyond mentioning any specific model, is whether kernel width affects decoding performance. To address this, we ran the curved-reaching task from our paper using the public code provided by the NLB on Colab (mc_maze.ipynb). The dataset was MC_Maze-L, the bin size was 5ms, the method was smoothed spikes, and the decoder was Ridge regression. We tested six Gaussian spike-kernel widths, which yielded significantly different R² values: 0.28 @ 5ms, 0.40 @ 10ms, 0.52 @ 20ms, 0.62 @ 40ms, 0.61 @ 60ms, and 0.55 @ 100ms. The best performance was achieved with a 40ms kernel, consistent with the NLB paper (Table 1), and much better than with a 5ms kernel. This indicates that smaller spike kernel widths (i.e., higher temporal resolution of the firing rate) severely affect decoding performance, as we mentioned in the previous comment. After smoothing the downloaded data in MATLAB with a Gaussian kernel and exporting it to Python, all our models (NER, CEBRA, pi-VAE, UMAP with labels, UMAP without labels, dPCA, and PCA) were evaluated using the same fixed kernel width. Therefore, we do not believe that NER’s superior performance is attributable to the spike kernel used. In summary, it is highly unlikely that the time offset and spike kernel are responsible for the improved performance of NER compared to other models, particularly CEBRA. We greatly appreciate these questions, as they have provided valuable insights for us as well. Sincerely, The Authors --- Rebuttal Comment 6.1: Title: Last day of discussion Comment: Dear Reviewer VFtX, This is a kind reminder that the author-reviewer discussion will close today. Please let us know if you have any questions or need further clarification. Sincerely, The Authors --- Rebuttal 7: Title: Score update Comment: Dear authors, I have decided to change my score to an accept after our discussion. 
I do think you should improve the motivation and explanation of the method for the camera-ready submission, but I believe this to be feasible within the given timeframe. It is important, I believe, to really highlight the motivation for the RNC loss in the paper. It is clear from your rebuttal and explanations here, but as a reader it is not immediately clear why you have applied the RNC loss and why it would improve performance so much while reading the paper. All in all great job! --- Rebuttal Comment 7.1: Title: Thank you and camera-ready submission Comment: Dear Reviewer VFtX, Thank you very much for reviewing our paper and rebuttal, actively engaging in discussions with us, and supporting the publication of our work. It has been a great pleasure to work with you! In the camera-ready submission, which allows for *one additional page*, we will include the **motivation** as outlined in the global rebuttal and our rebuttal to R#2. We will also provide a detailed **explanation** of the RNC loss used in our method and its distinction from the loss used in CEBRA, referencing our global rebuttal, rebuttal to R#2, and the diagram in Rebuttal Fig. 7. Sincerely, The Authors
Rebuttal 1: Rebuttal: We appreciate the Area Chair for handling our manuscript and the constructive feedback from Reviewer VFtX, f3GE, and rR7L (R#1, R#2, and R#3). We were encouraged that all three reviewers recognized our paper's contribution (3/3/3). We hope our general rebuttal letter, along with the attached 7 Rebuttal Figures and three individual rebuttal letters, addresses their concerns about the soundness (2/4/2) and presentation (2/2/3) of our paper. 1. R#1 questioned whether we fairly benchmarked our model, NER, against CEBRA (Schneider, Lee, and Mathis, *Nature*, 2023) and pi-VAE (Zhou and Wei, *NeurIPS*, 2020), specifically regarding the potential limited access to target labels when training CEBRA and pi-VAE. **We addressed this concern in four independent ways:** 1. We evaluated 20 sessions in our submitted paper, whereas the pi-VAE paper tested only one session (PMd 20161006), which we included in our paper. This allowed for a direct comparison of the latent dynamics shown in the pi-VAE paper with NER. Our implementation of pi-VAE (Fig. 13) showed similar results to the pi-VAE paper (Fig. 3f-i). However, the latent dynamics revealed by NER are much clearer than those of pi-VAE in both single trial (Rebuttal Fig. 1) and trial-averaged (Fig. 5a) analyses. 2. We conducted an additional experiment comparing the latent dynamics generated by NER with those in the CEBRA paper, using the same datasets (Rebuttal Fig. 2). NER outperforms CEBRA in both 2D and 3D latent spaces. 3. We performed ablation studies by training CEBRA without continuous velocity labels (Rebuttal Fig. 3). The latent dynamics failed to represent movement trajectories using only direction labels. 4. We uploaded our training codes and data for verification. 2. R#1 asked why CEBRA generated an R² of -3. Negative R² values occurred only when using **linear** decoders to predict **velocities** (Fig. 4a). This is because velocity changes much faster than position (Rebuttal Fig. 
4a), and CEBRA fails to capture the large amplitudes of velocities that occur infrequently (Rebuttal Fig. 4b). As a result, large negative velocities are decoded as small positive velocities and vice versa (Rebuttal Fig. 4c). Consequently, the decoded velocities are smaller than the actual velocities, leading to decoded locations being smaller than the actual positions (Rebuttal Fig. 4d). 3. R#1 asked why generalization works. Animal and brain area generalization is a trending topic in the neuroscience field, as evidenced by recent studies in monkeys (Safaie et al, *Nature*, 2023) and mice (Schneider 2023; Ehret et al, *Nature Neuro.*, 2024). Generalization requires: 1. For animal generalization, the same species of animals need to perform similar behaviors. It will not work between mice vs monkeys (Safaie 2023), and straight vs curve-reaching (Fig. 7). 2. The examined brain areas have similar roles during behaviors. It will not work well between cortex vs striatum (Safaie 2023), and M1/PMd vs S1 (Fig. 6). 3. Latent dynamics instead of population neural dynamics. It fails when directly using the neural dynamics, as demonstrated by this brain area generalization study (Gallego et al, *Nature Neuro.*, 2020). 4. Latent dynamics are informative about behaviors. A low-performance dimensionality reduction method requires a high-dimensional latent space with a larger representational capacity. For example, 10D for Safaie 2023 and Ehret 2024, and 16D for Schneider 2023. 5. Latent dynamics are linearly aligned with each other. 6. Decoders trained on 80% of the latent dynamics from one session are used to decode behavior from the remaining 20% holdout latent dynamics in another session. Nonlinear decoders like kNN (Schneider 2023), SVM (Ehret 2024), and LSTM (Safaie 2023) are usually applied.\ **Together, 3D latent dynamics with a linear decoder is the most challenging setup for generalization and is the most straightforward way to benchmark dimensionality reduction methods. 
Only NER works (off-diagonal in Fig 4, 5).** 4. R#1 asked us to examine animal generalization (Rebuttal Fig. 6). **NER reveals almost identical 3D latent dynamics in a newly added Monkey M compared to the previous Monkey C**. Using the same dataset, NER performs much better than Safaie et al., 2023 regarding the aligned latent dynamics. Animal generalization only fails in one session (140218). Interestingly, brain hemisphere generalization (Fig. 4a) is stronger than animal generalization (0.64 vs 0.43) but weaker than brain area generalization within the same hemisphere (0.64 vs 0.74, Fig. 5b). 5. R#2, R#3, and R#1 asked us to explain the difference between CEBRA and NER, and our motivation (Rebuttal Fig. 7). In brief, CEBRA considers each embedding in a batch (say 3) as a discrete class. For an anchor, it contrasts with its augmented embedding as a positive pair and 3 randomly sampled embeddings as negative pairs. NER ranks 6 embeddings according to their continuous labels. Then it contrasts an anchor with its augmented or 1st embedding as a positive pair and the remaining 4 embeddings as negative pairs. NER does not stop here; it continues by contrasting the 2nd embedding as a positive pair and the remaining 3 embeddings as negative pairs. This process continues until all the embeddings have been positively contrasted with the anchor. *NER learns a regression-aware representation that orders all embeddings in a batch.*\ **We are motivated by the fact that CEBRA treats continuous labels as many discrete classes, which cannot be well separated in low-dimensional space. These classes are also highly imbalanced, with many more near-zero classes (Rebuttal Fig. 4a, b). NER aims to solve the high dimensionality and class imbalance issues present in CEBRA.** 6. R#3 asked us to provide statistical analysis. In brief, the latent dynamics revealed by NER are **significantly more consistent** than those by CEBRA for M1 (t=7.9, p=5.5e-10) and PMd (t=14.8, p=1.1e-18). 
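The ranking contrast described in point 5 can be sketched in a few lines. The following is a hypothetical numpy illustration of the general Rank-N-Contrast idea (rank the rest of the batch by label distance to an anchor, then treat each sample in turn as the positive against everything at least as far away), not the authors' actual implementation; the function name, the temperature, and the cosine similarity are all assumptions.

```python
import numpy as np

def rnc_style_loss(z, y, temp=0.1):
    """Ranking contrast over a batch (hypothetical sketch).

    z: (n, d) embeddings; y: (n,) continuous labels.
    Each anchor is contrasted against every other sample in order of
    label distance, so the whole batch ends up ordered, not just one
    positive pair per anchor.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temp
    n, loss, terms = len(y), 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        # rank the other samples by distance to the anchor's continuous label
        others.sort(key=lambda j: abs(y[j] - y[i]))
        for k, pos in enumerate(others):
            # negatives: all samples at least as far from the anchor in label space
            neg = others[k:]
            logits = np.array([sim[i, j] for j in neg])
            loss += -(sim[i, pos] - np.logaddexp.reduce(logits))
            terms += 1
    return loss / terms
```

Because only label *distances* enter the ranking, shifting all labels by a constant leaves the loss unchanged, which matches the intuition that the loss orders embeddings rather than classifying them.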
Pdf: /pdf/c440523854eaab518f4aa805e9b03c1697d7b914.pdf
NeurIPS_2024_submissions_huggingface
2024
HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting
Accept (poster)
Summary: The manuscript presents a novel approach to High Dynamic Range (HDR) novel view synthesis (NVS) by proposing a new framework called High Dynamic Range Gaussian Splatting (HDR-GS). The proposed HDR-GS framework addresses the limitations of existing HDR NVS methods, which are primarily based on Neural Radiance Fields (NeRF) and suffer from long training times and slow inference speeds. Strengths: Comprehensive experiments demonstrate that HDR-GS outperforms state-of-the-art NeRF-based methods while achieving 1000× faster inference speed and requiring only 6.3% of the training time. The methodology is straightforward and well-organized, and the experimental results are impressive. The paper is logically structured and easy to follow. Weaknesses: While the methodology is straightforward and the experimental results are strong, the authors could further highlight their innovations and provide detailed explanations of their model design and theoretical underpinnings. This would enhance the manuscript's persuasiveness. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) The authors' method is simple and effective, and the significant speed improvement due to 3D Gaussian Splatting (3D-GS) is not surprising. However, it would be beneficial if the authors could provide an analysis of what specifically leads to the substantial performance difference between HDR-GS and HDR-NeRF. 2) The authors chose to model HDR color first using the initialized DDR Gaussian point cloud rather than LDR color. It would be helpful if the authors could elaborate on the rationale behind this choice and how it positively impacts the generation of HDR and LDR results. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Section 3.2 could be omitted or condensed. The mathematical explanation of 3D-GS rendering and the snowballing process does not seem to be a primary innovation of this paper. 
Additionally, the emphasis on parallel HDR and LDR rendering and rasterization is somewhat unclear. The authors should clarify why parallel rendering is necessary for their approach, as it seems that similar results could be achieved without parallel rendering. Specifically, the necessity of parallel rendering for the subsequent loss calculation involving $I_l$ and $I_h$ should be justified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; ## Response to Reviewer VwS8 &nbsp; `Q-1:` Highlight of innovations and explanations of model design and theoretical underpinnings `A-1:` We propose the first 3DGS-based framework, with a 1.91 dB improvement and 1000x inference speed, for HDR novel view synthesis (NVS). Firstly, we find that directly applying the original 3DGS to HDR NVS does not work well. In Table 3 (a), 3DGS only yields 12.35/11.83 dB on LDR-OE/LDR-NE and cannot render HDR images. To improve the performance, we make innovations in both algorithm and data. (i) Algorithm. The original 3DGS is designed for LDR imaging. If we directly enforce an HDR constraint on the output of 3DGS as in Eq.(16), 3DGS only yields a suboptimal result of 30.15 dB on HDR, as shown in the following table. This is because the broader range of brightness and the high-frequency details in HDR images are difficult to model with the limited-order spherical harmonics in the original Gaussian point clouds. Plus, 3DGS cannot change the light intensity of the rendered LDR images, thus yielding poor results on LDR.

|Method|LDR-NE|LDR-OE|HDR|
|:-|:-:|:-:|:-:|
|3DGS + HDR supervision|15.12|13.47|30.15|
|HDR-GS|41.10|36.33|38.31|

To address these problems, we design the DDR Gaussian point cloud model based on tone-mapping theory [16]. Our DDR Gaussian model in Eq.(1) contains more attributes required for HDR NVS, such as HDR color, exposure time, etc. We perform a global tone-mapping operation on all Gaussian point clouds at one time to extract more contextual color information and accelerate the inference speed. Then, we develop the parallel differentiable rasterization to render the LDR and HDR images. As shown in Table 3 (a) of the paper and the table above, our innovations in the Gaussian point cloud model lead to significant improvements of 21.64, 17.36, and 8.16 dB on LDR-OE, LDR-NE, and HDR. (ii) Data.
As analyzed in Lines 174 - 181, the normalized device coordinate (NDC) system restricts the representing ability and spatial transformations of 3D Gaussians. Besides, the datasets collected by HDR-NeRF do not provide point clouds for initialization. To address these problems, we recalibrate the camera parameters and compute the initial point clouds as Eq. (14). In Table 3 (a), the data recalibration and point cloud initialization lead to 2.27/2.58 and 4.84/4.56 dB improvements on LDR-OE/LDR-NE, while alleviating the over-fitting issue of 3DGS. &nbsp; `Q-2:` Analysis of the performance difference between HDR-GS and HDR-NeRF `A-2:` We are not sure if the "performance difference" you mention refers to speed or image quality. Thus, we analyze both aspects. Firstly, two techniques lead to the speed advantage of HDR-GS. (i) Rendering Scheme. As analyzed in Lines 40 - 45, HDR-NeRF suffers from a time-consuming rendering scheme for LDR and HDR images. It needs to sample many 3D points and then compute their densities and colors for every single ray, severely slowing down the training and inference processes. In contrast, HDR-GS adopts the parallel differentiable rasterization for LDR and HDR rendering. As described in section 3.2, the rasterization computes different tiles divided from the 2D projection in high parallelism on GPU, thus enjoying much faster inference speed. (ii) Tone-mapping Method. HDR-NeRF conducts tone-mapping in a time-consuming ray-tracing manner. It needs to convert HDR to LDR for all sampled points along every single ray. It can only exploit the limited color information along a single ray for the tone-mapping transformation and requires repeated computation many times. In contrast, HDR-GS performs a global tone-mapping operation that converts the HDR colors of all Gaussian point clouds at one time. This operation can extract more contextual color information for HDR-to-LDR transformation and accelerate the inference speed. 
The following table compares the effects of the two tone-mapping methods on HDR-GS. Our global tone-mapping method is both better and faster than the ray-tracing tone-mapping of HDR-NeRF.

|Tone-mapping|Train Time (min)|Infer Speed (fps)|LDR-NE|LDR-OE|HDR|
|:-|:-:|:-:|:-:|:-:|:-:|
|Ray Tracing (HDR-NeRF)|58|78|39.51|34.68|36.12|
|Global Infer (Ours)|34|126|41.10|36.33|38.31|

Secondly, although our work is based on 3DGS, directly using 3DGS for HDR NVS yields poor image quality. To surpass HDR-NeRF in quality, we make innovations. Please refer to `A-1` for details. &nbsp; `Q-3:` Why model the HDR color first? `A-3:` This is because HDR images capture a broader range of illuminance levels, retaining the details in dark and bright regions. HDR images contain the information stored in LDR images. Thus, the HDR-to-LDR transformation is an easy information compression process. In contrast, the transformation from LDR to HDR is an ill-posed problem because it needs to reconstruct the missing brightness and details that are not captured in LDR images. This process usually involves more complex computations. We run experiments comparing the HDR-to-LDR and LDR-to-HDR transformations in the following table. HDR-to-LDR performs better.

|Transformation|LDR-NE|LDR-OE|HDR|
|:-|:-:|:-:|:-:|
|LDR-to-HDR|37.65|34.79|29.54|
|HDR-to-LDR|41.10|36.33|38.31|

&nbsp; `Q-4:` Condensing section 3.2 `A-4:` We will follow your advice to condense this section. &nbsp; `Q-5:` Questions about the parallel differentiable rasterization `A-5:` In fact, parallel differentiable rasterization does not mean parallel computing for HDR and LDR in one rasterization. We perform HDR rasterization and LDR rasterization separately. The word "parallel" here has two meanings: (i) The HDR and LDR rasterization processes query the same attributes and spatial transformations of the same 3D Gaussians in Eq. (10) - (13), except for the color.
(ii) The tiles divided from the 2D projection are computed in parallel on GPU during the rasterization to accelerate the speed; see Lines 165 - 169 and Eq. (13). We will add an explanation in the revision. --- Rebuttal Comment 1.1: Comment: I have read all the reviews and the authors' responses. Most of my concerns have been addressed and I am inclined to keep my previous rating. --- Reply to Comment 1.1.1: Comment: Thanks for keeping your positive view of our paper. We really appreciate it.
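The "global tone-mapping at one time" contrast drawn in `A-2` above can be illustrated with a toy sketch. The MLP below is a randomly initialized stand-in, not the paper's learned tone-mapper; the point is only that a single batched call maps the HDR colors of every Gaussian to LDR at once, instead of re-evaluating the mapper for the points sampled along each ray.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned tone-mapper: one hidden layer mapping
# log(HDR color * exposure time) to an LDR color in [0, 1].
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def tone_map(x):
    h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> LDR in [0, 1]

hdr_colors = rng.uniform(0.01, 100.0, size=(10_000, 3))  # one row per Gaussian
exposure = 0.5

# Global tone-mapping: one vectorized call over every Gaussian at once,
# rather than a per-ray loop over sampled points as in ray-tracing tone-mapping.
ldr_colors = tone_map(np.log(hdr_colors * exposure))
```

The same batched structure is what lets the rasterizer reuse one tone-mapping pass for all tiles, which is consistent with the speed gap reported in the table above.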
Summary: This paper proposes a 3D Gaussian Splatting-based method, HDR-GS, for high dynamic range novel view synthesis. To efficiently perform this task, a new Dual Dynamic Range Gaussian point cloud model is presented (in Section 3.1). This point cloud model has more attributes including the HDR color, exposure time, and a globally shared neural network functioning as a tone-mapper. Then in Section 3.2, two Parallel Differentiable Rasterization processes are designed to render the low and high dynamic range colors. Besides, in Section 3.3, the authors recalibrate the camera parameters for the real and synthetic multi-view HDR datasets to make the scene-level data suitable for 3D Gaussian Splatting-based algorithms. Strengths: + It is a good attempt to design the first 3DGS-based framework for the task of high dynamic range novel view synthesis. The core idea of assigning more attributes to the Gaussian point cloud model and rendering the high dynamic range and low dynamic range views in the parallel rasterization is novel and cool, which makes the proposed 3D Gaussian model multifunctional and gives it great practical value in photography, film making, etc. + The performance is superior. The running speed of the proposed HDR-GS is more than a thousand times that of the state-of-the-art NeRF-based method, HDR-NeRF. As compared in Table 1 of the main paper, previous NeRF-based methods suffer from slow inference speed (< 0.2 fps). The proposed HDR-GS can not only infer at a much faster speed of 126 fps (>> 30 fps) but also surpass the SOTA method by large margins. These advantages enable HDR-GS to capture and measure dynamic scenes, e.g., from a camera on a robot, in real time. + The writing style is clear and easy to follow. I notice that the authors did not follow the mainstream way of introducing the 3D Gaussian Splatting background, which I usually find very confusing. Instead, the authors adopted the strategy of summarizing first and then dividing.
They first introduced what attributes the Gaussian point cloud model contains, and then gradually introduced its working pipeline. This introduction order can give the readers an overall knowledge of what the Gaussian model is (its attributes) and thus helps the readers better understand the method. + The data re-calibration is a bonus. In my opinion, the biggest obstacle to researching the topic of high dynamic range novel view synthesis is the data issue. Although the multi-view synthetic and real datasets are collected, the normalized device coordinates and lack of SfM points for initialization cannot make the 3DGS work because of the severe blur and overfitting problems, especially for the unbounded scene-level reconstruction. How to re-calibrate the data is critical and executing the SfM algorithm is time-consuming. Weaknesses: There are some minor issues: - In Line 124 – 131, the authors analyzed the advantages of using log tone-mapper than linear one from the point of view of training stability. Yet, my understanding is that the option of taking the logarithm can shorten the gaps between the training data samples, which makes the originally discrete data samples become more continuous. The processed data samples are easier to fit by neural networks. The exposure time in table 3d is an example. So, adding this analysis can help better explain the motivation of taking the logarithm instead of directly using the linear form. - In Eq (14), the authors just used part of images of a scene to re-calibrate the camera poses without explanation. My concern is why not use all of the images instead of just using the views under the same exposure time? Did you try that? It is interesting to know and analyze this result. - It would be better to add some legend or annotation to the teaser figure like the unit of the numerical results, higher is better or lower is better. 
This is because for some results higher is better, e.g., PSNR, while for others lower is better, such as training time. - More details of the experimental setup in section 4.1 could be provided to make the implementation clearer. For instance, the authors used LPIPS as one of the metrics but did not specify which perceptual network is adopted, even though this choice may drastically affect the LPIPS score. Technical Quality: 3 Clarity: 4 Questions for Authors: I have two questions: a) In the paper of HDR-NeRF, the training and testing sets of real scenes are completely separate and have no intersection according to the implementation details. However, in the official github repository of HDR-NeRF, the training and testing sets for real experiments overlap, which is very confusing to me. So I want to figure out: in your real experiments, did you separate the training and testing sets, or just follow the official code of HDR-NeRF? I think this is important. b) I want to know the training stability of the proposed method, since I found HDR-NeRF easily collapses and needs to be trained multiple times to make it work on some scenes. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the authors have analyzed the limitations and broader impact of the method in sections 4 and 5 of the supplementary pdf. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
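The reviewer's point above, that taking the logarithm shortens the gaps between training samples such as exposure times, can be seen numerically. The exposure ladder below is hypothetical (2-stop steps), not the exact values in Table 3d:

```python
import numpy as np

# Hypothetical geometric exposure ladder (2-stop steps), not the paper's exact values.
t = np.array([0.125, 0.5, 2.0, 8.0, 32.0])

print(np.diff(t))           # gaps grow geometrically: 0.375, 1.5, 6.0, 24.0
print(np.diff(np.log2(t)))  # uniform after the log: 2.0 between every pair
```

After the log, the once widely scattered samples are evenly spaced, which is the kind of input a small neural tone-mapper fits more easily.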
Rebuttal 1: Rebuttal: &nbsp; ## Response to Reviewer tLXM &nbsp; `Q-1:` Analysis of logarithmic tone-mapper vs. linear tone-mapper `A-1:` Thanks for providing a mathematical explanation of the logarithmic tone-mapping operation. We will add this analysis in the revision with acknowledgment. &nbsp; `Q-2:` Why use images of different views under the same exposure to recalibrate the camera parameters, instead of using all images? `A-2:` The recalibration is done by the Structure-from-Motion (SfM) algorithm. SfM needs to detect and then match feature keypoints between different views. Thus, if all images are used for calibration, the feature detection and matching might be less accurate due to the change of light intensity at the same position. As a result, the calibrated camera poses may also be inaccurate. When we train HDR-GS on the synthetic datasets with camera parameters calibrated from all images, the model achieves results of only 34.21, 36.19, and 33.75 dB on HDR, LDR-OE, and LDR-NE. Compared to Table 3 (d), these results are 2.67, 3.52, and 1.53 dB lower than the lowest results of single-exposure recalibration with $t_1$ = 0.125 $s$ on HDR, LDR-OE, and LDR-NE. &nbsp; `Q-3:` Add some legend or annotation to the teaser figure `A-3:` Thanks for reminding us. In Figure 1, we report five metrics of HDR novel view synthesis: PSNR in dB, inference speed in fps (frames per second), training time in minutes, SSIM, and LPIPS score. In particular, lower values are better for LPIPS and training time, while higher values are better for the other metrics. We will add a legend to explain the metrics. &nbsp; `Q-4:` Which perceptual network is used for the LPIPS score? `A-4:` Following the same test settings as HDR-NeRF for fair comparison, we use AlexNet [77] as the perceptual network to compute the LPIPS score. We will add more experimental details in Section 4.1. [77] ImageNet Classification with Deep Convolutional Neural Networks.
In NIPS 2012. &nbsp; `Q-5:` Are the training sets and testing sets of real scenes completely separate? `A-5:` Yes, of course. You have a good observation. We also found this mistake in the official repo of HDR-NeRF. We rewrote the data splitting code to make sure there is no overlap between the training and testing sets of real scenes. Please check the submitted code, which is consistent with the description of the implementation details in our main paper. &nbsp; `Q-6:` The training stability of our HDR-GS `A-6:` We did not experience model collapse during the training process. Our models were trained once and succeeded. We conducted five repeated experiments on the synthetic datasets. The PSNR results are shown in the following table. The performance fluctuation is within 0.21, 0.16, and 0.09 dB on LDR-OE, LDR-NE, and HDR. These results suggest the robustness and training stability of our HDR-GS.

| Experiment | 1 | 2 | 3 | 4 | 5 | Avg |
|:-------------|:--:|:--:|:--:|:--:|:--:|:--:|
| HDR | 38.31 | 38.23 | 38.34 | 38.22 | 38.29 | 38.28 |
| LDR-OE | 41.10 | 40.95 | 41.13 | 40.89 | 41.06 | 41.03 |
| LDR-NE | 36.33 | 36.17 | 36.39 | 36.15 | 36.31 | 36.27 |
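Since nearly every comparison in this thread is reported as PSNR in dB, here is a minimal reference implementation of the standard formula (not code from the paper):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images normalized to [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For instance, a uniform per-pixel error of 0.01 on a [0, 1] image gives an MSE of 1e-4 and hence a PSNR of 40 dB, which puts the ~36-41 dB LDR numbers in the tables above in context.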
Summary: This paper introduces HDR-GS, a framework designed for efficient rendering of high dynamic range (HDR) novel views. HDR-GS leverages a Dual Dynamic Range (DDR) Gaussian point cloud model that utilizes spherical harmonics for HDR color fitting and an MLP-based tone-mapper for low dynamic range (LDR) color rendering. Given an exposure time, HDR-GS can reconstruct the corresponding LDR image, achieving a form of controllable tone-mapping. The method demonstrates significant improvements over state-of-the-art NeRF-based methods in terms of both speed and image quality. Strengths: HDR-GS achieves 1000x faster inference speed compared to HDR-NeRF. Its training is also efficient. The results show significant improvements in image quality. The framework is sound. The authors provide a detailed derivation process, demonstrating the motivation and rationale behind the framework design. The tone mapper design enables controllable exposure time for reconstructing the LDR image. Weaknesses: This method requires taking photos with different exposure settings at each camera position and additional HDR image data in $L_c$ to calculate the loss function. These photos and data are relatively difficult to obtain in practice. The entire pipeline is quite similar to HDR-NeRF, including the tone mapper MLP. Essentially, the authors replace the NeRF MLP with Gaussian splatting. To adapt HDR-NeRF to Gaussian splatting, the authors propose several key modifications: 1) camera recalibration and point cloud generation; 2) a constant bias $b$ in Equation 8; and 3) using $L_c$ instead of a unit exposure loss $L_u$. However, these modifications are minor and contribute only slightly. Moreover, the poor performance of the baseline model in Table 3a indicates that LDR supervision alone, without GT HDR images, is insufficient for reconstructing the HDR point cloud. This somewhat weakens the novelty. There is no ablation study for the constant bias $b$ in Equation 8.
Technical Quality: 4 Clarity: 4 Questions for Authors: My main concern is about the novelty issue mentioned in the Weaknesses section. Is it possible to apply HDR-GS to data where each viewpoint has a different exposure time? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: The authors may discuss the difficulty of data acquisition in real-world settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; ## Response to Reviewer E3vH &nbsp; `Q-1:` Questions about the training data `A-1:` (i) Actually, our method only requires an LDR image with a single exposure time $\in$ {$t_1, t_3, t_5$} at each view to train in Eq.(15). (ii) **As claimed in Lines 198 - 200, we do not use HDR images for supervision $\mathcal{L}_c$ in real scenes, $\gamma = 0$ in Eq.(17).** We only use HDR supervision for synthetic scenes to align the HDR output with the Blender rendering style. This is because the transformation from LDR to HDR is not unique. It is affected by camera parameters, rendering software settings, etc. In our work, synthetic HDR images are synthesized by the software Blender with specific settings, and the evaluation metrics for HDR images rely on hard pixel-wise alignment. Thus, it is reasonable to enforce a constraint on the rendered HDR images. To this end, HDR-NeRF uses the GT camera response function (CRF) correction coefficient $C_{0}$ to rectify the HDR output. But $C_{0}$ is a strong scene-specific prior and not available in real practice, because the CRF is the target to model, not a given condition. Hence, we use tone-mapped HDR images for supervision instead. For fair comparison, we conduct experiments without using HDR restrictions in the following table. HDR-GS is 14.91 dB higher than HDR-NeRF on HDR.

|Method|HDR-NeRF|HDR-GS|
|:-|:-:|:-:|
|HDR|13.35|28.26|
|LDR-OE|36.48|40.82|
|LDR-NE|34.47|36.01|

In Figure 6, our method renders more visually pleasant HDR images without HDR supervision than HDR-NeRF in real scenes. &nbsp; `Q-2:` Comparison of HDR-GS and HDR-NeRF `A-2:` Our HDR-GS differs from HDR-NeRF in (i) Motivation. HDR-NeRF aims to learn neural HDR radiance fields. It is an implicit 3D representation that lacks geometric information. In contrast, HDR-GS aims to reconstruct 3D HDR Gaussian point clouds that capture the scene geometry. It is an explicit 3D representation with better controllability and interactivity. (ii) Technique.
(1) In Lines 40 - 45, HDR-NeRF suffers from a time-consuming rendering scheme. It samples many points to compute densities and colors for each ray. In contrast, HDR-GS adopts rasterization to render different tiles in high parallelism on GPU, enjoying much faster speed. (2) The tone-mapping operation of HDR-GS is also fundamentally different from that of HDR-NeRF. HDR-NeRF conducts tone-mapping in a ray-tracing manner. It converts HDR to LDR at all sampled points for every single ray. This ray-tracing tone-mapping only extracts the color information along a single ray and further slows down the inference speed. In contrast, HDR-GS performs a global tone-mapping operation that converts the HDR colors of all 3D Gaussians to LDR colors at one time. This operation captures more contextual color information for the HDR-to-LDR transformation and accelerates the inference speed. The following table shows the results of using the two tone-mapping methods on HDR-GS. Our global tone-mapping is better and faster.

|Tone-mapping|Train Time (min)|Infer Speed (fps)|LDR-NE|LDR-OE|HDR|
|:-|:-:|:-:|:-:|:-:|:-:|
|Ray Tracing (HDR-NeRF)|58|78|39.51|34.68|36.12|
|Global Infer (Ours)|34|126|41.10|36.33|38.31|

(iii) Data. HDR-NeRF uses the normalized device coordinate (NDC) system. But, as analyzed in Lines 174 - 181, the NDC system restricts the representing ability and transformations of 3D Gaussians. Plus, the datasets collected by HDR-NeRF do not provide point clouds for initialization. To address these problems, we recalibrate the camera parameters and compute the initial point clouds as Eq.(14). See Table 3 (a): our data recalibration and point cloud initialization lead to 2.27/2.58 and 4.84/4.56 dB improvements on LDR-OE/LDR-NE, while alleviating the over-fitting issue. (iv) Performance. HDR-GS shows significant advantages over HDR-NeRF.
HDR-GS surpasses HDR-NeRF by 1.91 dB on HDR in synthetic scenes (Table 1) and 3.84 dB on LDR in real scenes (Table 2), while enjoying 1000x inference speed. In Figure 6, HDR-GS reconstructs clearer HDR details and brightness. &nbsp; `Q-3:` Performance of 3DGS trained with GT HDR images `A-3:` We conduct experiments to compare HDR-GS with 3DGS directly trained with GT HDR images in the following table. HDR-GS outperforms 3DGS by 25.98, 22.86, and 8.16 dB on LDR-NE, LDR-OE, and HDR.

|Method|LDR-NE|LDR-OE|HDR|
|:-|:-:|:-:|:-:|
|3DGS|15.12|13.47|30.15|
|HDR-GS|41.10|36.33|38.31|

We analyze these results: (i) The suboptimal HDR result of 3DGS stems from the fact that the high-frequency details and broader range of brightness in HDR images are hard to capture by the spherical harmonics (SH) with limited order in the original Gaussian point clouds. (ii) The LDR results of 3DGS are poor because the original 3DGS cannot control the light intensity according to the exposure time. (iii) Besides, training 3DGS with HDR images is hard in real scenes where HDR images are difficult to obtain. In contrast, HDR-GS only requires LDR images for supervision in practice. &nbsp; `Q-4:` Ablation of $b$ in Eq.(8) `A-4:` We follow your advice to do an ablation of $b$ in the following table.

|Method|LDR-NE|LDR-OE|HDR|
|:-|:-|:-|:-|
|w/o $b$|40.72|36.08|38.05|
|with $b$|41.10|36.33|38.31|

&nbsp; `Q-5:` Can HDR-GS be applied to data where each view has a different exposure time? `A-5:` Yes. HDR-GS only requires an LDR image with a single exposure time at each view to train. We run experiments with training views of different exposure times in the following table. HDR-GS performs better and faster.

|Method|Train Time (min)|Infer Speed (fps)|LDR-NE|LDR-OE|HDR|
|:-|:-:|:-:|:-:|:-:|:-:|
|HDR-NeRF|551|0.12|37.94|36.21|35.83|
|HDR-GS|35|123|39.76|37.40|38.05|

&nbsp; `Q-6:` The difficulty of data acquisition in the real world `A-6:` The real-world data is easy to obtain.
In real scenes, HDR-GS only requires a single LDR image at each view to train and no HDR images are required. The exposure time can be easily set in the camera and read from the EXIF files. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response; it addressed my concerns regarding HDR supervised training. As for the comparison with HDR-NeRF, I believe most of the differences arise from the necessary adjustments when transitioning between the representation and rendering pipeline of NeRF and 3DGS. These adjustments seem to be natural solutions and do not represent significant novelty. The improvements in rendering quality and speed of HDR-GS also stem from the inherent characteristics of 3DGS. Could the authors point out any technical modifications that are not simply required for adapting the NeRF representation from HDR-NeRF to 3DGS, but that also enhance performance? Or could they explain why some adjustments made during the transition, which could have been done more simply, are superior in their current form? In any case, I appreciate the effort the authors put into this work, which has improved the performance of the results. Considering the rebuttal that has addressed some of my concerns, I am willing to slightly raise the review score. --- Rebuttal 2: Title: Discussion with Reviewer E3vH Comment: Thanks for your reply. We appreciate your recognition of our work and performance. &nbsp; Actually, our HDR-GS is not simply adjusting HDR-NeRF to 3DGS. Here are some comparisons in technical details: &nbsp; (i) Tone-mapping Operation. The tone-mapping operation of HDR-NeRF converts the HDR color of the sampled 3D points to LDR color following volume rendering along every single ray. Specifically, the HDR volume rendering is formulated as $\mathbf{I}^h(\mathbf{r}) = \sum_{i=1}^{N} T_i (1 - \exp(-\rho_i \delta_i)) \mathbf{c}^h_i$, where $\rho_i$ is the volume density at the $i$-th sampled point. 
$\mathbf{c}_i^h$ is the HDR color at the $i$-th point. $\mathbf{I}^h(\mathbf{r})$ is the HDR color of the pixel where ray $\mathbf{r}$ lands. The tone-mapping operation of HDR-NeRF converts $\mathbf{c}^h_i$ to the LDR color $\mathbf{c}^l_i$. 3DGS also adopts a similar point-based rendering in rasterization. So directly adapting HDR-NeRF to 3DGS should perform the tone-mapping operation following the HDR rasterization. Specifically, in Eq.(13), the point-based rendering in HDR rasterization is $\mathbf{I}^{h}(p) = \sum_{j \in \mathcal{N}} \mathbf{c}_j^h \sigma_j \prod _{k=1}^{j-1}(1-\sigma_k)$. The naive adaptation should perform tone-mapping to convert $\mathbf{c}_j^h$ to $\mathbf{c}_j^l$ along the ray landing on the pixel $p$. However, as compared in the following table, which is copied from the table in `A-2` of our rebuttal for your convenience, we found this adaptation only yields suboptimal results and speed because it can only extract limited color information on a single ray for the HDR-to-LDR transformation and needs to be computed many times.

|Tone-mapping|Train Time (min)|Infer Speed (fps)|LDR-NE|LDR-OE|HDR|
|:-|:-:|:-:|:-|:-|:-|
|Ray Tracing (HDR-NeRF)|58|78|39.51|34.68|36.12|
|Global Infer (Ours)|34|126|41.10|36.33|38.31|

To address these issues, we design the Dual Dynamic Range (DDR) Gaussian point cloud model that performs a global tone-mapping operation, converting the HDR colors of all Gaussian point clouds to LDR at one time, to extract more contextual color information and accelerate the inference speed. As shown in the table above, our global tone-mapping leads to improvements of 1.59/1.65/2.19 dB on LDR-NE/LDR-OE/HDR and 48 fps in speed. &nbsp; (ii) Data Recalibration. As analyzed in Lines 174 - 181, HDR-NeRF adopts the normalized device coordinate (NDC) system that rescales the coordinates to the unit cube [-1, 1]$^3$ to help stabilize the training.
Plus, the data collected by HDR-NeRF does not provide point clouds for the initialization of Gaussian point clouds. The naive and straightforward adaptation is to randomly initialize the positions of Gaussian point clouds within the cube [-1, 1]$^3$ and use the NDC with 2D projections to optimize. However, when we tried this naive adaptation, HDR-GS achieved only poor results of 24.45, 25.31, and 23.08 dB on HDR, LDR-OE, and LDR-NE for two reasons. Firstly, the NDC restricts the representing ability and spatial transformation of Gaussian point clouds. Secondly, training the randomly initialized Gaussian point clouds with few views (only 18) leads to an overfitting issue. To address these problems, we use the SfM algorithm to recalibrate the camera parameters and compute the initial point clouds. A naive method is to use all images to recalibrate, as mentioned in `Q-2` of our response to reviewer `tLXM`. However, this naive recalibration achieves suboptimal results of only 34.21, 36.19, and 33.75 dB on HDR, LDR-OE, and LDR-NE because the light intensity change degrades the accuracy of feature keypoint detection and matching. Hence, we use the LDR images with the same exposure time to recalibrate in Eq.(14), leading to improvements of 4.10, 4.91, and 2.58 dB on HDR, LDR-OE, and LDR-NE. &nbsp; (iii) HDR supervision. HDR-NeRF uses the ground truth CRF correction coefficient $C_0$ for HDR supervision. The naive adaptation should also use $C_0$. Yet, we found that training with $C_0$ correction is unstable and the model easily collapses. Besides, $C_0$ is unavailable in real scenes where HDR-NeRF naively sets $C_0 = 0.5$. Yet, this inaccurate $C_0$ may cause color distortion or introduce black spots, as shown in Figure 6. Thus, we use Eq.(16) as the HDR supervision for quantitative evaluation to stabilize the training process. When HDR images are not available in practice, we do not use inaccurate $C_0$ to avoid degradation.
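The per-ray vs. global tone-mapping contrast in point (i) can be sketched in a few lines of NumPy. This is a toy illustration, not the actual implementation: `tone_map` below is a hypothetical stand-in for the learned tone mapper of the DDR model, and the compositing follows the point-based rendering of Eq.(13).

```python
import numpy as np

def tone_map(hdr_colors, delta_t):
    # Hypothetical stand-in for the learned DDR tone mapper: squashes
    # HDR radiance (scaled by exposure time delta_t) into [0, 1].
    return hdr_colors * delta_t / (1.0 + hdr_colors * delta_t)

def rasterize_pixel(colors, opacities):
    # Point-based alpha compositing as in Eq.(13):
    # I(p) = sum_j c_j * sigma_j * prod_{k<j} (1 - sigma_k)
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - opacities[:-1])))
    weights = opacities * transmittance
    return weights @ colors

# Naive per-ray adaptation would tone-map the points hit by each ray,
# once per pixel. The global (DDR-style) variant tone-maps ALL Gaussian
# colors in one pass, then rasterizes every pixel from the LDR colors.
hdr = np.array([[2.0, 0.5, 1.0], [4.0, 1.0, 0.25]])  # HDR colors of 2 Gaussians
ldr_all = tone_map(hdr, delta_t=1.0)                  # one global pass
pixel = rasterize_pixel(ldr_all, opacities=np.array([0.5, 0.5]))
```

The global pass touches each Gaussian once regardless of how many rays hit it, which is the source of the speed-up reported in the table above.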
&nbsp; Besides, we resolved the data splitting issue on the real datasets, as mentioned in `Q-5` of our response to reviewer `tLXM`. &nbsp; 3DGS is a great work. Yet, directly applying 3DGS or naively adapting HDR-NeRF to 3DGS does not work well. Our work, as the first attempt, proposes an effective method to explore the potential of 3DGS for HDR imaging. We are glad to share our code, model, and recalibrated data with the community. &nbsp; Feel free to ask us if you have other questions. Looking forward to your reply. --- Rebuttal Comment 2.1: Title: Discussion with Reviewer E3vH Comment: &nbsp; Dear reviewer `E3vH`, &nbsp; Thanks for your time and valuable comments. We appreciate your recognition of our work and effort. Could you please let us know if our response addressed your concerns about the comparison between the simple adaptation of HDR-NeRF for 3DGS and our proposed HDR-GS? We sincerely appreciate your willingness to raise the score. Just a friendly reminder that the scores have not been updated in the openreview system yet. Please feel free to ask us if you have any other questions. We are looking forward to your reply. &nbsp; Best, Authors
Rebuttal 1: Rebuttal: &nbsp; ## General Response to All Reviewers &nbsp; Thanks for your time and valuable comments. We really appreciate your recognition of our framework's soundness (`E3vH`,`tLXM`, and `VwS8`), method novelty (`tLXM` and `VwS8`), outstanding performance (`E3vH`,`tLXM`, and `VwS8`), and good writing and presentation (`E3vH`,`tLXM`, and `VwS8`). We have written a separate, detailed response to each of you. We address all the issues raised in detail and clarify a few miscommunications. Our code, models, and recalibrated data will be released to the public. Feel free to ask us if you have any other questions. Looking forward to discussing with you.
NeurIPS_2024_submissions_huggingface
2024
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
Accept (poster)
Summary: This paper proposes a new backdoor attack against Reinforcement Learning, termed SleeperNets. SleeperNets adopts dynamic reward poisoning to overcome the insufficiency of the static reward poisoning proposed in previous works. The authors provide a theoretical analysis of the advantages of dynamic adversarial poisoning and also conduct comprehensive evaluations to demonstrate the effectiveness of SleeperNets over previous backdoor attacks against DRL. Strengths: * This paper shows the drawback of static reward poisoning adopted in the previous DRL backdoor attacks, which motivates dynamic reward poisoning. * The authors provide a theoretical analysis of the returned reward given the design of the dynamic reward, convincingly showing that dynamic reward poisoning overcomes the drawbacks of the static design. * Comprehensive evaluations including detailed ablation studies are conducted. Weaknesses: NA Technical Quality: 3 Clarity: 3 Questions for Authors: NA Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Overview:** We thank the reviewer for their strongly positive assessment of our paper. We appreciate them for acknowledging the many theoretical and technical contributions of our paper such as showing “the drawback of static reward poisoning”, providing “a theoretical analysis of the advantages of dynamic adversarial poisoning”, and in conducting “a comprehensive evaluation to demonstrate the effectiveness of SleeperNets”. Although the reviewer provided no weaknesses or questions for our paper, we encourage them to bring up any further questions they may have during the author discussion period.
Summary: The SleeperNets paper considers a new ("outer loop") threat model, more powerful than those typically considered in adversarial RL settings. The authors consider a stealthy attacker, aiming to both be successful (essentially tricking the learner into believing the underlying MDP is instead one of the attacker's choosing) and remain hidden (keeping the values of the original and corrupted policies similar). They provide theoretical results on the limits of the traditional, weaker threat model, and introduce a "Dynamic Reward Poisoning Attack Formulation". This yields their new attack "SleeperNets", which they empirically evaluate. Strengths: 1. The paper has an explicitly stated threat model -- a welcome sight in this area. 2. The paper provides a theoretical investigation of prior threat models, with an ultimately simple example demonstrating an impossibility result. 3. While it is too far out of my area for me to be sure of the coverage of related work, the paper does seem to well-situate its contributions in the broader body of literature. 4. There is a broad but not overly cumbersome set of empirical analyses. 5. The paper is well written and easy to follow. Weaknesses: The main weakness I see is the applicability of the threat model. As the authors state, the adversary is assumed to infiltrate the computer on which the agent is training. It's not clear to me what scenarios would exist where an attacker has that much access and can't perform a far deadlier attack (simply manipulating values directly). The paper would be improved if the authors gave examples of real settings where an attacker could act in this outer-loop way without having direct software access. I do believe such examples exist; they just need to be articulated. That is, described in detail with specifications about how Algorithm 1 could still be executed (and the assumptions about e.g., \beta hold). Some rough ideas for such settings: 1.
The RL agent is acting on financial markets and the attacker is able to manipulate the reward signal by directly purchasing shares at an inflated cost from the agent. 2. The RL agent is acting in a physical environment and the attacker is able to manipulate that same environment (I'm picturing how humans train drug sniffing dogs by hiding toys). 3. The RL agent is flying, and the attacker has limited access to some of its instrumentation (e.g., can spoof its GPS location or jam certain signals). Technical Quality: 3 Clarity: 3 Questions for Authors: Same as the weakness described above -- in what real-world settings is the attacker powerful enough to perform the SleeperNets attack, but not so powerful as to directly manipulate memory values on the training machine? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The primary limitation is the weakness described above. The other limitations are described in Section 7, and I agree with the authors that they are interesting areas for future work. This paper stands without a deeper investigation of them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Overview:** We thank the reviewer for their positive assessment of our paper and for their insightful questions. The reviewer’s main concern was the feasibility of the outer loop threat model. This is something we’ve taken much time to consider over the course of writing our paper, thus we will give the reviewer a thorough response. Upon reading our in-depth responses to all their questions, we encourage the reviewer to consider increasing their rating of our paper, or to engage in further discussion with us over the next week. **Comment 1:** “It's not clear to me what scenarios would exist where an attacker has that much access and can't perform a far deadlier attack (simply manipulating values directly).” **Response:** Here we focus particularly on the hypothetical “far deadlier attack”. First, we would like to ask the reviewer: how do they envision this attack? We lack a direct answer at this time, but encourage the reviewer to discuss this point further with us as it will be enlightening for all parties. For this response we will be assuming that the adversary has full control over the agent’s training machine and can arbitrarily manipulate RAM values. We will then push back on the notion that a “far deadlier attack” is either possible or sensible. When studying poisoning attacks we assume the adversary wishes to exploit the agent at test time, thus the agent must perform well enough in the base MDP to reach a deployment phase. In other words, the adversary must maintain “attack stealth” (section 3). This puts heavy restrictions on the adversary as they cannot simply replace the agent’s policy with one of their own design without training a model themselves. They must have full access to the victim’s MDP as well as the necessary compute resources to train the agent (which is hard and expensive to obtain).
Accessing the MDP may be easy if it is simulated, but would be extremely difficult and costly in the case of real-world training environments, such as self-driving or robotic applications. Therefore, the most sensible option for the adversary is to implement a versatile attack like SleeperNets. In our work we show that extra information and domain-specific engineering are largely unnecessary for a successful attack. So long as the adversary knows the agent’s observation space and can devise a trigger pattern, they will be able to perform a successful and stealthy attack. Thus, we assert that the SleeperNets attack is a versatile and reliable approach for attackers of different capabilities. **Comment 2:** “The paper would be improved if the authors gave examples of real settings where an attacker could act in this outer-loop way without having direct software access…” **Response:** We thank the reviewer for this question since it is very fundamental to our paper. We additionally thank them for the inspirational examples they include in their comment. This question covers a very important topic, so we intend to answer it thoroughly and from multiple angles. **Viability of Direct Software Access:** We would like to first push back against the assertion that direct software access is unreasonable. There are countless, well-documented cases of advanced persistent threats (APTs) achieving such levels of access in real-world settings. In fact, the most recent Verizon Data Breach Investigations Report (DBIR) mentioned that in 2023 there were more than 5000 breaches with system intrusions, and these numbers are growing at a fast rate year-to-year. There are also infamous cases such as the RSA breach which prove that critical assets, such as model training servers, are not safe from adversarial attacks.
**Applications of the Outer Loop Attack:** One nice feature of the outer loop threat model, in contrast to the inner loop, is that it allows for more direct translations between offline and online reinforcement learning. When attacking offline RL we can directly apply Algorithm 1 to the offline dataset intended to be used for training. The only difference is that, on line 2, we would sample H from the fixed dataset rather than from the MDP. Similarly, in many domains agents are trained with pseudo-MDPs created from existing real-world data. For instance, companies designing stock trading agents utilize real-world market data to train their models. This database, just like any other, is subject to direct manipulation as the result of a breach - which we know, from the aforementioned DBIR report, is unfortunately common. **Malicious Trainers:** Often in adversarial machine learning we assume the innocence of the training entity and assert the adversary must be external; however, this isn’t always the case. We are currently working with collaborators on one such scenario. Imagine a company that designs and optimizes 5G connectivity controllers using DRL. Internet Service Providers will purchase products with the highest customer satisfaction, generally measured by how fairly they distribute internet bandwidth. However, the designer of the controllers may want to give preferential treatment to particular services (streaming platforms, etc.). How can they convince the ISP that their controller is fair while also allowing for this preferential treatment? A powerful solution to this problem is for the trainer to perform a backdoor attack against their own model - a setting they have complete control over. Through this method the trainer can guarantee state of the art performance - leading to the purchase of their controllers - while also allowing for exploitation of the backdoor after deployment.
This solution requires no special tuning of the training algorithm, the model weights, or the MDP - it works directly “out of the box”. This necessitates further study into the auditing of DRL agents and the test-time detection of backdoor attacks, which we believe are both exciting areas of future research. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their additional comments. These have helped clarify the paper and its context for me. I recommend that more of the discussion the authors laid out be added to the manuscript as space constraints allow, and I have increased my rating of the paper. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: We are happy to hear that our response helped clarify the paper for you, and we're very grateful for your decision to increase your rating of our paper. We agree that including this additional context will be important for readers to understand our work, so we will be sure to make the proper additions to the threat model section of our paper.
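The outer-loop, dynamic-reward idea discussed throughout this thread can be sketched roughly as follows. This is an illustrative Python sketch, not the paper's exact Algorithm 1: `apply_trigger`, `r_max`, and `rate` are placeholder names for the trigger-embedding function, the reward bound, and the poisoning budget.

```python
import random

def poison_episode(episode, apply_trigger, target_action, r_max, rate, rng=random):
    """Outer-loop poisoning of a completed rollout (illustrative sketch).

    episode: list of (observation, action, reward) transitions collected by
    the unmodified training loop. With probability `rate` (the poisoning
    budget), a transition is poisoned: the trigger is embedded in the
    observation and the reward is set *dynamically* -- rewarding the target
    action and penalizing all others -- rather than adding a fixed static
    bonus as in prior inner-loop attacks.
    """
    poisoned = []
    for obs, action, reward in episode:
        if rng.random() < rate:
            obs = apply_trigger(obs)
            reward = r_max if action == target_action else -r_max
        poisoned.append((obs, action, reward))
    return poisoned
```

Because the attack operates on finished rollouts, the same sketch applies to offline RL by iterating over a fixed dataset of episodes instead of fresh rollouts, as the rebuttal notes.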
Summary: The authors introduce a novel framework for backdoor poisoning RL agents, SleeperNets. SleeperNets assumes that adversaries can inject adversarial perturbations into the agent's observations during policy training within some total budget. Unlike in prior frameworks, the adversary implements its attacks post-hoc on full episode rollouts. The authors show that their attack manages to be successful, while retaining the performance of the optimal unpoisoned setting. The authors implement their novel framework on four different environments and show that it works favourably. Strengths: The paper has a number of strengths. First of all, I believe that the threat model innovations are sensible; it seems natural that the adversary could manipulate whole episodes and not just single steps. Equivalently, interpreting stealth as retaining policy performance seems sensible. The authors' insight that "dynamic" attacks can attain both success and performance while "static" attacks cannot is insightful. The empirical results seem to support the authors' claims. Weaknesses: * test-time defenders can still perform anomaly detection based on observations, actions or state transitions; I do think the authors should perform empirical investigations using out-of-distribution anomaly detection methods [2] to infer the information-theoretic detectability of their methods. * The idea of increasing the adversaries' attack context beyond single steps is not entirely novel within the adversarial attack literature, see e.g. [1] who devise adversarial attacks that condition on the entire action-observation history.
### minor weaknesses line 264: "environment environment" [1] Franzmeyer et al., Illusory Attacks, https://openreview.net/forum?id=F5dhGCdyYh, ICLR 2024 [2] Nasvytis et al., DEXTER, https://arxiv.org/abs/2404.07099, AAMAS 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: * you are not referencing adversarial cheap talk [3] - can you compare and contrast their setting against yours? * you are mentioning an adversarial perturbation budget - where does this budget come from, and why would a budget be justified in reality rather than say a constraint based on information-theoretic detectability as in [1]? [3] Lu et al., Adversarial Cheap Talk, ICML 2023 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do not further investigate defenses against their novel attack, although they state that securing the training environment as well as developing test-time anomaly detectors would be suitable avenues. ## Update in Response to the Rebuttal The authors have successfully addressed my concerns; I therefore now recommend the paper for acceptance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Overview:** We thank the reviewer for their positive assessment of our work in noting the sensibility of our threat model and the insight brought by our theoretical analysis. The reviewer also brought with them much insight in the form of constructive questions and citations. We thank the reviewer for these comments and aim to respond to all of them thoroughly. Given our in-depth responses, we encourage the reviewer to consider increasing their rating of our paper, or to engage in further discussion with us over the next week. **Comment 1:** “The authors implement their novel framework on four different environments and show that it works favourably.” **Response:** We thank the reviewer for their positive comments about our framework’s novelty and the favorability of our results. We would like to make a minor correction - we performed experiments over **6 different environments** which we then classified into 4 generic categories. **Comment 2:** “The idea of increasing the adversaries' attack context beyond single steps is not entirely novel within the adversarial attack literature, see e.g. [1]...” **Response:** We thank the reviewer for bringing [1] to our attention as it takes an approach to avoiding test-time detection which is very unique. We will certainly include it in an extended related work if our paper is accepted. That being said, it considers both a threat model and attack type - test-time evasion attacks - that are completely different from ours. While there are some similarities in giving the adversary access to more temporal information, our contribution of the outer-loop threat model is sufficiently novel and important to the study of backdoor attacks against DRL. Our paper is the first to introduce and formalize the outer-loop threat model for poisoning attacks in RL - even when considering both backdoor and policy replacement attacks.
We show that the outer loop threat model is not only viable and realistic, but allows for more powerful and dangerous attacks compared to the inner loop threat model. This necessitates further research into not only the capabilities and applications of the outer loop threat model, but also prevention against it. **Comment 3:** “you are not referencing adversarial cheap talk [3] - can you compare and contrast their setting against yours?” **Response:** We thank the reviewer for bringing up this work and giving us an opportunity for further reflection. We will definitely include it in an extended related work upon acceptance of our paper as an alternative class of poisoning attacks. In terms of threat model the papers are quite different. In [3] it is assumed the adversary can use an open “cheap talk” channel to send the agent “messages” **at each time step** during training and testing. The goal of each message is to influence the agent’s training behavior and potentially control them at test time. These messages are unique per state, requiring the adversary to learn a message function. In SleeperNets we instead allow the trigger to exist within the agent’s latent observation space - making no critical assumptions about the MDP. We additionally achieve state of the art results while poisoning less than 0.5% of the agent’s observations, while in [3] they use an effective poisoning rate of 100%. Lastly, in SleeperNets the adversary is allowed to manipulate the agent’s reward signal while [3] only perturbs the agent’s observations. We look forward to any future works attempting to combine the two attack methodologies or settings. **Comment 4:** “you are mentioning an adversarial perturbation budget - where does this budget come from?” **Response:** The concept of a poisoning budget (alternatively called the poisoning rate) is standard throughout the poisoning attack literature in both RL [4] and supervised learning [5]. 
There are two key reasons for its usage: we want to minimize the likelihood that the perturbations are detected at training time - which will increase with our poisoning budget; and we want the agent to still learn the benign MDP - which empirically becomes more difficult as one increases the poisoning budget (see our ablations in section 6.3). **Multiple Comments: Test Time Anomaly Detection via Information Theory** e.g. “...I do think the authors should perform empirical investigations using out-of-distribution anomaly detection methods [2] to infer the information-theoretic detectability of their methods.” **Response:** We thank the reviewer for bringing up [2] as it is an insightful work for the detection of test-time anomalies. We will certainly include a citation of this paper in an extended related work if our paper is accepted. We agree with the reviewer that the test-time detection of backdoor attacks is a critical and open problem in this field, but we believe it is orthogonal to the objectives of our paper. In our paper we answer the fundamental question “What poisoning methods are necessary for successful RL backdoor attacks?”. We not only prove the insufficiency of static reward poisoning, but we also develop a novel, dynamic reward poisoning strategy and rigorously prove its sufficiency in achieving both attack success and stealth - becoming the first backdoor attack paper in RL to produce such concrete results. Through this we propose a versatile attack framework which makes no critical assumptions about the attacker’s test time objectives. This allows future poisoning attack literature in RL to build upon our theoretically rigorous foundations and technically rich contributions when they aim to solve additional adversarial objectives, such as minimizing the detectability of test-time anomalies in the agent’s observations or actions. [4] Kiourti et al., TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents.
ACM/IEEE Design Automation Conference (DAC), 2020 [5] Shafahi et al., Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. NIPS 2018 --- Rebuttal Comment 1.1: Title: Thanks for clarifying my concerns. Comment: I thank the authors for clarifying my concerns; I have decided to increase my score in response.
Rebuttal 1: Rebuttal: We would like to first extend our thanks to the reviewers for their time in reading our paper, evaluating its merits, and highlighting its novel contributions. We hope that our extensive responses have sufficiently answered all the reviewers’ questions, but openly invite any further questions or comments during the author discussion period. We are further grateful for the reviewers’ enlightening comments and feedback. Reviewer VsP1 was primarily concerned with the test time detectability of our attack against information theoretic defenders, citing multiple related works. In our response we highlight the versatility of our attack framework and its applicability to these additional, adversarial objectives. We further note the relevance of their cited papers and aim to include them in an extended related work. Reviewer YkW9 asked questions about the viability of our threat model in the real world, additionally including some insightful attack scenarios. We answer these questions thoroughly and from multiple angles. We first establish the utility of our attack for both strong and weak adversaries, highlighting the difficulty and extra cost of implementing a “far deadlier attack” without being detected. We then provide multiple use cases of the outer loop attack and motivate its positioning within both the literature and real world settings. Reviewer Es9J was highly positive of our work, noting their confidence in both our theoretical contributions and our empirical evaluation. We greatly appreciate this praise and openly invite the reviewer to engage with us in further discussion during the author rebuttal period if they have any additional questions or comments. Finally, when writing this paper we aimed to take a theoretically rigorous approach towards filling crucial gaps in our understanding of backdoor attacks in reinforcement learning. 
Through this exploration we not only prove the insufficiency of prior approaches, but make key contributions in developing the first backdoor attack framework to provably maximize both attack success and attack stealth. We believe these theoretical developments, in addition to our novel SleeperNets attack and outer loop threat model, will be a crucial foundation for future works studying both backdoor attacks and defenses in reinforcement learning.
NeurIPS_2024_submissions_huggingface
2024
WaterMax: breaking the LLM watermark detectability-robustness-quality trade-off
Accept (poster)
Summary: This paper proposes a watermark technique called WaterMax to distinguish LLM-generated texts from human-written texts. WaterMax starts with the watermark detector and asks LLMs to generate a group of candidates, from which the one with the lowest p-value determined by that detector is selected as the final output. This work offers a brand-new perspective on text watermarking as there is no formal watermark generator to embed a watermark into LLM outputs. WaterMax breaks the trade-offs between watermark detectability, quality and robustness, which is carefully discussed in this work with theoretical proof and experimental validation. Strengths: 1. WaterMax offers a brand-new research perspective in the field of text watermarking, where there is no official watermark generator. 2. WaterMax is almost distortion-free as it achieves high text quality on LLM outputs, yet the detectability is preserved. 3. By upgrading the detector, WaterMax can achieve high robustness as well. 4. The superiority of WaterMax is both theoretically proven and experimentally validated. Weaknesses: 1. The dataset used for experiments is limited to high-entropy text generation, yet in reality there is a need to watermark LLM-generated code. Technical Quality: 4 Clarity: 3 Questions for Authors: Is it possible to include low-entropy tasks such as code generation to test if the detector can still properly function? Several works [1,2] can be referred to. [1].Who Wrote this Code? Watermarking for Code Generation [2].An Entropy-based Text Watermarking Detection Method Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: It would be better to include more datasets, and further improve the time complexity if possible. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: "Is it possible to include low-entropy tasks such as code generation to test if the detector can still properly function? Several works [1,2] can be referred to." We thank the reviewer for the references. At the detection side, these works weight the token value depending on an estimated entropy of the token: hard thresholding for [1], soft weighting for [2]. In principle, this idea can be integrated into WaterMax at the detection side (and thus at the embedding as well since it uses the same score function). Applying hard thresholding [1], WaterMax will not embed a watermark in a chunk where all tokens have low entropy. Yet, we do not recommend the soft weighting of [2]: this prevents computing a sound p-value. >[1] *Who Wrote this Code? Watermarking for Code Generation* >[2] *An Entropy-based Text Watermarking Detection Method* --- Rebuttal Comment 1.1: Comment: Thank you for responding. I will maintain my rating.
Summary: The authors propose a method for watermarking language models through the use of rejection sampling. By sampling and discarding "chunks" from the model until the p-value returned by an (arbitrary) detection rule is sufficiently low, the proposed method simultaneously preserves output text quality while achieving strong robustness to a variety of attacks. Crucially, the proposed method does not require any intervention within the model itself (e.g. via logit biasing). Strengths: * The proposed method is both elegant and flexible. * The paper is clearly written and adequately describes the proposed method. * The authors demonstrate strong results in terms of text quality, detectability, and robustness against a reasonable selection of attacks. Weaknesses: * A previous LLM watermarking work, "SemStamp" [1], is similarly based on rejection sampling. Adding experimental comparisons to SemStamp might therefore strengthen the paper by showing how the proposed method fares against the most directly comparable existing method. Otherwise, the authors should probably cite it. * The authors claim on line 62 that the method of Kuditipudi et al. is the only watermark explicitly designed for robustness; however, the unigram method of Zhao et al. [2] also appears to provide theoretical robustness guarantees. * While the authors propose an algorithm to limit search over candidate generations, the computational complexity of the proposed method is somewhat concerning. Based on figure 7 (appendix G), it looks like WaterMax incurs ≤ 40% additional runtime on top of generation at a length of only 256 tokens; for the texts in the "Mark My Words" benchmark, the authors report in line 560 that WaterMax requires 5 times the runtime of KGW & Aaronson for watermarked generation. Additional computation is unavoidable with a rejection sampling watermarking scheme, but the authors could address this limitation more clearly within the main paper body. 
* A very minor note -- given that the authors consistently compare three watermarking schemes (WaterMax, Aaronson, KGW), it might improve legibility to use a consistent color scheme to refer to these methods across figures. [1] https://arxiv.org/abs/2310.03991 (NDSS '24) [2] https://arxiv.org/abs/2306.17439 (ICLR '24) Technical Quality: 3 Clarity: 3 Questions for Authors: * Did the authors explore the use of detection rules other than (11) in conjunction with WaterMax, and if so, did they observe any significant differences in performance? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: * See comment on efficiency in "Weaknesses" section Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1: "A previous LLM watermarking work, "SemStamp" [1], is similarly ... Adding experimental comparisons to SemStamp ... Otherwise, the authors should probably cite it." We thank the reviewer for pointing us to SemStamp. We acknowledge that there exists some proximity to our work in the sense that they perform rejection sampling on sentences. It can also be seen as a KGW-type algorithm at the level of sentences instead of tokens. However, the work exhibits two major weaknesses that prevent comparison: 1) Since the watermark works at the level of sentences and not tokens, extremely large texts are necessary to attain acceptable performance. SemStamp's false positive rate is around 1%. All algorithms studied in our work easily reach 100% TPR within this regime. 2) The authors compute the FPR using a z-score, which allows an accurate approximation of the p-value only when the number of sentences is high -- however, this is never achieved in practice. No empirical validation of this approximation is provided. On the other hand, our work computes exact p-values, which are very low for watermarked text. For these reasons, we cannot sensibly compare SemStamp to the state-of-the-art. > W2: "The authors claim on line 62 that the method of Kuditipudi et al. is the only watermark explicitly designed for robustness; however, the unigram method of Zhao et al. ..." We agree. We thank the reviewer for pointing out this oversight. We added this reference. > W4: "A very minor note ... it might improve legibility to use a consistent color scheme to refer to these methods across figures." We corrected the colors of Fig. 1, which were indeed not consistent with the other figures. Thanks for pointing this out. > Q1: "Did the authors explore the use of detection rules other than (11) in conjunction with WaterMax, and if so, did they observe any significant differences in performance?"
If 'detection rule' means the distribution of the values associated to each token, then yes: as we show in the paper, the choice of detection rule theoretically does not impact the performance of WaterMax. If 'detection rule' means another kind of score function, such as SemStamp's semantics-based one, then no, we did not try. --- Rebuttal Comment 1.1: Title: Reply to Authors Comment: I thank the authors for their reply and for the general rebuttal. The additional experiments and promised clarifications re: complexity overhead should strengthen the paper. > We thank the reviewer for pointing us to SemStamp. We acknowledge that there exists some proximity to our work in the sense that they perform rejection sampling on sentences… However, the work exhibits two major weaknesses that prevent comparison… I'm confused how these weaknesses would prevent at the very least mentioning the prior published sentence-level rejection sampling watermark. If anything they contextualize WaterMax's strengths. --- Reply to Comment 1.1.1: Comment: I'm sorry, our rebuttal was not clear. We wish to cite this paper due to its similarity, yet we will not include it in the benchmark. We agree that this paper helps contextualize our work. Thanks for your advice. Best Regards
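The rebuttal above contrasts exact p-values with z-score approximations. This can be made concrete with a generic green-list counting detector; KGW-style counting is used here purely for illustration, and the parameter values are arbitrary, not WaterMax's own detection rule:

```python
from math import comb, erfc, sqrt

def binom_sf(s: int, T: int, gamma: float) -> float:
    """Exact P[Bin(T, gamma) >= s]: the chance that unwatermarked text of
    T tokens contains at least s green-list tokens."""
    return sum(comb(T, k) * gamma**k * (1 - gamma) ** (T - k)
               for k in range(s, T + 1))

def z_pvalue(s: int, T: int, gamma: float) -> float:
    """Gaussian (z-score) approximation of the same tail probability."""
    z = (s - gamma * T) / sqrt(gamma * (1 - gamma) * T)
    return 0.5 * erfc(z / sqrt(2))

# Short text: 40 of 50 tokens fall in the green list with gamma = 0.5.
exact = binom_sf(40, 50, 0.5)
approx = z_pvalue(40, 50, 0.5)
print(exact, approx)  # both around 1e-5, but only `exact` is a true p-value
```

With short texts the two can diverge much more than here, which is exactly the z-score validity concern the rebuttal raises against SemStamp.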
Summary: The paper presents a novel watermarking scheme for large language models (LLMs). The proposed WaterMax scheme aims to achieve high detectability while maintaining the quality of the generated text, without modifying the LLM's weights, logits, temperature, or sampling technique. WaterMax balances robustness and complexity, distinguishing itself from existing methods that often trade off quality for robustness. The performance of WaterMax is theoretically proven and experimentally validated. Strengths: 1. WaterMax introduces a new detection mechanism that improves the detectability of the watermark in short text while preserving the original LLM's token distribution and sampling method. 2. The scheme maintains the quality of the generated text, which is a critical factor in practical applications of LLMs. 3. The paper demonstrates that WaterMax achieves higher robustness and detectability compared to other state-of-the-art watermarking techniques, even under various attack scenarios. 4. The theoretical and experimental evaluations are thorough, covering multiple LLMs and benchmarks. This provides a solid validation of the scheme's effectiveness. Weaknesses: 1. The method involves generating multiple texts for a given prompt and selecting the most suitable one, which increases computational cost and latency. Although the paper suggests ways to limit this, it remains a potential drawback. 2. Given the randomness in the sampling strategy, LLMs can generate diverse outputs from the same input. This method requires generating multiple outputs and selecting the most suitable one, which may damage the diversity and quality of the generated text. 3. The method's latency, especially in generating longer texts, could be a limitation in time-sensitive applications. 4. The scheme requires careful tuning of parameters to balance robustness, detectability, and computational cost. This adds complexity to its implementation. 
Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How does WaterMax perform with LLMs that exhibit low diversity in generated texts? Are there any measures in place to handle such scenarios effectively? 2. In Line 290, the authors mentioned that there are no methods that can defend against translation attacks. However, many works have been proposed to defend against paraphrase-based and translation-based attacks [1, 2, 3]. Comparison with these methods is necessary. 3. Watermark stealing attacks have been proposed recently [4, 5, 6]; these can infer the parameters of the watermarking scheme and remove the watermark from the text, and have shown a significant ability to do so. How does the proposed scheme perform against watermark stealing attacks? * [1] A. B. Hou et al., “SemStamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation.” http://arxiv.org/abs/2310.03991 * [2] J. Ren et al., “A Robust Semantics-based Watermark for Large Language Model against Paraphrasing.” http://arxiv.org/abs/2311.08721 * [3] Z. He et al., “Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models.” http://arxiv.org/abs/2402.14007 * [4] N. Jovanović, R. Staab, and M. Vechev, “Watermark Stealing in Large Language Models.” http://arxiv.org/abs/2402.19361 * [5] Q. Wu and V. Chandrasekaran, “Bypassing LLM Watermarks with Color-Aware Substitutions.” http://arxiv.org/abs/2403.14719 * [6] Z. Zhang et al., “Large Language Model Watermark Stealing With Mixed Integer Programming.” http://arxiv.org/abs/2405.19677 Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: This method may affect the quality, diversity and latency of the output text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W4: "The scheme requires careful tuning of parameters ... This adds complexity to its implementation." The tuning of our algorithm is **less complex** than the tuning of KGW. The two main parameters of WaterMax are the number of chunks $N$ and the number of drafts per chunk $n$: - Detectability/robustness: the performance of WaterMax can be computed *a priori* using the theoretical formulas in Eq.(4) or Eq.(6). - Quality: $(N,n)$ have basically no impact on text quality. - Complexity: Increasing $(N,n)$ increases computational complexity. The cost of $n$ can be reduced through parallelization, whereas the cost of $N$ cannot. The size of the h-window mostly sets the trade-off between robustness and security (against watermark stealing attacks). This is common to any fixed h-window based scheme like KGW and Aaronson's schemes. In contrast, the choice of $(\gamma,\delta)$ for KGW is not well documented: their impact on the detectability and quality is not easy to predict. Their setting must be done empirically. >Q1: "How does WaterMax perform with LLMs that exhibit low diversity in generated texts? ... " An LLM that exhibits low diversity in generated text is an LLM with low entropy in its completions. We direct the reviewer to Appendix J, which explicitly treats this question. We show that both Aaronson's scheme and KGW are sensitive to the LLM's entropy, whereas WaterMax's performance stays constant whatever the choice of LLM and temperature. The reason for this behavior is that WaterMax works at the level of chunks of tokens, whereas the other two algorithms work token by token. This means that, as long as an LLM provides at least some diversity at the chunk level, WaterMax's performance should not suffer compared to other schemes. > Q2: "In Line 290, the authors mentioned that there are no methods that can defend against translation attacks. However, many works ... [1, 2, 3]. Comparison with these methods is necessary." 
We agree with the reviewer: this statement was not correct. We thank the reviewer for providing these references, which we will add to the paper. However, we cannot compare to the proposed methods as they don't provide any guarantees in terms of false alarms, contrary to the three algorithms studied in our work. Furthermore, due to the lack of theoretical false-alarm rates, these works only compute empirical FPRs, with the papers providing results for FPRs from 1% to 10%. This is to be compared to our work, which demands false-alarm rates of $10^{-6}$ or below. On the other hand, WaterMax, KGW and Aaronson easily reach 100% TPR at almost no cost in quality or complexity for FPRs from 1% to 10%. > Q3: "Watermark stealing attack has been proposed recently [4,5,6] ... How does the proposed scheme perform against watermark stealing attacks?" We again thank the reviewer for these references. The problem of watermark stealing is mainly linked to the choice of 1) hashing function and 2) scoring function. 1) The attack in [4] only works for the Min-Hash and Sum-Hash functions. We don't use these hashing functions for any watermark scheme as they are intrinsically flawed. The hash is computed recursively on each token in the window, guaranteeing a new hash -- and thus a different key -- for each token. Moreover, we use much longer hash windows: $h=6$. 2) The references in [4,5] only work for UNIGRAM and KGW as they are based on the existence of a green-list of tokens. Like Aaronson's, WaterMax is not based on a binary partition of the tokens: it associates a *real soft* value to each token. Stealing the secret of this kind of scheme has not been demonstrated. 3) References [4,5,6] work token-wise, whereas WaterMax works over chunks, not tokens. This means that it may select a chunk containing a low-valued token because, globally, that chunk maximizes the score. Therefore, the frequency of a token may not be related to its associated secret value. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions; I prefer to maintain the score I initially assigned.
Summary: The work proposes a new LLM watermarking scheme called WaterMax, which does not modify the distribution or the sampling procedure but uses rejection sampling on multiple generated segments of tokens to cause a high watermark score. A theorem is given to characterize the detector power under attack, given certain independence assumptions. The experiments attempt to demonstrate superior detectability under the constraint of high quality, independent of the temperature/entropy, as well as superior robustness. Strengths: - **The approach is novel**: the work investigates a fundamentally different approach from prior baselines which is a valuable contribution in itself. - **Thorough analysis**: the work analyzes the proposed method from several perspectives, and both theoretically and empirically, making it a well-rounded study. I especially appreciate the care taken to discuss the underlying assumptions and investigate their violations in practice. - **Good writing**: the work is generally very well written, discusses the prior work well, and introduces the method in an understandable way, splitting the different components across different parts of the paper. Weaknesses: - **Questionable experimental evaluation**: The experimental setup of the main experiments (Figure 5) is mostly reasonable and the observation about the instability of Aaronson holds up. However, the key claim seems to be that "KGW used with common/realistic settings is both of lower quality and has lower detectability than WaterMax". I have several observations here. 1) This claim is made by observing 1.2x relative perplexity measured with a weak 3B model; it is not clear that this is a reliable estimate of text quality in practical use-cases (see e.g. https://arxiv.org/pdf/2312.02382). The result would be more convincing if a larger model was used, but more importantly, a SOTA LLM (e.g. GPT4) was used as a judge of responses in at least one setting. 
The authors' claim regarding this is ambiguous: are they claiming that GPT4 is unreliable for this task? 2) The watermarked model itself is a single 8B model. I see two more small models in Appendix J (7B and 4B). On Llama2-7B the variants of KGW that were tried actually have /higher/ quality than WaterMax, so it is not clear if increasing delta would significantly ruin quality but it may lead to high TPR. Regardless, at least one larger model (e.g., 13B) should in my opinion be tried to substantiate the claim. 3) Even in Fig. 5 a non-standard variant of KGW is used with h=6 and gamma=0.5. To make the claim stronger, can the authors repeat the experiment with more standard h=4 and with gamma=0.25 as well? - **High computational cost**: while the authors acknowledge this explicitly to some extent, no quantifiable measurement of the computational cost is shown in the main paper, and the appendix suggests that the full experiment took 5x longer with WaterMax. This clearly makes WaterMax inapplicable in most real deployments where latency is critical. However, despite the 5x slowdown, I still believe the paper would have sufficient merit as an exploration of an interesting idea, if the authors could fill the gaps in the experimental evaluation and make the case regarding quality vs. power fully convincing. My current borderline accept score is conditioned on the authors providing these additional results to remove my doubts. Technical Quality: 3 Clarity: 4 Questions for Authors: - Is GPT4 unreliable for the task of judging text quality? Can you provide empirical evidence? - Can you repeat the main experiment with any model larger than 8B? - Can you repeat the main experiment with a more standard variant of KGW? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The only comment I have is regarding quantification of the efficiency degradation which I believe should be explicit in the main paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: "*Is GPT4 unreliable for the task of judging text quality? Can you provide empirical evidence?*" First of all, using closed-source models available through an API is against our ethics because it prevents reproducibility: GPT-4 is not free; we do not fully control the prompts; it will no longer be available when replaced with GPT-5, etc. With that said, we acknowledge the trend towards evaluating text quality using LLM judges -- as an example, the *Mark My Words* benchmark [1] directly used Llama2 as a judge for evaluating watermark quality. Several open-source proxies fine-tuned for this task have recently been proposed in the literature, such as Prometheus [2]. Yet, our use of these models for evaluating the quality of watermarked texts has not proven fruitful. We provide the results of using Llama-2-7b-chat, Mistral-7b-instruct-v0.2 and Prometheus-7b-v2.0 as LLM judges of the text quality in Fig. 3 of the global response. We asked each of these LLMs to grade out of 5 the texts of the "Invented Stories" task of the *Mark My Words* benchmark following the methodology of [2], using the code provided in the official GitHub implementation. 1. The grading is inconsistent between LLMs, with Prometheus-7b biased towards higher grades and Llama-2-7b-chat towards lower. Note that the ranking of each watermark is different for each LLM judge. Furthermore, grading can be inconsistent even for non-watermarked texts, with Mistral-7b-instruct-v0.2 highly biased in favor of non-watermarked texts generated with temperature 1.0. 2. The average grade does not fluctuate significantly for different watermark strengths, whatever the choice of LLM judge. Interestingly, Fig. 3 shows that increasing $\delta$ can slightly increase the LLM grading, which is nonsensical. 3. 
This prompted us to study the grading of barely legible texts, by randomly replacing a percentage of the characters within texts: LLM judges still provide similar average grades despite this attack. More precisely, the grade starts to degrade when the text starts to look like a random string (around 15% of modified letters). However, there is no impact on the grading for, e.g., a percentage of 10%. Here is an example of a text with a maximum grade of 5/5: `` sir edward, a chivWlrous knight, had always been driGen by a sense of duty and a thiFst foD SdventuFe. as a yLung man, hW had heard tales of the legendary holy graiP, said to grant the deepest desJreD of FhoWe who posseZsed it. convinced tUat thS grail helX the keT to bringing pFace and prosperity tK hiD kingdLK, sir Wdward set out on a perilLus quesY to find it.\n\nhe bSgaJ his jouTnFy in the miDty mountaiMs of wales, where he souFht [...] `` Our conclusion is that, for the moment, these LLM judges don't seem suited to the evaluation of watermarked text quality. As explained in the paper, we prefer the relative perplexity and ROUGE, which fluctuate as expected with watermark strength. [1] *Mark My Words: Analyzing and Evaluating Language Model Watermarks*, J. Piet et al. [2] *Prometheus: Inducing Fine-grained Evaluation Capability in Language Models*, S. Kim et al. > Q2: "Can you repeat the main experiment with any model larger than 8B?" We repeated the experiments with Llama-2-13b and Phi-Medium-4k, two models with 13 and 14 billion parameters respectively. We report the results in the global response in Fig. 1-2. The results are mostly identical to those of the smaller models: Phi-medium has high entropy and thus all watermarking schemes perform well, whereas Llama-2-13b has low entropy, leading to low performance except for WaterMax. This points to the fact that the main parameter of importance is the entropy of the completion, not the size of the model. (Note that there is no official 13b version of Llama-3-Instruct.) 
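The character-corruption probe described in point 3 above can be sketched as follows; the corruption rate, seed, and sample sentence are illustrative, not the authors' exact script:

```python
import random

def corrupt(text: str, frac: float, seed: int = 0) -> str:
    """Randomly replace a fraction of the alphabetic characters with random
    letters, mimicking the probe used to stress-test LLM judges."""
    rng = random.Random(seed)
    letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
    chars = list(text)
    alpha_positions = [i for i, c in enumerate(chars) if c.isalpha()]
    # Replace frac of the letters at randomly chosen positions.
    for i in rng.sample(alpha_positions, int(frac * len(alpha_positions))):
        chars[i] = rng.choice(letters)
    return "".join(chars)

story = "sir edward, a chivalrous knight, had always been driven by a sense of duty"
print(corrupt(story, 0.10))  # visibly damaged, yet the rebuttal reports judges barely lowered grades
```

Sweeping `frac` from 0 to ~0.15 and plotting the judge's average grade reproduces the degradation curve the rebuttal describes.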
> Q3: "Can you repeat the main experiment with a more standard variant of KGW?" We chose to fix $\gamma = 0.5$ per the recommendations of the *Mark My Words* benchmark [1]. We repeated the experiments following the reviewer's specifications in Fig. 1a and 2a of the pdf. The results are mostly similar and illustrate the trade-off between quality and detectability: smaller $\gamma$ leads to better detectability but lower quality. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding and I acknowledge the provided results as well as the promise to include explicit comments on computational inefficiency in the main paper. My main concern, which is that measuring free text quality using the perplexity of a 3B model is insufficient, was not addressed. I acknowledge the authors' strong ethical stance, and I do not intend to discuss the stance itself further. As the authors acknowledge, using SOTA LLMs as judges of text quality is (for better or worse) a widely popular method in current research, meant to save on costs of human studies. If the authors categorically reject this common evaluation method based on personal beliefs, it is their duty to find another comparably trustworthy way of measuring text quality. One option would be a human study, and another may be testing more capable open models. In their rebuttal response, the authors only experiment with weak 7B models, while there are orders of magnitude more capable open variants. Why were these models ignored? As text quality is a key metric that the work relies on, it is hard to accept the perplexity of a weak 3B model as the only way to measure it. --- Rebuttal 2: Comment: We thank the reviewer for their feedback. We would however like to clarify some points: > If the authors categorically reject this common evaluation method based on personal beliefs, it is their duty to find another comparably trustworthy way of measuring text quality. 
There seems to be a misunderstanding regarding our stance on evaluating text quality using LLMs. The reason we have not chosen this method to evaluate the performance of watermarking methods is not due to personal belief but, as we claim in the rebuttal, due to its insufficient discriminative power in the specific case of watermarking. Evaluating the impact of a watermark on text is a very different problem than, say, evaluating the quality of a newspaper article. Indeed, the watermarking signal is (hopefully) weak enough to be hard to detect, and as a consequence, using LLMs as a judge to measure its impact might not be the best tool for the job. Again, the only metrics we have found to vary alongside watermark strength were perplexity and the different variants of ROUGE. Our choice was thus made not due to personal preference but out of the necessity of having a meaningful metric in the specific situation of watermarking. Now, there is an argument to be made against this choice: if an LLM cannot discriminate between watermarked and non-watermarked text, isn't that enough to claim perfect quality preservation? Such a decision would remove any benefit of so-called "distortion-free" algorithms (such as Aaronson's), since even a high $\delta$ of KGW doesn't seem to impact the grading of an LLM consistently. As an analogy, we could also use LLM judges to grade the overall quality of pictures in the case of image watermarking. Most likely, the problem would be the same, as the LLM would likely disregard the slight noise added by watermark methods, whereas more common metrics (PSNR, LPIPS, SSIM) would take even small perturbations into account. The question then becomes: does the watermarker simply want to preserve content quality as measured by a human judge -- allowing for larger distortion -- or does she want to stay as close as possible to what would be the "natural" distribution of the content? 
We decided on the latter as it seems to capture the impact of the watermark on text better than grading. >In their rebuttal response, the authors only experiment with weak 7B models, while there are orders of magnitude more capable open variants. Why were these models ignored? We followed the methodologies proposed recently in the watermarking literature: the *Mark My Words* benchmark [1] claims good grade correlations between GPT-3.5 and the Llama-2-7b model (see Section 4.1 of their paper, where they claim $R^2 =0.97$). Furthermore, the 7b model of Prometheus-eval [2] is claimed to attain SOTA performance at its size and is specifically fine-tuned on GPT-4 evaluations. If we are to trust the results from these works, there should not be a large gap in grading between model sizes. > [It] is hard to accept the perplexity of a weak 3B model as the only way to measure [text quality]. We chose Opt-2.7b for measuring perplexity to follow the KGW methodology in [3] (see Section 6 of their paper). We haven't found any difference between using a small or a larger model for measuring perplexity. In our case we tested Llama-2-7b, Llama-3-8b and Mistral-7b-v0.2, and despite some differences in the absolute value of the measured perplexity, there was no difference in relative perplexity between the watermarking algorithms -- which is what we want to measure. Consequently, we opted for the smaller model in order to match the methodology of previous art. [1] _Mark My Words: Analyzing and Evaluating Language Model Watermarks_, J. Piet et al. [2] _Prometheus: Inducing Fine-grained Evaluation Capability in Language Models_, S. Kim et al. [3] _A Watermark for Large Language Models_, J. Kirchenbauer et al.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive comments, which help us improve our submission. **The reviewers globally find that the limitations on complexity are not sufficiently outlined.** The submission already accounted for the following: - The main point of WaterMax is to strike the detectability/complexity trade-off (without loss of quality) instead of the detectability/quality trade-off (without extra complexity) well documented in the literature. - The paper proposes means to reduce the complexity overhead and latency (from sampling text to chunks, sampling chunks in parallel, beam search ...). - App. G shows experimental results about runtimes. $\to$ *We agree that the complexity overhead and the latency of the final scheme are not properly reported in the main body. We will rewrite Sect. 7.4 "Computational complexity" explicitly as a limitation, with reported runtime and latency measurements. In particular, we will clearly state that WaterMax is 5 times slower for our recommended setup.* **The attached pdf file shows the following new results as suggested by the reviewers.** - Fig. 1.a and Fig. 2.a: repeat the main experiment with a standard KGW set as $(\gamma,h)=(0.25,4)$ as Reviewer VU2G suggested. - Fig. 1.bc and Fig 2.bc: repeat the main experiment with larger models (Llama-2-13b and Phi3-medium), as Reviewer VU2G suggested. - Fig 3: demonstrate the limited applicability of LLM judges in evaluating watermarked text quality, in response to Reviewer VU2G's question. Pdf: /pdf/dc4eb548e3aa55efc08e97e46893fbbac97cc797.pdf
NeurIPS_2024_submissions_huggingface
2024
Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning
Accept (poster)
Summary: This paper provides an interesting approach, HiCS-FL, to investigate the client sampling problem in federated learning, especially in the non-IID setting. This paper estimates the clients' data statistical heterogeneity (label distributions) via the client-updated gradients of the output layer's weights. By using this distance information, the server can distinguish which client owns more balanced data. Strengths: - This paper found an interesting relationship between the last layer's bias and the label distribution of a given client's data. - The proposed distance metric can be used to estimate the local data heterogeneity and distinguish whose data is more balanced. - The paper is well-written and easy to follow. Weaknesses: - A potential privacy issue may be raised since it needs to access the individual client update (i.e., gradient) information. More of a client's data information may be leaked through gradient inversion attacks [1]. [1] Evaluating Gradient Inversion Attacks and Defenses in Federated Learning, NeurIPS 2021. Technical Quality: 3 Clarity: 3 Questions for Authors: - Why are the first training rounds set to $\lceil N / K\rceil$ in line 235? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
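The relationship the review highlights, between the last layer's bias and the local label distribution, follows directly from the softmax cross-entropy gradient with respect to the output bias. A minimal sketch with random logits standing in for a real model (illustrative, not the paper's actual estimator):

```python
import math
import random

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [v / total for v in exps]

def bias_gradient(logits, labels, num_classes):
    """Average gradient of softmax cross-entropy w.r.t. the output bias:
    dL/db_k = mean over samples of (softmax(z)_k - 1[y == k])."""
    g = [0.0] * num_classes
    for z, y in zip(logits, labels):
        p = softmax(z)
        for k in range(num_classes):
            g[k] += (p[k] - (1.0 if y == k else 0.0)) / len(labels)
    return g

random.seed(1)
C = 4
labels = [0] * 70 + [1] * 20 + [2] * 8 + [3] * 2          # imbalanced client
logits = [[random.gauss(0, 1) for _ in range(C)] for _ in labels]

g = bias_gradient(logits, labels, C)
avg_p = [sum(softmax(z)[k] for z in logits) / len(logits) for k in range(C)]
est_freq = [avg_p[k] - g[k] for k in range(C)]             # = empirical label freq.
true_freq = [labels.count(k) / len(labels) for k in range(C)]
print([round(v, 2) for v in est_freq])  # [0.7, 0.2, 0.08, 0.02]
```

Since `dL/db_k` averages `softmax_k - 1[y=k]`, subtracting the bias gradient from the average softmax output recovers the empirical label frequencies exactly, which is why the output-layer update leaks the client's label balance.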
Rebuttal 1: Rebuttal: Thank you very much for the valuable comments. Please find our responses below. **Q4.1** A potential privacy issue may be raised since it needs to access the individual client update (i.e., gradient) information. More client's data information may be leaked from the gradient inversion attacks. **A4.1** In classical federated learning, a trustworthy server is allowed to access clients' model updates. In the proposed HiCS, no additional information is requested from the clients, and thus HiCS does not pose more privacy risk than other client selection methods. Moreover, as we also discussed in response **R.1** in the **Author Rebuttal** block, HiCS can perform well in settings where clients have different objective functions, such as in the **FedProx** framework [1], and use adaptive optimizers such as **Adam** [2]. In the gradient inversion attacks mentioned by the reviewer, the victim model is typically trained with a cross-entropy loss and an SGD optimizer. The information contained in the gradients is eliminated by the Adam optimizer because varying step sizes are assigned to different components of the gradients to compute model updates. **Q4.2** Why are the first training rounds set to $t \leq \lceil N/K \rceil$? **A4.2** Global rounds $t = 1$ to $t =\lceil N/K \rceil$ are a warm-up phase for the server to collect local updates from clients and perform clustering based on the estimated data heterogeneity. We do not require all clients to attend local training and upload model updates, because that would violate the assumption of partial participation. After $\lceil N/K \rceil$ rounds, all available clients in the system have been selected at least once, so the server is able to perform clustered sampling in the following global rounds. More details can be found in **R.2** in the **Author Rebuttal** block. Reference: [1] Li T, Sahu A K, Zaheer M, et al. Federated optimization in heterogeneous networks[J]. 
Proceedings of Machine Learning and Systems, 2020. [2] Kingma D P, Ba J. Adam: A method for stochastic optimization[J], ICLR 2015 --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses to my questions. I would like to keep my current score. --- Reply to Comment 1.1.1: Comment: We’re happy to clear up any confusion the reviewer may have had. We’d like to express our gratitude once again for the reviewer’s invaluable assistance in improving our work.
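The warm-up phase from A4.2 can be illustrated with a simple round-robin schedule; this is a hypothetical sketch, and the actual server may sample clients without replacement rather than in a fixed order:

```python
import math

def warmup_schedule(N: int, K: int):
    """Round-robin warm-up: sampling K clients per round, every one of the
    N clients is selected at least once after ceil(N/K) rounds."""
    rounds = math.ceil(N / K)
    clients = list(range(N))
    return [clients[t * K:(t + 1) * K] for t in range(rounds)]

schedule = warmup_schedule(N=50, K=8)
covered = {c for rnd in schedule for c in rnd}
print(len(schedule), len(covered))  # 7 rounds cover all 50 clients
```

After these `ceil(N/K)` rounds the server has one update from every client, which is exactly what it needs to estimate each client's heterogeneity before clustered sampling begins.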
Summary: The paper addresses data heterogeneity by clustering clients via the gradients of the output layer, distinguishing clients with balanced data from those with imbalanced data. HiCS-FL assigns different importance to the clusters according to their average estimated data heterogeneity. The paper found that there is a correlation between the gradient of the output layer and the label distribution. Strengths: The paper is very well written. The paper did a good job of summarizing what others have done and how HiCS-FL differs. Figures 2 and 3 show considerable gains over the baselines on common FL datasets. The method doesn't incur computation or communication overhead and leads to a 2x speed-up over other methods. Weaknesses: The approach of clustering to address heterogeneity might not be practical, as there are simpler methods that can achieve the same goal. The experiments were done on a small set of clients, 50. As the number of clients increases, the effect of heterogeneity can become more pronounced. Technical Quality: 3 Clarity: 3 Questions for Authors: Data heterogeneity can be mitigated with pre-training and personalization. Would pre-training make clustering unnecessary? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper doesn't list limitations, but I think a limitation is the experimental setting. Using Dirichlet to generate clients is unrealistic (I know this is common in FL papers), and the paper focuses only on classification tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the valuable comments. Please find our responses below. **Q3.1** The approach of clustering to address heterogeneity might not be practical, as there are simpler methods that can achieve the same goal. The experiments were done on a small set of clients, 50. As the number of clients increases, the effect of heterogeneity can become more pronounced. **A3.1** As we also stated in our response **A2.2** to reviewer **yLvJ**, clustered federated learning is an alternative approach to tackling the challenges of data heterogeneity. There, the server clusters clients into multiple cohorts and trains personalized models reflecting these cohorts' preferences. However, this approach contradicts our aim of training a single global model with strong generalization capabilities and requires additional resources to store multiple personalized models on the server. Please note that our method can readily be applied to cross-silo federated learning scenarios where a server attempts to train a strong model that can be adapted to downstream tasks efficiently, involving multiple data centers such as hospitals or banks. In cross-device federated learning settings, HiCS can prove valuable even when thousands of clients are involved, as it can recognize the valuable clients with class-balanced data. **Q3.2** The paper doesn't list limitations, but I think a limitation is the experimental setting. Using Dirichlet to generate clients is unrealistic (I know this is common in FL papers), and the paper focuses only on classification tasks. **A3.2** We rely on the Dirichlet distribution since using different concentration parameters helps model the heterogeneity of real-world datasets. The limitation of previous FL papers is in using only one concentration parameter to generate data partitions, which is indeed unrealistic. In this paper, however, we take a step further and use multiple concentration parameters to generate data partitions. 
Admittedly, the number of concentration parameters $\alpha$ is still not quite sufficient to simulate the full richness of real-world settings. We appreciate the reviewer pointing out this limitation – we are committed to exploring a wider range of real-world datasets in our future work. **Q3.3** Data heterogeneity can be mitigated with pre-training and personalization. Would pre-training make clustering unnecessary? **A3.3** As shown by the experimental results on the Mini-ImageNet dataset in the paper and the supplementary experiments in this rebuttal, the pretrained models on ImageNet still encounter performance degradation when faced with severe data heterogeneity. On another note, personalized federated learning (pFL) has a fundamentally different goal than general federated learning. In pFL, the server does not attempt to train a global model with strong generalization capacity but rather various personalized models based on clients' preferences, which may perform well on local data distributions. However, these personalized models cannot work when the inference data distribution is unknown or shifts from training, which requires strong generalization ability of the global model. --- Rebuttal Comment 1.1: Comment: > A3.3 As shown by the experimental results on Mini-ImageNet dataset in the paper and the supplementary experiments in this rebuttal, the pretrained models on ImageNet still encounter performance degradation when faced with severe data heterogeneity. I don't understand the setup for "experimental results on Mini-ImageNet dataset in the paper and the supplementary experiments in this rebuttal"; which table should I be looking at? I checked the paper and don't see which dataset you pretrained with. --- Rebuttal 2: Title: Experiments with pretrained models Comment: We thank the reviewer for pointing out this potential confusion, which may also affect other reviewers, and for helping us improve our paper.
As stated in the General Settings in Appendix A1.1 (line 456), all experiments were conducted on Mini-ImageNet using a pretrained ResNet18. To clarify, we kept the feature extractor of the pretrained model (on ImageNet1K) frozen and only fine-tuned the fully connected layers on the target dataset. As a result, the test accuracy on the target dataset improves from its initial value, similar to training from scratch. The experimental results on Mini-ImageNet indicate performance degradation under severe data heterogeneity. We apologize for the unclear description of our experimental setup and will clarify the pretraining settings in the main paper in the revised version.
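The multi-concentration Dirichlet partitioning described in **A3.2** can be sketched as follows. This is a toy sketch, not the authors' code; the function name `dirichlet_partition` and all parameter choices are hypothetical, and only the idea of drawing each client's label distribution from a Dirichlet with its own concentration parameter comes from the rebuttal.

```python
# Toy sketch of multi-alpha Dirichlet partitioning (hypothetical helper, not
# the authors' implementation): each client i draws its label distribution
# from Dirichlet(alphas[i] * ones(C)), so small alphas yield imbalanced
# clients and large alphas yield nearly balanced ones.
import numpy as np

def dirichlet_partition(labels, n_clients, alphas, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    # Shuffled index pools, one per class, consumed as clients claim samples.
    pools = {c: list(rng.permutation(np.where(labels == c)[0])) for c in classes}
    n_per_client = len(labels) // n_clients
    shards = []
    for i in range(n_clients):
        p = rng.dirichlet([alphas[i]] * len(classes))   # client's label mix
        counts = rng.multinomial(n_per_client, p)
        shard = []
        for c, k in zip(classes, counts):
            for _ in range(min(k, len(pools[c]))):      # don't overdraw a class
                shard.append(pools[c].pop())
        shards.append(shard)
    return shards
```

Mixing values such as `alphas = [0.1] * 25 + [10.0] * 25` produces a population containing both severely imbalanced and nearly balanced clients, unlike the single-alpha partitions the rebuttal criticizes as unrealistic.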
Summary: The paper addresses the challenges posed by non-IID data in FL systems, particularly under communication constraints where only a small fraction of clients can participate in each training round. It introduces HiCS-FL, a novel client selection method. HiCS-FL estimates the statistical heterogeneity of a client’s data using updates from the network’s output layer to cluster and sample clients more effectively. Based on the heterogeneity information, clients are clustered, and those with more balanced data are preferentially sampled. Experimental results demonstrate HiCS-FL's superior performance compared to state-of-the-art methods. Strengths: The introduction of a hierarchical clustering-based method that considers data heterogeneity is a significant advancement over existing client selection techniques. The solution considers the balanced and imbalanced client data to train the model, which is a very interesting perspective. The extensive experiments on multiple datasets and comparison with several baselines demonstrate the robustness and generalizability of HiCS-FL. This paper is well-written. Weaknesses: - The paper primarily focuses on label imbalance when discussing data heterogeneity. However, other types of imbalances, such as feature imbalances within the data, are not considered. Addressing these other forms of data imbalance could provide a more comprehensive solution. - The method involves collecting updates from the fully connected layer, which may inadvertently reveal the label distribution of a client’s data. This could raise privacy concerns, as it potentially compromises the confidentiality of the client's data distribution. - The paper does not compare HiCS-FL with other relevant algorithms, such as Oort and Auxo, which also focus on client selection and clustering. Including these comparisons or discussions would provide a clearer picture of HiCS-FL's relative performance and effectiveness. 
[1] Oort: Efficient Federated Learning via Guided Participant Selection [2] Auxo: Heterogeneity-Mitigating Federated Learning via Scalable Client Clustering - The assumption that clients can be clustered into M groups, where M is larger than the number of selected clients (K) per round, may not be practical in large-scale systems. Specifically, the paper assumes that all clients are available during the initial clustering phase and that no new clients join the system later, which is unrealistic in dynamic FL environments. There might be additional overhead associated with the proposed clustering method. The paper should ensure that the experimental comparisons account for this overhead to maintain fairness in the evaluation of HiCS-FL’s performance. Technical Quality: 2 Clarity: 3 Questions for Authors: - The rationale behind using gradients from different rounds for clustering clients is not clearly explained. Given that gradients can vary significantly across rounds, further clarification is needed to justify this approach and its impact on the effectiveness of the clustering process. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. Our answers addressing unique questions are below; responses to concerns raised by multiple reviewers are in the Author Rebuttal block for the sake of brevity. Note that **Tables 4-5** can be found in the **pdf** file at the bottom of the **Author Rebuttal** block. **Q2.1** The paper primarily focuses on label imbalance when discussing data heterogeneity. However, other types of imbalances, such as feature imbalances within the data, are not considered. **A2.1** We appreciate the reviewer's point regarding feature skew, wherein clients may possess data with analogous features but divergent labels; this indeed differs from the label skew addressed in our paper. The inconsistency between local models trained on data exhibiting feature skew certainly hinders the convergence of the global model. In scenarios characterized by pronounced feature skew, personalized federated learning (pFL) [1], which aims to train multiple personalized models based on clients' preferences, may help mitigate this challenge. While it appears difficult to simultaneously address both label and feature skew due to their inherently opposing nature, it would certainly be of interest to deploy our method within a pFL framework that allows the server to train multiple global models catering to varied client preferences. **Q2.2** The paper does not compare HiCS-FL with other relevant algorithms, such as Oort and Auxo, which also focus on client selection and clustering. **A2.2** We thank the reviewer for pointing out these two relevant papers. Oort relies on the idea of assigning higher importance to the clients with higher training loss and applies the UCB algorithm with the training loss as the reward function to balance exploration and exploitation. Auxo is an interesting paper attempting to address the challenge of data heterogeneity via personalized models aggregated by clustering.
While this presents an exciting line of work, it conflicts with the motivation of training a global model with strong generalization ability pursued by our paper, rendering a fair comparison between the two methods rather difficult. Therefore, we only implemented Oort and compared it with our HiCS (see **Table 4**). Nevertheless, Auxo introduced an interesting idea of online clustering without assigning a fixed number of clusters, which may potentially be adopted by our method. In any case, we are committed to citing and discussing these two highly relevant works in our revised manuscript. **Q2.3** The assumption that clients can be clustered into $M$ groups, where $M$ is larger than the number of selected clients $K$ per round, may not be practical in large-scale systems. **A2.3** As discussed at the end of Section 3.3, the distance function in Equation (9) can be reduced to the conventional cosine similarity when clients exhibit similar levels of statistical heterogeneity, despite potential differences in data distribution. Under these circumstances, our HiCS-FL can recover the performance of the previously established Clustered Sampling (CS) approach [2]. While CS suggests that the number of clusters $M$ should be greater than or equal to the number of selected clients $K$, our HiCS-FL does not require $M>K$ but adheres to the CS settings to ensure a fair comparison. To elucidate the impact of the number of clusters, we conducted supplementary experiments with HiCS-FL using varying numbers of clusters $M$ and compared these results to those obtained with $M=K$ as presented in the paper. The results of those experiments can be found in **Table 5**. As shown there, HiCS performs well with smaller $M < K$ as long as $M$ is not too small (e.g., $M > 0.3K$). **Q2.4** The rationale behind using gradients from different rounds for clustering clients is not clearly explained.
Given that gradients can vary significantly across rounds, further clarification is needed to justify this approach and its impact on the effectiveness of the clustering process. **A2.4** The optimal approach would indeed involve collecting updates from all clients within the same round and using those for clustering. However, this may not be feasible due to clients' varied availability and would undermine the goal of client selection aimed at reducing communication overhead. Consequently, we adopt a compromise by gathering updates from clients during the initial $t \leq \lceil N/K \rceil$ rounds, thereby adhering to communication constraints. As demonstrated in Appendix A.4, the update of the output layer $\Delta \mathbf{b}^{(k)}$ is influenced by the data distribution of client $k$ and the global model's capability. During the early stages of federated learning (FL) training, the global model's capability is relatively consistent, allowing us to treat updates collected in these rounds as equivalent. This "warm-up" strategy is also employed by CS [2] and FedCor [3] to avoid requiring all clients to attend local training in the same round. Even though gradients computed in different global rounds may differ, they contain a pattern corresponding to the data distribution that can be utilized for clustering, as Auxo (the paper brought up by the reviewer) does to generate cohorts in online clustering. Reference: [1] Tan A Z, Yu H, Cui L, et al. Towards personalized federated learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022 [2] Fraboni Y, Vidal R, Kameni L, et al. Clustered sampling: Low-variance and improved representativity for clients selection in federated learning, ICML 2021 [3] Tang M, Ning X, Wang Y, et al. FedCor: Correlation-based active client selection strategy for heterogeneous federated learning, CVPR 2022 --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I've updated my score.
--- Reply to Comment 1.1.1: Comment: We’re happy to clear up any confusion the reviewer may have had. We’d like to express our gratitude once again for the reviewer’s invaluable assistance in improving our work.
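The warm-up phase described in **A2.4** (collecting output-layer updates from all $N$ clients within the first $\lceil N/K \rceil$ rounds while selecting only $K$ clients per round) could be scheduled as below. This is a minimal sketch assuming fixed client availability, not the authors' code.

```python
# Minimal warm-up scheduling sketch (assumes fixed client availability):
# shuffle the N clients once and walk through them K at a time, so every
# client uploads its output-layer update within ceil(N/K) rounds.
import math
import random

def warmup_schedule(n_clients, k_per_round, seed=0):
    order = list(range(n_clients))
    random.Random(seed).shuffle(order)
    return [order[i:i + k_per_round] for i in range(0, n_clients, k_per_round)]
```

For $N = 50$ and $K = 8$ this yields $\lceil 50/8 \rceil = 7$ warm-up rounds, after which clustering can proceed on the collected updates.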
Summary: The authors propose a novel client selection method to address federated learning scenarios where clients exhibit varying degrees of data imbalance. The authors estimate the label distribution entropy of clients on the server side using the gradient of the output layer's bias. Based on this estimation, they cluster and sample clients, utilizing those with more balanced data to train the global model. Experiments in the paper demonstrate that the proposed method, HiCS-FL, achieves better and faster convergence compared to baselines. The authors provide detailed theoretical proofs to support their approach. Strengths: 1. This paper proposes HiCS-FL, a new federated learning method designed to adaptively handle clients with varying label distributions. It achieves better convergence and greater efficiency compared to baseline methods in experiments. 2. HiCS-FL utilizes the gradients of the output layer's bias to estimate clients' statistical data heterogeneity. This estimation is then used to cluster and sample clients, enabling efficient training of the global model with relatively data-balanced clients. 3. This paper provides a detailed theoretical analysis of HiCS-FL. Weaknesses: See questions. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Is it due to privacy concerns that clients cannot directly compute and upload label distribution entropy to the server? If so, does estimating label distribution entropy through $ \Delta \mathbf{b}^{(k)} $ still pose privacy concerns for clients? Additionally, could you consider conducting an experiment comparing the federated learning performance (accuracy, convergence speed, etc.) using the true label distribution entropy versus the gradient-based label distribution entropy estimation proposed in this paper? 2. Why wasn't an experimental analysis conducted on different values of $ \lambda $ in Eq. 9? 3. Will the final trained global model be used for inference on local clients? 
If so, wouldn't the client selection method that prefers more data-balanced clients be suboptimal for local clients with severe data imbalance? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors mention leaving studies of system heterogeneity to future work in the first paragraph of the Introduction, but there is no further discussion on the limitations of the proposed approach. There are no concerns raised about the societal impacts of the research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the informative comments and valuable questions. Our answers addressing unique questions are below; responses to concerns raised by multiple reviewers are in the Author Rebuttal block for the sake of brevity. Note that **Tables 1-3** can be found in the **pdf** file at the **bottom** of the **Author Rebuttal** block. **Q1.1** Is it due to privacy concerns that clients cannot directly compute and upload label distribution entropy to the server? If so, does estimating label distribution entropy still pose privacy concerns for clients? **A1.1** Please note that other reviewers raised the same question. For our response, please see **R.1** in the **Author Rebuttal** block. **Q1.2** Additionally, could you consider conducting an experiment comparing the federated learning performance (accuracy, convergence speed, etc.) using the true label distribution entropy versus the gradient-based label distribution entropy estimation proposed in this paper? **A1.2** Thanks for the suggestion! To address it, we conducted additional experiments in the oracle settings where the server has access to the true entropy of the clients' data distribution. As expected, providing the true entropy information further improves the accuracy and speeds up the convergence of the proposed method. The results are shown in **Table 1**; for convenience, the variant of HiCS that relies on the true entropy values (provided by the clients) is labeled as 'Oracle'. **Q1.3** Why wasn't an experimental analysis conducted on different values of $\lambda$ in Eq. 9? **A1.3** Following up on the reviewer's question, in **Table 2** we show the results of experiments for different values of $\lambda$. As can be seen there, our method achieves the best performance with $\lambda = 10$.
As a general comment (and as discussed at the end of Section 3.3), $\lambda$ has to be set to a value that ensures the second term in equation (9) dominates the first term; if $\lambda$ is too small, the clustering prioritizes the cosine similarity of gradients over data heterogeneity. In the extreme case of $\lambda=0$, our algorithm collapses to the prior work Clustered Sampling [3]. **Q1.4** Will the final trained global model be used for inference on local clients? If so, wouldn't the client selection method that prefers more data-balanced clients be suboptimal for local clients with severe data imbalance? **A1.4** Since the original motivation of Federated Learning is to collaboratively train a global model with strong generalization capability, the experimental results shown in the manuscript are performed on the global test dataset with balanced data. Nevertheless, having obtained such a global model, the clients certainly may personalize the global model by fine-tuning it on local data. To investigate this more closely, we conducted supplementary experiments where the global model is personalized by fine-tuning on local data and compared our HiCS with several other client sampling methods. The results of inference tests on the local data from the same distribution as in fine-tuning are shown in **Table 3**. The results demonstrate the value of HiCS training strategy – namely, aiming at achieving strong generalization ability on all classes – as it sets up the fine-tuned model for success. Indeed, the fine-tuned HiCS generally outperforms the competing methods on personalized/local inference tasks. **Q1.5** The authors mention leaving studies of system heterogeneity to future work in the first paragraph of the Introduction, but there is no further discussion on the limitations of the proposed approach. 
**A1.5** To partly address the reviewer’s comment, we conducted supplementary experiments where we investigated the impact of clients’ availability on the proposed method. Please note that another reviewer raised the same point -- we provided the same response as **R.2** in the Author Rebuttal block. As a general comment, please note that we consider system heterogeneity and data heterogeneity to be complementary challenges in federated learning. System heterogeneity encompasses variations in computational resources, communication bandwidth, and fault tolerance, which are unrelated to the statistical heterogeneity of client data. The former may cause difficulties when clients with valuable balanced data cannot attend a training round due to poor resources; in such scenarios, our HiCS-FL must manage the trade-off between data heterogeneity and training efficiency. We will add this brief discussion to the conclusion section of the manuscript. In this paper, the primary focus is on enhancing the generalization capability of a global model trained within the federated learning framework, while the system heterogeneity remains an exciting topic for future extensions of our work. Reference: [1] Li T, Sahu A K, Zaheer M, et al. Federated optimization in heterogeneous networks[J]. Proceedings of Machine learning and systems, 2020 [2] Kingma D P, Ba J. Adam: A method for stochastic optimization[J], ICLR, 2015 [3] Fraboni Y, Vidal R, Kameni L, et al. Clustered sampling: Low-variance and improved representativity for clients selection in federated learning, ICML, 2021 --- Rebuttal Comment 1.1: Comment: Thank you for the authors efforts in addressing the review comments in the rebuttal. Overall, the authors' responses and new experiments have resolved my concerns. I am willing to increase the rating. It is interesting that the authors found relevance between the output layer bias gradients and class distribution entropy, and applied it to federated learning. 
I hope that in the future, they might consider how to use this in other tasks like regression, not only classification. --- Reply to Comment 1.1.1: Title: Thank you for your contribution Comment: We’re happy to clear up any confusion the reviewer may have had. We’d like to express our gratitude once again for the reviewer’s invaluable assistance in improving our work, and we are open to incorporating additional tasks, such as regression, in future work.
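The role of $\lambda$ discussed in **A1.3** above can be illustrated with a toy distance function. The exact form of Equation (9) is not reproduced here; this sketch merely assumes a cosine-distance term between output-layer updates plus a $\lambda$-weighted gap between estimated entropies, so that $\lambda = 0$ reduces to pure cosine distance, as the rebuttal describes.

```python
# Illustrative distance combining gradient direction with a lambda-weighted
# heterogeneity gap (assumed form, not the paper's exact Eq. (9)):
#   d(i, j) = (1 - cos(g_i, g_j)) + lam * |H_i - H_j|
import numpy as np

def pairwise_distance(grads, H, lam):
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    cos_dist = 1.0 - g @ g.T                    # 0 for aligned updates
    ent_gap = np.abs(H[:, None] - H[None, :])   # estimated-entropy gap
    return cos_dist + lam * ent_gap
```

With a small $\lambda$, two clients whose updates point the same way are always grouped together; with a large $\lambda$, a client with balanced data is kept apart from an imbalanced client even when their updates are aligned.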
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their time and valuable comments. We have attempted to address all the points they raised. Excerpts from their most significant/repeated questions, followed by our responses, are below. Please note that the **tables** reporting new experimental results are in the **pdf** file at the **bottom** of this block. **W.1** : The method involves collecting updates from the fully connected layer, which may inadvertently reveal the label distribution of a client’s data. This could raise privacy concerns, as it potentially compromises the confidentiality of the client's data distribution. **R.1** While the entropy of a client's label distribution alone does not reveal critical information about the client’s data (i.e., does not provide opportunities for privacy attacks), if combined with other information (e.g., which classes are in the dataset), it may indeed cause privacy concerns. This, in turn, may compel the client to withhold the true entropy values, which motivated us to develop the method in the manuscript. Please note that, as emphasized in the title of Section 3.2.2, our method characterizes the level of data heterogeneity by relying on a proposed proxy rather than the actual entropy – this proxy, $\hat{H}(\mathcal{D}^{(k)})$, is computed by relying on the label distribution of client $k$ but is not identical to the true entropy. As stated in Theorem 3.3, if client $u$ has a more balanced label distribution than client $k$, $\hat{H}(\mathcal{D}^{(u)})$ is expected to be larger than $\hat{H}(\mathcal{D}^{(k)})$. Building on this observation, we proposed a method capable of identifying the clients with large $\hat{H}(\cdot)$, implying more balanced data, without exposing the true entropy values. In fact, establishing a precise relationship between $\hat{H}(\mathcal{D}^{(k)})$ and the true entropy of client $k$ appears rather challenging. 
Note that the analysis involving expressions (6)-(8) in the paper is predicated upon the assumption that an FL system is running vanilla FedAvg with the SGD optimizer, a setting indeed potentially vulnerable to privacy attacks due to the linear relationship between model updates and gradients. However, we also demonstrated that our method, HiCS-FL, can readily be implemented within the **FedProx** [1] framework trained with the **Adam** [2] optimizer, and outperforms competing client selection methods (experiments on CIFAR10 and Mini-ImageNet). Adam adaptively assigns different step sizes to gradient components according to their magnitudes, so the true label-distribution information contained in the gradients cannot be recovered. Finally, please note that our method **does not** require any information beyond what is already collected by the servers in federated learning systems, where local model updates (including the output layers) are typically collected by the server for aggregation. We certainly appreciate the reviewer’s comment – further close examination of privacy issues is a major potential extension of the presented work. **W.2** The paper assumes that all clients are available during the initial clustering phase and that no new clients join the system later, which is unrealistic in dynamic FL environments. **R.2** The purpose of the “warm-up” phase ($t \leq \lceil N/K \rceil$) is to collect updates of the output layer from all the available clients in the system in order to facilitate clustering. Although we conducted all the experiments in the setting where clients have fixed availability, our HiCS-FL does not assume all the clients are available in the “warm-up” phase and can be adapted to more practical scenarios where clients have dynamic availability, as explained and illustrated by the experiments below. In such a scenario, the warm-up phase can be implemented by the available clients at the beginning of training.
The proposed hierarchical clustered sampling is then implemented only among the available clients; the available clients with more balanced data are preferred. When new clients join the system at global round $t$, the server selects these new clients at round $t+1$ to approximate their data heterogeneity. To provide further insights, we conducted additional experiments on CIFAR10; the results are reported in **Table 6**. As can be seen there, HiCS outperforms baselines that account for variations in clients’ availability. **W.3** There might be additional overhead associated with the proposed clustering method. The paper should ensure that the experimental comparisons account for this overhead to maintain fairness in the evaluation of HiCS-FL’s performance. **R.3** We discussed the computation and communication aspects of HiCS-FL in Appendix A.11. Compared to random sampling, HiCS-FL’s only additional computations are in the one-shot clustering of the updates of the clients’ fully-connected layers. Since the dimension of the bias in the fully-connected layer is typically $C$, the total number of classes, the computation needed for clustering is negligible. On another note, the competing client sampling methods require much more computation and communication resources than HiCS-FL. Specifically, Power-of-Choice and DivFL require all clients in the system to attend training in each global round; FedCor requires all clients to conduct local training in the “warm-up” phase; Clustered Sampling conducts clustering on the entire updates of local models. These additional operations present significant computation overhead as compared to random sampling. Reference: [1] Li T, Sahu A K, Zaheer M, et al. Federated optimization in heterogeneous networks[J]. Proceedings of Machine Learning and Systems, 2020 [2] Kingma D P, Ba J. Adam: A method for stochastic optimization[J], ICLR 2015 Pdf: /pdf/f7cac91ace11c32b5ba50c0acb008d8606d4aba1.pdf
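The mechanism behind the proxy discussed in **R.1** can be illustrated with a toy computation. This is not the paper's estimator $\hat{H}(\mathcal{D}^{(k)})$; it only sketches the underlying fact that, for a softmax output with cross-entropy loss, the batch-averaged bias gradient of the output layer equals the mean predicted probabilities minus the label frequencies, so the update carries the client's label balance.

```python
# Toy sketch (not the paper's estimator): under softmax + cross-entropy the
# batch-averaged output-bias gradient is mean_softmax - label_freq, so an
# entropy computed from the recovered frequencies orders clients by balance.
import numpy as np

def bias_gradient(label_freq, mean_softmax):
    return mean_softmax - label_freq            # batch-averaged d(CE)/d(bias)

def entropy_of_update(grad_b, mean_softmax):
    freq = np.clip(mean_softmax - grad_b, 1e-12, None)   # recover frequencies
    freq /= freq.sum()
    return -(freq * np.log(freq)).sum()
```

With a shared early-training model (mean softmax roughly uniform), a client holding one dominant class produces a markedly lower entropy than a balanced client, mirroring the ordering claimed in Theorem 3.3.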
NeurIPS_2024_submissions_huggingface
2024
Inferring Neural Signed Distance Functions by Overfitting on Single Noisy Point Clouds through Finetuning Data-Driven based Priors
Accept (poster)
Summary: The paper presents a method for learning neural signed distance functions (SDFs) from noisy point clouds. This approach integrates the advantages of data-driven and overfitting-based methods to enhance generalization, accuracy, and inference speed. They employ statistical reasoning within local regions to refine the estimation of SDFs without requiring clean point clouds or signed distance supervision. Strengths: 1. This paper successfully combines data-driven and overfitting-based approaches, leveraging their strengths while mitigating their weaknesses. 2. This paper shows that the method enhances robustness against noise, a common real-world challenge. They effectively use local statistical reasoning to adjust the neural SDFs, improving accuracy in surface reconstruction and denoising tasks, as evidenced by the benchmark results. Weaknesses: 1. There are a lot of typos in the manuscript, e.g., In the Abstract and Introduction, "stat-of-the-art" should be "state-of-the-art"; Line 57: "Learning implicit functions have achieved" should be "Learning implicit functions has achieved"; Line 39: "overfits an neural network" should be " a neural network", "Querys", and so on. In the Abstract, the "prompt" might be "promote". It would be best to check out all the typos. 2. Dependency on initial conditions: The performance might heavily rely on the quality of the initial data-driven priors, which might limit the method's effectiveness when such priors are not well-tuned or applicable. 3. The paper does not extensively discuss the computational demands or scalability when applied to larger datasets (e.g., nuScenes and Waymo) or more complex scenarios. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How does the method perform under limited computational resources? When scaling to larger datasets, is there a significant trade-off between accuracy and computational efficiency? 2. 
Can the method maintain its performance across different types of noise or corruption that were not part of the training set? 3. How does the method perform when such priors are unavailable, or how sensitive is it to the initial conditions set by these priors? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See the "Questions" and "Weaknesses" sections above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Typos We will correct these typos and proofread the paper more carefully. 2. Impact on Performance by the Prior Since we use ground truth signed distances and a mature neural network to learn an implicit function as a prior, the network usually converges quite well. Imperfect priors should not be a concern because of our robust loss. To justify this, we conducted an experiment with priors trained for different numbers of iterations, such as 500, 1000, 1500, and 2000 (used in our paper). As shown in Fig. 5 in the rebuttal PDF, we did not see a significant difference between these reconstructions. 3. Time Complexity We reported time comparisons in Tab. 9 and Tab. 11. Using a prior provides a good object-like SDF and allows the field to be adjusted quickly. Moreover, our loss, which infers information in local regions, is also more efficient than the global strategy, since we avoid points and queries that are far away from each other; please read "G4. Why Local Patches Work Better" above for more details. Thus, our method is much more efficient than globally overfitting based methods. The time is recorded on a single RTX3090 GPU card. 4. Results on nuScenes We did report results on complex scenarios, such as the results on KITTI in Fig. 8 and Paris-rue-Madame in Fig. 9. Both of these results are produced on large-scale real scans. Similarly, we additionally report a result on a scene from nuScenes in Fig. 6 in the rebuttal PDF, where our reconstructed mesh surface is closer to the points and not inflated like N2NM's result. All these results indicate that our method can handle very large-scale and complex real scans. 5. Running with Limited Resources Limited computational resources will negatively impact the speed but not the accuracy. As we explained in "G4. Why Local Patches Work Better" in the rebuttal above, we train each patch in a batch, which makes it easy to run our method on multiple GPU cards in parallel.
Another trade-off between accuracy and computational efficiency is to downsample the noisy point cloud. We additionally conducted an experiment with downsampled noisy points in Fig. 2 and Tab. 1 in the rebuttal PDF. We can see that our method works well with far fewer points while also saving some time. 6. Noise Types Please read "G6. Gaussian Noise" in the rebuttal above for more details. 7. Corruptions Since our method targets inferring SDFs from single noisy shapes, the shapes we used are mainly corrupted by noise, while the prior is learned on clean shapes that are not corrupted at all. However, we found our method can generalize the prior to unseen shapes or scenes quite well. As with the objects in SRB or FAMOUS in Fig. 4 and Fig. 5, humans in D-FAUST in Fig. 6, and scenes in 3D Scene in Fig. 7, our prior did not see these kinds of data during training. Similarly, our method also did not see any corruptions, such as missing structures on real scans like KITTI in Fig. 8, Paris-rue-Madame in Fig. 9, and nuScenes in Fig. 6 in the rebuttal PDF, but it handles these corruptions quite well. This is because our statistical reasoning loss can infer information in local regions in an overfitting manner. 8. Results without Our Prior Using a data-driven prior is one of our contributions, combining the data-driven prior with the overfitting-based strategy. Our ablation studies in Tab. 9 justify the effect of the prior. Using either random parameters in the network ("Without Prior") or a randomly initialized shape code ("Without Prior", "Without Embed") means no prior is used in those experiments, which both lowers the accuracy and slows down the convergence. --- Rebuttal 2: Comment: Hi, it's near the end of the rebuttal period. Could you please respond to the rebuttal? --- Rebuttal Comment 2.1: Comment: Dear reviewer 76DM, As the reviewer-author discussion period is about to end, can you please let us know if our rebuttal addressed your concerns?
If not, we would be glad to use the remaining time to provide further explanation and clarification. Thanks, The authors
Summary: The presented work tackles the problem of reconstructing a shape from a noisy point cloud (PC) into an implicit representation, using the signed distance function (SDF). Recent methods are categorised into i) data-driven approaches that learn a shape prior with a dataset of training shapes, with poor generalization but generally faster, or ii) approaches that overfit on a single PC at a time, with better generalization but slow convergence. The paper proposes a middle ground: finetuning a pre-trained shape prior model to a single PC, using a novel approach that performs "local statistical reasoning": supervising local parts of the shape with multiple sampled noisy PCs. The method is experimentally validated against multiple baselines, either data-driven or overfitting-based, and on multiple datasets, showing better reconstruction accuracy at lower convergence times. A final ablation study shows that the prior and locality of the loss play a role in the method's performance. Strengths: 1. Experiments are extensive: the PC reconstruction is evaluated against multiple baselines (both data-driven and overfitting-based) on various datasets (ShapeNet, ABC, FAMOUS, SRB, and D-FAUST). In nearly all cases and presented metrics, the proposed approach is the best. 2. I found the locality approach to solving the problem interesting; supervising parts of the shape at a time instead of the full one. 3. Using a shape prior that is unrelated to the target object (such as a ShapeNet prior for FAMOUS reconstruction) also works. This suggests that this approach is applicable even if the target shape is from an unknown category (and thus helps generalization). 4. The method is fairly simple and clear, which should make it easily usable. Weaknesses: 1. The main pipeline is fairly simple (fine-tuning a pre-trained model).
I believe the locality of the "statistical reasoning" is perhaps the main novelty, and the ablation shows that it improves accuracy and convergence time (§ at L.342), but it is not discussed or explained why using a local approach instead of a global one improves convergence and accuracy. 2. The writing could be improved. More specifically, the experimental results on the different datasets all make more or less the same points and so could be condensed into something more compact. This could free up some space to expand other parts and increase their clarity, e.g., the related works and the "Neural Signed Distance Function." paragraph (L.101). 3. Throughout the work, the noise is assumed (additive) Gaussian. While this is not a weakness in itself, this is still an assumption that should be explained and made clear: 1. Why is a Gaussian noise model good? Is LiDAR data, for instance, usually corrupted by Gaussian noise, and what real use cases does it apply to? Generally speaking, the motivation of the work could be expanded beyond "reconstructing a PC to an SDF". 2. "Statistically, the expectation of the zero-level set should have the minimum distance to all the noisy point splitting." (L.131-132) This is true assuming Gaussian noise with expectation 0. With other noise types the loss may not perform as well (e.g., "Text removal" in Noise2Noise [1]). [1] Lehtinen et al., Noise2Noise: Learning Image Restoration without Clean Data, ICML 2018. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What is the reasoning behind using the locality in the "statistical reasoning" and its better performance? 2. For the ShapeNet dataset, is it per-category and then averaged, or trained on multiple categories combined? (and which ones?) Same questions regarding the SRB/FAMOUS experiments with the ShapeNet prior. 3. What are the PC sizes? It's hard to appreciate the other parameters without this info. What are the dataset sizes in number of training and test shapes? 4.
How does the prior help if it apparently does not need to be from the same dataset as the test shapes (e.g., SRB/FAMOUS experiments)? Is it actually pre-training on any valid SDFs that helps? If so, how do you think it compares to the simple SAL [2] geometric initialization? 5. Why are the metrics (CD-L2/L1, NC, F1-score) not used consistently through the datasets and results? [2] Atzmon and Lipman, SAL: Sign Agnostic Learning of Shapes from Raw Data, CVPR 2020. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations are briefly mentioned in the appendix regarding noise level. What kind of noise level is considered too big? It might also be worth commenting on noise types (see Weakness 3.): does the method work only for Gaussian noise? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
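The reviewer's point about the zero-mean Gaussian assumption (Weakness 3.2) rests on a simple statistical fact: with zero-mean noise, the expectation of noisy observations is the clean surface point. A small hedged sketch with synthetic data (not the paper's code; all names are illustrative) demonstrates this:

```python
import numpy as np

rng = np.random.default_rng(0)
clean_point = np.array([0.3, -0.2, 0.7])

# Many independent zero-mean Gaussian corruptions of the same surface point.
noisy = clean_point + rng.normal(scale=0.05, size=(20000, 3))

# The empirical mean recovers the clean point up to sampling error, which is
# the property a noise2noise-style loss relies on; biased noise (e.g. the
# "Text removal" case in Noise2Noise) would break this.
assert np.allclose(noisy.mean(axis=0), clean_point, atol=0.01)
```

With a non-zero-mean corruption the recovered surface would be shifted by the noise mean, which is the essence of the reviewer's concern.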
Rebuttal 1: Rebuttal: 1. Contribution and Novelty Our novelty is not only the loss but also the way of generalizing the prior. Please see G1-G3 in our rebuttal above. 2. Why Local Patches Work Better Please read “G4. Why Local Patches Work Better” in our rebuttal above for the analysis. 3. Writing We will follow your advice and polish our writing in the revision. 4. Gaussian Noise Please read “G6. Gaussian Noise” in our rebuttal above for more details. 5. What is Statistical Reasoning and Why Local Patches Work Better Please read “G5. Statistical Reasoning” and “G4. Why Local Patches Work Better” in our rebuttal above. 6. Training Data-driven Priors The prior is learned on multiple shape categories of ShapeNet. We also tried learning the prior on each category separately; the performance gets even better than what we reported in the current submission. 7. Priors Used on SRB/FAMOUS As stated in Lines 237-239 and Lines 253-255, we use the prior learned from ShapeNet and generalize it to SRB/FAMOUS for inference. Both SRB and FAMOUS are quite different from ShapeNet, so our good performance on these real scans justifies the good generalization ability of our method. 8. Point Cloud Size and Training and Testing Sets For training point clouds, we used the data from [50] for fair comparison on ShapeNet, where each point cloud has 3k points. We used over 27k shapes from 13 categories in ShapeNet to learn a data-driven prior, and generalize this prior to the SRB, FAMOUS, D-FAUST, 3D Scene, KITTI, and Paris-rue-Madame datasets, which do not contain a training set. We used nearly 5k shapes from the training set of ABC to learn a prior that is generalized to the testing set, where each shape has 5k to 12k points. For testing shapes, the number of points per point cloud is determined by the dataset. We used about 7k shapes from 13 categories of ShapeNet, and each point cloud has 3k points.
On SRB, there are 5 shapes, and each shape contains about 57k to 95k points with noise. On ABC, the testing set we used includes 100 shapes, and each shape has 5k to 12k points. On FAMOUS, there are 22 shapes, and each shape has 20k points. On D-FAUST, there are 5 shapes for testing, and each shape has 200k points. On 3D Scene, there are 5 scenes, and each scene has about 3900k points. On KITTI, we use one scene which contains 13720k points. And on Paris-rue-Madame, we use one scene which contains 10000k points. We will make this clearer in our revision. 9. Difference to SAL Geometric Initialization The geometric initialization introduced by SAL uses analytically determined parameters to initialize the SDF as a sphere, which requires setting a proper radius and only determines outside and inside. Ours, in contrast, produces an object-like SDF with more details in the initialization, and more importantly, our prior can adjust the SDF quickly according to what it saw during training. We do not think just any valid SDF can help, since simple shapes only provide an SDF with a boundary indicating inside or outside, but cannot adjust the SDF based on any experience. We conducted an additional experiment comparing different initializations in Tab. 2 and Fig. 4 in the rebuttal PDF, including random initialization, geometric initialization, a simple square, and ours. We can see that our prior reconstructs more accurate surfaces from single noisy point clouds in much less time than any of the other initializations. 10. Metrics on Different Benchmarks For fair comparisons, we follow previous methods and report results in terms of the same metrics they used, which leads to inconsistent metrics across benchmarks. 11. Big Noise Level There is no standard defining what counts as a big noise level, but we can handle fairly large noise, as shown in Fig. 1 in the rebuttal PDF.
Noise with a variance of 5% is usually treated as large noise by previous methods, but we can handle noise with a 7% variance. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers and clarifications to my questions, and for the added results in the rebuttal PDF. I believe my weakness #3 regarding Gaussian noise has been mostly answered in the common rebuttal above, with the additional experiment in the rebuttal PDF. Additionally, I had misunderstood that all experiments were using additive Gaussian noise, while some are: > real scans with unknown noise types, such as our results on SRB in Tab.4 and Fig.4, D-FAUST in Tab.6 and Fig.6, 3D scene in Tab.7 and Fig.7, KITTI in Fig.8, and Paris-rue-Madame in Fig.9. Please see my other comment in the common rebuttal for additional points. --- Reply to Comment 1.1.1: Title: We are glad to know our rebuttal addressed your concerns Comment: Thanks for letting us know our rebuttal addressed your concerns. We will respond to your comments in the rebuttal above. Best, The authors
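The noise levels discussed in this thread (e.g. 5% vs. 7% variance) are defined relative to shape scale. As a hedged illustration of that convention, here is a minimal numpy sketch that corrupts a point cloud at a level given as a fraction of the longest bounding-box edge; the function name and the exact scaling convention are assumptions for illustration, not the authors' code:

```python
import numpy as np

def corrupt(points, level=0.05, seed=0):
    """Add Gaussian noise at a level relative to shape scale.

    `level` is interpreted here as a fraction of the longest
    bounding-box edge (an assumption for illustration).
    """
    rng = np.random.default_rng(seed)
    extent = (points.max(axis=0) - points.min(axis=0)).max()
    return points + rng.normal(scale=level * extent, size=points.shape)

# Example: a clean unit-cube point cloud corrupted at the 5% level.
clean = np.random.default_rng(1).uniform(size=(2000, 3))
noisy = corrupt(clean, level=0.05)
```

Under this convention, the "extreme" 7% setting simply means a noise scale of 0.07 times the bounding-box extent.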
Summary: The authors propose an implicit surface reconstruction method from point clouds that uses a learned prior to initialize the optimization of a neural SDF. First, a neural SDF generator based on DeepSDF [66] is trained on a dataset of shapes to learn a prior over shapes. Given a point cloud, both the shape code used as input for the generator and the parameters of the neural SDF are then optimized to fit the point cloud, using an approach based on Noise to Noise Mapping (N2NM) [50], but focusing on local patches instead of the full global point cloud. The authors show that this approach performs better than N2NM as well as several state-of-the-art methods in an extensive evaluation on several datasets. Strengths: - Using a learned prior as initialization for an optimization-based surface reconstruction method like N2NM has not been done before as far as I can tell. - Using local patches also seems like a (smaller) contribution to me (the authors show that this contributes to the performance, but it is not very clear to me at this point why it improves performance significantly - this could be improved). - Results show consistently better reconstruction accuracy than the state-of-the-art. - The evaluation is extensive and mostly well-done, and convincingly shows an advantage over the state-of-the-art. Weaknesses: - While the idea of using a learned prior as initialization of an optimization-based method has not been done for surface reconstruction as far as I can tell, it is not very surprising that this can be done, and it is also relatively straightforward to do. Both stages are essentially the same as existing work [66] and [50], with some deviation from [50] in using local patches. So all in all, the technical contribution does not seem huge, although possibly still large enough for acceptance.
- While local patches do seem to improve performance, it is not fully clear why this is the case, and the exact details of using local patches are missing some details in the exposition, making it unclear exactly how they are implemented. - A few recent methods are missing both from the related work and the comparisons (see below). - The exposition could use some additional details and clarifications in some parts (see below). Most of the weaknesses, apart from the first one, can be addressed with text changes to some extent. Given the good performance of the method and the extensive evaluation, I lean towards acceptance. Details: - The field of surface reconstruction from point clouds is quite vast, so the authors missed a few works: - Neural-Singular-Hessian: Implicit Neural Representation of Unoriented Point Clouds by Enforcing Singular Hessian, Wang et al., TOG 2023 - Iterative Poisson Surface Reconstruction (iPSR) for Unoriented Points, Hou et al., TOG 2022 - 3DShape2Vecset: A 3D Shape Representation for Neural Fields and Generative Diffusion Models, Zhang et al., TOG 2023 - Geoudf: Surface reconstruction from 3d point clouds via geometry-guided distance representation, Ren et al., TOG 2023 - Given good normals, surface reconstruction becomes much easier - the normal computation could be followed by Poisson reconstruction based on the normals for example. Therefore papers to compute oriented normals are relevant, such as: - Orienting Point Clouds with Dipole Propagation, Metzer et al., TOG 2021 - Globally Consistent Normal Orientation for Point Clouds by Regularizing the Winding-Number Field, Xu et al., TOG 2023 - SHS-Net: Learning Signed Hyper Surfaces for Oriented Normal Estimation of Point Clouds, Li et al., CVPR 2023 - The exposition could use some additional details and clarifications: - The approach for using local patches is missing some details: How many patches are used during optimization? In which order are patches evaluated? One after the other? 
Or is a random patch chosen in each iteration? This should be clarified. - The difference of the second stage to [50] could use some more discussion. The authors describe the application of [50] to local patches as the main difference (as shown in Eq. 3). But since the expectation in Eq. 3 is taken over random samples near random patches, how is this different from taking the expectation over random samples near the whole shape (as in [50])? This could use some discussion. (Or is the SDF actually optimized separately per patch? From reading I did not get this impression.) - [50] also uses a geometric consistency constraint as regularization, although such a regularization is not described by the authors. If it is not used, a discussion of why the authors removed it might be useful. - c seems to be initialized to a fixed constant vector: the average of all embeddings in the training set, while the original DeepSDF paper uses a c randomly sampled from a Gaussian. Why did the authors use a different initialization for c here? Did the average empirically perform better? This could be discussed. - In the paragraph starting at line 129, it should be stated clearly that the method described here is the method proposed in [50] (although evaluated at random samples near random local patches, rather than random samples near the whole shape). - The evaluation is quite extensive, but would benefit from a few improvements: - In the evaluation, it would be good to hear more about how the time used to fine-tune each optimization-based method was chosen. Was each result optimized until convergence? Or otherwise, how was the number of iterations/optimization time chosen? - It would be good to add DeepSDF to the comparison, as it is the closest to the initialization used for the optimization stage, and would show how much the fine-tuning improves accuracy. - In the ablation, using a large local region seems worse than using a global approach without local regions. This should be discussed.
- References [52] and [53] are duplicates Technical Quality: 3 Clarity: 2 Questions for Authors: - A clarification of the unclear details regarding local patches would help. - A clarification of how timings for methods were chosen would help as well. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have not discussed clear limitations unfortunately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Contribution and Novelty Our method is not a simple combination of DeepSDF and noise2noise. Please see G1-G3 in our rebuttal above. 2. Why Local Patches Work Better Please read “G4. Why Local Patches Work Better” in our rebuttal above for the analysis. 3. We Will Add the References You Mentioned We will add these references. The main difference between these methods and ours lies in the ability to reason on noisy point clouds in an overfitting manner. This ability significantly improves generalization performance on unseen noisy point clouds. 4. Local Patches in Optimization We randomly sample a noisy patch and a set of queries within the patch in each iteration, so no ordering is needed. We use the loss difference between two successive iterations as a metric to determine convergence, which we use to conduct time comparisons in the ablation studies. The optimization usually converges within 4000 iterations, and no more than 4000 patches are used. We will make these details clearer in our revision. 5. Difference to the Global Mapping [50] Please read “G4. Why Local Patches Work Better” in our rebuttal above before going ahead. Besides the inaccuracy and inefficiency in inference, another downside of [50] is the requirement of an additional constraint on the pulling, i.e., each travel distance should be as small as possible, which ensures the learned distance is the minimum distance to a surface, as shown in Fig. 3 of the original paper of [50]. The reason why [50] needs a constraint like this is twofold. One is that the optimization of the SDF has still not converged well during inference, which produces a lot of uncertainty, so queries cannot be pulled to the right places. The other is that the noise2noise loss only pushes the pulled queries to be as near to the noisy patch as possible, but does not constrain how the pulling is conducted.
Thus, our local patches resolve this issue with a patch limit, which also removes the need for the regularization term. Our preliminary results showed that the regularization term does not improve performance but slows down the optimization a lot. We will make this clearer in our revision. 6. Shape Embedding c Please read “G3. Traditional test-time optimization vs ours” in our rebuttal above before going ahead. Using the averaged shape code as initialization is a key point in generalizing our prior with an overfitting loss. As shown in our ablation studies in Tab. 9, using a randomly initialized shape code (“Without Embed”) has a negative impact on accuracy and efficiency. Our averaged shape code takes good advantage of the prior space and provides an object-like SDF from which to start the inference under the SDF uncertainty induced by the overfitting loss. 7. Convergence Metric We use the loss difference between two successive iterations as a metric to determine convergence, as shown in Tab. 9 and Tab. 11. The optimization usually converges within 4000 iterations, as stated in Line 164. Each result was recorded when the optimization converged. We will make these details clearer in our revision. 8. Comparisons with DeepSDF We compare with DeepSDF in Tab. 9. As stated in Lines 329-330 and the result (“Fixed Param”) in the ablation studies in Tab. 9, the auto-decoding introduced in DeepSDF cannot work well with the overfitting loss on single noisy point clouds. In almost all cases, it does not reconstruct a plausible shape. As analyzed in “G3. Traditional test-time optimization vs ours”, the signed distances at the same locations inferred by the overfitting loss from a single noisy point cloud fluctuate iteration by iteration, unlike the GT signed distances, i.e., constant values, used in DeepSDF. This makes the prior generalization very different. 9. Why Larger Patches Do Not Work Well Please read “G4.
Why Local Patches Work Better” in our rebuttal above before going ahead. With larger patches, more queries and noisy points that are far away from each other are paired to infer the SDF, which is quite similar to the global strategy [50]. This reduces reconstruction accuracy and efficiency. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed replies. Regarding the contributions and novelty, what I meant with the paper using the learned prior as initialization is not that the optimization is like in DeepSDF, but that (apart from the local patches) the optimization is very similar to Noise-to-Noise, but using DeepSDF as initialization for the neural SDF that is being fitted during the optimization. While the choice of initial shape code c is interesting, this choice is not ablated as far as I can tell (adding an ablation would be good actually, especially if it is claimed as a contribution), and it also does not add a very large contribution by itself. Overall though, given the good performance, I still think the contribution is probably large enough for acceptance. Regarding the discussion why local patches improve performance, I think the discussion provided in the rebuttal seems reasonable to some extent. If point cloud and query densities do not match across the shape, then queries may be matched to more distant points, and the local patches essentially stratify the problem by confining it to local patches. At this point though - why not use the Chamfer Distance instead? The CD would not have this problem with different point and query densities. Is it because we still want the behavior of EMD with its longer distance matches rather than CD in local patches? So overall, while this explanation is already reasonable if clarified a bit, the discussion of this point could also go a bit more in depth, especially since this is one of the main claimed contributions.
The discussion of this point (whether it is a clarified version of the current discussion, or a more in-depth discussion) should be added to the paper. Regarding the discussion why using an average shape code for c is better than a random shape code, the only relevant information I can find in the response (apart from references to the ablation) is that the average shape code gives a more 'object-like' appearance. This could use some more elaboration. What does 'object-like' mean? Is the average shape code c typically closer to the optimum? Why is this the case? Maybe because the distribution of shape codes used by typical shapes is different from a Gaussian distribution and the average is better centered in this actual distribution of shape codes, so the initial shape output by the neural SDF is already closer to the target shape compared to a random shape code? A somewhat more elaborate discussion of this point would improve the motivation of this design choice. --- Reply to Comment 1.1.1: Comment: **1. Ablation Studies on Shape Codes** We did two ablation studies related to shape codes. One is the code length in Tab.8; the other is the code initialization in Tab.9. In Tab.9, we tried random initialization for the shape code in “Without Prior” (using network parameters that are learnable and randomly initialized), “Without Embed” (using network parameters that are learnable and initialized with our prior), and “Fixed Param” (using network parameters that are not learnable and initialized with our prior). Moreover, as we mentioned in c) in “G3. Traditional test-time optimization vs ours” in the rebuttal above, we also tried various normalizations based on random initialization, which produced results too poor to report in the paper. In addition, in our preliminary results we also tried other variations of our averaged code, such as randomly selecting one shape code from the training set as the initialization.
Similarly, random selection also produces results too poor to report in the paper. We will follow your advice and reorganize our ablation study on shape codes in the revision. Please also let us know if you would like to see any other alternatives for code initialization. **2. How about CD?** You raised a great point. Actually, we followed [50] in using EMD to report our results. As a main contribution, the authors of [50] did a good job finding that EMD works while CD does not; please see Fig.4 and the ablation studies (“CD”) in Tab.11 in their paper for more. But we are different: we make everything happen in a patch rather than over the whole shape, which changes the results with CD somewhat. In our preliminary results, we found that CD produces much better results in our method than in [50], but the results are still much worse than those with EMD. Another consideration for using EMD is a fair comparison with [50]. Although CD can handle density differences between noise and queries and produces more reasonable results in local patches, it lacks the key to EMD’s success: the one-to-one correspondence. CD involves many-to-one correspondences, which are neither efficient nor targeted for statistical noise-to-noise mapping. We will follow your advice and add a discussion on this in our revision. **3. Why does the averaged shape code work?** The “object-like” SDF is basically a shape, since the code is averaged over all training shapes. With the pretrained network parameters, that is, our prior, at the very beginning, this averaged shape code can produce a shape that may be very similar to some shapes in the training set, which however is not necessarily similar to the noisy point cloud. This means that it is not nearer to the optimum or closer to the target shape, but it provides a relatively stable area for the optimization. The reason why we use the averaged shape code is threefold.
i) One is that it not only initializes an SDF with inside and outside, which is basically the same purpose as the geometric initialization introduced in SAL (initializing network parameters so that a sphere-like SDF is produced), but also inherits some geometric details from the training dataset. ii) Another is that it provides a good scope for the optimization to search for a result. This is because the area around the averaged shape code contains more shape codes representing meaningful shapes in the training set than other areas of the shape-code space. This area provides a large enough candidate pool for our inference loss, which prevents the optimization from flying away due to large changes of the shape code driven by the inferred information, such as signed distances in our case, which fluctuates iteration by iteration, even at the same locations. iii) Based on ii), the network parameters, which saw various shapes during training, can work with the shape code to quickly update the SDF according to the information inferred by the loss. These three points make our averaged shape code much better than random initialization, which provides a starting point around which there may be no learned shape codes at all, making it hard to find optimization directions afterwards. We will follow your advice and add this discussion in our revision.
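The geometric intuition debated in this thread, that an averaged code sits in a denser region of the code space than a fresh random draw, can be illustrated with a small hedged sketch. The codes below are synthetic stand-ins, not the authors' learned DeepSDF-style embedding, so this only mirrors the reviewer's hypothesis rather than verifying it for real learned codes:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_shapes = 256, 1000

# Stand-in for the learned per-shape latent codes.
train_codes = rng.normal(size=(n_shapes, dim))

c_avg = train_codes.mean(axis=0)   # averaged-code initialization
c_rand = rng.normal(size=dim)      # random Gaussian initialization

# Average distance from each initialization to the learned codes: the
# averaged code starts inside the cloud of meaningful codes, while a
# random draw typically starts far out on the high-dimensional shell.
d_avg = np.linalg.norm(train_codes - c_avg, axis=1).mean()
d_rand = np.linalg.norm(train_codes - c_rand, axis=1).mean()
assert d_avg < d_rand
```

For real learned codes the gap would depend on their actual distribution, which is exactly the elaboration the reviewer asks for.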
Summary: The paper proposes 3D shape reconstruction given a noisy point cloud using a test-time optimization approach. The paper proposes a new loss function that can work directly on noisy point clouds without the need for ground-truth normals or SDFs. The approach involves learning a network (DeepSDF) to predict SDFs and later fine-tuning the network on a single shape at test time. The learned priors reduce the test-time fitting and lead to better reconstruction results. Strengths: 1. The paper proposes a simple and easy-to-implement approach to 3D reconstruction that works on somewhat noisy point clouds. 2. The better-quality results and fast inference have broad significance in 3D reconstruction. Weaknesses: 1. In the introduction, test-time optimization is hailed as a novel thing; however, previous methods (including DeepSDF) have already proposed it. 2. The main contribution is the loss function defined using the "pulling" operation, which forces the surface defined by the projected points close to the input points to be similar to the input point clouds. However, why this loss function should work in a noisy point cloud setting is not well motivated and well explained. 3. The writing of the method section can be significantly improved. For example, the symbol $f$ seems to represent both the neural network that predicts the signed distance field and the field itself. More about it in the Questions section. 4. The use of the phrase "statistical reasoning" is quite vague. The phrase is used throughout the manuscript but does not convey anything meaningful. 5. It is not clear whether the baselines in the experiment section are also trained on noisy data to make them robust. 6. Comparison with the Points2Surf [21] method is missing. I think it works quite well on the shapes in Figure 4. 7. What does "extreme noise" mean in Table 13? What does it translate to as a percentage of the scale of the scene? 8. How do you achieve 100% sparsity in Table 14? Does it imply no points? 9.
A noise level of 0.5% is quite low for claiming that the method is robust. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Please improve the notation in the method section. The vectors should be bold. Use different symbols for the parameterized function and its output. 2. The motivation of the loss function is not clear. Perhaps a diagram would enhance understanding for the reader. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: 1. Limitations regarding pose variations and incomplete data (holes) are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Why pulling works Pulling queries towards surface points has been widely used to estimate a signed distance function from a point cloud. We can use a neural network to infer the SDF by pulling randomly sampled queries to their nearest surface points using the predicted signed distances and gradients. We train this network by minimizing the distance between the pulled queries and the queries’ nearest surface points. However, pulling queries directly on a noisy point cloud is not a good way to infer an SDF, since the queries’ nearest surface points cannot be found accurately due to noise corruption. To resolve this issue, we took inspiration from [1] and [2], which infer clean 2D and 3D information from noisy observations, respectively. Our statistical reasoning in a local region estimates an SDF by minimizing the distance between different sets of pulled queries, randomly sampled around a noisy patch, and that commonly shared noisy patch. The rationale is that the different sets of pulled queries are forced to be as near to the same noisy point patch as possible; when we do this over different noisy point patches that overlap with each other, the optimization converges to a consistent zero-level set. [1] Noise2Noise: Learning Image Restoration without Clean Data [2] Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping 2. Notation We will revise the notation to make it more specific and clear. 3. Statistical Reasoning Please read “G5. Statistical Reasoning” in the rebuttal above. 4. Baselines Data-driven methods such as IMLS and ConOcc were pre-trained on point clouds with noise of different variances, so that they can handle various noise during inference, while overfitting-based methods directly learn SDFs on single shapes in the testing set. Both data-driven and overfitting-based methods use the same set of noisy point clouds as ours for evaluation. 5.
Comparison with Points2Surf We compared with the data-driven method Points2Surf on ABC in Tab. 3 (“P2S”), Fig. 3 (“P2S”), and Fig. 12 (“P2S”). Numerical and visual comparisons indicate that our method significantly outperforms Points2Surf. Since Points2Surf is a relatively old method proposed back in 2020, and its follow-up methods like PCP [3] and OnSurf [4] have reported much better performance on the FAMOUS dataset, we only list PCP and OnSurf as baselines in Tab. 5. Moreover, Points2Surf did not report results on SRB in its original paper, and we also failed to reproduce plausible results on SRB by running its code, because of its poor generalization ability on SRB. Thus, we did not compare with Points2Surf in Fig. 4. [3] Surface reconstruction from point clouds by learning predictive context priors [4] Reconstructing surfaces for sparse point clouds with on-surface priors 6. Extreme Noise in Tab. 13 The middle and max noise levels come from the ABC dataset released by Points2Surf. Middle indicates noise with a variance of 0.01L, where L is the longest edge of the bounding box; max indicates noise with a variance of 0.05L. Our extreme noise has a variance of 0.07L. 7. 100% Sparsity in Tab. 14 As stated in Lines 361-362, the percentages shown in Tab. 14 are the ratios of points remaining after downsampling. Thus, 100% indicates the results without downsampling. We will clarify this in the revision. 8. 0.5% Noise Variance We use a 0.5% noise variance on ShapeNet for fair comparison with previous methods. As shown on ABC in Tab. 3, FAMOUS in Tab. 5, shapes with extreme noise in Tab. 13, and the additional results in Fig. 1 in the rebuttal PDF, our method can handle point clouds with noise variance as large as 7%. 9. Understanding of Our Loss The diagram of our loss is shown as L_{Local} on the right in Fig. 1. For a noisy point patch, we randomly sample a set of queries within the noisy point patch.
Then, we use the learned SDF to pull them onto the zero-level set, and we minimize the EMD between the set of pulled queries and the noisy point patch.

---

Rebuttal 2: Comment: Hi, could you please chime in? It's near the end of the discussion period!

---

Rebuttal Comment 2.1: Comment: Dear reviewer AdmT, as the reviewer-author discussion period is about to end, could you please let us know whether our rebuttal addressed your concerns? If not, we would be glad to use the remaining time to provide further explanation and clarification. Thanks, The authors
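The pulling operation (point 1) and the loss (point 9) discussed in the rebuttal above can be illustrated with a toy sketch. This is our illustration, not the authors' code: we replace the learned network with the analytic SDF of a unit sphere (whose gradient is known in closed form), pull random queries onto the zero-level set, and compute an exact EMD against a noisy patch via optimal bipartite matching (scipy's Hungarian solver).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Analytic SDF of the unit sphere, standing in for the learned network f.
def sdf(q):                       # q: (N, 3)
    return np.linalg.norm(q, axis=1) - 1.0

def sdf_grad(q):                  # unit-norm gradient of the sphere SDF
    return q / np.linalg.norm(q, axis=1, keepdims=True)

# Pulling: move each query along the gradient by its signed distance,
# so it lands on the zero-level set (the surface).
def pull(q):
    return q - sdf(q)[:, None] * sdf_grad(q)

queries = rng.normal(size=(64, 3)) * 0.5 + np.array([1.0, 0.0, 0.0])
pulled = pull(queries)
# every pulled query now lies on the sphere surface (norm == 1)
assert np.allclose(np.linalg.norm(pulled, axis=1), 1.0)

# EMD between the pulled queries and an equal-sized noisy patch,
# computed exactly as the mean cost of an optimal one-to-one matching.
def emd(a, b):
    cost = cdist(a, b)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

patch = pulled + rng.normal(scale=0.01, size=pulled.shape)  # noisy surface patch
loss = emd(pulled, patch)
```

With a learned SDF, `loss` would be minimized by gradient descent over the network parameters; here it simply quantifies how far the pulled queries are from the noisy patch.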
Rebuttal 1: Rebuttal: We appreciate the reviewers' valuable comments, which highlighted our simple and interesting method (AdmT, mU7t), strong performance and extensive experiments (fwt4, mU7t, 76DM), and the broad significance and usefulness of our work (mU7t, AdmT).

G1. Our Novelty

Our novelty lies in how we combine data-driven priors with the overfitting-based strategy, rather than in test-time optimization itself. Our framework differs substantially from the test-time optimization introduced in DeepSDF, since DeepSDF uses ground-truth (GT) signed distances during both training and testing, which makes it relatively easy to generalize the prior learned during training. We instead use different kinds of supervision during training and testing, which makes prior generalization challenging. Specifically, during training we use GT signed distances to learn what shapes look like as a prior, while during testing we use a loss to quickly infer plausible signed distances from noisy point clouds based on that prior. The signed distances inferred during iterative optimization may fluctuate slightly from iteration to iteration, which introduces significant uncertainty and makes it hard to optimize a code with fixed parameters as in an auto-decoder, whereas GT signed distances are constant. This challenge therefore requires a novel way of generalizing a prior within an overfitting strategy.

G2. Our Contribution

Our main contributions are twofold: the noise-to-noise mapping in local regions, and the way we leverage the prior as an initialization for inferring an SDF from a single noisy point cloud. The first contribution leads to a loss that infers more accurate and sharper geometry from a single noisy point cloud. The second leads to a novel way of using the prior, which is the key to fast convergence and good generalization. This is not a simple variation of DeepSDF and Noise2Noise; please read "G3. Traditional test-time optimization vs ours" for more details.

G3. Traditional test-time optimization vs ours

The test-time optimization introduced in DeepSDF optimizes a learnable code with fixed network parameters (a prior). It works well if the supervision used during testing is of the same kind as that used during training, such as GT signed distances. The difference to ours is that we do not access GT signed distances during testing, only noisy point clouds, so we use a loss to infer signed distances. This inferred supervision differs from GT signed distances: GT signed distances are constant across iterations, while the signed distances inferred by pulling fluctuate from iteration to iteration, even at the same locations, which makes the learned prior hard to generalize under the inferred supervision. We tried the following options at the very beginning, but none of them made the prior work with an overfitting-based loss:

a) Random code + fixed network parameters (DeepSDF)
b) Random code + learnable network parameters (prior initialization)
c) Random code with normalization + learnable network parameters (prior initialization)

Our choice below works quite well with an overfitting-based loss:

d) Averaged shape code + learnable network parameters (prior initialization)

This initialization provides a good object-like SDF that is robust to the fluctuation of the supervision inferred by the overfitting loss. Moreover, the prior can adjust the SDF quickly.

G4. Why Local Patches Work Better

As stated in Lines 122-132, we randomly sample a point patch from a noisy point cloud, and a set of queries within this patch, and pair these two sets with each other to learn an SDF. Different from a global method that samples both noisy points and queries all over a shape, we have a clear boundary each time, which makes the SDF inference more targeted and efficient.
This is because we avoid pairing noisy points and queries that are too far away from each other, which would make no sense for inference. Our ablation studies in Tab. 11 confirm that our local strategy significantly improves reconstruction accuracy and reduces inference time.

G5. Statistical Reasoning

As stated in Lines 129-132, we aim to reason about a neural SDF f from a noisy point cloud. Since we randomly sample queries and noisy points, we statistically minimize the expectation of the distance between different sets of pulled queries and the noisy points. Statistically, the zero-level set should, in expectation, have the minimum distance to all the noisy points. We regard this inference process from a noisy point cloud as statistical reasoning. We will make this clearer in our revision.

G6. Gaussian Noise

Current studies have shown that the distribution of real noise from scanning sensors is very close to a Gaussian distribution; please see Fig. 7 in [1]. A vast body of literature on point clouds also commonly uses Gaussian noise. Although our optimization minimizes an expectation, our method can work beyond Gaussian noise. We also perform quite well on real scans with unknown noise types, such as our results on SRB in Tab. 4 and Fig. 4, D-FAUST in Tab. 6 and Fig. 6, 3D scenes in Tab. 7 and Fig. 7, KITTI in Fig. 8, and Paris-rue-Madame in Fig. 9. We additionally conducted an experiment to show our performance with various noise types, i.e., impulse noise, quantization noise, Laplacian noise, and Gaussian noise. The visual comparison in Fig. 3 of the rebuttal PDF shows that we can also handle other types of noise quite well. Moreover, we tried more challenging cases with nonuniform noise that does not have a zero expectation across a shape, such as a shape where only half of the points are noisy, or a shape with several noisy patches. The result in Fig. 7 of the rebuttal PDF shows that our method also handles nonuniform noise well.

[1] Noise characterization of depth sensors for surface inspections

Pdf: /pdf/d533d0549cdc4671d08ec2cb2f5b24c261a2f61c.pdf
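The noise convention quoted in point 6 of the rebuttal above (levels such as 0.01L, 0.05L, 0.07L, where L is the longest edge of the bounding box) can be made concrete with a short sketch. The helper names are ours, and we use the quoted level directly as the Gaussian scale relative to L; whether the papers mean variance or standard deviation is a detail we do not resolve here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bbox_longest_edge(points):
    """Length L of the longest edge of the axis-aligned bounding box."""
    return (points.max(axis=0) - points.min(axis=0)).max()

def add_noise(points, level, rng):
    """Gaussian perturbation whose scale is a fraction `level` of L
    (the rebuttal quotes levels such as 0.01L, 0.05L, 0.07L)."""
    L = bbox_longest_edge(points)
    return points + rng.normal(scale=level * L, size=points.shape)

# toy point cloud filling a 2 x 1 x 1 box, so L is (almost exactly) 2
clean = rng.uniform(low=[0, 0, 0], high=[2, 1, 1], size=(5000, 3))
noisy = add_noise(clean, 0.01, rng)  # the "middle" 0.01L level
```

Expressing the noise level relative to L makes corruption comparable across shapes of very different absolute sizes.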
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Transferring disentangled representations: bridging the gap between synthetic and real images
Accept (poster)
Summary: The work is situated in the area of Disentangled Representation Learning. The authors propose a new intervention-based metric (OMES) to assess the degree of disentanglement in different models. They perform extensive and comprehensive experiments validating and comparing OMES with other metrics. They thoroughly discuss the behavior of their metric with respect to other metrics and provide intuitions for the observed behavior. Finally, they use OMES to study transfer in disentangled representations from a source to a target dataset. The authors wish to understand which disentanglement properties remain after the transfer, which they measure with OMES (and other metrics). Strengths: The paper is very well written and well structured. The OMES metric is explained and motivated well. The comparisons to the other metrics are comprehensive and sensible. I especially appreciate the formulation of the main research questions in Section 3.1, which guide the logic of the paper. The conducted experiments are very extensive and I appreciate the additional results in Appendix C. Weaknesses: I do not see major weaknesses with this paper. I think this is a solid and mature submission. I have some comments and questions which should be well addressable. I found the results section in 3.3 quite hard to read. Each of the transfer experiments consists of one large block of text which is a bit hard to parse. I encourage the authors to think about restructuring it and introducing more paragraphs to promote readability. Fig. 1 left: the ylabel should be renamed to "Disentanglement score" for clarity. Table 2: Please write a more descriptive caption. ### Typos: Line 33: tranferring Line 169: Agreeement Line 589: Table 7 not Fig 7 Technical Quality: 4 Clarity: 3 Questions for Authors: OMES has one hyperparameter alpha in Eq. 1. As far as I have seen, it has always been set to 0.5. Does alpha need tuning / is 0.5 the optimal choice?
What happens if alpha is set to a different value? Does this value depend on the dataset? The authors assess the distance between the source and the target data in order to understand how this distance affects transfer performance. While the distance between two datasets is hard to measure, I felt the use of “distance” was a bit loose here, especially because the first research question explicitly involves the distance: “How well does disentanglement transfer, and how much does it depend on the distance between Source and Target Dataset?” In line 51, the authors write: “We discuss the role of fine-tuning and the need to reason on the distance between Source and Target datasets.” I have not really seen any reasoning on the distance between the Source and Target datasets in the paper and would be interested to hear how the authors think about it. The distance could be measured in FoV or in the pixel space or in the “domain” space, for example. We could have the datasets A, B and C where A has 5 FoVs and consists of real images. Then, B would have 4 FoVs (a subset of A) and consist of synthetic, but realistic looking images. Finally, C would have the same 5 FoVs as A, but would consist of sketches, that is, the image distribution would be very far away from A and B. Which dataset would then be closer to A in terms of “distance”: B or C? And how would we expect the disentanglement measures to transfer? I would think that the trained VAE heavily depends on the image domain and switching real images to sketches would destroy all performance in terms of disentanglement. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: I think a paragraph which explicitly discusses the limitations of the approach would be beneficial for the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for considering our work and for their valuable comments and insights.

* > I found the results section in 3.3 quite hard to read ... promote readability.

We agree with the reviewer that, mainly due to space limitations, the experimental section is very compact. We will attempt to re-design the section to improve its readability (for instance with bulleted lists, if more space is allowed) and will try to refer more explicitly to the research questions to provide a clearer storytelling in our discussion.

* > Fig. 1 left: the label should be renamed ... for clarity.

The reviewer is right; we will correct the label.

* > Table 2: Please write a more descriptive caption

We agree with the reviewer that a more descriptive caption is needed for readability. Unfortunately, we sacrificed the captions (the one in Table 2, but also some others in the following) for space reasons. We will revise it within the space constraints. A possible caption is the following: "Disentanglement metrics of transfer to XXX (Target dataset). Average scores before and after finetuning; in brackets is the difference between finetuning and direct transfer."

* > Typos:

We thank the reviewer for pointing these out. We will correct all the typos in the revised version of the paper.

* > OMES has one hyperparameter alpha ... on the dataset?

We agree with the reviewer that the text lacks an appropriate discussion of alpha. We will add some comments to the main paper, and we will provide more details in the Appendix, where an empirical analysis of the role of alpha was actually already provided (Appendix B.3), but without an appropriate and focused discussion. Alpha does not need specific tuning, since its role and behaviour are predictable by design. With reference to Eq. 1, we start by observing that with alpha=0, OMES only measures the Compactness (MES) of the representation; with alpha=1, instead, our metric measures the Modularity (OS) only. Values of alpha in the interval (0,1) can be used to balance the importance of both contributions. In this sense, alpha does not depend on the dataset; rather, it depends on which property(ies) we want to evaluate on the model. This is also implicitly shown in our experimental analysis (Sec. 2.3), where different datasets have been employed. In Appendix B we report the results with different values of alpha, showing the behaviour of OMES when its value changes. In the main paper we instead considered the general case of alpha=0.5 (*i.e.* compactness and modularity have the same importance), in the absence of valid reasons to favour one property or the other. We think this choice also allows a fair comparison with the other disentanglement metrics.

* > The authors assess the distance between the source and the target data... destroy all performance in terms of disentanglement.

Please refer to the common response.

* > I think a paragraph which ... would be beneficial for the paper.

We agree with the reviewer that a section which more explicitly discusses the limitations of our approach would be beneficial for our work. In the present version of the paper, as observed by Rev. GNcy, we only discuss the limitations of transferring disentangled representations in Sec. 3.3, when commenting on the experimental analysis. Other important limitations of our current work are the use of a specific family of approaches (VAE-based, as noticed by Revs. GNcy and R67v) and the lack of a more principled strategy to reason on the distance between Source and Target datasets (this reviewer and Rev. GNcy), which may give insights on the best choice of the model to transfer (*i.e.* Source) depending on the specific task (*i.e.* Target) at hand. We will add details in the main paper, if possible given the space constraints, or alternatively in the Appendix.
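The behaviour of alpha described in this answer (alpha=0 yields the Compactness score MES only, alpha=1 yields the Modularity score OS only) suggests OMES is a convex combination of the two scores. The exact form of Eq. 1 is in the paper, so the sketch below is only illustrative, with hypothetical score values.

```python
def omes(os_score, mes_score, alpha=0.5):
    """Convex combination consistent with the rebuttal's description of Eq. 1:
    alpha=1 -> Overlap Score (modularity) only,
    alpha=0 -> Multiple Encoding Score (compactness) only."""
    return alpha * os_score + (1.0 - alpha) * mes_score

# hypothetical per-model scores, for illustration only
os_score, mes_score = 0.8, 0.6
assert omes(os_score, mes_score, alpha=0.0) == mes_score  # compactness only
assert omes(os_score, mes_score, alpha=1.0) == os_score   # modularity only
balanced = omes(os_score, mes_score)  # default alpha=0.5 weighs both equally
```

The default alpha=0.5 used throughout the paper corresponds to giving modularity and compactness equal weight.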
--- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I have read the rebuttal, the other reviews and the comments. I thank the authors for their responses and clarifications and I am keeping my original score.
Summary: This paper conducts an empirical investigation into the problem of transferring disentangled representations from synthetic data to real-world data (syn2syn, syn2real, real2real). They start with well-defined research questions and perform the investigation on the feasibility and effectiveness of transfer learning step by step. Besides the investigation, this paper also proposes an intervention-based metric measuring the quality of factor encoding in the representation while providing information about its structure. Strengths: 1. The proposed metric OMES (Overlap Multiple Encoding Scores) is designed to measure two properties of disentangled representations: modularity and compactness. The proposed metric agrees well with other established metrics and also with performance metrics. 2. The experiments are extensive and comprehensive, covering three application scenarios (syn2syn, syn2real, real2real) and supported by six datasets. Some empirical insights are also revealed from the experiments. For example, the authors suggest that Explicitness is usually well maintained, while Modularity and Compactness are reduced as we move from synthetic to real. Interestingly, the authors also found that fine-tuning is always beneficial, which was not an expected behavior to me. Weaknesses: 1. The presentation of the proposed metric has poor readability and is not structured well. The description from line 85 to line 132 is a bit messed up: for example, the input image pair and the motivation for choosing subsets of latent dimensions at line 102 should be described earlier, while it might be more appropriate to move the high-level description to the introduction. I would suggest the authors at least add a few headlines to the metric introduction or rephrase the contents. 2. Could you please consider synthetic datasets composed of more complex transformations, such as Falcor3D and Isaac3D [1]?
Though the datasets in the paper are very diverse, I feel the transformations are relatively simple, and I am not sure whether these insights transfer to complex real-world transformations. In particular, is fine-tuning still beneficial? 3. How about vector-based disentanglement methods [2][3]? How do the metrics and experiments generalize to vector-based methods? Do the authors think the insights will also transfer to these vector-based approaches? > [1] Semi-supervised stylegan for disentanglement learning. ICML. 2020. > > [2] Flow Factorized Representation Learning. NeurIPS. 2023. > > [3] Unsupervised Learning of Disentangled Representations from Video. NeurIPS. 2017. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weaknesses. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Please refer to the weaknesses. There are no particular limitations of this paper -- there is a common limitation of disentangled representation learning that these approaches are not scalable to large datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable insights provided with their comment. In the response, we follow the structure of the bullet list in the review.

* > The presentation of the proposed metric has poor readability... at least or rephrase the contents.

We agree with the reviewer that this section would benefit from a revision to improve its readability (which is probably not optimal also due to the space constraints). As suggested by the reviewer, we will move the details on the setup and the main design choices earlier, and we will move the high-level description and motivations to the introduction. We hope this will help the clarity and readability of this important section of our work.

* > Could you please consider synthetic datasets composed of more complex transformations ... is fine-tuning still beneficial?

We thank the reviewer for the observation, which allows us to further elaborate on the strengths of our proposed methodologies. Considering the limited time for the rebuttal, among the two datasets suggested by the reviewer we opted for Isaac3D, because we find it more in line with the scenarios already considered while providing considerably higher complexity, with its 9 latent factors. With Isaac3D, we performed further experiments using the new dataset both as a Source and as a Target, following the procedure described in Sec. 3.1 and common to all the experiments already included in the paper. The results of these new experiments are reported in the pdf attached to the rebuttal. More specifically, in Tables 1 and 2 we report the results when Isaac3D is used as a Target. In this case, the models trained on Shape3D have been used as a Source. We observe that the transfer is overall effective according to all the disentanglement scores. If we focus in particular on the "All" column, we notice that the performance here is higher than the one we obtain when Shape3D is Source and Coil is Target (Tab. 4 in the main paper) or RGBD is Target (Tab. 5 in the main paper). This is in line with the "picture" provided by OMES (Isaac3D as Target: 23.15, which becomes 33.1 after finetuning; Coil as Target: 28.35 and 27.95; RGBD: 28.85 and 28.2). We notice that only for Isaac3D is the boost from finetuning appreciable, while the baseline result is lower than for Coil and RGBD (which are real datasets). Overall, this empirical evidence may suggest that the robustness of transferring disentangled representations is influenced by several factors, including the Sim2Real challenge but also the number of factors, their granularity, and how many factors Source and Target have in common. This is related to a more general concept of distance between datasets, which we will consider in our future investigations and which we briefly discuss in the general response. On the other hand, Table 3 of the attached pdf reports the transfer from Isaac3D as the Source to Shape3D as the Target. Despite the richer transformations of Isaac3D, model adaptation with finetuning is still beneficial for all the disentanglement metrics. We think this may be explained by the "domain" dependence of VAE models, for which an adaptation step is in general beneficial.

* > How about vector-based disentanglement methods ... insights will also transfer to these vector-based approaches?

Please refer to the common response.

---

Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response! I like the new experiments of Isaac3D and I will keep my original score.
Summary: This paper proposes a novel classifier-free metric for quantitatively measuring disentanglement and investigates transferring disentangled representations from a synthetic dataset with ground truth factors of variation (FoV) to a target real-world dataset. The authors introduce OMES, a novel intervention-based metric that evaluates the correlation between representations of two images sharing all FoVs. OMES computes the overlap between different FoVs (Overlap Score) and measures the encoding of each factor within single dimensions (Multiple Encoding Score). Using these metrics, the paper analyzes the properties of disentangled representations and source/target distributions to improve disentanglement transfer. The main contributions of the paper are: - Introduction of a novel classifier-free metric (OMES) for disentanglement evaluation, which reduces hyper-parameter sensitivity. - Extensive empirical study on transferring disentangled representations from source to target datasets, revealing the potential and properties of disentanglement transfer. Strengths: - The paper is easy to follow. - A simple yet novel classifier-free metric removes the dependency on hyper-parameters and enables reasonable comparison between various configurations and benchmarks. Also, this metric maintains the reasonable assessment of disentanglement compared to the conventional metrics and provides an informative tool for analyzing the each factor of variations. - Thorough empirical analysis provides comprehensive understanding on a novel metric and disentanglement transfer learning. Weaknesses: - The implications on transferring disentanglement from synthetic datasets to complex real-world datasets are somewhat limited. Although the empirical study indicates that a smaller distance between the source and target datasets and shared FoVs between them are beneficial, these are expected properties of conventional transfer learning. 
The paper would be strengthened by discussing how to specifically select a proper source dataset for a given target dataset with unknown factors of variation. For example, it would be useful to define and measure the structural similarity between a target dataset and potential source datasets. - The presentation could be improved. At first glance, it feels like two independent topics (metrics and transfer learning) are being introduced, making it hard to understand why a novel metric is needed for transferring disentangled representations. It would help readers if the connections between these two components were more clearly explained. Technical Quality: 3 Clarity: 3 Questions for Authors: - In transferring disentangled representation learning, how much of the improvement comes from transfer learning compared to training from scratch on the target dataset, in terms of disentanglement metrics? This comparison would provide a clearer understanding of the benefits and effectiveness of the transfer learning approach. - Can you provide more analysis on why OMES has relatively low correlation on FactorVAE score in Figure 2? - Most of the investigations are done on VAE-based models. Would it have the same implications with other recent disentangled representation learning approaches [1, 2, 3], which employ more powerful generative models? [1] Yang, et al. "Disdiff: Unsupervised disentanglement of diffusion probabilistic models.", in NeurIPS 23. [2] Lin et al., “Infogan-cr and modelcentrality: Self-supervised model training and selection for disentangling gans”, In ICML 20. [3] Ren et al., “Learning disentangled representation by exploiting pretrained generative models: A contrastive learning view”, in ICLR 21. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - Section 3.3 describes the limitation of transferring disentanglement learning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the effort spent in providing valuable feedback on our work. In the response, we follow the paragraph structure of the review.

* > The implications on transferring disentanglement from synthetic datasets ... and potential source datasets.

Please refer to the common response.

* > The presentation could be improved... were more clearly explained.

We agree with the reviewer that the paper is rather dense in some parts, and this does not help the understanding. We will revise the presentation of our contributions to better clarify the connections between them in the final version of the paper, should it be accepted. The novel metric is not strictly required by the application to transfer learning, but rather by more general needs for disentanglement metrics, the first being interpretability. We also wanted our metric not to be based on a classifier, and to be able to estimate different properties of disentangled representations (*i.e.* modularity, compactness) simultaneously. In this sense, the contributions of our work are indeed two, somewhat independent: the new metric OMES and the transfer learning methodology. However, it is worth noticing that the interpretability of OMES allows us to directly exploit it in the transfer learning process, for instance to select the most representative dimensions of the representation for the classification experiments (named "Pruned" in all our tables).

* > In transferring disentangled representation learning, how much ... effectiveness of the transfer learning approach.

We thank the reviewer for the question, which allows us to illustrate a different aspect of our approach. We start by observing that training from scratch on the Target dataset is usually not possible in the application domain we have in mind, where the annotation of the latent factors is unavailable in most cases. This explains why we did not consider this scenario in our experimental analysis.
Nevertheless, we performed a further evaluation to quantify the loss incurred by transferring rather than training from scratch. We used the source models trained directly on Shapes3D, Coil100 and Isaac3D (which we trained following the suggestion of Rev. R67v) and compared their performances with the models trained on different Sources and finetuned on the 3 datasets. The results are reported in Table 4 of the attached pdf, which lists the average scores of the models trained from scratch (with weak supervision). In brackets, we report the average differences between the scores obtained with transfer learning + finetuning and those obtained with training from scratch (*i.e.* a negative value means we lose performance when applying the transfer methodology). In general, we observe that the models trained from scratch almost always perform better than the transferred ones. This gap is expected because the models trained from scratch use the annotation of the dataset, while the finetuning is unsupervised. However, in this experiment the gap is noticeable only when Shape3D is a Target; this could be explained by the fact that it has relatively simple transformations, easier to disentangle when the annotation of factors is used for training. On the contrary, for datasets with more complex real-world transformations (harder to learn and disentangle), the performance with finetuning is closer to that of training from scratch. We also observe that, given the complexity, their overall performance is lower than that on Shape3D.

* > Can you provide more analysis on why OMES ... FactorVAE score in Figure 2?

We first observe that, although they are all intervention-based metrics, the formulations of the input pairs of our metric and of FactorVAE/BetaVAE are opposite (we sample pairs of images where all factors are the same but one, while FactorVAE and BetaVAE fix one common factor and randomly sample the values of the others).
The correlation of OMES with FactorVAE is poor (28), but an even lower correlation can be observed with BetaVAE (11). The latter can be explained by observing that, when computing OMES, we consider contributions from (latent dimension, factor) associations accurately selected to maximize the correlation, while BetaVAE keeps the information from the entire latent representation (even though some dimensions might be uninformative or even harmful). Concerning FactorVAE, we notice that it essentially estimates the performance of a linear classifier on input-output pairs composed of the fixed latent factor (output) and the latent dimension that presumably most encodes the factor (input). This step is also core to OMES, although implemented with a different strategy (*i.e.* we rely on high correlation rather than low variance to select the "best" encoding dimension). Among the properties that may explain the different behaviours of OMES and FactorVAE/BetaVAE in Fig. 2 is also the fact that they measure only the modularity of disentanglement, while our metric OMES also accounts for compactness since alpha=0.5 (this comparison is reported in Table 6 in the Appendix). (For more details on the role of alpha, we refer the reviewer to the answer to Rev. Fxij on the same topic.)

* > Most of the investigations are done on VAE-based models ... employ more powerful generative models?

Please refer to the common response.

---

Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: I appreciate the authors' effort in addressing the concerns. Most of the concerns are clarified. Although I still believe that including experiments on other disentangled representation learning methods would provide a more comprehensive understanding, I agree that the authors' transferring method is not limited to specific methods and this is not a crucial concern. However, my concern about the comparison to a model trained from scratch remains unresolved.
If I understand correctly, their experiments involve two steps: (1) weakly supervised learning (Ada-GVAE) on the Source Dataset, and (2) fine-tuning the model in an unsupervised manner (which does not require GT FoVs). I was asking for the performance gap between step (1) -> (2) (the proposed method) and step (2) alone (training from scratch on the target dataset) to evaluate the impact of step (1). However, it seems that the authors reported the performance of weakly supervised learning (Ada-GVAE) on the Target Dataset, which does not clarify the effect of transferring from the Source Dataset. Please correct me if I have misunderstanding here. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for appreciating the rebuttal. We now realise we misunderstood one request. Indeed, in Table 4 of the pdf we attached to the rebuttal we reported the performances of the weakly-supervised model trained on the Target dataset and the differences (in brackets) of the scores obtained on the Target in this way or with our full pipeline, while the reviewer was asking for a comparison with the unsupervised approach directly applied to the "target" dataset. In the earliest stages of our work, we actually adopted the unsupervised approach on our reference datasets and we empirically observed its limitations emerging already on “simple” datasets. For this reason, we opted for some level of supervision. To give a quantification of what we gain with this change, we report here the score obtained in our original unsupervised experiments focused on the use of two synthetic datasets, namely Color-dSprites and Noisy-dSprites (for details on the dataset see the main paper). With unsupervised training from scratch on Color-dSprite we have DCI=0.14 and MIG=0.07, which become respectively 53.3 and 34.3 with our full pipeline based on transfer learning (Source dataset: dSprites). 
On Noisy-dSprites, with unsupervised learning from scratch, we obtained DCI=0.05 and MIG=0.02, which become 26.0 and 36.0 respectively with our full approach. As a reference, training from scratch on the two datasets using Ada-GVAE we obtain DCI=64.2 and MIG=49.5, showing the large expected gap between unsupervised and weakly-supervised training from scratch.
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their efforts in reading our paper and also for providing valuable insights and new interpretations of our work. There is a general agreement on the effectiveness of the new metric and the extensive assessment with a thorough experimental analysis. In this common response, we address comments shared by more than one reviewer. We are responding to each reviewer independently on the remaining points. * Rev. **GNcy** and Rev. **Fxij** express concern about *how to assess the distance between the Source and the Target datasets*. While a discussion on dataset distance is out of the scope of this paper, the reviewers' comments lead us to see a potential application of our approach in future research. It is true that the word “distance” may not be appropriate; in a sense, we are assessing *how two datasets are related, mainly in terms of FoVs*: 1. If both Source and Target are synthetic, we normally possess a prior on their FoVs and we may reason on their similarity in a structured manner (choosing Source and Target with a majority of identical factors, such as dSprites and Color-dSprites, or with similar factors, such as Coil100 and BinaryCoil100). 2. If the Target is real, we may not possess a proper FoVs annotation, but we may qualitatively identify some dominant FoVs of interest, which we could use to find an appropriate Source where such factors are present. 3. We notice that, even in the supervised case, FoVs are often similar but not identical: they may differ in internal variability/granularity (*e.g.* the *Scale* factor in dSprites and Shapes3D, *Pose* in Coil100 and RGBD-Objects), or may have the same name but be different in nature (*e.g.* *Orientation* in dSprites and Shapes3D). We conducted the transfer experiments considering different scenarios (syn2syn, syn2real and real2real) to verify which disentanglement properties are preserved or degraded with the transfer.
For this, we reasoned on scenarios of increasing complexity, including, for instance, scenarios where the Target has the same FoVs as the Source plus an additional one, or where the Source is a “simplified” version of the Target (*e.g.* from a binarized version of Coil100 to Coil100). Interestingly, as reported in Sec. 3.3, using a real dataset as a Source does not provide a particular improvement in performance with respect to a synthetic Source once the fine-tuning is applied. In the case of real data, this might suggest that *using a synthetic Source dataset* (well related to the Target one in terms of FoVs) and adapting the model with fine-tuning can (in principle) *provide performances similar to those of models trained on real images*, which are closer in pixel and domain space. In the revised version of the paper, we will add a note on the choice of the Source depending on the Target. *** * Rev. **GNcy** and Rev. **R67v** observe that our methodology mostly employs VAE-based methods and *ask about the implications of using more recent/powerful methods*. We agree with the reviewers that an analysis of the generalisation of our insights to different disentanglement learning approaches is needed, and it will be the object of our investigations in the near future. Our decision to consider VAE-based models in this work, and Ada-GVAE in particular, is due to their *simplicity*. By keeping the complexity of the disentanglement learning approach under control, we are relatively confident that the benefits we might observe are due to the transfer methodology of disentanglement rather than to the high expressive power of complex models for disentangled representation learning. It is also worth mentioning that the sampling strategy of Ada-GVAE methods is similar to the one we adopt for OMES, and this allows us to obtain an overall coherent procedure.
To add some more observations on the approaches suggested by the reviewers, we note that recent works based on generative mechanisms belong to the vector-based family of approaches. This means that they adopt more than one dimension, in the form of a vector, to encode a generative factor, while dimension-based methods (*e.g.* VAEs) usually adopt a single dimension to represent the factor. For this reason, vector-based methods are more suitable for coarse-grained factors (capturing more information), while dimension-based methods are suitable for fine-grained factors. As observed in [A], the latter is more appropriate when investigating under-explored research directions (as in our case), being more precisely defined. All the existing disentanglement metrics discussed in this work, including ours, have been proposed for evaluating VAE-based methods. However, they can easily be used to evaluate vector-based methods, for example by performing PCA as a post-processing step on the representation before evaluation, as already done in the literature [B, C]. We have no particular reason to think our insights do not transfer to vector-based approaches. However, each family of approaches for disentanglement learning *follows specific paradigms that may require tailored designs for transfer learning*. In other words, while the general transfer methodology is still applicable, it may need proper tuning to perform optimally depending on the particular learning approach. The generalization of our methodology to different disentanglement learning approaches will be the object of our future investigations, since it represents a further step forward from the current state of our work. We will include the suggested works in the bibliography of this paper, referring to them in the conclusions and future works. [A] Wang, et al. "*Disentangled representation learning.*" arXiv:2211.11695; PAMI 2024 (accepted) [B] Yang, et al.
"*Disdiff: Unsupervised disentanglement of diffusion probabilistic models.*", in NeurIPS 23. [C] Du, et al. “*Unsupervised Learning of Compositional Energy Concepts*” in NeurIPS 21. Pdf: /pdf/a12fccf54595b19c47360cb7df317ac4be53d245.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Efficient Multi-task LLM Quantization and Serving for Multiple LoRA Adapters
Accept (poster)
Summary: This paper focuses on the quantization of large language models and the serving of multiple low-rank adaptation (LoRA) adapters, and proposes a method for quantizing LLMs across multiple tasks while integrating multiple LoRA adapters. The article first shows that current mainstream quantization methods make it impossible to share model parameters among tasks when processing multiple tasks, so existing LLM serving systems cannot combine LLM quantization with multiple LoRA adapters to achieve memory-efficient multi-task support. At the same time, existing LLM serving systems lack support for dynamic task addition. The proposed LoRA-Inlaid therefore designs a flexible and efficient multi-task quantization algorithm that allows multiple LoRA adapters to share a single quantized model, greatly reducing the memory consumption of model deployment. In addition, LoRA-Inlaid develops a new multi-task scheduling algorithm based on output length prediction and grouping, which effectively reduces memory consumption and avoids frequent switching of LoRA adapters. The innovative multi-task quantization algorithm MLGPTQ uses multi-task data to perform joint quantization of the base model, allowing the quantized base model to be shared among multiple tasks. It also supports incremental quantization of newly added tasks without affecting the performance of the service. Strengths: This paper proposes an innovative multi-task quantization method for large language models that handles multiple tasks without introducing additional memory or computation overhead. Its main advantages are: 1. The innovative multi-task quantization algorithm MLGPTQ uses multi-task data to perform joint quantization of the base model, allowing the quantized base model to be shared across multiple tasks. 2.
Supports incremental quantization of newly added tasks, remedying the limitation that most current systems can only support a fixed number of tasks. Tasks can be added dynamically without greatly affecting running tasks and with no need to pause or restart the current service process. 3. A multi-task scheduling strategy based on output length prediction and grouping is proposed. This effectively reduces memory consumption and memory-swapping overhead in multi-tasking scenarios, significantly improving the overall performance of the system. 4. LoRA-Inlaid integrates multi-task quantization, realizes dynamic task addition, and adopts a multi-task scheduling strategy to provide high-performance and flexible serving of multi-task LLMs in an environment with limited resources. Weaknesses: This is an excellent and innovative algorithmic article on improving quantized models. The algorithm is clearly explained, the procedure is clear, and the proofs are very detailed, with clear and complete ideas. However, the following small issues could be considered: 1. The notation in the proofs needs further clean-up; for example, does the $q$-th parameter refer to the $q$-th element or the $q$-th row of the parameters? 2. The article mentions that the Lagrange multiplier method is needed to solve the max-aggregation, but the solution process, including the final result, involves multiple matrix inversions; the condition number of the matrix should therefore be examined to see whether it affects the accuracy and speed of the solution. 3. Figure 2 shows the difference between the method in this article and GPTQ very clearly, but the article also discusses other quantization methods. Can you explain why only GPTQ is compared with MLGPTQ, or could the AWQ method also be represented in this way to show the highlights of this method? 4.
This article provides relatively little explanation of the multi-task scheduling strategy. It is recommended that the authors add more comparative experiments on this module; otherwise the content is rather thin for an innovation point. 5. For large-model tasks, there may be priorities among the tasks. Based on the multi-task scheduling method in this article, how should one handle the situation where there are multiple tasks and the task levels have an order? Technical Quality: 3 Clarity: 3 Questions for Authors: (The same five questions as listed under Weaknesses above.) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This article mentions possible limitations of the method at the end. First, the quantization method does not detect malicious or poisoned tasks, which may be intentionally used to harm other tasks. Second, the scheduling does not consider fairness among tasks, which may be essential for shared service platforms. Third, it only supports language tasks and would require some system redesign for multimodal tasks. Beyond the three possible limitations mentioned in the article, the priority of tasks and the order between tasks may also need to be considered. Moreover, the computation time of MLGPTQ's intermediate steps may vary significantly depending on the model. Finally, if some tasks need to exit midway during the process, how should the proposed system handle this situation? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Q1 It refers to the $q$-th parameter, i.e., $w_q$ denotes the $q$-th element of $\mathbf{W}$ after it is flattened. ## Q2 In Appendix A.2, we implement MLGPTQ using **Cholesky decomposition** to increase speed and computational stability, similar to GPTQ's implementation (https://github.com/AutoGPTQ/AutoGPTQ/blob/v0.7.0/auto_gptq/quantization/gptq.py#L117). To address the comment, we record the condition numbers of the Hessian matrices for different layers, providing the mean and std-dev below. Results show that condition numbers are consistently within $10^3$, which is a reasonable value range in practice.

| | trans-fr | trans-cs | trans-id | trans-nl | trans-da | trans-sw | QTsum | xlsum | tiny-codes |
| ----- | ------ | -------- | -------- | -------- | -------- | -------- | ----- | ----- | ---------- |
| mean | 285.1 | 316.3 | 345.7 | 321.0 | 318.3 | 324.1 | 258.3 | 203.2 | 232.9 |
| std-dev | 101.0 | 108.7 | 94.8 | 107.9 | 108.2 | 103.9 | 92.2 | 92.9 | 89.5 |

## Q3 Our work centers on GPTQ due to its widespread use, but our solution can also adapt to AWQ. We presented the differences between MLAWQ and $AWQ_{tweaked}$ in _Alg A_ of the one-page PDF (similar to Alg 1 of our manuscript). As introduced in Section 3.1, most quantization methods follow the **_Forward-Aggregate Info-Modify Weight-Quant_** paradigm. In essence, $AWQ_{tweaked}$ smooths outliers by multiplying weights with a smoothing factor, best_s, to minimize the per-channel quantization error: - In **_Forward_**, the input is multiplied by the weights to create an unquantized monitor, guiding minimum-error quantization (line 1). - In **_Aggregate Info_**, the average of all samples and weights is calculated for each channel (lines 2&3) to determine the smoothing factor $s$ (line 7). $W$ and $X$ are smoothed to remove outliers (lines 8&9). Then, the smoothed $W$ is pseudo-quantized (quantize-then-dequantize to simulate rounding loss) and compared to the unquantized monitor for quantization error (line 10).
This process iterates over various ratios (line 6), selecting the factor with the smallest error as best_s (line 11). Then, this best_s is used to **_Modify the weight_** (line 12), followed by the **_Quant_** (line 13) process using the modified weight. The drawbacks discussed in lines 146-154 of Section 3.1 also exist for $AWQ_{tweaked}$ in multi-task quantization: - **_Forward_** (line 1): It can't pass LoRA adapters during activation distribution simulation, causing quantization bias during inference. - **_Aggregate Info_**: It uses $X_{mean}=X.mean(0)$, a naive mixed average of multi-tasks' info. Since each task affects each channel differently, simply averaging blurs distributions, ignoring individual effects. As explained in lines 155-172 of Section 3.1 and shown in Fig 2, MLGPTQ mainly improves the first three steps to tackle GPTQ's issues in multi-task scenarios. Similarly, we can fix AWQ's issues to create a better multi-task quantization algorithm MLAWQ: - **_Forward_**: MLAWQ loads corresponding LoRA adapter for each task to participate in forward propagation, accurately simulating real activation distribution (line 1). - **_Aggregate Info_**: Instead of mixing and averaging features of each column across all tasks to compute $s$, MLAWQ computes the average for each task separately to get $s_i$ (line 3&7). Then it calculates quantization error for each column rather than the entire matrix (line 10). If the $i$-th task results in the smallest quantization error for the $j$-th column, it sets best_s$[j]=s_{i}[j]$ (line 11). This approach allows optimal error minimization, showing each task's individual effect on different channels, enhancing **_Aggregate Info_** (lines 3&6-11), and improving the **_Modify Weight_** (line 12) and **_Quant_** (line 13) processes. In summary, our work identifies common drawbacks of current single-task quantization methods in multi-task scenarios. 
By addressing these issues, we can develop more precise multi-task quantization algorithms. Due to time constraints, we couldn't complete the coding and experiments for MLAWQ during the rebuttal period. However, since the choice of backbone algorithm is orthogonal to our work and GPTQ is popular for LLMs, we believe this does not diminish our work's significance. ## Q4 As discussed in Section 3.3 and _Global Responses_, our multi-task scheduling strategy involves two key techniques: prediction-based SRTF and task grouping. Ablation studies in Figure 9 show that prediction-based SRTF and task grouping increase SLO Attainment by 2.27x and 1.16x, respectively. To address the comment, in _Fig C_ of the PDF, we conducted two more experiments: - We considered a variant of LoRA-Inlaid, enabling the multi-task scheduling strategy while disabling multi-task quantization (i.e., the served model is not quantized), denoted as "Ours (w/o quant)." Results show "Ours (w/o quant)" increases SLO attainment by 16% compared to S-LoRA, showing the power of our scheduling strategy. - We considered a variant of LoRA-Inlaid without output length prediction (instead, using the averaged output length of all requests in the same task for sorting), denoted as "Ours (w/o prediction)." Results indicate 18% and 27% efficiency improvements ("Ours" vs. "Ours (w/o prediction)") for 7B and 13B models, respectively. ## Q5 As discussed in Section 5 and Appendix F, our work does not currently consider task fairness and we leave it as future work. When tasks have different priorities, it becomes a case of weighted fairness. A potential solution is using Weighted Fair Queueing (WFQ) [a]. For example, compute a weighted combination of input and output tokens for each task and scale it by the task's priority factor. This information can be seen as the amount of service each task receives. Then, we can integrate WFQ to schedule different tasks. [a] Parekh and Gallager.
A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case. --- Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: Authors' rebuttal mostly addresses my concerns. I will change the score to "accept". --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: Many thanks for your acknowledgement! Your insightful comments have guided significant enhancements to our paper.
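The Hessian condition-number check described in Q2 can be illustrated with a small sketch. This is a toy example with random calibration activations; the `2 * X @ X.T` Hessian, the diagonal dampening, and the Cholesky factorization of the inverse follow the standard GPTQ recipe, and the exact MLGPTQ implementation may differ.

```python
import numpy as np

def hessian_stats(X, damp=0.01):
    """Build a GPTQ-style layer Hessian H = 2 X X^T (plus the usual diagonal
    dampening) and return its condition number together with the lower
    Cholesky factor of H^{-1}, which drives the column-wise weight updates."""
    H = 2.0 * (X @ X.T)
    H += damp * np.mean(np.diag(H)) * np.eye(H.shape[0])   # dampening term
    cond = np.linalg.cond(H)
    L = np.linalg.cholesky(np.linalg.inv(H))               # lower-triangular
    return cond, L

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 1024))   # 64 input channels, 1024 calibration samples
cond, L = hessian_stats(X)
print(f"condition number: {cond:.1f}")   # well within 1e3 for this toy case
```

With many more calibration samples than channels, as here, the Hessian is well conditioned and both the inverse and the Cholesky factorization are numerically stable, matching the value range reported in the table of Q2.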
Summary: The paper addresses the need for efficient fine-tuning and deployment of large language models (LLMs) in multi-task scenarios, which has been largely overlooked in favor of single-task scenarios. Existing quantization methods, such as GPTQ and AWQ, and parameter-efficient fine-tuning techniques like LoRA, are commonly used, but they do not support the integration of multiple tasks due to their limitations in sharing the base model across tasks and handling dynamic task addition. To tackle these issues, the authors propose LoRA-Inlaid, a multi-task LLM serving system that combines a flexible and efficient multi-task quantization algorithm with a novel multi-task scheduling strategy. This system significantly reduces memory consumption, supports real-time task addition, and enhances the stability of online services. The major contributions of this paper are as follows. The authors introduce an innovative multi-task quantization algorithm, MLGPTQ, which enables the joint quantization of models for multiple tasks, allowing a single quantized base model to be shared across tasks and supporting incremental quantization for new tasks. Additionally, they develop a novel scheduling strategy based on output length prediction and grouping, which minimizes memory consumption and reduces memory swapping overhead in multi-task scenarios. These techniques are integrated into the LoRA-Inlaid system, which demonstrates significant performance improvements over existing LLM serving systems, achieving up to 1.58× higher throughput, 1.76× lower average latency, 2× faster job completion time, and 10× better SLO attainment, all while maintaining model quality. Despite some limitations, such as the need for improved detection of malicious tasks and fairness considerations among tasks, LoRA-Inlaid represents a significant advancement in multi-task LLM serving, highlighting its potential for resource-constrained environments. 
Strengths: Originality: The paper introduces an approach to addressing the overlooked area of multi-task fine-tuning and deployment of LLMs. By proposing LoRA-Inlaid, the authors present an innovative multi-task quantization algorithm (MLGPTQ) that allows for joint quantization of models across multiple tasks, supporting incremental quantization for new tasks. This is complemented by a novel multi-task scheduling strategy, which efficiently manages memory consumption and task addition, marking a departure from existing single-task-focused methods. Quality: The research is evaluated through comprehensive experiments and performance evaluations. The authors provide evidence of significant improvements over existing LLM serving systems, including up to 1.58× higher throughput, 1.76× lower average latency, 2× faster job completion time, and 10× better Service Level Objective (SLO) attainment. Clarity: The paper is well-structured and clearly presents the problem, methodology, and results. The authors provide detailed explanations of their innovative multi-task quantization algorithm and scheduling strategy, making complex concepts accessible. The use of figures and tables to illustrate performance improvements enhances the clarity and readability of the paper. Significance: This work addresses a critical gap in the efficient deployment of LLMs in multi-task scenarios. The LoRA-Inlaid system has the potential to greatly improve the efficiency and stability of online services, particularly in resource-constrained environments. The advancements presented in this paper can help advance the deployment and scalability of LLMs across various applications, making it an important contribution to the field. Weaknesses: Editorial comments: Abstract: Avoid excessive use of undefined acronyms in the abstract. Can you briefly explain in the abstract how LoRA-Inlaid is related to MLGPTQ? References: Capitalize proper names and acronyms properly in the References.
Some references are incomplete — their publication details (or URL) are missing (e.g., [10]) Outliers: (Figure 3) There are a significant number of outliers represented by circles, especially in the French-English Translation and Table Summary tasks. It might be useful to provide a brief explanation or context for these outliers. Y-Axis Scale: The y-axis scale goes up to 4000 tokens, but most data points fall well below this range. This could make it harder to see differences in distributions for tasks with shorter lengths. Using a logarithmic scale or breaking the y-axis into two parts could provide better clarity. Outliers and Variability (Figure 4): There seems to be significant variability in the number of tasks for Skip-join MLFQ (FastServe) and FIFO (S-LoRA). Providing statistical summaries such as mean or median lines could help interpret the data more effectively. Including error bars or confidence intervals would give a better understanding of the variability and reliability of the scheduling strategies. Malicious Task Detection: The paper acknowledges the need for improved detection of malicious tasks but does not provide detailed solutions or strategies to address this issue. This represents a potential vulnerability in the proposed system that could be exploited in practical applications. Fairness Considerations: Fairness among tasks is briefly mentioned as a limitation, but the paper lacks an in-depth discussion on how fairness is measured and what specific strategies could be employed to ensure equitable resource distribution among tasks. This is crucial for practical deployment in environments where multiple users or tasks compete for limited resources. Experimental Scope: While the results are promising, the scope of the experiments is somewhat limited. The paper would benefit from additional experiments across a wider range of tasks and more diverse datasets to fully validate the generalizability and robustness of the proposed system. 
Incremental Quantization Details: The paper does not provide extensive details on the incremental quantization process. More information on how new tasks are integrated into the system and the potential impact on existing tasks would strengthen the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: Detection of Malicious Tasks: Can you provide more details on how you plan to enhance the detection of malicious tasks? Are there any preliminary strategies or methods you are considering to address this vulnerability? Fairness Among Tasks: How do you measure fairness among tasks in the LoRA-Inlaid system? Can you elaborate on the strategies you plan to implement to ensure equitable resource distribution? Incremental Quantization Process: Could you provide more information on the incremental quantization process? How are new tasks integrated into the system without impacting the performance of existing tasks? Generalizability of Results: The experiments demonstrate significant improvements, but are these results consistent across a broader range of tasks and datasets? Can you share any additional experimental results or plans for future testing? Scalability and Real-Time Performance: How does the system scale with a large number of tasks added in real-time? Are there any performance benchmarks or case studies that illustrate the system’s scalability in real-world scenarios? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have identified some limitations, such as the need for improved detection of malicious tasks and fairness considerations among tasks. However, these limitations are not fully addressed in the paper. Addressing Malicious Task Detection: The paper mentions the need for better detection of malicious tasks but does not provide concrete strategies. 
Constructive suggestion: Develop and describe specific methods for detecting and mitigating malicious tasks, potentially through anomaly detection techniques or secure task validation protocols. Fairness Among Tasks: Fairness is noted as a limitation, but there is little discussion on how to ensure it. Constructive suggestion: Elaborate on fairness metrics and propose strategies for fair resource allocation among tasks, possibly through dynamic scheduling algorithms that prioritize based on task urgency and resource consumption. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Editorial Comments **Undefined Acronyms** There are three undefined acronyms in our abstract: - **LoRA**, short for Low-Rank Adaptation, is one of the most widely used parameter-efficient fine-tuning techniques for LLMs. - **GPTQ** and **AWQ** are state-of-the-art quantization algorithms for LLMs. GPTQ (Frantar et al., ICLR 2023) merges the name of GPT model family with "Q"uantization. AWQ (Lin et al., MLSys 2024) stands for Activation-aware Weight Quantization. **How is LoRA-Inlaid related to MLGPTQ?** LoRA-Inlaid consists of two major techniques: the multi-task quantization algorithm and the multi-task scheduling algorithm. We term the multi-task quantization algorithm as MLGPTQ (Multi-LoRA GPTQ). MLGPTQ supports joint quantization of multiple tasks and allows the quantized model to be shared across tasks. It also supports incremental quantization, facilitating dynamic task addition without impacting performance. We'll add the name of our MLGPTQ method to the abstract for clarity. **References** Thanks for pointing out the issue. We will update the references in our manuscript accordingly. ## Outliers and Variability **Explanation for the Outliers in Fig 3** The outliers are due to the long-tail property of sequence lengths, where few sequences are significantly longer than others. This property is common in NLP datasets (e.g., see section 2 of this article: https://www.harmdevries.com/post/context-length/). **Scale of the y-axis in Fig 3** Thanks for the suggestion. We used the same y-axis value range in Fig 3 to clarify differences in sequence length distributions across tasks. For instance, it's clear that the table summary task has shorter output lengths than the code generation task. Thus, we are afraid that using a logarithmic scale or breaking the y-axis into two parts may not achieve clarity as expected. To address the concern, we have redrawn Fig 3 as suggested, shown in _Fig B_ of the PDF. 
We will update our manuscript if the reviewer prefers the new figure. **Variability in Fig 4** Thanks again for the suggestion. We present the mean and std-dev for each strategy in Fig 4 below. As discussed in Section 3.3, FIFO (S-LoRA) and Skip-join MLFQ (FastServe) lack consideration of the tasks scheduled in each step, leading to significant variability. In contrast, our approach shows a smaller std-dev, so our strategy is more suitable for multi-task scheduling.

| | Ours | FIFO (S-LoRA) | Skip-join MLFQ (FastServe) |
| --- | --- | --- | --- |
| mean | 10.11 | 22.22 | 31.49 |
| std-dev | 1.57 | 2.07 | 4.27 |

## Malicious Task Detection One of the most typical use cases of LoRA-Inlaid is the personalization of LLMs, where clients can upload their data to create personalized LoRA adapters using the same base model (or directly upload their self-tuned LoRA adapters). The server is responsible for serving requests from all these clients using the proposed LoRA-Inlaid system. Fortunately, these LoRA adapters are independently produced, so we can apply malicious-task detection to them individually. For instance, the server can prepare a rich set of evaluations to assess the security risks of each LoRA adapter, including violence, discrimination, unlawful responses, etc. If any LoRA adapter fails to pass the evaluation, the server can refuse to serve it. ## Fairness Considerations To measure fairness among tasks, we can compute a weighted combination of the numbers of input and output tokens for each task. This is because the prefilling and decoding phases in LLM inference have different workload characteristics [a] (which is also why these tokens are priced differently in online services). Then, we can borrow the idea of Weighted Fair Queueing (WFQ) [b] for scheduling different tasks. [a] Hu et al. Inference without Interference: Disaggregate LLM Inference for Mixed Downstream Workloads. [b] Parekh and Gallager.
A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case. ## Experimental Scope/Generalizability of Results We evaluated our work's effectiveness with 9 datasets from 4 task types, as detailed in Appendix D of our manuscript. We believe these datasets exhibit significant diversity: - We considered six languages (French, Czech, Indonesian, Vietnamese, Danish, and Swedish) for the machine translation task, covering various language families. - Beyond text, we also considered two other kinds of inputs (tables and code) to evaluate our work over diverse tasks. To address reviewer concerns, we added three datasets with distinct tasks to enhance experimental diversity: grade school math problems (GSM8K), medical QA (Medical_MMLU), and anomaly detection (malicious-600k). As shown in _Table A_ of the one-page PDF, our work consistently outperforms baselines across all tasks. Thus, we believe our work is effective and robust across a wide range of tasks. GSM8K: https://huggingface.co/datasets/openai/gsm8k Medical_MMLU: https://huggingface.co/datasets/medalpaca/medical_meadow_mmmlu malicious-600k: https://huggingface.co/datasets/bgspaditya/malicious-600k ## Scalability and Real-Time Performance In Table 2 of our manuscript, we evaluated the scalability of our work under different numbers of tasks. The results show a small decrease in LoRA-Inlaid throughput (less than 10%) when tasks increase from 2 to 100, even under three request rate levels. To address the comment, we further increased the number of tasks to 1000, and the throughputs under request rates of 5, 10, and 20 are 3.42, 4.02, and 4.22 reqs/s, respectively, which even exceed those of S-LoRA serving only 2 tasks. Thus, we believe LoRA-Inlaid has sound scalability to support both small- and large-scale workloads in real-world scenarios. ## Details of the Incremental Quantization Process Please refer to the _Global Responses_. 
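To make the Weighted Fair Queueing idea under _Fairness Considerations_ above concrete, here is a minimal sketch (all names are hypothetical, and the global virtual-clock bookkeeping of full WFQ is omitted). Each task accrues virtual finish times in proportion to its weighted token cost, and the scheduler always serves the request with the smallest virtual finish time:

```python
import heapq
from itertools import count

class WFQScheduler:
    """Toy Weighted Fair Queueing: tasks receive service in proportion to
    their weights, via per-task virtual finish times (simplified: no
    system-wide virtual clock)."""
    def __init__(self, weights):
        self.weights = dict(weights)                 # task -> weight
        self.last_finish = {t: 0.0 for t in self.weights}
        self._heap, self._seq = [], count()          # seq breaks ties FIFO

    def enqueue(self, req_id, task, cost):
        # cost: e.g. a weighted combination of input and output token counts
        finish = self.last_finish[task] + cost / self.weights[task]
        self.last_finish[task] = finish
        heapq.heappush(self._heap, (finish, next(self._seq), req_id))

    def next_request(self):
        return heapq.heappop(self._heap)[2]

sched = WFQScheduler({"A": 2.0, "B": 1.0})           # task A deserves 2x service
for req_id, task, cost in [("a1", "A", 10), ("a2", "A", 10), ("b1", "B", 10)]:
    sched.enqueue(req_id, task, cost)
order = [sched.next_request() for _ in range(3)]
```

With 2:1 weights and equal costs, task A's virtual time advances half as fast, so both of A's requests are served before B's.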
--- Rebuttal Comment 1.1: Comment: Thanks for your clarifications. I acknowledge that I have read these comments in the rebuttal in response to my comments and that I have considered these in my review scores. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: We are grateful for your acknowledgement of our response and we believe our paper will be substantially improved by addressing your constructive comments.
Summary: This paper introduces LoRA-Inlaid, an innovative and efficient system for quantizing and serving Large Language Models (LLMs) in multi-task environments. By utilizing the Multi-LoRA GPTQ (MLGPTQ) algorithm, LoRA-Inlaid facilitates sharing a unified quantized model across various LoRA adapters, significantly reducing memory usage for model deployment. The platform also features a dynamic task addition mechanism that enhances the stability and reliability of online services. Moreover, it introduces a novel multi-task scheduling approach guided by predicted output lengths and task grouping, significantly reducing memory consumption and increasing overall system efficiency. Experimental results demonstrate that LoRA-Inlaid outperforms current state-of-the-art (SOTA) LLM serving systems in terms of throughput, average request latency, Job Completion Times (JCT), and Service Level Objectives (SLO) Attainment without detracting from the model's performance. Strengths: Originality: The paper introduces LoRA-Inlaid, a novel multi-task serving system for Large Language Models (LLMs), featuring innovations such as multi-task joint quantization, dynamic task addition, and multi-task scheduling. These advancements significantly propel the current field of LLM deployment and services. Significance: LoRA-Inlaid reduces the memory requirements for model deployment through its multi-task joint quantization and scheduling strategies while maintaining model quality. This is of great importance for resource-constrained environments. Additionally, the addition of dynamic tasks ensures the stability and robustness of online services. Clarity: The paper has a clear structure and is easy to comprehend, providing detailed procedures of the algorithms and facilitating reproduction. The experiments encompass a variety of metrics, including throughput, average request latency, JCT, and SLO Attainment, demonstrating a well-designed and persuasive set of results. Weaknesses: 1. 
The paper introduces multi-task quantization, incremental quantization, and predicted output length, which will naturally incur additional computational overhead. However, the paper lacks an analysis of the extra costs associated with introducing these techniques. 2. The paper is missing absolute accuracy comparison experiments for the unquantized model. Both Figure 5 and Figure 6 report only relative accuracy drops of the quantized models without providing a direct comparison to the baseline performance of the unquantized model. 3. An excessive amount of content from Chapter 3 has been relegated to the appendices, with the main text providing only a succinct description of the specific methods and lacking formulas to aid the explanation, which affects the clarity of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Figure 5, why is there a lack of error bars for RTN? 2. In Figure 5, why does the accuracy drop for some tasks with 3-bit quantization appear to be less than that for 4-bit quantization? 3. In the three charts of Figure 6, MLGPTQ and GPTQ have their respective strengths and weaknesses across various metrics. It is suggested that additional tasks be added to demonstrate the superiority of MLGPTQ in terms of accuracy. 4. Currently, there are only experiments on accuracy drops. Please supplement with absolute accuracy experiments. In the efficiency tests, only S-LoRA used half-precision, which significantly increased its computational overhead, leading to an unfair comparison experiment. 5. In addition to the end-to-end system performance, could you independently analyze the individual impacts of components such as multi-task quantization, incremental quantization, and predicted output length on each performance aspect of LLM services? 6. Please add formalized descriptions of dynamic task addition and multi-task scheduling in the main text and a brief outline of the specific method. 
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## W2 & Q4 (first half) Since divergent metrics (SacreBLEU and ROUGE-1) are used for different tasks, Fig 5 shows the relative performance to align the y-axis. We have presented the detailed results in _Table A_ of the one-page PDF, showing that MLGPTQ consistently outperforms the baselines. ## Q1 Thanks for pointing out the issue. The std-dev is also provided in _Table A_. RTN has a lower std-dev than the other quantization approaches. This is because RTN is deterministic, while the other approaches introduce randomness as we shuffle the calibration sets for each quantization run. The randomness in RTN (as well as Unquantized) only comes from the non-deterministic outputs (the LLM may produce different outputs for the same input to enhance creativity); however, such randomness hardly affects the evaluation. ## Q2 Fig 5 uses different value ranges for the y-axes of 3-bit and 4-bit quantization, which may lead to misunderstanding. _Table A_ shows 3-bit quantization is consistently worse than 4-bit quantization. ## Q3 In Fig 6, GPTQ is the baseline that quantizes the model individually for each task, so the quantized model of GPTQ cannot be shared among different tasks. In contrast, MLGPTQ quantizes the model jointly for all tasks, ensuring the quantized model is shareable. Hence, it is reasonable that GPTQ is better on some metrics. We considered GPTQ as a reference since it fulfills the two key factors (discussed in lines 283-286 of Section 4.2), aiming to better analyze the effectiveness. Due to the space constraint, we only provided the results on 3 datasets in Fig 6. To address the comment, we have presented the results on the other 3 datasets of the translation task in _Fig D_ of the one-page PDF (the other tasks are not considered since metrics like G_BLEU, S_BLEU, and NIST_MT do not apply). The results are consistent with Fig 6 --- both MLGPTQ and GPTQ, which fulfill the two factors, outperform the other approaches in almost all metrics. 
Moreover, since MLGPTQ produces a shareable quantized model while GPTQ cannot, the results verify that MLGPTQ is suitable for multi-task quantization. ## Q4 (second half) Although the model is quantized, the computation during inference is still executed in half-precision. Before the computation of each layer, the corresponding model weights are temporarily dequantized (e.g., `MatMul4Bit` in `bitsandbytes`: https://github.com/bitsandbytes-foundation/bitsandbytes/blob/0.43.0/bitsandbytes/autograd/_functions.py#L516). Thus, model quantization does not decrease computational overhead but introduces a minor overhead of dequantization. Besides, S-LoRA does not support deploying quantized models since existing quantization methods do not fit multi-task scenarios, as discussed in Section 3.1 of our manuscript, so we could only use half-precision for S-LoRA. (LoRA-Inlaid is the first system that supports deploying quantized models for multi-task serving.) Thus, we believe the system comparison in our work is fair. ## W1 & Q5 Below we analyze the individual impact of each component. **Multi-task Quantization** As discussed in the response to _Q4 (second half)_, model quantization does not decrease computational overhead. However, it reduces the memory consumption of storing model weights so that we can preserve more memory for KVCache to improve overall efficiency. To assess the impact of multi-task quantization, we considered a variant of LoRA-Inlaid, which disables quantization (i.e., the served model is not quantized), denoted as "Ours (w/o quant)" on the right of _Fig C_ of the one-page PDF. The results show that multi-task quantization brings 39% improvement ("Ours" vs. "Ours (w/o quant)") when serving the 7B model, and disabling it leads to OOM when serving the 13B model. The time cost of quantization is discussed in the analysis of incremental quantization below. Note that we can quantize the model before serving, so it does not affect the performance of serving. 
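The store-quantized/compute-in-float pattern described in _Q4 (second half)_ can be illustrated with a minimal numpy sketch (hypothetical group size and symmetric round-to-nearest; not the actual bitsandbytes kernel):

```python
import numpy as np

def quantize_groupwise(w, group_size=4, levels=16):
    """Symmetric round-to-nearest per group; a toy stand-in for 4-bit storage."""
    g = w.reshape(-1, group_size)
    scale = np.abs(g).max(axis=1, keepdims=True) / (levels // 2 - 1)
    scale[scale == 0] = 1.0                      # avoid division by zero for all-zero groups
    q = np.clip(np.round(g / scale), -(levels // 2), levels // 2 - 1).astype(np.int8)
    return q, scale

def dequantize(q, scale, shape):
    return (q * scale).reshape(shape).astype(np.float32)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)).astype(np.float32)   # a layer's weight, stored quantized
x = rng.standard_normal(8).astype(np.float32)

q, s = quantize_groupwise(W)
W_hat = dequantize(q, s, W.shape)   # temporary dequantization before the layer runs
y = W_hat @ x                       # the matmul itself still executes in float
```

The takeaway matches the point above: quantization shrinks storage (int8 here standing in for packed 4-bit integers), while the matmul still runs on dequantized floats, so compute cost is not reduced.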
**Incremental Quantization** The incremental quantization aims to support dynamic task addition without halting the serving. As introduced in our _Global Responses_, there are two steps in quantization, and we avoid redundant computation in the first step of incremental quantization. Meanwhile, a layer-by-layer mechanism is developed to reduce the memory consumption of incremental quantization. To evaluate its impact, we conducted an experiment where there are 5 tasks in the ongoing service and another 5 tasks need to be added. We measured the time cost of three approaches: - Full quantization with 10 tasks, which halts the serving. - An offline variant of our incremental quantization with the 5 new tasks, which halts the serving. However, it does not need to perform the layer-by-layer quantization. - Our incremental quantization with the 5 new tasks, which works concurrently with the ongoing service. As shown below, by avoiding the redundant computation, the time cost of the first step can be reduced greatly, accelerating quantization. Moreover, although the layer-by-layer mechanism slows down the quantization by 1.26 times due to the extra IO, it reduces the memory greatly and does not halt the serving. ||Step 1|Step 2|Total|Peak Memory (GB)| |-|-|-|-|-| |Full Quant|1403(±21)s|415(±6)s|1818(±22)s|9.2| |Incr Quant (offline)|663(±11)s|416(±5)s|1079(±12)s|9.2| |Incr Quant|889(±11)s|469(±6)s|1358(±13)s|2.5| **Output Length Prediction** As introduced in our _Global Responses_, the output length prediction is done on CPU and overlaps with the LLM inference on GPU. To measure its impact, we experimented with a variant of LoRA-Inlaid without the output length prediction (instead, the averaged output length of each task is used), denoted as "Ours (w/o prediction)" in the left of _Fig C_ of the one-page PDF. The results show that it brings 18% and 27% improvement ("Ours" vs. "Ours (w/o prediction)") for the 7B and 13B models. 
## W3 & Q6 Please refer to the _Global Responses._ --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed feedback. It has addressed my concerns. In light of these explanations, I will revise my score accordingly. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: Thank you for your time and consideration! Your detailed comments are extremely helpful, and we believe addressing your comments will significantly improve our paper.
null
null
Rebuttal 1: Rebuttal: # Global Responses We are grateful to all reviewers for the careful reviews. We provide _Global Responses_ to common questions, followed by individual responses. Please refer to the attached one-page PDF for related figures and tables. ## Details of the Incremental Quantization and Multi-task Scheduling We acknowledge the oversight that these two modules are not introduced comprehensively in the main text of our manuscript due to the space constraint. To address the reviewers' concerns, we would like to elaborate below. ### **Incremental Quantization** In Appendix A.2 of our manuscript, we provided the full quantization process of MLGPTQ in Alg 1. To quantize each model weight $W$ with $T$ tasks, there are two steps: - [Lines 1-2 of Alg 1] Compute the Hessian matrices for all tasks (i.e., compute $H_1^{-1},H_2^{-1},\cdots,H_T^{-1}$). - [Lines 3-13 of Alg 1] Max-aggregate these $T$ Hessian matrices (i.e., $H_{tmp}=MaxAgg(H_1^{-1},\cdots,H_T^{-1})$) and update the model weight. When there are new tasks, a naive solution is to perform full quantization again. Denote $T_1,T_2$ as the numbers of existing and new tasks, respectively. The naive solution runs the two steps above with $T=T_1+T_2$. However, this leads to **redundant computation** of $H_1^{-1}, H_2^{-1}, \cdots, H_{T_1}^{-1}$. Furthermore, given the **commutative property of the max-aggregation operation**, we have $MaxAgg(H_1^{-1},\cdots, H_{T_1+T_2}^{-1})=MaxAgg(MaxAgg(H_1^{-1},\cdots, H_{T_1}^{-1}),MaxAgg(H_{T_1+1}^{-1},\cdots,H_{T_1+T_2}^{-1})),$ where the first term $MaxAgg(H_1^{-1},\cdots,H_{T_1}^{-1})$ has already been computed as $H_{tmp}$ in the previous quantization. Inspired by this, we can cache $H_{tmp}$, so the incremental quantization can be done as follows: - [Same as lines 1-2 of Alg 1] Compute the Hessian matrices for new tasks $H_{T_1+1}^{-1},\cdots,H_{T_1+T_2}^{-1}$. 
- [Same as lines 3-13 of Alg 1] Max-aggregate the $T_2+1$ matrices (i.e., $H_{T_1+1}^{-1},\cdots,H_{T_1+T_2}^{-1}$ and the cached $H_{tmp}^{(cached)}$) and update the model weight. Obviously, incremental quantization with $T_2$ tasks is identical to full quantization with $T_1+T_2$ tasks, while **avoiding redundant computation**. To avoid halting the ongoing services, LoRA-Inlaid spawns a background thread for incremental quantization. In addition, to reduce the memory consumption of incremental quantization, it is done in a **layer-by-layer** manner: for each (unquantized) model weight, we load it from CPU memory to GPU memory, perform incremental quantization, remove it from GPU memory, and proceed to the next model weight. The IO between CPU-GPU is overlapped with computation. Thus, LoRA-Inlaid supports seamless task addition on the fly and has very little influence on the ongoing services. ### **Multi-task Scheduling** In Appendix B of our manuscript, we illustrated multi-task scheduling in Alg 2~4 and discussed its workflow in lines 529-542. For better understanding, we included a flow chart in _Fig A_ of the one-page PDF. Given the prompt of a request, there are two phases for LLM inference: _the prefilling phase_ takes the prompt to compute the key-value (KV) cache and generates the first output token in a single step, and _the decoding phase_ takes the last generated token and KV cache to generate subsequent tokens. The decoding phase needs to be executed for multiple steps for each request, with each step generating only one token, until the EOS token is generated. In the field of LLM serving, the scheduling strategy is responsible for determining what should be done in each scheduling step. There are three choices for each step: - [lines 7-11&16-20 of Alg 2] generate a batch from the prefilling queues; - [line 22-25 of Alg 2] schedule a batch from the decoding queues; - [lines 27-28 of Alg 2] continue decoding for the current batch. 
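Returning briefly to the incremental quantization identity above: because the max-aggregation is commutative and associative, a cached aggregate can stand in for all previously quantized tasks. A minimal numeric check, using elementwise max as a stand-in for MaxAgg (the paper's exact operator may differ, but the caching argument only needs commutativity/associativity):

```python
import numpy as np

def max_agg(*mats):
    """Elementwise max across matrices (a stand-in for MaxAgg)."""
    return np.maximum.reduce(mats)

rng = np.random.default_rng(1)
H = [rng.random((4, 4)) for _ in range(5)]  # toy (inverse) Hessians: T1 = 3 old, T2 = 2 new tasks

full = max_agg(*H)                          # naive: re-aggregate all T1 + T2 tasks
cached = max_agg(*H[:3])                    # H_tmp cached from the previous quantization
incremental = max_agg(cached, *H[3:])       # aggregate only the T2 new tasks with the cache

assert np.array_equal(full, incremental)    # identical result, without recomputing old tasks
```

This is why caching $H_{tmp}$ makes incremental quantization over the new tasks equivalent to full quantization over all tasks.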
The determination in our work follows the standard rules in the field of LLM serving (e.g., switch to prefilling if we have done decoding for several consecutive steps). To enhance multi-task scheduling, there are two key techniques in our work (i.e., the two solutions in Section 3.3) when generating/scheduling a batch. **Scheduling Guided by Output Length Prediction**: Considering the sequence length variation across tasks, we take the remaining output length information into account. - [lines 214-216 of Section 3.3] Upon receiving a new request, we predict its output length on CPU using a small model (255MB) and enqueue the request into the _prefill_reqs_. Note that the output length prediction takes about 16ms for one request on CPU, while it takes about 200ms to finish the inference of one request on GPU. Hence, we can completely overlap the prediction, without occupying any GPU computing resources. - [lines 2-3 of Alg 3 & lines 2-3 of Alg 4, also in lines 217-219 of Section 3.3] For each scheduling step, we sort the queues to achieve the Shortest Remaining Time First (SRTF). **Reducing Tasks Involved via Grouping**: To avoid expensive memory access overhead in each step, there are two efforts. - [lines 6&11 of Alg 3, also in lines 233-234 of Section 3.3] We restrict the number of involved tasks below a threshold (i.e., $\beta$ in Section 3.3) when generating a new batch from the prefilling queues; - [lines 6&11 of Alg 4, also in lines 235-236 of Section 3.3] We prioritize tasks involved in the previous step to avoid swapping LoRA adapters frequently. Besides, our work also maintains the waiting time of each request to avoid starvation (i.e., by the hungry queues in Alg 2-4), which is common in LLM serving. To conclude, the scheduling in our work is not complex. Compared with existing scheduling strategies, we leverage two techniques considering the characteristics of multi-task serving. 
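The two techniques can be sketched together as a toy batch-selection step (all names and thresholds are hypothetical; this is not the actual Alg 3/4):

```python
def select_batch(pending, prev_tasks, beta=2, max_batch=4):
    """Toy batch selection: SRTF ordering by predicted remaining output
    length, with at most `beta` distinct tasks per batch and a preference
    for tasks already active in the previous step (to avoid swapping
    LoRA adapters)."""
    ranked = sorted(pending, key=lambda r: (r["pred_len"], r["task"] not in prev_tasks))
    batch, tasks = [], set()
    for req in ranked:
        if req["task"] not in tasks and len(tasks) >= beta:
            continue                      # grouping: admit no new task beyond the cap
        batch.append(req)
        tasks.add(req["task"])
        if len(batch) == max_batch:
            break
    return batch, tasks

pending = [
    {"id": 1, "task": "t1", "pred_len": 5},
    {"id": 2, "task": "t2", "pred_len": 3},
    {"id": 3, "task": "t3", "pred_len": 1},
    {"id": 4, "task": "t1", "pred_len": 2},
]
batch, tasks = select_batch(pending, prev_tasks={"t1"}, beta=2)
```

In this toy run, t2's request is deferred because t3 and t1 already fill the task cap, while both t1 requests ride along with the already-loaded adapter.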
Thus, in our manuscript, we focused on the two techniques in Section 3.3, while the detailed routine (e.g., the determination of the three choices for each step) is deferred to the appendix. Pdf: /pdf/a96886a4440445cca1cd46a00dfe9ea44de2720c.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Blind Image Restoration via Fast Diffusion Inversion
Accept (poster)
Summary: The paper introduces a blind image restoration method based on DDIM, which iteratively optimizes the initial noise and the degradation model parameters through the restoration loss of reconstructing the degraded image. As a result, the restored image remains on the data manifold of the pretrained diffusion model. The experiments on three image restoration tasks demonstrate the feasibility of the proposed method. Strengths: 1. The method does not alter the reverse sampling and can generate images that lie on the data manifold of the diffusion model at every iteration. 2. The method is adaptable to multiple image restoration tasks, such as deblurring, super-resolution, and JPEG de-artifacting, without requiring model retraining or fine-tuning. Weaknesses: 1. The inversion of generative models has been extensively used in both GAN and diffusion models. This work does not sufficiently distinguish itself from existing methods. 2. The optimizable degradation model has been presented in the paper of GDP [1], and the iterative optimization of the degradation model and the restored image has also been adopted in the paper of TAO [2]. The authors have not adequately addressed how their method differs from these existing approaches. 3. The paper claims that the method is computationally efficient, yet the timing results presented in Table 2 do not support this claim. In addition, despite the authors' assertion that their method achieves state-of-the-art results, the performance metrics in Table 4 do not substantiate this claim. [1] Fei B, Lyu Z, Pan L, et al. Generative diffusion prior for unified image restoration and enhancement[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 9935-9946. [2] Gou Y, Zhao H, Li B, et al. Test-Time Degradation Adaptation for Open-Set Image Restoration[C]//Forty-first International Conference on Machine Learning. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Please address the concerns in the above Weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Please see the above Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Please see the detailed response below. ### Q1. Further distinction from existing methods based on inversion First, we point out that there is a fundamental difference between *inverting a clean image* (mostly useful for applications like image editing), where we aim for the same image, and *inverting a degraded image* (useful for IR tasks), where we aim for the degradation-free image. Indeed, a lot of the literature related to the inversion of generative models applies to the clean case, which is different from what IR tasks aim to solve. Here, we further emphasize the key differences between BIRD and other IR methods based on diffusion and GAN inversion. *Diffusion-based approaches* We can fairly claim that BIRD is the *first* diffusion-based approach to invert a *degraded* image. By inversion, we mean obtaining the initial Gaussian noise $z_T$, which, when applied to the diffusion model, yields the clean image. There are a few methods that apply diffusion inversion to a clean image in the context of image editing, such as [3, 4]. Based on the assumption that the ODE process can be reversed in the limit of small steps, these methods run the diffusion in the reverse direction from $z_0$ to $z_T$ (from image space to Gaussian noise), rather than from $z_T$ to $z_0$, to obtain the inverted latent. BIRD is fundamentally different from these methods, which solve a different task: it employs a gradient descent optimization approach, whereas [3, 4] do not. Moreover, applying those methods to a degraded image results in the same degraded image. *GAN-based approaches* The IR methods based on GAN inversion can be mainly classified into two approaches: 1) learning-based methods such as [5, 6], which train an encoder to invert the GAN. These methods are clearly distinguishable from BIRD, as BIRD is not a learning-based method and does not involve training. 2) Optimization-based methods such as [7, 8]. 
Although both these methods and BIRD are gradient descent optimization methods, they differ in key ways: 1) They fine-tune the GAN or train an auxiliary network, whereas BIRD involves no fine-tuning or training. 2) A diffusion model differs from a GAN in aspects such as iterative sampling versus a single forward pass and inference speed. Simply applying a GAN-style inversion to a diffusion model would take on the order of hours, which is impractical. Although many methods apply GAN inversion to IR tasks, the literature still lacks a diffusion inversion method despite its potential advantages. We believe that BIRD fills this gap and brings all the benefits of diffusion combined with inversion. In our humble opinion, BIRD can open new perspectives on the applicability of diffusion models in IR tasks and encourage further exploration of diffusion inversion for IR applications. ### Q2. Difference with GDP [1] and TAO [2], the iterative optimization of the degradation model and the restored image The idea of jointly optimizing the degradation model and the clean image (or doing it iteratively) is not new and is neither a contribution of ours nor of GDP [1]. The contributions of GDP and BIRD specifically lie in *how* to use the diffusion prior to obtain the clean image. In this regard, BIRD is fundamentally different from all other methods, particularly from GDP [1]. As stated in our contributions (lines 65-67), BIRD is the *first* to frame the IR problem as a latent (initial noise) optimization problem within the context of diffusion models. Based on this fundamental difference, there are many algorithmic differences between GDP [1] and BIRD. For example, our method is a pure *optimization-based* approach, where we optimize a well-defined variable (the initial noise $z_T$), while GDP is a *sampling* method. Moreover, in GDP, the degradation model is updated after each diffusion step, while in BIRD it is updated only after the whole diffusion process. 
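To make the optimization-vs-sampling distinction concrete, here is a toy numpy sketch of BIRD-style alternation, with a fixed linear map standing in for the frozen DDIM sampler (a deliberate simplification; the actual method backpropagates through the full reverse process, and all names here are hypothetical). Gradient steps on the latent alternate with gradient steps on a scalar degradation parameter, and only the measurement loss drives both:

```python
import numpy as np

rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # frozen "sampler": toy stand-in for DDIM
z_true = rng.standard_normal(8)
k_true = 0.5                                      # unknown degradation parameter (a scalar gain)
y = k_true * (A @ z_true)                         # observed degraded measurement

z, k = np.zeros(8), 1.0                           # jointly optimize latent and degradation
lr_z, lr_k = 0.1, 0.02
losses = []
for _ in range(300):
    r = k * (A @ z) - y                           # measurement residual
    losses.append(float(r @ r))
    z = z - lr_z * 2 * k * (A.T @ r)              # gradient step on the latent ("inversion")
    r = k * (A @ z) - y
    k = k - lr_k * 2 * float((A @ z) @ r)         # gradient step on the degradation parameter
```

Only the residual against the degraded measurement is ever observed, yet the loss is driven to (near) zero, recovering the latent and degradation parameter up to the inherent scale ambiguity between them.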
BIRD is also significantly different from TAO [2] because TAO is an open-set method, whereas BIRD is a zero-shot method. Additionally, BIRD focuses on diffusion inversion, while TAO [2] does not. ### Q3. Computational efficiency and state of the art? BIRD consistently yields the best quantitative results in Table 1 and Table 4, except in two cases. We believe that a state-of-the-art method does not necessarily outperform all other methods across all datasets and metrics. This is the case, for example, with GDP [1] and DPS [9]. Regarding computational efficiency, BIRD is slower than GDP but significantly better in terms of image quality. It is faster than the state-of-the-art BlindDPS while requiring five times less memory. We believe that BIRD presents the best trade-off between image quality and computational efficiency. [3] Prompt-to-Prompt Image Editing with Cross-Attention Control. arXiv 2022 [4] Null-text Inversion for Editing Real Images using Guided Diffusion Models. CVPR 2023 [5] High-fidelity image inpainting with gan inversion. ECCV 2022 [6] Gan prior embedded network for blind face restoration in the wild. CVPR 2021 [7] Robust unsupervised stylegan image restoration. CVPR 2023 [8] Exploiting deep generative prior for versatile image restoration and manipulation. PAMI 2021 [9] Diffusion Posterior Sampling for General Noisy Inverse Problems. ICLR 2023 --- Rebuttal 2: Title: Official Comment by Reviewer cmXb Comment: I appreciate the authors' efforts in the rebuttal, and keep my original score since I reserve my opinion on the first two weaknesses. --- Rebuttal 3: Comment: First, we thank the reviewer for their response. The reviewer reserves their opinion on the first two points without providing further justification. Regarding the first point (distinction with inversion-based IR methods), we stated that BIRD is the first IR method based on diffusion inversion. 
We would be grateful if the reviewer could point out any work that applies diffusion inversion for IR. Additionally, we mentioned that a GAN-style inversion approach will not work for diffusion, as in contrast to GANs, one clear difference is that diffusion is much more computationally demanding. Regarding the second point (differences with GDP [1] and TAO [2]), we noted that a key difference is that GDP [1] is a sampling method (it alters the reverse sampling of diffusion by adding a projection-based step to ensure consistency), whereas BIRD is an optimization-based method (it does not alter the reverse sampling). For TAO, a significant difference is that TAO is an open-set method, while BIRD is zero-shot. Figure 1 in TAO [2] illustrates this clearly.
Summary: The authors propose a novel approach to solving image restoration problems using diffusion models, termed BIRD. Unlike previous approaches, BIRD alternates between optimizing a parameterized forward operator and the initial latent variable of a DDIM to address various IR problems in a blind manner (i.e., with an unknown forward operator). This iterative process involves mapping the current latent variable to its outcome in the DDIM via the DDIMReverse algorithm and alternately taking a gradient step of the measurement error with respect to the current latent variable and the forward operator's parameters. Strengths: This paper presents an original approach by leveraging pretrained diffusion models to address blind inverse problems in image restoration. The simplicity and novelty of the method lie in its creative combination of well-known techniques, making it a noteworthy contribution to the field. The quality of the theoretical analysis, particularly in sections 3.2 - 3.2.2, is commendable as it is grounded in related work, providing a solid foundation for the proposed method. The results presented demonstrate the method's potential and effectiveness in addressing the stated problems. The clarity of the paper is generally good, with the authors providing a comprehensive introduction and detailed descriptions of the algorithms. The inclusion of specific equations and theoretical justifications helps in understanding the underlying principles of the method. The paper's structure and flow, from the introduction to the results, are logically organized, aiding in the reader's comprehension. In terms of significance, the method shows considerable promise for application in image restoration, particularly for more complex and challenging inverse problems. The differentiation from non-blind methods is an important aspect, and the potential for this method to address a wider range of problems enhances its relevance and impact in the field. 
Weaknesses: The paper has several areas that require improvement. The theoretical analysis in section 3.1 is noted to need thorough revision, suggesting that some foundational aspects of the method may not be as robust as they could be. Additionally, much of the theoretical content is heavily dependent on related work, which might limit the perceived novelty of the contribution. Specific areas needing attention include: - The phrase "better neural network architecture choices" on lines 21-22 should be made more specific to avoid ambiguity. - Equations 1, 3, 4, 6, 8: \DeclareMathOperator*{\argmin}{arg\,min} - Line 129: If a certain coefficient $\rho$ is equivalent to Eq. 3, it should be easily expressible in terms of $\lambda$. - Line 130: $g_*$ is not defined. - Lines 132-134: The wording and notation need clarification (e.g., $\| z \|^2 = N_x \times M_x$, usually $\| z \|^2$ is a scalar, $N_x \times M_x$ is a tuple) - The derivation of Eq. 8 seems out of place and might be better suited for section 3.2.3. The clarity of the introduction could be improved as it currently reads too much like related work. The work of Chung et al. [2, 3] should be discussed separately in the related work section. Additionally, the statement about GAN inversion methods on lines 104-105 should consider other methods that optimize intermediate latent variables (e.g., https://arxiv.org/abs/1703.03208, https://arxiv.org/abs/2102.07364). Algorithm 1 is confusing with its indices, and it is unclear whether $x_T$ or $x_0$ is the initial latent variable. Similarly, Algorithm 2 would benefit from including requirements similar to those in Algorithm 1 to enhance clarity. Finally, while the results are significant, the paper would benefit from highlighting the unique theoretical contributions more clearly. The differentiation from non-blind methods, especially in solving more general or harder inverse problems, should be emphasized to better showcase the method's potential and impact. 
In summary, while the paper presents a novel and promising approach to image restoration using pretrained diffusion models, it would benefit from addressing the theoretical and clarity issues mentioned above. Strengthening these aspects will enhance the overall quality and impact of the work. Technical Quality: 2 Clarity: 2 Questions for Authors: How does the method work for general inverse problems? Does Algorithm 1 actually require the measurements to be in image space? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: - The method is presented as superior to non-blind methods [2, 3], but these methods tackle more general inverse problems that do not require measurements in image space (e.g. sparse-view CT or phase retrieval). This difference should be emphasized. - It would be beneficial to know which problems cause the method to fail, at least in supplementary materials. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Please see the detailed response below. ### Q2. Specific areas needing attention >The phrase "better neural network architecture choices" on lines 21-22 should be made more specific to avoid ambiguity. Thanks. We mean that better NN architectures (for example, Transformers) have led to better generative models. We will make this clearer in our revision. >Equations 1, 3, 4, 6, and 8: \DeclareMathOperator*{\argmin}{arg\,min} Thanks for pointing this out. We will correct all of them in our revision. > Lines 132-134: The wording and notation need clarification Thanks. We will make the notation clearer. >The derivation of Eq. 8 seems out of place and might be better suited for section 3.2.3. Thanks. We will move it to section 3.2.3. ### Q3. Introduction enhancement and separate discussion of [2, 3] Thanks for pointing this out. We will adjust the introduction and discuss [2, 3] in the related work. ### Q4. Statement about GAN inversion methods on lines 104-105 Thanks for pointing this out. We will include the mentioned works in our revision. ### Q5. Algorithms 1 and 2 are confusing with their indices Thanks for pointing this out. We will use clear indices for both Algorithm 1 and Algorithm 2 in our revision. ### Q6. Solving more general IR tasks? BIRD can be applied to a wide range of IR tasks, not just blind ones. BIRD can solve non-blind tasks (known degradation operators) like inpainting and colorization. We refer the reviewer to Figure 1 in our rebuttal PDF, where we show visual results of our method solving more IR tasks. Our method remains the same as in the blind case, except that we do not optimize for the degradation operator (as it is known). ### Q7. Solving IR tasks where the measurements are not in image space? Yes, BIRD can handle IR tasks where the measurements are not in image space. 
Our method only requires a differentiable degradation operator (or some differentiable approximation of it). BIRD can handle the non-differentiable JPEG-deartifacting. In Figure 1, we show some visual results of the sparse-view CT. ### Q8. The differentiation from non-blind methods, BIRD's potential and impact Although BIRD is primarily proposed as a blind zero-shot method, here we discuss some of the similarities/differences with non-blind methods. To further showcase the potential of BIRD, we highlight three aspects that demonstrate the advantages of our method. *Robustness to severe degradation* As depicted in Figure 2 of our rebuttal PDF, BIRD is more robust to severe degradation (SRx16), both in terms of image quality and faithfulness. *Robustness to noise distribution* Real noise is usually not Gaussian. We compare BIRD with non-blind methods using a mixture of Gaussian and speckle noise. As shown in Figure 3 of our rebuttal PDF, although DPS[2] is robust to noise distribution, BIRD generates higher quality images. *Robustness to error in the degradation model* Non-blind methods generally rely on an off-the-shelf method to estimate the degradation operator. These off-the-shelf methods are not 100% accurate. Thus, a valuable feature of a non-blind IR method is its robustness to errors in degradation operator estimation. In Figure 4 of our rebuttal PDF, we show some visual results where we simulate a small error in the degradation operator. BIRD produces better quality outputs. | **Method**| **SR x16**| **SRx4 (noise with a mixture model)** |**Deblur (degradation operator with a small error)** | |---|---|---|---| | DDNM[20] | 21.73/0.451 | 21.26/0.472 | n.a | | DPS[2] | 21.13/0.447 | 23.76/0.277 | 23.86 /0.360 | | BIRD | 21.95/0.349 | 24.82/0.239 | 24.56/0.251 | Quantitative Comparison (PSNR/LPIPS) with state-of-the-art non-blind methods. ### Q9. 
Problems causing the method to fail BIRD needs a parametric form of the degradation model (albeit with unknown parameters). Such a constraint is not satisfied for some IR problems like image deraining, and so our method cannot handle it. We mention this limitation in our main paper at lines 233-236. --- Rebuttal 2: Comment: We are sorry; we did not manage to include this part in the initial version of our rebuttal. ### Q1. The theoretical analysis in Section 3.1 needs revision Thanks for pointing this out. We will update Section 3.1 and take the comments of the reviewer about "specific areas needing attention" into account, including fixing the notation issues. Here, we provide a brief derivation that leads to the same final result and comment on the theory of BIRD. (We keep the same notation as in the paper.) We begin with Eq. (3): $$ \hat{x} = \arg\min_{x \in \mathbb{R}^{N_x \times M_x}} \| y - H(x) \|^2 + \lambda R(x) $$ $R$ is a prior term, and $\lambda > 0$ trades off the likelihood and the prior. We define $\Omega \subset [0, 255]^{N_x \times M_x}$ as the domain of "realistic" images (i.e., the support of $p(x)$) and propose employing a formulation that implicitly assumes a uniform prior $p_U$ on $\Omega$. That is, $R(x) = - \log(p_U(x)) = -\log(\text{const} \cdot 1_\Omega(x))$, where $1_\Omega(x)$ is 1 if $x$ is in the support $\Omega$ and 0 otherwise. This results in the following formulation: $$ \hat{x} = \arg\min_{x \in \Omega} \| y - H(x) \|^2 $$ In theory, this choice ensures two key aspects: 1. *Guarantees realism:* By definition, we are restricting the search to $\Omega$. This is particularly illustrated in Figure 4 of our rebuttal, where BIRD generates more realistic results even in the presence of an error in the degradation model. 2. 
*Favors higher fidelity:* In other plug-and-play methods that use $R(x) = -\log(p(x))$, an image with higher $p(x)$ but lower fidelity (data term) can be favored over an image with a higher fidelity but a lower $p(x)$ (e.g., in the tail of the distribution). In contrast, in our case, the image with higher fidelity is always favored. This is particularly illustrated in Figure 2 of our rebuttal, where BIRD generates results with higher fidelity even under severe degradation. Returning to the derivation, to ensure that $x \in \Omega$, we parameterize $x$ via the initial noise of a pre-trained diffusion model $g$: $$ \hat{x} = \arg\min_{z \sim \mathcal{N}(0, I)} \| y - H(g(z)) \|^2 $$ Given that most of the density of a high-dimensional normal random variable is concentrated around $\|z\|^2 = N_x M_x$ [12, 19], we can derive Eq. (8) (shown in the paper). We will be happy to address any further concerns the reviewer may have. ### Q9. Problems causing the method to fail For the non-blind case, as suggested by the reviewer, we experimented with the task of phase retrieval and found that there were occasional convergence issues. A similar problem was reported in [2]. We will mention it in the limitations section. --- Rebuttal Comment 2.1: Comment: I appreciate the authors' detailed responses to my and the other reviewers' comments. The planned revisions and additional clarifications enhance the paper's robustness and clarity. I believe these changes will improve the paper's quality and impact. > Yes, BIRD can handle IR tasks where the measurements are not in image space. Our method only requires a differentiable degradation operator (or some differentiable approximation of it). BIRD can handle the non-differentiable JPEG-deartifacting. In Figure 1, we show some visual results of the sparse-view CT. In Figure 1, in the case of sparse-view CT, the term "degraded" might be misleading. The measurements in this scenario are not in image space; they should be sinograms instead. 
Are you perhaps referring to baseline reconstructions derived from these measurements as "degraded"? If so, this interpretation differs from the conventional understanding of degradation used in inpainting tasks, where the degradation typically occurs within the image space itself. Also missing here is the extent to which the "sparse view" was applied. My initial concern regarding the reliance on existing theoretical content, which could diminish the perceived novelty of the contribution, remains unaddressed. While I appreciate the revisions made to Section 3.1, the emphasis on problem formulation appears to be somewhat minor and does not directly contribute to the core approach discussed in Section 3.2.3. A more focused theoretical analysis explaining the significance and utility of equations (19), (21), and (22) would be far more beneficial and would strengthen the overall argument of the paper. --- Rebuttal 3: Comment: Sorry for the delay. We would like to thank the reviewer for the fruitful discussion and the valuable comments that have helped improve our paper. Please find our detailed response below. ### Q10. Sparse-View CT in Figure 1 Thank you for pointing this out. We agree that the term "degraded" may not be the most accurate description. For the sparse-view CT, we adopted the same setting as [3] and used their official implementation available on GitHub (specifically, the "run\_CT\_recon.py" file) to generate the measurements, which are not in the image space. The sparsity level is set to 6. For an input tensor of size [1, 1, 256, 256], the sinogram has a size of [1, 1, 363, 30]. In Figure 1, we show the baseline reconstructions derived from these measurements (similar to Figure 4 in [3]). ### Q11. Section 3.2.3, Prior Work, Theoretical Analysis, Eq (19), Eq (21), Eq (22) [Part 1] Here, we provide a more theoretical analysis of the approach discussed in Section 3.2.3 and comment on Eq (19), Eq (21), and Eq (22). 
First, we want to note that the derivation in Section 3.2.3 is directly connected to our problem formulation as we aim to solve Eq (8): $$ \hat{x} = \arg\min_{z: \|z\|^2 = N_x M_x} \| y - H(g(z)) \|^2 $$ One challenge in solving Eq (8) is that an iterative approach is computationally expensive, as a single evaluation of $g(z)$ may take minutes. To address this, we introduce a family of generative processes $\text{DDIMReverse}(., \delta t) = g^{\delta t}$, parameterized by $\delta t \geq 1$. These processes are defined *not on all latent variables* $x_{1:T}$, but on a subset \{$x_0, x_{\delta t}, x_{2 \delta t}, \ldots, x_{T-2 \delta t}, x_{T-\delta t}, x_T$ \} of length $K$. The scalar $\delta t$ defines the jump in the generative process sampling. We can show that, by carefully factorizing both the diffusion forward process and the corresponding generative process and choosing the appropriate marginals, for all $\delta t$, $g^{\delta t}$ constitutes a valid generative model from $p(x)$. 
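Concretely, the deterministic ($\sigma_t = 0$, DDIM-style) member of this family can be sketched as a strided sampler over the timestep subset. This is a toy illustration only: the linear $\beta$ schedule, the dimensions, and the closed-form stand-in for $\epsilon_\theta$ are assumptions made for the sketch, not the paper's trained model.

```python
import numpy as np

# Toy strided, deterministic (sigma_t = 0) sampler over the subset {T, T-dt, ..., dt}.
# The linear beta schedule and the stand-in predictor are illustrative assumptions.
T, dt = 100, 20
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)           # \bar{alpha}_t, stored at index t-1

def eps_theta(x, t):
    # Stand-in predictor for a model whose "clean image" is exactly 0,
    # so that eps = x / sqrt(1 - alpha_bar_t).
    return x / np.sqrt(1.0 - alpha_bar[t - 1])

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                    # x_T ~ N(0, I)
ts = list(range(T, 0, -dt))                   # strided subset, length K = T / dt
for i, t in enumerate(ts):
    eps = eps_theta(x, t)
    x0_hat = (x - np.sqrt(1 - alpha_bar[t - 1]) * eps) / np.sqrt(alpha_bar[t - 1])
    if i + 1 < len(ts):                       # jump directly from t to t - dt
        tp = ts[i + 1]
        x = np.sqrt(alpha_bar[tp - 1]) * x0_hat + np.sqrt(1 - alpha_bar[tp - 1]) * eps
    else:
        x = x0_hat                            # final step returns the sample x_0
```

With this stand-in predictor the clean estimate is zero at every step, so the sample collapses to zero regardless of the stride; the point of the sketch is that only $K = T/\delta t$ network evaluations are performed rather than $T$.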
Here is a brief derivation: We consider the following factorization of the diffusion forward process: $$ q_{\delta t}(x_{1:T} | x_0) = q_{\delta t}(x_T| x_0) \prod_{i=1}^{K} q_{\delta t}(x_{(i -1) \delta t} | x_{i \delta t}, x_0) \prod_{t \notin \{ 0, {\delta t}, {2 \delta t}, \ldots, {T-2 \delta t}, {T-\delta t}, T \} } q_{\delta t}(x_t | x_0) $$ where $ q_{\delta t}(x_t | x_0) = \mathcal{N} \left( \sqrt{\bar{\alpha_t}} x_0, (1 - \bar{\alpha_t}) I \right) $ and $ q_{\delta t}(x_{(i -1) \delta t} | x_{i \delta t}, x_0) = \mathcal{N} \left( \sqrt{\bar{\alpha_{(i -1) \delta t}}} x_0 + \sqrt{1 - \bar{\alpha_{(i -1) \delta t}} - \sigma_{i \delta t}^2} \frac{x_{i \delta t} - \sqrt{\bar{\alpha_{i \delta t} }} x_0}{\sqrt{1 - \bar{\alpha_{i \delta t} }}}, \sigma_{i \delta t} ^2 I \right) $ Specifically, $q_{\delta t}(x_{(i -1) \delta t} | x_{i \delta t}, x_0) $ is defined such that $q_{\delta t}(x_{i \delta t} | x_0) = \mathcal{N} \left( \sqrt{\bar{\alpha_{i \delta t}}} x_0, (1 - \bar{\alpha_{i \delta t}}) I \right)$ for all $i \in [1, K]$. (Here, all the marginals $q_{\delta t}(x_t | x_0)$ for $t \in [1, T]$ match those in the original diffusion formulation.) We then define the corresponding generative process $p_{\theta, \delta t}$: $$ p_{\theta, \delta t}(x_{0:T}) = p_{\theta, \delta t}(x_T) \prod_{i=1}^{K} p_{\theta, \delta t}(x_{(i -1) \delta t} | x_{i \delta t}) \prod_{t \notin \{ 0, {\delta t}, {2 \delta t}, \ldots, {T-2 \delta t}, {T-\delta t}, T \}} p_{\theta, \delta t}(x_0 | x_t) $$ (Only the first part of the factorization is used to produce samples.) 
We define $ p_{\theta, \delta t}(x_0 | x_t) = \mathcal{N} \left( f_{\theta}^{(t)}(x_t), \sigma_t^2 I \right) $ and $ p_{\theta, \delta t}(x_{(i -1) \delta t} | x_{i \delta t}) = q_{\delta t}(x_{(i -1) \delta t} | x_{i \delta t}, f_{\theta}^{(i \delta t)}(x_{i \delta t})) $ where $ f_{\theta}^{(t)}(x_t) = \frac{x_t - \sqrt{1 - \bar{\alpha_t}} \cdot \epsilon_\theta^{(t)}(x_t)}{\sqrt{\bar{\alpha_t}}} $ We optimize $\theta$ using the variational inference objective: $$ J_{\delta t}(\epsilon_\theta) = \mathbb{E}_{x_{0:T} \sim q_{\delta t} (x_{0:T})} \left[ \log q_{\delta t} (x_{1:T} | x_0) - \log p_{\theta, \delta t} (x_{0:T}) \right] $$ $$ = \mathbb{E}_{x_{0:T} \sim q_{\delta t} (x_{0:T})} \left[ \sum_{i=1}^{K} D_{KL} \left(q_{\delta t} (x_{(i -1) \delta t} | x_{i \delta t}, x_0) \| p_{\theta, \delta t} (x_{(i -1) \delta t} | x_{i \delta t}) \right) + \sum_{t \notin \{ 0, {\delta t}, {2 \delta t}, \ldots, {T-2 \delta t}, {T-\delta t}, T \}} D_{KL} \left(q_{\delta t} (x_t | x_0) \| p_{\theta, \delta t} (x_0 | x_t) \right) \right] $$ where each KL divergence is between two Gaussians where only the mean depends on $\theta$. --- Rebuttal 4: Comment: ### Q11. 
Section 3.2.3, Prior Work, Theoretical Analysis, Eq (19), Eq (21), Eq (22) [Part 2] By substituting all the terms with their values: $$ J_{\delta t}(\epsilon_\theta) \equiv \mathbb{E}_{x_{0:T} \sim q_{\delta t} (x_{0:T})} \left[ \sum_{i=1}^{K} \frac{\| x_0 - f_{\theta}^{(i \delta t)}(x_{i \delta t}) \|^2}{2 \sigma_{i \delta t}^2} + \sum_{t \notin \{ 0, {\delta t}, {2 \delta t}, \ldots, {T-2 \delta t}, {T-\delta t}, T \}} \frac{\| x_0 - f_{\theta}^{(t)}(x_{t})\|^2}{2 \sigma_t^2} \right] $$ $$ \equiv \mathbb{E}_{x_{0:T} \sim q_{\delta t} (x_{0:T})} \sum_{t=1}^{T} \frac{\| x_0 - f_{\theta}^{(t)}(x_{t})\|^2}{2 \sigma_t^2} \equiv \mathbb{E}_{x_0 \sim q(x), \epsilon \sim \mathcal{N}(0, I), x_t = \sqrt{\bar{\alpha_t}} x_0 + \sqrt{1 - \bar{\alpha_t}} \epsilon} \sum_{t=1}^{T} \frac{\left\| \frac{x_t - \sqrt{1 - \bar{\alpha_t}} \epsilon}{ \sqrt{\bar{\alpha_t}}} - \frac{x_t - \sqrt{1 - \bar{\alpha_t}} \cdot \epsilon_\theta^{(t)}(x_t)}{\sqrt{\bar{\alpha_t}}} \right\|^2}{2 \sigma_t^2} $$ $$ J_{\delta t}(\epsilon_\theta) \equiv \mathbb{E}_{t \sim \mathcal{U}(\{1, \ldots, T\}); x_0 \sim q(x); \epsilon \sim \mathcal{N}(0, I)} \left[ \| \epsilon - \epsilon_{\theta} (\sqrt{\bar{\alpha_t}} x_0 + \sqrt{1 - \bar{\alpha_t}} \epsilon, t) \|^2 \right] $$ The objective $J_{\delta t}(\epsilon_\theta)$ matches the original training objective of the vanilla diffusion model (defined over all time steps $[1, T]$). This implies that: 1) All the generative processes $g^{\delta t}$ are equivalent and can be used to sample from $p(x)$. 2) There is no need for retraining to sample from $g^{\delta t}$, as the training objective for the generative process involving only a subset of the latent variables is the same as the one corresponding to the vanilla diffusion model (which involves all the latent variables). 
Based on these results, we can rewrite the minimization in eq (8) as follows: $$ \hat{x} = \arg\min_{z: \|z\|^2 = N_x M_x} \| y - H(g^{\delta t \gg 1}(z)) \|^2 $$ This leads to a more efficient optimization due to the much faster evaluation of $g^{\delta t \gg 1}(z)$ (as the length of the sampling trajectory is much shorter than $T$), while remaining equivalent to eq (8). In eq (19), given an initial random Gaussian sample $z$, we use $g^{\delta t \gg 1}$ instead of the original $g$ to evaluate the optimization objective. Once $\| y - H(g^{\delta t \gg 1}(z)) \|^2$ is computed, we take a vanilla gradient descent step with respect to the optimization variables ($z$ and $\eta$), which leads directly to eq (21). However, when applying the gradient step, $z$ may deviate from the space \{ $z: \|z\|^2 = N_x M_x$ \}, so we project it back, which leads to eq (22). We repeat this process until convergence. Although we build on existing techniques, our work introduces a novel formulation of inverse problems within the context of diffusion models and solves it in a unique and innovative way. We would also like to emphasize that we are the *first* to demonstrate that an inversion-based IR approach in the context of diffusion models is not only feasible but also highly effective. The reviewer aptly summarizes this by stating, *"The simplicity and novelty of the method lie in its creative combination of well-known techniques, making it a noteworthy contribution to the field."* As for the impact of our work, we believe that BIRD represents a significant leap forward in expanding the applicability of diffusion models to inverse imaging. BIRD provides a fresh perspective, and we hope it will inspire further research into the under-explored concept of inversion within the context of inverse problems and diffusion models.
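The projected gradient descent loop described in this thread (evaluate the data term, take a gradient step as in eq (21), project back to the sphere as in eq (22)) can be made concrete with a minimal numeric sketch. Everything below is a toy stand-in: a linear map replaces the generator $g^{\delta t \gg 1}$ and a binary mask replaces the degradation operator $H$; this is not the authors' implementation.

```python
import numpy as np

# Projected gradient descent for  min_z ||y - H(g(z))||^2  s.t. ||z||^2 = d,
# mirroring the structure of eqs (19), (21), (22). The linear generator and
# the masking operator are toy stand-ins, not the paper's diffusion model.
rng = np.random.default_rng(0)
d = 16                                           # latent size (N_x * M_x in the paper)
A = rng.standard_normal((d, d)) / np.sqrt(d)     # stand-in for the generator g
mask = (rng.random(d) > 0.3).astype(float)       # stand-in degradation operator H

g = lambda z: A @ z
H = lambda x: mask * x

z_star = rng.standard_normal(d)
z_star *= np.sqrt(d) / np.linalg.norm(z_star)    # ground-truth latent on the sphere
y = H(g(z_star))                                 # observed measurements

z = rng.standard_normal(d)
z *= np.sqrt(d) / np.linalg.norm(z)              # initialize on ||z||^2 = d
loss = lambda z: float(np.sum((y - H(g(z))) ** 2))
history = [loss(z)]
for _ in range(300):
    grad = -2.0 * A.T @ (mask * (y - H(g(z))))   # gradient of the data term, cf. eq (21)
    z = z - 0.05 * grad                          # plain gradient step
    z *= np.sqrt(d) / np.linalg.norm(z)          # projection back to the sphere, cf. eq (22)
    history.append(loss(z))
```

In the actual method the gradient through $g^{\delta t \gg 1}$ is obtained by backpropagating through the strided sampling trajectory, which is what makes the short trajectory (small $K$) essential for tractability.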
Summary: This paper proposes a new blind image restoration (BIR) method that exploits the image prior induced by a diffusion model (DM). Different from existing DM-based methods, this work presents a diffusion inversion technique such that the estimated image can be constrained to lie on the image manifold learned by the pre-trained DM, and the computational cost can be reduced. Experiments on several image restoration tasks have been conducted to demonstrate the effectiveness of the proposed method. Strengths: The idea of diffusion inversion is interesting and reasonable; it broadens the concept of the deep generative prior and may motivate more related studies. Weaknesses: The major problem is that the experiments are not comprehensive enough. - Some commonly used benchmarks in the corresponding BIR tasks were not considered. For example, for the blind motion deblurring task, the commonly used Lai [1] and Levin [2] datasets were not used for evaluation. - Some advanced deep prior-based methods for the corresponding image restoration tasks were not compared. For example, for the deblurring task, some DIP-based methods, e.g., [3][4][5], were not compared; for the super-resolution task, some DIP-based ones, e.g., [6][7], were not included. References: [1] W. Lai et al. A comparative study for single image blind deblurring. CVPR, 2016. [2] R. Köhler et al. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. ECCV, 2012. [3] D. Ren et al. Neural blind deconvolution using deep priors. CVPR, 2020. [4] D. Huo et al. Blind image deconvolution using variational deep image prior. TPAMI, 2023. [5] J. Li et al. Self-supervised blind motion deblurring with deep expectation maximization. CVPR, 2023. [6] J. Liang et al. Flow-based kernel prior with application to blind super-resolution. CVPR, 2021. [7] Z. Yue et al. Blind image super-resolution with elaborate degradation modeling on noise and kernel. CVPR, 2022. 
Technical Quality: 3 Clarity: 2 Questions for Authors: It seems that Figure 1 indeed shows an example of deblurring with motion blur, while the caption says Gaussian blur. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have discussed the limitations, which are reasonable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Please see the detailed response below. ### Q1. Add more comparisons [3-7] | Method| Zero-shot?| Task-agnostic? | Code available? | |---|---|---|---| | BIRD | *Yes* | *Yes* | N.A | | [3] | Yes | No | Yes | | [4] | Yes | No | Yes | | [5] | No | No | No* | | [6] | Yes | No | Yes | | [7] | No | No | Yes | We thank the reviewer for the suggestion. BIRD is zero-shot and task-agnostic, while [4-7] are dataset-based (involving training) and [3-7] are task-specific. We followed recent zero-shot works, such as [8, 9], which do not consider dataset-based approaches, as such a comparison is not apples-to-apples. Nonetheless, we report the requested comparisons in the following tables and will include them in our revision. Unfortunately, the inference script for [5] is not publicly available. We have contacted the authors and will include the comparison in a comment if we receive the scripts. | **Method**| **CelebA**| **Imagenet** | **Lai[1]** | **Levin[2]**| |---|---|---|---|---| | [3] | 20.12 | 19.15 | 21.13 | 33.07 | | [4] | 21.43 | 20.45 | **25.12** | - | | BIRD | **24.67** | **23.76** | 24.57 | **34.18** | Quantitative comparison (PSNR) on Gaussian deblurring | **Method**| **CelebA**| **Imagenet** | |---|---|---| | [6] | 20.62 | 20.45 | | [7] | 22.38 | **22.53** | | BIRD | **22.75** | 22.15 | Quantitative comparison (PSNR) on super-resolution ### Q2. Benchmark on Lai [1] and Levin [2] Datasets for Blind Deblurring The Lai [1] and Levin [2] datasets are generally used for dataset-based [7] and task-specific methods like [3, 4, 5]. We followed the recent zero-shot and task-agnostic works (the same category as our paper) such as [8, 9], which benchmark on ImageNet and FFHQ/CelebA. In the table above, we added a comparison on Lai [1] and Levin [2] for Gaussian deblurring. For [3] and [4], we report the results shown in their papers. We note that [4] did not compare on Levin [2]. ### Q3. 
Figure 1 shows an example of deblurring with motion blur and not Gaussian blur Thanks for pointing this out. We will correct it in our revision. [8] Denoising Diffusion Restoration Models. NeurIPS 2022 [9] Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model. ICLR 2023
Summary: The paper presents a method to accelerate blind image reconstruction by leveraging pre-trained diffusion models. The authors suggest a strategy that simultaneously optimizes the degradation model parameters and the restored image, thereby improving the reconstruction process’s efficiency. Furthermore, the authors introduce a sampling method based on a pre-trained diffusion model. This method is devised to ensure the restored images are in alignment with the image manifold, a critical factor in preserving the restored images’ integrity. To enhance the speed of the process, the authors consider advancing in the forward diffusion process using large time steps, thus significantly reducing the runtime of the restoration process. The experimental results demonstrate improved performance across various tasks, outperforming BlindDPS, a recent solution for blind inverse problems. Furthermore, it surpasses a couple of other blind image reconstruction methods that do not rely on diffusion models. Strengths: - The paper demonstrates a significant improvement in speed and memory efficiency over the primary method, BlindDPS. Additionally, the quality of the reconstruction has also been enhanced. - The method, in particular, is articulated and presented effectively in the paper. Weaknesses: - The paper lacks a theoretical discussion and experimental comparison with some recent or related blind diffusion methods for inverse imaging problems, such as [3-5]. - [3] “Fast Diffusion EM: A Diffusion Model for Blind Inverse Problems with Application to Deconvolution” by Laroche, Charles, Andrés Almansa, and Eva Coupete. *WACV* 2024. - [4] “Gibbsddrm: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration” by Murata, Naoki, et al. *ICML* 2023. - [5] “A Diffusion Model with State Estimation for Degradation-Blind Inverse Imaging” by Ji, Liya, et al. *AAAI* 2024. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - It would be beneficial to include the results of the version with δt=1 in Tables 1, 3, and 4 as well. - The paper only qualitatively considers JPEG de-artifacting and compares it solely to DIP. Why are more recent methods such as the following not considered? - [1] “Towards Flexible Blind JPEG Artifacts Removal” by Jiang, Jiaxi, Kai Zhang, and Radu Timofte. *ICCV* 2021. - [2] “DriftRec: Adapting Diffusion Models to Blind JPEG Restoration” by Welker, Simon, Henry N. Chapman, and Timo Gerkmann. *TIP* 2024. - Minor point: - In Figure 3, the image index should be corrected from (f) to (e). Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: no comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Please see the detailed response below. ### Q1. Theoretical Discussion and Experimental Comparison to [3-5] | Method| Zero-shot?| Task-agnostic? | Code available? | |---|---|---|---| | BIRD | *Yes* | *Yes* | N.A | | [1] | No | No | Yes | | [2] | No | No | Yes | | [3] | Yes | No | Yes | | [4] | Yes | Yes | Yes | | [5] | No | Yes | No | We thank the reviewer for the suggestion. We will add the content below to the paper. First, let us discuss the references [3-5]. BIRD is a zero-shot method (no training involved), while [5] involves fine-tuning a diffusion model and training a state estimator. It also requires a dataset of input-output pairs for training, whereas BIRD does not. References [3-4] are zero-shot approaches, but fundamentally different from BIRD. A key difference is that [3-4] *modify* the original diffusion sampling by adding a projection-based step to enforce consistency with the corrupted image. In contrast, in BIRD we use the original diffusion sampling scheme as is. Moreover, [3] applies the EM algorithm after each diffusion reverse step to jointly update the image latent and the blur kernel. [4] extends DDRM to the blind case by adopting a Gibbs sampler to enable efficient sampling from the posterior distribution. We note that [3] is task-specific (proposed only for blind deconvolution) while BIRD can handle different IR tasks. [4] can only handle linear IR tasks (BIRD can handle non-linear ones like JPEG de-artifacting) and is not easily applicable to problems involving linear operators for which the SVD is computationally infeasible. Below we also show a quantitative comparison between BIRD and methods [3, 4]. Unfortunately, we could not compare to [5]: we could not find its code, and the results reported in [5] are not directly comparable to ours since they do not use the exact same degradation or noise model. 
| **Method**| Motion Deblur | Gaussian Deblur | SR x8 | |---|---|---|---| | [3] | 23.18/0.284 | 24.52/0.235 | n.a | | [4] | 22.94/0.314 | 23.57/0.266 | 22.17/0.357 | | BIRD | **23.76**/**0.263** | **24.67**/**0.225** | **22.75**/**0.306** | Quantitative comparison (PSNR/LPIPS) on CelebA. ### Q2. Comparison to [1-2] We thank the reviewer for their suggestion. We did not include comparisons to methods [1-2] because BIRD is zero-shot and task-agnostic, while [1-2] are dataset-based (involving training) and task-specific (JPEG de-artifacting). In our paper, we followed recent zero-shot works, such as [8, 9], which do not consider dataset-based approaches, as such a comparison is not apples-to-apples. Nonetheless, we compared to [1], which provides a pretrained model in its official code. Unfortunately, we could not compare to [2] as they do not provide a pretrained model and we could not train their model in time. We will include a comparison to [2] in our revision. In the table below, we illustrate the key difference between zero-shot (single-image) and dataset-based methods. Dataset-based approaches tend to work well when tested with data having the same degradation operators/noise distributions seen during training, but their performance drops when dealing with new data. We further show this key difference in Figure 5 of the rebuttal PDF: when testing [1] with no noise (the same as during training), it outputs a clean image. However, when adding a small amount of noise not seen during training, [1] produces artifacts. In contrast, BIRD yields the same result under both settings. | **Method**| **CelebA (w/o noise)**| **CelebA (w/ noise)** | |---|---|---| | [1] | **29.24**/0.242 | 26.45/0.318 | | BIRD | 28.75/**0.221** | **28.67**/**0.224** | Quantitative comparison (PSNR/LPIPS) on JPEG de-artifacting on CelebA ### Q3. Include the Results of the Version with δt=1 in Tables 1, 3, and 4 We show below the updated Table 3. 
Running BIRD with δt=1 is computationally almost infeasible, as one image takes around 22500 seconds. Indeed, one of our motivations is to propose a fast (and computationally feasible) diffusion inversion that benefits from the ability to jump ahead in the diffusion reverse process (δt higher than 1). We will add the results for some reasonably small δt (δt=10 or δt=20) for the other tables in our revision. (If the reviewer means the case with one jump (δt=1000), this is feasible and we can add it.) | step size (δt) | PSNR ↑ | LPIPS ↓ | Time [s] ↓ | |---|---|---|---| | δt=1 | - | - | 22500 | | δt=50 | 28.74 | 0.218 | 412 | | δt=100 | 28.67 | 0.224 | 234 | | δt=200 | 28.45 | 0.237 | 110 | Updated Table 3 ### Q4. In Figure 3, the image index should be corrected from (f) to (e) Thanks for pointing this out. We will correct it in our revision. [8] Denoising Diffusion Restoration Models. NeurIPS 2022 [9] Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model. ICLR 2023 --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ efforts in addressing many of the reviewers’ comments. I am inclined to remain positive on the score, as most of my original concerns have been addressed. However, I look forward to seeing the feedback from a couple of other reviewers before deciding on their remaining concerns.
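The runtimes in the updated Table 3 are roughly consistent with cost scaling as the number of reverse steps K = T/δt. The back-of-the-envelope check below assumes T = 1000 total diffusion steps (standard for these models, but not stated in this thread) and a fixed cost per step; both are assumptions for illustration.

```python
# Rough consistency check of the Table 3 runtimes: with T = 1000 (assumed) and
# a fixed per-step cost implied by the dt = 1 runtime, the estimated runtimes
# for larger strides should roughly match the reported ones.
T = 1000
time_dt1 = 22500.0                              # reported seconds for dt = 1
per_step = time_dt1 / T                         # implied cost per reverse step
estimates = {dt: per_step * (T // dt) for dt in (50, 100, 200)}
reported = {50: 412.0, 100: 234.0, 200: 110.0}  # values from the updated Table 3
```

The estimates land within about 10% of the reported numbers, supporting the claim that the stride δt is the dominant cost factor.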
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and feedback. We attached a pdf containing some additional results based on their comments and suggestions. Pdf: /pdf/d7ccbe6f8498b97f3ab8890ed32a366d8125795d.pdf
NeurIPS_2024_submissions_huggingface
2024
Asymptotics of Alpha-Divergence Variational Inference Algorithms with Exponential Families
Accept (poster)
Summary: This paper focuses on the theoretical study of variational inference using alpha-divergences with exponential families. This is an important problem in the field of variational inference. Specifically, this work proves a geometric convergence rate for the algorithm proposed in [9]. Moreover, this paper also proposes an alternative optimization algorithm, and its convergence guarantees are provided. The proposed algorithm finds applications in VAEs. Strengths: 1. The asymptotic convergence rate of the algorithm proposed in [9] is derived in this work. Moreover, this paper proposes an alternative unbiased algorithm with convergence guarantees. 2. Assumptions (H1)-(H3) are supported by reasonable explanations. 3. Applications of the proposed algorithm to VAEs are provided. Weaknesses: 1. It only considers variational families from the exponential family. 2. The convergence is proved for the proposed algorithm only in the asymptotic sense. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Typo at line 81: "negative" $\to$ "positive", since you drop the negative $\frac{1}{\alpha(\alpha-1)}$ term. 2. What is the technical difficulty in deriving a convergence result similar to Theorem 1 for the proposed algorithm? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and honest feedback. Below, we provide detailed responses to your second question. The two weaknesses you pointed out are addressed in our global rebuttal. We hope that our responses will clarify the issues you raised. > What is the technical difficulty in deriving a convergence result similar to Theorem 1 for the proposed algorithm? It does not present any major difficulty and can be done by following the exact same steps as in the proof of Theorem 1. We would get a similar result, with a slightly slower rate however: the $\gamma$ in the convergence speed would be replaced by $\gamma V(\eta_*)$, and we have $V(\eta_*) \leq 1$ by Jensen’s inequality. It is easy to gain some intuition for why this is the case: under the mean parameterization, the update is changed from $(1 - \gamma) \mu + \gamma \mathcal R$ to $(1 - \gamma V) \mu + \gamma V \mathcal R$, so we are still performing a convex combination of $\mu$ and $\mathcal R$, but with a less aggressive coefficient, thus yielding a slower rate of convergence. This also provides a heuristic as to why UNB moves slower at the start of the experiments: when the variational parameter is poorly chosen, $V$ is closer to $0$, which caps the progress that can be made at each step when the only values allowed for $(\gamma_t)$ are between $0$ and $1$. We are grateful for your careful review; we hope our responses have adequately addressed your questions and provided further insights. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for the response. I would like to keep my score.
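The convex-combination heuristic in the rebuttal above can be checked numerically in a one-dimensional caricature. Holding the target $\mathcal R$ of the iteration fixed is a deliberate simplification (in the actual algorithm $\mathcal R$ depends on the current variational parameter), as are the scalar values chosen below.

```python
# One-dimensional caricature of the two updates discussed above, with the
# target R held fixed (a simplification: in the algorithm R depends on the
# current parameter). With 0 < V <= 1, the unbiased update contracts at rate
# (1 - gamma * V), slower than the biased update's rate (1 - gamma).
gamma, V, R = 0.5, 0.3, 1.0
mu_biased, mu_unbiased = 0.0, 0.0
for _ in range(20):
    mu_biased = (1 - gamma) * mu_biased + gamma * R
    mu_unbiased = (1 - gamma * V) * mu_unbiased + gamma * V * R
err_biased = abs(mu_biased - R)        # (1 - gamma)^20
err_unbiased = abs(mu_unbiased - R)    # (1 - gamma * V)^20, much larger
```

After 20 steps the biased iterate is far closer to the fixed point, matching the rebuttal's point that the unbiased variant pays for its unbiasedness with a slower geometric rate, especially when $V$ is small early in training.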
Summary: This paper studies the asymptotic properties of a variational algorithm that ensures a monotonic decrease in the alpha-divergence. Specifically, the authors investigate behavior in the setting where the variational distribution belongs to the exponential family of distributions. In this setting, and when a key integral can be analytically calculated, the authors show monotonic convergence at a geometric rate. For scenarios where that key integral cannot be calculated explicitly, the authors propose an unbiased empirical approach. Furthermore, the authors show that this empirical approach enjoys almost sure convergence to a local minimizer of the alpha-divergence. Finally, the paper presents both simulated and real-data experiments to support the theoretical claims. Strengths: This is a very well-written paper, and it is clear despite its high level of mathematical rigor. The authors consider an important problem of alpha-divergence minimization, which has broad impacts in the ML community. The convergence analysis and unbiased empirical minimization algorithm appear novel and interesting. Weaknesses: The paper does not contain any major weaknesses that I could find, beyond limitations of the analysis. These limitations include the specific assumption of an exponential family variational distribution, as well as the stated assumptions H1-H4 and C0-C3. Below are some detailed comments: * L25 : The inclusive KL divergence should be $D(p||q)$. * L69 : Shouldn't the posterior be proportional to the joint $p(x,y)$ rather than the marginal $p(y)$? * L80 : "Assuming the argmax is uniquely defined at each iteration" it is unclear whether this is a reasonable assumption * Eq. (5) : Notation $(\partial A)^{-1}(\mu)$ undefined * L108 : Is it obvious that $\mathcal{R} = U/V$? * L196 : Notation $\circ$ undefined (is this a composition?) Perhaps the biggest weakness is Sec. 5.
This section falls a little flat as it concludes "we both obtain a biased and unbiased algorithm" without making those algorithms explicit or obvious (at least to me). Also, it is unclear how strong an assumption it is that the exponential family does not depend on $x$ as in $q(\cdot \mid x) = q$. In what reasonable scenario would the encoder be independent of the input? Technical Quality: 4 Clarity: 4 Questions for Authors: See "Weaknesses" section. Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors do not explicitly state limitations of the proposed methodology. Nevertheless, the authors do make the assumptions of their analysis explicit. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your meticulous review and kind feedback. Below, we provide detailed responses to each of your questions, aiming to further clarify our work and address any remaining uncertainties. > *Diverse remarks on notation* + L196 : Notation $\circ$ is undefined (is this a composition?) Yes, $\circ$ denotes a composition. We thank you for carefully reading our paper and will make sure to correct these mistakes. > L69 : Shouldn't the posterior be proportional to the joint $p(x, y)$ rather than the marginal $p(y)$? Indeed, we can take $p(y) = p_{\theta}(x, y)$. The dependency in $x$ is dropped for notational ease. > L80 : "Assuming the argmax is uniquely defined at each iteration" it is unclear whether this is a reasonable assumption. As you point out, this assumption can be challenging in the general case, but when the variational family is an exponential family as in (H1), the argmax in (1) is uniquely defined by $(1 - \gamma) \partial A(\eta) + \gamma \mathcal R(\eta)$ if and only if this quantity is in $F$. This is why the choice of $\gamma$ is so critical: if $\mathcal R(\eta)$ is not in $F$, we must take $\gamma$ small enough to ensure that the next iterate will be valid. Under (H1), the set $F$ is open and convex, which implies two things. First, by convexity, if $\gamma_0 \in (0, 1]$ is a valid choice, then all $\gamma \in (0, \gamma_0]$ are also valid choices. Second, by openness, it is always possible to find such a $\gamma_0$. > L108 : Is it obvious that $\mathcal R = U / V$? Note that $V$ is the normalizing constant that makes $q_{\eta}^{\alpha} p^{1-\alpha} / V$ a probability density function. By linearity of the integral, $U / V$ is the expectation of the measurable function $S$ under a distribution with density $q_{\eta}^{\alpha} p^{1-\alpha} / V$, i.e. $\check\varphi_{\eta}^{\alpha}$. This explains why $\mathcal R = U / V$. > [Sec. 5.] 
falls a little flat as it concludes "we both obtain a biased and unbiased algorithm" without making those algorithms explicit or obvious. Also, it is unclear how strong an assumption it is that the exponential family does not depend on $x$ as in $q = q(\cdot | x)$. The assumption $q_{\eta}(\cdot | x) = q_{\eta}$ in Section 5 was used only to illustrate a specific point: when $p_{\theta}$ is fixed and the decoder does not depend on $x$ (i.e., $p_{\theta} = p$ and we only update $\eta$), the exact gradient of the VR bound equals (up to a negative factor) $\mathcal R(\eta) - \partial A(\eta)$. This was meant to show that, in this particular scenario, an iteration of the algorithm discussed in Section 3 corresponds to a gradient ascent step on the VR bound. This assumption is not representative of the VAEs' training process, which uses the gradient estimators (17) and (18) with an Adam optimizer. Specifically, estimator (18) is used for VR and estimator (17) for UB. We will reorganize this section to clarify these points. \ Thank you once again for your detailed review and supportive comments. We hope that our responses have satisfactorily addressed your questions and concerns. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the thorough rebuttal. I think this is a strong piece of work and will keep my score. Unfortunately I can't increase my confidence score as I am unfamiliar with the relevant work in this area and I have not had a chance to thoroughly validate the analysis beyond a standard detailed read through. As a side note, I personally find $p(y) = p_\theta(x,y)$ notation to be very confusing / misleading. Especially when you also refer to the prior $p_\theta(y)$ (L57). You may want to consider revising this. At the very least you should explicitly state that x is assumed implicit for brevity. --- Rebuttal 2: Title: Response to Reviewer Comment Comment: Thank you again for your positive feedback. 
Regarding the notation, we understand your concern about potential confusion. We will emphasize the fact that the data $x$ is fixed throughout the optimization process, and will rename the prior $p_0$ to avoid any ambiguity and improve the accessibility of the paper.
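The identity $\mathcal R = U/V$ discussed in this thread can also be verified numerically. The sketch below is our own toy illustration (Gaussian choices for $p$ and $q_{\eta}$ with sufficient statistic $S(y)=y$; not code from the paper), using a simple Riemann sum for the integrals:

```python
import numpy as np

# Hedged numerical check (our toy example, not from the paper) of R = U / V:
# V normalises the geometric mixture q^alpha * p^(1-alpha), and U integrates
# the sufficient statistic S(y) = y against the same unnormalised density,
# so U / V is the mean of the normalised mixture.
alpha = 0.5
y = np.linspace(-10.0, 12.0, 200_001)
dy = y[1] - y[0]

def gauss(y, m):
    """Density of N(m, 1)."""
    return np.exp(-0.5 * (y - m) ** 2) / np.sqrt(2.0 * np.pi)

q = gauss(y, 0.0)  # variational density q_eta = N(0, 1)
p = gauss(y, 2.0)  # target p = N(2, 1)
g = q ** alpha * p ** (1.0 - alpha)

V = np.sum(g) * dy       # normalising constant
U = np.sum(y * g) * dy   # integral of S(y) = y against the mixture
R = U / V

# The geometric mixture of N(0,1) and N(2,1) is N(alpha*0 + (1-alpha)*2, 1),
# so R should be 1.0, and V has closed form exp(-alpha*(1-alpha)*2**2/2).
assert abs(R - 1.0) < 1e-6
assert abs(V - np.exp(-0.5)) < 1e-6
```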
Summary: The paper studies the convergence properties of an optimization algorithm used to minimize the alpha-divergence when the variational approximation belongs to the family of exponential distributions. The optimizer is a modification of an existing method and its performance is competitive with existing methods (without being stronger). However, the new approach enjoys theoretical guarantees. Strengths: This paper is in line with recent publications on proving the convergence of VI when minimizing the KL-divergence. Obtaining similar results for the alpha-divergence is a natural next step. The main contribution of the paper is theoretical, and I find the results convincing (although I did not read the appendix in detail). The theory motivates a revision to existing methods, which seems to be competitive. Minimizing the alpha-divergence is notoriously difficult and I appreciate the paper's discussion of algorithms to do so, and the demonstration on some examples. Weaknesses: The paper lacks clarity and is difficult to read. First, the notation is heavy. Two ways to address this might be to introduce a table of notation and to clearly delineate definitions as formal statements. I'm guessing this wasn't done because of space constraints, but this could be a good use of the additional page for accepted papers. In Section 5, I was puzzled by the assumption that $q_\eta(\cdot \mid {\bf x}) = q_\eta$, that is, the factor does not depend on ${\bf x}$. Does this mean that the encoder maps each image to the same latent representation? It seems like the VAE would then be useless, at least for the task of data compression. This also makes me question the results of Section 6. I'm guessing that while the VAE fails to do any meaningful compression of existing images, it may still generate convincing new images. I'd like to see such images (potentially in the appendix), as a supplement to Table 1, and a simple check that the model is trained in a useful manner.
Once the authors clarify this point, I can adjust my score. Technical Quality: 3 Clarity: 2 Questions for Authors: I'll use this for minor comments and clarification questions. - instead of (or in addition to) exclusive and inclusive KL, write KL(q||p) and KL(p||q) - Figure 1 and Figure 2 are hard to read. In Figure 1, indicate the optimum. Could the colors for the objective function be more distinct than the colors of the trajectories? - Figure 2: the font size should match the font size of the text. Could the box plots be replaced with trajectories and shaded intervals? - line 314: fix the reference to eq:vrbound - Define the IWAE and VAE baselines. Does this simply correspond to minimizing KL(q||p) and KL(p||q)? Is this using the variational family as prescribed in line 275? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your careful reading and constructive comments on our submission. Below, we respond to each of your questions, hoping to clarify any uncertainties. > The paper lacks clarity and is difficult to read. We are sorry to hear that despite our best efforts to make the paper and the notation as clear and readable as possible, you encountered some difficulty in reading our work. We acknowledge that the notation in the paper is dense and will do our best to improve the overall readability by following your advice. > In Section 5, I was puzzled by the assumption that $q_{\eta}(\cdot | x) = q_{\eta}$, that is, the factor does not depend on $x$. Does this mean that the encoder maps each image to the same latent representation? The assumption $q_{\eta}(\cdot | x) = q_{\eta}$ in Section 5 was used only to illustrate a specific point: when $p_{\theta}$ is fixed and the decoder does not depend on $x$ (i.e., $p_{\theta} = p$ and we only update $\eta$), the exact gradient of the VR bound equals (up to a negative factor) $\mathcal R(\eta) - \partial A(\eta)$. This was meant to show that, in this particular scenario, an iteration of the algorithm discussed in Section 3 corresponds to a gradient ascent step on the VR bound. This assumption is not representative of the VAEs' training process, which uses the gradient estimators (17) and (18) with an Adam optimizer. Specifically, estimator (18) is used for VR and estimator (17) for UB. We will reorganize this section to clarify these points. > Figure 2: the font size should match the font size of the text. Could the box plots be replaced with trajectories and shaded intervals? As you suggest, we originally intended for Figure 2 to show full trajectories with shaded intervals, but the result was cluttered and difficult to interpret due to the overlap of five lines and their respective intervals.
Additionally, the trajectories were quite noisy, although Polyak-Ruppert averaging could mitigate this specific issue. We will make further efforts to improve the clarity of the plot. > Define the IWAE and VAE baselines. Does this simply correspond to minimizing KL(q||p) and KL(p||q)? Yes, IWAE is obtained when $\alpha = 0$ and VAE corresponds to $\alpha = 1$. \ We extend our gratitude for your careful review and insightful feedback, and we hope our responses have effectively addressed your questions and concerns. We acknowledge the issues you noticed in the writing of our paper and will make sure to fix them. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I've read the authors' rebuttals and I thank them for addressing my comments. I'd like to ask for a clarification regarding section 5 and the special case $q_\eta(\cdot \mid {\bf x}) = q_\eta$. The authors write "we can exploit this link to derive a VAE training procedure for the unbiased algorithm (10)." After re-reading the paragraph, I still do not understand how this link is exploited in order to generate a training algorithm for the case where $q_\eta(\cdot \mid {\bf x}) \neq q_\eta$. Regarding the figure, I understand that plotting trajectories can sometimes hurt readability. I hope the authors will consider adjusting the font size. > Yes, IWAE is obtained when $\alpha = 0$ and VAE corresponds to $\alpha = 1$. Thank you for the clarification. --- Rebuttal 2: Title: Clarification on Section 5 Comment: We acknowledge that Section 5 lacks clarity and appreciate the opportunity to resolve this issue. In what follows, we expand on the reasoning behind this part of the paper. \ Consider an algorithm whose updates are of the form $\mu_{t+1} = \mu_t +\gamma_t \mathrm M_th(\mu_t)$, where $\mathrm M_t$ is a positive definite matrix and $h:F\to F$. Usually, $h$ is the gradient of some function $H$ that we want to maximize. 
In our case, it is equivalent to consider $\mathrm M_t = \mathrm{Id}$ and $h=\partial_{\eta} H$, or $\mathrm M_t=\mathrm F(\mu_t)$ and $h=\partial_{\mu} H$, where $\mathrm F(\mu)$ is the Fisher Information Matrix of the model at $\mu$. This arises directly from the chain rule, along with the identities $\mu = \partial A(\eta)$ and $\partial_{\eta}\mu=\mathrm F(\mu)$ recalled in Appendix A.1. Indeed, they imply $\partial_{\eta} H(\mu)=\mathrm F(\mu)\partial_{\mu} H(\mu)$. Since the gradients of interest are easier to compute w.r.t. $\eta$, we will choose the first option. Specifically, we have $\partial_{\eta}q_{\eta}(y)=\left(S(y) - \mu\right) q_{\eta}(y)$, which leads to $\partial_{\eta} V(\mu)=\alpha \left(U(\mu) - \mu V(\mu)\right)$. For the biased algorithm $(5)$, we set $h(\mu)=\frac{U(\mu)}{V(\mu)} - \mu$, which we recognize as the gradient of $H:\mu\mapsto\frac{1}{\alpha}\log V(\mu)$. Multiplying $H$ by the positive constant $\frac{\alpha}{1-\alpha}$ and writing it in integral form, we find that $\frac{\alpha}{1 - \alpha} H(\mu)=\frac{1}{1-\alpha}\log \mathbb E_{y\sim q_{\eta}}\left[\left(\frac{p(y)}{q_{\eta}(y)}\right)^{1-\alpha}\right]$ (remember that $\mu$ and $\eta$ are related by the one-to-one mapping $\partial A$). In the more general case where we have $p_{\theta}(\cdot, x)$ instead of $p$ and $q_{\eta}(\cdot|x)$ instead of $q_{\eta}$, this is the Variational Rényi (VR) bound: $\mathcal L_{\alpha}^{\mathrm R}(p_{\theta}, q_{\eta}, x)=\frac{1}{1-\alpha}\log \mathbb E_{y\sim q_{\eta}(\cdot | x)}\left[\left(\frac{p_{\theta}(x, y)}{q_{\eta}(y| x)}\right)^{1-\alpha}\right]$. In other words, the exact biased algorithm is a particular case of gradient ascent on the VR bound. Carrying out similar work for the unbiased algorithm, where $h(\mu_t)=U(\mu_t) - \mu_tV(\mu_t)$, we get $H(\mu)=\frac{1}{\alpha} V(\mu)=\frac{1}{\alpha} \mathbb E_{y\sim q_{\eta}}\left[\left(\frac{p(y)}{q_{\eta}(y)}\right)^{1-\alpha}\right]$.
Hence, the unbiased algorithm is a particular case of gradient ascent on a new bound given by $\mathcal L_{\alpha}^{\mathrm G}(p_{\theta}, q_{\eta}, x)=\frac{1}{1-\alpha} \mathbb E_{y\sim q_{\eta}(\cdot | x)}\left[\left(\frac{p_{\theta}(x, y)}{q_{\eta}(y| x)}\right)^{1-\alpha}\right]$. Using the REINFORCE gradient and backpropagation, we train VAEs to maximize these bounds. The respective methods are referred to as VR and UB in the paper. \ We hope this reformulation of the proof will help clarify any ambiguities. Should you have any further questions or require additional explanations, please let us know. Thank you again for your time and constructive comments. --- Rebuttal 3: Comment: Thank you for engaging. However, these additional details do not provide the clarification I'm looking for. Namely, why do we need to study the case where $q_\eta (\cdot \mid {\bf x}) = q_\eta$ in order to derive a learning algorithm? In the above derivation, I do not see where in the calculation of $\frac{\alpha}{1 - \alpha} H(\mu)$ we exploit the fact that $p_{\theta}(\cdot, x) = p$ and $q_\eta(\cdot \mid {\bf x}) = q_\eta$. And once we derive $H$ in this particular case, why can we backtrack to the more general case? I believe the writing should make it clear why this detour is necessary, how the simpler case is exploited, and why we can then go back to the more general case. Would it not be possible to directly work in the more general case? > Using the REINFORCE gradient and backpropagation, we train VAEs to maximize these bounds. The authors have not defined the REINFORCE gradient; could you define it?
In the traditional setting, we want to approximate the true posterior $p_{\theta}(\cdot|x)$ by learning the parameters of a single distribution $q_{\eta}(\cdot|x)$. Since the data $x$ remains fixed throughout the entire training process, we drop all dependencies on $x$: we suppose that $p_{\theta}(\cdot|x)$ is known up to a constant (through the function $p$), and we simply write $q_{\eta}$. This does not mean that the learnt parameter is independent of the data. Rather, we are optimizing a parametric distribution over the entire dataset, e.g., coefficients in Bayesian logistic regression. 2. In contrast, VAEs involve optimizing both an encoder $q_{\eta}(\cdot|x)$ and a decoder $p_{\theta}(x|\cdot)$ *simultaneously*. One of the main goals with VAEs is to learn a meaningful latent representation of the data, so each data point must be effectively mapped to the latent space, hence the particular conditioning on $x$. Unlike the traditional setup, where we learn a single distribution, VAEs require joint optimization of two networks. This complexity necessitates adjusting the approach. The derivation in our paper shows that the algorithms MAX and UNB perform gradient ascent on specific variational bounds, namely the VR bound and $\mathcal L_{\alpha}^{\mathrm{G}}$. The VR bound, for example, is defined as $\mathcal L_{\alpha}^{\mathrm R}(p_{\theta}, q_{\eta}, x)=\frac{1}{1-\alpha}\log \mathbb E_{y\sim q_{\eta}(\cdot|x)}\left[\left(\frac{p_{\theta}(x, y)}{q_{\eta}(y|x)}\right)^{1-\alpha}\right]$. If we are in setup 1, the only thing we can act on is the parameter $\eta$. Differentiating the previous formula w.r.t. $\eta$ shows that MAX is a form of gradient ascent on $\mathcal L_{\alpha}^{\mathrm R}$.
For VAEs, we need to optimize both the encoder and decoder networks, and we are not changing $\eta$ and $\theta$ but the weights of the networks, of which the parameters are functions (say $\eta=\eta(W)$ and $\theta=\theta(W)$, where $W$ denotes the weights of both networks). To handle this new task, we use the reparameterization trick for differentiable optimization and the REINFORCE gradient, i.e., we write $\partial_{W}f(x, y, W)=f(x, y, W)\cdot \partial_{W}(\log f(x, y, W))$, where $f = \left(\frac{p_{\theta(W)}(x, y)}{q_{\eta(W)}(y|x)}\right)^{1-\alpha}$ and $\partial_{W}$ denotes the derivative with respect to the networks' weights. If further details are needed, we can provide a more comprehensive derivation. What ties together the algorithms from Sections 3 and 4 and the VAE optimization process we propose is the fact that they minimize the alpha-divergence by maximizing the same variational bounds. The procedures that lead to maximizing the bounds, however, are very different. We hope this clarifies the fact that these two frameworks are different and thus need to be treated accordingly. Thank you for your consideration. --- Rebuttal Comment 4.1: Comment: Thank you for providing additional details. I'm well familiar with how training a VAE works and this is not my point of confusion. Moreover, while everything that the authors state in their response is correct (distinction between Bayesian inference and learning for VAEs), this still does not answer my question. I'll restate it one last time: why do you need to first derive the case where $q_\eta$ does not take in an input ${\bf x}$, i.e. you're not learning an encoder and doing amortized variational inference, in order to derive the training procedure for the VAE? The sentence that I find confusing is > We can exploit this link to derive a VAE training procedure for the unbiased algorithm. I still don't see how the link is exploited.
Perhaps you can clarify the following: when you write "$q_\eta (\cdot \mid {\bf x}) = q_\eta$" and "$q_\eta$ does not depend on ${\bf x}$", what setting are you examining here? As far as I can tell, this is not what you call "the traditional setting" where you do posterior learning, since you're still learning $\theta$ (or the weights $\theta$ depends on). It is simply an encoder which doesn't take in ${\bf x}$ or some specific input $x_n$. As is, I still believe this is a valuable contribution to the NeurIPS community and I'll maintain my score, which is a weak accept. --- Rebuttal 5: Comment: We appreciate your patience and apologize for misunderstanding your point of confusion, which is absolutely valid. This specific sentence in the paper is indeed wrong, due to a poor formulation of the underlying idea. \ Throughout the entire paper, the function $q_{\eta}$ always depends on $x$ and can be written $q_{\eta}(\cdot|x)$. For simplicity and to reduce notation clutter, we omit this dependency until Section 5. In Section 5, the function denoted $q_{\eta}(\cdot|x)$ is not the density of a distribution in an exponential family parameterized by $\eta$. Instead, it could be expressed as $\tilde q_{f_{\eta}(x)}(\cdot)$, where $f_{\eta}$ is a neural network. When we wrote "the parameter $\theta$ is not updated, $q_{\eta}(\cdot|x)$ belongs to an exponential family and $q_{\eta}(\cdot|x)=q_{\eta}$ and does not depend on $x$", we meant that we were temporarily reverting to the "traditional VI" setting. The goal there was to explain why MAX performs gradient ascent on the VR bound when we are in the traditional setting and $q_{\eta}(\cdot|x)$ is the density of a distribution in an exponential family. \ The exact reasoning that led to the VAE training algorithms is the one described in the comments above.
To summarize, we start by noticing that in the setting of Sections 3 and 4, the algorithms MAX and UNB are gradient ascent procedures, respectively on the VR bound and $\mathcal L_{\alpha}^{\mathrm G}$. Training VAEs to maximize these bounds leads to the methods referred to as VR and UB. \ We hope this explanation will satisfactorily address your concerns about this part of the paper. We will delete the problematic sentence and reorganize section 5 so that it better reflects the train of thought behind the VAE training methods we use. Thank you for your thorough review and for helping us ensure the clarity of our paper. --- Rebuttal Comment 5.1: Comment: Thank you for getting back to me. My intention is not to be stubborn, and I'm quite keen to better understand your work. The content of Section 5 is still not entirely clear to me, but at this point, I believe I need to have another close look at the paper in light of the comments provided by the author, and I'll discuss the matter with the other reviewers. Ideally, I would like to read a revised Section 5. Since this section is fairly short, I invite the authors to rewrite Section 5 to include their clarifying comments. That said, I recognize this is an unusual request and even if the authors did not do this, I'd still consider changing my score after re-reading the paper and our exchange during the rebuttal period. --- Rebuttal 6: Title: Section 5 revamp proposition Comment: Thank you for your interest in our work and for giving us the opportunity to improve it. Below is a draft of the revised Section 5 you asked for. We hope that this revision will clarify the points you raised. --- In this section, we explain how to transpose the algorithms presented in Section 4.1 to the training of Variational Auto-Encoders (VAEs) [22]. We start by showing that the exact versions of the biased and unbiased algorithms correspond to gradient ascent procedures on two different variational bounds. 
Let us first address the case of the biased algorithm. Recall that it reads $\displaystyle{\mu_{t+1}=\mu_t+\gamma_t\left[\frac{U(\mu_t)}{V(\mu_t)} - \mu_t\right]}$. For $\eta\in E$ and $\mu=\partial A(\eta)$, we define the Variational Rényi (VR) bound [26] by $$\mathcal L_{\alpha}^{\mathrm R}(\mu)=\frac{1}{1-\alpha}\log\mathbb E_{y\sim q_{\eta}}\left[\left(\frac{p(y)}{q_{\eta}(y)}\right)^{1-\alpha}\right]=\frac{1}{1-\alpha}\log V(\mu).$$ Noticing that $\partial_{\eta}V(\mu)=\alpha \left(U(\mu) - \mu V(\mu)\right)$, we can thus express an iteration of the biased algorithm as $$\mu_{t+1}=\mu_t+\frac{\gamma_t(1-\alpha)}{\alpha}\partial_{\eta}\mathcal L_{\alpha}^{\mathrm R}(\mu_t)=\mu_t+\frac{\gamma_t(1-\alpha)}{\alpha}\mathrm{F}(\mu_t)\partial_{\mu}\mathcal L_{\alpha}^{\mathrm R}(\mu_t).$$ Under (H1), the Fisher Information Matrix $\mathrm{F}(\mu_t)$ is positive definite, hence the biased algorithm is a gradient ascent procedure on the VR bound. The unbiased algorithm, which reads $\mu_{t+1}=\mu_t+\gamma_t\left[U(\mu_t) - \mu_t V(\mu_t)\right]$, similarly amounts to performing gradient ascent on the variational bound $\mathcal L_{\alpha}^{\mathrm G}$ defined by $$\mathcal L_{\alpha}^{\mathrm G}(\mu)=\frac{1}{1-\alpha} \mathbb E_{y\sim q_{\eta}}\left[\left(\frac{p(y)}{q_{\eta}(y)}\right)^{1-\alpha}\right]=\frac{1}{1-\alpha} V(\mu).$$ In the context of VAEs [22], we learn both a probabilistic encoder $y\mapsto \tilde q_{f(x;\eta)}(y)$ and a probabilistic decoder $x\mapsto\tilde p_{g(y;\theta)}(x)$, where $\tilde q_{\eta}$ and $\tilde p_{\theta}$ are densities from families parameterized respectively by $\eta$ and $\theta$, while $x\mapsto f(x;\eta)$ and $y\mapsto g(y;\theta)$ are neural networks. For simplicity and to align with the usual notation for VAEs, we will denote $ \tilde q_{f(x;\eta)}(\cdot)=q_{\eta}(\cdot | x)$ and $\tilde p_{g(y;\theta)}(\cdot)=p_{\theta}(\cdot | y)$. We will also use the shorthand $\phi=(\eta, \theta)$.
Since the biased and unbiased algorithms studied in the previous sections minimize the alpha-divergence by maximizing the variational bounds $\mathcal L_{\alpha}^{\mathrm R}$ and $\mathcal L_{\alpha}^{\mathrm G}$, we propose to train VAEs to maximize those same bounds. In this new setting, they become $$\mathcal L_{\alpha}^{\mathrm R}(\eta, \theta, x)=\frac{1}{1-\alpha}\log\mathbb E_{y\sim q_{\eta}(\cdot|x)}\left[\left(\frac{p_{\theta}(x, y)}{q_{\eta}(y | x)}\right)^{1-\alpha}\right],$$ $$\mathcal L_{\alpha}^{\mathrm G}(\eta, \theta, x)=\frac{1}{1-\alpha}\mathbb E_{y\sim q_{\eta}(\cdot|x)}\left[\left(\frac{p_{\theta}(x, y)}{q_{\eta}(y | x)}\right)^{1-\alpha}\right].$$ To update both the encoder and decoder simultaneously, we differentiate them with respect to $\phi$ using the reparameterization trick [22]. If $z\sim r(\cdot)$ and there exists a mapping $v$ such that $v(z; \eta, x)$ has the same distribution as $y$ when $y\sim q_{\eta}(\cdot | x)$, then we have $$\partial_{\phi}\mathcal L_{\alpha}^{\mathrm R}(\eta, \theta, x)=\mathbb E_{z\sim r(\cdot)}\left[\overline w_{\alpha}(z, \eta, x)\partial_{\phi}\left(\log\frac{p_{\theta}(x, v(z; \eta, x))}{q_{\eta}(v(z; \eta, x) | x)}\right)\right],$$ $$\partial_{\phi}\mathcal L_{\alpha}^{\mathrm G}(\eta, \theta, x)=\mathbb E_{z\sim r(\cdot)}\left[w_{\alpha}(z, \eta, x)\partial_{\phi}\left(\log\frac{p_{\theta}(x, v(z; \eta, x))}{q_{\eta}(v(z; \eta, x) | x)}\right)\right],$$ where $\displaystyle{w_{\alpha}(z, \eta, x)=\left(\frac{p_{\theta}(x, v(z; \eta, x))}{q_{\eta}(v(z; \eta, x)|x)}\right)^{1-\alpha}}$ and $\displaystyle{\overline w_{\alpha}(z, \eta, x)=\frac{w_{\alpha}(z, \eta, x)}{\int_{\mathsf Y} {w}_{\alpha}(z', \eta, x)r(z')\nu(\mathrm d z')}}$. To train VAEs, we simply plug batch estimates of these gradients into an optimizer like Adam. Notably, $\partial_{\phi}\mathcal L_{\alpha}^{\mathrm G}$ can be estimated unbiasedly, while estimators of $\partial_{\phi}\mathcal L_{\alpha}^{\mathrm R}$ are subject to bias.
We will study the practical implications of this fact in Section 6.
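In the Gaussian special case, the quantities appearing in the revised section above have closed forms, which allows a quick numerical sanity check of the two exact updates. The following is our own hedged sketch (unit-variance Gaussian family and target, hypothetical constants; not the paper's code):

```python
import numpy as np

# Toy check (ours, not the paper's code): for q_mu = N(mu, 1) with sufficient
# statistic S(y) = y and target p = N(m, 1), the geometric mixture is Gaussian,
# giving the closed forms
#   V(mu) = exp(-alpha * (1 - alpha) * (mu - m)**2 / 2)   (normalising constant)
#   R(mu) = U(mu) / V(mu) = alpha * mu + (1 - alpha) * m  (mixture mean)
alpha, m, gamma = 0.5, 1.5, 0.8  # hypothetical values

def V(mu):
    return np.exp(-alpha * (1.0 - alpha) * (mu - m) ** 2 / 2.0)

def R(mu):
    return alpha * mu + (1.0 - alpha) * m

def biased_step(mu):    # mu + gamma * (U/V - mu)
    return mu + gamma * (R(mu) - mu)

def unbiased_step(mu):  # mu + gamma * (U - mu * V) = mu + gamma * V * (R - mu)
    return mu + gamma * V(mu) * (R(mu) - mu)

mu_b = mu_u = -3.0
for _ in range(200):
    mu_b, mu_u = biased_step(mu_b), unbiased_step(mu_u)

# Both iterations converge to the minimiser mu = m; the unbiased one is
# damped by the factor V(mu) <= 1 and therefore starts more slowly.
assert abs(mu_b - m) < 1e-8 and abs(mu_u - m) < 1e-8
```

The rewriting $U - \mu V = V \cdot (R - \mu)$ used in `unbiased_step` follows from $R = U/V$ and makes the damping role of $V$ explicit.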
Summary: This paper presents an asymptotic analysis of both exact and empirical alpha-divergence minimization algorithms, in particular as the number of iterations goes to infinity. The paper mainly focuses on the exponential family setting, and provides a geometric convergence analysis of the exact minimization algorithm using fixed-point theory. To bypass the difficulty of studying the empirical minimization algorithms caused by the bias of previous algorithms, a novel unbiased algorithm is proposed and analyzed, including almost sure convergence to a local minimizer and a law of the iterated logarithm. Finally, the paper experiments on toy Gaussians and on variational auto-encoders to show the effectiveness of the proposed algorithm. Strengths: - The paper is clearly organized, including both exact and empirical analysis. - The paper evaluates the asymptotic properties as the number of iterations goes to infinity, which are seldom discussed in previous works. - Extensive discussions on the convergence of empirical alpha-divergence minimization algorithms are provided in the paper. Weaknesses: - The novelty of the paper appears to be limited. The basis of the theoretical analysis is mostly covered by [1], and the main theoretical analysis focuses only on exponential families. - Some assumptions in the theoretical analysis could be further evaluated. - For Assumption (H3) in section 3, it remains unclear when the mapping $\mathcal{M}_{\gamma}$ is a contraction. - It would be beneficial to further verify this for $q_{\eta}$ and $p$ in the case of the exponential family. - For Assumption (C2) in section 4, the paper states that “$\alpha \in (0, \frac{1}{2})$ supposes specific behaviors on the relative tails”. It would be illuminating to verify this assumption in experiments, especially in the VAE experiments, where the choice of $\alpha$ can make a difference to the empirical results. - The experiments of this paper could be extended.
It would be nice to add experiments like Bayesian logistic regression as in [1]. Furthermore, the empirical results lack significance. In the toy Gaussian experiments, the proposed UNB method achieves more accurate but slower convergence compared to the biased MAX method, and is surpassed by the NAT method with a proper step size. In the VAE experiments, the UB approach is not significantly better than the VR approach in some cases. - The relation between the unbiasedness and the computational cost of the proposed algorithm could be further justified. The bias issue is not analyzed theoretically because “biased gradient estimators hinder any theoretical study”, and there seems to be a trade-off between unbiasedness and computational cost in the comparison of the MAX and UNB methods in the toy experiments. [1] K. Daudel, R. Douc, and F. Roueff. Monotonic Alpha-Divergence Minimisation for Variational Inference. Journal of Machine Learning Research, 24(62):1–76, 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: - The paper asserts that “biased gradient estimators hinder any theoretical study”. Is it possible to compare the biased and unbiased algorithms on a relatively simple example like Gaussian distributions or a two-component Gaussian mixture? - If assumption (H3) is not satisfied, do we still have geometric convergence? If not, what would be the convergence rate? - In the VAE experiments, different values of $\alpha$ result in different empirical results for both the VR and UB methods. Moreover, UB achieved relatively worse results for $\alpha \in (\frac{1}{2}, 1)$, which is the interval where assumption C2 could be satisfied. Could the authors provide a heuristic understanding of this phenomenon? - In line 208 of section 4, the paper states “on top of that, we lose the monotonicity property”. The monotonicity property is essential for the geometric convergence in the exact alpha-divergence minimization algorithm analysis.
Does it also contribute to the empirical algorithm analysis? Is it beneficial to solving the instability problems for large step sizes? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - The paper states the limitation of only considering the exponential family. Still, there are additional limitations when focusing on this setting. Please refer to the second point of weaknesses. - Although the paper claims to focus mostly on asymptotic theory, the quality of experiments could still improve to better support the theoretical analysis. Please refer to the third point of weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough and constructive review of our paper. We greatly appreciate the time and effort you have dedicated to providing feedback. We are committed to addressing the concerns raised and improving our work. Below, we respond to each of your points in detail. > The basis of the theoretical analysis is mostly covered by [1]. While our work indeed builds on the algorithm proposed in [1], which had already established the monotonicity property and explicit update form for the exponential family, our contributions lie in proving the algorithm’s convergence at an asymptotically geometric rate and towards a minimizer of the alpha-divergence. We further consider this algorithm in the empirical setting and propose an unbiased version, for which we provide two convergence theorems. > For Assumption (C2), the paper states that “$\alpha \in (0, ½)$ supposes specific behaviors on the relative tails”. It would be illuminating to verify this assumption in experiments. We have found that convergence can still occur even if assumption (C2) is not met. For example, when $p$ is the density of a Cauchy distribution and the variational family is Gaussian, we still observe convergence (Appendix C). So far, we have not encountered a simple case where convergence entirely fails. This suggests that assumption (C2) could be relaxed or even replaced in future studies. > It would be nice to add experiments like Bayesian logistic regression. Due to space constraints, we had to make choices regarding which experiments to present in the paper. We opted for the toy Gaussian examples as we found them to be more insightful than Bayesian logistic regression or other classical applications of variational inference algorithms. > The empirical results lack significance. 
We believe that the main takeaway from the toy Gaussian experiments is that the MAX approach is highly robust against aggressive hyperparameter tuning strategies, and generally converges faster than other methods with theoretical guarantees. Although the NAT method performs well when it converges, it has shown occasional instability and has a higher per-iteration cost. > In VAE experiments, the UB approach is not significantly better than the VR approach in some cases. It is true that the UB approach does not always show a significant advantage over the VR approach in our VAE experiments. However, we believe that this study offers useful insights. As you pointed out, the choice of $\alpha$ makes a difference in the empirical results, which is quite exciting as the idea behind the use of alpha-divergences in variational inference is to overcome some limitations of the traditionally used KL divergence which can be problematic on certain datasets. We observe that a proper tuning of $\alpha$ can greatly improve the model’s performance, though the optimal value seems to depend on both the gradient estimator and the dataset. A thorough analysis of the phenomena at play is beyond the scope of this paper, but is certainly a compelling question for future research. > The bias issue is not analyzed theoretically because “biased gradient estimators hinder any theoretical study”. Analyzing the MAX approach theoretically presents two main difficulties. First, if it converges, it converges to a minimizer of an approximation of the alpha-divergence, rather than the true alpha-divergence itself. Second, this approximation is hard to characterize, since it amounts to finding and analyzing a function whose gradient is $\widehat U / \widehat V$. > Is it possible to compare the biased and unbiased algorithms on a relatively simple example like Gaussian distributions or a two-component mixture of Gaussians? 
In the experiments section, both the biased and unbiased algorithms are compared in the case of a Gaussian variational family with Gaussian, mixture-of-Gaussians, and Cauchy targets (see also Appendix C). The biased algorithm is referred to as MAX, while the unbiased algorithm is UNB. > If assumption (H3) is not satisfied, do we still have geometric convergence? If not, what would be the convergence rate? We need assumption (H3) to obtain the geometric rate of convergence. Lemma 2 (found at the beginning of Appendix A.1) establishes that, as long as the sequence $(\eta_t)$ remains bounded, at least one fixed point of the mapping $\mathcal M_{\gamma}$ will be a limit point of this sequence. However, evaluating whether this limit point is the definitive limit and determining the convergence rate without (H3) is beyond the scope of this study. > UB achieved relatively worse results for $\alpha \in (½, 1)$, which is the interval where assumption C2 could be satisfied. Could the authors provide a heuristic understanding of this phenomenon? We believe that the phenomenon at play is quite complex and depends mostly on the dataset, since opposite behaviors are observed between CIFAR10 and CelebA. Moreover, as we already discussed, assumption (C2) might be subject to relaxation, thus it may not fully explain the deeper reasons behind the observed convergence behavior. > Does [the monotonicity property] contribute to the empirical algorithm analysis? Is it beneficial to solving the instability problems for large step sizes? Empirically, we cannot guarantee the monotonicity property unless an accept-reject step is integrated into the procedure. The main idea behind the analysis of the sample-based algorithm is to use the mean parameterization of the exponential family to write it as a Robbins-Monro procedure, hence we do not leverage the monotonicity property seen in the exact setting. 
However, the fact that the exact versions of MAX and UNB enjoy this property may explain why these two approaches appear to be quite stable and follow direct paths to minimizers. This behavior is illustrated in Figure 1, and similarly observed in the mixture-of-Gaussians and Cauchy cases. We can include additional figures in Appendix C to further exemplify this matter.
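The bias of ratio estimators such as $\widehat U / \widehat V$, and its decay as the sample size $K$ grows, can be seen in a quick Monte Carlo toy. This is illustrative only: the choices $U \equiv 1$ and $V = e^Z$ with $Z \sim \mathcal N(0,1)$ are ours, not the paper's integrands.

```python
import numpy as np

# Toy check that E[U_hat / V_hat] != E[U] / E[V] at finite sample size K,
# with the bias shrinking as K grows. Here U = 1 and V = exp(Z), Z ~ N(0, 1),
# so the true ratio is E[U] / E[V] = exp(-1/2).
rng = np.random.default_rng(0)
true_ratio = np.exp(-0.5)

def ratio_bias(K, trials=20000):
    z = rng.standard_normal((trials, K))
    v_hat = np.exp(z).mean(axis=1)   # V_hat from K samples; U_hat = 1 exactly
    return (1.0 / v_hat).mean() - true_ratio

bias_small_K = ratio_bias(2)
bias_large_K = ratio_bias(200)
print(bias_small_K, bias_large_K)    # bias shrinks as K grows
```

By Jensen's inequality the bias is positive for every finite $K$, which is why the MAX estimator can only be asymptotically unbiased as $K \to +\infty$.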
Rebuttal 1: Rebuttal: We deeply thank the reviewers for their careful and detailed reviews of our manuscript. We are grateful for their constructive feedback and for offering us an opportunity to improve our work. Below, we provide responses to some points that have been raised by multiple reviewers. We hope that this discussion will satisfactorily address any concerns the reviewers may have. > (Reviewer pDqJ) The main theoretical analysis focuses only on exponential families. > (Reviewer FQDG) It only considers the variational family to be exponential models. We chose to focus on exponential families as they offer a convenient setting and enjoy interesting theoretical properties. First, they ensure the existence of a unique solution to the argmax problem in (3), given that $\gamma$ is chosen appropriately (this is not a restrictive assumption). They also allow us to state (H3) in an understandable way, rather than through obscure integral conditions. Finally, we believe that the principles established in this restricted setting offer insights and intuition about the problem at hand, potentially serving as a foundation for future analyses in more general contexts. > (Reviewer pDqJ) For Assumption (H3) in section 3, it remains unclear when the mapping $\mathcal M_{\gamma}$ is a contraction. It would be beneficial to further verify this for $q_{\eta}$ and $p$ in the case of the exponential family. > (Reviewer FQDG) The convergence is proved for the proposed algorithm in the asymptotic sense. In Proposition 1, we explore the contraction property of the mapping $\mathcal M_{\gamma}$. The analysis reveals that, under assumption (H3), this mapping is a contraction in a neighborhood of the parameter $\eta_*$. It can be noticed that when $\mathcal Q$ is an exponential family as in (H1) and the target $p$ belongs to $\mathcal Q$, i.e., $p=q_{\eta_*}$, assumption (H3) is always verified. Indeed, we can show that in this setting, the only fixed point (cf. 
Lemma 1) is the target parameter $\eta_*$, and that we have $\rho_* = \alpha < 1$. Determining the size of the neighborhood in which $\mathcal M_{\gamma}$ is a contraction is challenging in the general case. However, in the simpler case where $Q$ is a real Gaussian family with fixed variance and $p$ is Gaussian (not necessarily with the same variance), this condition is verified for any mean in $\mathbb{R}$. Again, we thank all reviewers for their comprehensive and constructive feedback. We hope that our responses have adequately addressed your questions and concerns, and that our work has been significantly improved as a result.
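To illustrate the contraction argument above (a map with local contraction rate $\rho_* = \alpha < 1$ converges geometrically to its fixed point), here is a toy numeric check. The linear map below is a stand-in with contraction rate $\alpha$, not the actual $\mathcal M_{\gamma}$.

```python
# Toy illustration of assumption (H3): iterating a map that is a contraction
# with rate rho = alpha drives the error to zero geometrically.
# This linear map is NOT the paper's M_gamma, only a stand-in with the
# same local contraction rate rho* = alpha.
alpha = 0.5
eta_star = 2.0

def M(eta):
    return eta_star + alpha * (eta - eta_star)

eta, errors = -5.0, []
for _ in range(10):
    eta = M(eta)
    errors.append(abs(eta - eta_star))

ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(ratios)  # every ratio equals alpha: geometric convergence
```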
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper explores alpha-divergence Variational Inference (VI) from a theoretical perspective, and in particular the monotonic alpha-divergence minimization algorithm. It includes an asymptotic analysis of the algorithm applicable to exponential families, establishing conditions that ensure convergence to a local minimizer at a geometric rate. The theoretical examination of the sample-based counterpart of the algorithm leads to a modified unbiased version, for which both almost sure convergence and a law of the iterated logarithm are provided. Experimental validation using synthetic and real-world datasets illustrates and supports the theoretical findings of the paper. Strengths: This is a relevant analysis of alpha-divergence VI algorithms, delivering an in-depth study of their behavior. It is well-written, with technically intricate proofs that are articulated with clarity. Overall, it is a very nice contribution to the field. Weaknesses: I mainly have minor comments: - Some assumptions would benefit from more detailed discussion. Specifically, I am unsure why conditions (H4) and (C0') are considered realistic and sensible. - The impact of the number of samples K used in the sample-based algorithm is underexplored: what influence does it have? Additionally, is it important to maintain the same number of samples as the number of iterations increases? Technical Quality: 4 Clarity: 4 Questions for Authors: - Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and encouraging feedback on our submission. We provide detailed responses to each of your questions below, hoping to clarify any uncertainties you may have had while reading our paper. > I am unsure why conditions (H4) and (C0') are considered realistic and sensible. The rationale behind assumption (H4) is that as we approach the optimal solution, we can afford to be progressively less conservative in our choice of $\gamma$, since $\mathcal R(\eta_t)$ should always remain sufficiently close to the set $F$, or even belong to that set. More specifically, one can show that (H4) holds if the set $\mathsf K_{\mathcal L_{\alpha}(\eta_0)}$ is compact, meaning that the set of $\eta$'s that verify $\mathcal L_{\alpha}(\eta) \leq \mathcal L_{\alpha}(\eta_0)$ is bounded. Consequently, each decrease in the alpha-divergence allows for an increase in the highest permissible value of $\gamma$. We are open to including a concise discussion on this matter in the Appendix. Regarding (C0’), it is essentially a stricter version of (C0) and involves a design choice left to the practitioner's discretion. It's important to note that (C0) and (H4) are incompatible, which partly explains why we do not get a geometric convergence rate in empirical scenarios. > What influence does the number of samples $K$ have in the sample-based algorithm? Increasing the number of samples $K$ reduces the bias of the estimator used in the first sample-based algorithm (MAX). Notably, the estimator $\widehat U / \widehat V$ becomes asymptotically unbiased, meaning that as $K \to +\infty$, it converges to $\mathcal R$. This is illustrated in the toy Gaussian experiment, where a small sample size ($K = 10$) causes the MAX algorithm to converge to suboptimal parameters with higher alpha-divergence compared to unbiased algorithms. To highlight the impact of sample size on bias, we can include additional plots in Appendix C. 
Aside from this, a smaller sample size accelerates per-iteration computation but results in noisier estimates of the intractable integrals. This creates a tradeoff, a detailed exploration of which is beyond this paper's scope. > Is it important to maintain the same number of samples as the number of iterations increases? Theoretically, maintaining the same number of samples across all iterations is useful to satisfy condition (iv) in the proof of Theorem 3, as it guarantees the continuity of the function $\mu \mapsto \mathrm{Cov}(G(\mu))$. However, in practice, one might opt to use fewer samples in the early iterations and increase the number later on to improve the accuracy of the final outcome. Thank you again for your attentive review and encouraging feedback on our submission; we hope our responses have helped resolve any ambiguities.
BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens
Accept (poster)
Summary: This paper presents BiScope, a new algorithm for AI-generated text detection that leverages logits from a text expansion task to detect AI-generated text. The algorithm proceeds in three steps: 1. Given a candidate input text T, break it into two segments (seg1, seg2, such that seg1 + seg2 = T). A "text completion" prompt is created where given <summary(T), seg1>, the goal is to predict seg2. The rationale for this is to give the detection algorithm full context awareness of the candidate input text. 2. These completion prompts are passed through several open-source LLMs with seg2 tokens present in a teacher forcing manner. The logits are used as features for subsequent downstream classification. 3. The features are used to train a binary classifier to detect AI-generated text. The authors perform several experiments and ablations in their paper, and find that BiScope is effective and efficient compared to many baselines (including GPTZero). Strengths: 1. The paper presents a novel algorithm for AI-generated text detection, a research area growing in importance due to the development of powerful LLMs. Utilizing bidirectional logit information to discriminate between LLM-generated and human-written text is an interesting idea, and the authors have done a good job figuring out the technical details for an effective implementation. 2. The paper performs thorough experiments on five AI-generated text detection datasets, five popular commercial LLMs, and in both IID / OOD settings. The paper compares their method to both academic and commercial AI-generated text detectors, including GPTZero, a popular commercial software for AI-generated text detection. Additionally, the paper showcases the robustness of BiScope to paraphrasing attacks. 3. The paper does some additional ablation studies on the choice of open source LLMs for step 2 above, an efficiency analysis, and the choice of classifier features. Weaknesses: 1. 
The out of distribution (OOD) experiments in the paper are quite limited, which put the baselines at an unfair disadvantage. The authors experiment with two OOD settings (L280-290), making either the text domain OOD or the generative LLM, **but not both**. Hence, I don't think the OOD settings are fully OOD, I think this puts BiScope at an unfair advantage over all baselines, since many of them have not received any in-distribution training. 2. The paper would be stronger with more baselines, ideally some watermarking methods too. I am curious to know how BiScope compares to newer methods like Binoculars [5], as well as text watermarking algorithms like KGW [1], EXP-Edit [2], SemStamp [3], or [4]? 3. A few more ablation experiments would make the paper stronger. These were some additional questions I had: a) how much does the number of segments (`n`) in Section 3.5 matter? b) What is the performance like if a classifier is trained just on the feed-forward logits of the same open-source LLMs (using the same training data), without any completion prompts? [1] - https://arxiv.org/abs/2301.10226 [2] - https://arxiv.org/abs/2307.15593 [3] - https://arxiv.org/abs/2310.03991 [4] - https://arxiv.org/abs/2305.08883 [5] - https://arxiv.org/pdf/2401.12070 --- **After rebuttal**: Thank you for the response. I've raised my score to 5 due to extra baselines like Ghostbusters and Binoculars. Technical Quality: 2 Clarity: 4 Questions for Authors: 1. AI-generated text detectors typically need to operate in a low FPR setting, to minimize the risks of labeling innocent text as AI-generated. Given this, what are the Table 1 true positive rates at a low FPR of say 0-1% (or equivalently, an AUC-ROC curve in that range)? 
TPR at low FPR ranges (0-1%) is a standard metric for evaluating AI-generated text detectors which has been used in many previous papers and blogs: https://arxiv.org/abs/2303.13408, https://openreview.net/pdf?id=DEJIDCmWOz, https://arxiv.org/pdf/2401.12070, https://foundation.mozilla.org/en/blog/who-wrote-that-evaluating-tools-to-detect-ai-generated-text/ 2. How in-distribution is the RADAR training data compared to the evaluation data used? Since RADAR was not retrained, I'm worried the baseline maybe put at an unfair disadvantage compared to BiScope. 3. In Table 1, I'm a bit surprised that there's almost no drop in detection accuracy before/after paraphrasing for most baselines in the non-Essay settings (line 300-301). What is the reason for this? This seems in contradiction with findings in multiple prior works. 4. In Table 2, there's a strong length bias in the two domains. Does that make the classification task easier than it would have been in a length-controlled setting (take first K tokens of AI-generated / human-generated text and classify them)? Presentation / nits: * Figure 4 is hard to read, could it be converted to a table? * Move the baselines description (L112-117) to a dedicated subsection / paragraph in Section 4.1 * In Section 3.4, add the list of open-source LLMs used in main body Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: Yes, adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
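For concreteness, Step 1's segmentation into (seg1, seg2) pairs can be sketched as below. This is an illustrative reconstruction assuming split points at every 10% of the text length, not the authors' implementation.

```python
def split_candidates(text):
    # Candidate (seg1, seg2) splits at every 10% of the text length --
    # an illustrative reconstruction of Step 1, not the authors' code.
    n = len(text)
    return [(text[:n * k // 10], text[n * k // 10:]) for k in range(1, 10)]

pairs = split_candidates("abcdefghij" * 10)
print([len(seg1) for seg1, _ in pairs])  # prefix lengths at 10%, 20%, ..., 90%
```

Each pair would then seed one "text completion" prompt <summary(T), seg1> whose target is seg2.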
Rebuttal 1: Rebuttal: Thanks for your valuable review and suggestions. Here are our point-by-point responses: **W1**: Regarding the OOD evaluation, previous studies [1, 5, 7, 8] shifted either the data (cross-dataset) or the generative models (cross-model). We strictly follow their settings in our OOD evaluation. Our comparison with baselines is fair because we retrained all the baselines (except RADAR which is not open-source) and BiScope using the same datasets. The testing was also performed on the same datasets. Please let us know if we misunderstand your question. **W2**: Thank you for your suggestions. We have included three more baselines in the Table 6 (in the submitted PDF file), including Binoculars [5], GhostBuster [1], and OpenAI Detector [6]. The results illustrate that BiScope outperforms all three new baselines in most cases across both the normal and paraphrased datasets. On the other hand, watermarking techniques allow the detection of generated text by limiting the generation within some special vocabulary. They require modifications to the decoding strategy of the generative model. This does not align with the application scenario of BiScope and the baselines as they do not require any modification to the LLM’s generation process and can work in a black-box setting. Although it is difficult to compare the two, we consider them complementary. We will cite [10-13] and include the above discussion. **W3**: Thank you for your suggestion. We have implemented these two ablation experiments, and the results are shown in Table 7 and Table 8 in the uploaded PDF file, respectively. Table 7 presents the ablation results with different segmentation strategies in the multi-point splitting of BiScope. We tested three strategies: splitting at every 50% text length, every 25% text length, and every 10% text length (as used in our paper). The results indicate that a more fine-grained splitting interval generally improves BiScope's performance. 
However, in a small number of cases, a smaller splitting interval may degrade performance. We chose to use 10% as it achieves the highest detection scores in most cases while keeping degradation low in corner cases. Table 8 presents the comparison results when using and not using the completion prompt in BiScope. The results show that in 25 of 45 cases, using the completion prompt performs better. Additionally, the completion prompt is more compatible with the summary procedure. Thus, we chose to use the completion prompt in BiScope. **Q1**: Thanks for the suggestion. We further present the AUC-ROC curve of our method, shown in Figure 9 (in the submitted PDF file). The results show that our BiScope reaches over 0.8 detection TPR on average when the FPR is only 0.01, outperforming all the baselines on all five generative models’ data. **Q2**: RADAR is officially trained on the Openwebtext dataset, which contains over 8 million human-written texts and over 8 million AI-generated texts crafted by RADAR’s authors. These texts are very similar to the data in the Yelp dataset. Since RADAR’s authors do not provide their training code, we did not further fine-tune RADAR on our datasets, but when calculating the F1 score, we did search for the best threshold for RADAR’s output on our datasets. Considering the substantial amount of RADAR's pre-training data and our in-distribution threshold searching, the comparison in our paper does not put RADAR in an unfair position. **Q3**: In Table 1, we present the results of BiScope on the paraphrased datasets under both the in-distribution (seen) setting and OOD (unseen) setting. Under the in-distribution setting, BiScope and all the baselines are trained and tested on the paraphrased dataset, resulting in a very small performance drop for BiScope (as mentioned in line 300-301). 
However, under the OOD setting, BiScope and all the baselines are trained on the normal dataset and tested on the paraphrased dataset, as shown in the right-most column in Table 1. We observe a significant F1 score drop for all methods, including BiScope. For example, BiScope’s detection F1 score drops by around 0.1 on the Arxiv dataset. Notably, BiScope’s performance drop is the smallest among all the detectors in most cases. **Q4**: Thank you for pointing out the potential influence caused by the text lengths. We note that the length bias can be clearly identified in the Yelp, Essay, and Creative datasets, where the human-written texts can be twice as long as the AI-generated texts. Thus, we re-ran BiScope on these three datasets, using the first K characters as the input, where K equals the average text length of each individual dataset as shown in Table 2. The results are presented in the following table (the first five model columns report F1 on the normal datasets, in-distribution; the next four report F1 on the paraphrased datasets, in-distribution). For the Yelp dataset, the average detection F1 score drops by less than 0.01, while for the Creative and Essay datasets, there is even a slight increase in the average detection F1 score. Therefore, the length bias in the dataset does not provide BiScope with unfair advantages.

| Dataset | Input | GPT-3.5-Turbo | GPT-4-Turbo | Claude-3-Sonnet | Claude-3-Opus | Gemini-1.0-Pro | GPT-3.5-Turbo | GPT-4-Turbo | Claude-3-Sonnet | Claude-3-Opus | Normal Avg. | Paraphrased Avg. |
|---|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Yelp | Clipped | 0.8968 | 0.9256 | 0.9574 | 0.9456 | 0.9436 | 0.9003 | 0.9329 | 0.9760 | 0.9711 | 0.9338 | 0.9451 |
| Yelp | Unclipped | 0.9023 | 0.9405 | 0.9652 | 0.9532 | 0.9486 | 0.9064 | 0.9473 | 0.9814 | 0.9789 | 0.9420 | 0.9535 |
| Creative | Clipped | 0.9980 | 0.9960 | 0.9954 | 0.9954 | 0.9975 | 0.9955 | 0.9955 | 0.9954 | 0.9945 | 0.9965 | 0.9952 |
| Creative | Unclipped | 0.9985 | 0.9950 | 0.9960 | 0.9930 | 0.9964 | 0.9955 | 0.9945 | 0.9955 | 0.9940 | 0.9958 | 0.9949 |
| Essay | Clipped | 1.0000 | 0.9990 | 0.9990 | 0.9975 | 1.0000 | 0.9990 | 0.9990 | 1.0000 | 0.9985 | 0.9991 | 0.9991 |
| Essay | Unclipped | 1.0000 | 0.9990 | 0.9985 | 0.9970 | 0.9994 | 0.9965 | 0.9990 | 0.9990 | 0.9980 | 0.9988 | 0.9981 |

Thank you for all the presentation suggestions. We will further modify our paper based on them. --- Rebuttal Comment 1.1: Title: Thank you for the response. I've raised my score to 5 Comment: Thank you for the response and extra experiments! I've raised my score to 5 due to extra baseline and ablation experiments in the PDF (Ghostbusters and Binoculars). --- Reply to Comment 1.1.1: Title: Thanks for your feedback. Comment: Thanks for your feedback and appreciation! We will include all the experiments in our next version and further polish our paper based on your suggestions.
Summary: The paper describes work on detecting machine-generated texts using a proposed method called BiScope which exploits a model’s states by considering both the preceding token information and the next token information via a bi-directional cross-entropy loss calculation method. The proposed BiScope method does not make use of any additional finetuning and instead leverages the calculated forward and backward cross-entropy losses as features for the binary classifier. The performance of said classifier can also be improved through the use of summaries. Results show that using BiScope with summaries outperforms existing SOTA detection methods across aspects such as in- and out-of-distribution results, intentional paraphrasing, and efficiency. Overall, I believe the study has the level of completeness, technical rigor, and impact required for a NeurIPS paper. Strengths: The paper is well-written, easy to follow, and has the level of completeness required for NeurIPS. The proposed bi-directional calculation of losses, which the authors hypothesize to capture information from both the preceding and next tokens, has been properly motivated with a clear research framing. The proposed method has also been extensively and rigorously compared with a number of SOTA baselines on a large compiled multi-domain dataset, and was able to prove its superiority over previous approaches. Weaknesses: I do not see any strong cases of technical issues. However, some points can be considered to improve the overall quality of the work and support the realistic application of the method: It might be best to further emphasize what specific additions or changes were made to the compiled existing dataset (Arxiv, Yelp, code, etc) in the main paper and possibly name this. Moreover, you should also provide immediate details about the datasets used, particularly on the length (whether they are essay-length, paragraph-length, etc), language coverage, as well as register or domain. 
To strengthen the contribution of the study, I strongly suggest that the authors run the same experiment as Tulchinskii et al (https://proceedings.neurips.cc/paper_files/paper/2023/hash/7baa48bc166aa2013d78cbdc15010530-Abstract-Conference.html) and evaluate a limitation of the proposed BiScope method: whether non-native-written texts are falsely identified as machine-generated. If BiScope is still able to clearly differentiate non-native written texts from machine-generated texts, then this is an advantage for the authors. Moreover, this particular advantage may also give another favorable reason to use the model other than its performance. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Is there a limit or a threshold for the number of approximator models as described in Step 2 for calculating the bi-directional cross-entropy losses? Moreover, is there a criterion for selecting these models to ensure the loss values converge? Can users just go for open models? I think this part is not much discussed/clarified. 2. Is the combination of FCE and BCE statistically significant over using just one of them? The values presented in Table 4 seem to be very close to each other, particularly on text-based datasets. 3. Following #1, is there a particular model or combination of models that gives the best approximation of FCE and BCE loss values, and a particular text split combination for classification? Future researchers may want to only test with the best setup combinations for comparing SOTA or baselines. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The paper can benefit from a much clearer discussion of limitations of the work, particularly as it has not emphasized aspects such as language and the ability of the proposed method to remain robust on texts written by non-native speakers, as already explored by previous works (see cited work above). 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your appreciation and suggestions. Here are our point-by-point responses: **W1**: Thank you for pointing out the problem. We generated our datasets using five of the latest commercial LLMs, following the generation methods outlined in previous studies [1, 7, 8]. Due to the page limit of the initial submission, we placed the details of the generated datasets (e.g., data amount, length statistics) in Appendix C. We will add more details to the main text in our next version. **W2**: Thank you for your suggestion. Due to the time constraints, we were unable to generate non-native language data for our datasets to provide a fair comparison during the rebuttal period. However, the setting you recommended is certainly an important OOD setting for all detectors. We will include [9] in our paper’s discussion and provide more detailed results in the next version. **Q1**: Thank you for the question. We presented a detailed ablation study using various open-source surrogate LLMs in Table 5 in Appendix E.1. According to the results, the performance of BiScope improves when using more surrogate detection models. Notably, BiScope can maintain over 98% of the best (ensembled) performance even when only using a single surrogate model (e.g., Llama2-7B, Mistral-7B). This consistent performance of BiScope across various open-source LLMs illustrates its scalability and compatibility with different open-source LLMs, providing more flexible options for users. There is no limit or threshold for the number of surrogate models, and BiScope can be compatible with any combination of surrogate models. **Q2**: We presented a more detailed comparison of the FCE-only, BCE-only, and FCE+BCE versions of BiScope in Table 4 of Appendix E.2. The combination of FCE and BCE outperforms either FCE only version or BCE only version in 64% of cases, showing >0.35 and >0.09 maximal detection F1 score improvement, respectively. 
Such results demonstrate the necessity of this combination. **Q3**: Thank you for your suggestion. For simplicity, we recommend using either Llama2-7B or Llama2-13B as the surrogate model, while the ensemble of more surrogate models is always welcomed. For the text split method, we recommend splitting the text at every 10% length, as used in our paper. We will open-source our code and datasets for future researchers. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response, authors. My questions have been clarified. Please ensure that this will be included in the main paper, particularly the selection and recommendation of what surrogate model to use, as this is one of the first things that came to me when reading the paper. I like the proposed BiScope method as it is simple to understand and seems to be effective compared to other baselines, as shown in the experiments, hence my favorable score. --- Reply to Comment 1.1.1: Title: Thanks for your feedback Comment: Thanks for your appreciation. We will include all the clarifications in our next version.
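As a concrete illustration of the FCE/BCE features discussed in this thread, here is a minimal sketch. It is our reading of the method from the paper's description: the exact token alignment and the choice of summary statistics are assumptions, not the authors' code.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def fce_bce_features(logits, tokens):
    # Illustrative reconstruction of bi-directional cross-entropy features:
    # with teacher forcing, the logits at position i are scored against the
    # NEXT token (forward CE) and against the PRECEDING token (backward CE);
    # summary statistics of the per-token losses then feed a binary classifier.
    lp = log_softmax(logits)                 # [T, V] log-probabilities
    T = len(tokens)
    fce = -lp[np.arange(T - 1), tokens[1:]]  # position i vs token i+1
    bce = -lp[np.arange(1, T), tokens[:-1]]  # position i vs token i-1
    return np.array([fce.mean(), fce.max(), bce.mean(), bce.max()])

rng = np.random.default_rng(0)
feats = fce_bce_features(rng.normal(size=(8, 50)), rng.integers(0, 50, size=8))
print(feats.shape)  # one fixed-length feature vector per surrogate model
```

Concatenating such vectors across several surrogate LLMs would give the classifier input, in line with the ensemble discussion above.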
Summary: This paper proposes extracting various features from predictive distributions of surrogate LLMs to detect LLM-generated text. Relative to prior work, the main novelty appears to be the use of bi-directional cross-entropy losses to extract features. These features are then fed into a traditional supervised classifier to make predictions, which is estimated on a labelled dataset of human and machine-written text. Evaluations on a new dataset (expanded version of existing datasets) show that the method is competitive with some prior approaches. Strengths: * The paper tackles an important problem: automatic detection of LLM-written text. * The paper contributes datasets that cover several state-of-the-art LLMs and multiple domains, including code. * The paper includes some results for the more realistic out-of-distribution condition, where novel LLMs / genres are introduced at test time relative to the training data. The approach appears to be quite robust to new LLMs, in some genres. * The approach appears to be quite robust to paraphrased text. Weaknesses: * There are some missing comparisons, e.g. https://arxiv.org/abs/2401.12070. * The performance in the most important setting (OOD) is mixed. * The presentation is somewhat confusing. For example, it’s unclear from the exposition (e.g., Figure 3) which steps occur at training time and which steps occur at test time. * The proposed approach amounts to a supervised binary classifier. However, the obvious baseline (fine-tuned BERT) is not included in the comparisons, even though this approach (“OpenAI classifier”) is prominent and discussed in related work. Why not? * Important details are not included in the main text. For example, for the cross-model evaluation, which models are trained on and which models are held out? Hopefully, the latest LLMs are held out and previous-generation LLMs (e.g., GPT-2) are used for training. * Why is F1 used as the metric? 
Usually, in detection scenarios we are interested in detection accuracy while maintaining a low false alarm rate, which suggests using ROC-based metrics, e.g. AUC restricted to the low false alarm region. * The bold-underlines in Table 1 seem a bit random. For example, there are cases where two identical values occur in the same column but both are not bolded (0.9955). Ideally, these would represent statistical tests of which relative improvements are significant. * The discussion of limitations is completely lacking. For example, it seems like the proposed approach requires evaluating multiple LLMs in parallel. Also, I’m unclear on why there is not an evaluation setting in which both data and model shift occurs, relative to the training data. Technical Quality: 3 Clarity: 2 Questions for Authors: See "Weaknesses" Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: No; see "Weaknesses" Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review. Here are our point-by-point responses: **W1**: We have included three more baselines in the comparison in Table 6 (in the submitted PDF file): Binoculars [5], GhostBuster [1], and OpenAI Detector [6]. The results show that BiScope outperforms all three baselines on both the normal and paraphrased datasets. **W2**: We have presented the experimental results in both cross-model and cross-dataset OOD settings, as well as the evaluation on unseen paraphrased data, in the right three columns of Table 1 in our main text. Additionally, more detailed OOD results are provided in Table 3 in the appendix. We observe that BiScope performs the best or the second best in more than 75% cases. **W3**: In Figure 3, the first three steps are used in both the classifier training and testing periods to extract features. During the classifier training period, the extracted features are used to train a classifier in step 4. In contrast, during the testing period, the classifier in step 4 is fixed, and we use this trained classifier to make predictions based on the extracted test sample’s features. We will further modify Figure 3 to make this process clearer. **W4**: Thank you for pointing out the OpenAI Detector. We have included RADAR [7] in our main text’s results. RADAR is a fine-tuned RoBERTa, outperforming the OpenAI Detector [6] in most cases. Therefore, we initially chose not to include the OpenAI Detector in our paper. To further address your concern, we have included the OpenAI Detector in Table 6 (in the submitted PDF file). The results show that BiScope outperforms the OpenAI Detector with more than 0.2 F1 score improvement on average on both the normal and paraphrased datasets. 
**W5**: In our cross-model OOD evaluation (Section 4.2), we trained the classifier on human data and AI-generated data from one LLM and then tested the trained classifier on AI-generated texts from the other four LLMs and calculated the average F1 score. This process was repeated for all five generative LLMs, and the average scores are reported in Table 1 in the main text. More detailed results are presented in Table 3 in our appendix. Results show that BiScope performs the best or the second best in more than 75% of cases. Specifically, when we trained BiScope on the oldest GPT-3.5-Turbo’s data and tested it on all the other four latest LLMs’ data, the detection F1 score exceeded 0.92 on average. **W6**: Thanks for pointing out additional metrics. We use the F1 score in our paper since it is a commonly used metric in previous papers [1, 8], considering both precision and recall. We further present the TPR-FPR (ROC) curve of BiScope in Figure 9 (in the submitted PDF file). We observe that our BiScope reaches over 0.8 detection TPR on average when the FPR is only 0.01, outperforming all the baselines on all five generative models’ data. **W7**: Thanks for your recommendation. We will further polish our tables based on your suggestion. **W8**: Thank you for pointing out the missing limitations discussion. Due to the page limit of the submission, we had to place the limitation section in Appendix F in our initial submission. We will move it to the main text. Regarding the OOD evaluation, previous studies [1, 5, 7, 8] shifted either the data (cross-dataset) or the generative models (cross-model). We followed their settings to test our OOD performance. --- Rebuttal Comment 1.1: Title: Thanks! Comment: Thanks for the response. My concerns are largely addressed and I have updated my recommendation accordingly. Nice work! --- Reply to Comment 1.1.1: Title: Thanks for your feedback! Comment: Thank you very much for your appreciation. 
We will include all the rebuttal experiments in the main paper and polish the writing based on your suggestions.
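The low-false-alarm evaluation discussed in W6 above (TPR at FPR = 0.01) can be reproduced directly from raw detector scores; a small self-contained sketch, assuming higher scores indicate machine-generated text (this is an illustration, not the authors' evaluation code):

```python
import numpy as np

def tpr_at_fpr(scores, labels, max_fpr=0.01):
    """TPR at the strictest threshold whose FPR stays within max_fpr.
    scores: higher = more likely machine-generated; labels: 1 = machine."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    neg = np.sort(scores[labels == 0])[::-1]   # negative scores, descending
    k = int(np.floor(max_fpr * len(neg)))      # number of allowed false alarms
    thresh = neg[k] if k < len(neg) else -np.inf
    return float((scores[labels == 1] > thresh).mean())

# toy detector: human and machine scores drawn from two Gaussians
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0, 1, 1000),    # human-written
                         rng.normal(3, 1, 1000)])   # machine-generated
labels = np.concatenate([np.zeros(1000), np.ones(1000)])
print(tpr_at_fpr(scores, labels, max_fpr=0.01))
```

Sweeping `max_fpr` over a grid and plotting the resulting TPRs yields exactly the low-FPR region of the ROC curve referenced in the rebuttal.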
Summary: This paper develops an AI-generated text detection method called BiScope. The key idea is to formulate the detection task as a guided text completion task. The generated text and the original text are used to calculate two types of cross-entropy losses, which are used to extract features for classification. Strengths: - (S1) The idea of taking the preceding token into account for AI-generated text detection is interesting. - (S2) The paper is well-written and easy to follow. Weaknesses: - (W1) The feature extraction is computationally expensive, as it needs to run inference at least twice with LLMs for summary generation and text completion. It’s not clear if the feature extraction cost is worth the improvements. (Similar to the comment below). - (W2) Missing references and comparisons, especially against the following methods. Among them, [Ref 2] should be a solid baseline method for the classification-based approach. [Ref 4] is recent work and is optional to compare but reports that simply using n-gram and POS features would be sufficient to detect machine-generated text and thus interesting to compare as a baseline. - [Ref 1] OUTFOX: LLM-Generated Essay Detection Through In-Context Learning with Adversarially Generated Examples https://arxiv.org/abs/2307.11729 - [Ref 2] Ghostbuster: Detecting Text Ghostwritten by Large Language Models https://arxiv.org/abs/2305.15047 - [Ref 3] Smaller Language Models are Better Zero-shot Machine-Generated Text Detectors https://aclanthology.org/2024.eacl-short.25.pdf - [Ref 4] Your Large Language Models Are Leaving Fingerprints https://arxiv.org/abs/2405.14057 Technical Quality: 3 Clarity: 3 Questions for Authors: Please respond to the weaknesses raised above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I believe it’s important to mention the computational cost in the body text as a limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review. Here is our detailed point-by-point feedback on your questions: **W1**: As mentioned in Section 3.3, the summary generation is not necessary. We have two designs for generating the completion prompt: one with a summary and the other without. The latter has a significantly shorter processing time. As illustrated in Figure 5, this adjustment allows BiScope to achieve over 8.6x shorter processing time per sample. As shown in Table 1, BiScope without summary only results in an average detection F1 score degradation of less than 0.015 and still substantially outperforms the baselines. Users can choose between these two designs to balance detection accuracy and efficiency according to their specific needs. We will further modify our main text to further clarify this. **W2**: Thank you for suggesting the references and baselines. We present a comparison between BiScope and GhostBuster [1] in Table 6, demonstrating that BiScope outperforms GhostBuster with a 0.06 average F1 score increase on both the normal and paraphrased datasets. Additionally, GhostBuster is 3x slower than BiScope in processing a single sample and requires several hours to identify the optimal feature compositions. We did not compare with [2] due to the lack of an open-source implementation. Instead, as mentioned in the global response, we compared with two other baselines, Binoculars [5] and OpenAI Detector [6], which have comparable or better results than [2-4]. We will reference [1-4] in our paper and discuss them in the related work. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. Both my concerns have been addressed. I updated the score accordingly. --- Reply to Comment 1.1.1: Title: Thanks for your feedback! Comment: Thanks for your appreciation. We will include all the clarifications in our main text.
Rebuttal 1: Rebuttal: We thank all the reviewers for your thoughtful comments! We are glad that the reviewers found our paper “tackles an important problem” with a novel idea. We also thank you for your appreciation of our dataset contribution, method’s robustness, and paper presentation. To further address your concerns, we conducted more experiments and provided more detailed evidence to support our proposed method. Here is a summary of the supplementary information provided in the rebuttal materials: 1. We compare our BiScope with **three** more baselines that are recommended by the reviewers, including Binoculars, GhostBuster, and OpenAI Detector. We observe that BiScope outperforms all three baselines on all five datasets with more than 0.06 average detection F1 score, even when the data is intentionally paraphrased. 2. We test BiScope under a length-controlled setting, presenting the low sensitivity of BiScope to the length of the input text with less than 0.01 detection F1 score degradation. 3. We evaluate BiScope without any completion prompt and compare its performance with the original version in our paper, illustrating that the completion prompt allows BiScope to perform better in 56% of cases. 4. We evaluate BiScope with different numbers of segments during the feature extraction step, justifying the necessity of our proposed multi-point splitting. The results show the trend that a finer-grained segmentation interval leads to a higher detection F1 score. 5. We provided the TPR-FPR(ROC) curves of BiScope and compared them with the baseline methods’, showing that BiScope achieves the highest TPR (more than 0.8 on average) in a low FPR setting (FPR=0.01). 6. We also present point-by-point responses to all the other questions and concerns from all the reviewers on the rebuttal page under each review. Due to the length limit of each individual rebuttal, we present the most requested experimental results in the supplementary PDF file. 
We also list the general references used across all the rebuttal materials here. --------- **References** [1] Verma, Vivek, et al. "Ghostbuster: Detecting Text Ghostwritten by Large Language Models." NAACL. 2024. [2] McGovern, Hope, et al. "Your Large Language Models Are Leaving Fingerprints." arXiv preprint arXiv:2405.14057 (2024). [3] Koike, Ryuto, Masahiro Kaneko, and Naoaki Okazaki. "Outfox: Llm-generated essay detection through in-context learning with adversarially generated examples." AAAI. 2024. [4] Mireshghallah, Niloofar, et al. "Smaller Language Models are Better Zero-shot Machine-Generated Text Detectors." EACL. 2024. [5] Hans, Abhimanyu, et al. "Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text." ICML. 2024. [6] Solaiman, Irene, et al. "Release strategies and the social impacts of language models." arXiv preprint arXiv:1908.09203. 2019. [7] Hu, Xiaomeng, Pin-Yu Chen, and Tsung-Yi Ho. "Radar: Robust ai-text detection via adversarial learning." NeurIPS. 2023. [8] Mao, Chengzhi, et al. "Raidar: geneRative AI Detection viA Rewriting." ICLR. 2024. [9] Tulchinskii, Eduard, et al. "Intrinsic dimension estimation for robust detection of ai-generated texts." NeurIPS. 2023. [10] Kirchenbauer, John, et al. "A watermark for large language models." ICML. 2023. [11] Kuditipudi, Rohith, et al. "Robust distortion-free watermarks for language models." arXiv preprint arXiv:2307.15593. 2023. [12] Hou, Abe, et al. "SemStamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation." NAACL. 2024. [13] Yang, Xi, et al. "Watermarking text generated by black-box language models." arXiv preprint arXiv:2305.08883. 2023. Pdf: /pdf/78d414b4bbb5fd186a0390775c8e7a36850dc5d4.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Unified Guidance for Geometry-Conditioned Molecular Generation
Accept (poster)
Summary: The paper introduces UniGuide, a unified framework for geometry-conditioned molecular generation using unconditional diffusion models. UniGuide is designed to address the adaptability issues in current molecular diffusion models by providing a general training-free approach via a condition map that transforms complex geometric conditions to match the diffusion model’s configuration space, allowing for self-guidance during the generation process. UniGuide is demonstrated to be effective in various drug discovery tasks, including structure-based, fragment-based, and ligand-based drug design. The framework shows either on-par or superior performance compared to specialized models, highlighting its potential to streamline the development of molecular generative models. Strengths: 1. The paper introduces a novel, training-free method to guide diffusion models based on expected geometry conditions, enhancing adaptability without additional training overhead. 2. The framework is thoroughly evaluated across multiple drug discovery tasks, demonstrating its effectiveness. Weaknesses: 1. The contribution and the generalizability of UniGuide are somewhat overstated. In the Introduction and Figure 1, the available conditions are described in a quite general way, including not only structures and surfaces, but also densities. As this paper mainly focuses on geometry-aware conditions, clarifying the current scope in the introduction and main figures would enhance the paper's accuracy. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. In the FBDD experiments, can the UniGuide framework also be applied to the DiffLinker model, considering it is diffusion-based backbone? 2. Based on W1, it is noticed that there is a brief exploration on the density condition in App. G. Given its potential to enhance the model's capabilities, it is suggested to provide additional details about this setting (e.g., task definition, condition mapping) and to reference App. 
G in the main body of the paper. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The limitation of this paper is well discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's insightful feedback and are pleased with the positive comments on UniGuide's novelty and effectiveness for various drug discovery tasks. We would like to clarify the remaining questions and concerns in the following. &nbsp; > The contribution and the generalizability of UniGuide are somewhat overstated; the available conditions are described in a quite general way, including not only structures and surfaces, but also densities We appreciate the reviewer's feedback and acknowledge the ambiguity between UniGuide's contributions, as shown in Fig. 1, and the empirical evaluations presented in our experimental section. To resolve this ambiguity: - We **extended the setting from App.G on density-conditioned guidance as part of our rebuttal** and explain it in more detail down below. - We plan to incorporate this novel setting into the final part of Sec.5.1 in the updated manuscript, referring to it as density-based drug design (DBDD), a more challenging task than LBDD. > Given its potential [of guiding w.r.t. densities] to enhance the model's capabilities, it is suggested to provide additional details about this setting (e.g., task definition, condition mapping) and to reference App. G in the main body of the paper We appreciate the reviewer's suggestion and are pleased to provide a more detailed explanation of the technical aspects of the DBDD setting. Task Definition - As motivated in App.G, we anticipate UniGuide to be particularly useful in scenarios where explicit information about advantageous features of the ligand is provided in the form of 3D densities. Examples of this include - a) volumetric densities that indicate beneficial placement of certain atom types, such as oxygen atoms [2] or - b) pharmacophore-like retrieval of advantageous positions for aromatic rings, as utilised in [3]. 
- On a technical level, the DBDD setting assumes this information to be provided as follows: - Instead of a reference ligand's shape, we only have the protein pocket's surface, which primarily defines exclusion zones rather than precise atom placement. Please refer to Fig.1 in the attached PDF. - Instead of a reference ligand's structure, we only have access to (multiple) atom-type densities that indicate preferred locations for optimal interaction with the protein. Condition Map: Adapting UniGuide for the DBDD scenario requires minor adjustments - The protein surface is treated like shapes in standard LBDD, defining an exclusion zone based on proximity to the surface. - The atom densities are thresholded to reflect regions of high interest and converted to surfaces using the marching cubes algorithm [4] - To include feature information, we employ a modified condition map similar to Eq.21 that extends the transformation from the conformation to the configuration space. - Moreover, the number of atoms guided by each density is adjusted based on its volume, reflecting the varying influence of each density, and guidance is only applied if atoms are sufficiently close. **We have included a new figure in the attached PDF that visually demonstrates qualitative results for this setting**. While our current approach represents a promising first step in tackling this task, we acknowledge the potential for further refinement. We are eager to explore future improvements within the UniGuide framework. > For FBDD [...], can the UniGuide framework also be applied to the DiffLinker model, considering it is diffusion-based backbone While direct application of UniGuide's FBDD condition map is not possible with DiffLinker, it can be combined with the surface-based Linker Design condition map (App.F.1) to guide linker atom positions. 
- DiffLinker's Modifications: Since DiffLinker fixes condition fragments in space and only learns to diffuse the linker atoms, the model becomes unsuitable for UniGuide's FBDD condition map from Sec. 4.2. - DiffLinker's Similarity to EDM: However, DiffLinker's diffusion backbone is very similar to EDM, which we successfully combined with UniGuide for controlled generation in Linker Design, see Tab.3. - Compatibility with Surface-Based Conditioning: Moreover, DiffLinker can still be combined with the surface-based Linker Design condition map (App.F.1) to guide the placement of linker atoms towards a specific region between the fragments. &nbsp; We appreciate the positive feedback and are excited to move forward with these improvements. We hope that we have addressed all outstanding concerns satisfactorily and welcome further discussion. &nbsp; [1] Hoogeboom, Emiel, et al. Equivariant diffusion for molecule generation in 3d, 2022. [2] Zaucha, Jan, et al. Deep learning model predicts water interaction sites on the surface of proteins using limited-resolution data, 2020 [3] Zhu, Huimin, et al. A pharmacophore-guided deep learning approach for bioactive molecular generation, 2023 [4] Lorensen, William E., and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm, 1998 --- Rebuttal Comment 1.1: Comment: Thanks for your explanations and additional results. My concern has been resolved and I have raised my score to 7. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you very much for the kind words and for increasing the score of our paper. We are pleased to hear that we were able to resolve the reviewer's concerns. Best regards, The Authors
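The DBDD condition map described in the rebuttal above (threshold atom-type densities, convert them to a surface, and guide only sufficiently close atoms) can be illustrated with a toy NumPy sketch. A thresholded point cloud stands in for the marching-cubes surface, and all function names and constants here are illustrative assumptions, not the authors' condition map:

```python
import numpy as np

def density_to_points(density, grid, threshold=0.5):
    # keep grid locations where the density exceeds the threshold
    # (the rebuttal uses marching cubes [4] for a proper surface; a
    #  thresholded point cloud is a crude stand-in)
    return grid[density > threshold]

def guidance_drift(atoms, surface_pts, max_dist=2.0, step=0.1):
    # pull each atom toward its nearest surface point, but only when it is
    # already sufficiently close (guidance applied only to nearby atoms)
    d = np.linalg.norm(atoms[:, None, :] - surface_pts[None, :, :], axis=-1)
    nearest = surface_pts[d.argmin(axis=1)]
    dist = d.min(axis=1, keepdims=True)
    return np.where(dist < max_dist, step * (nearest - atoms), 0.0)

# toy example: a spherical density on a coarse 3D grid
xs = np.linspace(-3, 3, 13)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
density = np.exp(-np.linalg.norm(grid, axis=1) ** 2)
pts = density_to_points(density, grid, threshold=0.5)
atoms = np.array([[1.2, 0.0, 0.0],    # close to the density region
                  [5.0, 5.0, 5.0]])   # far away
drift = guidance_drift(atoms, pts)
print(drift[1])  # far-away atom receives zero drift
```

In the actual framework this drift would be injected into the reverse diffusion step via the condition map, with the number of guided atoms scaled by each density's volume.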
Summary: This paper proposed a training-free framework for guided diffusions in unconditional molecular generation. UniGuide applies to a wide range of design tasks such as SBDD, FBDD and LBDD, unified by the proposed condition map, which projects from the product space of conditional input that lies in general geometric space to some favorable datapoint in the data space that diffusion operates within. Strengths: - The idea of condition map that unifies different downstream tasks is novel. - This paper is generally easy to follow. Weaknesses: - In all tables, the authors claimed to highlight the best "diffusion-based approach" in bold, which seems misleading since there are also non-diffusion baselines. I would recommend the authors to reconsider this style of presentation in order to faithfully reveal the general performance. - After checking other baseline results, UniGuide seems not so competitive in downstream tasks such as LBDD and linker design in FBDD. This casts doubt on its effectiveness. - There are a number of important baselines missing, e.g. DecompDiff [1] and IPDiff [2] for SBDD tasks, and LinkerNet [3] for LBDD. [1] DecompDiff: Diffusion Models with Decomposed Priors for Structure-Based Drug Design [2] Protein-Ligand Interaction Prior for Binding-aware 3D Molecule Diffusion Models [3] LinkerNet: Fragment Poses and Linker Co-Design with 3D Equivariant Diffusion Technical Quality: 2 Clarity: 2 Questions for Authors: - The condition map works similarly to molecular translation or some form of retrieval given conditional inputs. How would the authors compare to those methods? - For SBDD, the authors only reported QVina Dock scores in the main results. However, SBDD models have been criticized for inaccurate structure modeling [1]. It seems to me that Vina Score and Vina Minimize used in [2] would serve as a better indicator for the pose quality. Can the authors also include these metrics in their results? 
[1] Benchmarking Generated Poses: How Rational is Structure-based Drug Design with Generative Models? [2] DecompDiff: Diffusion Models with Decomposed Priors for Structure-Based Drug Design Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback regarding our manuscript, particularly regarding the presentation and interpretation of UniGuide. We believe we can address these concerns effectively, as detailed in the following answers.  &nbsp; > Only highlight the best diffusion-based approach  - We are happy to revise the updated manuscript's presentation style as the reviewer suggested. - We want to emphasise that we never intended to mislead the reader by highlighting the results in a way that favours UniGuide. We are confident in the benefits provided by UniGuide and agree that they should be presented without ambiguity. > UniGuide seems not so competitive in LBDD and Linker design in FBDD tasks - Clarifications on LBDD performance - We stress that "on-par" performance for UniGuide compared to specialised approaches is a significant achievement, as our method imposes no constraints or additional training. The general response provides clarifications on LBDD performance and the reasoning behind the experimental settings. We highlight that **UniGuide achieves very high shape similarity and low graph similarity**, even though it does not rely directly on the reference structure (as in SQUID). - Clarifications on FBDD (Linker design) performance - We agree that UniGuide performs on par with DiffLinker overall in the linker design task and does not strictly outperform it. Yet, we want to emphasise again that **we simply guide the sampling of an EDM [1] model that was not optimised for linker design** (unlike DiffLinker, which is a conditional model limited to the linker design task only). In Fig.4 and App. F.2, we showcase how UniGuide can be flexibly adapted to more tasks such as scaffolding or fragment growing. 
- Despite not being trained explicitly for linker design, UniGuide(EDM) performs competitively with DiffLinker: Our method generates the most diverse linkers (Uniqueness) and successfully recovered nearly 60% (Recovery) of the actual linkers from the test set. While baseline methods may achieve higher recovery rates, they produce less diverse solutions. Moreover, UniGuide (EDM) generates more complex structures as measured by the number of rings. However, this also leads to a slightly worse SA score as these molecules are not as easily synthesisable. - Another advantage of UniGuide not relying on extra training for the linker design task is that it is **agnostic to the fragmentation procedure used to obtain the condition fragments**. This means that UniGuide will generalise to unseen fragments if the underlying molecule fits within the training distribution. - We realise that the original manuscript lacked clarity on these aspects, which may lead to misunderstandings. In the updated version, we will better contextualise the FBDD results. > Additional baselines - SBDD: - We include DecompDiff and IPDiff in the updated version of the manuscript as requested by the reviewer. Furthermore, we are happy to include additional Vina metrics to indicate the pose quality better. Please refer to Tab.1 in the attached PDF. We have also provided additional experimental details and comparisons to different baselines in the general response.  - FBDD: - We thank the reviewer for pointing us towards the LinkerNet work. Like DiffLinker, LinkerNet is a conditional model that is additionally able to model fragments with more degrees of freedom, i.e. their pose and centre of mass. Since this is also reflected in their experimental setup (LinkerNet can predict the condition fragment poses, the baselines randomly rotate them), it is not directly comparable to the experiments conducted in Table 3. 
However, this is an exciting addition to the linker design task, which can be readily implemented with minor adjustments to the condition maps presented by UniGuide. We will investigate this addition for inclusion in the updated version of the manuscript. > Q: Connection to retrieval and molecular translation - Comparison to molecular translation: While UniGuide's primary focus is novel drug design with geometric constraints, its adaptability allows for future exploration of molecular translation in 3D, similar to existing 2D approaches [3,4]. This could include, for example, optimising a molecule's properties while preserving its 3D shape using the surface condition map. However, this would necessitate property regressors and the combination of UniGuide with classifier guidance. - Comparison to retrieval-based methods: Retrieval-based methods, such as shape-based Virtual Screening (VS),  involve retrieving existing data based on predefined criteria or queries, whereas UniGuide focuses on generating new samples based on a predefined condition without involving existing data. Our comparisons for LBDD show that we surpass VS in terms of 3D and 2D similarity (Tab.1). We are curious to explore the inclusion of retrieval-based methods into the condition map in the future. &nbsp; We hope that we have adequately addressed the concerns raised and believe the reviewer's feedback has significantly enhanced the presentation of our results and the experimental section. We look forward to further discussion. &nbsp; [1] Hoogeboom et al. Equivariant diffusion for molecule generation in 3d [2] Schneuing et al. Structure-based drug design with equivariant diffusion models [3] Jin et al. Hierarchical generation of molecular graphs using structural motifs [4] Jin et al. Junction tree variational autoencoder for molecular graph generation --- Rebuttal Comment 1.1: Comment: Thank you for the response. 
Given the additional results and comparison with more diffusion baselines, I'm still a bit concerned about the effectiveness and necessity of introducing geometric guidance, since it only marginally improves upon the performance of backbone models (EDM, DiffSBDD, etc). The author mentioned that these models have not been specifically engineered for certain tasks, but it seems to me that the point of adapting existing models for downstream tasks is primarily aimed to benefit from them so as to boost the task-specific performance, which UniGuide has yet to achieve. In this regard, I'm inclined to maintain the score. --- Reply to Comment 1.1.1: Title: Clarification on performance and broader impact of UniGuide Comment: Thank you very much for your response. While we consider UniGuide's performance improvements significant (details below), we want to emphasise that a performance-focused discussion overlooks UniGuide's **broader potential and impact: UniGuide's formulation enables to tackle entirely novel drug discovery tasks where no established baselines or sufficient data exists.** Specifically, our goal is not to adapt existing models to boost their performance in downstream tasks but to provide a generally applicable guidance framework that makes (unconditional) base models useful for various tasks and practical applications. This capability is well demonstrated in our experiments on density-based drug design (no data) and symmetric proteins (limited base model); see Figures in PDF. Our results show that UniGuide is versatile (EDM was applied to both LBDD & FBDD) and effective, consistently delivering performance that matches or exceeds task-specific models (which are limited to a single task and require extra training) and outperforms alternative conditioning mechanisms. 
This sets the basis for UniGuide's long-term objective to reliably translate novel tasks directly to a generative model by incorporating any newly developed geometric conditions through its condition map, thereby accelerating the overall drug discovery process (for real-world applications). We also want to highlight a selection of our results that demonstrate the consistency and significance of our improvements, and we kindly ask the reviewer to reassess their evaluation in light of this evidence: - **LBDD**: **UniGuide is state-of-the-art** and outperforms alternative guidance mechanisms (cf. follow-up to our general response). An improvement over the base EDM model for LBDD (as indicated by the reviewer) is not possible as **EDM can only be applied for the LBDD tasks with UniGuide** (Table 3, Rebuttal PDF). - **SBDD**: **UniGuide demonstrates superior performance over all evaluated conditioning mechanisms for diffusion models** (cf. follow-up to our general response). Specifically, we improve the base model DiffSBDD in terms of VINA score by up to 1 when utilising UniGuide and by up to 1.8 when additionally using UniGuide's version of Clash Drift. This results in competitive performance of the base model compared to conditional, task-specific models such as DecompDiff and IPDiff. - **FBDD**: **UniGuide enhances the VINA score by 0.5 compared to the evaluated baseline for general, pocket-conditioned FBDD tasks** and generates more valid and connected ligands (as shown in Table 13 of the manuscript). While the observed improvement initially appeared marginal to the reviewer, we believe further consideration of the experimental evidence and UniGuide's potential for novel applications and model enhancement may offer a different perspective. We appreciate your feedback and hope our explanation has been helpful in clarifying UniGuide's potential impact.
Summary: This paper proposes a method named UniGuide for conditional molecular generation with unconditional diffusion models, without the need for additional training or parameters. The proposed framework is an extension of self-guided diffusion models to the conditional molecular generation task. The authors designed different condition maps C: S x Z $\rightarrow$ Z for ligand-based generation and structure-based / fragment-based generation, enabling guidance from conditions in a unified fashion. Strengths: * The paper is generally easy to follow and well-written * Theoretical justification is provided, and the ablation studies / visualization results are sufficient Weaknesses: * Technical contribution * My biggest concern is the superiority of the proposed method over other conditional sampling methods. E.g., the special case of S = Z can be implemented with an inpainting technique; the shape-based generation can also be implemented with a technique similar to classifier guidance (e.g., the validity guidance used in [1][2]) by designing a loss function based on the denoised datapoint. However, the authors do not discuss the superiority of their method with sufficient experimental support. * Limitation: The performance relies strongly on the base unconditional model. * Clarification * The approximation in Eq 10 does not make much sense to me: f approximates the clean data point, so how could the condition c be a Gaussian distribution taking this clean data point as the mean? I think the concept of condition c $\in$ Z in this paper is closer to a datapoint satisfying condition c, or there should be another mapping from the data space to the condition space. * Eq 17 doesn’t strictly follow the definition of the condition map: c and z should have the dimension of n x (3 + d) — Eq (18) should have corresponding index selection / mask operations. 
* Experiments * In 5.1 Ligand-based drug design, why don’t the authors compare their model with conditional EDM? How could UniGuide (shapeMol), a conditional model with soft constraints, outperform Cond-Shape2Mol? * In 5.3 Fragment-based drug design, I think it would be better to also compare UniGuide with an inpainting version of the other baselines. References: [1] Peng, X., Guan, J., Liu, Q., & Ma, J. (2023). Moldiff: Addressing the atom-bond inconsistency problem in 3d molecule diffusion generation. arXiv preprint arXiv:2305.07508. [2] Guan, J., Zhou, X., Yang, Y., Bao, Y., Peng, J., Ma, J., ... & Gu, Q. (2023). DecompDiff: Diffusion Models with Decomposed Priors for Structure-Based Drug Design. In International Conference on Machine Learning. Technical Quality: 2 Clarity: 3 Questions for Authors: See the weaknesses above. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and will address their concerns below. &nbsp; > Superiority of UniGuide and insufficient experimental support UniGuide's appeal builds upon multiple aspects: - No Training: UniGuide is based on guidance, and as such, it **does not require any additional training but achieves the desired sampling behaviour at inference time** - The unification aspect of UniGuide: **Unconditional models can readily be adapted to new settings via UniGuide**. This adaptation effectively reduces to the definition of a condition map C, which can be non-differentiable. To support this, we guide an unconditional EDM model for the LBDD task (Tab.1) as well as the Linker Design task (Tab.3) and show that we outperform specialised approaches. - The special case of $S=Z$ (not the general case!) can be addressed using techniques based on inpainting. However, UniGuide leads to better or equal performance, making it favourable over inpainting. This is supported across our experiments, such as in Tab.2 or App.F.2. - Extensive experimental support: We benchmark UniGuide against specialised non-diffusion and diffusion-based models (including conditional and guidance-based alternatives) across tasks involving geometric conditions. Our results prove that **UniGuide provides better or on-par performance, indicating that controlling unconditional models with UniGuide is effective, simple, and easily extended to novel settings**. Yet, we acknowledge the need for a more explicit discussion and refer the reviewer to our general response for further clarification on SBDD, LBDD (where UniGuide excels), and FBDD (see also the response to reviewer YA7u). > Shape-based generation via technique akin to validity guidance - We agree that the proposed loss used for validity guidance [1] could be adjusted to the LBDD case, and we thank the reviewer for pointing this out. 
- Yet, **UniGuide is more general and decouples the surface computation (input to the condition map) and gradient computation**, with the latter consistently applied to the L2 loss between the clean estimate and the target condition.   - This decoupling has multiple implications: - Due to the loss formulation, the condition map does not have to be differentiable, making it agnostic to how the surface points are computed and, therefore, more flexible. - At the same time, the condition map follows an explicit geometric intuition, which makes it easily adjustable to novel scenarios. This aspect is especially well reflected in the new experiments discussed in our general response. We guide towards a volume reflecting the symmetry requirement to generate symmetric proteins. - In our general response, we discuss incorporating drift terms into UniGuide (akin to validity guidance) for SBDD. As part of this addition, we will conduct experiments to compare an adjustment of validity guidance with our surface condition map. We will share the results as soon as they are available. > Limitation: Reliance on the base model - While UniGuide's performance is inherently tied to the performance of the underlying base model (as noted in our limitations), **this connection also has a positive effect as it encourages research on unconditional generation**. Our experiments show that UniGuide can directly translate improvements to better task-specific performance. UniGuide also facilitates the application of these models to diverse downstream tasks, broadening their utility. > Eq10 does not make much sense - We apologise for the confusion caused by the poor presentation of Eq.10. - In the current manuscript, we failed to specify that the condition $c$ must lie in the same space as the samples $z\_t$ (the configuration space). Only this assumption makes the Gaussian approximation possible. To clarify this, we will add further explanations near Eq.10. 
- We would like to highlight that we discuss and lift this assumption on the condition $c$ in our method section (Sec.4). Note that the condition map $C$ serves as a transformation that takes the source condition $s$ and the clean approximation of $z\_t$ as inputs, and outputs a suitable target condition $c$ that lies in the configuration space and can be used directly in the guidance loss presented in Eq.14/15. > Eq.17 does not match the definition of C - In Eq.17, we define the condition map as $C(s, \hat{z}^{\mathcal{A}}\_t)=C(\tilde{z}, \hat{z}^{\mathcal{A}}\_t)$, where $\tilde{z}\in Z$ is a configuration that specifies $m<N$ nodes (for $m=N$ there is no point in guidance).  - In order to match the dimension of the clean data estimate $\hat{z}\_t$ with $\tilde{z}$, we subset $\hat{z}\_t$ to $m$ nodes as indicated by the superscript $\mathcal{A}$ in the condition map. - For example, for SBDD, the source condition is a protein with $m= N^{P}<N$ nodes. - While we believe Eq.17 and its definition are consistent, we would appreciate further clarification if we have overlooked any aspects. > In 5.1 Ligand-based drug design, why do the authors not compare their model with conditional EDM?  In our general response, we clarify our reasoning for the LBDD evaluation and hope that our answer explains why we did not compare UniGuide (EDM) to conditional EDM.  > How could UniGuide (shapeMol), a conditional model with soft constraints, outperform Cond-Shape2Mol? - We believe that what is meant by "Cond-Shape2Mol" maps to the conditional ShapeMol [2]. We followed this suggestion and added UniGuide (ShapeMol) to our comparisons; see Tab.3 (PDF). > For FBDD, add inpainting comparison - We added this in Tab.2 in the PDF and would refer to our general response for additional details on this setting. &nbsp; Again, we thank the reviewer for their constructive comments, which have significantly improved our work. &nbsp; [1] Schneuing et al. 
Structure-based drug design with equivariant diffusion models [2] Chen et al. Shape-conditioned 3d molecule generation via equivariant diffusion models --- Rebuttal Comment 1.1: Title: Additional results posted as official comment Comment: Dear ZcJd, As promised in our initial response, we conducted additional experiments to compare UniGuide with an adaption of validity guidance for LBDD and included a version of "clash drift" into UniGuide's condition map for SBDD. As we believe that the results are relevant to all reviewers, we kindly refer you to our official comment. Thank you again for your valuable feedback. &nbsp; Best regards, The Authors
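For readers following this thread, the self-guidance mechanism the rebuttal describes (a clean-sample estimate guided by an L2 loss against the condition-map target) can be sketched as below. This is an illustrative reconstruction under simplifying assumptions, not the authors' implementation: the toy `model`, `condition_map`, and schedule values are hypothetical, the condition-map output is treated as a constant target (so the map need not be differentiable), and the model Jacobian is dropped from the gradient.

```python
import numpy as np

def clean_estimate(z_t, eps_hat, alpha_t, sigma_t):
    # Standard diffusion identity: z_t = alpha_t * z_0 + sigma_t * eps
    return (z_t - sigma_t * eps_hat) / alpha_t

def guided_update(z_t, s, model, condition_map, t, alpha_t, sigma_t, scale):
    # One guidance step: form the clean estimate, map the source condition s
    # into the configuration space, and descend the L2 loss between the two.
    # The target c is held constant and the model Jacobian is ignored, so the
    # gradient of 0.5 * ||z0_hat - c||^2 w.r.t. z_t reduces to (z0_hat - c) / alpha_t.
    eps_hat = model(z_t, t)
    z0_hat = clean_estimate(z_t, eps_hat, alpha_t, sigma_t)
    c = condition_map(s, z0_hat)           # target in configuration space
    grad = (z0_hat - c) / alpha_t
    return z_t - scale * grad

# Toy demo with hypothetical components: the "model" predicts zero noise, and
# the condition map returns the source condition itself (the S = Z case).
model = lambda z, t: np.zeros_like(z)
cmap = lambda s, z0: s
z = np.array([1.0, -2.0, 0.5])
target = np.zeros(3)
z_new = guided_update(z, target, model, cmap, t=10, alpha_t=0.8, sigma_t=0.6, scale=0.1)
# z_new is strictly closer to the target than z was
```

In an actual diffusion loop this update would be applied once per reverse step, interleaved with the usual denoising transition.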
Summary: The paper introduces UniGuide, a general framework for conditioning unconditional molecular diffusion models during inference. To achieve this, UniGuide introduces the concept of a condition map for different applications. With a condition map, it can control the score function by adding a task-related gradient term to generate samples with desired properties. Experiments have been conducted over three settings (SBDD, FBDD, LBDD) to demonstrate the effectiveness of the proposed method. Strengths: 1. The paper is generally well-written. The motivation is clear and important to a broad audience for making use of unconditional models in different complex scenarios. 2. The paper utilizes extensive experimental settings. I appreciate the effort to conduct experiments in all of the SBDD, FBDD, and LBDD settings. Weaknesses: 1. However, the paper's contribution is hard to evaluate. Though the authors claim UniGuide is a new general framework, the method takes exactly the form of gradient guidance for diffusion models, which has been explored in the previous literature and follow-ups [1]. Hence, the key contribution, given the previous works, is limited to a direct application of gradient-based guidance of molecular diffusion to different scenarios. I would like the authors to clarify more about the contribution. 2. Important related works are missing, such as [1,2]. I suggest the authors do a comprehensive review of the relevant literature. [1] Equivariant Energy-Guided SDE for Inverse Molecular Design. ICLR 2023 [2] Training-free Multi-objective Diffusion Model for 3D Molecule Generation. ICLR 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: refer to above Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their valuable feedback and will address the raised concerns in our answer below. &nbsp; > However, the paper's contribution is hard to evaluate. Though the author claims a new general framework for Uniguide, the form takes exactly as a gradient guidance form for diffusion models which has been explored in the previous literature and following ups [1] - We acknowledge that UniGuide builds upon the existing framework of gradient-based guidance for diffusion models. However, our primary goal was to **develop a method capable of conditioning on general geometric information, a capability absent in previous work**. We achieve this through two key innovations: self-guidance and the condition map. Key Differentiators: - Condition Map: **UniGuide leverages a novel condition map to incorporate diverse geometric information**, such as protein structures, molecular fragments, and molecular surfaces, as guidance conditions. This is a significant departure from previous work like [1, 2, 3], which are limited to conditioning only on specific (quantum) properties.  - Self-guidance: UniGuide is a self-guiding approach that modifies the reverse process of the diffusion model without relying on additional networks to guide the generation, similar to prior work [5]. This highlights the flexibility of UniGuide as it does not require additional training (neither training a specialised conditional diffusion model nor training additional networks for guidance). This is unlike prior work [1] that requires training additional regressors to guide the generation. - Property-Based Conditioning: In principle, UniGuide can also perform (quantum) property-controlled generation. As this task focuses on global graph properties, it would require additional regressors.  
- EEGSDE [1], on the one hand, computes the guidance loss between a condition property value $\mathbf{c}$ and the output of a property prediction network that is finetuned over different noise levels of the diffusion process $\mathbf{m}_{\theta}(\mathbf{z}_t,t)$.  - In contrast, UniGuide uses a property regressor trained on clean samples, leveraging a clean approximation ( $\mathbf{\hat{z}}_t$ ) of the noisy data ( $\mathbf{z}_t$ ) for guidance. This approach, explored in [2], differs from our focus on self-guidance. While previous studies [1,2,3] explore property-based conditioning with various conditioning mechanisms, our work investigates self-guidance without relying on external regressors or classifiers, and its broad application across various drug discovery tasks. - Multiple Conditioning: We further indicate that, unlike prior works [1,2], **UniGuide can support conditioning both on geometric conditions (scaffolds, proteins or shapes) and global graph properties (quantum properties)**, purely during inference. However, this requires additional property regressors [2]. - Motivation for new drug discovery strategies: Inspired by the performance on the LBDD task, we further motivate the applicability of UniGuide for the generation of molecules given molecular densities, see App.G and our detailed answer to Reviewer D4nw. We further provide a qualitative example in Fig.1 in the attached PDF. UniGuide is Not a "Direct Application": Consequently, we can confidently state that our work does not merely constitute a "direct application" of gradient-based guidance. **UniGuide introduces fundamental innovations that enable geometric conditioning**, unlocking novel applications and expanding the capabilities of controlled molecule generation. 
> The important related works are missing  - The reason why we omitted the discussion on property-based conditioning is that the current self-guidance formulation of UniGuide does not support drug discovery tasks involving global graph properties. However, it is possible to combine UniGuide with classifier-guidance to enhance the generated molecules' quality for different downstream drug discovery applications. - We acknowledge that the omission of this discussion may have led to some confusion. We will incorporate the suggested prior works [1, 2] into the related works section and describe how they differ from UniGuide. &nbsp; We hope that we were able to appropriately address the reviewer's concerns and look forward to a constructive discussion. &nbsp; [1] Bao et al. Equivariant energy-guided sde for inverse molecular design [2] Han et al. Training-free Multi-objective Diffusion Model for 3D Molecule Generation [3] Hoogeboom et al. Equivariant diffusion for molecule generation in 3d [4] Dhariwal et al. Diffusion models beat gans on image synthesis [5] Song et al. Loss-guided diffusion models for plug-and-play controllable generation --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: I carefully checked the rebuttal. I am now convinced that UniGuide differs from a direct application of gradient-guided generation, and I also appreciate the novelty of the geometric condition maps. I increased my score to recognise the efforts. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you very much for the kind words; we are grateful for the increased score and are excited about the positive feedback on the novelty of the UniGuide framework. We are pleased to hear that you found our additional clarifications on gradient-guided generation satisfactory; they will be included in the camera-ready version of the paper. Best regards, The Authors
Rebuttal 1: Rebuttal: We are pleased with the positive feedback on our work, particularly **noting its motivation and its broad and flexible applicability across various drug discovery tasks**. &nbsp; We incorporated clarifications and additions to our evaluations - please refer to the attached PDF and individual responses for detailed information. Presentation Style - We revised the tables’ presentation style as shown in the PDF. - Initial Distinction: Our quantitative comparisons distinguish between non-diffusion and diffusion-based methods. - Conditioning Focus: Within diffusion-based methods, we discern the conditioning approaches that control the same backbone. We aim to isolate the effect of different conditioning techniques (inpainting, conditional training, and self-guidance via UniGuide). LBDD (mainly by YA7u) - We need to clarify that Tab.1 (Tab.3 PDF) contains various pieces of information that were not presented clearly, obscuring UniGuide's performance: - The LBDD task is addressed in different ways: by leveraging only the molecular shape (UniGuide, ShapeMol, VS) or utilising both the shape and the reference structure (SQUID, ShapeMol+g). We indicate this difference in difficulty by a ✓ for methods that only use the shape information and a ✗ for the others. - Furthermore, **the ratio metric is the most important** as it combines shape and graph similarities. **UniGuide (EDM) outperforms all other approaches**, highlighting that our approach discovers novel (low graph similarity) ligands satisfying a given shape. - Why we include both UniGuide (ShapeMol[U]) and UniGuide (EDM): 1. To isolate the effect of UniGuide and compare it with the shape-conditioning of ShapeMol, we trained ShapeMol without its conditional part (ShapeMol[U]). The improvements of UniGuide (ShapeMol[U]) over ShapeMol illustrate the performance benefits provided by our guidance. 2. 
Since UniGuide can be applied to any base model, we asked what happens if we use an off-the-shelf EDM model instead of ShapeMol[U]. The substantial performance improvement over both UniGuide (ShapeMol[U]) and ShapeMol, combined with the simplicity of applying UniGuide, demonstrates the effectiveness and benefits of our method. - We will revise the updated manuscript to provide a clearer and more detailed explanation of these aspects, ensuring it is better understood. - Inclusion of UniGuide (ShapeMol) in LBDD (Tab.3 PDF, suggested by ZcJd): Our application of UniGuide with the surface condition map (Eq. 21) to the conditional ShapeMol demonstrates that UniGuide provides additional benefits also for conditional models. We report this as UniGuide (ShapeMol) in Tab.3 (PDF). SBDD (Tab.1 PDF, suggested by YA7u&ZcJd) - As suggested, we added extra metrics: VinaScore, VinaMin, and VinaDock, cf. [1,2]. - We also include DecompDiff [1] & IPDiff [2] as baselines. - Due to a discrepancy in the vina metrics (caused by post-processing [3]), we reevaluated UniGuide and the other baselines and report the updated results for Tab.2 (manuscript) in Tab.1 (PDF). - We draw the following conclusions from the updated results: - Our results, when utilising the same model backbone [3], demonstrate that **UniGuide outperforms alternative conditioning mechanisms**, such as inpainting and conditional training. - Furthermore, UniGuide surpasses DecompDiff (no drifts) regarding VinaScore and VinaMin. DecompDiff (all drifts) incorporates additional guidance drifts to enhance the binding affinity. We anticipate that integrating these drifts into UniGuide will further improve performance, and we will present the results for these additions once they become available. - We believe that the addition of such drift terms is also crucial for an adequate comparison with IPDiff and DecompDiff, which both add reference and pocket priors to improve the binding affinity specifically. 
- Nevertheless, **UniGuide performs very well just by controlling an off-the-shelf unconditionally trained diffusion model**, proving its generality and effectiveness. Linker Design (Tab.2 PDF, suggested by ZcJd) - We implement an inpainting-inspired method for Linker Design and include it in our comparisons, see Tab.2. We use EDM and inpaint the condition fragments, following [3], and observe that the inpainting mechanism alone is not sufficient for the task of Linker Design. - Additionally, we refer to the experiments in App.F.2, where we evaluate UniGuide for the FBDD (+pocket information) task and compare it to the inpainting technique [3], see Tab.13. These experiments include scaffolding, fragment linking, and fragment growing. Tab.13 demonstrates UniGuide’s favourable performance over inpainting across various metrics. An additional qualitative comparison is available in Fig.7. **Novel tasks beyond Sec.5** (inspired by YdVq&D4nw) - Conditioning on densities: We elaborate on the scenario motivated in App.G and refer to this novel setting as Density-Based Drug Design. Fig.1 (PDF) shows how UniGuide can be utilised to condition on densities of atom types or aromatic rings. Please refer to our response to D4nw for extra details. - Extension to proteins: Even though our main focus centres around small molecules, UniGuide can be used to generate symmetric proteins (C8 symmetry for this PoC), cf. [4]. In particular, we designed a pie-like volume element that resembles the desired angles for the monomer and guided the protein generation with the surface condition map from Eq. 21, see Fig.2 (PDF). &nbsp; We are grateful for the opportunity to address the reviewers' concerns and are committed to incorporating their feedback to improve our work further. &nbsp; [1] Guan et al. DecompDiff: diffusion models with decomposed priors for SBDD [2] Huang et al. Protein-ligand interaction prior for binding-aware 3d molecule diffusion models [3] Schneuing et al. 
Structure-based drug design with EDM [4] Watson et al. De novo design of protein structures and function with RFdiffusion Pdf: /pdf/3f10015ec9dbd1b000da9814293f4386332b83e5.pdf
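The inpainting-inspired mechanism that the general response above compares UniGuide against (for the special case where the condition is itself a partial configuration, S = Z) can be sketched as follows. This is an illustrative reconstruction, not the exact baseline implementation; the array shapes and schedule values are hypothetical.

```python
import numpy as np

def inpaint_step(z_t, z_known, mask, alpha_t, sigma_t, rng):
    # Inpainting-style conditioning: at each reverse step, overwrite the
    # known nodes with a freshly noised copy of the condition so that they
    # match the current noise level; the remaining nodes stay as sampled.
    z_known_t = alpha_t * z_known + sigma_t * rng.standard_normal(z_known.shape)
    return np.where(mask[:, None], z_known_t, z_t)

# Toy demo: four nodes with 3-D coordinates, the first two fixed as fragments.
rng = np.random.default_rng(0)
z_t = np.zeros((4, 3))                       # current noisy sample
z_known = np.ones((4, 3))                    # condition; only masked rows are used
mask = np.array([True, True, False, False])
out = inpaint_step(z_t, z_known, mask, alpha_t=0.8, sigma_t=0.6, rng=rng)
```

Unlike guidance, this mechanism only constrains the masked nodes directly, which is consistent with the observation above that inpainting alone is insufficient for tasks such as linker design.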
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper presents a framework for geometric guidance of diffusion models to enable flexible generation for protein and small-molecule tasks. The method is based on self-guidance from geometric conditions. The authors propose a condition map to map geometric conditions to the latent condition space to guide diffusion. They demonstrate experiments on three main tasks: ligand-based, structure-based, and fragment-based drug design. Strengths: The paper addresses a relevant problem in drug design effectively. There is a need for a general framework for guiding diffusion models across the multitude of protein and small-molecule design tasks. This work proposes a novel solution by focusing on geometry-based conditioning and using a condition map to map different conditions to a common space. The authors provide extensive experimental evaluation on three major tasks, achieving comparable performance to, or outperforming, recent methods. Weaknesses: A key benefit of this approach is the generalizability of the method. My main concerns relate to this. Regarding training the model, can the authors provide additional details on the training? For example, are all three tasks trained together with different conditioning to guide the model? Or is one model trained separately for each task? What is the effect of training on multiple tasks together? For example, if one model is trained per task, then it is difficult to see the benefit of this approach, other than in the conditioning map. In the baseline experiments, I feel that some related work on conditioning diffusion models is missing. For example, how would a simple conditioning mechanism, such as the ones used in the image-text generation literature, fare against this approach? How would this method compare to other latent diffusion models, such as [1-2]? There seem to be a multitude of latent diffusion models for generation out there; I find it difficult to compare those with this work. 
The related work section also does not contrast this work with others in great detail. ### References [1] McPartlon, Matt, et al. "LATENTDOCK: Protein-Protein Docking with Latent Diffusion." [2] Watson, Joseph L., et al. "De novo design of protein structure and function with RFdiffusion." Nature 620.7976 (2023): 1089-1100. Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the relation with this work and latent diffusion? Could one similarly encode the geometric conditions using a latent encoder? - What is the computational complexity in training these models? - What other design tasks could this extend to? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and positive evaluation of our method! We provide detailed answers in the following.  &nbsp; > Additional details on training required for UniGuide’s generality - We highlight that **UniGuide does not require extra training** as it controls the generation of pretrained unconditional diffusion models during inference using guidance.  - Therefore, Uniguide can be directly applied to a suitable (unconditional) model, i.e. a model that matches the configuration space required for the task and is trained on a fitting dataset (e.g. ZINC). We discussed this in the limitations, but we will ensure that this aspect is clearly conveyed in the updated manuscript. - Drug design tasks have varying requirements. SBDD and FBDD need protein information, while LBDD and linker design do not. Therefore, the underlying model to which we apply UniGuide varies with the required configuration: We use the unconditional protein-ligand model from [1] for SBDD and FBDD and the molecular generative models ShapeMol or EDM for LBDD and linker design. - In summary, UniGuide’s benefit is additive: It does not require a specialised model (no extra training), and the same model can be applied to multiple tasks. > Comparison with simpler conditioning mechanisms?  - Controlling the generation of diffusion models using guidance in combination with additional classifiers/regressors is common for the image and text domains [7]. In contrast, UniGuide is a self-guiding [6] approach (Eq.12) that does not require additional models to control the generation, unlike [7]. Another key difference is our **focus on the molecular domain, enabling unified guidance while respecting the requirements of molecular structures: (1) geometric conditions and (2) equivariance**. - The condition map is the central element in achieving this. 
It maps from the space of source conditions S to the configuration space Z (the diffusion model’s output space), realising (1), and maintaining equivariant updates (2) when satisfying Theorem 4.1. - We compare to (i) conditionally trained diffusion models and (ii) an inpainting-inspired [1] technique for SBDD and FBDD and show that UniGuide is generally favourable, see e.g. Tab.2.  - For LBDD, UniGuide is the first method to successfully tackle this task purely at inference time using the surface condition map. > Q: Computational complexity?  - Inference:  - The computation cost associated with UniGuide results from computing the guidance gradients at inference time, see Eq.15.  We report runtime comparisons in App.E.2. UniGuide is slightly slower than the inpainting-inspired method [1]. At the same time, the DiffSBDD-cond model provides a baseline for pure sampling without guidance as the computational cost for controlled generation was already invested upfront for the conditional training (which is much higher). - Training: - Details on the unconditional training of EDM and ShapeMol[U] are provided in App.C and App.D.1, respectively. For the SBDD experiments, we use DiffSBDD checkpoints, as provided by the authors [1]; see App.E.2. > Q:Relation with latent diffusion models? Could one encode geometric conditions using a latent encoder? - UniGuide can readily be applied to guide a latent diffusion model, for example, GeoLDM [4] instead of EDM [5]. - In the special case when the source condition is a molecular structure ($S=Z$), one could use the encoder of the latent diffusion model to map the condition to the latent space and guide with UniGuide there. - In the general case of $S \neq Z$, a possible solution is to leverage the decoder D of the respective latent diffusion model and keep the condition map in the data space. That is, instead of computing $C(s, \hat z_t)$, one estimates the clean latent code and applies the decoder afterwards: $C(s, D(\hat z_t))$. 
> Q: Does UniGuide extend to other tasks? How does it compare to [2-3]? UniGuide also applies to the domains presented in [2-3]. We did not include them in the current version of the manuscript as they focus on docking and protein generation only, while we focus on drug discovery centred around small molecules. - However, we checked the mentioned papers and would like to share some ideas and results on how the presented tasks can be accomplished with UniGuide: - In the context of docking [2], UniGuide could be used to condition on information about the protein complex. Contact information, for example, could be leveraged by guiding to specific positions of the C-a atoms. Coarser positional information, cf. Fig.G.1 [2], could be done in a similar fashion to the shape-based generation presented for LBDD. - Compared with the settings from [3], **UniGuide can be used as an alternative to generate symmetric proteins**. We adopted this idea and chose the C8 symmetry for this PoC. In particular, we designed a pie-like volume element that resembles the required angles and guided the protein generation with the surface condition map from Eq.21. We present the results of this experiment in Fig.2 in the pdf. - Additionally, as prompted by reviewer D4nw, we expand the exploration of Density-based Drug Design in Fig. 1 of the PDF. &nbsp; We hope that our answers sufficiently clarify the raised questions. We are thankful for the reviewer’s suggestion to investigate the protein settings and are excited about our results for generating symmetric proteins.  &nbsp; [1] Schneuing et al. Structure-based Drug Design with Equivariant Diffusion Models [2] McPartlon et al. LATENTDOCK: Protein-Protein Docking with Latent Diffusion [3] Watson et al. De novo design of protein structures and function with RFdiffusion [4] Xu et al. Geometric latent diffusion models for 3d molecule generation [5] Hoogeboom et al. Equivariant diffusion for molecule generation in 3d [6] Song et al. 
Loss-guided diffusion models for plug-and-play controllable generation [7] Bansal et al. Universal guidance for diffusion models --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and improving the clarity of understanding the method. I believe that the rebuttal adds clarity to the paper. I appreciate the additional results in generating symmetric proteins. However, as other reviewers mentioned, the geometric guidance led to marginal improvements in performance. Further, the additional experiments on rebuttal Table 2 only demonstrate minimal performance improvements. As a result, I maintain my score. --- Reply to Comment 1.1.1: Title: Thank you and additional clarifications Comment: &nbsp; Thank you for your response and for recognising the additional results we provided as part of our rebuttal. We are glad that our explanations have enhanced the clarity of our paper. Regarding the mention of marginal performance improvements, we have addressed this in detail in our response to YA7u, offering a broader perspective on UniGuide's potential impact and objectives. We kindly refer you to that response and hope it provides further clarity. Specifically for the FBDD experiments, while we agree that Table 2 shows UniGuide's on-par performance with a task-specific model, we would like to emphasise the significantly improved VINA scores by 0.5, as demonstrated in Table 13 of the manuscript. &nbsp; We hope these clarifications highlight UniGuide's full potential. Thank you once again for your valuable feedback and the engaging discussion.
SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Accept (poster)
Summary: This paper analyzes the convergence of Scaffnew in the quadratic setup and achieves a linear speedup in the number of clients. It is not attained by the original paper of Scaffnew. Additionally, the authors find an application of the federated quadratic problem -- Federated Linear Stochastic Approximation and TD Learning. Strengths: This paper tries to analyze the convergence of Scaffnew in the quadratic setup and achieves a linear speedup in the number of clients. It is not attained by the original paper of Scaffnew. Weaknesses: The achieved speedup holds only for quadratic setup and the application of the quadratic loss function is quite limited. The experiments are a little simple. Technical Quality: 3 Clarity: 2 Questions for Authors: According to my understanding, this work saves more communication complexity compared with Scaffnew, while it is not claimed by the authors. This is observed in Table 1. Is it another advantage of this analysis? Can the theoretical analysis be extended to a general case? The comparison in Figure 1 is hard to catch. Can you plot them in one figure for comparison, at least using the same legend? And considering more values of H and N. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The achieved speedup holds only for quadratic setup and the application of the quadratic loss function is quite limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The achieved speedup holds only for quadratic setup and the application of the quadratic loss function is quite limited.** We emphasize that our analysis holds for the general setting of linear stochastic approximation, which encompasses minimization of quadratic functions, but also works for other settings where the system matrix is not symmetric. This is crucial, as the matrices involved in TD learning are not symmetric. We stress that our analysis is the first of its kind for federated LSA. **The experiments are a little simple.** The Garnet problems used in our experiments are common for federated TD, and the only problem that we are aware of in the federated TD literature. We stress that our paper is a theoretical paper, and the purpose of our experiments is thus only to illustrate our theory numerically. Nonetheless, we are happy to add more experiments on other problems if the reviewer has some specific problems to suggest. **According to my understanding, this work saves more communication complexity compared with Scaffnew, while it is not claimed by the authors. This is observed in Table 1. Is it another advantage of this analysis?** This is indeed the case, thank you for pointing it out. We will make sure to add a sentence about this in the discussion after Corollaries 5.2 and 5.3. **Can the theoretical analysis be extended to a general case?** Extending our analysis to other settings, e.g. to strongly-convex and smooth problems, is much more technical and is a very interesting direction for future research. **The comparison in Figure 1 is hard to catch. Can you plot them in one figure for comparison, at least using the same legend? And considering more values of H and N.** Thank you for pointing out this issue; we will update the plots in the camera ready version, making sure that the y-axis is the same each time.
We will also provide additional plots for $H \in \\{1, 10, 100, 1000, 10000\\}$ and $N \in \\{10, 100\\}$ in the appendix; we add these additional plots to the pdf attached to the global rebuttal. If you have any additional suggestions to improve the experiments in the camera ready, we will be happy to integrate them. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. Personally, I am positive about this work. However, we must recognize that there are some limitations in this paper, such as only applying to quadratic problems and the algorithm being directly the same as Scaffnew. Based on these, I will maintain my score!
Summary: This paper provides a non-asymptotic analysis of Federated Linear Stochastic Approximation. The authors provide (biased and unbiased) finite-time MSE bounds for general LSA and TD learning under the assumption that the noise is i.i.d. For Markovian noise, only an unbiased MSE bound is provided. Most importantly, these bounds express the error as a function of the step size used, the number of agents, and the number of local updates. Finally, a new algorithmic variant, namely SCAFFLSA, is introduced, reducing communication between agents while maintaining the linear speed-up of the algorithm. The contributions of the paper are illustrated through numerical studies. Strengths: The reviewer is not very familiar with the federated learning literature, but as far as they are aware, most of the ideas of the paper are new. The reviewer was unable to look at all proofs in detail, but has not identified any issues with respect to correctness. Weaknesses: The organization of the paper is not great. First, it is very hard to follow the main text given the amount of symbols and equations. Secondly, contributions related to i.i.d. and Markovian noise and TD learning seem to be scrambled together. I particularly think that splitting the results into different sections (one for i.i.d. noise, one for Markovian noise, and one tailored to TD learning) would improve the readability of the paper a lot. Also, there is a missing "c" superscript right above equation (1) in Algorithm 1. The paper should be revised for clarity and typos. Technical Quality: 3 Clarity: 2 Questions for Authors: - Is it possible for the authors to provide a table similar to Table 1 in order to illustrate and compare the different outcomes between the three different scenarios analyzed by the paper (i.e. the i.i.d., Markovian, and TD settings)?
- Do the authors believe that their analysis could be extended to federated learning algorithms with vanishing step size? - It is possible that I missed this in the main text, but are there any drawbacks to using SCAFFLSA instead of Federated LSA? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors identify and state which assumptions are necessary for each of the results to hold, so the limitations of this work are objectively identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Is it possible for the authors to provide a table similar to Table 1 in order to illustrate and compare the different outcomes between the three different scenarios analyzed by the paper (i.e. the i.i.d., Markovian, and TD settings)?** Thank you for the suggestion, which will greatly help to improve the readability of our paper. We will add the following table, instantiating our results for TD learning, in the camera ready version. In this table, we highlight the dependence on the discount factor $\gamma$ and on the smallest eigenvalue of $\Sigma_\varphi^c$ (see Assumption TD3). Regarding Markovian sampling, the complexity is the same as in the i.i.d. setting, up to a multiplicative factor $\max_c \tau_{mix}(c)$ (as defined in Assumption A2) in the number of local samples $H$. We will add a sentence in both tables' captions mentioning this.

| Algorithm | Communication complexity $T$ | Local updates $H$ | Sample complexity $TH$ |
|---|---|---|---|
| FedTD (Doan 2020) | $\mathcal{O}\left(\tfrac{N^2}{(1-\gamma)^{2} \nu^{2} \epsilon^2} \log \tfrac{1}{\epsilon}\right)$ | $1$ | $\mathcal{O}\left(\tfrac{N^2}{(1-\gamma)^{2} \nu^{2} \epsilon^2} \log \tfrac{1}{\epsilon}\right)$ |
| FedTD (Cor. 4.4) | $\mathcal{O}\left(\tfrac{1}{(1-\gamma)^{2} \nu^{2} \epsilon} \log\tfrac{1}{\epsilon}\right)$ | $\mathcal{O}\bigl(\tfrac{1}{N \epsilon}\bigr)$ | $\mathcal{O}\left(\tfrac{1}{N (1-\gamma)^2 \nu^2 \epsilon^2} \log\tfrac{1}{\epsilon}\right)$ |
| SCAFFTD (Cor. 5.3) | $\mathcal{O}\left(\tfrac{1}{(1-\gamma)^2 \nu^2} \log\tfrac{1}{\epsilon}\right)$ | $\mathcal{O}\bigl(\tfrac{1}{N \epsilon^2}\bigr)$ | $\mathcal{O}\left(\tfrac{1}{N (1-\gamma)^2 \nu^2 \epsilon^2} \log\tfrac{1}{\epsilon}\right)$ |

**The organization of the paper is not great. First, it is very hard to follow the main text given the amount of symbols and equations. Secondly, contributions related to i.i.d. and Markovian noise and TD learning seem to be scrambled together. I particularly think that splitting the results into different sections (one for i.i.d. noise, one for Markovian noise, and one tailored to TD learning) would improve the readability of the paper a lot.** We will make the separation between results on general federated LSA and federated TD clearer, by first stating the general results in the i.i.d. and Markovian settings (in two separate paragraphs), then moving to federated TD. We will also use the additional page in the camera ready to give more details regarding equations and mathematical symbols. If you have any additional suggestions regarding presentation, we are happy to hear them and to make the paper as clear as possible. **Do the authors believe that their analysis could be extended to federated learning algorithms with vanishing step size?** The analysis could indeed easily be extended to federated linear stochastic approximation with vanishing step sizes, using the same technique. We refrained from doing so due to space limitations. Regarding extension to more general "federated learning", we want to emphasize that our analysis already covers the special case of quadratic problems $\min_\theta \| X \theta - y \|^2$, which can be cast as $X^\top X \theta = X^\top y$ and fit our assumptions as long as $X^\top X$ is invertible. Still, we stress that our analysis is more general, since we do not assume $\bar{A}^c$ to be symmetric. Extending our analysis to other settings, e.g.
for strongly-convex and smooth problems, is much more technical and is a very interesting direction for future research. **It is possible that I missed this in the main text, but are there any drawbacks to using SCAFFLSA instead of Federated LSA?** This is the main claim of our paper: in terms of sample and communication complexity, SCAFFLSA is guaranteed to perform at least as well as FedLSA in all settings, and can significantly reduce communication cost in heterogeneous settings. We show this theoretically and illustrate it numerically; we refer the reviewer to the pdf attached to the general rebuttal for additional numerical evidence of this. The only drawback of SCAFFLSA lies in the fact that each agent is required to store an additional vector (the local control variate), which increases the memory cost for each agent. This drawback is shared with all control-variate methods (such as Scaffold, Scaffnew...), and is generally not a problem provided agents have access to sufficiently large memory. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses and for incorporating my suggestions in order to improve the readability of the paper. I have no further questions.
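The reduction of quadratic minimization to the LSA template mentioned in the rebuttal can be checked in a few lines of NumPy. This is an illustrative sketch with synthetic data, not code from the paper; `X` and `y` are made up, and the symmetric $X^\top X$ is only the special case (the paper's analysis also covers non-symmetric system matrices):

```python
import numpy as np

# Hypothetical least-squares instance: min_theta ||X theta - y||^2
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)

# Cast as the linear system solved by LSA: (X^T X) theta = X^T y
A_bar = X.T @ X   # system matrix (symmetric here; LSA does not require symmetry)
b_bar = X.T @ y

theta_lsa = np.linalg.solve(A_bar, b_bar)           # LSA fixed point
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)    # direct least-squares solution

assert np.allclose(theta_lsa, theta_ls)
```

The two solutions coincide, confirming that the least-squares minimizer is exactly the fixed point of the linear system the LSA iteration targets.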
Summary: This paper first analyzed the performance of the federated linear stochastic approximation, or FedLSA, algorithm. Second, it proposed a new algorithm called stochastic controlled averaging for federated LSA, or SCAFFLSA, and analyzed its performance. The key idea of SCAFFLSA is to use a control variate to mitigate the client drift. The performance of the proposed algorithm was verified using experiments. Strengths: 1. The paper proposed a new analytical framework to analyze the sample and communication complexity of FedLSA. Using this approach, the paper analyzed the performance of SCAFFLSA. The paper also extended the analytical framework to federated TD learning. 2. The paper proposed a new algorithm, SCAFFLSA, which mitigates the client drift using a control variate. The sample and communication complexity of SCAFFLSA is significantly better than that of FedLSA and Scaffnew. The performance analysis of SCAFFLSA was also extended to federated TD. Weaknesses: The experiments are relatively weak. It will be more beneficial if the authors could provide more applications of the proposed algorithm to other problems, especially for federated TD learning. Technical Quality: 3 Clarity: 3 Questions for Authors: What are the functions $\boldsymbol{A}^c()$ and $\boldsymbol{b}^c()$ like in practice? Could you please provide some examples? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There was no discussion on the limitations of the proposed algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The experiments are relatively weak. It will be more beneficial if the authors could provide more applications of the proposed algorithm to other problems, especially for federated TD learning.** The Garnet problems used in our experiments are common in the TD literature and serve the purpose of illustrating numerically our theoretical findings. Nonetheless, we are happy to add more experiments on other problems if the reviewer has some specific problems to suggest. **What are the functions $A^c()$ and $b^c()$ like in practice? Could you please provide some examples?** In TD learning, the system is given by $\bar{A}^c = \mathbb{E} [\phi(s)\{\phi(s)-\gamma \phi(s')\}^{\top}]$ and $\bar{b}^c = \mathbb{E}[\phi(s) r^c(s,a)]$, where $s \sim \mu^c, s' \sim P^{\pi,c}(\cdot|s)$. Thus, taking the sampling set $\mathsf{Z} = \mathcal{S} \times \mathcal{A} \times \mathcal{S}$, we can define $A^c(s, a, s') = \phi(s)\{\phi(s)-\gamma \phi(s')\}^{\top}$ and $b^c(s, a, s') = \phi(s) r^c(s,a)$. The points $s, a, s'$ are then either i.i.d. samples (following Assumption TD1) or a Markov chain (following Assumption TD2).
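The TD operators defined in this rebuttal can be written out concretely for a small synthetic MDP. Everything below is made up for illustration: `Phi` plays the role of the feature map $\phi$, the reward is simplified to depend only on the state, and the state distribution is taken uniform:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, d, gamma = 5, 3, 0.9
Phi = rng.normal(size=(n_states, d))   # feature map phi(s), one row per state
P = rng.dirichlet(np.ones(n_states), size=n_states)  # transition kernel P(s'|s)
r = rng.normal(size=n_states)          # expected reward (depends on s only, for simplicity)

def A_sample(s, s_next):
    # A^c(s, a, s') = phi(s) {phi(s) - gamma phi(s')}^T
    return np.outer(Phi[s], Phi[s] - gamma * Phi[s_next])

def b_sample(s):
    # b^c(s, a, s') = phi(s) r^c(s, a), with the reward simplified to r(s)
    return Phi[s] * r[s]

# Average the sample operators under a (here uniform) state distribution mu^c
mu = np.full(n_states, 1.0 / n_states)
A_bar = sum(mu[s] * P[s, sp] * A_sample(s, sp)
            for s in range(n_states) for sp in range(n_states))
b_bar = sum(mu[s] * b_sample(s) for s in range(n_states))

# A_bar is generally NOT symmetric -- the point stressed in the rebuttal --
# yet the TD(0) fixed point still solves the linear system A_bar theta = b_bar
theta = np.linalg.solve(A_bar, b_bar)
assert np.allclose(A_bar @ theta, b_bar)
```

Replacing the exact averages by i.i.d. or Markovian draws of `(s, a, s')` recovers the stochastic setting the paper analyzes.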
Rebuttal 1: Rebuttal: We thank the reviewers for their thorough feedback. We are pleased that reviewers deemed our contributions new ("new analytical framework", "new algorithm", reviewer WekN; "the ideas of the paper are new", reviewer bpaa), and that our analysis technique is the first to show that Scaffnew has linear speed-up ("and achieves a linear speedup in the number of clients. It is not attained by the original paper of Scaffnew", reviewer czvr). Reviewers WekN and czvr found our experiments a little too simple. Although we stress that this is a theoretical paper, we understand the concern and provide more experiments, with different numbers of agents $N$ and numbers of local steps $H$, on a federated Garnet problem. The results can be found in the attached file: when agents are homogeneous, both algorithms perform very similarly; when agents are heterogeneous, FedLSA gets more and more biased as the number of iterations grows, while SCAFFLSA still reaches the same precision even when the number of communications is divided by 1000. We remark that the experimental setup that we used is very common in federated temporal difference learning, and we are not aware of other problems used in the literature. If reviewers have suggestions for other problems, we will be happy to add them to the manuscript. We address the other, more specific concerns directly in the rebuttal to each review. Pdf: /pdf/fcf64cb915f4a4fea80942f5bbc1249bb789e16c.pdf
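The qualitative claim here (local steps bias FedLSA under heterogeneity, while a control-variate correction removes the bias) can be reproduced on a toy deterministic problem. This is a hedged sketch, not SCAFFLSA itself: it uses an idealized control variate computed from the exact averaged operator, whereas the actual algorithm estimates this correction online, and all problem data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, H, eta, T = 4, 2, 50, 0.1, 200

# Heterogeneous local linear systems A_c theta = b_c (deterministic toy)
A = [2 * np.eye(d) + rng.uniform(-0.3, 0.3, size=(d, d)) for _ in range(N)]
b = [rng.normal(size=d) for _ in range(N)]
A_bar, b_bar = np.mean(A, axis=0), np.mean(b, axis=0)
theta_star = np.linalg.solve(A_bar, b_bar)  # solution of the averaged system

def run(correct):
    theta = np.zeros(d)
    for _ in range(T):
        local = []
        for c in range(N):
            # Idealized control variate: client drift relative to the average,
            # evaluated at the current global iterate (SCAFFLSA estimates this)
            xi = (A[c] @ theta - b[c]) - (A_bar @ theta - b_bar) if correct else 0.0
            th = theta.copy()
            for _ in range(H):  # H local updates before communication
                th = th - eta * ((A[c] @ th - b[c]) - xi)
            local.append(th)
        theta = np.mean(local, axis=0)
    return theta

bias_fedlsa = np.linalg.norm(run(False) - theta_star)  # biased by local drift
bias_scaff = np.linalg.norm(run(True) - theta_star)    # bias removed
assert bias_scaff < bias_fedlsa
```

With many local steps, plain averaging converges toward the mean of the local solutions rather than the solution of the averaged system; the corrected dynamics keep the true solution as a fixed point.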
NeurIPS_2024_submissions_huggingface
2024
Empowering Active Learning for 3D Molecular Graphs with Geometric Graph Isomorphism
Accept (poster)
Summary: This paper proposes a principled AL paradigm to alleviate the annotation hurdle of 3D molecular graphs. It introduces a novel diversity component for 3D molecular graphs, which is provably at least as expressive as the GWL test. Furthermore, the authors develop an effective and efficient pipeline to compute uncertainties for 3D molecular graphs rooted in Bayesian inference. Strengths: + The authors develop an effective method for computing diversity among different 3D molecular graphs and introduce a Bayesian Geometric Graph Neural Network (BGGNN). The BGGNN takes a 3D graph as input and produces the desired properties along with uncertainty values. + The motivation of the paper is clear, and it includes well-defined theoretical proofs. Weaknesses: + The paper lacks some necessary explanations, such as those for "USR" and "GSL," making it difficult to understand. + Experiments were conducted only on the QM9 and MD17 datasets, so the effectiveness and efficiency on larger datasets remain unknown. Technical Quality: 3 Clarity: 2 Questions for Authors: + The authors formulate a criterion based on uncertainty and diversity. How do these two factors respectively impact performance? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Please see the weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1: lack of necessary explanations Thank you for your comments. For "USR", the primary concept involves using statistical moments to approximate the geometry of the molecules, capturing essential features of their shapes. Detailed explanations of this method are provided in Section 2.1.3 (lines 189 to 214). If you feel that additional background information is needed, please let us know the specific points where more details would be beneficial so that we can improve our explanation. We do not have the abbreviation "GSL" in the paper. If you mean "GWL," we present the high-level ideas of the GWL test from lines 159 to 164. The key message we want to convey is, as stated in the paper, "the GWL test imposes an upper bound on the expressive power of 3D GNNs." We also provide the details of this test in Appendix A.2. We believe that the high-level explanations highlighted in the main paper are sufficient to understand our claims. We also provide detailed explanations in Appendix. Therefore, we kindly request not to view this as a weakness of our work. However, we will definitely take your suggestions and further clarify the context. > W2: experiments on larger datasets While our experiments were conducted exclusively on the QM9 and MD17 datasets, this choice was made because our primary focus is on small molecules and to validate our method on **well-established** and **most commonly used** benchmarks. QM9 and MD17 are necessary benchmark datasets in the field of 3D molecular learning. Almost all representative works in the field use them for evaluation [1-4]. Other reliable benchmark datasets for 3D scientific data would go beyond molecules, like the Materials Project dataset for crystal materials and the Fold Dataset for proteins. These data contain special structures (e.g., periodic structures for crystals), and thus they are out of the scope of this work. 
We will explore the potential of our active learning methods on these macromolecules as future work. If you have concerns about complexity, we can vectorize the diversity computation for GPUs for faster computation. We have updated the results comparing selection times (please see Table 3 of the attached PDF) and included additional discussions in the global response (item 3) to reflect our new vectorized implementation. It can be seen that our sampling approach is highly effective and efficient. Note that the Coreset approach is an important diversity-based baseline, and our sampling approach is much more efficient (64.9 minutes vs 127 for Coreset). The Random approach does not involve any sophisticated sampling strategy, and our method is only slightly more expensive (64.9 minutes vs 53 for Random). Thus, we believe our sampling approach is readily applicable to larger datasets. > Q1: individual impact of uncertainty and diversity **Our ablation study (Sec. 4.4) demonstrates that our proposed diversity metric is highly effective on its own. Statistically, we outperform all other baselines using only this metric. Furthermore, incorporating the uncertainty metric enhances our method even more.** Additionally, **we have presented new ablation results in the global response** (Figure 2 and Table 2); please see the details there. We compared our full method (combining uncertainty and diversity) against methods using only uncertainty or only diversity. The results indicate that our diversity component is highly effective in selecting informative samples for model training.
Moreover, the performance improvement achieved by our method compared to using only uncertainty or only diversity-based selection is statistically significant ($p < 0.001$), as shown in Table 2 of the PDF file in the global rebuttal. Incorporating uncertainty further refines our method by integrating additional chemical contexts, such as atom types. **This clearly demonstrates that our approach is superior, as it not only distinguishes different geometries with precision but also accounts for uncertainties in chemical contexts.** Reference [1] K.S., et al. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. NIPS'17 [2] J.K., et al. Directional message passing for molecular graphs. ICLR'20 [3] Y. L., et al. Spherical message passing for 3D molecular graphs. ICLR'22 [4] J. G., et al. GemNet: Universal Directional Graph Neural Networks for Molecules. NeurIPS'21 --- Rebuttal Comment 1.1: Title: Thank you. Comment: Thank you for your thoughtful response. My concern has been addressed. I will increase the score from 4 to 5. --- Rebuttal 2: Title: The reviewer-author discussion stage is ending soon Comment: Dear Reviewer Abqb, Thank you for your comments! As **the reviewer-author discussion stage is ending in less than 30 hours**, we kindly remind you that we have provided a detailed rebuttal (both the individual rebuttal and global rebuttal) to address your concerns. We hope our clarifications and additional experiments (in the global rebuttal PDF file) have resolved the concerns to your satisfaction. If you believe your concerns have been adequately addressed, we kindly ask you to reconsider your scores. We are more than willing to discuss them in detail if you have additional questions. Sincerely, Authors --- Rebuttal 3: Comment: Dear Reviewer Abqb, Thank you for your response and for increasing your score. We are happy to know that your concerns have been addressed. Sincerely, Authors
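For readers unfamiliar with USR, the core idea invoked in this thread (statistical moments of distances to a few intrinsic reference points) can be sketched compactly. The snippet below implements the classic 12-dimensional USR signature, not the paper's full 48-dimensional descriptor with angular and cross-angular terms, and the coordinates are synthetic:

```python
import numpy as np

def usr_descriptor(coords):
    """Classic 12-dim USR shape signature: three moments of the distance
    distribution to four intrinsically defined reference points."""
    ctd = coords.mean(axis=0)                                      # centroid
    cst = coords[np.argmin(np.linalg.norm(coords - ctd, axis=1))]  # closest to ctd
    fct = coords[np.argmax(np.linalg.norm(coords - ctd, axis=1))]  # farthest from ctd
    ftf = coords[np.argmax(np.linalg.norm(coords - fct, axis=1))]  # farthest from fct
    feats = []
    for ref in (ctd, cst, fct, ftf):
        d = np.linalg.norm(coords - ref, axis=1)
        mu, sigma = d.mean(), d.std()
        skew = ((d - mu) ** 3).mean()
        feats += [mu, sigma, np.cbrt(skew)]
    return np.array(feats)

# Invariance check on random synthetic coordinates (hypothetical molecule):
# the signature is unchanged by rotation and translation, since it only
# uses distances to intrinsically chosen reference points.
rng = np.random.default_rng(3)
X = rng.normal(size=(12, 3))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
assert np.allclose(usr_descriptor(X), usr_descriptor(X @ R.T + 1.0), atol=1e-8)
```

Comparing such vectors (e.g. by Euclidean distance) gives a cheap, alignment-free diversity measure between conformations, which is the role the descriptor plays in the selection step.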
Summary: This paper introduces a principled active learning (AL) paradigm tailored for molecular learning. The proposed AL approach aims to alleviate the hurdle of human annotation by automatically querying labels for the most informative samples. The authors treat molecules as 3D molecular graphs and they introduce a set of new 3D graph isometries for 3D graph isomorphism analysis, which are shown to be as expressive as the Geometric Weisfeiler-Lehman (GWL) test. To ensure the selection of samples with maximal uncertainties, the authors design a Bayesian geometric graph neural network specifically for 3D molecular graphs. Active sampling is formulated as a quadratic programming (QP) problem integrating these components. Experimental results demonstrate the effectiveness of the proposed AL paradigm, highlighting the advantages of the diversity and uncertainty methods introduced. Strengths: 1. **Innovative Isometries for Graph Discrimination**: The introduction of a new set of 3D graph isometries is a significant contribution, demonstrating important efficacy in distinguishing between different graphs. This advance enhances the expressiveness and applicability of the proposed method. 2. **Theoretical Support and Clarity**: The methodology is explained with theoretical support 3. **Comprehensive Experimental Evaluation**: The experiments clearly highlight the method's improvements over existing baselines. The inclusion of statistical analysis further strengthens the validity of the results, providing evidence of the method's effectiveness. Weaknesses: 1. **Overly Strong Claims**: Some of the claims made in the paper are quite strong and require more careful justification. For detailed points of concern, please refer to the Questions section. Providing additional evidence or more nuanced discussions could strengthen these claims. 2. 
**Need for More Detailed Experimental Insights**: While the experimental section is comprehensive, it could benefit from additional insights and details. Specific experiments would be clearer and more informative with further explanation. Providing more context and interpretation of the results would enhance the reader's understanding and the overall impact of the findings. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. **Clarification Needed on Line 175**: The sentence starting in line 175 needs to be more carefully justified. Specifically, how can you assert that SchNet is upper bounded by distance isometry but not by triangular and cross-angular isometries? Please provide a more detailed explanation or supporting evidence for this claim. 2. **Generalization in Section 4.2**: It is not clear how the experiment in Section 4.2 demonstrates generalization. Could you provide more information and insight on the choice of graphs used in this section? Additionally, explaining the rationale behind the experimental setup and how it supports your conclusions would be helpful. 3. **Complete Results for Section 4.4**: Please report all the results from Section 4.4. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
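The uncertainty-plus-diversity selection principle discussed in this review can be illustrated with a simple greedy surrogate. This is not the paper's quadratic program: `select_batch`, `lam`, and all arrays below are hypothetical stand-ins, with `lam` loosely playing the role of the QP's trade-off weight:

```python
import numpy as np

def select_batch(features, uncertainty, budget, lam=1.0):
    """Greedy stand-in for a QP-based selection: pick points with high
    uncertainty that are also far (in descriptor space) from the set
    selected so far."""
    selected = [int(np.argmax(uncertainty))]   # seed with the most uncertain point
    while len(selected) < budget:
        # distance from every candidate to its nearest already-selected point
        d_min = np.min(
            np.linalg.norm(features[:, None] - features[selected][None], axis=-1),
            axis=1,
        )
        score = uncertainty + lam * d_min
        score[selected] = -np.inf              # never re-pick a selected point
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(4)
feats = rng.normal(size=(100, 12))   # e.g. USR-style shape descriptors
unc = rng.random(100)                # e.g. MC-dropout predictive variances
batch = select_batch(feats, unc, budget=10)
assert len(set(batch)) == 10
```

A continuous relaxation of the same objective, with pairwise diversities in the quadratic term and uncertainties in the linear term, is the kind of formulation the paper solves exactly.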
Rebuttal 1: Rebuttal: > W1: Overly Strong Claims Thank you for your comments. We will use more precise language and include additional discussions in the paper to clarify our claims. We will address your concerns in the Questions section. > W2: Need for More Detailed Experimental Insights In the current version of our paper, we have provided the necessary information for the overall experimental setup, as well as detailed analyses for each individual experiment. Even so, we agree with you that some rationales and high-level insights are missing, especially for readers outside this field to easily understand the paper. As we cannot revise the paper for now, we summarize the key rationales and insights in this rebuttal; **we will integrate them in our next version.**
- 4.1 Experiment setup: We choose different types of molecular graphs for various tasks to demonstrate the generalizability of our methods. We test our methods on molecular systems in both equilibrium and non-equilibrium states, covering various quantum properties and molecular dynamics tasks. This will be further detailed for your Question 2 below.
- 4.2 The performance comparison of our method with the baselines: This shows that our active sampling approach is significantly better than mainstream active learning methods. This further means that, given a budget to select more informative samples for wet-lab annotation (usually expensive for molecules), our method can select the most informative subset, thereby significantly improving the learning efficacy and efficiency.
- 4.3 An examination of the query budget study: This shows that for various annotation budgets, our method is consistently better than baselines, which indicates that the superior capability of our sampling approach is robust in practice.
- 4.4 An ablation study focusing separately on the diversity and uncertainty components: This indicates that both the proposed sampling approaches are novel and effective.
Combining them considers both the geometry and chemical contexts in 3D molecules, thus achieving the best active learning performance. - 4.5 An evaluation of the computation time: This shows our method is not only effective but also efficient. Our sampling approach has similar efficiency to Random sampling, and is much more efficient than the important diversity-based active learning baseline Coreset. This further indicates our approach is readily applicable to macromolecules like materials and proteins, which will be conducted as our future work. > Q1: Clarification Needed on Line 175 To clarify, we mean that SchNet has at most the same expressive power in distinguishing different geometric structures as our method. Since both methods rely solely on distance information, they can distinguish structures uniquely defined by their distances alone. Our method is deterministic, meaning that, with the same distance information, **a perfectly trained SchNet can be at most as powerful as our method**. In practice, it is almost impossible for a SchNet to be perfectly trained. > Q2: Generalization in Section 4.2 Research on 3D molecular learning is new, and there are only a few reliable benchmark datasets for 3D molecules (containing atom types as well as XYZ coordinates for all atoms for each molecule). We choose our datasets based on the following two criteria: QM9 consists of molecules in equilibrium, while MD17 contains several thermalized (i.e. non-equilibrium, slightly moving) molecular systems. Additionally, QM9 contains various quantum properties for molecules, like the important HOMO and LUMO orbitals. MD17 is for dynamic system simulation; thus, it contains labels for both the energy and atomic forces. In summary, we test our methods on molecular systems in both equilibrium and non-equilibrium states, covering various quantum properties and molecular dynamics tasks. QM9 and MD17 are essential benchmark datasets in the field of 3D molecular learning.
Almost all representative works in the field use them for evaluation [1-4]. We follow this conventional setup in our work. Besides QM9 and MD17, other reliable benchmark datasets for 3D scientific data would go beyond molecules, like the Materials Project dataset for crystal materials and the Fold Dataset for proteins. These data contain special structures (like periodic structures for crystals), and thus they are out of the scope of this work. We will explore the potential of our active learning methods on these data as future work. We will include these discussions in the next version of our paper. > Q3: Complete Results for Section 4.4 We have included additional results from the ablation study, which can be found in the global response (please refer to Figure 2 and Table 2 of the attached PDF file ). We've also included more discussions in item 2 of the global rebuttal. Basically, the results show that both the uncertainty-only and diversity-only approaches outperform all baselines. Additionally, we show both diversity and uncertainty components significantly contribute to the overall performance. We did not include other baselines in this plot to avoid overwhelming it, but if by "report all the results from Section 4.4", you mean including all the baselines in this plot as well, we will certainly do so in the next version of the paper. Reference [1] K.S., et al. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. NIPS'17 [2] J.K., et al. Directional message passing for molecular graphs. ICLR'20 [3] Y. L., et al. Spherical message passing for 3D molecular graphs. ICLR'22 [4] J. G., et al. GemNet: Universal Directional Graph Neural Networks for Molecules. NeurIPS'21 --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses. Including all the baselines from the experiments in Section 4.4 would enhance the completeness of the paper, and I strongly encourage adding them to the appendix. 
Given that my concerns have been satisfactorily addressed, I am willing to increase my score. --- Rebuttal 2: Comment: Dear Reviewer JSdU, Thank you so much for your constructive comments, which are very helpful in improving the clarity of our paper. Also thanks for acknowledging that we've addressed your concerns and raising your score. As we cannot revise the paper for now, we'll definitely include these new results either in the main paper or in the Appendix in the next version. Sincerely, Authors
Summary: The paper proposes an active learning scheme for molecular property prediction using uncertainty estimates from Dropout Monte Carlo and diversity metrics. In each active learning iteration, molecules are selected by maximizing uncertainty and diversity in the batch by solving a quadratic programming problem. The authors propose a 48-dimensional vector that encodes statistical moments of reference distances, angles and cross-angles for 4 reference points. Experiments on QM9 and MD17 datasets are performed to benchmark the effectiveness of the proposed method against several baselines. Strengths: 1. The proposed 48-dimensional vector that encodes statistical moments of reference distances, angles and cross-angles for 4 reference points is a promising and novel metric to quantify geometric diversity of 3D molecular structures. 2. The proposed quadratic programming formulation of the molecule selection step allows trading off the uncertainty and diversity of selected conformations. 3. The performed experiments demonstrate the usefulness of the proposed diversity metric, which clearly outperforms the random selection baseline and other, non-Bayesian-uncertainty-based active learning approaches. Weaknesses: 1. Many of the claims in the paper are too bold, neglecting several important related works in the field: - There are several active learning applications for 3D GNNs already in the literature, especially in the neural network potential literature: https://www.nature.com/articles/s41467-021-21376-0 , https://www.nature.com/articles/s41524-023-01104-6 - In the review of alternative 3D structural descriptors, the most common methods such as SOAP ( https://journals.aps.org/prb/abstract/10.1103/PhysRevB.87.184115 ), or ACE ( https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.014104 ) are neglected.
- Contrary to the authors' claims, the proposed Dropout Monte Carlo scheme for active learning in molecular property prediction with 3D GNNs is not novel, but has been proposed previously: https://www.nature.com/articles/s41524-024-01277-8 2. The considered baselines might not be the most relevant to the method and do not allow attributing the origin of the empirical performance increase, given that neither Coreset nor Learning loss uses a Bayesian uncertainty. BatchBALD would be a much more relevant baseline given that the same uncertainty estimates can be used. This would also allow investigating the effect of the quadratic programming formulation against the greedy clustering-based approach of BatchBALD. It would also allow investigating the benefit of the proposed diversity metric vector against, for example, classical descriptors such as SOAP. This would clarify the underlying source of the outperformance: Is it the Bayesian UQ estimate, the quadratic programming formulation, or the proposed novel structure descriptor vector? 3. The quadratic scaling of the computational cost of the method is concerning. The pool sizes considered in the experimental section are very small (15,000) in the field of 3D molecular property prediction, where datasets can easily span several million conformations (PubChemQC, ANI-2x, GEOM). Given that there is already a computational overhead visible over competing AL methods, this difference will only increase for realistically-sized datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: Does the 48-dimensional vector contain any information about the atom-species involved in distances, angles and cross angles or is this metric purely geometry-based? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss some limitations of the proposed approach, including the scaling to large datasets such as OC20.
However, the scaling to large atom sizes could be discussed more as requiring further research given that it is non-obvious that the statistical moments measured at 4 reference points are sufficient to distinguish protein-size geometries or large crystal structures. Additionally, it should be stated more clearly that the proven power of the proposed descriptors exceeding the GWL test only holds before compressing the descriptors into the 48-dimensional vector. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
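The Dropout Monte Carlo uncertainty estimation discussed in this review can be sketched as follows. This is an illustrative sketch only: `stochastic_predict` is a hypothetical toy stand-in for a 3D GNN with dropout kept active at inference time, not the paper's actual model.

```python
import numpy as np

def mc_dropout_uncertainty(stochastic_predict, X, n_passes=20):
    """Monte Carlo Dropout: run the stochastic model n_passes times with
    dropout still active and use the per-sample standard deviation of the
    predictions as the uncertainty score (higher = more informative)."""
    preds = np.stack([stochastic_predict(X) for _ in range(n_passes)])  # (T, N)
    return preds.std(axis=0)

# Toy stand-in model: predictions are noisy, with a different (hypothetical)
# noise scale per sample, mimicking dropout-induced variance.
rng = np.random.default_rng(0)
noise_scale = np.array([0.01, 0.5, 0.05, 1.0])
stochastic_predict = lambda X: X.sum(axis=1) + rng.normal(0.0, noise_scale)

X = rng.normal(size=(4, 3))
u = mc_dropout_uncertainty(stochastic_predict, X, n_passes=200)
# Samples whose predictions vary more across passes receive higher scores.
```

In the paper's setting, the scores `u` would feed into the batch-selection step alongside the diversity term.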
Rebuttal 1: Rebuttal: > W1: Many of the claims in the paper are too bold, neglecting several important related works in the field. Thank you for your comments. We will modify the language in the paper to clarify our intentions. The first two active learning (AL) papers do not specifically consider 3D geometry information and **differ from our paper/method**, which focuses on 3D molecules. The 3D geometry of molecules is crucial in determining molecular properties, but entails unique challenges for designing AL schemes, which motivates our work. We will make sure to include them in the literature review. Regarding the descriptors, our method produces a 48-dim vector **describing the geometry of a molecule**. Our method is equivariant to roto-translations and **invariant to permutations**, as the statistical quantities do not change under permutation. Meanwhile, SOAP produces vectors that **describe local atomic environments** using spherical harmonics and radial basis functions in an atom-wise manner. In general, they are equivariant to roto-translations but **not invariant to permutations**. ACE involves a systematic expansion that can describe various orders of interactions (e.g., two-body, three-body); however, it is **less of a conventional descriptor compared to our method and SOAP**. We will make sure to include these two papers in our discussions in the paper. Regarding the Monte Carlo scheme, this paper was published on May 3, 2024, which is only around 20 days before the submission deadline. We were not aware of this paper at the time. We would like to note that this is **concurrent work**, and we will discuss this paper in our next version. > W2: Bayesian baseline and SOAP descriptors We have included the results for BatchBALD in the global rebuttal (please refer to Figure 1 of the attached PDF). It can be clearly seen that **our method outperforms BatchBALD**.
Moreover, we have performed statistical tests to confirm that **our improvement compared to BatchBALD is significant** with a $p$-value << $0.001$ (please refer to Table 1 of the attached PDF). Regarding the SOAP descriptors, as we discussed in W1, SOAP produces vectors that describe local atomic environments, whereas we aim to measure diversity based on global features. There are ways to concatenate and compress the SOAP descriptors into lower-dimensional global descriptors; however, given the short time frame of the rebuttal period, we cannot provide any results on this. We will definitely consider working on this in the future. > W3: computational cost First, an active sampling strategy relies on the backbone 3D GNN models for downstream molecular learning. However, existing mainstream 3D GNNs have a complexity of $O(n^2)$ (DimeNet [1], SphereNet [2]) or even $O(n^3)$ (GemNet [3]). Hence, the proposed sampling approach with $O(n^2)$ does not increase the order of the overall complexity, and might not be the major computational overhead of the whole learning pipeline. As mentioned in Sec. 2.3 of the paper, we implemented a solution to execute the QP problem on the GPU (instead of the CPU) using the parallel implementation of the alternating direction method of multipliers, as detailed in [4]. Furthermore, we have vectorized our implementation and utilized the GPU to perform calculations to make the quadratic diversity matrix computation faster (find more details in item 3 of the global rebuttal). Overall, our efficiency is only slightly worse than that of basic Random sampling. For larger datasets, we can also address scalability by first sub-sampling from the unlabeled pool using only the uncertainty criterion (which is linear and thus scalable) to select the most uncertain samples. We then apply the proposed active sampling criterion only to the selected subset. This strategy has been used in previous AL research with promising results [5].
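The two-stage strategy described above might look roughly like the following sketch. The greedy loop is a cheap stand-in for the paper's quadratic-programming step, and all names here (`select_batch`, `alpha`, `prefilter`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_batch(uncertainty, descriptors, k, prefilter=1000, alpha=0.5):
    """Two-stage active selection (sketch): (1) keep the `prefilter` most
    uncertain samples, which costs linear time in the pool size; (2) greedily
    pick a batch of k samples that trades off uncertainty against diversity
    in descriptor space, a greedy surrogate for the QP formulation."""
    pool = np.argsort(uncertainty)[::-1][:prefilter]
    chosen = [pool[0]]  # seed with the most uncertain sample
    for _ in range(k - 1):
        best, best_score = None, -np.inf
        for i in pool:
            if i in chosen:
                continue
            # distance to the closest already-chosen sample = batch diversity
            div = min(np.linalg.norm(descriptors[i] - descriptors[j]) for j in chosen)
            score = alpha * uncertainty[i] + (1 - alpha) * div
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

# Tiny synthetic pool: index 1 is a near-duplicate of index 0.
uncertainty = np.array([1.0, 0.99, 0.5, 0.6])
descriptors = np.array([[0.0, 0.0], [0.0, 0.01], [10.0, 0.0], [5.0, 5.0]])
batch = select_batch(uncertainty, descriptors, k=2, prefilter=4)
# The near-duplicate (index 1) is skipped in favor of the distant point (index 2).
```

The diversity term prevents the batch from collapsing onto a cluster of similar highly uncertain molecules, which is the failure mode of uncertainty-only selection.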
We plan to explore this strategy on large-scale molecular datasets in the future. > Q1: 48-dim vector It is purely geometry-based. However, we want to emphasize that our framework contains two components for selecting important molecules: diversity and uncertainty. The diversity component focuses on the geometric perspective. The uncertainty part considers atom types, which are embedded into node features. By combining both uncertainty and diversity, **we take both the chemical contexts and geometric contexts into consideration**. > L1: scaling to large atom sizes with statistical moments and expressive power after compressing the descriptors Thank you for your comments; we will include such a discussion in the limitations section. USR is a well-known work for recognizing similar molecular shapes [6], where they use distance information only and employ the first three moments (mean, variance, skewness) to approximate the distribution of distances. The authors state in the paper (page 5, right column, 2nd paragraph) that "Such an approach is based on a theorem [7] from statistics, which proves that a distribution is completely determined by its moments." With this theorem in mind, we can reconstruct the distribution with high fidelity by increasing the number of translated moments as well as the order of the computed moments. We found that **distance information alone cannot describe a complete isometry space**, so we complete the isometry space by **considering angular isometries**, eventually resulting in a theoretically guaranteed solution for precise molecular diversity computation. With the proposed complete isometry space and sufficient statistical moments, our method can be at least as expressive as the GWL test. However, in practice, we compress it into a lower-dimensional descriptor for efficiency. This also addresses the scaling issue.
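To illustrate the moment idea, here is a rough USR-style sketch covering only the distance part; the paper's actual 48-dim vector additionally encodes angles and cross-angles and is not reproduced here. The four reference points follow the classic USR choice, which may differ from the paper's exact construction.

```python
import numpy as np

def moment_descriptor(coords, n_moments=4):
    """USR-style geometric descriptor (sketch): for 4 reference points --
    centroid (ctd), atom closest to ctd (cst), atom farthest from ctd (fct),
    atom farthest from fct (ftf) -- summarize the distribution of all atom
    distances by its mean plus central moments of order 2..n_moments.
    Invariant to rotations, translations, and atom permutations."""
    ctd = coords.mean(axis=0)
    d_ctd = np.linalg.norm(coords - ctd, axis=1)
    cst = coords[d_ctd.argmin()]
    fct = coords[d_ctd.argmax()]
    ftf = coords[np.linalg.norm(coords - fct, axis=1).argmax()]
    feats = []
    for ref in (ctd, cst, fct, ftf):
        d = np.linalg.norm(coords - ref, axis=1)
        feats.append(d.mean())
        feats.extend(((d - d.mean()) ** k).mean() for k in range(2, n_moments + 1))
    return np.array(feats)  # 4 refs x n_moments values each

# Invariance check: rotate, translate, and permute the atoms.
rng = np.random.default_rng(1)
coords = rng.normal(size=(12, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
moved = (coords @ Q.T + np.array([1.0, -2.0, 3.0]))[rng.permutation(12)]
```

Because the moments are computed over the unordered set of distances, the descriptor is identical (up to floating-point error) for the transformed copy.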
For protein-sized geometries or large crystal structures, one can choose to include more statistical moments to better distinguish different structures. As our work focuses on small molecules, we use four statistical moments. --- Rebuttal Comment 1.1: Comment: > The first two active learning (AL) papers do not specifically consider 3D geometry information These works do train 3D GNNs via active learning. They do not perform any diversity-based selection, but the models and datasets are inherently 3D. When the authors claim that their model uses the atomic species via the uncertainty metric, so do these works consider the 3D structure via the uncertainty metric. > SOAP descriptors It is unfortunate that there are no results on this given that the proposed 48-dim vector seems to be the biggest novelty of the paper, so a comparison of the clustering capabilities to standard approaches such as SOAP (acknowledging the fact that another step is required to obtain a global descriptor from the local descriptors) would be highly valuable. > existing mainstream 3D GNNs have the complexity of O(n^2) (Assuming n is the number of atoms in the system) Then these 3D GNN backbones have complexities of O(d^p n), i.e., they are linear in the number of atoms times the average number of neighbors d (which is a constant that depends on the cutoff and the density of the system, but independent of n). If the proposed method is indeed O(n^2), this might become problematic for larger systems in case its cost starts to dominate the cost of the backbone. > Q1: 48-dim vector: It is purely geometry-based This should be noted in the limitations given that two molecules with similar geometries, but different atom species, tend to behave very differently, even though the 48-dim vector treats them very similarly. --- Rebuttal 2: Title: References to Rebuttal Comment: References [1] J.K., et al. Directional message passing for molecular graphs. ICLR'20 [2] Y. L., et al.
Spherical message passing for 3D molecular graphs. ICLR'22 [3] J. G., et al. GemNet: Universal Directional Graph Neural Networks for Molecules. NeurIPS'21 [4] M.G., et al. GPU acceleration of ADMM for large-scale quadratic programming. Journal of Parallel and Distributed Computing [5] S. C., et al. Active batch selection via convex relaxations with guaranteed solution bounds. TPAMI [6] P. J B., et al. Ultrafast shape recognition to search compound databases for similar molecular shapes. Journal of Computational Chemistry [7] Hall P. A distribution is completely determined by its translated moments. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete. --- Rebuttal 3: Title: Reply to Reviewer UZqH (Part I) Comment: Thank you for your prompt reply. Your insights are invaluable to us. > These works do train 3D GNNs via active learning... First, we apologize for any confusion caused by our language. When we said that they do not specifically consider 3D geometry information, we meant that our active learning primarily pertains to 3D geometries, as in our diversity metric. The scopes of the mentioned two methods [1,2] are slightly different, but for the active selection part, both methods sample configurations in sub-regions about which the machine learning models have maximal **uncertainty**, label them with quantum mechanics (e.g., DFT) calculations, and add them to the training data. In performing similar tasks (e.g., in our MD17 Benzene experiments), their methods share some similarities with our uncertainty component. However, their methods specifically target non-equilibrium state modeling, while our method is more general, applicable to molecular systems in both equilibrium (e.g., QM9 experiments) and non-equilibrium states, covering various quantum properties and molecular dynamics simulation tasks. Moreover, our approach incorporates both uncertainty and diversity.
While 3D information is considered in active learning approaches such as [1, 2], it is not as central as in our diversity computations. Usually, the models (3D GNNs) play an important role in uncertainty quantification; since many node features other than geometries are fed as inputs to the model, uncertainty quantification may or may not prioritize 3D geometric information. Our diversity component is laser-focused on 3D geometric information. While pure uncertainty-based methods focus solely on the most uncertain samples, our method balances exploration and exploitation, leading to more comprehensive and reliable sample selection. Experiments in Sec. 4.4 of our paper and Fig. 2 in the global rebuttal PDF show that our method consistently outperforms the individual use of uncertainty for all active learning iterations. We understand your point of view that these methods also consider 3D information, although not as specifically as our method (especially the diversity component). We will make sure to include these references, discuss our focus in further detail, and modify our sentences to avoid bold claims, as you mentioned in your first reply. > It is unfortunate that there are no results on this given that the proposed 48-dim vector seems to be the biggest novelty of the paper, so a comparison to the clustering capabilities to standard approaches such as SOAP (acknowledging the fact that another step is required to obtain a global descriptor from the local descriptors) would be highly valuable. Unfortunately, we did not have enough time to complete the experiments during the rebuttal period. However, during this discussion period, we managed to perform the experiments using the SOAP descriptor and compare its clustering capabilities to our 48-dimensional vector. We first obtain the local SOAP descriptors and then aggregate them to obtain a global descriptor for a molecule.
Then, we apply the same setting as mentioned in section 2.3 of the paper to obtain the results. We herein present the results. The reported values are Mean Absolute Error (MAE) values. *Abbreviations:* D = Diversity Only, B = Uncertainty + Diversity

| Iteration | *mu*: SOAP D | *mu*: Our D | *mu*: Our B | *lumo*: SOAP D | *lumo*: Our D | *lumo*: Our B |
|-----------|--------------|-------------|-------------|----------------|----------------|----------------|
| 1 | 2154 | 1769 | 1741 | 1016 | 890 | 876 |
| 2 | 1901 | 1576 | 1550 | 921 | 805 | 799 |
| 3 | 1732 | 1440 | 1412 | 839 | 740 | 732 |
| 4 | 1701 | 1352 | 1315 | 797 | 692 | 677 |
| 5 | 1587 | 1225 | 1205 | 759 | 680 | 660 |
| 6 | 1414 | 1157 | 1121 | 699 | 629 | 615 |
| 7 | 1322 | 1092 | 1072 | 667 | 604 | 594 |

*p-values* in the table below show that our proposed 48-dimensional vector significantly improves the performance of the selection strategy over the SOAP descriptor.

| | *mu* | *lumo* |
|--|------|--------|
| *p-value* | 3.24 × 10^-6 | 2.29 × 10^-5 |

It can be clearly observed that our method (whether using diversity alone or both diversity and uncertainty) outperforms SOAP descriptors. This outperformance may be attributed to the local nature of SOAP descriptors; aggregation to obtain a global descriptor can be limiting, as it may not fully capture the intricate geometric variations. Meanwhile, another important aspect to consider is permutation invariance. Through aggregation, we can ensure such symmetry. However, it is challenging, and potentially future work, to determine how to effectively design a global descriptor based on SOAP that fulfills permutation invariance. We will include the results and discussion of both BatchBALD and SOAP in our paper. --- Rebuttal 4: Title: Reply to Reviewer UZqH (Part II) Comment: > $O(n^2)$ might become problematic for larger systems in case the cost starts to dominate the cost of the backbone.
We refer to the case in which we assume an unbounded cutoff distance and a body order of 2 (the most basic case, such as SchNet [2], which only uses relative distances), where the complexity of a GNN is $O(n^2)$. In general, $d < n$ with a practical cutoff distance, and the model complexity is $O(d^{k-1} n)$, where $k$ is the body order. Asymptotically, we agree that our method might dominate the computational complexity of GNNs, but in real implementations, the constant term in the complexity of GNNs might be very large compared to that of our method because of the large amount of computation in the network, including linear and nonlinear transformations, feature aggregation and passing, etc. Therefore, this domination will occur only for a reasonably large $n$. In this work, we focus on molecules, so the number of atoms ($n$) is usually small. For other scientific data like proteins, $n$ is large, but this is out of the scope of this work. However, we acknowledge that for large-scale datasets that contain several million conformations, the difference in complexity will increase, and we will address this in our discussion of limitations. But, most importantly, as an active learning approach, our primary focus is on minimizing the costs associated with performing annotation, rather than the computational costs tied to GNNs. These annotation costs can vary, including computational expenses like those incurred from DFT calculations ($O(n^3)$), costs from wet lab experiments, or even the time and expertise required from specialists. In many cases, we prefer to allocate more computational resources to active learning, as this investment can lead to more efficient and cost-effective labeling. > This should be noted in the limitations given that 2 molecules with similar geometries... By combining both uncertainty and diversity, we take into consideration both the chemical and geometric contexts.
If we only consider the 48-dimensional vector, then, as you mentioned, the scenario you described would be a potential limitation. We acknowledge this concern and will include it in the limitations section. [1] Hyperactive learning for data-driven interatomic potentials. npj Comput Mater [2] Quantum-chemical insights from deep tensor neural networks. Nature Communications --- Rebuttal Comment 4.1: Title: Reply to Reviewer UZqH (Part III) Comment: In Part I of the reply, we present the results of using SOAP descriptors to compute the diversity. We managed to conduct additional experiments using SOAP descriptors to compute the diversity matrix and supplemented it with the uncertainty component for a complete comparison with our method. We herein present the full results. The reported values are Mean Absolute Error (MAE). *Abbreviations:* D = Diversity Only, B = Uncertainty + Diversity

| Iteration | *mu*: SOAP D | *mu*: Our D | *mu*: SOAP B | *mu*: Our B | *lumo*: SOAP D | *lumo*: Our D | *lumo*: SOAP B | *lumo*: Our B |
|-----------|--------------|-------------|--------------|-------------|----------------|----------------|-----------------|----------------|
| 1 | 2154 | 1769 | 2057 | 1741 | 1016 | 890 | 1013 | 876 |
| 2 | 1901 | 1576 | 1877 | 1550 | 921 | 805 | 902 | 799 |
| 3 | 1732 | 1440 | 1721 | 1412 | 839 | 740 | 838 | 732 |
| 4 | 1701 | 1352 | 1539 | 1315 | 797 | 692 | 791 | 677 |
| 5 | 1587 | 1225 | 1456 | 1205 | 759 | 680 | 744 | 660 |
| 6 | 1414 | 1157 | 1345 | 1121 | 699 | 629 | 711 | 615 |
| 7 | 1322 | 1092 | 1280 | 1072 | 667 | 604 | 681 | 594 |

The *p-values* in the table below further demonstrate that our proposed method (using both diversity and uncertainty) significantly improves the performance of the selection strategy compared to the SOAP descriptor (using both SOAP diversity and uncertainty).

| | *mu* | *lumo* |
|--|------|--------|
| *p-value* | 4.20 × 10^-6 | 2.51 × 10^-6 |

It is evident that our method (using both diversity and uncertainty) outperforms the SOAP descriptors (using both SOAP diversity and uncertainty).
As mentioned earlier, global SOAP descriptors may have limited capabilities in capturing essential global geometric information, leading to suboptimal performance. We will include the full results of both BatchBALD and SOAP in our paper.
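The aggregation discussed in this thread, pooling per-atom local descriptors (such as SOAP rows) into a permutation-invariant global vector and then comparing molecules in that space, can be sketched as follows. Mean pooling is one simple choice for illustration, not necessarily the aggregation used in the experiments above.

```python
import numpy as np

def global_descriptor(local_descs):
    """Aggregate per-atom local descriptors, shape (n_atoms, d), into a single
    global vector by mean pooling. Averaging over atoms is permutation-
    invariant; richer poolings (std, max) could be concatenated."""
    return local_descs.mean(axis=0)

def diversity_matrix(global_descs):
    """Pairwise Euclidean distances between global descriptors, shape (m, m);
    this kind of matrix serves as the quadratic term when selecting a
    diverse batch of molecules."""
    diff = global_descs[:, None, :] - global_descs[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Demo: permuting the atoms does not change the global descriptor.
rng = np.random.default_rng(0)
local = rng.normal(size=(5, 8))          # 5 atoms, hypothetical 8-dim local descriptor
g1 = global_descriptor(local)
g2 = global_descriptor(local[rng.permutation(5)])
D = diversity_matrix(np.stack([g1, g1 + 1.0, g1 - 2.0]))
```

One limitation noted in the discussion applies directly here: mean pooling discards which atom contributed what, so intricate global geometric variation can be washed out.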
Summary: This paper describes a way of sampling 3D graphs for active learning on molecules, leveraging isometries. Strengths: I think this work is interesting, and potentially useful. Clearly some kind of active learning would have useful use cases. Also, the use of isometries could be relevant, depending on the exact nature of the sampling. Weaknesses: I think the main weakness of this work is that it needs stronger connections to the actual chemistry. * The actual elements are important. Two molecules can have similar geometries, yet very different electronic interactions that give rise to very different properties. * The paper would benefit from explaining the relationship between the different isometries and actual molecules and chemical properties. * The space of molecules is very large. The tests hold back part of known datasets as unlabeled molecules. But the most useful new labels may not be in the QM9 dataset; they may be in some other dataset, or even a novel molecule. Thus, a more interesting algorithm would be one that identified a molecule X as an important new molecule to add, allowing some search for it. Of course, this would raise a number of other important issues, such as generating the new molecule X, etc. * The paper does not test against 2-D methods. Not sure why. Perhaps they perform well. Technical Quality: 2 Clarity: 3 Questions for Authors: * How does sampling the graphs relate to sampling the interactions that are responsible for properties? * Is chirality considered? * Why do the graphs in Figure 3 not start at the same loss? Before any active learning has occurred, shouldn't that be the situation? * Are the other AL techniques also designed to be used in this iterative manner? If not, I think a better baseline test would be to use the same number of samples. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The paper does not adequately address limitations.
I would imagine one would be that it does not consider the actual elements of each atom. In other words, two molecules with similar geometries are considered similar, even though they could be very different chemically. What about periodic structures? Also, what if there is not a pre-existing source of valid, but unlabeled molecules? A common situation is that there are only a small number of molecules known at all. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >W1: about actual atoms Our framework contains two components for selecting important molecules: diversity and uncertainty. **We do consider the actual atoms in the uncertainty part, as the atom types are embedded into node features.** Our diversity component focuses on molecular geometries, **which essentially reflect the interactions among atoms in a molecule.** Thus, 3D shapes significantly influence molecular properties [1, 2]. As a concrete example, when predicting the important HOMO-LUMO gap of QM9 using MAE (lower is better), using data without 3D geometries produces *1.23-3.28* [3], while using data with 3D info significantly improves it to *0.03-0.06* [4]. The GWL test also focuses on geometries, and 3D GNNs aim to integrate geometric information into the learning process. **In summary, by combining both uncertainty and diversity, we take both the chemical contexts and geometric contexts into consideration.** We will emphasize this in the next version. > W2: the relationship between different isometries, actual molecules, and chemical properties **Full geometric information is important for computing chemical properties.** As we discussed in lines 172-179, different isometries are mainly related to the completeness of the geometric information and the universal approximation of 3D GNNs. For example, simply using the distances (Reference Distance Isometry) cannot distinguish two molecules with the same edge lengths but different angles, which is expected to produce suboptimal property predictions. Using angle information (Triangular Isometry), for example, improves the MAE of the HOMO-LUMO gap from *0.063 to 0.033* [4]. >W3: extend to different datasets and even generate new molecules A very insightful view! First, we can definitely extend QM9 to new datasets like MD17, as our active selection is applicable to all 3D molecules. We evaluate them separately because we follow the convention of the community and it simplifies evaluation.
Second, this study falls into molecular property prediction, and molecular generation is a different topic. We agree that extending our algorithm to identify novel molecules can be highly beneficial. **As you mentioned, this approach could raise other issues, such as the validation of the generated molecules, which is far beyond the scope of this work.** Therefore, we kindly request not to view this as a weakness of our work. >W4: does not test against 2-D methods The scope of this work is 3D molecular learning based on 3D GNNs. As we have pointed out previously, the 3D conformations of molecules determine their properties; thus, **3D GNNs outperform 2D GNNs by a large margin**. Formulating molecules as 3D graphs introduces new challenges for active selection criteria, which motivates our current work. > Q1: sampling the graphs and sampling the interactions Interactions of atoms are reflected by the 3D positions of all atoms in a graph, which is exactly the reason why we formulate molecules as 3D graphs. We design strategies to select the most informative molecules for efficient learning given a limited budget; that is, we select molecules with the most salient interatomic features. > Q2: chirality No. Our isometries are defined in the E(3) group instead of the SE(3) group, and the only difference is reflection, i.e., chirality in chemistry. First, AL selects more informative samples, and we want chiral molecules to have similar informativeness (diversity scores). Chirality can significantly affect molecular properties. If we treat a molecule as an important sample to learn from but don't treat its chiral counterparts as important, the chiral molecules are less likely to be seen in training. This can result in a GNN that is very biased due to the lack of diverse chiral data during training. Moreover, this choice also improves efficiency.
Some recent GNNs (like GemNet [5]) can recognize chiral molecules, but **their complexity is $O(n^3)$ while ours is $O(n^2)$.** > Q3: Figure 3 not starting at the same loss Since all the methods start with the same initial labeled training set, their starting MAE values on the test set will be the same. **We have therefore plotted the MAE values from the first iteration onwards, to focus on the comparative performance of the methods after they start selecting samples using AL.** > Q4: Iterative manner of AL **Active Learning (AL) techniques are designed to operate in an iterative manner (see mainstream AL papers mentioned in the Related Work section).** A budget $k$ is imposed on the number of unlabeled samples that can be queried for annotation in each iteration. **We followed this conventional setup.** > L1: actual elements of each atom Please refer to W1; **our method consists of two parts that take into account both geometric and atomic information**. Diversity in this paper focuses on geometries. Actually, we did conduct diversity experiments in which **a vector of atom types** was concatenated to the current geometric vector, but the performance remained the same. This is mainly because, considering chemical priors like interatomic forces, it is nearly impossible for two different stable molecules to have the same 3D shape. > L2: periodic structures **Periodic structures are only for crystal materials** (see famous methods CGCNN, MEGNet, ALIGNN, etc.). Our focus is on small molecules, so this is beyond our scope of interest. We will discuss this. > L3: unknown molecules This can be a very interesting perspective and a new research direction that is outside the scope of this work. We will make sure to include this in our limitations in the paper. Reference [1] G. T., et al. Principles governing amino acid composition. Journal of Molecular Biology [2] B. J., et al. Torsional Diffusion for Molecular Conformer Generation. NeurIPS'22 [3] Y. L., et al.
Spherical message passing for 3D molecular graphs. ICLR'22 [4] J. G., et al. Neural message passing for quantum chemistry. ICML'17 [5] J. G., et al. GemNet. NeurIPS'21 --- Rebuttal Comment 1.1: Title: Completely agree that 3D is important Comment: Thank you for the rebuttal. I completely agree that 3D geometry is crucial. But it is not clear to me why two molecular graphs with different elements are not considered diverse, but rather uncertain. To a chemist, those are two completely different molecules, even though the graph, with elements removed, is the same. In other words, the formalism for graphs, as far as I can tell, described on line 227, does not include the elements. After careful thought, I think this work has promise, but will stay with my rating. --- Rebuttal 2: Title: The discussion period is closing in 32 hours Comment: Dear Reviewer yNCZ, Thank you for your insightful comments, which have been very helpful in improving the clarity of our work. As the **reviewer-author discussion stage is ending in 32 hours**, we kindly remind you that we have provided a detailed response to address your concerns. We hope our clarifications have resolved the issues to your satisfaction. If you believe your concerns have been adequately addressed, we kindly ask you to reconsider your scores. However, if there are any aspects that still need further clarification, please let us know, and we are more than willing to discuss them in detail. Sincerely, Authors --- Rebuttal 3: Title: We are eagerly awaiting your response Comment: Dear Reviewer yNCZ, Thanks again for your insightful comments, which we believe will improve the clarity of our work. Regarding your concerns, we've made efforts to address them in detail in our rebuttal. As you are the only reviewer who has not responded to our initial rebuttal, we sincerely hope you can check it at your earliest convenience.
Since the discussion period is approaching its end, we hope you can let us know if we have addressed your critical points and reconsider the score if we have. Meanwhile, we welcome any additional questions and wish to discuss them in detail. Sincerely, Authors --- Rebuttal 4: Title: Follow up response to Reviewer yNCZ Comment: Thanks for your comment and discussion! While we highly respect your decision to maintain the rating, we still hope you can reconsider whether other aspects in your initial set of comments have been clarified and addressed. ---------------------------------- Regarding this new comment, we provide clarifications below: 1) **We do consider elements in $\mathbf{G}$ in line 227.** In $\mathbf{G}=(V, E, P)$, $V$ denotes the set of elements (atom types), **as mentioned in the same line**. Notably, this is a widely adopted setting in mainstream 3D GNN models for molecular learning [1,2,3,4]. 2) A typical active learning process includes two stages: the selection stage (selecting the most informative molecules to annotate, to reduce the annotation budget), and representation learning (like molecular property prediction and molecular dynamics simulation). **Elements are considered in both stages.** 3) The selection stage (Eq. (4) in the paper, lines 266-272) combines two strategies (diversity and uncertainty) to select the most informative molecules. **Elements are considered in the uncertainty part.** Following classical geometric descriptors, the diversity part focuses on geometry. **Hence, the overall selection stage (Eq. (4), where *r* denotes uncertainty scores for molecules) considers elements.** 4) For the example you gave - two molecules with the same graph but different elements - $\mathbf{G}$ in line 227 would be different, so their uncertainties would be very different, and through Eq. (4) the informativeness of these two molecules would be totally different.
5) Last but not least, for our diversity descriptor, we included a comparison with a well-known geometric descriptor in chemistry, the SOAP descriptor [5, 6, 7], which produces descriptors that describe local atomic environments using spherical harmonics and radial basis functions. **SOAP considers both geometric information and elements (species).** Our results reveal that our diversity component outperforms SOAP; our overall method (using both diversity and uncertainty as in Eq. (4)) also outperforms the SOAP descriptors (using both SOAP diversity and uncertainty). This outperformance can be attributed to the local nature of SOAP descriptors. Our work is among the first to consider a global 3D geometric descriptor for molecular learning, combined with an uncertainty component that considers chemical contexts (elements etc), enabling more accurate and robust quantification of the informativeness of an unseen (by the GNN) molecule. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; We herein present the results. The reported values are Mean Absolute Error (MAE, lower is better). 
*Abbreviations:* D = Diversity Only, B = Uncertainty + Diversity

| Iteration | *mu*: SOAP D | *mu*: Our D | *mu*: SOAP B | *mu*: Our B | *lumo*: SOAP D | *lumo*: Our D | *lumo*: SOAP B | *lumo*: Our B |
|---|---|---|---|---|---|---|---|---|
| 1 | 2154 | 1769 | 2057 | 1741 | 1016 | 890 | 1013 | 876 |
| 2 | 1901 | 1576 | 1877 | 1550 | 921 | 805 | 902 | 799 |
| 3 | 1732 | 1440 | 1721 | 1412 | 839 | 740 | 838 | 732 |
| 4 | 1701 | 1352 | 1539 | 1315 | 797 | 692 | 791 | 677 |
| 5 | 1587 | 1225 | 1456 | 1205 | 759 | 680 | 744 | 660 |
| 6 | 1414 | 1157 | 1345 | 1121 | 699 | 629 | 711 | 615 |
| 7 | 1322 | 1092 | 1280 | 1072 | 667 | 604 | 681 | 594 |

The *p-values* in the table below further demonstrate that our proposed method (using both diversity and uncertainty) significantly improves the performance of the selection strategy compared to the SOAP descriptor (using both SOAP diversity and uncertainty).

| | *mu* | *lumo* |
|---|---|---|
| *p-value* | 4.20 x 10^-6 | 2.51 x 10^-6 |

[1] K.S., et al. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. NIPS'17 [2] J. G., et al. Neural message passing for quantum chemistry. ICML'17 [3] Y. L., et al. Spherical message passing for 3D molecular graphs. ICLR'22 [4] J. G., et al. GemNet: Universal Directional Graph Neural Networks for Molecules. NeurIPS'21 [5] S.D., et al. Comparing molecules and solids across structural and alchemical space. Physical Chemistry Chemical Physics [6] M.J., et al. Machine learning hydrogen adsorption on nanoclusters through structural descriptors. npj Computational Materials [7] A. B., et al. On representing chemical environments. Physical Review B - Condensed Matter and Materials Physics ----------------- Hope these clarifications can address your concern. If you have further questions, don't hesitate to let us know, and we are always ready to discuss them in detail.
Rebuttal 1: Rebuttal: We thank the reviewers for their invaluable comments and suggestions. In this global response, we would like to clarify a few points and present new results based on the feedback received. ### Clarification To start our rebuttal, we would like to clarify a few points about our method. > Our model considers both geometry and chemical contexts to achieve the best performance Firstly, our method consists of two parts: uncertainty and diversity. The diversity component, based on our proposed geometric isometries, aims to focus on diverse molecules, thereby capturing a wide range of chemical properties. The diversity part is our major contribution. As we will discuss later, the results of studying the individual contributions of uncertainty and diversity to performance show that the diversity component is highly effective. Additionally, our method includes an uncertainty component, which aims to quantify and incorporate the uncertainty in our model's predictions. The uncertainty part also takes chemical contexts (like atom types) into account as node features, enhancing the model's ability to recognize and better learn uncertain chemical interactions. **Therefore, our method considers both geometry and chemical contexts to achieve the best performance.** ### Additional Results To this end, we would like to discuss some additional results we obtained, which can be found in the **attached PDF file**. References will be made to this attached file rather than the paper unless otherwise specified. > 1. Comparison with a new baseline, BatchBALD (as suggested by Reviewer UZqH) In Figure 1, we included the results for a new baseline, BatchBALD[1], adapted for our regression setting following [2]. Similar to the uncertainty component in our method, BatchBALD is based on Bayesian uncertainty, and it serves as an important baseline in this venue. 
As observed in the results, **our method consistently achieves a lower MAE (the lower the better) at any given AL iteration compared to BatchBALD**. For our analysis of other baselines, please refer to the paper. In addition, as a complement to the result above, we updated Table 1 with p-values obtained using paired t-tests between our method and all baselines, now including BatchBALD. **The performance improvement achieved by our method is statistically significant** ($p \ll 0.001$) for all $4$ properties tested against BatchBALD. Given that BatchBALD is also a Bayesian-based uncertainty approach, this comparison further highlights the robustness and effectiveness of our method, especially the combination of uncertainty and diversity. It is important to emphasize that our method has two components: uncertainty and diversity. The diversity component, which is our main contribution, plays a more fundamental role in improving performance. This is because capturing 3D atomic geometries is crucial for accurately modeling and understanding molecular interactions. By incorporating diversity, our method ensures a more comprehensive selection of informative samples, leading to better overall results. > 2. On the individual impact of diversity and uncertainty components (as suggested by Reviewer Abqb and Reviewer JSdU) In Figure 2, we present a study on the individual impact of the diversity and uncertainty components. It is clear that our proposed method outperforms the individual use of diversity or uncertainty alone. The key to this outperformance lies in our method’s dual focus on both geometric importance and chemical contexts. Moreover, the diversity component alone shows strong performance; it is only slightly less effective than our full method because it **captures the geometries of molecules, which are fundamental in distinguishing different molecules with different properties**.
On top of this, we also conducted statistical tests, which confirm that **the improvement of our method is significant** compared to only diversity or only uncertainty ($p \ll 0.001$) in Table 2. > 3. Vectorized implementation of our methods and updated timing results (in response to the concerns of Reviewer UZqH and Reviewer Abqb) In Table 3 of the attached PDF, we provide updated results on the average time taken by our method. Compared with the original results in the paper (see *Table 2* in the paper), the computational time is reduced from 15 minutes slower than random to 11 minutes slower than random. This improvement in calculation time is due to **vectorizing our implementation and utilizing the GPU for the diversity matrix computation**. Vectorization accelerates computations by processing multiple data points in parallel, a task at which GPUs excel. For instance, deep neural networks also benefit from parallelization, achieving faster performance on GPUs compared to CPUs. A simple example is calculating the inner product of two vectors. Without vectorization, we compute the products of corresponding elements one by one and then sum them sequentially. With vectorization, the GPU performs the element-wise multiplications in parallel and then computes the sum in a single, efficient reduction step. It can be seen that our sampling approach is highly effective and efficient. Note that the Coreset approach is an important diversity-based baseline, and our sampling approach is much more efficient than Coreset (64.9 vs. 127 minutes). The Random approach does not involve any sophisticated sampling strategy, and our method is only slightly more expensive than Random (64.9 vs. 53 minutes). Thus, we believe our sampling approach is readily applicable to larger datasets. [1] Andreas Kirsch, et al. BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning, NIPS [2] D.H., et al.
A framework and benchmark for deep batch active learning for regression, JMLR Pdf: /pdf/b5b793ee6ce960225a0418a0c5aa4802aa37b0d8.pdf
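The inner-product illustration from the rebuttal above can be made concrete with a small sketch (illustrative only; the function names are ours, and this is not the authors' actual diversity-matrix code). The loop multiplies and accumulates one element at a time, while the vectorized version performs the element-wise multiplies in parallel followed by a single reduction:

```python
import numpy as np

def inner_product_loop(a, b):
    # scalar version: multiply and accumulate one element at a time
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def inner_product_vectorized(a, b):
    # vectorized version: element-wise multiply in parallel, then one reduction
    return float(np.dot(a, b))
```

On a GPU (e.g. with a GPU array library in place of NumPy), the vectorized form is the one that benefits from parallel hardware, which is the source of the reported speed-up.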
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Classification Done Right for Vision-Language Pre-Training
Accept (poster)
Summary: The paper proposes a simple alternative to CLIP-style pretraining that doesn't require a text encoder and can be done using only a text tokenizer. The goal is to provide a simpler yet more efficient alternative to vision-language model (VLM) pretraining, which is known to be very expensive. Additionally, the authors simplify the VLM pretraining setup into a classification task, which is both novel and very intuitive. The authors demonstrate the efficiency of their method through experiments on both classification and vision-and-language downstream tasks. Additionally, they present a comprehensive set of ablations to dissect the performance gains of their proposed approach. While there are some limitations to their proposed method, the simplification offers valuable benefits for traditional classification and vision-language tasks. Strengths: 1. The paper is well-written and easy to follow, with clear motivation and well-planned experiments. 2. The proposed method simplifies vision-language pretraining by eliminating the need for a text encoder, using a text tokenizer instead. This approach demonstrates comparable or better performance on both classification and vision-language tasks. It effectively transforms the contrastive pretraining task into an open vocabulary classification task, where text tokens provide supervision. This insight is powerful as it can help the community develop strong vision encoders more efficiently. 3. The experiments and ablations presented are thorough. The authors systematically analyze every component of their method, including data, model scale, and the tokenizer. 4. The paper also highlights that for simple classification tasks or vision question-answering tasks, CLIP-like pretraining is not necessary for learning strong, robust vision encoders. This is a novel insight of the paper. Weaknesses: 1. 
For classification tasks, the authors only present ImageNet-1K classification accuracy and do not perform experiments on other popular few-shot and zero-shot classification benchmarks. Including these benchmarks would have strengthened the paper. 2. Vision-language models like CLIP are also used in text-to-image generation systems. The authors do not address this aspect or present any experiments showcasing the effects of using their encoder in such systems. 3. This work shares similarities with research that cleans pretraining captions before performing CLIP-style pretraining [1]. However, the authors do not compare their method against such approaches. I believe this comparison is important, as synthetic captions are increasingly used for CLIP-style pretraining. 4. There is a small reference error in line 235. [1] Fan, Lijie, et al. "Improving clip training with language rewrites." Advances in Neural Information Processing Systems 36 (2024). Technical Quality: 4 Clarity: 4 Questions for Authors: 1. In Table 7, the authors show that removing stop-words has no effect and state in line 279 that "keeping stopwords could help the vision encoder." This statement is not explained and seems counter-intuitive. Could the authors elaborate on this observation? 2. The authors claim they can use IDF weights in an online manner. Could the authors explain how they achieve this? I don't understand how this process can be done online. 3. The authors demonstrate that the performance of SuperClass is as good as or better than traditional CLIP-style pretraining on ImageNet-1K classification. However, it would be interesting if the authors could specify which classes benefit more from their method compared to traditional CLIP. Such an analysis could help understand where the gains stem from. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: 1. 
The authors acknowledge that their approach completely ignores word order, which can significantly impact the encoder's ability to understand tasks requiring spatial reasoning. 2. Additionally, this approach may not be suitable for text-to-image systems, as these models need to understand word order and infer relationships from text to generate accurate images. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1**. Other popular few-shot and zero-shot classification benchmarks. **A1**. Thanks for the advice. We have added more evaluation benchmarks, including 10-shot classification on ImageNet-1k, Pets, and Cars, and zero-shot classification on 8 more datasets. **10-shot classification** We follow the setting of Cappa [1]. For each dataset and model, we run 3 times and report the mean results and variance in the following table. Our method surpasses CLIP on IN-1K and Pets by clear margins, with improvements of 1.6 and 2.4 points, while being comparable with CLIP on Cars (92.6 vs. 92.7).

| Case | IN-1K | Pets | Cars |
|---|---|---|---|
| MAE | 44.0(0.1) | 57.7(0.2) | 32.5(0.1) |
| Dinov2 | _77.0(0.1)_ | _94.2(0.1)_ | 76.8(0.2) |
| Cappa* | 70.6(0.2) | 92.6(0.5) | 92.2(0.2) |
| CLIP | 75.6(0.1) | 92.2(0.6) | **92.7(0.3)** |
| Superclass | **77.2(0.1)** | **94.6(0.1)** | _92.6(0.1)_ |

**zero-shot classification** Following LiT [2], we train a text encoder on Datacomp-1B while keeping the image encoder locked. In this way, we are able to perform zero-shot classification. We tested the model on the following 10 datasets. Superclass beats openCLIP on seven of the datasets. It is worth noting that SuperClass uses a ViT large model with patch size 16, while the baseline method adopts a patch size of 14. This makes the superiority of SuperClass over openCLIP even clearer. 
| Case | Model | Data | Seen samples | IN-1K | imagenet-v2 | imagenet-r | imagenet-a | imagenet-sketch | GTSRB | Rendered SST2 | ObjectNet | SUN397 | Country211 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| openCLIP | ViT-L/14 | Datacomp-1B | 12.8B | 79.2 | 72.1 | 90.8 | 69.6 | 68.0 | 58.5 | 61.0 | 74.3 | 74.3 | 31.6 |
| SuperClass | ViT-L/16 | Datacomp-1B | 12.8B | 79.7 | 72.4 | 91.6 | 68.8 | 70.6 | 58.5 | 61.6 | 73.9 | 73.8 | 32.3 |

[1] Tschannen, Michael, et al. "Image captioners are scalable vision learners too." NeurIPS 2024. [2] Zhai, Xiaohua, et al. "LiT: Zero-shot transfer with locked-image text tuning." CVPR 2022. > **W2**. The application in text-to-image generation systems. **A2**. Thanks for the advice. Text-to-image generation is indeed a particularly important application of vision-language models. However, due to the limited time available for rebuttal, we were unable to train for this task. We will explore this issue further in the future. > **W3**. The effect of synthetic captions **A3**. Thanks for the nice advice. We use the code provided by LaCLIP [1] https://github.com/LijieFan/LaCLIP for investigation. We compare SuperClass against CLIP with ViT-B/16 following the setting of LaCLIP. The numbers of CLIP and LaCLIP are directly borrowed from the paper. As shown by the results in the following table, Superclass can also benefit from rewritten captions, and its gain over CLIP is even larger with rewritten captions (+1.1 → +1.6 in zero-shot and +1.2 → +1.9 in linear probing). The possible reason is that the rewritten captions transform the sentence structure but keep the major objects and subjects intact, indirectly enhancing the weight of those visual-related words. 
| Case | Data | epoch | Zero-shot | Linear Probing |
|---|---|---|---|---|
| CLIP | CC3M | 25 | 15.8 | 54.5 |
| SuperClass | CC3M | 25 | 16.9 (+1.1) | 55.7 (+1.2) |
| LaCLIP | CC3M recap | 25 | 21.5 | 56.5 |
| SuperClass | CC3M recap | 25 | 23.1 (+1.6) | 58.4 (+1.9) |

[1] Fan, Lijie, et al. "Improving clip training with language rewrites." NeurIPS 2024. --- Rebuttal 2: Title: Rebuttal by Authors Part 2 Comment: > **Q1**. The effect of removing stop-words In Table 7, removing stop-words leads to a decrease in classification accuracy; for example, the linear probing accuracy decreases from 76.0 to 75.7 (-0.3), and the zero-shot accuracy drops from 61.7 to 61.0 (-0.7). Therefore, we conclude that stop-words could help the vision encoder. This might be explained by the fact that stop-words also carry some useful visual information. For example, 'she', 'her', and 'him' can indicate a person's gender; 'on', 'off', 'above', 'below', 'up', and 'down' can indicate operational status or position; '@' is likely to indicate an email address. The stop words and punctuation are listed below. 
`['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've", "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn', "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn', "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", 'won', "won't", 'wouldn', "wouldn't"]` `{'{', '+', '(', '$', '}', '!', '%', '\\', '<', ';', '|', ']', '"', "'", ',', '&', '=', ')', '_', '^', '~', '#', '@', '.', '[', '*', '?', ':', '/', '>', '-'}` > **Q2**. Online IDF **A2**. To implement online IDF, we set up two global variables to track the number of seen samples (N) and the occurrence count of each subword. As training proceeds, these values will gradually approach the true word frequency of the entire dataset. 
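The two running statistics described above could be sketched as follows (a minimal illustration with hypothetical names, not the authors' implementation; the exact IDF formula in the paper may differ, and here we use a common smoothed variant):

```python
import math
from collections import Counter

class OnlineIDF:
    """Running IDF estimate, updated as tokenized captions stream in."""

    def __init__(self):
        self.n_docs = 0            # number of captions seen so far (N)
        self.doc_freq = Counter()  # number of captions containing each subword

    def update(self, batch):
        # batch: iterable of token lists, one per caption
        for tokens in batch:
            self.n_docs += 1
            for tok in set(tokens):  # count each subword once per caption
                self.doc_freq[tok] += 1

    def idf(self, token):
        # smoothed IDF; converges to the full-dataset value as N grows
        return math.log((1 + self.n_docs) / (1 + self.doc_freq[token])) + 1
```

Because the counters only grow, the running estimate approaches the offline statistics as training proceeds.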
The advantage of online IDF is its ease of use, allowing the code to be directly transferred to new datasets. We also conducted experiments to compare recognition accuracy using online IDF and offline IDF. The zero-shot accuracy with online IDF drops by about 0.3, and the linear probing accuracy drops by about 0.1. > **Q3**. The analysis of classification results **A3**. We evaluated the performance of CLIP and Superclass on the 1000 classes in ImageNet. Among them, Superclass performs better in 502 classes, while CLIP outperforms in 270 classes, and they perform equally well in the remaining classes. We also extracted the top 10 classes with the largest performance differences and display them in the tables below. We do not find any obvious patterns, but it is possible that Superclass and CLIP can complement each other. This is an interesting direction for future research.

| Class name | muzzle | music speaker | yellow garden spider | cardboard box / carton | Carolina anole | black-footed ferret | monitor | missile | cricket insect | garter snake |
|---|---|---|---|---|---|---|---|---|---|---|
| CLIP | 0.72 | 0.68 | 0.82 | 0.66 | 0.66 | 0.62 | 0.36 | 0.44 | 0.58 | 0.82 |
| SuperClass | 0.88 | 0.84 | 0.96 | 0.80 | 0.81 | 0.76 | 0.50 | 0.58 | 0.70 | 0.94 |

| Class name | European polecat | oxygen mask | parallel bars | ox | promontory | English Setter | stethoscope | split-rail fence | rotisserie | cassette player |
|---|---|---|---|---|---|---|---|---|---|---|
| CLIP | 0.64 | 0.78 | 0.82 | 0.68 | 0.62 | 0.92 | 0.94 | 0.88 | 0.98 | 0.48 |
| SuperClass | 0.44 | 0.64 | 0.70 | 0.58 | 0.52 | 0.82 | 0.84 | 0.78 | 0.88 | 0.38 |

--- Rebuttal 3: Comment: At its core, this paper introduces a new way to pre-train open-world 
Vision-Language foundational models; it is simple and elegant and seems to have maintained the zero-shot capabilities that CLIP offers. But I believe that using such a model might not be suited for generation, where one may want a text encoder to infer the relationships between words, i.e. "A dog over a cat" vs "A cat over a dog". Additionally, I believe that such a technique can also be used to clean the pretraining data of VLMs, a problem that LAION has faced in the past [1]. [1] https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse Thanks for the comprehensive answers to my questions, I will increase the score for soundness in my review. --- Rebuttal Comment 3.1: Comment: Thank you for your insightful feedback and for recognizing the strengths of our paper. We appreciate your thoughtful comments on the potential applications and limitations of our model. Your suggestions provide valuable directions for future work. Thank you again for your comprehensive review and for increasing the score. Your support and recognition are greatly appreciated.
Summary: This paper proposes a simple classification-based vision-language pretraining method. The proposed SuperClass approach directly uses an off-the-shelf subword-level tokenizer to obtain the classification labels from raw text, without requiring any preprocessing. Then, the vision encoder is trained by optimizing the multi-label softmax loss, with Inverse Document Frequency (IDF) as the weight of each label. SuperClass achieves promising performance on classification and various vision-language tasks. Ablation experiments are conducted to validate the impact of different design choices. Strengths: 1. The proposed method in this paper reveals the potential of classification in vision-language pretraining, which provides empirical evidence for further research. 2. With a simple framework that requires no data preprocessing, SuperClass enables large-scale training on paired image-text data. Compared to previous classification-based pretraining methods, the proposed approach demonstrates greater practical applicability. 3. SuperClass is training-efficient by removing the need for a text encoder, and the extensive experimental results demonstrate the effectiveness and scalability of the proposed approach. Weaknesses: 1. The robustness of the proposed method to different model types remains unclear. All the experiments in this paper use ViT as the vision encoder, and there is no evidence to demonstrate the effectiveness of SuperClass on other encoder architectures, such as ResNet. 2. The ablation experiment on different classification losses is only conducted on classification tasks, using a ViT-B/16 backbone and 512M seen data samples. It is important to demonstrate the robustness of the softmax loss on a broader range of vision-language tasks, beyond just classification. Furthermore, the impact of the choice of loss function on the scalability of the proposed method is not discussed. 
Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discuss some of the limitations of their work in Section 5. But I would like them to consider some of my concerns above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
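The IDF-weighted multi-label softmax objective summarized in the review above can be sketched in a few lines (a hedged, pure-Python illustration for a single image; the paper's exact normalization and weighting may differ):

```python
import math

def superclass_style_loss(logits, label_tokens, idf):
    """Illustrative IDF-weighted multi-label softmax loss.

    logits:       per-subword classification scores (length = vocab size)
    label_tokens: subword ids produced by tokenizing the caption
    idf:          per-subword inverse-document-frequency weights
    """
    # log-softmax over the whole subword vocabulary (numerically stable)
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    # IDF-weighted average negative log-likelihood of the caption's subwords
    toks = set(label_tokens)
    total_w = sum(idf[t] for t in toks)
    return -sum(idf[t] * log_probs[t] for t in toks) / total_w
```

Each caption subword acts as a positive class, so no text encoder is needed, and rarer (higher-IDF) subwords contribute more to the loss.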
Rebuttal 1: Rebuttal: > **W1**: The robustness of the proposed method to different model types remains unclear. **A1.** Thank you for the valuable advice. To evaluate the robustness of our proposed method across different model types, we selected two representative convolution-based networks: ResNet50 and ConvNext-Tiny. We compare SuperClass against CLIP for ImageNet zero-shot (LiT) and linear probing classification, as shown in the table below. All experiments were conducted with a batch size of 16k and 1.28B seen samples.

| Method | Backbone | Zero-shot | Linear Probing |
|---|---|---|---|
| CLIP | RN-50 | 60.73 | 70.28 |
| Superclass | RN-50 | 62.81 | 71.92 |
| CLIP | ConvNext-tiny | 59.94 | 70.35 |
| Superclass | ConvNext-tiny | 62.85 | 72.33 |

We observe that SuperClass surpasses CLIP in all settings by a clear margin, ranging from 1.64 to 2.91 points. These results demonstrate that the superiority of SuperClass over CLIP is robust across different model architectures. > **W2**. The robustness of softmax loss on a broader range of vision-language tasks **A2**. Thank you for the advice. We further compare the softmax loss against other losses on various vision-language tasks. We selected three model sizes (ViT-S/16, ViT-B/16, and ViT-L/16) and trained them on Datacomp-1b with 512 million seen samples. The detailed results are presented in the following table. We observe that softmax is the best-performing loss function across different vision-language tasks and model sizes. 
| Loss | dataset | Seen Samples | Backbone | IN 0-shot | Linear Prob | VQAv2 (val) | GQA | VizWiz (val) | TextVQA (val) | SciQA (Img) | MMBench (en/cn) | MME (P/C) | POPE | MMMU | SEEDBench |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Twoway | Datacomp-1b | 512M | vit-S/16 | 50.6 | 66.3 | 65.44 | 55.65 | 43.5 | 15.13 | 66.44 | 51.71/40.72 | 1278.42/335.35 | 79.81 | 34.41 | 50.04 |
| BCE | Datacomp-1b | 512M | vit-S/16 | 47.7 | 65.8 | 64.49 | 55.63 | 48.15 | 13.27 | 65.99 | 50.08/39.86 | 1282.30/323.92 | 79.87 | 35.3 | 50.12 |
| ASL | Datacomp-1b | 512M | vit-S/16 | 48.3 | 66 | 64.58 | 55.49 | 44.96 | 13.44 | 65.59 | 50.77/40.80 | 1290.92/315.15 | 80.46 | 35.1 | 49.98 |
| Softmax | Datacomp-1b | 512M | vit-S/16 | 51.7 | 67 | 65.6 | 56.03 | 43.29 | 16.61 | 64.65 | 49.91/41.92 | 1315.08/306.78 | 81.46 | 35.8 | 51.03 |
| Twoway | Datacomp-1b | 512M | vit-B/16 | 59.7 | 74.8 | 68.05 | 57.79 | 47.35 | 22.09 | 66.63 | 54.55/46.04 | 1350.92/335.71 | 82.51 | 36.8 | 53.02 |
| Margin | Datacomp-1b | 512M | vit-B/16 | 58.1 | 73.5 | 67.08 | 56.67 | 44.17 | 17.87 | 64.9 | 53.43/42.95 | 1341.75/312.14 | 81.71 | 34.7 | 52.32 |
| BCE | Datacomp-1b | 512M | vit-B/16 | 58.5 | 73.6 | 67.35 | 56.91 | 49.2 | 18.34 | 64.95 | 52.49/43.47 | 1327.92/332.14 | 81.89 | 36.7 | 52.76 |
| ASL | Datacomp-1b | 512M | vit-B/16 | 58.7 | 73.8 | 67.59 | 57.02 | 47.02 | 19.01 | 65.44 | 54.72/46.39 | 1345.54/357.5 | 81.93 | 35.3 | 52.75 |
| Softmax | Datacomp-1b | 512M | vit-B/16 | 60.8 | 75.6 | 68.08 | 57.27 | 47.6 | 23.73 | 65.44 | 54.55/46.13 | 1310.65/335.00 | 82.58 | 34.6 | 52.53 |
| Twoway | Datacomp-1b | 512M | vit-L/16 | 66.7 | 78.3 | 70.2 | 58.36 | 46.25 | 27.21 | 64.35 | 57.30/48.62 | 1365.98/315.00 | 82.97 | 36 | 53.87 |
| BCE | Datacomp-1b | 512M | vit-L/16 | 64.9 | 77.2 | 69.56 | 57.93 | 48.62 | 24.9 | 64.3 | 57.64/47.33 | 1316.55/355.71 | 83.17 | 35.1 | 54.01 |
| ASL | Datacomp-1b | 512M | vit-L/16 | 66.1 | 77.6 | 69.71 | 58.43 | 51.16 | 25.49 | 65.49 | 58.41/49.48 | 1389.29/330.35 | 83.51 | 34.6 | 54.07 |
| Softmax | Datacomp-1b | 512M | vit-L/16 | 68.3 | 80.1 | 70.27 | 58.03 | 48.98 | 28.87 | 67.03 | 57.30/49.14 | 1334.33/366.42 | 83.36 | 35.4 | 54.41 |

> **W3**. The scaling properties of loss function **A3**. Due to the limited time for rebuttal, we only ablated the effect of different losses with different model sizes. As the model size increases, softmax consistently achieves the best accuracy while also demonstrating equal or better scalability compared to other losses.

| Loss | ViT-S/16 ZS | ViT-S/16 LP | ViT-B/16 ZS | ViT-B/16 LP | ViT-L/16 ZS | ViT-L/16 LP |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Twoway | 50.6 | 66.3 | 59.7 | 74.8 | 66.7 | 78.3 |
| Margin | 47.4 | 65.6 | 58.1 | 73.5 | 64.6 | 76.9 |
| BCE | 47.7 | 65.8 | 58.5 | 73.6 | 64.9 | 77.2 |
| ASL | 48.3 | 66.0 | 58.7 | 73.8 | 66.1 | 77.6 |
| Softmax | 51.7 | 67.0 | 60.8 | 75.6 | 68.3 | 80.1 |

--- Rebuttal Comment 1.1: Comment: Thank you for the response. I will maintain my score at this stage. --- Reply to Comment 1.1.1: Comment: Thank you for your quick response and for taking the time to review our paper. We appreciate your feedback and are grateful for your recognition of our work.
Summary: This paper explores a new direction for pretraining vision backbones on large-scale image-text pairs, learning visual representations that are suitable for various downstream tasks. More specifically, this work proposes a classification-based objective function as an effective alternative to CLIP's standard cross-modality similarity-based contrastive loss. The proposed model SuperClass uses an image encoder to map images to token probabilities, with a head size equal to CLIP's tokenizer vocabulary. The texts are converted into labels using a weighted tokenizer. Both the CLIP baseline and SuperClass are pretrained on the Datacomp image-text pairs dataset, and evaluation results across various vision and vision-language benchmarks are reported. The proposed method shows greater efficiency due to being text-encoder free and also performs favorably against previous approaches. Extensive ablation studies are performed to justify the design choices made in the paper. Strengths: **Strengths:** 1) The idea of pretraining large-scale vision models using classification objectives is very motivating, as it provides advantages over image-text contrastive loss such as compute efficiency, disentangling the role of text embeddings, etc. 2) The proposed method is simple and effective. 3) The experimental results are favorable for SuperClass against its direct baseline CLIP. 4) The choice of the components in the proposed method, such as IDF-based weighting, loss function, and tokenization, has been validated in the paper via ablation studies. 5) The paper is easy to read and understand. Weaknesses: **Weaknesses** 1) In my understanding, one of the weaknesses of this work is the lack of comparisons with related works. For example, the only method with which SuperClass is compared is CLIP, which is only a baseline. I believe there should be comparisons with other related SOTA works. 
2) The proposed approach might not be capable of doing multi-label classification or zero-shot classification as compared to other competitors such as RAM, CLIP, etc. Also, CLIP has out-of-the-box additional features such as prompt ensembling, zero-shot segmentation [1], etc., which might not be present in SuperClass. This raises the question of how it can comprehensively show effectiveness over CLIP and other vision-language models.
3) It is not clear how the proposed method learns suitable representations when using subwords as labels that do not correspond to any visual concept. I believe a bit of analysis is missing in the manuscript.
4) The results for zero-shot and linear probing in Tab 1 and Tab 2 of the main paper are different for the same model. It is unclear why the results are different for the same model.

[1] Extract Free Dense Labels from CLIP (ECCV 2022)

Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weaknesses section for the questions. I highly recommend the authors submit a rebuttal response. I will be happy to reconsider my final scores based on the response from the authors.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have adequately addressed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1**: Comparisons with other related SOTA works

**A1**. Thanks for the advice. We have added more SOTA methods for comparison, including self-supervised learning methods (MoCov3, DINOv1&v2, MAE, BEiT, CAE) and weakly-supervised methods (CatLIP, Cappa). Additionally, we have evaluated more downstream tasks and datasets, such as semantic segmentation on ADE20K and instance segmentation on COCO. Please refer to our responses to Reviewer 1 (aefc) and Reviewer 4 (bBvT) for more results.

**Linear probing on ImageNet-1K**

| Method | Pre-training Data | ViT-Base | ViT-Base | ViT-Large | ViT-Large |
|:---:|:---:|:---:|:---:|:---:|:---:|
| | | #Seen Samples | Top-1 (%) | #Seen Samples | Top-1 (%) |
| contrastive or clustering based | | | | | |
| MoCov3 | IN1K | 400M | 76.7 | 400M | 77.6 |
| DINO | IN1K | 512M | 78.2 | - | - |
| iBoT | IN22K | 400M | 79.5 | 256M | 81.0 |
| DINOv2 | LVD-142M | 1.28B | - | 1.92B | 84.5 |
| reconstruction based | | | | | |
| BEiT | D250M+IN22K | 1B | 56.7 | 1B | 73.5 |
| SimMIM | IN1K | 1B | 56.7 | - | - |
| CAE | D250M | 2B | 70.4 | 2B | 78.1 |
| MAE | IN1K | 2B | 68.0 | 2B | 75.8 |
| language-image pretraining based | | | | | |
| CLIP | WIT400M | 12.8B | 78.5 | 12.8B | 82.7 |
| Cappa | WebLI-1B | - | - | 9B | 83.0 |
| OpenCLIP | Datacomp-1B | - | - | 12.8B | 83.9 |
| Superclass | Datacomp-1B | 12.8B | 80.2 | 12.8B | 85.0 |
| Superclass | Datacomp-1B | 1.28B | 78.7 | 1.28B | 82.6 |
| Superclass | Datacomp-1B | 512M | 75.6 | 512M | 80.5 |

**Instance segmentation and semantic segmentation**

Results of instance segmentation are obtained by using Mask R-CNN on COCO with an input resolution of 1024×1024. Semantic segmentation results are obtained by using UperNet on ADE20K with an input resolution of 512×512.
| Method | #Seen Samples | Semantic Segmentation mIoU | Instance Segmentation APmask |
|:---:|:---:|:---:|:---:|
| Supervised | - | 49.9 | 43.9 |
| MoCov3 | 400M | 49.1 | 44.0 |
| BEiT | 400M | 53.3 | 47.1 |
| CAE | 2B | 54.7 | 47.6 |
| MAE | 2B | 53.6 | 47.2 |
| CLIP | 12.8B | 57.9 | 48.3 |
| Superclass | 1.28B | 56.2 | 48.1 |
| Superclass | 12.8B | 58.4 | 49.0 |

**VLM downstream tasks**

| Model | Size | VQAv2 | GQA | VizWiz | TextVQA | SciQA | MME | MMBench | PoPE | MMMU |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| openCLIP | ViT-Large | 74.54 | 61.03 | 50.47 | 38.16 | 67.33 | 1434.86/268.92 | 60.73 | 85.52 | 35.9 |
| MAE | ViT-Large | 63.5 | 54.58 | 50.22 | 11.55 | 64.75 | 1175.04/343.92 | 42.44 | 80.69 | 35.7 |
| DINOv2 | ViT-Large | 73.32 | 61.87 | 49.15 | 14.08 | 64.9 | 1335.61/296.78 | 57.9 | 86.24 | 35.3 |
| Superclass | ViT-Large | 75.24 | 60.96 | 54.33 | 39.2 | 66.09 | 1371.32/321.78 | 63.14 | 85.69 | 36 |

openCLIP and Superclass are trained with 12.8B seen samples.

> **W2**: The proposed approach might not be capable of doing multi-label classification or zero-shot classification ...how it can comprehensively show effectiveness over CLIP and other vision-language models.

**A2**. Our paper focuses on the issue of vision encoder pretraining, specifically whether the pre-trained model can perform better in vision-language models. Therefore, native zero-shot capability may not be the primary property we consider. However, we can use Locked-image Tuning (LiT) [69] to equip our pre-trained vision backbone with zero-shot classification and retrieval capabilities. After LiT, we could also do zero-shot segmentation using MaskCLIP [a]. The experimental results show that SuperClass achieves much better performance on the PASCAL Context and COCO Stuff datasets.
| Method | Backbone | #Seen sample | PASCAL Context | COCO Stuff |
|---|---|---|---|---|
| CLIP | ViT-B/16 | 1.28B | 16.2 | 8.7 |
| Superclass | ViT-B/16 | 1.28B | 20.2 (+4.0) | 13.2 (+4.5) |

[a] Extract Free Dense Labels from CLIP (ECCV 2022)

> **W3**: using the subwords as labels does not correspond to any visual concept

**A3**. Firstly, it is important to note that a single object class can be mapped to one or multiple subwords. Here is a detailed explanation of how our method works in both scenarios:

1. Single Subword Mapping:
- When a class is mapped to a single subword, the process is similar to traditional classification tasks. The model learns to associate the subword with the corresponding visual patterns.
- This is akin to standard classification where each class is represented by a unique label, and the model learns the association between the label (subword) and the visual features.
2. Multiple Subword Mapping:
- When a class is mapped to multiple subwords, our optimization objective is to maximize the co-occurrence probability of subwords that belong to the same class.
- This means the model learns to associate multiple subwords with the corresponding visual patterns, effectively capturing the relationship between subwords and the visual concept they represent.

> **W4**: The results for the zero-shot and linear probing in Tab 1 and Tab 2 of the main paper are different for the same model.

**A4**. This is because the models are trained with different numbers of seen samples. In Table 1 and Table 2, we use ViT-L/16 as the backbone. The models in Table 1 are trained with 12.8B seen samples. In Table 2, we study the effect of different numbers of seen samples; the models are trained with 128M, 512M, and 1.28B seen samples, respectively.

---

Rebuttal Comment 1.1: Title: Thank you for providing the rebuttal response

Comment: Dear Authors, Thank you for providing the rebuttal response.
While the proposed SuperClass method is not natively zero-shot, it shows various other flexibilities as demonstrated in the rebuttal. More importantly, this is a new pretraining style for vision backbones rather than a variant of CLIP. Honestly, the current pitch of the paper puts too much emphasis on CLIP (instead of advocating for vision backbone pretraining), and that is why many of the concerns from the reviewers are about comparisons with CLIP-like models. In the end, I believe this paper would allow the research community to improve vision backbone pretraining and offers good insights. Therefore, I will increase my score to weak accept, and hope that the paper is accepted. For the final version, I strongly recommend the authors include all the rebuttal discussions in the main paper, and also put more emphasis on vision backbone pretraining, so that no ambiguities are left.

---

Reply to Comment 1.1.1: Comment: Thank you for your valuable reviews and feedback. We appreciate your insights on emphasizing the new pretraining style for vision backbones rather than comparing it predominantly with CLIP. We will ensure that the final version of the paper includes all the rebuttal discussions and places greater emphasis on vision backbone pretraining to eliminate any ambiguities. Thank you again for your constructive comments and for increasing your score to a weak accept. We hope that our paper will contribute positively to the research community.
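As a toy illustration of the subword-label construction discussed in A3 of the rebuttal above, together with the IDF-based weighting the reviewers mention as a design choice, here is a sketch with an invented mini-vocabulary; in practice a real BPE tokenizer with a CLIP-sized vocabulary would be used, and the exact weighting formula is our assumption:

```python
import math
from collections import Counter

# Invented mini-vocabulary standing in for a real subword tokenizer's
# vocabulary (IDs and entries are made up for illustration).
VOCAB = {"a": 0, "photo": 1, "of": 2, "dog": 3}

def tokenize(caption):
    # Stand-in for a real subword tokenizer: whitespace words only,
    # unknown words are silently dropped.
    return [VOCAB[w] for w in caption.lower().split() if w in VOCAB]

def idf_weights(token_id_docs, vocab_size):
    # Inverse document frequency per subword, so ubiquitous pieces
    # ('a', 'of') contribute less as classification labels.
    n_docs = len(token_id_docs)
    df = Counter(t for doc in token_id_docs for t in set(doc))
    return [math.log(n_docs / (1 + df.get(t, 0))) for t in range(vocab_size)]

captions = ["a photo of a dog", "a dog", "a photo of running"]
docs = [tokenize(c) for c in captions]   # per-caption multi-label targets
weights = idf_weights(docs, len(VOCAB))  # per-subword label weights
```

The point of the weighting is visible even at this scale: the function word "a" appears in every caption and receives a lower weight than the content word "dog".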
Summary: This paper introduces a multi-label classification pre-training style for visual image encoder pre-training.

Strengths: The proposed method is straightforward.

Weaknesses:
- The zero-shot capacity of such a multi-label pre-trained model is not well demonstrated.
- The paper lacks a comprehensive comparison with weakly-supervised or unsupervised visual encoder methods. Only comparing with CLIP is not enough.
- Some arguments about previous so-called "bag-of-word classification" pre-trained methods may not be correct.

Technical Quality: 2
Clarity: 3

Questions for Authors:
- 1. The greatest advantage of CLIP is its zero-shot capacity across various downstream tasks. The zero-shot ability of SuperClass is not well demonstrated on downstream tasks like zero-shot text-image retrieval, zero-shot text-video retrieval, and zero-shot STR. The reviewer thinks the authors should include an analysis of the zero-shot ability as in the original CLIP paper.
- 2. The baselines should not be CLIP-style pre-trained vision-language models. The proposed method aims to pre-train a vision transformer. The baselines should be other weakly-supervised methods or self-supervised methods like MAE, etc. The authors should compare these methods in terms of training efficiency and transfer learning performance. These methods also show good transfer learning ability; for example, MAE has good transfer learning performance on COCO, while it is only pre-trained on ImageNet-1K.
- 3. Some arguments about previous so-called "bag-of-word classification" pre-trained methods may not be correct. In the introduction, the authors claim "However, these methods fail to gain popularity from the community, as most of the experiments are conducted on a small scale and there is no evidence showing their scalability to data size and model size in comparison to CLIP". Nevertheless, CatLIP [1] already conducts experiments with CLIP-H on DataComp-1.3B.
- 3.1.
The authors should also compare with these "bag-of-word classification" pre-trained methods.
- 4. The authors say "All experiments were carried out on an A100 GPU equipped with 80GB of memory." Could the authors provide training times for the experiments in Table 1? Besides, are the experiments in Table 2 also trained on an A100 80G? If so, could the authors provide the training time?

[1] https://arxiv.org/pdf/2404.15653

Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1** Zero-shot capacity.

**A1.** We thank the reviewer for triggering the discussion on the zero-shot abilities of CLIP and our model. Honestly, our model does not come with trivial zero-shot image-text retrieval usage. However, we can enable this behavior by leveraging LiT [a], which learns a text encoder while keeping the image encoder fixed. As shown in the following table, we test the zero-shot ability of our model on 10 datasets. Superclass beats openCLIP on seven of the datasets. It is worth noting that SuperClass uses a ViT-Large model with patch size 16, while the baseline method adopts a patch size of 14. This makes the superiority of SuperClass over openCLIP even clearer.

| Case | Model | Data | Seen samples | IN-1K | imagenet-v2 | imagenet-r | imagenet-a | imagenet-sketch | GTSRB | Rendered SST2 | ObjectNet | SUN397 | Country211 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| openCLIP | ViT-L/14 | Datacomp-1B | 12.8B | 79.2 | 72.1 | 90.8 | 69.6 | 68.0 | 58.5 | 61.0 | 74.3 | 74.3 | 31.6 |
| SuperClass | ViT-L/16 | Datacomp-1B | 12.8B | 79.7 | 72.4 | 91.6 | 68.8 | 70.6 | 58.5 | 61.6 | 73.9 | 73.8 | 32.3 |

Moreover, we'd like to emphasize that there is another increasingly important application of the CLIP model. Specifically, CLIP is predominantly the default vision encoder in existing vision-language models (e.g., LLaVA, BLIP). We show that when combined with a large language model, SuperClass is able to substantially improve performance over CLIP. Please refer to Table 2 and Table 3 in the paper for details.

[a] Zhai, Xiaohua, et al. "LiT: Zero-shot transfer with locked-image text tuning." CVPR. 2022.

> **Q2** The baseline should be other weakly-supervised methods or self-supervised methods like MAE, etc.

**A2.** Thanks for the suggestion.
We have added more SOTA methods for comparison, including self-supervised methods (MoCov3, DINOv1&v2, MAE, BEiT, CAE) and weakly-supervised methods (CLIP, Cappa). Additionally, we have evaluated more downstream tasks and datasets, such as semantic segmentation on ADE20K and instance segmentation on COCO. Finally, following the LLaVA setup, we combine frozen CLIP models, self-supervised models, and SuperClass models with the pre-trained Vicuna-V1.5-7B and perform downstream tasks. The experimental results demonstrate that the proposed method achieves better performance than self-supervised ViT pre-training methods, like DINOv2, and weakly-supervised methods, like CLIP.

**Linear probing on ImageNet-1K**

| Method | Pre-training Data | ViT-Base | ViT-Base | ViT-Large | ViT-Large |
|:---:|:---:|:---:|:---:|:---:|:---:|
| | | #Seen Samples | Top-1 (%) | #Seen Samples | Top-1 (%) |
| contrastive or clustering based | | | | | |
| MoCov3 | IN1K | 400M | 76.7 | 400M | 77.6 |
| DINO | IN1K | 512M | 78.2 | - | - |
| iBoT | IN22K | 400M | 79.5 | 256M | 81.0 |
| DINOv2 | LVD-142M | 1.28B | - | 1.92B | 84.5 |
| reconstruction based | | | | | |
| BEiT | D250M+IN22K | 1B | 56.7 | 1B | 73.5 |
| SimMIM | IN1K | 1B | 56.7 | - | - |
| CAE | D250M | 2B | 70.4 | 2B | 78.1 |
| MAE | IN1K | 2B | 68.0 | 2B | 75.8 |
| language-image pretraining based | | | | | |
| CLIP | WIT400M | 12.8B | 78.5 | 12.8B | 82.7 |
| Cappa | WebLI-1B | - | - | 9B | 83.0 |
| OpenCLIP | Datacomp-1B | - | - | 12.8B | 83.9 |
| Superclass | Datacomp-1B | 12.8B | 80.2 | 12.8B | 85.0 |
| Superclass | Datacomp-1B | 1.28B | 78.7 | 1.28B | 82.6 |
| Superclass | Datacomp-1B | 512M | 75.6 | 512M | 80.5 |

**Instance segmentation and semantic segmentation**

Results of instance segmentation are obtained by using Mask R-CNN on COCO with an input resolution of 1024×1024. Semantic segmentation results are obtained by using UperNet on ADE20K with an input resolution of 512×512.
| Method | #Seen Samples | Semantic Segmentation mIoU | Instance Segmentation APmask |
|:---:|:---:|:---:|:---:|
| Supervised | - | 49.9 | 43.9 |
| MoCov3 | 400M | 49.1 | 44.0 |
| BEiT | 400M | 53.3 | 47.1 |
| CAE | 2B | 54.7 | 47.6 |
| MAE | 2B | 53.6 | 47.2 |
| CLIP | 12.8B | 57.9 | 48.3 |
| Superclass | 1.28B | 56.2 | 48.1 |
| Superclass | 12.8B | 58.4 | 49.0 |

**VLM downstream tasks**

| Model | Size | VQAv2 | GQA | VizWiz | TextVQA | SciQA | MME | MMBench | PoPE | MMMU |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| openCLIP | ViT-Large | 74.54 | 61.03 | 50.47 | 38.16 | 67.33 | 1434.86/268.92 | 60.73 | 85.52 | 35.9 |
| MAE | ViT-Large | 63.5 | 54.58 | 50.22 | 11.55 | 64.75 | 1175.04/343.92 | 42.44 | 80.69 | 35.7 |
| DINOv2 | ViT-Large | 73.32 | 61.87 | 49.15 | 14.08 | 64.9 | 1335.61/296.78 | 57.9 | 86.24 | 35.3 |
| Superclass | ViT-Large | 75.24 | 60.96 | 54.33 | 39.2 | 66.09 | 1371.32/321.78 | 63.14 | 85.69 | 36 |

openCLIP and Superclass are trained with 12.8B seen samples.

> **Q3** Some arguments about previous so-called "bag-of-word classification"

**A3.** Thanks for pointing it out. We will rewrite the description here. CatLIP [46] is a concurrent work. We have already cited and compared it in the paper.

---

Rebuttal Comment 1.1: Title: Discussion

Comment: Dear authors: Thanks for your time, efforts, and response.

- **Q1** So far, we only see zero-shot performance on classification tasks. It seems that the CLIP training style shows broader zero-shot ability in different downstream tasks.
- **Q2** From the comparison, I do not see the superiority of the proposed pre-trained method compared to self-supervised learning, especially since the proposed method is actually weakly-supervised. Table 4 in DINOv2 provides a more comprehensive comparison. It seems DINOv2 also has better transfer learning results in its Table 10.
Overall, I do not think the proposed methods show enough superiority compared to CLIP and existing weakly-supervised/self-supervised methods, especially when there are some similar works like CatLIP. At this time, I still hold my original score.

[1] https://arxiv.org/pdf/2304.07193

---

Rebuttal 2: Title: Rebuttal by Authors Part 2

Comment: > **Q4** Compared with these "bag-of-word classification" pre-trained methods

**A4.** We have included the comparison with other classification-based methods, like CatLIP, in the subsection "Word-level tokenizer vs. Subword-level tokenizer". The word-level tokenizer is used in CatLIP [46], which carefully selected approximately 40,000 "gold labels" from the Datacomp-1B dataset. Aside from the tokenizer being different, all models are trained under the same settings. The results of Table 4 show that with increasing model size, the subword-level tokenizer gradually outperforms the word-level tokenizer, whether in classification tasks or vision & language tasks. We also provide the results of fine-tuning on ImageNet-1k in the table below. SuperClass achieves better performance than CatLIP.

| Model | Pretraining | ImageNet-1k Fine-tuning |
|---|---|---|
| OpenCLIP ViT-L/14 | Datacomp-1B | 87.4 |
| CatLIP ViT-L/16* | Datacomp-1B | 86.5 |
| Superclass ViT-L/16 | Datacomp-1B | 87.8 |

*number from the paper

> **Q5** GPU usage

**A5.** This is a typo. What we meant to say is that all experiments were carried out on 80G A100 GPUs. We will fix it in the revised version.

---

Rebuttal 3: Comment: Thanks for your reviews and feedback.

> **Q1:** So far, we only see zero-shot performance on classification tasks. It seems that CLIP training style shows broader zero-shot ability in different downstream tasks.

**A1:** We have evaluated zero-shot retrieval on the COCO dataset and zero-shot segmentation on PASCAL Context and COCO Stuff (see more details in the response to Reviewer W9bX).
The experimental results show that Superclass achieves competitive performance compared to CLIP.

| Case | Model | COCO Image-to-Text | | COCO Text-to-Image | |
|:---:|:---:|:---:|:---:|:---:|:---:|
| | | R@1 | R@5 | R@1 | R@5 |
| openCLIP | ViT-Large | 62.6 | 84.2 | 46.9 | 70.7 |
| SuperClass | ViT-Large | 62.1 | 83.3 | 47.1 | 70.7 |

| Method | Backbone | #Seen sample | PASCAL Context | COCO Stuff |
|---|---|---|---|---|
| CLIP | ViT-B/16 | 1.28B | 16.2 | 8.7 |
| Superclass | ViT-B/16 | 1.28B | 20.2 (+4.0) | 13.2 (+4.5) |

Besides, we also present results of linear probing and fine-tuning on ImageNet-1K. SuperClass outperforms openCLIP on seven out of ten zero-shot classification datasets. Our method achieves 1.1% higher accuracy in IN-1K linear probing (85.0 vs 83.9). To further demonstrate the transfer ability of our method, we conducted semantic segmentation on ADE20K and instance segmentation on COCO. Moreover, we provide experimental results on various vision & language tasks after integrating with large language models. The results show that our method outperforms CLIP. We hope that by showcasing our competitive performance in zero-shot retrieval, zero-shot segmentation, linear probing, fine-tuning, semantic segmentation, instance segmentation, and various vision & language tasks, we can address the reviewer's concern of "only see zero-shot performance on classification tasks".

> **Q2:** From the comparison, I do not see the superiority of the proposed pre-trained method compared to self-supervised learning, especially since the proposed method is actually weakly-supervised.

**A2:** Compared to the current SOTA self-supervised model DINOv2, our method achieves 0.5% higher accuracy in IN-1K linear probing (85.0 vs 84.5). Although SuperClass has seen more samples, our method is very simple and straightforward. DINOv2 adopts a dual-tower structure and adds a bunch of bells and whistles, as shown in Table 1.
Furthermore, we would like to remind Reviewer aefc of the very important comparison on vision and language tasks. Our method outperforms DINOv2 in 7 out of 9 tasks, and the overall score is significantly higher than that of DINOv2 (DINOv2 54.66 vs. Ours 58.94). Please note that the DINOv2 used here is distilled from the ViT-giant model. These improvements are significant.

We thank Reviewer bBvT for the comment, "At its core, this paper introduces a new way to pre-train open-world Vision-Language foundational models," and Reviewer W9bX for noting, "this is a new pretraining style for vision backbones rather than a variant of CLIP." Our proposed new pretraining method is distinctly different from previous pretraining approaches. The differences from the most closely related classification-based pretraining methods are also evident: our method does not require manually curated "golden labels" and directly uses tokenized raw text as supervision. In the paper subsection "Word-level tokenizer vs. Subword-level tokenizer," we demonstrate through experiments that our method has better performance and scalability. As Reviewer 3a7k mentioned, "Compared to previous classification-based pretraining methods, the proposed approach demonstrates greater practical applicability."

Overall, we believe we have provided a thorough response to Reviewer aefc's comments and hope that Reviewer aefc will review our feedback in light of the perspectives that align their rating with the other reviewers.

Title: Rebuttal by Authors Part 3

---

Rebuttal 4: Title: Discuss

Comment: Dear authors: Thank you for your response.

- Q1 *** Could you please tell me where the zero-shot 46.9 COCO Text-to-Image result comes from? I checked the paper of CLIP [1]; the zero-shot image retrieval result is 37.8.
- Q2 ***
- CLIP was released on 26 Feb 2021. Many other VLMs pre-trained under language supervision have emerged, e.g., EVA-CLIP [2].
- Self-supervised pre-training and weakly-supervised methods are also hot.
According to the provided information, the proposed method uses 10x the data (12.8B), and it is close to DINOv2 (1.92B). If we talk about "performance and scalability", DINOv2 ViT-g/14@448 achieves 86.7% linear probing results on ImageNet-1K.

Overall, the baselines are far from strong. The reviewer does not know why we need such a method when we have many other powerful and versatile pre-trained VLMs/self-supervised large models/weakly supervised large models. I still retain my score at this time.

Best,
Reviewer aefc

[1] https://arxiv.org/pdf/2103.00020
[2] https://arxiv.org/abs/2303.15389

---

Rebuttal 5: Comment: We thank Reviewer aefc for their timely response. Regarding Q1, we utilized the open-sourced checkpoint [1] from paper [2], which employs ViT-Large as its backbone and is trained with Datacomp-1B and 12.8 billion seen samples. This serves as a stronger baseline compared to the original OpenAI CLIP.

> **Reviewer aefc commented "The reviewer does not know why we need such a method when we have many other ... pre-trained ... large models" (VLMs/self-supervised/weakly supervised).**

It seems the reviewer suggests that foundational modeling research might be redundant when the absolute performance is weaker compared to a "large model" pretrained in a systematic way. The reviewer appears to prioritize "higher numbers" even when comparing in an apple-to-orange setting. While we acknowledge that a big model with higher numbers is often advantageous from a downstream user's perspective, we believe that a scientific foundational model researcher would favor careful ablations rather than chasing higher numbers with the biggest models. For instance, the reviewer argues that SuperClass cannot demonstrate better "performance and scalability" than DINOv2, citing that the DINOv2 *ViT-g 448* model, pretrained on a customized dataset at 448 resolution, is superior to our largest *ViT-L 224* model at 224 resolution.
We appreciate the reviewer's observation that our *ViT-L 224* resolution model is less powerful compared to a *ViT-g 448* resolution model. It appears that the reviewer has overlooked various systematic differences, unintentionally or for some other unknown reason, comparing scientific papers in an apple-to-orange manner.

> **The reviewer aefc further suggested "Overall, the baselines are far from strong."**

The baselines we compared here are CLIP trained with DataComp [2] and DINOv2 [3], which are important milestone papers accepted at NeurIPS 2023 and TMLR 2024, respectively. For some unknown reason, the reviewer believes CLIP, DataComp, and DINOv2 are far from strong. We kindly respect the reviewer's false claim. We would like to wish the reviewer the best, and politely disagree with the reviewer's comments and rating. We further thank the other reviewers for their professional, scientific, objective reviews.

[1] https://huggingface.co/laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K
[2] DATACOMP: In search of the next generation of multimodal dataset, NeurIPS 2023.
[3] DINOv2: Learning Robust Visual Features without Supervision, TMLR 2024.

Title: Rebuttal by Authors Part 4

---

Rebuttal 6: Title: Discussion

Comment: Thank the authors for the response.

## 1. "performance and scalability"

*** Well, the authors say that their methods have better "performance and scalability" in https://openreview.net/forum?id=Hd2EOwKItm&noteId=KeV1R5CtDE **A2**. So,

- The reviewer provides examples that previous methods have better "performance and scalability".
- The proposed method uses **10x** data, but is close to DINOv2 with the same parameters.
- The reviewer does not even mention similar work CatLIP [1], which also trains CLIP in a classification way. It also demonstrates better "performance and scalability".

## 2. It seems the reviewer suggests that foundational modeling research might be redundant when the absolute performance is weaker

*** The authors are trying to mislead readers.
- The paper does not propose something novel; CatLIP [1] already does the classification pre-training job.
- The performance/scalability of the paper is no better than recent literature [2,3].

The proposed method does not demonstrate its necessity when there are many other strong self-supervised/weakly-supervised/language-supervised (may belong to the weakly-supervised class) methods, so it is reasonable that the reviewer thinks this work may not be necessary for the community.

## 3. Different data from CLIP paper

The authors do not answer this question. https://openreview.net/forum?id=Hd2EOwKItm&noteId=vqnfjpW7aT Q1.
```
Could you please tell me where the zero-shot 46.9 COCO Text-to-Image result comes from? I check the paper of CLIP[1], the zero-shot image retrieval result is 37.8.
```

## 4.

Finally, as an independent reviewer, the reviewer thinks he can have his own opinion and rating about the paper.

Best,
Reviewer aefc

[1] https://arxiv.org/pdf/2404.15653
[2] https://arxiv.org/abs/2303.15389
[3] DINOv2: Learning Robust Visual Features without Supervision, TMLR 2024.

---

Rebuttal 7: Comment: Thank the Reviewer aefc for the feedback. We have addressed all Reviewer aefc's questions in our previous responses, including but not limited to:

- performance and scalability
> Line #212 subsection "Data scaling results" and Line #221 subsection "Model scaling results" in the paper
> The results of Zero-shot classification, VLM downstream tasks ... in https://openreview.net/forum?id=Hd2EOwKItm&noteId=JxjyAuZo5M (Rebuttal by Authors)
> Few-shot classification in A1 https://openreview.net/forum?id=Hd2EOwKItm&noteId=1hHvBn6szv (Response to Reviewer bBvT)
- comparisons and differences with CatLIP [46]
> A3 in https://openreview.net/forum?id=Hd2EOwKItm&noteId=JxjyAuZo5M (Rebuttal by Authors): "CatLIP [46] is a concurrent work. We have already cited and compared it in the paper."
> Line #69 and Line #233 subsection "Word-level tokenizer vs.
Subword-level tokenizer" in the paper
> Fine-tuning results in A4 https://openreview.net/forum?id=Hd2EOwKItm&noteId=OOhKhswjjl (Rebuttal by Authors Part 2)
> https://openreview.net/forum?id=Hd2EOwKItm&noteId=KeV1R5CtDE (Rebuttal by Authors Part 3): "The differences from other...classification-based pretraining methods are also evident...manually curated "golden labels"..."
- Performance comparisons with CLIP and Dinov2
> The results of Zero-shot classification, VLM downstream tasks ... in https://openreview.net/forum?id=Hd2EOwKItm&noteId=JxjyAuZo5M (Rebuttal by Authors)
> The results of 10-shot classification in A1 of https://openreview.net/forum?id=Hd2EOwKItm&noteId=aVM6gFdSHR (Response to Reviewer bBvT)
...
- Different data from CLIP paper
> https://openreview.net/forum?id=Hd2EOwKItm&noteId=ogRsx4SgwE (Rebuttal by Authors Part 4): "Regarding Q1, we utilized the open-sourced checkpoint [1] from paper [2]...This serves as a stronger baseline compared to the original OpenAI CLIP."

We do not intend to engage in repetitive responses and hope to strengthen and solidify our work based on the reviewers' valuable feedback. We further thank all the reviewers.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Decomposed Prompt Decision Transformer for Efficient Unseen Task Generalization
Accept (poster)
Summary: This paper proposes the Multi-Task Prompt Decision Transformer (MPDT) algorithm for zero-shot multi-task offline reinforcement learning (RL). Leveraging a pre-trained language model (PLM) with prompt tuning, MPDT innovatively decomposes multi-task prompts into task-specific and cross-task components. It also achieves zero-shot generalization through test-time prompt alignment. Evaluations across various benchmarks demonstrate that MPDT outperforms prior multi-task (meta) offline RL methods.

Strengths:
- The paper is well-written and easy to follow.
- The idea of decomposing prompts is straightforward for MTL and easy to understand.
- The experiments are comprehensive.

Weaknesses:
- My main concerns come from the novelty of the paper. The major contribution lies in the use of cross-task prompts and task-specific prompts, specifically the prompt decomposition and prompt distillation in Section 4.1.
- Regarding prompt decomposition, although the authors aim for cross-task prompts to contain common knowledge across tasks, the model and loss design do not ensure this. Specifically, since the authors use element-wise multiplication on $P_c$ and $P_k$, $P_c$ functions more as a common scaling factor for different task-specific prompts. After training, $P_c$ could become a constant scalar, and $P_k$ could simply scale as $1/N$ of $P_k^{teacher}$. The authors could provide the distribution of $P_c$ values to verify whether it truly encapsulates common knowledge. It would also be better to have a loss function that guides the common knowledge extraction during training.
- About test-time adaptation: if the authors believe that $P_c$ contains the common knowledge, I suppose we should train a randomly initialized $P_k$ (or select one from a training task according to task similarity) for the test task using the alignment loss. This way, the model will still take $P_k \cdot P_c$ as prompt input.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations and social impact are provided in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful review of our work. We'll answer your questions one by one in the following, including some misunderstandings and some essential academic questions worth exploring. **W1: About the novelty of the paper** Please check Author Rebuttal AR1. **W2: About the prompt decomposition** We visualized the cross-task prompt matrix $P_c$ and the task-specific prompts $p_k$ for 10 tasks on MT10, as shown in Figures 1 and 2 in the PDF of Author Rebuttal. From the visual results, $P_c$ exhibits an irregular data distribution, suggesting weighted representations of common knowledge across tasks. In contrast, we found that some $p_k$ matrices for specific tasks show sparsity (values approaching 0), which may explain the essence of prompt decomposition: during training, $P_c$ extracts general features that are then integrated into task prompts $P_k^*$ through element-wise multiplication with sparse $p_k$. To demonstrate that $P_c$ is more than just a generic scaling factor, we conducted experiments on three datasets: Cheetah-vel, MW ML45, and MT50. We performed fine-tuning without prompt distillation, directly applying $P_c$ element-wise multiplied with the frozen teacher prompts $P_k^*$ for each task. The TTA process remained consistent across experiments. The results are shown in the table below: |Method|Cheetah-vel|MW ML45|MW MT50| |-|-|-|-| |$P_c \circ P_k^*$| -169.50|92.88|421.34| |MPDT | -139.88 |347.21|1559.94| If $P_c$ in MPDT is merely a generic scaling factor, we would expect the results in the first row to be similar to MPDT's performance. However, MPDT demonstrates a significant performance advantage, indicating that the prompt decomposition process effectively separates knowledge, aligning with our expectations. We fully agree that an effective loss function can guide the prompt decomposition process and help the model converge faster. An intuitive idea is to consider adding a sparse loss concerning $p_k$. 
We incorporate the sum of absolute values of elements in $P_k$ as a penalty term into the loss function, with a weight $\lambda_1$ of 0.2, termed $L_{sparse}$. $L_{sparse}=\lambda_1 \sum_{k=1}^{|S|} \sum_{i=1}^m \sum_{j=1}^n |P_{k_{ij}}|$ We validated the model's performance using $L_{sparse}$ on the Cheetah-vel and MW ML45 datasets. |Method|Cheetah-vel|MW ML45| |-|-|-| |MPDT+$L_{sparse}$|-138.24|350.49| |MPDT|-139.88|347.21| We found that MPDT+$L_{sparse}$ slightly outperforms the original MPDT in terms of performance, implying that the sparsity level of $P_k$ is correlated with prompt decomposition and the model's final performance. Due to rebuttal time constraints, exploration of more instructive loss function designs will be part of our future work. Besides loss design, in our experiments, we tried different learning rates for $P_c$ and $P_k$ but observed optimal performance when both had the same learning rate. We speculate that this occurs because, over training iterations, both prompts converge to their optimal values, and differing learning rates disrupt their joint convergence, leading to poorer performance under similar runtime conditions. Below, we present experimental results on the ML45 dataset where different learning rates were applied to $P_c$ and $P_k$. | |$lr_{P_c}$=0.01|$lr_{P_c}$=0.001|$lr_{P_c}$=0.0001| |-|-|-|-| |$lr_{P_k}$=0.01|310.74|307.36|311.40| |$lr_{P_k}$=0.001|198.17|350.99|338.21| |$lr_{P_k}$=0.0001|204.94|104.07|347.21| **W3: About the test time adaptation** We have already validated similar ideas in the 'Impact of adaptation method' (Section 5.3). We investigated (1) combining $P_c$ with the average of all task-specific prompts $P_k$ from the training set (we call $\hat{P_k}$) for TTA and (2) freezing the $P_c$, randomly initializing a new task-specific prompt $P_r$ combined with $P_c$ for TTA.
Following the reviewer's suggestion, we added (3) selecting one $P_k$ from a training task combined with $P_c$ for TTA, where we randomly sampled three $P_k$ from training tasks and computed the average model performance. We found that all of these initialization methods resulted in suboptimal outcomes. We attribute this primarily to the limited information that TTA can provide [1], where introducing task-agnostic gradients (e.g., additional $p_k$) may significantly degrade model performance. Using $P_c$ for initialization proves to be the optimal approach. Nonetheless, we observed (4) that omitting TTA entirely, i.e., not utilizing any information from test tasks, and instead combining $P_c$ with the average of $p_k$ from training tasks directly for model inference, yields better results than using $P_c$ alone. |Method|Cheetah-vel|MW ML45|MW MT50|Is $P_c$ frozen|Using TTA| |-|-|-|-|-|-| |(1) $P_c \circ \hat{P_k}$|-148.50|332.80|1482.75|×|✓| |(2) $P_c \circ P_r$|-171.75|320.30|1418.50|✓|✓| |(3) $P_c \circ P_k$|-159.26|325.02|1079.82|×|✓| |(4) $P_c \circ P_k$|-167.80|149.21|824.07|×|×| |MPDT|$\mathbf{-139.88}$|$\mathbf{347.21}$|$\mathbf{1559.94}$|×|✓| [1] https://link.springer.com/article/10.1007/s11263-024-02181-w Should there be any follow-up questions, we will provide further clarifications. --- Rebuttal Comment 1.1: Title: Response to the author Comment: Thanks for the response. I've raised my rating to 6. --- Reply to Comment 1.1.1: Title: Response to the Reviewer d5sD Comment: We sincerely thank the reviewer for kindly raising the score.
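For readers following the W2 discussion above, the decomposition and sparsity penalty can be written out as a minimal sketch. Only the element-wise product $P_k^* = P_c \circ P_k$, the low-rank factorization of $P_k$, and the weight $\lambda_1 = 0.2$ come from the rebuttal; all dimensions and values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
l, s, r = 5, 128, 1      # hypothetical: prompt length, embedding dim, rank
num_tasks = 10           # e.g. MT10

# Cross-task prompt shared by all tasks; task-specific prompts stored as
# low-rank factors v_k (l x r) and u_k (r x s), initialized from N(0, 1).
P_c = rng.standard_normal((l, s))
v = rng.standard_normal((num_tasks, l, r))
u = rng.standard_normal((num_tasks, r, s))

def task_prompt(k):
    """P_k* = P_c o (v_k u_k): element-wise product of shared and task parts."""
    P_k = v[k] @ u[k]            # low-rank task-specific prompt, shape (l, s)
    return P_c * P_k             # element-wise multiplication, shape (l, s)

def l_sparse(lam=0.2):
    """L1 penalty over all task-specific prompts, as in L_sparse above."""
    return lam * sum(np.abs(v[k] @ u[k]).sum() for k in range(num_tasks))

assert task_prompt(0).shape == (l, s)
assert l_sparse() > 0.0          # positive for random (non-zero) prompts
```

This is a sketch of the tensor plumbing only; the actual optimization (concatenating $P_k^*$ with the trajectory input and backpropagating through the frozen GPT2) is not reproduced here.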
Summary: Multi-task learning is a critical pursuit in decision-making, and Decision Transformer (DT) is a popular framework in solving various decision-making problems. The authors observe the suboptimal performance of prior work utilizing DT for multi-task learning, and propose a new method, Multi-Task Prompt Decision Transformer (MPDT), to alleviate this issue. MPDT is mainly composed of these components: - GPT2 pre-trained weights for initialization. - Prompt decomposition: cross-task prompt + task-specific prompt to prevent gradient conflicts. - Test time adaptation (TTA): dynamically optimizes the cross-task prompts at test time. Experiments are conducted on standard Meta-RL tasks, demonstrating prominent improvements compared to prior baselines. Ablation experiments are also extensively conducted. Strengths: Originality: - The MPDT framework combining pre-trained weights, prompt decomposition and TTA together is original. - The prompt decomposition technique to prevent gradient conflicts is original and well-motivated. Clarity: - This paper is well-written and well-organized. - The method is easy to follow. Significance: - Though pre-training DT is a popular trend in decision-making, this is the first work to successfully apply it in multi-task learning, to the knowledge of the reviewer. - The experimental results are prominent, compared with a set of strong baselines. Weaknesses: *Significant* - It is unclear to the reviewer whether it is fair to compare MPDT with baselines like MT-BC, Soft-Prompt, and Prompt DT, since MPDT uses much more test-time information. How can the authors guarantee that they use the same amount of information in the test set for all baselines? If it is truly unfair, it would make the experiments less convincing. Please provide explanations for this. - Did the authors reproduce the results of baselines by themselves? If so, how did they pick the hyper-parameters?
And for few-shot generalization, which appears to be a new benchmark designed by the authors, did the authors extensively tune the hyper-parameters of baselines? *Major* - It is always hard to claim the so-called "SOTA", which demands very rigorous statistical analysis. And it is rarely true in today's AI community, see https://rl-conference.cc/review_guidelines.html. Running 3 times is acceptable to present the performance, but not enough to support "SOTA". And the variances in Tables 1 and 2 are very large (some [$\mu-\sigma$, $\mu+\sigma$] intervals overlap), thus making it hard to establish statistical improvements. The reviewer recommends removing this claim. - The framework is a bit complicated and engineering-heavy. Specifically, the 3 components, namely pre-training+prompt decomposition+TTA, are orthogonal and not well-connected. And thus the technical novelty of the method is a bit lacking. The novelty concentrates on prompt decomposition+distillation, while solely applying them doesn't achieve good performance (Table 3). *Minor* - The wording "Unseen Tasks Generalization" and "Few-shot Generalization" in Section 5.2 is misleading, as few-shot generalization is also for unseen tasks. - The name of the proposed method, *Multi-Task Prompt DT*, might not be suitable, since it cannot reflect the differences from Prompt DT, which is already a well-developed algorithm in multi-task RL. Technical Quality: 3 Clarity: 3 Questions for Authors: - It looks unnatural to the reviewer that only the weights of prompts are trainable while the embedding layer and GPT blocks are all frozen, as shown in Figure 1. The ablation experiments of [1][2] show that parameter-efficient tuning like LoRA achieves better performance than freezing the whole block. Can the author provide some ablation experiments or reasonable explanations for this?
- Notably, the ablation experiments in Table 3 and the performance of MPDT-WP show that only putting all 3 components together achieves the best performance. If these components are really orthogonal and helpful, then why doesn't simply using one or two of them achieve overall improvements? For example, on MW ML45, only using all 3 components can beat Prompt-DT. Could the author share some insights on this? - How much effort did the authors spend tuning the hyper-parameters of MPDT? [1] Can Wikipedia Help Offline Reinforcement Learning? arXiv preprint arXiv:2201.12122. [2] Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning. arXiv preprint arXiv:2310.20587. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
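The one-sigma overlap criterion the reviewer applies to Tables 1 and 2 can be made mechanical. The sketch below is illustrative only; the (mean, std) pairs are hypothetical values, not numbers from the paper:

```python
def intervals_overlap(mu_a, sd_a, mu_b, sd_b):
    """True if the [mu - sd, mu + sd] intervals of two methods overlap,
    in which case a claimed improvement is not statistically clear."""
    return max(mu_a - sd_a, mu_b - sd_b) <= min(mu_a + sd_a, mu_b + sd_b)

# Hypothetical scores (mean, std over 3 seeds) for two methods.
print(intervals_overlap(347.2, 11.5, 332.8, 18.6))   # True: improvement unclear
print(intervals_overlap(1559.9, 2.5, 1134.7, 7.9))   # False: clear separation
```

Overlapping one-sigma intervals are only a heuristic red flag; as the linked review guidelines suggest, a proper significance test over more seeds would be needed to support a "SOTA" claim.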
Rebuttal 1: Rebuttal: Thanks for the careful review of our work. **W1: How can the authors guarantee that they use the same amount of information in the test set for all baselines?** In methods not involving prompts, we indeed fine-tuned these methods on the test set. To highlight the superiority of MPDT as much as possible, we allowed the comparison to be somewhat unfair in the baselines' favor. Therefore, we sampled $|X|$ labelled samples for fine-tuning MT-BC, MT-ORL, and HDT methods, making the experiments in Table 1 persuasive for non-prompt methods. For methods involving prompts (Soft-Prompt and Prompt-DT), to ensure the absolute fairness of our method, we add experiments combining Soft-Prompt and Prompt-DT with TTA. The experimental results, labelled as Soft-Prompt-TTA and Prompt-DT-TTA, are shown in the table below. ||Soft-Prompt-TTA|Prompt-DT-TTA|MPDT| |-|-|-|-| |Cheetah-dir|$2.91\pm0.74$|$8.33\pm0.41$|$50.32\pm11.47$| |Cheetah-vel|$-160.51\pm10.13$|$-204.57\pm8.36$|$-139.88\pm19.65$| |Ant-dir|$119.14\pm10.72$|$120.07\pm2.67$|$121.84\pm8.01$| |MW ML10|$251.37\pm6.17$|$316.74\pm12.05$|$371.01\pm9.41$| |MW ML45|$301.74\pm18.55$|$299.47\pm8.97$|$347.21\pm11.52$| |MW MT10|$541.99\pm10.46$|$1027.80\pm15.58$|$1317.52\pm8.22$| |MW MT50|$519.78\pm20.97$|$1134.72\pm7.92$|$1559.94\pm2.49$| |Average|$226.35$|$386.08$|$518.28$| Soft-Prompt-TTA showed performance improvements across all tasks, whereas Prompt-DT-TTA experienced performance declines in some tasks. The main reason for this is that Prompt-DT relies on high-quality trajectory data for prompts during testing, and applying TTA on unlabeled data may have adversely affected prompt optimization. Our MPDT method continues to demonstrate advantages across the majority of tasks. **W2: Did the authors reproduce the results of baselines?** We have replicated the experimental results of the baselines. In Table 6 of Appendix B, we provide the configuration of hyperparameters used.
For Prompt-DT, MT-BC, and MT-ORL, we use the code provided by Prompt-DT, adjusting most parameters to ensure a fair comparison with MPDT. Soft-Prompt is based on MPDT, training a universal prompt directly on datasets from all tasks. For HDT, we followed all the details from the original paper. To ensure a fair comparison with MPDT, we aligned some general hyperparameters with those used by MPDT (See Author Rebuttal AR4). For experiments in the few-shot generalization scenario, we did not adjust any hyperparameters and maintained the configuration from Table 6 because the chosen baselines either had architectures designed for few-shot scenarios or transferable insights applicable to our study. [1] https://arxiv.org/abs/2304.08487 **W3: Removing the claim of "SOTA".** Thanks for the suggestions and comments. We will remove the statement regarding SOTA terminology and clarify that our results are competitive. **W4: The technical novelty and performance problem.** About the technical novelty, please check Author Rebuttal AR1. About the performance problem in Table 3, please check Q2. **W5: The misleading problem of unseen tasks.** We are considering changing "Unseen Tasks Generalization" in Section 5.2 to "zero-shot generalization," so that our whole set of experiments can be referred to as "Unseen Tasks Generalization." **W6: The name might not be suitable.** Our main innovation lies in the implementation of prompt decomposition, distillation and subsequent TTA alignment in prompts. It can be modified to "Decomposed Prompt Decision Transformer Enables Unseen Task Generalization." This title may be more aligned with the current algorithm. **Q1: It looks unnatural that only the weights of prompts are trainable while the embedding layer and GPT blocks are all frozen.** Please check Author Rebuttal AR3. **Q2: Why does simply using one or two components not achieve improvements?** The experimental results shown in Table 3 (fourth row) are misleading.
The cross-task prompt $P_c$ contains rich inter-task information, providing a strong initialization for adapting to test tasks. To achieve optimal performance, fine-tuning task-specific information with TTA is necessary. $P_c$ performs poorly in directly generalizing to unseen tasks. However, this does not imply that the prompt decomposition component cannot be used independently. To verify this, we performed experiments where $P_c$ was element-wise multiplied with the average of $P_k$ from all training tasks (referred to as $P_{ka}$), directly used for test tasks without TTA. As shown in the table below, our performance still surpasses that of the first row. The original intention of the fourth row in Table 3 was to demonstrate the poor performance of using $P_c$ alone. However, we recognize that this should not be construed as an experiment on the prompt decomposition component. Therefore, we propose replacing the fourth row in Table 3 with the results obtained by using $P_c \circ P_{ka}$ as the prompt for test tasks to avoid confusion. |Decomposition|Distillation|TTA|Cheetah-vel|MW ML45|MW MT50|Remark| |-|-|-|-|-|-|-| |$\times$|$\times$|$\times$|-171.23|91.97|400.71| | |$\checkmark$|$\checkmark$|$\times$|-145.27|337.80|1304.07|$P_c\circ P_{ka}$| |$\checkmark$|$\checkmark$|$\checkmark$|$\mathbf{-139.88}$|$\mathbf{347.21}$|$\mathbf{1559.94}$| | **Q3: How much effort did the authors spend tuning the hyper-parameters?** Almost all hyperparameters are referenced from Prompt DT, and we did not excessively tune our hyperparameters, which facilitates a fair comparison with other methods. The primary hyperparameter we focused on tuning is the prompt length $l$. The ablation experiments on the optimal value for $l$ are shown in Figure 3. In the table below, we present the results of ablation experiments on another hyperparameter $r$ for Cheetah-vel and MW ML45.
Overall, MPDT is relatively insensitive to the selection of hyperparameters, which is a potential advantage of our work. ||Cheetah-vel|MW ML45| |-|-|-| |r=1|-139.88|347.21| |r=4|-138.08|344.10| |r=10|-135.51|350.33| --- Rebuttal 2: Comment: I thank the authors for the detailed responses to my questions. On reading the responses, most of my concerns have been resolved. Two concerns remain: - *(minor)* W1. Prompt DT cannot utilize the same amount of test data due to its design. To alleviate this(?), the authors conduct experiments on Prompt-DT+TTA for fair comparison, but TTA isn't suitable for Prompt-DT, making it not fair enough. (Anyway, this is not a big problem.) - *(major)* Q1. The question that "why the GPT blocks are all frozen" is not answered yet. Another concern: - *(significant)* The initial hyperparameter of Prompt-DT is #layer=3, #heads=1. The rebuttal states that "To ensure a fair comparison with MPDT, we aligned some general hyperparameters with those used by MPDT (See Author Rebuttal AR4)", and it hence seems that the experiments of Prompt-DT in this paper use #layer=12, #heads=12. If so, that would be unfair, since the other hyperparameters of Prompt-DT must be tuned extensively due to a significant change in the model size. --- Rebuttal 3: Title: Response to the Reviewer 6jy3(1) Comment: We sincerely appreciate the reviewers' thorough examination. Below, we address the concerns one by one. **W1: Prompt DT cannot utilize the same amount of test data, and the TTA is unsuitable.** The Prompt-DT+TTA method selects the prompt $p$ that allows the model to perform optimally on the test task from the high-quality trajectories of the training tasks and then fine-tunes $p$ on the same amount of unlabeled test data as MPDT for TTA. This approach ensures zero-shot generalization (without using high-quality trajectories from the test task as prompts) and effectively combines Prompt-DT with TTA. We believe this is the optimal way to adapt Prompt-DT to zero-shot scenarios.
The performance fluctuations of Prompt-DT+TTA across different tasks are mainly because Prompt-DT is not inherently a prompt-tuning method (despite using frozen prompts), as it requires high-quality labeled trajectories as prompts during both testing and training, which is a very strong prior condition. It would be more accurate to say that Prompt-DT might not be suitable for zero-shot generalization scenarios rather than stating it is unsuitable for TTA. In few-shot scenarios, MPDT still outperforms Prompt-DT, demonstrating MPDT's superior performance in existing scenarios and its ability to extend to new scenarios that existing algorithms struggle to handle effectively. From a rigor perspective, Prompt-DT could be removed from Table 1, which also indirectly highlights the limited exploration of prompt-based offline RL methods in zero-shot generalization scenarios. **Q1: Why the embedding layer and GPT blocks are all frozen.** In Author Rebuttal AR3, we conducted an ablation study on whether to freeze the embedding layer. Also, based on the ablation study conclusions from [1][2], we speculate that the embedding layer and GPT blocks are highly correlated. To achieve sufficient performance, one should either (1) freeze both the embedding layer and GPT blocks simultaneously and add external components (e.g., prompt) or (2) train both the embedding layer and GPT blocks together (either full fine-tuning or using LoRA). The choice of approach involves a trade-off between model performance, computational cost, and scenario requirements. From the perspective of model performance, the primary purpose of using prompts is to preserve the model's inherent prior knowledge as much as possible. Both [2] and we believe that the full model fine-tuning used in [1] may lead to model overfitting, further disrupting the internal knowledge of the model. 
We considered adapting the method from [2] to our task (training the embedding layer and LoRA), but the results obtained based on the code provided by [2] differed significantly from those in the original paper. Moreover, when we applied [2] to our datasets, ML45 and MT50 (as shown in the table below), the performance on the training set was far inferior to MPDT, and even careful hyperparameter tuning could not alleviate the significant fluctuations in the reward curve. We speculate that language models using the LoRA structure may still face convergence difficulties on RL data. Furthermore, the experiments in [2] primarily focus on single-task scenarios, avoiding the challenges of multi-task scenarios where task gradients may mix with the internal knowledge of the model. ||ML45 (Training Performance)|MT50 (Training Performance)| |-|-|-| |LAMO[2]|$367.563\pm129.37$|$930.84\pm437.62$| |MPDT|$604.21\pm11.05$|$1687.35\pm4.09$| In addition, [2] introduced a language prediction auxiliary loss to ensure the model retains its language task memory during training. However, if the introduction of LoRA is entirely suitable and does not disrupt the internal knowledge of the model, there should be no need for an auxiliary loss. In other words, we must fully understand whether full fine-tuning or LoRA fundamentally introduces new knowledge by disrupting existing knowledge (replacement) or guides the model to use existing knowledge to achieve RL tasks (integration). The distinction between these two approaches results in differences in the model's performance ceiling and floor. The extent of adjustments within the model may need to be carefully controlled. Additionally, existing work has yet to explore convergence and generalization adequately. MPDT combines transferable prompts with the pre-trained model in a more harmonious and adaptable way.
Most importantly, freezing the embedding layer and GPT blocks preserves the model's complete knowledge, which we believe is a better choice. --- Rebuttal 4: Title: Response to the Reviewer 6jy3(2) Comment: From the perspective of computational cost, the advantages of MPDT are clear (1.42M vs. 125.5M fine-tune) or (1.42M vs. 3.5M LoRA). Fully fine-tuning a model is undoubtedly costly, and LoRA's 3.5M parameters are based on the adjustable parameter count in a single-task scenario, as reported in [2]. When extending to a multi-task scenario, the parameter count will increase proportionally to the number of tasks. From the perspective of scenario requirements, using prompts offers a natural advantage for knowledge transfer. Prompts have sufficient capacity to guide pre-trained models to adapt to reinforcement learning tasks with smaller data sizes. Additionally, achieving zero-shot generalization with prompts results in significantly less disruption to the model's knowledge and a lower risk of error than fully fine-tuning the model. [1] Can Wikipedia Help Offline Reinforcement Learning [2] Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning **Significant concern: The other hyperparameters of Prompt-DT must be tuned due to a change in the model size.** We did indeed use the 12-layer, 12-head version of Prompt-DT. To ensure a fair comparison, we reran the experiments and carefully adjusted some hyperparameters (setting the learning rate to 2e-5 and the learning rate decay weight to 1e-5). Other hyperparameters are independent of the model size. The model performance did not change significantly under the optimal hyperparameter configuration. The table below shows the performance of Prompt-DT and Prompt-DT-TTA across all tasks after adjusting the hyperparameters. We will replace the corresponding results in the main text with these updated results.
Table 1: Zero-shot generalization ||Prompt-DT|Prompt-DT-TTA| |-|-|-| |Cheetah-dir|$-7.92\pm2.97$|$9.03\pm2.11$| |Cheetah-vel|$-192.38\pm11.80$|$-203.07\pm4.01$| |Ant-dir|$123.46\pm10.70$|$121.64\pm3.83$| |MW ML10|$317.31\pm14.98$|$314.08\pm12.93$| |MW ML45|$294.55\pm8.71$|$294.87\pm10.06$| |MW MT10|$1087.54\pm17.09$|$1030.85\pm14.77$| |MW MT50|$994.63\pm5.99$|$1137.69\pm13.58$| Table 2: Few-shot generalization ||Prompt-DT| |-|-| |Cheetah-dir|$934.78\pm5.33$| |Cheetah-vel|$-37.80\pm2.09$| |Ant-dir|$411.96\pm9.28$| |MW ML10|$315.07\pm6.17$| |MW ML45|$473.34\pm4.12$| --- Rebuttal Comment 4.1: Comment: Thank you for your insights! From the results, the reviewer guesses that the LoRA tuning trick might only be superior when learning a single task with limited data. Now all my concerns are resolved. I will raise my score to 7. --- Rebuttal 5: Title: Response to the Reviewer 6jy3 Comment: We agree with the reviewer's point and sincerely thank the reviewer for raising the score.
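The parameter-count argument from this thread (1.42M prompt parameters vs. 3.5M per-task LoRA vs. 125.5M full fine-tuning, with LoRA growing proportionally to the number of tasks) reduces to simple arithmetic. The three single-task totals come from the rebuttal; the task counts and the assumption that MPDT's 1.42M figure already covers the multi-task setup are illustrative:

```python
# Figures quoted in the rebuttal; the LoRA count is per task in the
# single-task setup of [2], and so scales with the number of tasks.
PROMPT_PARAMS = 1.42e6        # MPDT trainable prompts (assumed fixed across tasks)
LORA_PER_TASK = 3.5e6         # LoRA adapter, single-task
FULL_FINETUNE = 125.5e6       # full GPT2 fine-tuning

for n_tasks in (1, 10, 50):
    lora_total = LORA_PER_TASK * n_tasks   # grows proportionally with tasks
    print(f"{n_tasks:>2} tasks: prompts {PROMPT_PARAMS/1e6:.2f}M, "
          f"LoRA {lora_total/1e6:.1f}M, full {FULL_FINETUNE/1e6:.1f}M")
```

At 50 tasks (MT50 scale) the per-task LoRA budget would already exceed the full fine-tuning count, which is the scaling concern the rebuttal raises.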
Summary: This paper proposes a new method called Multi-Task Prompt Decision Transformer (MPDT) for efficient generalization to unseen tasks in offline reinforcement learning. MPDT involves two stages. First, the multitask training phase: MPDT is initialized with parameters from a pretrained LM, which is GPT2. It decomposes the task prompt into a cross-task prompt shared across tasks and task-specific prompts. Second, the test time adaptation: The cross-task prompt is further optimized on unlabeled test tasks using test time adaptation by aligning the distributions of the test samples and training samples. The paper evaluates MPDT on seven meta RL environments from both MuJoCo and MetaWorld, showing its superior performance over baselines in generalizing to unseen tasks. Strengths: 1. Based on the reviewer's knowledge, combining decision transformer and test time adaptation together for efficient multi-task RL solving is novel. 2. Keeping the weights of the pretrained LM frozen is a natural idea to leverage its rich prior knowledge. 3. Extensive experiments on seven Meta-RL environments demonstrate MPDT's effectiveness over DT-based baselines. 4. Ablation studies individually analyze the impact of different components including prompt decomposition, distillation, and test time adaptation. Overall, MPDT appears to be a promising approach for multi-task offline RL by combining prompt-based techniques with test time adaptation in a novel way. Weaknesses: 1. The paper appears to be hastily written and the presentation is hard to follow. (see question section) 2. The authors claim that a major improvement compared with other PDTs is the decomposition of the prompt into a common part and a task-specific part. I was expecting the common part to be trained across tasks and the task-specific part within a specific task.
However, in line 186, the authors say ‘We use standard normal distribution to initialize Pc, uk and vk.’ Besides, in Eq(4), they are equally optimized for each task. I didn’t see why the common part is the slow weights and the task-specific part is the fast weights, which is claimed by the authors in line 167. Possible to explain here? 3. In line 192, the sentence ‘we obtain teacher task prompt $p_k^{teacher}$ for each task by using traditional prompt-tuning method individually.’ What method is used? How is it learned? Is there a separate learning phase for $p_k^{teacher}$? If so, the method requires three training phases instead of two. Also its quality matters. Can the authors explain the details here? 4. In line 201, for test time adaptation, ‘we randomly select a subset X of unlabeled test samples,….’ Can the authors clarify what this subset is? A sequence in the DT is organized as (r_t, s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}…). I recommend being specific about what you select. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How do the tasks for each meta-environment differ? Do they have different reward functions $R(s_t, a_t)$ or transition functions $T(s_t, a_t) \to s_{t+1}$? This question is important for the proposed method MPDT, because I wonder what kind of distribution shift test-time adaptation can handle and what it cannot. 2. How does the quality of the few-shot prompt affect the model’s final performance? Specifically, using expert chunks, random chunks, medium policy chunks, and mixed quality chunks. It would be better to have some analysis here. Presentation Questions: 3. In Algorithm 1, L_{dis} is calculated without being used? 4. In Eq.(4), what is $M((P_k^*, \tau))$? It is used without any explanation. Also, how many inputs does M take? Why is there a double-layer bracket? 5. In Line 176, the authors state that $P_k\in \mathbf{R}^{l\times s}$ is the vector multiplication of two low-rank vectors $v_k\in R^{l \times r}$ and $u_k \in R^{l\times s}$.
First, low-rank is a matrix property; there is no low-rank vector. Second, putting the wording issue aside, what is the vector multiplication here? If it is matrix multiplication, which is used in low-rank decomposition methods, then the shape of the output should be $r\times s$. Better to clarify here. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See my above comments for weaknesses and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
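Q5's shape question can be checked numerically: with the corrected factor shape $u_k \in \mathbb{R}^{r \times s}$ (rather than $l \times s$ as printed in line 176, a correction the rebuttal below confirms), the ordinary matrix product recovers the stated $l \times s$ prompt. The dimensions here are hypothetical:

```python
import numpy as np

l, r, s = 5, 1, 128                 # hypothetical prompt length, rank, embedding dim
v_k = np.random.randn(l, r)         # low-rank factor, shape (l, r)
u_k = np.random.randn(r, s)         # low-rank factor, shape (r, s) -- not (l, s)
P_k = v_k @ u_k                     # ordinary matrix multiplication
assert P_k.shape == (l, s)          # recovers the stated prompt shape
assert np.linalg.matrix_rank(P_k) <= r   # rank bounded by the factor rank
print(P_k.shape)                    # (5, 128)
```

With the shapes as printed in the paper ($v_k \in \mathbb{R}^{l\times r}$, $u_k \in \mathbb{R}^{l\times s}$) the product is not even defined, which is what prompts the reviewer's question.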
Rebuttal 1: Rebuttal: Thanks for reviewing our work attentively. We will answer the reviewer's questions one by one in the following. **W1: The paper appears to be hastily written and the presentation is hard to follow** We will revise all unclear and erroneous statements highlighted by the reviewer to further improve the logical structure and clarity of the paper. **W2: The problem about prompt decomposition** The significance of prompt decomposition lies in splitting the task prompt $P_k^*$ into cross-task prompts $P_c$ and task-specific prompts $P_k$. Both prompts are trained concurrently during the training process, where data from different tasks are sequentially inputted into the model. For the current task $k$, the model computes $P_k^*$ as the element-wise product of $P_k$ and $P_c$, which is then concatenated with the input data and optimized on the current task. Using a standard normal distribution to initialize $P_c$, $u_k$ and $v_k$ is a common practice for prompt initialization [1,2]. $P_c$ is termed "slow" because it captures universal knowledge shared among all tasks in the task set $S$. Its learning rate should be slow to prevent overfitting to specific tasks. On the other hand, $P_k$ is termed "fast" because it compensates by quickly adapting to the characteristics of the current task, aiding better model convergence. In our experiments, we attempted different learning rates for $P_c$ and $P_k$ but found that performance was optimal when both had the same learning rate. We speculate that this is because, over training iterations, both prompts converge to their optimal values, and setting different learning rates disrupts their joint convergence, resulting in poorer performance under similar runtime conditions. Below, we present experimental results on the ML45 dataset where different learning rates were applied to $P_c$ and $P_k$.
||$lr_{P_c}$=0.01|$lr_{P_c}$=0.001|$lr_{P_c}$=0.0001| |-|-|-|-| |$lr_{P_k}$=0.01|310.74|307.36|311.40| |$lr_{P_k}$=0.001|198.17|350.99|338.21| |$lr_{P_k}$=0.0001|204.94|104.07|347.21| [1] https://arxiv.org/abs/2109.04332 [2] https://arxiv.org/abs/2210.02390 **W3: How to obtain teacher task prompt?** There is a separate training process for $P^{teacher}_k$, which has dimensions $\mathbb{R}^{l\times s}$. Prompt tuning is used to learn $P^{teacher}_k$ independently for each task $k$. Since there are no gradient conflicts in training on individual tasks, this process is fast and straightforward. We set the batch size to 256 and train for 100 epochs, typically converging in about 1 hour. There are indeed three training stages, but we consider the training of $P^{teacher}_k$ as a preparatory data phase. Once learned, $P^{teacher}_k$ does not require retraining. **W4: Can authors clarify what is subset $X$?** Please check Author Rebuttal AR2. **Q1: How do tasks for each meta-environment differ?** Each task within the environment has distinct reward functions and state transition functions. The Cheetah-dir environment is one-dimensional, rewarding agents based on the angular difference between their movement direction and target direction. The Cheetah-vel environment is also one-dimensional, rewarding agents based on the difference between their velocity and a target velocity. The Ant-dir environment is two-dimensional, rewarding agents based on the angular difference between their movement direction and uniformly sampled target directions within 360 degrees. ML10, ML45, MT10 and MT50 are trained and tested in a three-dimensional physical space. Each task in these datasets is a goal-conditioned environment, where the state space across all tasks shares the same dimensions. The action space remains identical across different tasks, though specific dimensions in the state space represent different semantic meanings.
The task divisions are detailed in Table 9 of Appendix C. Due to space limitations, we describe 14 robotic manipulation tasks in Table 1 of the attached PDF for the Author Rebuttal.

**Q2: How does the quality of the few-shot prompt affect the model's performance?** Intuitively, the quality of the data used for fine-tuning cross-task prompts does affect final model performance. We conducted additional experiments focusing on data quality. In the Cheetah-vel and ML45 environments, we differentiated dataset quality into expert, medium, random, and mixed datasets. Specifically, we randomly selected labelled test-set trajectories and partitioned the first 30% as random, the middle 30% as medium, and the last 30% as expert datasets. The mixed datasets are an even blend of these three types. Each dataset consists of 200 time steps, which aligns with the few-shot setup relative to the size of the training set.

| | Cheetah-vel | ML45 |
|-|-|-|
| expert datasets | -30.10 | 586.84 |
| medium datasets | -41.73 | 502.64 |
| random datasets | -935.66 | 37.91 |
| mixed datasets | -30.73 | 579.09 |

We found that models fine-tuned on expert datasets perform best, which aligns with our intuition. Additionally, the performance of models fine-tuned on mixed datasets is close to that on expert datasets, suggesting that the MPDT method can extract information from suboptimal datasets to maintain model performance.

**Q3: $L_{dis}$ is calculated without being used?** We are sorry for this mistake. The parameter update rule is $\theta \leftarrow \theta-\alpha \nabla_\theta \mathcal{L}_{Total}$.

**Q4: Why is there a double-layer bracket?** $\mathcal{M}(P_k^*,\tau)$ denotes the concatenation of task prompt $P_k^*$ and trajectory $\tau$ inputted into MPDT $\mathcal{M}$.

**Q5: What is the vector multiplication here?** The dimension of $u_k$ should indeed be $\mathbb{R}^{r\times s}$.
Furthermore, we should refer to both $v_k$ and $u_k$ as low-rank matrices rather than vectors, as these parameters exhibit matrix properties. Therefore, $P_k\in\mathbb{R}^{l\times s}$ is obtained through the matrix multiplication $P_k = v_k\otimes u_k$. --- Rebuttal 2: Title: Respectful Request for Reviewer's Valuable Feedback on Our Rebuttal Comment: Dear Reviewer zhrS, We sincerely appreciate the time and effort you have already dedicated to reviewing our submission. We have carefully considered and addressed your initial concerns regarding our paper. Given that the discussion phase is nearing its end, we would greatly appreciate it if you could share any further feedback so we can respond promptly. Additionally, we welcome any new suggestions or comments you may have. Thank you once again for your valuable insights and continued support. Best regards, The authors of Submission 9319
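To make the prompt construction discussed in W2 and Q5 concrete, here is a minimal NumPy sketch; the shapes `l`, `s`, and `r` are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)
l, s, r = 30, 768, 4           # prompt length, embedding size, low rank (assumed)

# Task-specific prompt built from two low-rank matrices: P_k = v_k @ u_k
v_k = rng.standard_normal((l, r))
u_k = rng.standard_normal((r, s))
P_k = v_k @ u_k                # (l, s), but only l*r + r*s trainable values

# Cross-task ("slow") prompt, shared across all tasks
P_c = rng.standard_normal((l, s))

# Task prompt: element-wise product of the two components
P_star = P_c * P_k             # (l, s)

# Prepend the prompt to the embedded input trajectory tokens
tau = rng.standard_normal((20, s))            # embedded (r-hat, s, a) tokens
model_input = np.concatenate([P_star, tau])   # (l + 20, s)
print(model_input.shape)
```

Only the factors $v_k$, $u_k$ (i.e., $l\cdot r + r\cdot s$ values instead of $l\cdot s$) and the shared $P_c$ carry trainable parameters in this sketch, which is the source of the parameter savings of the low-rank factorization.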
Summary: This paper proposes a novel Multi-Task Prompt Decision Transformer (MPDT), which leverages pre-trained language models as the initialization and adopts test-time adaptation. This approach achieves efficient generalization to unseen tasks through the prior knowledge from the pre-trained language model and by decomposing the task prompt. In the multi-task training stage, the prompt is decomposed into a cross-task prompt and a task-specific prompt, which reduces gradient conflicts and computational load. Besides, the task-specific prompt is further decomposed into two low-rank vectors. Prompt distillation is also used to improve the quality of the prompt decomposition. In test-time adaptation, the method further optimizes the cross-task prompts on unseen tasks via an alignment loss. An empirical study on Meta-RL benchmarks demonstrates the superior performance of MPDT compared to existing methods. Strengths: - This paper leverages GPT as the initialization to provide prior knowledge for RL tasks, which is reasonable. The empirical study demonstrates that MPDT with the pre-trained language model initialization outperforms MPDT without it. - This paper utilizes prompt decomposition and only updates the prompt parameters in multi-task training, which significantly reduces the trainable parameters. Compared with the baselines, MPDT achieves superior performance with fewer trainable parameters. - The empirical study demonstrates that MPDT outperforms the baselines, and further analysis demonstrates the effect of each component of the method. Weaknesses: - This paper proposes utilizing a pre-trained language model as initialization but lacks an explanation of why a pre-trained language model could contribute to RL tasks. - This method needs a long prompt to perform well compared with Prompt DT. - This paper lacks a detailed explanation of why the cross-task prompt, task-specific prompt, and prompt distillation are used.
Technical Quality: 3 Clarity: 2 Questions for Authors: - Can you explain why you use the pre-trained embedding layer for word tokens to encode the RL tokens? - Can you explain more about why you use prompt decomposition and prompt distillation? - Can you explain why you need to use a long prompt to achieve better performance? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing our work attentively. We answer your questions one by one below. Given the word limit, we combine weaknesses and questions with similar meanings in our answers. **W1: Why could the language model contribute to the RL tasks?** In Section 4.1, in the Initialization part, we explained the reasons for using language models for initialization: "Given that Transformers are data-intensive and require pre-training on substantial datasets to achieve satisfactory performance, integrating PLMs from the same architectural family into offline RL is a natural progression. Incorporating PLMs into RL is not predicated on the direct applicability of language data to RL tasks. Instead, the advantage lies in leveraging the deep, nuanced representations acquired by PLMs from a variety of datasets. These representations encode a broad spectrum of patterns, relationships, and contexts that can transcend purely linguistic tasks." Additionally, incorporating the parameters of pre-trained language models is also motivated by several studies in RL [1,2]. Leveraging the rich prior knowledge encoded in PLMs effectively addresses the data hunger of transformer architectures, providing ample semantic information for RL tasks. These papers demonstrate the advantages of using language models for RL. Our approach differs from previous work by freezing the core architecture of the language model and using trainable prompts to guide the model's knowledge toward RL tasks. This approach reduces computational demands (Table 7) while enhancing transferability. Experimental results in Table 1 comparing MT-ORL, MPDT-WP, and MPDT further validate the benefits of initializing MPDT with PLMs.
[1] https://arxiv.org/abs/2310.20587 [2] https://arxiv.org/abs/2201.12122 **W2 & Q3: Why do you need to use a long prompt to achieve better performance?** In Prompt DT, the prompts used during training and testing consist of carefully selected high-quality trajectory segments. The original Prompt DT paper does not specify the exact method for selecting high-quality prompts, and since the main focus of training is the model itself rather than the prompt, the requirements on prompt length are relatively low. Furthermore, the performance of Prompt DT requires sufficient support from prior information (labeled test data), as illustrated in Figure 3 of the Prompt DT paper, where prompt quality significantly impacts model performance, especially in scenarios with poor data quality. The key difference in the MPDT method lies in freezing the core model, significantly reducing the parameter count compared to Prompt DT (1.42M vs 125.5M). Hence, directly comparing prompt lengths between Prompt DT and MPDT may be unfair. On the other hand, MPDT optimizes prompts initialized completely at random on the training data, thus requiring almost no prior knowledge. In Figure 3 (with fixed training duration), we find that a prompt length of around 30 performs best. Without considering computational costs and training time constraints, we re-evaluated prompt lengths of 3 and 9 on two tasks (trained until model convergence) and found that a prompt length of around 9 also achieves highly competitive results. Moreover, prompt lengths between 9 and 30 are feasible in scenarios using a language model, as they do not noticeably increase the computational burden.
| prompt length | Cheetah-vel | ML45 |
|-|-|-|
| 3 | -299.20 (about 8.5 h) | 110.83 (about 11.0 h) |
| 9 | -141.49 (about 4.8 h) | 348.03 (about 6.7 h) |
| 30 | **-139.88** (about 2.4 h) | **347.21** (about 4.9 h) |

[1] https://arxiv.org/abs/2206.13499

**W3 & Q2: This paper lacks a detailed explanation of why the cross-task prompt, task-specific prompt, and prompt distillation are used.** In multitask reinforcement learning, conflicting gradients arising from different tasks' objectives and environments are a key factor behind poor model performance. If we separate the features common across tasks from the conflicting ones, this not only facilitates training the model on existing tasks but also allows us to use the common features (manifested as prompts in our work) for prompt initialization in downstream tasks. The cross-task prompt remains consistent across all training tasks, while the task-specific prompt is tailored to each task's unique characteristics. Our structured decomposition enables more regulated and harmonious parameter updates, thereby enhancing parameter efficiency and facilitating the extraction of general knowledge. The primary purpose of prompt distillation is to aid in further separating the two types of prompts. Due to the lack of explicit constraints, directly implementing prompt decomposition on the multitask dataset $S$ may lead to an overlap in the information learned by $P_c$ and $P_k$, potentially undermining their ability to capture distinct intended details. We found that knowledge distillation from prompts trained separately on individual training tasks was a successful approach to obtaining well-decomposed prompts. We verify the performance of the proposed components in Table 3 of the main text.

**Q1: Why do you use the pre-trained embedding layer for word tokens to encode the RL tokens?** Please check Author Rebuttal AR3.
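As an illustration of the distillation idea discussed in W3 & Q2, the sketch below pulls the composed prompt toward a per-task teacher prompt. The squared-error form of $\mathcal{L}_{dis}$ and all shapes are assumptions made for illustration, not the paper's exact loss:

```python
import numpy as np

rng = np.random.default_rng(0)
l, s = 30, 16
P_c = rng.standard_normal((l, s))        # cross-task prompt (held fixed here)
P_k = rng.standard_normal((l, s))        # task-specific prompt (updated)
P_teacher = rng.standard_normal((l, s))  # prompt trained on task k alone

def l_dis(P_c, P_k, P_teacher):
    # Assumed squared-error distillation loss on the composed prompt.
    return float(np.mean((P_c * P_k - P_teacher) ** 2))

loss_start = l_dis(P_c, P_k, P_teacher)
lr = 0.02
for _ in range(200):
    diff = P_c * P_k - P_teacher
    P_k -= lr * 2.0 * P_c * diff         # elementwise gradient step on P_k
loss_end = l_dis(P_c, P_k, P_teacher)
print(loss_start, loss_end)
```

Each element of the composed prompt contracts toward its teacher value independently, so the distillation target explicitly constrains what the task-specific prompt must capture.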
--- Rebuttal Comment 1.1: Comment: Thank the authors for their detailed responses and additional experiments. The Rebuttal AR3 experiment demonstrates the performance of keeping the word embedding. However, the explanation cannot convince me. I will keep the score. --- Rebuttal 2: Title: Response to the Reviewer urHM Comment: We sincerely appreciate the reviewer's thorough examination and hold the reviewer's opinions in the highest regard. We strive to do our utmost to address the reviewer's concerns, not only for the acceptance of the paper but also to provide meaningful insights for the RL community. The reviewer's concern can be understood as the possibility that not retraining certain layers of the language model may result in an inadequate understanding of RL data by the model. We believe preserving the language model's knowledge is crucial, and prompts can bridge the gap between the two tasks without retraining the embedding layer or GPT blocks. In AR3, we conducted an ablation study on whether to freeze the embedding layer. The experimental results are intuitive: training only the embedding layer while freezing the GPT blocks still leads to a suboptimal understanding of the input information. Ablation study conclusions from [1][2] also demonstrate that the embedding layer and GPT blocks are highly correlated ([1] performs full fine-tuning of a language model on RL tasks, while [2] trains a language model on RL tasks using LoRA). Comparing [1][2] with MPDT can further address the reviewer's concerns. The reviewer's concern can be generalized to a choice between (1) freezing both the embedding layer and GPT blocks while adding external components (e.g., prompts), and (2) training both the embedding layer and GPT blocks together (either via full fine-tuning or LoRA); only these two approaches can achieve better performance. The choice between them involves a trade-off between model performance, computational cost, and scenario requirements.
Our decision to adopt approach (1) is based on the following reasons. From the perspective of model performance, the primary purpose of using prompts is to preserve the model's inherent prior knowledge as much as possible. Both [2] and we believe that the full model fine-tuning used in [1] may lead to model overfitting, further disrupting the internal knowledge of the model. We considered adapting the method from [2] to our task (training the embedding layer and LoRA), but the results obtained based on the code provided by [2] differed significantly from those in the original paper. Moreover, when we applied [2] to our datasets, ML45 and MT50 (as shown in the table below), the performance on the training set was far inferior to MPDT, and even careful hyperparameter tuning could not alleviate the significant fluctuations in the reward curve. We speculate that language models using the LoRA structure may still face convergence difficulties on RL data.

| | ML45 (Training Performance) | MT50 (Training Performance) |
|-|-|-|
| LAMO [2] | $367.563\pm129.37$ | $930.84\pm437.62$ |
| MPDT | $604.21\pm11.05$ | $1687.35\pm4.09$ |

Furthermore, [2] introduced a language prediction auxiliary loss to ensure the model retains its language task memory during training. However, if the introduction of LoRA is entirely suitable and does not disrupt the internal knowledge of the model, there should be no need for an auxiliary loss. In other words, we must fully understand whether retraining the model internally (including the embedding layer or GPT blocks) fundamentally introduces new knowledge by disrupting existing knowledge (replacement) or guides the model to use existing knowledge to achieve RL tasks (integration). The distinction between these two approaches results in differences in the model's performance ceiling and floor. The extent of adjustments within the model may need to be carefully controlled.
Most importantly, freezing the embedding layer and GPT blocks preserves the model's complete knowledge, which we believe is the better choice, and the experimental results support this. From the perspective of computational cost, the advantages of MPDT are clear (1.42M vs. 125.5M for full fine-tuning, or 1.42M vs. 3.5M for LoRA). Fully fine-tuning a model is undoubtedly costly, and LoRA's 3.5M parameters reflect the adjustable parameter count in a single-task scenario, as reported in [2]. When extending to a multi-task scenario, the parameter count increases proportionally to the number of tasks. From the perspective of scenario requirements, prompts have sufficient capacity to guide pre-trained models to adapt to reinforcement learning tasks with smaller data sizes. Additionally, achieving zero-shot generalization with prompts causes significantly less disruption to the model's knowledge, with a lower risk of error than fully fine-tuning the model. We have argued, from both experimental results and first principles, why it is preferable to freeze the embedding layer and GPT blocks. We sincerely welcome further discussion with the reviewer and will try to address any remaining concerns. [1] Can Wikipedia Help Offline Reinforcement Learning? arXiv preprint arXiv:2201.12122. [2] Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning. arXiv preprint arXiv:2310.20587.
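The freeze-versus-train trade-off discussed above can be illustrated with a toy gradient-descent sketch in which only the prompt receives updates while the "pre-trained" weights stay frozen (a schematic under assumed toy dimensions, not the authors' training code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))   # stand-in for frozen pre-trained weights
prompt = np.zeros(d)              # the only trainable parameters
x = rng.standard_normal(d)        # toy input
y = rng.standard_normal(d)        # toy target

def loss(p):
    err = W @ (x + p) - y
    return float(0.5 * np.dot(err, err))

W_before = W.copy()
loss_start = loss(prompt)
lr = 1e-2
for _ in range(500):
    err = W @ (x + prompt) - y
    prompt -= lr * (W.T @ err)    # gradient w.r.t. the prompt only
loss_end = loss(prompt)
print(loss_start, loss_end)
```

The backbone `W` is bitwise unchanged after training; the prompt alone steers the frozen model toward the target, mirroring the "integration rather than replacement" argument.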
Rebuttal 1: Rebuttal: We thank all the reviewers for their helpful feedback. Here, we address three main comments: innovation (AR1), the collection method of unlabeled data for TTA (AR2), and the purpose of using a pre-trained embedding layer (AR3). AR4 lists the hyperparameters used during training and testing. The remaining questions and concerns are addressed in individual responses. In the attached PDF, we provide visualizations of the cross-task prompt and the task-specific prompts for MT10, along with introductions to 14 tasks in the MT50 dataset. All of these additional experiments and suggestions have been added to the updated main text. **AR1:** Our proposed MPDT framework encompasses the following innovations: - Building upon prior knowledge from pre-trained models, we propose prompt decomposition and accelerate convergence through prompt distillation. Our idea of learning a transferable cross-task prompt by decomposing and distilling knowledge from multitask datasets is unique; it not only makes prompt learning more performant but also requires fewer parameters. - Current TTA feature alignment techniques primarily focus on directly optimizing model feature layers. In scenarios with extremely limited unlabeled test samples, we instead use cross-task prompts as a robust initialization. By utilizing alignment-based reverse optimization of prompts, we preserve the language model's capabilities and condense task-specific information into cross-task prompts. The ablation results in the table below demonstrate that using randomly initialized prompts for TTA initialization severely compromises model performance. - Our innovations interlock seamlessly, integrating tightly into an efficient and practical framework and paving the way for future paradigms in multi-task offline reinforcement learning.
|Decomposition|Distillation|TTA|Cheetah-vel|MW ML45|MW MT50|remark|
|-|-|-|-|-|-|-|
| $\times$ | $\times$ | $\times$ | -171.23 | 91.97 | 400.71 | |
| $\times$ | $\times$ | $\checkmark$ | -180.39 | 99.82 | 644.33 | Randomly initialized prompt |
| $\checkmark$ | $\checkmark$ | $\checkmark$ | $\mathbf{-139.88}$ | $\mathbf{347.21}$ | $\mathbf{1559.94}$ | |

**AR2:** Here, we introduce the data collection method for $X$. The model's testing phase usually occurs in a simulated environment where we predefine our expected reward value $\hat{r}$. The environment provides the initial state $s$, consistent with the settings during inference in prompt DT methods. However, unlike in training tasks where ground-truth labels exist, for action $a_1$ we assign a value sampled randomly from the action space (which is typically consistent between training and testing tasks). Action dimensions and distributions are shown in the table below. We feed this sequence of Markov transitions into the environment, obtaining rewards and next states iteratively, and assign a randomly sampled value to action $a_2$ and subsequent actions in later iterations. This process is repeated $|X|$ times, resulting in data of the form $(\hat{r}_0, s_0, a_0, \hat{r}_1, s_1, a_1, \ldots, \hat{r}_{|X|}, s_{|X|}, a_{|X|})$. We consider this method of randomly sampling action values similar to assigning pseudo-labels to the data for TTA, helping the prompt understand the characteristics of the current task.

| | Action dimension | Action distribution |
|-|-|-|
| Cheetah-dir | 6 | [-1,1] |
| Cheetah-vel | 6 | [-1,1] |
| Ant-dir | 8 | [-1,1] |
| MW ML10 | 4 | [-3,3] |
| MW MT45 | 4 | [-3,3] |
| MW MT10 | 4 | [-1,1] |
| MW MT50 | 4 | [-1,1] |

**AR3:** The purpose of using a pre-trained embedding layer is to maximize the retention of the internal information of the language model.
Considering the differences between our input $(\hat{r},s,a)$ and natural-language inputs, part of the prompt's function is also to guide the language model in understanding inputs from RL tasks. To validate the effectiveness of using the pre-trained embedding layer, we adopted the RL-token embedding-layer design from Prompt DT. During multi-task training, we concurrently trained the embedding layer, denoting this variant MPDT-E. Each experiment was run three times to ensure stability and reproducibility of the results.

| | Cheetah-vel | MW ML45 | MW MT50 |
|-|-|-|-|
| MPDT-E | -186.07 | 301.49 | 380.36 |
| MPDT | -139.88 | 347.21 | 1559.94 |

We found that training a new RL-token embedding layer yielded performance inferior to using the pre-trained embedding layer. We attribute this mainly to the retraining of the embedding layer disrupting the language model's understanding of the input. Additionally, our inputs treat $(\hat{r},s,a)$ as a single token, with temporal relationships between tokens, resembling traditional language input structures to some extent. Simultaneously, guidance from the prompt bridges the remaining gap, preserving the performance gains from the pre-trained embedding layer more effectively than retraining.

**AR4:**

|Hyperparameters|Value|
|-|-|
|K|20|
|demonstration length|20|
|training batch size for each task $M$|16|
|number of layers|12|
|number of attention heads|12|
|number of gradient updates in each iteration|5|
|number of evaluation episodes for each task|5|
|learning rate|1e-4|
|learning rate decay weight|1e-4|
|activation|ReLU|

Pdf: /pdf/32e93b691c7bd8035dfd3e279e16bcb8f11b4d88.pdf
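The unlabeled-data collection procedure described in AR2 can be sketched as follows; the toy environment and its `reset`/`step` interface are stand-ins for the MuJoCo/Meta-World simulators, and the target return, dimensions, and bounds are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyEnv:
    """Stand-in environment, not the benchmarks used in the paper."""
    def __init__(self, state_dim=4):
        self.state_dim = state_dim
    def reset(self):
        return rng.standard_normal(self.state_dim)
    def step(self, action):
        next_state = rng.standard_normal(self.state_dim)
        reward = float(-np.sum(action ** 2))
        return next_state, reward

def collect_tta_sequence(env, n_steps, action_dim, low, high, r_hat):
    """Roll out with randomly sampled actions (pseudo-labels) and a fixed
    target return r_hat, recording (r_hat, state, action) triples."""
    seq = []
    state = env.reset()
    for _ in range(n_steps):
        action = rng.uniform(low, high, size=action_dim)  # random pseudo-label
        seq.append((r_hat, state, action))
        state, _ = env.step(action)
    return seq

X = collect_tta_sequence(ToyEnv(), n_steps=200, action_dim=6,
                         low=-1.0, high=1.0, r_hat=500.0)
print(len(X))
```

The resulting sequence plays the role of the unlabeled set $X$: the states come from genuine environment dynamics, while the actions are uniform samples from the action space rather than expert labels.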
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Hierarchical Object-Aware Dual-Level Contrastive Learning for Domain Generalized Stereo Matching
Accept (poster)
Summary: The authors propose a novel framework to achieve strong domain generalization from synthetic datasets with disparity and semantic labels. To achieve this goal, the framework employs hierarchical object-aware dual-level contrastive learning (HODC) to guide the backbone network toward robust and general feature extraction: these general features are the key to achieving good generalization results. In particular, to achieve general feature extraction, the authors present two hierarchical contrastive losses -- i.e., the contrastive loss is applied at multiple scales -- that work i) at the same scale level (intra-scale) and ii) at different scale levels (inter-scale). Exhaustive experiments (qualitative and quantitative) on five real datasets confirm the superiority of the proposal w.r.t. other related domain generalization techniques. Strengths: **State-of-the-art performance**: Tab. 1 assesses the performance of the proposal against several domain generalization frameworks (i.e., MS-Net, FCStereo, ITSA, GraftNet, and HVT) and one recent SOTA stereo architecture, i.e., IGEV. Sometimes the proposal shows an error reduction of over 1%, which is remarkable. The HODC framework also exhibits competitive performance w.r.t. methods that achieve domain generalization using networks trained on NeRF-generated stereo datasets (R1) or guided by an external sparse depth sensor (a strong assumption) (R2). Finally, Tab. D shows interesting results on another automotive dataset: the proposal confirms its superiority over other techniques and often over the vanilla network fine-tuned in the final domain. **Extensive experiments**: The main experiment in Tab. 1 is exhaustive -- i.e., it includes other recent domain generalization techniques and a recent SOTA network. Furthermore, the authors run extensive ablation studies w.r.t. all proposed components of the framework. Notably, in Tab. 2 the authors empirically demonstrate the effects of positive pairs and (more importantly) the impact of negative pairs, improving on studies in the previous literature. **The reading was smooth**: The authors wrote the paper in a linear and clear way. They expose the problem to the reader and, following logical steps, arrive at the proposed solution. The figures are clear and help the reader understand the proposal and the results. There are some minor imperfections that can be fixed (see the questions paragraph). (R1) Tosi, F., et al. (2023). NeRF-supervised deep stereo. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 855-866). (R2) Bartolomei, L., et al. (2023). Active stereo without pattern projector. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 18470-18482). Weaknesses: **The method requires semantic labels to achieve SOTA performance**: it is true that the proposal is effective even without the object prior; however, the prior is necessary to achieve SOTA performance, and not all datasets provide this information. The authors could have studied the usage of a SOTA segmentation framework (e.g., (R3)) to replace ground-truth semantic labels. (R3) Kirillov, A., et al. (2023). Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 4015-4026). Technical Quality: 3 Clarity: 3 Questions for Authors: Before the questions, I summarize here the motivations behind my overall rating: given the text clarity, the exhaustive experiments w.r.t. competitors and other studies (i.e., ablation studies and the effects of negative and positive feature pairs), and the requirement of additional semantic labels, my final rating is "Weak Accept". The authors can improve the paper by showing the effects of a SOTA segmentation network replacing ground-truth labels (this could be done offline, so there are no constraints on the segmentation model).
Furthermore, experiments could be extended to other recent stereo networks robust against domain shift (e.g., RAFT-Stereo (R4)). **Minor comments**: 1) It is true that the framework does not require changes to the stereo network architecture; however, it still needs access to the model to train it with the proposed losses -- i.e., the stereo network is seen as a "white box" model. 2) Row 220: I suggest the authors add a small example of the global and local scales to further reduce ambiguities (e.g., k=2, N_h = 2, N_w = 4 -> global scale is (2,4) and local scale is (4,8)). 3) Tab. 1: are techniques evaluated using all pixels or only non-occluded pixels? There is a small typo: it is ITSA, not ISTA. (R4) Lipson, L., et al. (2021). RAFT-Stereo: Multilevel recurrent field transforms for stereo matching. In 2021 International Conference on 3D Vision (3DV) (pp. 218-227). IEEE. ## After Reviewer-Authors decision After carefully reading the Authors' rebuttal, I decided to confirm my previous decision. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. However, I would have also added that the method requires semantic labels. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback and helpful suggestions. We would like to make the following responses to your questions: > Q1: The method requires semantic labels to achieve SOTA performance: it is true that the proposal is effective even without the object prior, however, it is necessary to achieve SOTA performance: not all datasets output this information. Authors could have better studied the usage of a SOTA segmentation framework (*e.g.*, (R3)) to replace ground-truth semantic labels. A1: Thanks for expressing your concerns. We would like to highlight that, in contrast to other methods that utilize additional information (*e.g.*, features pretrained on large-scale labeled datasets [3]) to improve the generalization ability of stereo matching networks, our method relies solely on synthetic data sources. In synthetic stereo datasets, object labels can be generated along with disparity maps without much extra effort, and they are often available [4, 5]. As using pseudo labels generated by models pre-trained on real data does not strictly follow the synthetic-to-real setting, in the future we plan to directly derive pseudo object masks from disparity for finding semantically and structurally driven matches. Additionally, our HODC is orthogonal to other SOTA domain generalization approaches (*e.g.*, hierarchical visual transformation [1], adversarial shortcut perturbations [2]). The proposed HODC can incorporate these methods for data augmentation, which could further improve its generalization performance. > Q2: Furthermore, experiments could be extended using other recent stereo networks robust against domain shift (e.g., RAFT-Stereo (R4)). A2: Thank you for your advice.
In our experiments, we included IGEV [6], a more recent deep network architecture that serves as a strong baseline, as it combines RAFT-Stereo with a Geometry Encoding Volume (GEV) and shows SOTA in-domain as well as generalization performance. Experiments in Table 1 show that the generalization performance of HODC-IGEV is also substantially improved compared to the baseline, demonstrating the compatibility of HODC with recent stereo networks to enhance their robustness to domain shift. > Q3: It is true that the framework does not require changes in the stereo network architecture, however, it still needs access to the model to train it with the proposed losses -- i.e., the stereo network is seen as a "white box" model. A3: Thank you for providing insights regarding the extensibility of HODC under particular constraints. We wish to highlight that our HODC operates at the intermediate stage (*i.e.*, after feature extraction) of a stereo matching pipeline, and hence it does not need to access the particular structure of the model. Furthermore, as feature extraction is a very common component in deep-learning-based stereo matching networks, it is rather straightforward to integrate HODC into existing models, thus ensuring its universality. > Q4: Row 220: I suggest the authors add a little example of global and local scale to further reduce ambiguities (e.g., k=2, N_h = 2, N_w = 4 -> global scale is (2,4) and local scale is (4,8)). A4: Thank you for the suggestion to improve our paper. We will include the suggested example in the final manuscript to reduce ambiguities. > Q5: Tab. 1: techniques are evaluated using all pixels or only non-occluded pixels? A5: Following the implementations of recent SOTA methods (e.g., ITSA and HVT), we evaluate all pixels in the KITTI (and DrivingStereo) datasets and non-occluded pixels in the Middlebury and ETH3D datasets. > Q6: There is a little typo: it is ITSA, not ISTA. A6: Thank you for pointing this out.
We will fix the typos in our final manuscript. **References** [1] Domain Generalized Stereo Matching via Hierarchical Visual Transformation. CVPR 2023. [2] ITSA: An Information-Theoretic Approach to Automatic Shortcut Avoidance and Domain Generalization in Stereo Matching Networks. CVPR 2022. [3] GraftNet: Towards Domain Generalized Stereo Matching with a Broad-Spectrum and Task-Oriented Feature. CVPR 2022. [4] Virtual Worlds as Proxy for Multi-Object Tracking Analysis. CVPR 2016. [5] A naturalistic open source movie for optical flow evaluation. ECCV 2012. [6] Iterative Geometry Encoding Volume for Stereo Matching. CVPR 2023. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thanks for your effort in the response. After carefully reading your rebuttal, this is my reply: **A1**: - _in the future we plan to directly derive pseudo object masks from disparity for finding semantically and structurally driven matching._ I agree: this is a valid alternative to my suggestion. **A2** - _...a more recent deep network architecture that serves as a strong baseline, as it incorporates RAFT-Stereo..._ I agree. However, RAFT-Stereo was just an example: there are a lot of recent SOTA stereo networks that could be inserted to enrich the paper. **A3** - _and hence it does not need to access the particular structure of the model._ I apologize for the confusion: for "white-box model" I was referring to networks that allow i) access to feature extraction architecture; ii) back-propagation. In practice, your proposal requires stereo networks with public code available (just a comment, not an issue). - _Furthermore, as feature extraction is a very common component in deep-learning-based stereo matching networks, it is rather straightforward to integrate HODC into existing models, thus ensuring its universality._ I agree. **A5** Thanks for the clarification. I suggest the authors to include that in the paper. 
Best regards, Reviewer ApGF --- Reply to Comment 1.1.1: Comment: Dear Reviewer ApGF: Thanks for your response and insightful comment on our work. We are glad to see that your concerns have been addressed. As you suggested, we will include them in our final version. Thanks again for your constructive advice! Best regards, The Authors
Summary: The authors propose an additional training objective for image stereo matching methods in order to improve generalization from synthetic training data to real test images. It consists of a contrastive loss pushing image features aggregated according to some superpixels in one image to be similar to the corresponding feature aggregates in the other image (resp. dissimilar to non-corresponding feature aggregates), across different scales. The authors exploit object instance segmentation available in synthetic training data (SceneFlow dataset in this study) to perform this superpixel segmentation, and show in their experiments that the proposed objective function consistently brings performance improvements on various real-image datasets. Strengths: The paper is very well written and was a pleasure to read. The proposed idea is simple and well presented, and the results and ablations are convincing. Weaknesses: Minor remarks: - the smooth L1 loss should be mathematically defined or a reference should at least be provided. - Reporting baseline results without the proposed additional losses in Table 3 would make the table easier to interpret. Technical Quality: 4 Clarity: 4 Questions for Authors: - How do the different methods evaluated perform on the SceneFlow test set? Reporting the impact of the proposed approach on in-domain performance might be insightful. - Ablation results in Table 3 using the proposed losses without the object index map are rather good compared to the baseline methods without contrastive learning. This suggests that training for multi-scale correspondences may be of greater importance than using instance segmentation cues, which would downgrade the importance of semantics. Reporting results when using a small base superpixel size (e.g. M=32), with contrastive losses but without object index priors, could bring insights regarding this. 
Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback and helpful suggestions. We would like to make the following response to your questions: > Q1: the smooth L1 loss should be mathematically defined or a reference should at least be provided. A1: Thank you for pointing this out. We will add a reference to the smooth L1 loss in the final manuscript. > Q2: Reporting baseline results without the proposed additional losses in Table 3 would make the table easier to interpret. A2: Thanks for your advice. We will include the baseline results in Table 3 in the final version of the paper. > Q3: How do the different methods evaluated perform on Sceneflow test set? Reporting impact of the proposed approach on in-domain performance might be insightful. A3: Thank you for your suggestions. Following your suggestion, we have evaluated the in-domain performance of the models with and without our dual-level contrastive loss using the SceneFlow test set following our experiment settings. The results are listed in the table below.

| Method | >1px | >2px | >3px | EPE |
| ---------------------- | ------- | ------- | ------- | -------- |
| PSMNet | 9.2 | 5.1 | 3.8 | 0.96 |
| FC-PSMNet [1] | 13.4 | 7.1 | 5.2 | 1.33 |
| HVT-PSMNet [2] | 9.2 | 5.2 | 3.9 | 1.04 |
| **HODC-PSMNet (Ours)** | 9.0 | 5.0 | 3.8 | 1.03 |
| GwcNet | 8.0 | 4.5 | 3.4 | **0.88** |
| FC-GwcNet [1] | 12.3 | 6.6 | 4.8 | 1.18 |
| **HODC-GwcNet (Ours)** | **7.7** | **4.3** | **3.3** | 0.93 |

Interestingly, the results revealed that our HODC can achieve high cross-domain performance as well as in-domain performance by establishing semantic and structural correspondence, with the same or even better threshold error rate compared to the baselines. Only a small decline is observed in the average end-point error metric. > Q4: Ablation results in Table 3 using the proposed losses without object index map are rather good compared to the baseline methods without contrastive learning. 
This suggests that training for multi-scale correspondences may be of greater importance than using instance segmentation cues, which would downgrade the importance of semantics. Reporting results when using a small base superpixel size (e.g. M=32), with contrastive losses but without object index priors, could bring insights regarding this. A4: Thanks for your advice regarding the role of instance segmentation cues. Instance segmentation cues allow us to generate regions with unambiguous semantic meanings, enabling us to establish semantic and structural correspondence accurately. Referring to Table 3, omitting the object prior leads to a decline in the generalization performance of our HODC. We would like to highlight that $M$ denotes the segmentation scale, and a larger $M$ yields smaller base 'superpixel' sizes with finer representations. We restrict $M$ to be within $128$, as larger values would significantly increase computation cost and memory usage when calculating all pairwise correspondences. **References** [1] Revisiting Domain Generalized Stereo Matching Networks from a Feature Consistency Perspective. CVPR 2022. [2] Domain Generalized Stereo Matching via Hierarchical Visual Transformation. CVPR 2023. --- Rebuttal Comment 1.1: Comment: Thank you for this answer. Regarding A4, indeed I was misled by the 'scale' terminology and confused $(N_w, N_h)$ with the segmentation size, which seems in fact to be $(W/N_w, H/N_h)$. --- Reply to Comment 1.1.1: Comment: We would like to thank you for your invaluable feedback on improving the quality of our paper. Kindly let us know if you have further comments, and we will do our best to address them.
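For completeness, the smooth L1 loss raised in Q1 above is commonly defined (e.g., as popularized by Fast R-CNN) as quadratic below a threshold and linear beyond it. A minimal sketch, where the `beta` threshold of 1.0 is the conventional default rather than anything specified in the paper:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Smooth L1 (Huber-style) loss: quadratic for |x| < beta, linear beyond."""
    a = np.abs(x)
    return np.where(a < beta, 0.5 * a ** 2 / beta, a - 0.5 * beta)
```

With `beta=1.0` this reduces to the familiar form: `0.5 * x**2` for `|x| < 1` and `|x| - 0.5` otherwise, which keeps gradients bounded for large disparity errors while remaining smooth near zero.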
Summary: This paper proposes a new framework for domain generalized stereo matching termed effective hierarchical object-aware dual-level contrastive learning (HODC). HODC improves the domain generalization ability of stereo matching by encouraging region-level information in extracted features. HODC can be easily integrated into current stereo matching architectures to improve their domain generalization ability. Strengths: 1. The proposed method achieves SOTA domain generalization ability across various real-world stereo matching datasets. 2. The method is easy to implement and suits various stereo matching architectures. 3. The paper is well-written. Weaknesses: 1. The qualitative comparison in the article is insufficient. Considering that the generalization capability of stereo networks on public datasets is already quite good, the improvement of the method becomes more important from a visualization perspective. Especially since the article only uses PSMNet for qualitative comparison in Figure 1. 2. The paper lacks visualization analysis, making it difficult to discern how the proposed strategy impacts the learned feature representations. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I would like to see this paper add more qualitative visualizations, as well as evaluate the domain generalization performance on more challenging unseen domains such as the Booster and Spring datasets. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback and helpful suggestions. We would like to address your concerns as follows: > Q1: The qualitative comparison in the article is insufficient. Considering that the generalization capability of stereo networks on public datasets is already quite good, the improvement of the method becomes more important from a visualization perspective. Especially since the article only uses PSMNet for qualitative comparison in Figure 1. A1: Thanks for your advice. Due to space limitations, we included qualitative comparison results for PSMNet, GwcNet and IGEV on KITTI-2012, KITTI-2015, Middlebury, ETH3D in **Figures E**, **F**, and **G**, and results on the additional DrivingStereo [1] dataset in **Figure A** in the appendix of our paper. > Q2: The paper lacks visualization analysis, making it difficult to discern how the proposed strategy impacts the learned feature representations. A2: Thank you for expressing your concerns. Our proposed HODC aligns the regional features from the left image to the right under different scales, aiming at pulling the corresponding representations closer while pushing non-corresponding representations further apart. To validate the feasibility of this strategy in learning features with semantic and structural awareness, we visualized the representation similarity between a selected region within the left image and all pixels in the right image. As shown in **Figure 4**, the proposed HODC can accurately identify matched regions with limited ambiguities. More visualization results for feature representations on KITTI, Middlebury and ETH3D datasets are available in **Figures B**, **C** and **D** in the appendix of our paper. > Q3: I would like to see this paper adds more qualitative visualizations, as well as evaluating the domain generalization performance on more challenging unseen domains such as the Booster and Spring datasets. A3: Thanks for your suggestions. 
Following your advice, we evaluated the generalization performance of HODC on the challenging realistic Booster [2] dataset with quarter resolution and provided comparisons with other domain generalization methods. The results on the Booster training set are reported in the table below.

| Method | >1px | >2px | >3px |
| ---------------------- | -------- | -------- | -------- |
| PSMNet | 71.5 | 55.9 | 47.3 |
| FC-PSMNet [3] | 46.3 | 30.2 | 24.0 |
| HVT-PSMNet [4] | 37.0 | 24.6 | 19.2 |
| **HODC-PSMNet (Ours)** | **36.0** | **23.0** | **18.0** |
| GwcNet | 73.3 | 61.7 | 54.9 |
| FC-GwcNet [3] | 44.2 | 30.8 | 24.8 |
| **HODC-GwcNet (Ours)** | 36.4 | 24.1 | 19.2 |

The results on the Booster dataset indicate that HODC achieves satisfactory results on challenging unseen domains, with significant improvements in generalization ability compared to baseline models. Following your advice, qualitative results on the Booster dataset are also provided in **Figure I** in the attached file in the global response. We would like to highlight that to further evaluate the generalization performance on challenging realistic scenarios, apart from using the widely used datasets, we have also provided quantitative and qualitative results on the DrivingStereo dataset, which contains diverse and challenging driving scenes. These results can be found in Appendix D of our paper. **References** [1] Drivingstereo: A large-scale dataset for stereo matching in autonomous driving scenarios. CVPR 2019. [2] Open challenges in deep stereo: the booster dataset. CVPR 2022. [3] Revisiting Domain Generalized Stereo Matching Networks from a Feature Consistency Perspective. CVPR 2022. [4] Domain Generalized Stereo Matching via Hierarchical Visual Transformation. CVPR 2023.
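The pull/push behaviour described in A2 above (pulling corresponding region representations closer while pushing non-corresponding ones apart) corresponds to a standard InfoNCE-style contrastive objective over paired features. The sketch below is a generic illustration only; the `temperature` value and feature shapes are assumptions for the sketch, not the authors' exact dual-level loss:

```python
import numpy as np

def info_nce_loss(left_feats, right_feats, temperature=0.1):
    """Generic InfoNCE-style contrastive loss over paired region features.

    left_feats, right_feats: (N, D) arrays; row i of each array is assumed
    to be a corresponding (positive) region pair, while all other rows act
    as negatives. Minimizing the loss pulls positives together and pushes
    negatives apart in cosine-similarity space.
    """
    # L2-normalize so dot products become cosine similarities
    zl = left_feats / np.linalg.norm(left_feats, axis=1, keepdims=True)
    zr = right_feats / np.linalg.norm(right_feats, axis=1, keepdims=True)
    logits = zl @ zr.T / temperature             # (N, N) similarity matrix
    # Row-wise softmax cross-entropy with the diagonal as the positive class
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In a dual-level scheme, such a loss would be applied both within a scale and across scales of the region hierarchy; here only the single-level core is shown.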
Summary: This work proposed the hierarchical object-aware dual-level contrastive learning (HODC) framework for stereo matching. Their major technical contribution is a dual-level contrastive loss, which matches object features between intra- and inter-scale regions. Applying the proposed loss and training only on synthetic datasets, various networks achieve state-of-the-art performance across multiple realistic datasets. Strengths: - The paper is well-written and easy to follow. - Instead of using the object information in a multi-task manner, the authors designed a contrastive loss with it, which is considered a novel idea. - The proposed loss can be easily plugged into network training. - The ablation study is thoroughly done. Weaknesses: As mentioned, it is not a new direction to explore semantic and structural information in the stereo matching task (lines 43-45). Although the previous works took a different path when using this information, it is worth comparing the performance between those approaches and the method from this work. Technical Quality: 3 Clarity: 3 Questions for Authors: As mentioned in the Weaknesses section, I am curious about the performance of this work compared to the previous approaches that also utilize semantic information. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations were discussed in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback and helpful suggestions. We would like to make the following response to your questions: > Q1: As mentioned, it is not a new direction to explore the semantic and structural information in the stereo matching task (lines 43-45). Although the previous works took a different path when using this information, it is worth comparing the performance between those approaches and the method from this work. A1: Thanks for your suggestions. In our paper, we have highlighted that unlike our work, earlier works have explored semantic information by introducing subnetworks for semantic segmentation [1] or edge detection [2] within the stereo matching pipeline, adopting a direct multi-task methodology. Additionally, these prior works primarily focused on in-distribution scenarios instead of cross-domain ones. Following your recommendation for a more comprehensive analysis, we have conducted an experiment to compare the generalization performance of our HODC with the prior works [1, 2] that incorporate semantic information in stereo matching. As the source code for EdgeStereo [2] is unavailable, we replicated the experimental settings described in [2], utilizing only the Flyingthings3D training set for training, and then compared the generalization performance. The results presented in the table below demonstrate that networks trained with our HODC significantly outperform these previous approaches and exhibit superior generalization capability.

| Method | KT15_EPE | KT15_3px | KT12_EPE | KT12_3px |
| ---------------------- | -------- | -------- | -------- | -------- |
| SegStereo [1] | 2.2 | 11.2 | 2.1 | 12.8 |
| EdgeStereo [2] | 2.1 | 12.5 | 2.0 | 12.3 |
| PSMNet | 6.4 | 29.9 | 5.5 | 27.3 |
| **HODC-PSMNet (Ours)** | 1.4 | 6.3 | 1.2 | 6.0 |
| **HODC-GwcNet (Ours)** | **1.2** | **5.5** | **0.9** | **4.8** |

**References** [1] SegStereo: Exploiting Semantic Information for Disparity Estimation. ECCV 2018. 
[2] EdgeStereo: An Effective Multi-Task Learning Network for Stereo Matching and Edge Detection. IJCV 2020.
Rebuttal 1: Rebuttal: We thank all the reviewers for providing positive and insightful feedback. We are encouraged by the reviewers' appreciation that the paper is well-written, easy to follow, and a pleasure to read (Reviewer jWZr, NWJx, uUe2, ApGF), that the ideas are novel yet easy to implement (Reviewer jWZr, NWJx), that the results and ablations are convincing (Reviewer jWZr, uUe2, ApGF), and that the figures are clear and help the reader to understand the proposal and the results (Reviewer ApGF). As suggested by the reviewers, we have now further conducted experiments to **1)** compare the generalization performance of HODC with prior works that incorporate semantic information for stereo matching, **2)** evaluate the in-domain performance of the models with and without our dual-level contrastive loss using the SceneFlow test set, and **3)** evaluate the generalization performance of our HODC on the challenging realistic Booster dataset [1]. We have also included additional qualitative comparisons on the Booster dataset, which can be found in the attached file. We hope that these results will provide additional clarification of our method to the reviewers. **Reference** [1] Open challenges in deep stereo: the booster dataset. CVPR 2022. Pdf: /pdf/63bfc106931789924153e20035e9dcdc786a59f1.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Safe LoRA: The Silver Lining of Reducing Safety Risks when Finetuning Large Language Models
Accept (poster)
Summary: This paper proposes Safe LoRA to defend against the harmful finetuning issue for LLMs. The core idea of Safe LoRA is to project the harmful gradient update onto the subspace constructed from the alignment update. To preserve utility performance, the authors propose to use cosine similarity to determine whether the update in a layer should be projected or not. Strengths: 1. This paper proposes a timely solution to the harmful finetuning issue for LLMs. 2. The idea is intuitive and should be an effective solution to the problem. 3. The refinement that uses cosine similarity to decide whether the projection is done for each layer is interesting, and brings practical performance enhancement. 4. The paper is well-written and concise, and I think the potential audience of this paper will be large given its simplicity. Weaknesses: 1) Baseline selection could be more comprehensive. While the authors compare with SafeInstr and BEA, the authors might also consider comparing with Vaccine (Huang et al., 2024), which was available earlier than BEA and has source code available. Huang T, Hu S, Liu L. Vaccine: Perturbation-aware alignment for large language model[J]. arXiv preprint arXiv:2402.01109, 2024. 2) Some important literature is missing from the discussion. Please consider reviewing these related papers on harmful finetuning. 
[1] Fine-tuning can cripple your foundation model; preserving features may be the solution https://openreview.net/forum?id=VQ7Q6qdp0P (ICLR2024 template) [2] Vaccine: Perturbation-aware Alignment for Large Language Model https://arxiv.org/abs/2402.01109 (ICML2024 template) [3] Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates https://arxiv.org/pdf/2402.18540 (ACL2024 template) [4] Immunization against harmful fine-tuning attacks https://arxiv.org/pdf/2402.16382 (ICLR2024 workshop template) [5] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models https://arxiv.org/pdf/2402.02207 (ICML2024 template) ------------------------------------------------------------Concurrent------------------------------------------------------------ [6] Representation noising effectively prevents harmful fine-tuning on LLMs https://arxiv.org/pdf/2405.14577 (NeurIPS2024 template) [7] Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning https://arxiv.org/abs/2405.18641 (NeurIPS2024 template) [8] No Two Devils Alike: Unveiling Distinct Mechanisms of Fine-tuning Attacks https://arxiv.org/pdf/2405.16229 (NeurIPS2024 template) [9] A safety realignment framework via subspace-oriented model fusion for large language models https://arxiv.org/pdf/2405.09055 (also a post-hoc defense like safelora) [10] Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models https://arxiv.org/abs/2405.17374 (NeurIPS2024 template) I am aware that some of the listed work is concurrent work (e.g., concurrent submissions to NeurIPS 2024). However, it is encouraged to also cite and discuss them, because that will be beneficial for the development of the research field (but the authors should at least cite and discuss those existing works that appeared before the NeurIPS2024 review cycle). 3. The experiment can be more comprehensive. For example, will the ratio of harmful data affect the defense performance? 
Technical Quality: 3 Clarity: 3 Questions for Authors: How do you construct the alignment update in the experiment? In Section 3.1, it is claimed that: > For the aligned and unaligned models, take Meta’s Llama for example, the aligned model will be the Chat model such that they are trained with an alignment goal [43, 28]. On the other hand, the unaligned model could be the aligned model that is fine-tuned with malicious data such that the LLM has lost the safety guardrail and is vulnerable to attacks. I am wondering, in the experiments, are you using harmful data to first unalign the chat model, and then obtain the alignment update? If this is the case, I am wondering if the safe gradient update can also be obtained by the following procedure: i) Use vanilla Llama2 (not the chat version) as an unaligned model. ii) Do the alignment on Llama2 and get the aligned model. iii) Subtract the weights of these two models. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper discusses the limitations. I don't think the mentioned limitation should be a problem for the acceptance of this paper. I am willing to further increase the score if the authors can provide experiments to address my raised concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for offering detailed reviews on $\textsf{Safe LoRA}$. We appreciated the comment saying that “The paper is well-written and concise … the potential audience of this paper will be large given its simplicity.” and share the reviewer's perspective on the importance of restoring alignment under practical scenarios. Below we provide a pointwise reply to the reviewer’s comments. &nbsp; **Literature Discussion** We thank the reviewer for bringing up related and concurrent works for discussion. We will cite and discuss the mentioned papers in the revised version. For a brief introduction, please see the general response for the literature discussion. &nbsp; **More Performance Comparison** We ran the official code of Vaccine and trained the model with LoRA on Llama-2. Then, we fine-tuned the Vaccine models (single/double LoRA setting) with $\rho=2$. We show the results on the Dialog Summary and Pure Bad datasets. **Single LoRA** We first train Vaccine with LoRA (q_proj and v_proj), and then fine-tune it on the downstream task datasets. As seen in the table below, Vaccine reduces the harmfulness score to 3.282 on Pure Bad while the utility (MT-Bench $\uparrow$) is not maintained. Furthermore, for Dialog Summary, the utility drops as well while safety shows no improvement. 
| Dataset | Attack (adversarial data) | Fine-tuned | Fine-tuning Method | Utility | Harmfulness Score | ASR |
|:--------------:|:-------------------------:|:----------:|:------------------:|:-------:|:-----------------:|:------:|
| Pure Bad | ✓ | ✓ | LoRA | 4.54 | 4.66 | 95.76% |
| Pure Bad | ✓ | ✓ | Vaccine | 2.812 | 3.282 | 82.42% |
| Dialog Summary | ✓ | ✓ | LoRA | 50.66% | 2.63 | 45.45% |
| Dialog Summary | ✓ | ✓ | Vaccine | 10.83% | 3.209 | 80.30% |

&nbsp; **Double LoRA** We first train Vaccine with LoRA ("q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "down_proj", "gate_proj", the default setting of Vaccine), and then fine-tune another LoRA (q_proj and v_proj) on the downstream task. The results shown below indicate that using double LoRA fine-tuned on Pure Bad reduces utility (MT-Bench $\uparrow$). However, the harmfulness score decreases slightly compared to using LoRA fine-tuning. Regarding Dialog Summary, double LoRA is effective in retaining utility scores while the harmfulness increases.

| Dataset | Attack (adversarial data) | Fine-tuned | Fine-tuning Method | Utility | Harmfulness Score | ASR |
|:--------------:|:-------------------------:|:----------:|:------------------:|:-------:|:-----------------:|:------:|
| Pure Bad | ✓ | ✓ | LoRA | 4.54 | 4.66 | 95.76% |
| Pure Bad | ✓ | ✓ | Vaccine | 0.9937 | 3.861 | 87.27% |
| Dialog Summary | ✓ | ✓ | LoRA | 50.66% | 2.63 | 45.45% |
| Dialog Summary | ✓ | ✓ | Vaccine | 48.53% | 4.455 | 94.85% |

Although it seems that Vaccine does not effectively reduce harmfulness, this might be attributed to the fact that Alpaca was used for the Vaccine models, in which case the alignment might be compromised as shown in [R6]. Due to the time limit, we are unable to validate the cause, which would require multiple experiments. However, we deem that both $\textsf{Safe LoRA}$ and Vaccine are viable solutions toward mitigating the misalignment of LLM fine-tuning and should both be considered during practical deployment. 
&nbsp; [R6]: Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! In The Twelfth International Conference on Learning Representations (ICLR 2024). &nbsp; **Effect of Harmful Data** We thank the reviewer for this additional question on experiments, which we answer below. We follow the same setting as in Section 4: for 10% harmful data, there are 7 projected layers; for 30% and 50% harmful data, we project 18 and 34 layers, respectively. From the table below, it is evident that even with an increase in the ratio of harmful data, $\textsf{Safe LoRA}$ continues to effectively improve safety, reducing the harmfulness score to around 1.2 while maintaining excellent utility, with only a reduction of about 1% compared to the original.

| Original Model | 10% | 30% | 50% |
|:-----------------:|:------:|:------:|:------:|
| Utility | 49.16% | 50.19% | 48.24% |
| Harmfulness Score | 1.533 | 3.460 | 3.915 |
| ASR | 18.18% | 66.67% | 80.91% |

| $\textsf{Safe LoRA}$ | 10% | 30% | 50% |
|:-----------------:|:------:|:------:|:------:|
| Utility | 49.67% | 48.92% | 49.71% |
| Harmfulness Score | 1.301 | 1.233 | 1.312 |
| ASR | 12% | 8.79% | 10.30% |

&nbsp; **Building Alignment Update** The reviewer is exactly correct about the way of building alignment updates! In fact, as written in Section 3.1, the base model is eventually considered, as it shows similar performance to the maliciously fine-tuned one. Here we quote: “As a result, … most open-source LLMs provide both their base model and chat/instruct models, users can conveniently use these official models to construct the alignment matrix without needing to train their own aligned or unaligned model.” To sum up, we note that $\textsf{Safe LoRA}$ is a training-free and data-independent method, as it requires only the aligned and base models. 
--- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal. I am not sure how you build the alignment update in **your experiment**. Which way are you using? 1. Use harmful data to first unalign the chat model, and then obtain the alignment update. 2. i) Use vanilla Llama2 (not the chat version) as an unaligned model; ii) do the alignment on Llama2 and get the aligned model; iii) subtract the weights of these two models. --- Rebuttal 2: Title: Clarification of Alignment Update Comment: We thank the reviewer for the additional question, which we will try to clarify as follows. In Section 3.1, we explain $\textsf{Safe LoRA}$ by constructing the alignment update with the unaligned model, i.e., alignment update = chat - unaligned. However, as mentioned in the rebuttal and original paper, “*We discovered that … are identical to those of the base model*”. The base model is eventually used to construct the alignment update in $\textsf{Safe LoRA}$, i.e., **alignment update = chat - base, in all subsequent experiments (Tables 1 through 5), as it is a more practical scenario**. --- Rebuttal 3: Title: Some questions on the comparison with Vaccine Comment: Thanks for the prompt answer on constructing the alignment update. I think the authors can modify the equation to showcase this, as this is a very important procedure; I was somewhat confused about this point when reading your paper. I still have a few questions regarding the comparison with Vaccine: 1. Are you using the chat model (aligned model) as the base model for Vaccine? 2. What alignment dataset are you using to train Vaccine? --- Rebuttal Comment 3.1: Title: More details about Vaccine Comment: Thank you for the reviewer's suggestion. We will emphasize after equation (1) that, in practice, we use the base model as the unaligned model, so users do not need additional data and training processes to obtain an unaligned model. 
&nbsp; We also appreciate the reviewer's additional question regarding the Vaccine implementation. We used the **Llama-2 Chat model** as the base model for Vaccine and followed the official code, which utilizes **Alpaca** as the alignment dataset. --- Rebuttal 4: Title: On the Vaccine implementation Comment: Hi, thanks for the quick answer. If I got it correctly, in their official code, the alignment dataset used by Vaccine is not Alpaca, but a harmful prompt-safe answer dataset named BeaverTails. Aligning the model with Alpaca is not the correct way. I hope the authors can address this issue, as a fair evaluation against baselines is very important to the value of the work. --- Rebuttal Comment 4.1: Title: Clarification of the Vaccine Implementation Comment: Thank you for the reviewer's feedback. You are correct that Alpaca is not an alignment dataset; we misunderstood its role. However, we used the official code provided [R1] and ran "*Vaccine.sh*" **without any modifications** to train Vaccine. Thus, the **alignment dataset we used is indeed BeaverTails_safe**. Vaccine mixes the Alpaca dataset with BeaverTails to prevent the potential performance degradation that could occur from using only BeaverTails, thus incorporating normal data via Alpaca. &nbsp; [R1] https://github.com/git-disl/Vaccine/blob/main/script/alignment/Vaccine.sh --- Rebuttal 5: Title: Thanks for the rebuttal. I have increased my score Comment: Thanks for the clarification. Vaccine fails in some of your experiments probably because adding perturbation might break the original alignment of the Llama2-chat model (I think they originally used the Llama2 (unaligned) model as the base model). Safe LoRA does not have that issue because it is a post-fine-tuning solution. That said, I still encourage the authors to include the comparison results with Vaccine in the next version of the paper, as they show a new observation and also show the superiority of the Safe LoRA method. 
I appreciate the authors' willingness to provide additional experiments/baselines to enrich their work. I therefore increase my score to 7. I will also actively participate in the reviewer-AC discussion phase to support this paper. --- Rebuttal Comment 5.1: Title: Thanks for the Feedback Comment: We appreciate the reviewer's active engagement in the discussion at this stage and the decision to raise the score. It is indeed possible that adding perturbations to an aligned model could disrupt the alignment, potentially making Vaccine's results less effective. However, Vaccine represents an initial but important solution to the realignment problem of LLMs. We will also include the results of Vaccine in the next version of our paper.
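The mechanism discussed throughout this thread (build a per-layer alignment update V = W_chat - W_base, project the fine-tuning update onto the subspace spanned by V, and use cosine similarity to decide per layer whether to project) can be sketched as follows. This is a hedged illustration only: the orthogonal projector `v @ pinv(v)` and the `threshold` value are assumptions made for the sketch, not the authors' exact formulation.

```python
import numpy as np

def safe_project(delta_w, w_aligned, w_base, threshold=0.5):
    """Per-layer projection of a fine-tuning update onto the alignment subspace.

    delta_w   : (d, k) fine-tuning update (e.g., LoRA's BA product) for a layer
    w_aligned : (d, k) weights of the aligned (chat) model for that layer
    w_base    : (d, k) weights of the base (unaligned) model for that layer
    Returns the raw update if it already lies close enough to the alignment
    subspace (high cosine similarity), otherwise its projection.
    """
    v = w_aligned - w_base                       # alignment update for the layer
    # Orthogonal projection of delta_w onto the column space of v
    # (one plausible choice of projector; the paper may define it differently)
    proj = v @ np.linalg.pinv(v) @ delta_w
    # Cosine similarity between the raw and projected updates (flattened)
    cos = (delta_w.ravel() @ proj.ravel()) / (
        np.linalg.norm(delta_w) * np.linalg.norm(proj) + 1e-12)
    # Low similarity means the update drifts away from the alignment
    # subspace for this layer, so replace it with its projection.
    return delta_w if cos >= threshold else proj
```

Iterating this function over all layers and counting how many fall below the threshold would reproduce the "number of projected layers" figures quoted in the harmful-data-ratio experiment above.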
Summary: This paper studies the problem that fine-tuning may compromise safety, as observed in previous work. The authors propose Safe LoRA, a simple one-liner patch to the original LoRA implementation that projects the LoRA weights of selected layers onto the safety-aligned subspace, reducing the safety risks in LLM fine-tuning while maintaining utility. Safe LoRA effectively mitigates the problem. Strengths: - The idea of Safe LoRA is simple yet effective, making it practical for mitigating safety risks in fine-tuning large language models (LLMs). The approach of achieving safety through weight-space projection is both innovative and logical. This method addresses a critical issue in the fine-tuning of LLMs, where safety and alignment with human values can be compromised. - The method’s training-free and data-free nature is a significant advantage, making it accessible and cost-effective. The simplicity of implementing Safe LoRA without the need for additional data or retraining is a notable strength. - The methodology is well-presented, with the figure effectively conveying the core idea. The paper does a commendable job of breaking down complex concepts into understandable segments, aiding readers in grasping the significance and functioning of Safe LoRA. - The experiments are comprehensive across different scenarios and models. Weaknesses: I didn't see major flaws in the paper. However, I would suggest the authors discuss the practical application scenarios for Safe LoRA in more depth. For example, Safe LoRA is intended to prevent unintended safety degradation in user fine-tuning, and could perhaps also be applied when a service provider fine-tunes the model; it is not intended to defend against user-intended malicious fine-tuning. Clarifying the scope of the proposed method would make the paper more sound. Technical Quality: 3 Clarity: 3 Questions for Authors: Do you think there would be adaptive attacks against the method? How would they be designed? 
Discussing possible adaptive attacks may help make the work more comprehensive. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your genuine appreciation of the clarity of our work and for the precise review! We are delighted to receive a comment noting that “The idea of $\textsf{Safe LoRA}$ is simple yet effective, making it practical for mitigating safety risks in fine-tuning LLMs.” Please see the response below as we address your comments. &nbsp; **Application Scenario** We thank the reviewer for the practical question raised! Here, we imagine two possible application scenarios for $\textsf{Safe LoRA}$: benign users and LLM API providers (such as OpenAI). * **Benign Users** We assume that benign users possess the model and can independently control the training process. Under these conditions, one use case would be companies aiming to fine-tune models on business-related downstream data, for example for internal employee queries or for customer service chatbots. However, it has been shown that even when benign data is used for fine-tuning, alignment may still be compromised [R1]. To prevent the model from responding to inappropriate queries from anyone, $\textsf{Safe LoRA}$ helps ensure that the fine-tuned model remains aligned. * **LLM API Providers** With an LLM API provider, users can upload their data for the provider to fine-tune on, with users unable to interfere in the training process beyond adjusting training parameters. In this scenario, the LLM API provider cannot spend extensive time checking whether the data is harmful, but also wants to prevent the model from generating inappropriate responses after fine-tuning on the user's data. Therefore, the LLM API provider would need to use $\textsf{Safe LoRA}$ to ensure that the model can withstand problematic queries while preserving the utility of the user's fine-tuning data. &nbsp; [R1] Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson.
Fine-tuning aligned language models compromises safety, even when users do not intend to! In The Twelfth International Conference on Learning Representations (ICLR 2024). &nbsp; **Adaptive Attacks** We thank the reviewer for this question and also deem the attack discussion necessary. Based on the previous application scenarios, adaptive attacks would be most plausible in the LLM API provider context. There, attackers are permitted only to upload malicious data and adjust training hyper-parameters (if allowed), but have no access to information about the model (such as weights or architecture) and cannot interfere with the intermediate training process. We assume that attackers are aware that the API providers will use $\textsf{Safe LoRA}$ for realignment but do not know the exact alignment matrix. We propose two possible adaptive attack methods, as follows: * **Method I - Knowledge Transfer Attack** Without any knowledge of the private model, the attacker can only obtain the alignment vector on an open-source model. Here, the goal is to craft malicious data such that the cosine similarity between the training gradient and the alignment vector is high, while still keeping the training loss low; this would ensure a decrease in the model's safety. The attacker then provides the LLM API provider with the crafted malicious data, similar to a transfer attack designed to bypass $\textsf{Safe LoRA}$. However, to craft such data, the attacker must find a noise term that, when applied to the text embedding (since LLMs convert discrete prompts into continuous embeddings), increases the cosine similarity enough to evade the $\textsf{Safe LoRA}$ projection. Once the noise is identified, the attacker still needs additional discrete-continuous optimization steps to select the optimal discrete prompts for the generated noise, which can be done using genetic algorithms (GA) or other methods.
Nevertheless, this attack method is time-consuming, as it requires optimizing the noise for each iteration of the training process. Furthermore, converting malicious embeddings into prompts using GA also incurs significant time costs. Lastly, the surrogate alignment matrix might be very different from the target API's, causing the attack to fail. * **Method II - Model Inversion Attack** Once again, the attacker can obtain the alignment vector by computing it on an open-source model. Since the dimension of this alignment vector matches that of the model weights, performing model inversion on the alignment vector might generate data that represents safety. By mixing malicious data with the safety data obtained from model inversion and then uploading to the API provider, it might also be possible to bypass $\textsf{Safe LoRA}$. However, it is not known whether the alignment vector is meaningful for model inversion, or whether current model inversion techniques for LLMs [R2] are strong enough for such adaptive attacks. &nbsp; [R2] Morris, John X., et al. "Language model inversion." (In ICLR 2024) --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal, which answers my questions. I increase the score to 7. --- Reply to Comment 1.1.1: Title: Thanks for the Feedback Comment: We are grateful for the reviewer's prompt response and for the decision to increase the score. The use cases for Safe LoRA and the potential adaptive attacks are indeed crucial aspects. We will incorporate these discussions into our paper to make it more comprehensive.
Summary: The paper proposes a post-hoc fine-tuning projection method which utilizes the aligned and unaligned weights of the model to compute the projection matrix. The method is simple (a one-liner patch) and training-free. Extensive experiments showed the effectiveness of the proposed method. Strengths: 1. The paper is well-written and easy to follow 2. The motivation is sound, as it tries to address the problem of decreased safety after fine-tuning Weaknesses: 1. The evaluation is limited. It only focuses on the Llama family, whose members share very similar architectures 2. The proposed method is named Safe LoRA without justifying that the projection matrix is indeed related to safety. It seems like the main goal of the paper is to constrain the fine-tuned weights to be within a limit of the original weights. 3. The applicability of the method is unknown. The method seems to highly depend on the fine-tuning dataset; e.g., if users want to fine-tune the model to become safer (with all-benign fine-tuning datasets), since not all models are equally safety-aligned [1] from the beginning, the proposed method will actually hinder its safety. [1] Xie, Tinghao, et al. "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors." arXiv preprint arXiv:2406.14598 (2024). Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In Table 4, why do the utility, harmfulness, and ASR scores become worse (even for LoRA, especially the utility score) compared to the baselines reported in Table 2? If fine-tuning on this dataset makes things worse, should we consider evaluating on some other datasets? 2. Per my comments in the weaknesses, although the work is a one-liner patch, its applicability is unknown. 3. In both Table 4 and Table 5, the authors reported decreased harmfulness scores and ASR and increased MT-Bench and utility scores; does this imply that only harmful data will cause strong (thresholded by $\tau$) dissimilarity during the fine-tuning process? 4.
What are the results on other models such as Mistral, Phi, and Gemma? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors addressed several limitations of the method. However, I think there are more limitations, as mentioned in the weaknesses and questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the work and stating that “The motivation is sound, as it tries to address the problem of decreased safety after fine-tuning”. We address and justify the concerns raised by the reviewer in the comments below. &nbsp; **Insights for $\textsf{Safe LoRA}$** Regarding the safety representation in the alignment matrix, we note related work [R1, R2] exploring the safety landscape and task arithmetic, a recent trend that focuses on the interpretability of weight semantics. Following this line of work, the projection matrix is constructed by treating the weight space as a normed vector space and extracting the “alignment” semantics from the difference between the aligned and unaligned models. **By constructing the alignment matrix, we essentially create a hyperspace of alignment, and by selective projection we are able to preserve both utility and safety.** Therefore, the projection of the trained LoRA weights is intended to map them into the alignment hyperspace rather than merely constraining the fine-tuned weights to remain within a limit of the original weights. &nbsp; [R1] Wei, Boyi, et al. "Assessing the brittleness of safety alignment via pruning and low-rank modifications." (In ICML 2024). [R2] Ilharco, Gabriel, et al. "Editing models with task arithmetic." (In ICLR 2023) &nbsp; **Performance and Settings** * **Regarding Table 2 & 4** The decrease in utility observed is primarily due to the impact of fine-tuning on generalization ability [R3]. MT-Bench evaluates the overall performance of LLMs, and fine-tuning can negatively affect this broader capability; consequently, the results in Table 4 show a decline. Additionally, the large difference in the number of training samples (520 times more in Table 4 than in Table 2) leads to models that are more specifically adapted to the training data, further contributing to the observed decrease in utility.
* **Regarding Table 4 & 5** On the other hand, we note that the reviewer's intuition is correct: the more harmful data there is, the more layers we may need to project to maintain alignment. &nbsp; [R3]: Yang, Haoran, et al. "Unveiling the Generalization Power of Fine-Tuned Large Language Models." (In NAACL 2024) &nbsp; **Application Scenario** We thank the reviewer for the question. Due to word limits, please see the general response for the application scenario of $\textsf{Safe LoRA}$. &nbsp; **Safe Fine-tuning** To address the reviewer's concerns on safe fine-tuning, we conduct experiments based on the reviewer's examples to demonstrate that $\textsf{Safe LoRA}$ does not heavily depend on the fine-tuning dataset. We use the Mistral model, which initially has poor alignment with a harmfulness score of 2.003, as shown in the following table. We use the safety instruction dataset [R4] to fine-tune the model and improve its safety. As a result, the harmfulness score decreased to 1.003. Meanwhile, when applying $\textsf{Safe LoRA}$ to the fine-tuned model with $\tau = 0.5$ (projecting 5 layers), the harmfulness score is 1.012. Compared to the original model, applying $\textsf{Safe LoRA}$ still leads to an improvement in alignment.

| Fine-tuning Method | Harmfulness Score |
|:---------------------:|:-----------------:|
| None (original model) | 2.003 |
| LoRA | 1.003 |
| $\textsf{Safe LoRA}$ | 1.012 |

&nbsp; [R4] Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Rottger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. Safety-tuned LLaMAs: Lessons from improving the safety of large language models that follow instructions. (In ICLR 2024). --- Rebuttal 2: Title: Thanks, authors Comment: Thank the authors for the rebuttal. After reading it and the other reviewers' comments, I have raised my score to 5. My concerns are mostly resolved.
I didn't give a higher score because I still feel a deeper understanding of the method should be pursued and the application scenario is not fully convincing. I didn't give a lower score because, as an academic paper, I feel there is merit in accepting this paper. Good luck! --- Rebuttal 3: Title: Thanks for the Feedback and More Explanation Comment: We appreciate the reviewer's feedback, and we are glad to know that we have addressed most of your concerns. Although there are still some concerns about the $\textsf{Safe LoRA}$ method itself and its related application scenarios, we provide the following explanation in hopes of helping you gain a better understanding. &nbsp; **Deeper Insights of $\textsf{Safe LoRA}$** Currently, there are many related papers dedicated to manipulating models with arithmetic operations to enhance performance or add new functionalities. We can use this concept to explain the working mechanism of $\textsf{Safe LoRA}$, where we improve model safety by projecting unaligned weights onto an aligned hyperspace. In $\textsf{Safe LoRA}$, aligned vectors play a crucial role. If one obtains a vector $A$ by subtracting the base model weights from a non-aligned model, this vector $A$ itself does not represent alignment. Using such a vector to create a projection matrix will not enhance safety when projecting fine-tuned weights onto it. This is why we need to derive the so-called aligned vectors from an aligned model. We do not arbitrarily manipulate the fine-tuned weights, nor do we simply restrict the distance of the fine-tuned weights from the original ones; restricting this distance alone can lead to a decrease in utility and does not guarantee an improvement in safety. In summary, $\textsf{Safe LoRA}$ enhances safety by manipulating fine-tuned weights that are **further** from the aligned direction, pulling them back into the aligned hyperspace while maintaining utility.
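The projection idea described in this rebuttal (deriving an alignment direction from the difference between aligned and unaligned weights, then pulling fine-tuned updates back toward it) can be illustrated with a minimal sketch. This is a rank-1 simplification under our own assumptions: the function names are illustrative, and the paper's actual projection-matrix construction may differ.

```python
import numpy as np

def alignment_direction(w_aligned, w_unaligned):
    """Unit vector extracted from the difference between aligned
    and unaligned model weights for one layer."""
    v = (w_aligned - w_unaligned).ravel()
    return v / np.linalg.norm(v)

def project_lora_delta(delta, v):
    """Rank-1 projection of a LoRA weight update onto the
    alignment direction v (assumed to be a unit vector)."""
    d = delta.ravel()
    return ((d @ v) * v).reshape(delta.shape)
```

In this simplified picture, "pulling an update back into the aligned hyperspace" means keeping only its component along the alignment direction; Safe LoRA as described operates with a projection matrix over a subspace rather than a single direction.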
&nbsp; **Application Scenario** We will provide a more detailed explanation of the two application scenarios we mentioned earlier. * **Benign User** First, we explain why a benign user would need to use $\textsf{Safe LoRA}$. As demonstrated in the experiments from the paper, even if the training data is entirely benign, alignment can still be compromised. Therefore, $\textsf{Safe LoRA}$ is necessary to address this issue. Next, we discuss why a benign user needs to perform fine-tuning. Although current LLMs can indeed provide very logical question-and-answer interactions, using them directly as customer service bots to answer specific questions from customers is often insufficient. This is because LLMs typically lack information relevant to a particular company. Therefore, fine-tuning an existing LLM on the user’s data becomes necessary to tailor it to the company's specific needs and information. Finally, combining the above two points, users need to fine-tune models using their own data. However, since fine-tuning can potentially lead to a loss of alignment, and generally, users do not want their customer service bots to provide inappropriate responses—because this could damage the company's reputation and increase crime rates (as it becomes easier to access harmful information)—$\textsf{Safe LoRA}$ becomes necessary to address these concerns. * **LLM API Provider** The most common LLM API provider is OpenAI's ChatGPT. It allows users to upload data and configure relevant training parameters to fine-tune ChatGPT. However, since users are not always well-intentioned, there is a risk that they might upload malicious data. The LLM API provider cannot individually check each piece of data for harm, as this would be too time-consuming and would negatively impact the user experience. This is different from how ChatGPT checks user inputs for appropriateness during conversations. 
The volume of training data can be very large, making it impractical to check each piece individually. In fact, OpenAI currently does not perform checks on users' training data. **So why is $\textsf{Safe LoRA}$ necessary?** If a user uploads malicious data that removes alignment, they can fine-tune the model and then start querying it with harmful questions, potentially obtaining inappropriate answers. This undermines the alignment efforts made by LLM API providers, as malicious users could effectively bypass these safeguards by spending a small amount of money to get an unaligned model that can provide any response they desire. This issue could potentially increase societal risks by making it easier for people to access inappropriate information. Overall, $\textsf{Safe LoRA}$ allows model owners to restore their safety guardrails in an efficient manner regardless of any harmful data present. We hope that the detailed explanations we have provided about the $\textsf{Safe LoRA}$ method and its insight, as well as its application scenarios, will enhance your understanding of $\textsf{Safe LoRA}$ and its use cases. Thank you once again for taking the time to review the explanations provided. We hope that your concerns have been addressed.
Summary: This paper proposes a novel training-free method, Safe LoRA, to project the original LoRA weights onto the safety-aligned subspace. The experimental results illustrate that the proposed method can preserve the utility of the downstream task and the safety of LLM output. Strengths: 1. This paper focuses on an important problem in LLM safety training. 2. The proposed method is very easy to follow, and the figure of the pipeline is very clear. 3. The authors provide experimental results to illustrate the performance of the proposed Safe LoRA: it preserves the utility and safety of the LLM. Weaknesses: 1. I'm still not clear why directly projecting the LoRA weights onto the safety subspace can achieve such great performance. Maybe the authors can provide more analysis and visualization to explain it. Since the proposed method is very simple and efficient, it is all the more important to illustrate the insights and explain the reason it works. 2. I think the experiment setting is special, and the authors create these special settings to verify the performance (such as "we augmented the Dialog Summary dataset with 100 harmful samples."). It would be better if the authors could evaluate the proposed method on more popular benchmarks for testing the safety of LLMs. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Line 213: "Regarding the settings of LoRA, we only add LoRA to the “q_proj” and “v_proj” attention layers". Why do we only consider the “q_proj” and “v_proj” attention layers, which is not a common setting? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We genuinely appreciate the reviewer's comprehensive comments concerning the alignment of LLMs. We are delighted to receive the positive feedback that “The proposed method is also very easy to follow…” Please see our point-by-point response to your comments below. &nbsp; **On the explanation of $\textsf{Safe LoRA}$** We thank the reviewer for raising such an insightful question, which we try to answer below. Firstly, there is other research [R1, R2] concerning weight semantics, such as exploring the safety landscape and task arithmetic, that is related to the alignment matrix we propose. **On the other hand, we note that in addition to the safety guardrail governed by the alignment matrix, we also control the to-be-projected layers so that utility can be preserved as well.** Specifically, Figure 3 in the paper plots the harmfulness score versus utility. $\textsf{Safe LoRA}$ maintains strong downstream task performance because we selectively set $\tau$ to control the number of projected layers rather than projecting every layer indiscriminately. This selective projection allows us to retain critical task-specific information while minimizing unnecessary alterations, resulting in better performance. In summary, our alignment matrix is an effective prior for guiding LoRA updates towards better safety, because it captures the effort of aligning a base model into a chat model that can refuse (some) unsafe questions. &nbsp; [R1] Wei, Boyi, et al. "Assessing the brittleness of safety alignment via pruning and low-rank modifications." (In ICML 2024). [R2] Ilharco, Gabriel, et al. "Editing models with task arithmetic." (In ICLR 2023) &nbsp; **Clarification on the experiment setting** We believe the reviewer has a misunderstanding on this issue. In fact, adding 100 harmful samples to the Dialog Summary dataset wasn't a special setup aimed at verifying performance.
**It was actually part of our broader approach of demonstrating datasets with varying levels of harmful content, ranging from purely harmful data (Pure Bad) and partially harmful data (Dialog Summary) to entirely benign data (Alpaca).** This setup was not specifically tailored but rather a natural part of our experimental framework, and it is also adopted by other research [R3, R4]. To demonstrate the performance of our method, we present the F1 score on the Dialog Summary dataset, which indicates that $\textsf{Safe LoRA}$ can maintain the utility of downstream tasks while also guarding against harmful content regardless of the ratio. As for evaluating alignment, we use the benchmark proposed by [R5], which is also adopted by other works [R3]. Overall, our experimental setup wasn't specifically designed to verify performance; instead, it was intended to demonstrate attacks of varying severity. &nbsp; [R3]: Jiongxiao Wang, Jiazhao Li, Yiquan Li, Xiangyu Qi, Muhao Chen, Junjie Hu, Yixuan Li, Bo Li, and Chaowei Xiao. Mitigating fine-tuning jailbreak attack with backdoor enhanced alignment. arXiv preprint arXiv:2402.14968, 2024. [R4]: Tiansheng Huang, Sihao Hu, Ling Liu. Vaccine: Perturbation-aware Alignment for Large Language Model. arXiv preprint arXiv:2402.01109, 2024. [R5]: Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! In The Twelfth International Conference on Learning Representations (ICLR 2024). &nbsp; **Clarification on the LoRA setting** We thank the reviewer for asking about this detail. Here, we follow the official code of Llama [R6], where the default setting of the PEFT config uses q_proj and v_proj. &nbsp; [R6]: https://github.com/meta-llama/llama-recipes/blob/main/src/llama_recipes/configs/peft.py, Line 11 --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for your response. The authors have resolved my concerns.
--- Reply to Comment 1.1.1: Title: Thanks for the Feedback Comment: We sincerely thank the reviewer for the response, and we are honored that we could resolve your concerns and foster a better understanding of the re-alignment problem of LLMs.
Rebuttal 1: Rebuttal: We would like to first thank the reviewers for their generous advice on $\textsf{Safe LoRA}$, as it boosts our understanding of the realignment issue in LLM fine-tuning. Due to character limits, we answer some common concerns in the general response below. &nbsp; **Application Scenario (Reviewer d4Du, 93t8)** We imagine two possible application scenarios for $\textsf{Safe LoRA}$: benign users and LLM API providers (such as OpenAI). * Benign User We assume that benign users possess the model and can independently control the training process. Under these conditions, some use cases include companies aiming to fine-tune models on business-related downstream data, for example for internal employee queries or for customer service chatbots. However, it has been shown that even when benign data is used for fine-tuning, alignment may still be compromised [R1]. To prevent the model from responding to inappropriate queries from anyone, $\textsf{Safe LoRA}$ helps ensure that the fine-tuned model remains aligned. * LLM API Provider With an LLM API provider, users can upload their data for the provider to train on, with users unable to interfere in the training process beyond adjusting training parameters. In this scenario, the LLM API provider cannot spend extensive time checking whether the data is harmful, but also wants to prevent the model from generating inappropriate responses after fine-tuning on user data. Therefore, the LLM API provider would need to use $\textsf{Safe LoRA}$ to ensure that the model can withstand problematic queries while preserving the utility of the user's data. &nbsp; [R1] Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! (In ICLR 2024).
&nbsp; **Literature Discussion (Reviewer X4Co)** We note that several concurrent works appeared around the deadline; we will acknowledge and discuss them in the revised version, as they are all solutions towards the realignment of LLM fine-tuning. Here, we provide a brief discussion of the literature mentioned by the reviewers and will enclose a more detailed one in the revised version. Firstly, [R1] focuses on solving concept forgetting, a phenomenon observed by [R1] where performance on other tasks decreases when a specific one is chosen for fine-tuning. Furthermore, [R2] aims at solving the problem of malicious fine-tuning by discovering perturbations that can be added to obtain a model that resists malicious fine-tuning; Vaccine was introduced by [R2] to mitigate such issues. Meanwhile, [R3] utilizes a safe template to prohibit malicious responses. Similarly, [R4] proposes conditions that serve as guidelines for effective defenses. Lastly, [R5] focuses on the realignment of VLLMs. &nbsp; [R1] Mukhoti, Jishnu, et al. "Fine-tuning can cripple your foundation model; preserving features may be the solution." (In TMLR 2024). [R2] Huang, Tiansheng, Sihao Hu, and Ling Liu. "Vaccine: Perturbation-aware alignment for large language model." (In ArXiv 2024). [R3] Lyu, Kaifeng, et al. "Keeping LLMs aligned after fine-tuning: The crucial role of prompt templates." (In ArXiv 2024). [R4] Rosati, Domenic, et al. "Immunization against harmful fine-tuning attacks." (In ArXiv 2024). [R5] Zong, Yongshuo, et al. "Safety fine-tuning at (almost) no cost: A baseline for vision large language models." (In ICML 2024). &nbsp; **Performance of $\textsf{Safe LoRA}$ on Other Public Models (Reviewer 93t8)** In response to the reviewer's comment, we performed additional experiments using the Gemma model. We conducted experiments on the Dialog Summary dataset using the same setup described in Section 4 and present the results in the table below.
**Consistent with the results from the Llama series, $\textsf{Safe LoRA}$ sacrifices little utility, with its Rouge F1 score at 46.49%, but effectively reduces the harmfulness score to 2.209.** Although SafeInstr and BEA both achieve good utility, they do not effectively improve alignment, with their harmfulness scores close to or greater than 3.

| Attack (adversarial data) | Fine-tuned | Fine-tuning Method | Utility | Harmfulness Score | ASR |
|:----------------------------:|:---------:|:---------------------:|:----------:|:-----------------:|:----------:|
| ✘ | ✘ | None (original model) | 32.38% | 1.033 | 2.12% |
| ✘ | ✓ | LoRA | 49.93% | 1.036 | 1.52% |
| ✓ | ✓ | LoRA | 49.95% | 3.803 | 93.33% |
| ✓ | ✓ | SafeInstr | **50.45%** | 3.389 | 90.61% |
| ✓ | ✓ | BEA | 49.27% | 2.818 | 50% |
| ✓ | ✓ | $\textsf{Safe LoRA}$ | 46.49% | **2.209** | **32.42%** |
NeurIPS_2024_submissions_huggingface
2024
AdaFace: A Versatile Face Encoder for Zero-Shot Diffusion Model Personalization
Reject
Summary: This work presents a zero-shot face-generation method based on diffusion models. The proposed method first extracts face features using Face2Vec and trains a network to map these features into the textual space (i.e., the prompt embedding space for diffusion’s text condition). The main difference from existing zero-shot face generation approaches is that, instead of adding conditions in the denoising UNet, the presented method incorporates the condition into the textual embedding. To prevent the subject information from overwhelming the generation (e.g., preserving only the subject ID while ignoring other descriptions), the authors propose a Composition Distillation Loss. This contrastive loss encourages the model to generate descriptions other than the subject information. The main shortcoming of the paper lies in the experimental validation. There are multiple components proposed, but their effectiveness is not adequately demonstrated. Additionally, both qualitative and quantitative evidence fail to show that the proposed method outperforms SoTA methods. Strengths: * The Compositional Distillation Loss is an interesting and intuitive approach to reducing the problem of subject information overwhelming the generation Weaknesses: * There is a lack of intuition behind using multi-timestep distillation. This approach can cause accumulated errors, and there is no evidence provided to support the benefits of adopting such a strategy. * The section on dynamic model expansion is very unclear. It is not specified which part of the model is expanded or if the tokens are simply replicated. While adding Gaussian noise to replicated tokens is mentioned, there is no empirical validation of its effectiveness. * The qualitative comparison, especially in Figure 7, does not support the claim that the proposed method has advantages over baselines like PuLID. 
Additionally, the benchmarking results in Table 1 show limited improvement in facial identity preservation, and the text alignment can be inferior to PuLID. Thus, the experiments are not comprehensive enough to convincingly demonstrate that the proposed approach is superior to existing methods like PuLID. Technical Quality: 1 Clarity: 1 Questions for Authors: My suggestions to the author are follows: * Conduct a comprehensive ablation study of the components proposed in the paper, which could include but is not limited to: 1) Justification for placing the learnable part (the conditioning module) in the text encoder rather than the diffusion denoising UNet. 2) Effectiveness of using facial preservation loss, multi-timestep distillation, and dynamic model expansion. * Design experiments that are more relevant to the proposed methods. The current paper contains many irrelevant experiments/visualizations, such as: Figure 1/8: Applying AnimateDiff and generating video. Figure 2: Illustrating other work's pipeline. Figure 6: Feature alignment lines and heatmap that do not provide useful information. The compatibility of the proposed method with different backbones/plugins can be mentioned in the paper but should not be the main focus and can be presented in the supplementary materials. * Polish the language and improve readability (e.g., using tools like ChatGPT). Some content is not presented in a formal paper format, such as Line 222, "each with 9 10 images." Confidence: 5 Soundness: 1 Presentation: 1 Contribution: 2 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you, reviewer jpRS, for your thorough and detailed review of our paper. We appreciate the time and effort you have invested in providing your critical feedback. While some of your comments are indeed challenging, we believe they are valuable for improving the quality and rigor of our work. 1\. **Multi-timestep distillation**. While it's true that the student model accumulates larger errors with more steps, aligning the output with the teacher model *reduces* accumulated errors. This is because we assume the teacher model is the gold standard at any number of timesteps. Therefore, aligning the student output at any timestep helps reduce the intermediate errors at all steps. Similar techniques have been adopted in other areas, such as optical flow estimation (Section 3.3 of [a]). 2\. **Dynamic model expansion**. The dynamic model expansion involves duplicating the Q (query) and V (value) projection *layers* (as opposed to *tokens*) in the attention layers. The weights of the extra copies of the Q, V projections are perturbed with Gaussian noise to introduce variability. Each vanilla attention layer has only one Q/K/V projection; after expansion by a factor of $N$, there are $N$ Q, V projections and 1 K projection. Consequently, each input token is mapped to $N$ queries and values, but only 1 key. The number of output tokens is determined by the number of keys, and therefore the number of output tokens does not change. 3\. **Relevance to video generation**. We respectfully disagree with the notion that video generation is not relevant to our topic of subject-driven image generation. As the community shifts focus from image generation to video generation, we believe that the ultimate goal of subject-driven generation is to develop general methods applicable to both images and videos.
A long-standing issue with current video generation models, such as Gen3 and Luma Dream Machine, is that subject consistency deteriorates as the video becomes longer. Dedicated subject embeddings could serve as a long-term memory to maintain subject consistency in very long videos. As a preliminary exploration, we hope our method may inspire more research works along this direction. Due to the limitations of AnimateDiff, we are currently only able to showcase our method on 1-second videos. However, we aim to demonstrate the broad potential of our method for generating longer videos with more powerful open-source pipelines. 4\. **Comparison with PuLID**. We clarify that we do not claim superior performance compared to PuLID. Instead, our focus is to provide the community with a face encoder that performs comparably to existing SOTA methods, with the unique advantage of being plug-and-play with most existing pipelines, such as animation generation. Specifically, inherited from Arc2Face, AdaFace is capable of generating highly realistic images. In contrast, existing SOTA methods, PuLID included, tend to produce more artistic images, as demonstrated in Figure 7 and the attached figure PDF. To gain an intuitive sense of how our model performs compared to others, we invite you to try our online demo hosted on Huggingface. 5\. **Ablation study of the proposed components.** We acknowledge the absence of some ablation studies. Due to limited response time and our small team's low computational resources, it has been unrealistic to train ablated models within this short period. We will add these ablated results to a newer version of the paper as soon as they are available. [a] RAFT: Recurrent All-Pairs Field Transforms for Optical Flow, ECCV 2020. --- Rebuttal Comment 1.1: Comment: The reviewer thanks the authors for their feedback. The reviewer finds the motivation of "Multi-timestep distillation" and "Dynamic model expansion" in the author's feedback.
However, for a top-tier conference like NeurIPS, the reviewer believes the work needs to 1) be technically sound, with justifications for the proposed components/ideas, and 2) experimentally show performance superior to existing works, or comparable performance under a harder scenario. Specifically, the reviewer thinks the authors have not adequately justified their proposed methods (see Questions 1) and have not shown progress in personalized facial generation. As such, the reviewer decides to maintain the rating. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thank you for your feedback and for taking the time to review our work. We appreciate your recognition of the motivations behind "Multi-timestep Distillation" and "Dynamic Model Expansion." However, we respectfully disagree with your assessment that our work "has not shown progress in personalized facial generation." In our submission, we have demonstrated advancements in personalized facial generation, particularly through seamless integration with the existing video generation pipeline, AnimateDiff, without fine-tuning. This plug-and-play ability is a unique advantage of our method. Our empirical results, as presented in the paper, show that our approach can generate facial images comparable to several state-of-the-art methods. Moreover, when integrated with AnimateDiff, it demonstrates significant advantages over the existing subject-driven video generation method, ID-Animator. While we understand that our contributions may not have been fully aligned with the specific criteria you were looking for, we believe that our work offers meaningful advancements that contribute to the broader field of personalized image and video generation. We respect your decision to maintain your rating and will take your feedback into account as we continue to refine and improve our work. Your insights have been valuable, and we are committed to addressing the areas you highlighted in future revisions.
Thank you once again for your careful consideration. --- Rebuttal 2: Title: Thanks for your interesting question Comment: Thank you for taking the time to carefully examine our claim of "seamless integration with video generation pipelines." We appreciate the opportunity to further clarify and differentiate our approach from similar techniques. 1. While methods like DreamBooth, LoRA, and more recently, MagicMe [a], can achieve subject consistency in video generation, these approaches require cumbersome subject-specific fine-tuning, which limits their practicality for long videos featuring multiple subjects. These limitations have partly driven the development of recent zero-shot subject-driven generation methods. 2. More importantly, subject-driven video generation, often referred to as **visual story generation**, aims to create long sequences of videos where specific subjects transition through different scenarios. For example, in StoryDiffusion [b], a video story is described where a man reads a newspaper, drives to a forest, and encounters a tiger. Achieving such complex narratives requires more than just frame-to-frame consistency; it necessitates a mechanism for maintaining subject identity and appearance across diverse and lengthy sequences. 3. Traditional video generation methods can maintain subject and background consistency across **adjacent frames** due to large-scale pre-training on videos, where adjacent frames are typically consistent. This training allows the model to implicitly learn and preserve such consistency. However, when extended to **long sequences**, these models often suffer from subject distortion or semantic drift due to the absence of dedicated subject representations. Our method, while not yet perfect, represents a step towards zero-shot dedicated subject representation learning for visual story generation, addressing these challenges more effectively. 
In this sense, our method is also significantly different from the layout-based method you mentioned, which primarily focuses on maintaining spatial consistency over a short window of videos, rather than ensuring subject identity over longer sequences. For a more detailed discussion on the nuances of these techniques, we invite you to refer to the ID-Animator paper [c] (one of our baseline methods), particularly Section 2.3. We hope this clarification helps in understanding the unique contributions of our work in the context of video generation. We respect your perspective and appreciate your feedback, which has been instrumental in refining our presentation. [a] Magic-Me: Identity-Specific Video Customized Diffusion. arXiv:2402.09368. [b] StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation. arXiv:2405.01434. [c] ID-Animator: Zero-Shot Identity-Preserving Human Video Generation. arXiv:2404.15275.
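For concreteness, the dynamic model expansion described in the first rebuttal (N copies of the Q and V projections, the extras perturbed with Gaussian noise, all sharing a single K projection) can be sketched as follows. This is not the authors' code but a minimal NumPy sketch of one plausible reading; in particular, averaging the N copies' outputs is our assumption for keeping the output token count unchanged.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expanded_attention(x, Wq_list, Wk, Wv_list):
    """Attention with N Q/V projection copies sharing a single K projection.

    Each copy yields one (T, d) attention output; averaging the copies
    (our assumption) keeps the output token count equal to the input T.
    """
    K = x @ Wk                                   # (T, d) -- one shared key set
    outs = []
    for Wq, Wv in zip(Wq_list, Wv_list):
        Q, V = x @ Wq, x @ Wv                    # (T, d) each
        A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
        outs.append(A @ V)
    return np.mean(outs, axis=0)                 # still (T, d)

def expand(W, n_copies, sigma=0.02, rng=None):
    """Duplicate a projection n_copies times, perturbing the extra copies
    with Gaussian noise to introduce variability (as in the rebuttal)."""
    rng = rng or np.random.default_rng(0)
    return [W] + [W + sigma * rng.standard_normal(W.shape)
                  for _ in range(n_copies - 1)]

T, d, N = 5, 16, 3
rng = np.random.default_rng(1)
x = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = expanded_attention(x, expand(Wq, N), Wk, expand(Wv, N))
assert out.shape == (T, d)   # token count unchanged after expansion
```

The sketch illustrates the shape argument made in the rebuttal: each token is mapped to N queries and values but only one key, and the layer's output retains T tokens.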
Summary: This paper proposes AdaFace, a face encoder that maps facial features from the image space to the text space through the AdaFace Prompt Inverter, utilizing the structure and pre-trained weights of the CLIP text encoder for initialization. During the face distillation phase, AdaFace employs random Gaussian face embeddings and multi-timestep distillation, enhancing the model's ability to capture subtle facial details through dynamic model expansion. In the composition distillation phase, AdaFace uses a contrastive learning loss, aligning feature increments with orthogonal subtraction, while introducing an elastic face preservation loss to address the misalignment of facial features caused by different prompts. Strengths: - The paper is well written and easy to follow - The proposed method requires fewer training resources - The approach to constructing contrastive pairs during the composition distillation stage sounds reasonable Weaknesses: - I tried the demo provided by the authors, and the ID similarity on a few test images was relatively low; it should be far from the state-of-the-art (SOTA) level of ID similarity claimed in the paper. - The test dataset consists of celebrities, so there is no guarantee that these IDs have not appeared in the training set. Furthermore, the number of test samples is too small to be convincing. - The upper bound of ID fidelity is constrained by the frozen Face2Image model, in this paper, Arc2Face. - The proposed improvements, like Random Gaussian Face Embeddings, Orthogonal Subtraction, etc., are not effectively validated through ablation studies. Technical Quality: 2 Clarity: 3 Questions for Authors: na Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks, reviewer 6DX2, for your constructive feedback. Your comments are valuable in improving the quality and clarity of our work. Responses to weaknesses: 1\. **The performance issue of the Huggingface online demo**. Thank you for trying out our demo. We would like to clarify that our first model's performance on some subjects was suboptimal due to poor settings of the hyperparameters and the diffusion pipeline. We have since updated these parameters and the pipeline, which should now significantly improve performance using the same face encoder checkpoint. We invite you to try our updated online demo, which better reflects the performance of our model. Moreover, we would like to emphasize that we do not claim new "state-of-the-art" performance in our paper. Instead, our focus is to provide the community with a face encoder that performs comparably to existing SOTA methods, with the unique advantages of being highly realistic, as well as plug-and-play with most existing pipelines, such as animation generation. 2\. **Possible data contamination of the celebrity subjects**. We selected a few popular athletes of the Paris Olympics 2024 for extra qualitative evaluation, as presented in the figure attachment. We will add more subjects as examples to an updated version of the paper, as you suggested. 3\. **The ID fidelity is upper bounded by the teacher model**. This is an inherent limitation of model distillation. Nevertheless, we can mitigate this limitation by distilling from multiple teacher models to combine their strengths, for example, from both Arc2Face and PuLID, to gain good performance on the generation of both realistic and artistic images. 4\. **Ablation study of the proposed components**. We acknowledge the absence of some ablation studies. Due to limited response time and our small team's low computational resources, it has been unrealistic to train ablated models within this short period.
We will add these ablated results to a newer version of the paper as soon as they are available. --- Rebuttal Comment 1.1: Comment: Thank you for your response, but my concerns are still not completely addressed. I revisited the demo provided by the authors and found it to be slightly improved compared to the previous version. However, my conclusion remains the same: AdaFace does not reach the SOTA level in terms of ID similarity measures, such as those achieved by PuLID and InstantID. This leads to the question of why the quantitative comparisons provided by the authors depict AdaFace as being on par with PuLID and InstantID. In my first round of reviews, I mentioned that this might be due to limited testing IDs or potential bias, but the authors failed to adequately address this in the rebuttal, i.e., by verifying the quantitative metrics on a broader test set. The authors only provided qualitative comparisons of two athletes, one of whom (LeBron James) is a famous basketball player who could possibly have appeared in the training set. Additionally, the authors have not supplemented any ablation study in the rebuttal. I disagree with the excuse of insufficient resources or time, given that several months have passed since the submission, which would have been ample time for the authors to prepare the apparently missing ablation studies. The authors promise that these ablations will be incorporated in the next version, but we cannot predict whether these experiments will validate the efficacy of the modules proposed by the authors. In conclusion, I believe the current paper has issues and is incomplete in terms of experimental evaluation; therefore, I have decided to maintain my score. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thank you for taking the time to revisit our demo and for providing additional feedback. 1.
We acknowledge that small-scale subjective evaluations can exhibit high variance, particularly as our method performs better on certain ethnic groups compared to PuLID and InstantID, while underperforming on others. To address this, we plan to scale up the training dataset in the next stage to include a more diverse population, ensuring improved performance across a wider range of subjects. 2. We understand your concerns regarding the absence of ablation studies in our rebuttal. The delay in conducting these studies was due to the first author, who is primarily responsible for the technical implementations and experiments, working on multiple projects in parallel. As a result, the focus of the first author was diverted to other tasks in the period leading up to the rebuttal. However, given the importance of these ablation studies, which has been emphasized by all reviewers, we are committed to prioritizing them moving forward. We will ensure that these studies are completed to make the experimental evaluation more comprehensive. We respect your decision to maintain your rating and appreciate the valuable feedback you have provided. Your insights have been instrumental, and we are dedicated to addressing the areas you highlighted in future revisions.
Summary: This paper proposes AdaFace, a method for personalizing text-to-image diffusion models for human faces. At its core, it learns a prompt inverter that maps face embeddings from a pretrained face encoder to the text embedding space of diffusion prompts. It leverages various components, including face distillation, composition distillation, and an elastic face preserving loss, to preserve subject identity while attaining good compositionality. Strengths: The paper designs targeted training losses and regularizations for the task at hand. The explanation of the methods is detailed. The video qualitative results show improvement over ID-Animator. Weaknesses: * The quantitative metrics do not show a clear advantage of AdaFace over other existing personalization methods like PuLID. The number of qualitative examples for comparing with those methods is also limited -- just the 5 images per method in Figure 7 -- which is not sufficient to clearly demonstrate that AdaFace outperforms existing methods. It would be helpful to show a larger number of uncurated examples comparing AdaFace and baselines to get a better comparison of their performance. * The training of the prompt inverter involves a number of components -- such as model expansion, the inclusion of different feature types in composition distillation, orthogonal subtraction, and the elastic face preserving loss -- but there are no ablation studies on most of them to demonstrate their effects on the performance. Technical Quality: 2 Clarity: 2 Questions for Authors: See Weaknesses - It would be helpful to see more uncurated qualitative examples for better comparison with other existing approaches. Additionally, are there particular reasons why ConsistentID is not included in the quantitative comparison although it is included in the qualitative comparison?
And although the methods section discussed some conceptual intuitions for the design of the target losses, it would be helpful to use ablation studies to verify their contribution to the performance, and for the compositional delta loss part, show their advantage over previously explored designs. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed limitations and societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you, reviewer Uvbt, for your favorable evaluation. Your comments are valuable in improving the quality and clarity of our work. 1. **Limited examples**. We have added more examples of Paris Olympics athletes in the attached PDF file. 2. **Ablation studies of proposed components**. Due to limited response time and our small team's low computational resources, it has been unrealistic to train ablated models within this short period. We will add these ablated results to a newer version of the paper as soon as they are available. 3. **Comparison with ConsistentID**. On simple prompts, ConsistentID usually performs well. However, on more complicated prompts, such as "playing guitar, ocean waves, cyberpunk street market, neon signs, diverse crowd, futuristic gadgets, vibrant colors, urban style", ConsistentID totally ignores the specified style words. Please see the attached PDF file for this example. Since our evaluation prompts are all simple ones, they are unable to reflect the limitations of ConsistentID on long, complex prompts. Therefore, we do not include quantitative evaluation results of ConsistentID. --- Rebuttal Comment 1.1: Comment: Thanks for the response and for including additional qualitative examples. I understand that conducting ablation studies during the rebuttal period might not be feasible. However, ablation studies are a crucial part of a paper, for understanding and validating the necessity of each component of the method. After considering all aspects, I decide to keep my original score.
Summary: This paper proposes AdaFace, a test-time-tuning-free method for personalized text-to-face-image generation. Previous methods represent face features in the feature space of a face encoder, which is not flexibly composable with natural language for personalized generation. Thus, this paper proposes to map the face features into features in the text conditioning space. Several techniques are proposed to enhance the performance, like Random Gaussian Face Embeddings, Multi-Timestep Distillation, Dynamic Model Expansion, Composition Distillation, and Elastic Face Preserving Loss. Some experiments demonstrate that the proposed method achieves good visual results. Strengths: 1. The visual results are satisfactory in general. 2. The proposed method can also be applied to personalized text-to-video generation. Weaknesses: 1. The overall motivation is not novel enough. Finding ways to convert input images into the textual space is a fundamental goal in text-to-image personalization, which has been emphasized in the very first TextualInversion work. Even in the context of tuning-free methods, the proposed framework is not so novel compared with ELITE [a], which also involves training a mapper from the image space into the textual space. A similar compositional distillation technique has also been explored in SuTI [b]. 2. Lack of detailed studies of the proposed components, either qualitatively or quantitatively. The authors propose a bag of techniques to improve the performance. Although their motivation is mentioned in the text, there are no supporting results to illustrate how these techniques work. * There is only one quantitative study, in Tab. 1, regarding the compositional distillation. However, there are actually a lot of technical details in the proposed compositional distillation techniques, like the orthogonal subtraction and the compositional delta loss, which lack careful experimental analysis against their alternatives.
* The proposed face distillation is not well supported. Can we simply train the face encoder with the simple noise prediction loss of diffusion models? * The analysis of the proposed Elastic Face Preserving Loss is also missing. 3. ELITE [a] mentions that using multiple tokens to represent an image may hurt textual compatibility. It is necessary for the authors to provide a rationale for doing so. 4. How does the method compare with the popular IP-Adapter (face version) [c]? 5. The overall training pipeline requires multiple stages of training, which is not so elegant. 6. The authors may want to consider merging multiple figures with similar functionalities and structures into one, like Figs. 2, 3, 4 and 5, to leave enough space for necessary experimental results. [a] ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation, Wei et al., ICCV 2023. [b] Subject-driven Text-to-Image Generation via Apprenticeship Learning, Chen et al., NeurIPS 2023. [c] IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models, Ye et al. Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the weaknesses above. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks, reviewer jmG6, for your constructive feedback. Your insights have been incredibly valuable in improving the quality and clarity of our work. Responses to weaknesses: 1. **In terms of novelty**. * While ELITE is the first to propose a text-space embedding method for personalization, it requires training individual global K, V projections for the attention layers, which affects its compatibility with existing diffusion pipelines. In contrast, AdaFace does not modify the existing diffusion pipeline, and the subject embeddings are applied similarly to ordinary text tokens without special treatment. This compatibility allows AdaFace to be used with AnimateDiff for generating subject-themed videos seamlessly. * SuTI adopts a different approach to personalization by training millions of "expert" models in advance and mapping the input subject to these expert models for zero-shot generation. We are not aware of compositional distillation techniques similar to ours being proposed in SuTI. However, we do note in lines 170-177 that there are methods adopting similar techniques, with an emphasis on our unique contributions. 2. **Extra discussions**. * **Ablation study of the proposed components**. We acknowledge the absence of some ablation studies. Due to limited response time and our small team's low computational resources, it has been unrealistic to train ablated models within this short period. We will add these ablated results to a newer version of the paper as soon as they are available. * **Training face encoders from scratch** is highly computationally demanding. For example, InstantID was trained on 48x80GB NVIDIA H800 GPUs, which is inaccessible to most research teams. Arc2Face, the teacher model of AdaFace, was trained with 8xA100 GPUs for several weeks. In contrast, AdaFace was trained with 2xA6000 GPUs for less than 1 week. Moreover, it can learn from multiple teacher models to combine their advantages. 3. **Editability of multiple embeddings**.
The reduced editability of ELITE with multiple embeddings is likely due to overfitting. However, our face encoder is trained on hundreds of thousands of face images, which significantly alleviates this issue. Additionally, our compositional distillation techniques further enhance editability. Our results align with recent models such as IP-Adapter, InstantID and PuLID, which adopt multiple subject embeddings while maintaining good editability. 4. **Comparison with IP-Adapter-FaceID**. The consensus in the AI art community is that IP-Adapter-FaceID produces far less authentic images compared to other methods like InstantID or PuLID. Therefore, we did not include a comparison with IP-Adapter-FaceID. 5. **Multi-stage training**. To the best of our knowledge, multi-stage training is widely adopted by many diffusion models. Given their inherent complexity and the numerous factors involved, multi-stage training helps steer these models towards achieving strong performance across various aspects. Once again, thank you for your valuable suggestions to improve the clarity of our presentation. We will incorporate them into the updated version of the paper. In addition, we invite you to try our online demo, hosted on Huggingface, to gain an intuitive sense of how our model performs compared to others. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. I still believe the mentioned ablation studies are necessary. I expect the authors to finish them and add them to the revision. Conditioned on this, I will increase my score to 5.
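For concreteness: the "orthogonal subtraction" questioned in the reviews above is not defined in this exchange; one plausible reading (our assumption, not the authors' code) is removing from one feature delta its component along another vector, so the result is orthogonal to that vector:

```python
import numpy as np

def orthogonal_subtract(a, b, eps=1e-8):
    """Subtract from `a` its projection onto `b`; the result is orthogonal
    to `b`. This is one plausible reading of 'orthogonal subtraction',
    not the paper's actual implementation."""
    return a - (a @ b) / (b @ b + eps) * b

rng = np.random.default_rng(0)
a, b = rng.standard_normal(16), rng.standard_normal(16)
res = orthogonal_subtract(a, b)
assert abs(res @ b) < 1e-6   # no remaining component along b
```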
Rebuttal 1: Rebuttal: We thank all the reviewers for their high-quality feedback and insightful comments. We will incorporate these suggestions into a future version of our paper. In particular, we appreciate your recognition of the novelty of our method, especially its seamless integration with video generation pipelines and the contrastive distillation techniques. To gain an intuitive sense of how our method performs compared to the existing state-of-the-art methods, such as InstantID and PuLID, we invite you to try our Huggingface online image and video generation demos. We would like to emphasize that we do not claim new "state-of-the-art" performance in our paper. Instead, our focus is to provide the community with a face encoder that performs comparably to existing SOTA methods, with the unique advantages of being highly realistic, as well as plug-and-play with most existing pipelines, such as animation generation. A long-standing issue with current video generation models, such as Gen3 and Luma Dream Machine, is that subject consistency deteriorates as the video becomes longer. Dedicated subject embeddings can serve as a long-term memory to maintain subject consistency in very long videos. As a preliminary exploration, we hope our method may inspire more research works along this direction. In the attached PDF file, we present images based on two athletes from the Paris Olympic Games, LeBron James and Yusuf Dikec, who have recently gained recognition. Additionally, we include an example using Alan Turing's photo as input. This demonstrates that ConsistentID tends to overlook complex semantics in long prompts. A common concern is that ablation studies on a few proposed components are absent. We are fully aware of this issue. Due to limited response time and our small team's low computational resources, it has been unrealistic to train ablated models within this short period.
We will add these ablated results to a newer version of the paper as soon as they are available. Pdf: /pdf/9b973d9bccac393fe15eba71532e8416e5318e69.pdf
NeurIPS_2024_submissions_huggingface
2024
Mixture of Nested Experts: Adaptive Processing of Visual Tokens
Accept (poster)
Summary: The paper builds on the Mixture-of-Experts paradigm for vision transformers, and adds a hierarchical aspect to it, yielding a Mixture of Nested Experts (**MoNE**). More specifically, experts are defined as subsets of nested channels in the Feed Forward Networks, such that each expert has a different compute cost, and the largest one corresponds to the full original model. Nevertheless, some operations are always performed at the full model dimension, after padding the tokens that went through smaller experts as needed: the $(QK^T)V$ operation in the self-attention, the layer norms and the residual connections. In addition, a dynamic budget allocation heuristic is proposed to adapt the classical (uniform) load balancing loss to the scenario of experts with different capacities. The proposed method is then evaluated on image and video classification. Strengths: - The routing strategy is learned in the first transformer layer and propagated to all subsequent layers. This can be better from a hardware perspective, as it is known early on which channels need to be turned on/off. - Having experts with different compute costs/"capacities" is well motivated and reasonable to improve the accuracy/efficiency trade-off. - Good ablation experiments on the location of the router Weaknesses: * **Fixed number of experts:** It is not clear to me why `MoNE` uses a fixed number of nested blocks: it is very natural for `MoEs`, since experts are entirely separate blocks; however, in the case of `MoNE`, the router could act directly on the channels and output a binary decision (essentially, having as many experts as dimensions $D$). I would expect this to have an impact on the load balancing loss, but it would give much more flexibility to distribute the capacity across experts. More generally, there is a large literature on dynamic sparsity (e.g. *(1)* for recent references) which seems closely related to the design of nested experts and is not mentioned here.
* **Baselines:** The experiments section lacks baselines, in particular ones which also allow for dynamic routing (e.g. there are none for the video classification task). For instance: * Simple mixture of experts. The only baseline considered in the paper is MoD. Not only does the paper compare to *the best reported MoD configuration* (line 233); the MoD experiments also seem to have been run on very different datasets, so it is not clear why the best configuration would carry over here. * Token pruning; in particular for image classification, there is a very large body of literature on token pruning (A-ViT, E-ViT, etc.). Since the only tasks considered in the experiments are classification, pruning tokens should be a valid dynamic computing baseline. In addition, some of the metrics reported in the paper could be improved: * As far as I know, ImageNet-21k evaluation is not very standard, as there is no official train-validation split. * FLOPs are not enough to show the method's efficiency, and they should be accompanied by real latency. For instance, they do not take into account extra padding operations. * **Relevance of the dynamic budget allocation:** Table 1 shows that a simple uniform allocation for the load balancing loss performs as well as the heuristic proposed in Section 4.3. Therefore, it is not clear to me whether the proposed dynamic budget allocation really has a significant impact. #### references * (1) Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time, Liu et al, 2023 Technical Quality: 3 Clarity: 3 Questions for Authors: * In Equation (1), is $r_i$ in fact computed on $x_i$ (the input of the block) or on $z_i$ (the input of the FFN / output of the self-attention)? * I assume the padding to the dimension $D$ is necessary to preserve a common dimensionality for the tokens inside a batch? Or is there another motivation behind it?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: There is little discussion about some limitations e.g. (i) why rely on a fixed number of experts or (ii) what are the real latency of the method. The only limitation mentioned is that it would be hard to adapt this scheme to decoder-only LLMs, which seems a bit out of scope with the paper itself. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
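The nested-expert design summarized in this review -- experts as prefixes of the FFN's hidden channels, the largest prefix being the full model -- can be sketched as follows. This is our illustration, not code from the paper; the weight-prefix slicing and ReLU activation are assumptions.

```python
import numpy as np

def nested_ffn(x, W1, b1, W2, b2, frac):
    """Run the FFN using only the first `frac` fraction of hidden channels.

    The experts are nested: `frac=1.0` is the full FFN, and every smaller
    expert reuses a prefix of the same weights, so no parameters are added.
    """
    h = max(1, int(W1.shape[1] * frac))
    z = np.maximum(x @ W1[:, :h] + b1[:h], 0.0)   # ReLU on a channel prefix
    return z @ W2[:h, :] + b2

d, hidden = 8, 32
rng = np.random.default_rng(0)
x = rng.standard_normal((4, d))
W1, b1 = rng.standard_normal((d, hidden)), np.zeros(hidden)
W2, b2 = rng.standard_normal((hidden, d)), np.zeros(d)

full = nested_ffn(x, W1, b1, W2, b2, 1.0)
small = nested_ffn(x, W1, b1, W2, b2, 0.25)   # roughly 1/4 of the FFN compute
assert full.shape == small.shape == (4, d)
```

Because the smaller expert computes a strict prefix of the full expert's hidden activations, its FFN compute scales linearly with `frac`, which is what gives each nested expert a different cost.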
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. We appreciate the recognition of MoNE's hardware-efficient routing design, the use of compute-aware experts to optimize the accuracy/efficiency trade-off, and the thorough ablation study demonstrating the impact of router placement. Below we answer some of the questions that the reviewer raised. **Fixed number of experts** This is a very astute point. Since the MatViT (MatFormer [16]) experiments suggest that the nested dimensionalities interpolate smoothly, it is fair to assume that MoNE is in fact not training just a fixed number of experts (set to 4), but everything between the smallest and the largest experts. MoNE uses a fixed number of nested blocks to develop a tractable framework for capacity allocation and routing. As the reviewer mentioned, there is nothing stopping us from extending this to all the interpolated dimensionalities (D) of nested experts, barring the added complexity of precise capacity allocation and load balancing, which is generally non-trivial. The design choice to route between 4 models also relies on potentially better serving capabilities given the individual block sizes. In short, yes, we agree that we can do MoNE with all the intermediate nested experts (not just the 4), but we prototyped with the current design choice to show the benefits to begin with. We thank the reviewer for pointing to the dynamic sparsity work; we shall include it in the revised version of the manuscript. Compared to dynamic sparsity, MoNE and MatViT may offer a structural advantage, potentially leading to improved system efficiency by imposing a more organized form of sparsity. As evident in the **Throughput/Latency** section in the Global Author Rebuttal, even with a very high-level implementation, the FLOP gains directly translate to throughput and latency gains.
**Strengthening experimental evaluation with dynamic routing baselines and latency** MoNE, while being a MoE-style framework, differs starkly from the concept behind MoE networks. MoNE does not increase the number of parameters, unlike traditional MoE networks. In contrast, MoNE reduces computation while keeping the same parameter space. Given the constant parameter space, MoNE allows complementary use of other MoE methods. We discuss this point further in the **MoE Comparison** section in the Global Author Rebuttal. We compare against MoD because the method is similar in spirit to ours, conditionally computing the inputs based on complexity. We extensively experimented with MoD and found that the settings mentioned in MoD translate best to these vision datasets as well. The reviewer correctly points out that for classification, dynamic routing and token pruning methods are valid benchmarks. We therefore compare against some of these methods, and report results in the **Baseline Comparisons** section in the Global Author Rebuttal. This shows that MoNE can offer superior performance compared to other dynamic routing methods. We benchmark our method on ImageNet-21K for images to depict the large-scale efficacy of our framework. We would like to point out that ImageNet-21K has been extensively benchmarked in the AugReg Paper [38]. Additionally, we show results on ImageNet-1K compared to other efficient architectures for smaller-scale ViT models in the **Baseline Comparisons** section of the Global Author Rebuttal. The reviewer correctly points out that FLOPs gains may not always translate to latency/throughput gains. Therefore, we present latency and throughput results as compared to baseline models in the **Throughput/Latency** section of the Global Author Rebuttal. We also present the variation of latency vs. FLOPs at different model capacities and observe that latency gains scale close to linearly with FLOP gains.
Also note that the "pad" operations mentioned in Figure 2a do not actually result in any additional computation, and can be avoided in implementation. **Significance of dynamic budget allocation** It is true that the uniform allocation performs as well as our heuristic method, but at a higher capacity requirement. More importantly, a uniform allocation $c_i=\frac{1}{4}, \forall i$ leads to a model with a fixed capacity based on the individual compute requirements of the experts, i.e., $e_c=\frac{1}{4}(\frac{1}{8}+\frac{1}{4}+\frac{1}{2}+1)=0.47$. In contrast, the proposed heuristic allows for flexible choice of model capacity ($e_c$) as per compute requirements. As discussed in the paper, given a user-specified $e_c$, many solutions exist for $c_i$, and it is non-trivial to choose one over the other. The heuristics add constraints to reach a solution which offers promising results. The high performance of MoNE across capacities is depicted in Figures 3 and 4. **Clarification on input for computing r_i in Eqn (1)** The router logits $r_i$ are computed on the block input $x_i$, because unlike traditional MoE architectures, MoNE leverages the nested structure not just in the FFN layer, but also in the self-attention, to further increase compute gains. The exact changes to these operations are briefly described in Appendix A.1 (Eqn. 5 and 6). **Padding to dimension D** Yes, padding is only to make sure two vectors of different dimensions can be added. However, in an efficient implementation, padding would not be necessary, since we can maintain features of different dimensions separately, and add only the non-zero regions whenever needed. **Limitations** We have addressed the limitations mentioned by the reviewer in this discussion and in the Global Author Rebuttal. We will include a more elaborate discussion of our method and areas for future work in our final revision. --- Rebuttal Comment 1.1: Comment: Thanks a lot for your response.
While I appreciate the added throughput results and discussion, I am inclined to keep my rating as I still think this would warrant a more extensive evaluation/discussion/modification of the paper. In the current state, the paper proposes a technically sound dynamic computing method but does not discuss/compare well to existing related work (e.g. no throughput for dynamic pruning methods), and the core evaluation in the paper is performed on very specific settings/benchmarks (imagenet21k for classification, and video classification), so it is also hard to get an insight on how it compares to existing dynamic methods. > We benchmark our method on ImageNet-21K for images to depict the large-scale efficacy of our framework. We would like to point out that ImageNet-21K has been extensively benchmarked in the AugReg Paper [38] While I do agree that ImageNet-21k might be a more robust benchmark than ImageNet-1k, it is much less standard in practice. With respect to existing baselines, it would be fairer to report ImageNet-21k results *in addition* to 1k rather than omit the 1k benchmark. > MoNE, while being a MoE-style framework, differs starkly from the concept behind MoE networks. MoNE does not increase the number of parameters, unlike traditional MoE networks. I agree but one may argue that this is the strength of MoE: increased model capacity / number of training parameters while keeping the number of inference parameters constant (at least for a batch size of 1). It would be easy to design a MoE counterpart that would have the same number of inference parameters as MoNE, or inversely, the same number of training parameters but faster inference. > However, in an efficient implementation, padding would not be necessary, since we can maintain features of different dimensions separately, and add only the non-zero regions whenever needed.
I am not 100% convinced it would be so trivial to reach such an efficient implementation (hardware-wise), as it would also mean having to constantly handle tokens with different numbers of dimensions, as opposed to the standard "large tensor multiplication/sums". In that sense I do see the advantage of simplicity for padding, but it seems to be a bit of a negative point from a memory usage perspective (though MoEs also have a similar issue with batched inference and having to activate many experts). --- Reply to Comment 1.1.1: Comment: We would like to clarify some potential misunderstandings by the reviewer, and mention that many of these points are addressed in our [global rebuttal](https://openreview.net/forum?id=HbV5vRJMOY&noteId=m6HR8cjvnf) to all reviewers. In particular, we have already compared to existing dynamic models on ImageNet-1K in our initial rebuttal. Moreover, our current implementation does not use unnecessary padding, and we have already demonstrated substantial gains in latency (in addition to theoretical FLOPs) in our initial rebuttal. Finally, we also discussed in our initial rebuttal how MoNE is complementary to traditional MoEs. We now detail these points below: **Baselines** We compared to existing dynamic computing methods in the second table of our initial [Global Rebuttal](https://openreview.net/forum?id=HbV5vRJMOY&noteId=m6HR8cjvnf). This table shows that we outperform existing approaches in both accuracy and GFLOPs when using the same ViT-Ti backbone. Note that throughput comparisons are difficult as they are hardware and implementation dependent (e.g., the ViT-B/16 throughput mentioned in [47] is 300 img/sec vs. 659 img/sec in [38]), thus we have only compared our method to a vanilla ViT using the same hardware and code-base. **ImageNet-1K** To clarify, our **Baseline Comparisons** table in the [Global Author Rebuttal](https://openreview.net/forum?id=HbV5vRJMOY&noteId=m6HR8cjvnf) is indeed on the **standard ImageNet-1k benchmark**.
It indicates superior performance at lower FLOPs over other dynamic methods _in addition to_ the extensive ImageNet-21K results shown in Figures 3 and 5 of the paper. Additionally, the same section mentions that Token Merging techniques (ToMe) can be applied on top of MoNE to further achieve latency savings, thus depicting our framework’s flexibility of use and how it is complementary to related work. The performance of MoNE is further shown on **standard** video classification benchmarks (Kinetics400, SSv2), indicating that MoNE is able to achieve baseline performance at significantly lower compute. We hope these clarifications help and are happy to clarify further. **MoEs** As detailed in the [Global Author Rebuttal](https://openreview.net/forum?id=HbV5vRJMOY&noteId=v0FuPA3Sk5), MoNE is comparable to a dense model, as it does not increase the parameter space, and we have demonstrated accuracy and efficiency gains over dense models. Extending MoNE to a sparse MoE setup is a promising direction for future work. Concretely, we can use MoNE in a sparse MoE setup where the experts are multiple MoNE layers, instead of traditional dense layers. This would reduce the compute cost of an MoE whilst still having the same parameter space. In response to "It would be easy to design a MoE counterpart that would have the same number of inference parameters as MoNE …", it is important to note that the parameter-scaling properties of MoE models are different to dense models, particularly in the "low-parameter" regime. As shown in Table 8 of VMoE [34], MoE B/16 models that have the same number of parameters as dense ViT L/16 models underperform the dense models by 2-5% depending on the task. **Efficient Implementation** We would like to clarify that while Figure 2a of the paper indicates a padding for bringing feature representations to the same dimension, it is _only_ for illustration / ease of explanation.
In our actual efficient implementation, we handle tokens with different dimensions as different tensors, processing them in parallel without padding, and concatenating only for combined operations that require the whole token set together, which happens to be the case only for the softmax operation in self-attention. In this way, we match the exact theoretical FLOPs as well as achieve the gains indicated in the **Latency/Throughput** section of the [Global Author Rebuttal](https://openreview.net/forum?id=HbV5vRJMOY&noteId=v0FuPA3Sk5). Consequently, from a memory perspective as well, we do not incur any additional costs as we perform the exact operations _without padding_.
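[Editor's note: a rough illustration of the padding-free strategy described above (a hypothetical sketch with made-up shapes, not the authors' implementation): tokens routed to different nested dimensions live in separate tensors, each multiplied against the matching slice of the shared weights, and are concatenated only for the one operation that needs the whole token set:]

```python
import numpy as np

D = 64
# Tokens grouped by their routed nested dimension (counts are arbitrary).
groups = {D // 4: np.random.randn(10, D // 4),
          D // 2: np.random.randn(4, D // 2),
          D:      np.random.randn(2, D)}

W_q = np.random.randn(D, D)   # a shared full-dimension projection

# Each group multiplies only the matching slice of W_q: no padded zeros
# are ever materialized or multiplied.
queries = [x @ W_q[:d, :] for d, x in groups.items()]

# Concatenate only where an op needs all tokens at once (e.g. the softmax
# over attention scores in self-attention).
Q = np.concatenate(queries, axis=0)
assert Q.shape == (16, D)     # 10 + 4 + 2 tokens, all now at dim D
```

The memory point follows from the same structure: each group is stored at its own width, so no zero-padded regions are allocated.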
Summary: This paper presents a method to select nested portions of a transformer network, using a MoE router-expert assignment method where each expert is a progressively larger slicing of a single underlying model. A capacity budget determines how many tokens can go to each expert, while a router network scores experts for each token so that the most important tokens (in the sense of benefitting from the larger computations) go to the larger model slices. Furthermore, when trained using random budget selection, the model can be dynamically scaled to different cost-accuracy tradeoffs at inference time. The model is evaluated on image and video classification tasks using ImageNet-21k, K400 and SSv2, with excellent FLOPs-accuracy performance compared to baselines, and a set of ablations show the impact of different design choices for router placement. Strengths: This is a well-written paper that describes an interesting and effective idea. The approach is evaluated convincingly on three datasets for image and video classification tasks, and in enough settings to profile its performance characteristics and anecdotal visualizations of its behavior. Weaknesses: There are a few points that I think were under-explained (see questions below). In particular the descriptions of the projections around the MLP and SA were a little terse, and I'm still not sure exactly which operations are in the subspaces vs the full dimensional space. The connection between token redundancy and the operation of the model is also a little tenuous, though likely (see below as well), and while FLOPs are measured explicitly, the impact on both real computation savings and runtime may depend on hardware and distributed computation implementations, which I didn't see mentioned. Technical Quality: 3 Clarity: 4 Questions for Authors: * sec 3.1 I don't understand exactly where the projection back out to 4*D or D happens, and if this was a linear projection or padding?
What operations are performed in the full-dimensional space, and which in the subspaces? Right now it's unclear exactly where the computation savings are, as I'm not sure exactly which operations are in the smaller subspace. * sec 3.1 l.102: "it is always projected to the model dimension D for the (QK)V operation." This could use more explanation on how the projection is done. Is this linear or padding? A more explicit equation for the projection from D/m to D could help. * Which tensors and dimensions are the layernorms performed over? It appears they are performed within each expert separately (which would make sense since they have different dimensions) but I'm not entirely sure. * The motivation mentions redundancy in the tokens quite a bit, and while I agree with that in general, I think the link so far is tenuous and wasn't made very explicit. In particular, if the router network is a linear classifier on the first transformer input, then how can it possibly know which tokens are redundant without comparing between them? Does such a comparison come from lower conv layers in feature extraction? Or is it just that some types of tokens tend to be more redundant than others (flat regions, for example, are just by nature of being flat and therefore next to something similar). * eq 3: the effective capacity e_c is defined as a linear combination of the dimension ratios, but d-dimensional MLP will use O(d^2) operations, so that a d/2 dimensional MLP will have 1/4 of the multiply-adds compared to a d dimensional one. Does this matter for the capacity assignments? Or is this not actually the case because the MLP always has one 4D hidden layer, so it is a linear scaling? * also eq 3: the overall intuition makes sense as described, but this particular optimization seems a little arbitrary; in particular, why are the weights delta^i exponential, and why is the entropy term needed in addition to the e_c capacity constraint? 
It seems possible the entropy is counteracting the exponential delta weights, when this could have been less aggressively weighted with no entropy? I don't know that there is necessarily a benefit to equal usage of all experts in this case, either (see my other question on runtime and distributed computation below) * for each of the different FLOPs levels in the performance curves, what are the expert assignment ratios? * Another interesting point of comparison would be the accuracy levels of each expert alone --- that is, the 4 single-expert routing assignments (1,0,0,0), (0,1,0,0) etc. * an interesting extension would be a lowest-capacity expert with 0 in its dimension slice, so no MLP, but still included in self-attention as well. this one is perhaps beyond the scope of this work but seems natural considering the redundant computation motivations * something that wasn't touched on was the impact on model runtime and latency, particularly in distributed settings, although that will be hardware-dependent. Since there are sync points between the experts at all SA layers, how to distribute computation between devices isn't as immediately clear as for many same-capacity experts, since the longest-running computation can be in any of the four expert capacities depending on the assignments * sec 4.4 videos: I agree that there is redundant temporal information and that would be a great application of MoNE. However, the application in the second paragraph doesn't actually seem to leverage or implement that, since it says that it applies the MoNE frame-by-frame. If that's the case, the temporal dimension wouldn't be used in MoNE to determine important vs redundant computations. If time and space combinations interleave or alternate, then it could --- but the description indicates that isn't the case. 
On the other hand, even without this, just looking at a single frame, some regions are more likely to be redundant across frames than others, and so the argument for MoNE may still apply because of that. Whatever the case, it appears contradictory in the text right now and could use a little more explanation. Smaller comments: * l.122: "with k = 1" isn't very clear, I initially thought it meant "if k = 1" which prompted me to ask, but what if k > 1? In fact, I think this means to say e_i is a subset of e_i+1 and k is always 1 (i.e. exactly one of the nesting levels is selected) * meaning of i, j isn't always consistent, e.g. l.114 and 168 sum i over 1..E, but in most other places i is tokens and j is experts, so seeing i index the experts in a few places is a little confusing * Alg 1 EPR: Should "T" be "N" here? Also, the way this is phrased with the floor function, there could be a few left-over tokens at the end that are unassigned. I take it these are assigned to the cheapest expert rather than being dropped? * l.185: "flexibility" --- If the flexibility here is dynamic and can be specified at inference time, then c would need to be changed at inference time as well --- does this mean that many different values for c are used at training time? And how are they randomized or selected? --- this was answered in the adaptive training experiment later in sec 5, but could be good to mention in the method description Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: addressed in the final discussion section Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and are glad to see that the reviewer found our work to be well-written, interesting and effective with a comprehensive evaluation. **Projections and compute save. Projection from D/m to D for (QK)V.** In a Transformer model, there are in total 8 primary linear projection operations: (1) 3 to obtain $Q=W_qx$, $K=W_kx$, $V=W_vx$, (2) 2 to obtain $y=(QK^T)V$, (3) 1 in applying $z = W_oy$, (4) 2 in the MLP: $W_2\sigma(W_1x)$. MoNE saves compute in all of these ops, except (2). In transformers, the weight matrices are of size $\mathbb{R}^{d_a\times d_b}$, where $d_a, d_b$ represent the feature dim (model dim, MLP dim, etc). In MoNE, these are either $\mathbb{R}^{d_a\times \frac{d_b}{m}}$ or $\mathbb{R}^{\frac{d_a}{m}\times d_b}$, depending on the op. $m$ denotes the reduction in the feature dim for the nested expert. Hence, in all projections, nested experts have FLOP gains by a factor of $m(>1)$. Denoting the model dimension as $D$, the features are reduced from $D$ to $\frac{D}{m}$. For example, for the nested model with dim $\frac{D}{m}$, in (1) the input $x$ is $\frac{D}{m}$ which is projected to $D$. For that, we slice a weight matrix of dim $\mathbb{R}^{D\times \frac{D}{m}}$ from the full matrix of $\mathbb{R}^{D\times D}$, and multiply it with $x$, as explained in Lines 93-94. Hence we save by a factor of $m$, by choosing the nested matrix instead of the full matrix, for which the input would be $D$. Similar explanations apply to (3) and (4). These operations are discussed in Appendix A.1. **Layer Normalization** As in Line 98, after the linear projections, the features are padded to $D$ to be added to the residual features, and LayerNorm is applied over the full $D$ dim. In the next layer, a part of the feature $\frac{D}{m}$ is sent and weight slicing operations are applied to project, as discussed above.
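[Editor's note: a small sketch of point (4) above (the MLP), with hypothetical dimensions, showing why the nested expert's multiply-adds drop by exactly the factor $m$ while the hidden dimension stays at $4D$. This is our own illustration, not the authors' code:]

```python
import numpy as np

D, m = 64, 4                       # model dim and nesting factor (illustrative)
W1 = np.random.randn(4 * D, D)     # full MLP: D -> 4D
W2 = np.random.randn(D, 4 * D)     # full MLP: 4D -> D

def nested_mlp(x, m):
    """The nested expert reuses slices of W1/W2 along the model dimension."""
    d = D // m
    h = np.maximum(0.0, W1[:, :d] @ x)   # (4D, d) @ (d,) -> (4D,)
    return W2[:d, :] @ h                 # (d, 4D) @ (4D,) -> (d,)

y = nested_mlp(np.random.randn(D // m), m)
assert y.shape == (D // m,)

# Multiply-add counts: the full MLP does 2 * 4D * D, the nested one
# 2 * 4D * (D/m) -- an m-fold reduction.
assert 2 * 4 * D * D == m * (2 * 4 * D * (D // m))
```

Only the model-dimension side of each weight is sliced; the $4D$ hidden side is untouched, which is why the saving is linear in $1/m$ rather than quadratic.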
**Leveraging redundancy** By redundant, we mean that the info of a certain token set is redundant/less needed, given a few important/needed ones, as measured by final accuracy for a given compute. Our router does learn to prioritize features like edges for the largest expert, confirming the reviewer's hypothesis. Such features can be learned by low-level convolutional filters. Tasks which need to compress redundancy at a higher level might need a router deeper into the network. Additionally, ViTs are known [c] to shuffle info among tokens, acting as an intrinsic router, thus adapting itself given the initial router decisions. [c] Darcet et al., Vision transformers need registers, ICLR 2024 **Eq. 3 relation between dim and capacity** Yes, the reviewer's last statement on this point is indeed correct. In the MLP dimension, the full model would do two projections: $D \rightarrow 4D$ and $4D \rightarrow D$, amounting to 2 * 4D * D compute. The nested model would do $\frac{D}{m} \rightarrow 4D$ and $4D \rightarrow \frac{D}{m}$, hence amounting to 2 * 4D * $\frac{D}{m}$ compute, an $m$-fold reduction. **Eq. 3 Rationale** The weights $\delta^i$ are exponential to account for the exponentially distributed accuracy differences (Fig 3, 4) between consecutive MatViT models. While this is empirical, this first term acts as a proxy for the accuracy obtained by a particular expert. The larger experts are thus favored more. However, without the entropy term, this objective would lead to a greedy assignment of either the largest or the smallest expert. This will not leverage the framework’s full potential. The entropy term balances this, so that each expert is provided a significant capacity. We empirically justify some of our choices in Table 1 of the paper, showing that this heuristic leads to competitive results across capacities. **FLOPs to Expert Capacity** The expert assignment ratio, $c_i,i\in[1,E]$ for expert $i$ is obtained by optimizing Eqn 3 for a given capacity $e_c$.
Given FLOP needs, $e_c$ can be obtained by solving: GIVEN_FLOPS = FLOP_b + e_c * FLOP_a, where FLOP_b corresponds to the model operations that are not reduced by MoNE (patch embedding, attention value ($(QK^T)V$) computation), and FLOP_a corresponds to the linear projections. **Single expert accuracy** The individual MatFormer expert accuracies are shown in Figs. 3 and 4 as MatViT/MatViViT. **Zero-capacity expert** This is close to skipping entire layers for some tokens in Mixture-of-Depths (MoD). We do compare with MoD in Fig 3, and find MoNE to perform better. **Runtime/latency** We agree that latency is heavily implementation dependent. Our high-level JAX implementation resulted in latency gains scaling linearly with FLOP gains. We present this in the **Latency/Throughput** part of the Global Author Rebuttal. The largest computations that bottleneck throughput can be resolved by efficient implementation. The ops are tensor products of different sizes, which can be decomposed into smaller sub-tensor products and efficiently parallelized (Fig 3 [14]), avoiding bottlenecks. We believe that further latency improvements can be obtained by low-level implementations. **Sec 4.4 videos** Our exploration of MoNE in videos is a first step based on the popular factorized-encoder ViViT [2] model, for which the gains in the spatial dimension are more readily realized, given that temporal information here is integrated late in the architecture, so a router has 196*16 choices spatially vs. 16 temporally (over the 16 frame-wise CLS tokens). We will expand and clarify this discussion further; integrations with other space-time video architectures with more interleaved operations are an exciting direction for future work. **Smaller comments:** We thank the reviewer for the smaller comments. The reviewer's understanding is correct. We will clarify them in the paper. Yes, remaining tokens due to integer packing are not dropped but assigned to the smallest expert.
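[Editor's note: the relation above can be inverted directly. The sketch below is our own simplified illustration with made-up numbers, not the paper's Algorithm 1: it solves GIVEN_FLOPS = FLOP_b + e_c * FLOP_a for $e_c$, and shows a greedy floor-based token assignment in which integer-packing leftovers fall to the smallest expert, as described:]

```python
import math
import random

def capacity_from_flops(given_flops, flop_b, flop_a):
    """Invert GIVEN_FLOPS = FLOP_b + e_c * FLOP_a for the capacity e_c."""
    return (given_flops - flop_b) / flop_a

def assign_tokens(scores, capacities):
    """Greedy sketch: rank tokens by router score, fill experts from the
    largest down with floor(c_j * N) tokens each; leftover tokens from
    the floor go to the smallest expert (index 0)."""
    N = len(scores)
    order = sorted(range(N), key=lambda i: -scores[i])
    assignment = [0] * N                         # default: smallest expert
    start = 0
    for j in range(len(capacities) - 1, 0, -1):  # largest expert first
        k = math.floor(capacities[j] * N)
        for i in order[start:start + k]:
            assignment[i] = j
        start += k
    return assignment

assert capacity_from_flops(10.0, 4.0, 12.0) == 0.5   # 4 + 0.5 * 12 = 10

scores = [random.random() for _ in range(10)]
a = assign_tokens(scores, [0.25, 0.25, 0.25, 0.25])
assert len(a) == 10 and set(a) <= {0, 1, 2, 3}
```

With $N=10$ and uniform capacities of 0.25, each of the three larger experts receives $\lfloor 2.5 \rfloor = 2$ tokens, and the four remaining tokens (the smallest expert's share plus the two leftovers) default to expert 0.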
--- Rebuttal Comment 1.1: Title: responses Comment: Thanks for the responses. I especially appreciate the new latency and throughput measurements and additional baselines as requested by a couple of the other reviewers. I will keep my score at 7. The explanation of slicing above helped, as it explains better that each of the W matrices are sliced on one of their dimensions, but I still think it could be clearer. I found it helpful to write out the dimensions of all the W matrices and go back over the transformer ops you enumerated, distinguishing between "D" and "H" dims in each: $W_q : [H_A, D]$, $W_k : [H_A, D]$, $W_v : [H_V, D]$, $W_o : [D, H_V]$, $W1 : [H_{MLP}, D]$, $W2 : [D, H_{MLP}]$ in which case all the $D$ dims are sliced to $D/m$, and applied to correspondingly sliced vectors, and all the $H$ dims are left intact, if I understand correctly. I also now see that in Fig 2, the Pad operation is not shown in 2b, only 2a, and that the order of slicing and LN is inconsistent between 2a and 2b: 2a applies LN to full D-dim features, which makes sense, but 2b indicates that x is sliced before applying LN, in which case it's unclear if this means x is padded back up to D and re-sliced, or if it's actually applied as in 2a, with LN before slicing. I think this is where some of my confusion around these ops came from as well. Lastly, the horizontal blue bar of dim 4D in the MLP (and the horizontal red bar of dim D in SA) are a little confusing as well, as all the other sliced horizontal bars here are W matrices, and so it's not clear what the unlabeled bars correspond to in the figure. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their appreciation. The reviewer’s understanding is correct regarding slicing, and we will make it clear in the revised manuscript. In Figure 2b, we do not show the slicing operation in the intermediate layers and only show it in Fig 2a, to avoid an overcomplicated diagram.
Figure 2b is just to show the mixture of nestedness across tokens. The slicing at the input is just indicative of that token being processed at a certain nested dimension. The horizontal blue and red bars indicate the MLP and attention hidden dimensions, respectively, which are always at the full dimension ($H_A, H_V, H_{MLP}$ using the reviewer’s notation).
Summary: This paper tries to use the Matryoshka mechanism to assign tokens to different experts. Strengths: 1. It seems the proposed approach can learn some effective components in images, as shown in the visualizations. 2. The empirical performance is good compared to MatViT. Weaknesses: 1. Why is there no comparison with a single-scale FF? What I mean is: starting from MatViT, continue finetuning the model with the same amount of compute. 2. Why is the ImageNet accuracy so low compared to a normal ViT, e.g., in Figure 3? 3. Why are there no experiments on ImageNet-1k? Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In "we place a single router at the first transformer layer, and propagate the router decisions to all the layers.", why do this? Why can't we learn from MoE papers and place routers at all transformer layers? 2. How stable is router training? How do you evaluate whether the router can really find a suitable compute to match the task difficulty for a specific image/video? 3. In Figure 6b, do you mean that there is one router in the whole model? This is surprising. 4. It seems the random router is only a bit worse than the learned one... Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: 1. Important tokens: is there a way to measure importance? 2. The method description part is not clear to me. 3. The authors should also consider comparing with the TokenMerging series of work, since the goal is the same: to save compute at the token level. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's positive feedback and are pleased they found our method produces informative visualizations and better results than MatViT. We'll address their questions below. **Comparison with Single-Scale FF and Fine-tuning with Same Compute** We appreciate the reviewer's suggestion regarding fine-tuning our framework from a MatViT model with the same compute. We exactly follow this procedure for videos, as detailed in Section 5, Lines 246-254. We utilize isoFLOPs training [24, 32] across all methods in Figure 4, meaning equal training FLOPs. While the x-axes in Figures 3 and 4 depict the inference FLOPs consumed by the individual expert models (MatViT and MatViViT) and MoNE models with varying capacities, they are trained in an isoFLOPs manner. We'll clarify this further by updating the figures and their captions. For images, we show that MoNE can be trained from scratch for the same number of epochs as ViT (thus taking much lower training FLOPs), and still perform favorably against ViT and MatViT (Figure 3). In Figure 3a, we show that MoNE’s performance can be further improved by using the same training FLOPs as ViT (MoNE-isoFLOPs). Note that MatViT performs 4 forward passes per training step, while MoNE performs just one. Regarding comparison with a single scale, Figures 3 and 4 depict the performance of individual MatFormer models. We will highlight these points in the revision. **Low ImageNet Accuracy** The numbers in Figure 3 correspond to validation accuracy on ImageNet-21k, and they are on par with those reported in Fig. 4 of the AugReg paper [38], with ~2x reduction in compute, for all three S, B and L models. **Experiments on ImageNet1k** We primarily experiment on ImageNet-21k to showcase our model's effectiveness in large data regimes. This setting has been extensively benchmarked earlier in the AugReg [38] paper.
However, as ImageNet-1k is a widely used benchmark for smaller-scale ViT models, we compare MoNE with other adaptive ViT models in the **Baseline Comparisons** section of the Global Author Rebuttal. **Router Placement** Our method substantially differs from traditional MoE frameworks. Unlike MoE, where parameter count increases with routers in more layers, our method maintains a constant parameter count regardless of the number of routers. This methodological difference explains why MoE router settings don't directly apply to our framework. We experiment with router placement and present results in Figures 6a,b. Experiments suggest the best setup is a single router at the first transformer layer. While having multiple routers (expert choice) is the norm in the MoE literature, the MoNE setup is different from MoE. We further extend this discussion in the **Number of Routers** section of the Global Author Rebuttal. **Router Stability and its effectiveness in identifying task difficulty** The router is jointly trained with the model to optimize only the classification loss, and we find the training to be stable, as suggested by the high performance of the framework and relevant visualizations in the paper. Figures 1 and 7 show that MoNE can learn to assign important tokens to the larger nested models, and the redundant ones to the smaller nested models. The question raised about task difficulty is a really interesting one. While we fix a capacity and route tokens according to importance, as would be necessary for real-time applications, we do see the model is inherently able to learn the difficulty level of a specific data point. We discuss and visualize these results in the **Task Difficulty** section of the Global Author Rebuttal. These results validate that the router is able to associate compute requirements with task difficulty. **Random vs Learned Router** Figure 6c shows the performance of the model with a learned vs a random router at different capacities.
While for higher capacities, the learned router performs marginally better than the random one, the gap significantly widens as we go to lower capacities, from 0.1% at e_c = 0.6 to 1.3% at e_c = 0.2. This makes sense: with ample capacity, many tokens can be heavily processed, reducing the need for smart routing. Conversely, in low-capacity scenarios, routing decisions become crucial as only a few tokens can utilize the heavy experts. Interestingly, ViTs inherently shuffle information [c], potentially even in the "Random" router setting, acting as an intrinsic information router. We note that a model trained with a learned router, when evaluated with a random router, performs significantly worse (~6% drop in Top1 Acc on Ti/16 trained on ImageNet1k). We will update the paper with these discussions. [c] Darcet et al., Vision transformers need registers, ICLR 2024 **Measuring Token Importance** The router leads to intuitively sensible decisions on a per-token basis, as depicted in Figures 1 and 7, on both images and videos. The router decisions correlate well with the important tokens in the image/video. In Figure 7a, we see that the relevant image regions are processed by the largest expert. In Figure 7b, we see that the tokens sent to the largest expert correlate well with the motion in the video. This is a qualitative measure of token importance, and the router decision logits can thus be taken as a quantitative measure as well, which the EPR algorithm does while assigning tokens to experts. **Comparison with Token Merging** Since TokenMerging algorithms can work on top of any ViT-like model, they are complementary to our method and can potentially be applied to a pre-trained MoNE model to further reduce computation. We empirically validate this claim in **Baseline Comparisons** in the Global Author Rebuttal and compare against other adaptive algorithms as well.
Note that a naive implementation already performs well, and this can be further improved by taking the MoNE architecture into account, as discussed. We will add this result to the revised manuscript.
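The capacity-constrained token-to-expert assignment discussed in this rebuttal can be illustrated with a short sketch. This is our own minimal reconstruction of an Expert-Preferential-Routing-style assignment, not the paper's Algorithm 1: tokens greedily fill the nested experts from largest to smallest under per-expert capacity fractions, and the loop runs only over experts, as the rebuttal notes. Function and variable names are illustrative.

```python
import numpy as np

def greedy_capacity_routing(router_probs, capacities):
    """Assign each token to one nested expert, filling experts greedily
    from largest to smallest under per-expert capacity fractions.

    router_probs: (num_tokens, num_experts), higher = prefer that expert.
    capacities:   fraction of tokens each expert may take, largest expert
                  first; assumed to sum to 1.
    """
    num_tokens, num_experts = router_probs.shape
    assignment = np.full(num_tokens, -1, dtype=int)
    # Loop over experts only (few, e.g. 4), mirroring the rebuttal's note
    # that EPR's loop is over the number of experts.
    for e in range(num_experts):
        budget = int(round(capacities[e] * num_tokens))
        free = np.flatnonzero(assignment == -1)
        # The most "important" unassigned tokens (by this expert's router
        # score) fill this expert's budget first.
        order = free[np.argsort(-router_probs[free, e])]
        assignment[order[:budget]] = e
    # Any leftovers from rounding go to the smallest expert.
    assignment[assignment == -1] = num_experts - 1
    return assignment
```

With four experts this loop body runs four times regardless of the token count, which is consistent with the negligible overhead reported in the rebuttal above.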
Summary: This paper introduces the concept of Mixture of Nested Experts (MoNE), which utilizes a nested architecture to process visual tokens more efficiently in visual media like images and videos. MoNE aims to leverage redundancy in data, choosing experts in a priority order to process visual tokens, thereby achieving substantial compute savings while maintaining performance. The approach, demonstrated with MoNE's algorithms like MoNE-21K and MoNE-4K, optimizes adaptive processing on standard image and video datasets, significantly reducing computational demands while using a single trained model. Strengths: 1. Overall, the paper is well-motivated and the idea of combining nested structures with MoE is very interesting; the problem of information redundancy does exist in image or video classification tasks. 2. Empirically, the MoNE models attain very competitive results with fewer FLOPs and parameters, which supports their theoretical analysis of visual information redundancy. 3. The paper is well-written and easy to understand. The method is simple and easy to follow. Weaknesses: 1. My main concerns are with the actual inference speed of your models. The experimental results reported in the paper focus on comparing FLOPs rather than throughput. Could the authors demonstrate some direct comparisons of training/inference speed? 2. The EPR algorithm (Algorithm 1) proposed in the paper seems to implement routing operations through some loops. Is this process actually implemented through loops or parallel computing? Does this operation take up a lot of inference time? 3. I'm also concerned about whether the method can benefit dense prediction tasks such as image segmentation. As those tasks have less information redundancy, will MoNE result in performance degradation? Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. We are glad to hear that the reviewer found the core idea of the work – nested structures to exploit information redundancy – to be well motivated and interesting, the experimental results to be promising, and the paper to be well-written. Below we address some of the questions raised by the reviewer. **Actual inference speed comparisons (Throughput/Latency)** We agree with the reviewer about the importance of realizing the FLOP gains in terms of latency/throughput. We provide real inference speedups in the **Latency/Throughput** section of the Global Author Rebuttal. We will include these results in the main paper. **EPR algorithm implementation** The EPR algorithm contains a loop only over the number of experts, which is quite low and fixed to 4 in our framework. While the nature of the EPR algorithm does not allow the computation to be parallelised any further, the time taken by the algorithm is negligibly small compared to the total time taken by the model. For comparison (on GPU), for a ViT-B/16 that takes 190ms for forward propagation, the EPR algorithm takes 0.5ms, less than **0.3%** of the total computation time. We will mention this in the revised paper. **Applying MoNE to dense prediction tasks** Thank you for this suggestion. As dense prediction tasks typically operate on higher resolution images, we believe MoNE can offer further computational gains as the number of input tokens increases. Dense prediction tasks also provide stronger supervision at each pixel, which may help our model learn redundancy in the data. Leveraging MoNE for tackling redundancy in denser tasks would be an exciting direction for future work. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thanks for the authors' rebuttal. Most of my concerns are well-addressed and I keep my rating of a weak accept. --- Reply to Comment 1.1.1: Comment: We're glad to hear that our rebuttal addressed your concerns.
Rebuttal 1: Rebuttal: **Latency/Throughput** We present the latency/throughput gains of MoNE compared to baselines here, in addition to the FLOP gains mentioned in the paper. In the table below, we show absolute wall clock times and throughput for MoNE compared to a baseline ViViT model, on a single V100 GPU, achieving ~2x improvement in both FLOPs as well as runtime, whilst maintaining accuracy. We additionally show the variation of latency and throughput with FLOPs for varying model capacities $e_c$ of MoNE in Figs. 1, 2 (attached pdf). The plots show that latency and throughput gains scale close to linearly with FLOP gains. Inference gains depend heavily on implementation. A simple high-level efficient implementation of our framework yields gains of this scale, and we believe that further improvements can be obtained by optimizing a low-level GPU kernel implementation for MoNE. |Method|FLOPs (G)|Throughput (clips/sec)|Latency (ms)|Top-1 Acc| |-|-|-|-|-| | ViViT-FE-B/16| 376| 15.8| 129.2| 64.4| | MoNE $e_c=0.3$|**162**| **30.7**| **65.5**|**64.6**| **MoE Comparison** Our framework MoNE, unlike traditional MoEs, does not increase the parameter space. Traditional MoEs, like Sparse VMoE, route inputs in each layer to one out of k independent experts (typically the FFN block), each having the same parameter footprint, thus increasing the parameter space k-fold for the expert blocks. On the other hand, independent MoNE blocks can potentially be used as experts in the MoE framework. Therefore, our work _complements_ VMoE and other similar works. MoNE in this paper acts as an in-place replacement for a dense model (ViT), and hence all our comparisons maintain the same parameter space. VMoE frameworks show cross-scale results at the expense of increased parameter space (e.g., equivalent performance of VMoE-L/16 to ViT-H/14 in Table 2 in [34], and similar cross-scale comparisons in Figs. 4 to 8 in [1]). 
MoNE, in contrast, matches baseline performance with limited inference compute while working with the same parameter space. **Baseline Comparisons** We compare MoNE with more baselines, particularly with adaptive computation of dense models, as shown in the table below. We performed this experiment on ImageNet1k with a Ti/16 sized model. ACT, PonderNet, DepthAdapt, and A-ViT are works with a similar motivation of input adaptivity as MoNE, and MoNE shows superior performance. Latency gains on bigger models (e.g., ViT-B in our paper) are even higher, as also observed in the literature [7]. | Method|GFLOPs|Throughput (img/sec)|Top-1 Acc (ImageNet1k) | |-|-|-|-| | ViT|1.3|3410|71.3| | PonderNet [a, 18]| 1.0| - | 66.2| | Depth Adapt [18] | 1.1| - | 69.4| | ACT [22]| 1.0| - | 71.0 | | A-ViT [47]| 0.8| - | 71.0 | | MoNE (Ours)| **0.8**| **4333**| **71.4**| Additionally, in Fig. 3 (attached PDF), we apply Token Merging (ToMe) [7] on top of the MoNE style ViT-Ti/16 model trained on ImageNet1k. We trained a model with full capacity till layer 3 and a router with $e_c=0.5$ after that. We applied ToMe only on the first 3 layers. For a fair comparison, we compare the performance drop and quote the same for a ViT-Ti model from the ToMe paper. Our preliminary results demonstrate that this implementation improves performance compared to ToMe on ViT, and this can be made better by extending this approach to all MoNE layers, applying it to distinct sets of nested tokens. This indicates that ToMe is complementary to MoNE. We will add these results to the revised manuscript. [a] Banino et al., PonderNet: Learning to Ponder, arXiv:2107.05407, 2021. **Number of Routers** Note that the number of routers does not have the same implications in MoNE as in traditional MoE. In MoEs, the parameter count increases with the number of layers on which the expert router is placed, and hence we typically see performance gains. 
Even then, in Sparse VMoE [34] (Tables 2, 8), significantly increasing the parameter count with more routers only marginally improves performance. On the other hand, in MoNE, the parameter size remains fixed irrespective of the number of routers: the only change a router brings is re-assignment of tokens to nested experts while keeping the total compute per layer fixed. We hypothesize that increasing the number of routers leads to a slight decrease in performance for two reasons: (1) it brings in additional optimization challenges (as prevalent in the MoE literature as well [34]); (2) when we reassign a token from a smaller to a bigger nested expert, its information content may be bounded by the representation power of the smaller expert, hence not improving performance. The opposite may happen from bigger to smaller nested experts, thus losing information. Since MoNE allows flexibility in the placement of routers, an interesting future direction would be to extend MoNE to more challenging task settings, where a higher number of routers might lead to better results. We will update the revision with this discussion. **Task Difficulty based on router decisions** We visualize the inputs that require the least and most compute per the router decisions, in a setting without capacity constraints, in Fig. 4 (attached PDF). This is to understand whether the router decisions correlate with task difficulty, i.e., send harder samples to larger experts. Specifically, instead of performing token-to-nested-expert assignment using the EPR algorithm in the paper, we use the router decisions as is, by simply taking an argmax. Using this, we get an estimate of the computation that the router wants to assign to an image. The two sets of images indicate the top-3 images that demand the lowest and the highest compute, according to the router decisions. We observe that the images demanding less compute are visually simple, while the ones demanding the highest compute are relatively complex. 
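The per-image compute estimate described in the **Task Difficulty** section above can be sketched in a few lines. This is our own illustration, not the authors' code: shapes and the per-expert FLOP vector are assumptions, and each token simply goes to its argmax expert since no capacity constraint applies.

```python
import numpy as np

def image_compute_estimate(router_logits, expert_flops):
    """Estimate the compute the router 'wants' to assign to each image
    when unconstrained: each token goes to its argmax expert, and we sum
    the (assumed) per-token costs of the chosen nested experts.

    router_logits: (num_images, num_tokens, num_experts)
    expert_flops:  (num_experts,) relative per-token cost of each expert.
    """
    choice = router_logits.argmax(axis=-1)             # (images, tokens)
    return np.take(expert_flops, choice).sum(axis=-1)  # (images,)
```

Ranking images by this scalar would then yield the lowest- and highest-compute sets visualized in Fig. 4.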
Pdf: /pdf/e64077802aa9bb6a20d3623f0af560902b5327bd.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes the Mixture of Nested Experts (MoNE) framework. MoNE is built on top of the MatFormer architecture, which utilizes a nested design where smaller, less computationally expensive sub-models are nested within larger models. Similar to MatFormer, MoNE uses structured slices of the model weights (the experts) to process information hierarchically. While MatFormer focuses on obtaining many models on the optimal loss-vs-compute curve using the mix'n'match algorithm (during inference), MoNE learns to dynamically route visual tokens to an appropriate expert based on their significance and the computational budget. The results show that MoNE achieves a favorable accuracy-efficiency trade-off curve for image and video classification tasks compared to the baselines. Strengths: - The paper is well-written and easy to follow - The extension of the MatFormer architecture into an MoE architecture with dynamic routing seems quite intuitive. The MatFormer architecture is inherently composed of nested subnetworks, and enabling dynamic routing in this architecture makes sense. - Experiments on image and video classification tasks show MoNE outperforms the baselines. Weaknesses: - Baseline comparisons: It is conceivable that the dynamic routing approach can outperform the mix'n'match policy of MatFormer during inference, as smaller sub-networks can focus on simpler/uninformative tokens while the larger experts focus on more complex/informative tokens. Nonetheless, to fully evaluate the potential of the proposed MoE framework, it would be beneficial to compare it with other MoE architectures, especially those like Sparse Vision-MoE. - The advantage of having a nested set of experts compared to non-overlapping experts remains unclear. The qualitative results suggest that MoNE utilizes the full model for the most critical tokens, while less informative tokens are processed by smaller sub-networks. 
Given this, the paper could be strengthened by comparing MoNE to other dynamic token processing methods such as AdaViT [1], SVitt [2], and A-ViT [3], which have significantly enhanced computational efficiency in image and video recognition tasks by learning to dynamically skip tokens. 1. Meng, Lingchen, et al. "Adavit: Adaptive vision transformers for efficient image recognition." CVPR 2022. 2. Li, Yi, et al. "Svitt: Temporal learning of sparse video-text transformers." CVPR 2023. 3. Yin, Hongxu, et al. "A-vit: Adaptive tokens for efficient vision transformer." CVPR 2022. - Specialization of Experts in MoE Architectures: A primary focus in designing MoE architectures is the specialization of experts to address different aspects of the data distribution. Typically, having non-overlapping diverse experts is desirable. In contrast, MoNE and MatFormer consist of partially overlapping parameters, suggesting that the implicit learning bias in these architectures may differ significantly from that in conventional MoEs. Investigating the differences between MoNE experts and conventional sparse MoE architectures could greatly enhance the reader's understanding of the proposed MoE framework's behavior. - Worse results when increasing the number of routers: Standard MoE architectures employ MoE layers and routers in every (other) layer, enhancing the model's flexibility and expressiveness. However, it is concerning that increasing the number of routers leads to worse performance in MoNE. This suggests potential optimization challenges in this MoE model compared to standard MoEs. Notably, the best results are obtained when the router is placed at the very first layer, indicating that the dynamic decision-making for all layers is based on relatively simple cues at local feature levels. Further exploration into why the performance deteriorates with more routers compared to traditional MoEs, which typically see improved results, would be insightful. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Please review the weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mention one of the limitation of their work regarding extension to sequence modeling tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the insightful review. We are glad to hear that the reviewer found the paper to be well-written, the MoNE framework to be intuitive, and the experimental results to be compelling. Below we answer some of the questions raised by the reviewer. **Dynamic Routing vs MatFormer and the need for MoE comparisons** We agree with the reviewer that our dynamic framework allows for more informed token routing than the static mix’n’match strategy in MatFormer [16]. We would like to point out that MatFormer’s mix’n’match happens on a per-layer basis, with each layer processing all tokens uniformly using a particular nested model. Our method, in contrast, makes decisions per token. MatFormer does not explicitly explore input-dependent dynamic decision-making for tokens. It is non-trivial to come up with a learning framework which would allow inference-time mix’n’match where tokens are processed through different models. Regarding the concern about comparison with other MoE methods like Sparse VMoE, we discuss the comparison of MoNE with these MoE frameworks in the **MoE Comparison** section of the Global Author Rebuttal. In addition, MoNE can be extended to have multiple disjoint experts as in Sparse Vision-MoE, offering compute efficiency in such models. This can be considered as future work. We will update the manuscript with this clarification. **Clarifying the Advantages of Nested Experts and Comparison with Dynamic Token Processing Approaches** The key advantage of having a nested set of experts over non-overlapping experts is that the computational gains can be achieved without increasing the parameter space. Non-overlapping experts within a given parameter space would limit the representation power of each expert. However, as in MoNE, overlap between experts allows the largest expert to enjoy the full parameter space. 
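The nested-expert weight sharing discussed here can be made concrete with a small sketch. This is our own illustration of MatFormer-style nesting, not the authors' implementation; the slicing scheme, ReLU activation, and names are assumptions. Each smaller expert applies only a leading slice of the full feed-forward weights, so every expert's parameters are contained within the largest one's.

```python
import numpy as np

def nested_ffn(x, W1, W2, frac):
    """Feed-forward block of a nested expert.

    The expert at capacity `frac` uses only the first frac * D_hidden
    hidden units of the shared weights, so smaller experts are strict
    slices of the largest one and add no extra parameters.
    x: (tokens, D), W1: (D, D_hidden), W2: (D_hidden, D).
    """
    d = int(W1.shape[1] * frac)
    h = np.maximum(x @ W1[:, :d], 0.0)  # ReLU on the sliced hidden units
    return h @ W2[:d, :]
```

With `frac=1.0` this reduces to the ordinary dense FFN, which illustrates why the largest nested expert retains the full representation power of the model.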
Additionally, as shown in Table 5 of the MatFormer [16] paper, joint optimization of shared experts leads to better performance than having independent experts of the same size. We additionally compare our method with some of the methods mentioned by the reviewer, and show results in the **Baseline Comparisons** section of the Global Author Rebuttal. The comparisons show that MoNE performs better than other adaptive processing algorithms. **Understanding Expert Specialization: Contrasting MoNE and Conventional Sparse MoE Architectures** The primary motivation behind the nestedness in MoNE is input adaptivity thus offering compute efficiency while keeping the same parameter space. MoNE still remains fully compatible with traditional MoE approaches (Mixture of MoNE). In that setup, intuitively speaking, not all concepts may need the same amount of compute and using MoNE in the MoE setup would offer compute efficient MoE models. While one of the expected outcomes of MoE architectures is specialization of experts, this is not always the case in literature. Quoting a few lines from Section 5 of Mixtral of Experts [b], a MoE method that has non-overlapping experts - “Surprisingly, we do not observe obvious patterns in the assignment of experts based on the topic. For instance, at all layers, the distribution of expert assignment is very similar for ArXiv papers (written in Latex), for biology (PubMed Abstracts), and for Philosophy (PhilPapers) documents.“ Moreover, in Sparse VMoE [34] the authors observe very weak correlation of router decisions to categories. [b] A Jiang et al. Mixtral of Experts. arXiv:2401.04088, 2024. **Understanding the Impact of the Number of Routers in MoNE** This is a very important point raised by the reviewer and we clarify in the **Number of Routers** section of the Global Author Rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses and the addition of new baseline comparisons. 
My main concerns have been addressed, and I am pleased to raise my score to 6. I would like to bring to your attention Flextron published at ICML'24 - "Flextron: Many-in-One Flexible Large Language Model" by Cai, Ruisi, et al. This work presents a very similar idea to your paper. While it is clearly a concurrent work and does not diminish the contributions of your paper, it would be beneficial to include a brief discussion highlighting the differences between the two approaches. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback and increased score. We are pleased to hear that your main concerns have been addressed. We appreciate you bringing Flextron to our attention. We have taken note of this concurrent work and agree that it would be beneficial to include a brief discussion in our paper highlighting the differences between the two approaches. We will make sure to incorporate this in our revised version.
Structured flexibility in recurrent neural networks via neuromodulation
Accept (poster)
Summary: The paper proposes an RNN architecture that includes synaptic modulation, motivated by neuromodulatory factors in the brain. In essence, the paper shows it is possible to linearly influence the connectivity matrix of a low-rank RNN by scaling it with the output of a smaller RNN, the latter nominally describing neuromodulation. The achieved composition of networks outperforms classical RNNs on tasks with timing and history dependence. The presence of the neuromodulatory signal also allows for multitask learning and reuse of previously learnt dynamics to learn a new task. The paper also compares the performance of the NM-RNN with that of an LSTM, both trained on a digit recall task, where LSTMs are known to outperform RNNs. Instead, the results show the added flexibility of the NM-RNN. Strengths: This is a nicely written paper, clear in motivation and generally well-executed. The idea of dynamically scaling the weights of an RNN is a well-reasoned concept, if not entirely novel (as per the prior literature on hypernetworks and synaptic scaling more generally). Adding this form of flexibility could be a useful concept to generalize neural network architectures, and it is certainly an interesting line of study from the perspective of theoretical neuroscience. Weaknesses: While I generally enjoyed the paper, there are a few issues that impact my initial score. The domain of impact of the paper is mostly in theoretical neuroscience, since it is not clear from the results that the absolute performance of the NM-RNNs actually exceeds popular constructs such as the LSTM (e.g., F5B). Because a lot of the paper is spent discussing the parallels of the NM-RNN gating mechanism and the LSTM, this becomes important from a mechanistic perspective. Can we gain insight into why there is a plateau in performance, despite the argued parallels in architecture? This could be useful from a fundamental ML perspective. 
Related to the above, it would be good to understand the actual performance of these networks, in addition to the loss (since there is not always a 1-1 correspondence between the loss and performance). It is also important to discuss the computational implications of the architecture from a trainability standpoint (see Question below). From the theoretical neuroscience standpoint, the assumption of neuromodulation acting in this way (weighting low rank components) seems quite abstract and strong. I can understand the argument that networks are low rank, but the idea that modulation effectively selects or weights these components would seemingly require a high degree of precision and some ‘knowledge’ of these components at the level of the modulation generation (I note some discussion of this in the Future Work). Is this realistic, and/or are there ways that the developed models could be substantiated in actual experiments? Are there other ways of neuromodulation entering the model that would be similarly effective, or does it really need to be this sum of low rank components? Technical Quality: 3 Clarity: 3 Questions for Authors: The architecture seems to require an a priori specification of the maximum rank of the connectivity matrix. From an application standpoint, how would this be determined? Is the proposed method amenable to batch training? Can the authors characterize the trainability/computational implementation considerations of the proposed architecture, relative to say an LSTM? Certainly, there would seem to be more overhead relative to a vanilla RNN. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The discussion contains several limitations and future work directions, which are largely appropriate and appreciated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your helpful comments. We have addressed some shared concerns regarding performance comparisons to LSTMs and vanilla RNNs in the general rebuttal. To respond to your comments regarding weaknesses of the paper: In general, we don’t believe the NM-RNN will outperform the LSTM. As you mentioned, there is a plateau in performance despite similarities in structure. While our NM-RNN captures some aspects of the LSTM via its synaptic weight scaling, other key features of the LSTM are not implemented. In particular, the input and output weights of the NM-RNN are held constant. Perhaps modulating these weights would help recover more LSTM-like performance. However, our main goal was to offer this model as an alternative to commonly-used low-rank RNNs, as a tool to model biological task completion with neuromodulation. In contrast, LSTMs are not commonly used as “brain-like” models of neural dynamics. We give example outputs of each model on the Measure-Wait-Go task to highlight specific performance (paper fig. 3D and rebuttal fig. B). For the multitask setting and EFT, example outputs are noisier and are less visibly distinguishable between model types, so we did not visualize them, instead choosing to summarize performance via % correct and loss metrics. We chose to have neuromodulation differentially impact low-rank components to mimic the selective way neuromodulators act on particular synapses [1], and to situate our model alongside the widely used low-rank RNN. However, this choice is certainly not the only way to model neuromodulation in an RNN—one alternative might be to implement neuromodulatory weights on a mixture of sparse weight matrices, instead of low-rank components. As another example, in Tsuda et al. the authors choose particular subsets of weights on which to apply neuromodulation. However, these subsets are prespecified instead of learned during training. 
To answer your questions: The architecture seems to require an a priori specification of the maximum rank of the connectivity matrix. From an application standpoint, how would this be determined? The rank of the connectivity matrix is indeed a hyperparameter that must be determined before training. In general, we performed a parameter sweep over various numbers of ranks to determine the lowest rank that achieved satisfactory performance. For the MWG task, we chose rank-1 networks due to their theoretical tractability (and because even at this lowest possible rank, the NM-RNN can solve the task). However, to compare to the low-rank RNN we chose rank-3 NM-RNNs, to match prior work which used rank-3 RNNs to model this task [2]. Is the proposed method amenable to batch training? Can the authors characterize the trainability/computational implementation considerations of the proposed architecture, relative to say an LSTM? Certainly, there would seem to be more overhead relative to a vanilla RNN. This method is amenable to batch training, which we used in the multitask and EFT settings. For the MWG task, the space of possible samples was small enough to do full batch training. In terms of computational implementation, all models must be run sequentially, so while there are slight differences in training time, the overall scaling rules likely do not change. We will add a discussion of computational implications to the final version. Citations [1] Peter Dayan. Twenty-five lessons from computational neuromodulation. Neuron, 76(1):240–256, 2012. [2] Manuel Beiran, Nicolas Meirhaeghe, Hansem Sohn, Mehrdad Jazayeri, and Srdjan Ostojic. Parametric control of flexible timing through low-dimensional neural manifolds. Neuron, 111(5):739–753, 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. 
While most of my points have been addressed, I still have a few remaining concerns, namely: - the implementation of transfer learning is still opaque from my perspective. "we froze all recurrent and output weights and retrained only the weights which directly receive the input" is unclear; unless the input of the modulating network is fixed, but not the task-solving network, my impression is that the setup may lead to substantial overwriting/forgetting, and providing information on re-test of prior tasks is crucial in this regard. - Re: batch training. The differences in training time should be described transparently, esp. in the context of the ML impacts of this work. Overall, I am comfortable with my existing score. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We would like to address your remaining concerns: To retrain our networks on the new task, we retrained weights which we viewed as "processing" the input for each model. For the NM-RNN, we retrained the parameters of the neuromodulatory subnetwork (input, recurrent, and output weights) and froze the parameters of the larger output-generating subnetwork (input, low-rank recurrent component, and output weights). In essence, we learned a new neuromodulatory signal s(t) to use with existing low-rank recurrent weights in the output-generating subnetwork. We then sought to come up with a fair comparison to the other models (RNNs/LSTMs). Since these models do not have a separate neuromodulatory subnetwork, allowing them to retrain all of their weights seemed like a strong baseline. Instead, we retrained the input weight matrices for the low-rank/vanilla RNNs and LSTMs. In this case the models have to produce new task outputs using fixed recurrent weight matrices (like the NM-RNN) and produce different behaviors via the input weights. 
Our reason for studying this multitask setting was not to test potential for continual learning, but rather to see if the model can solve a new task when it's constrained to re-use prior dynamical motifs. It's true that after retraining there is likely to be overwriting/forgetting in the retrained weights, however we did not attempt to limit this. We think it's interesting that the NM-RNN can solve the new task using low-rank components learned for a prior set of tasks, just weighted differently via the new neuromodulatory signal s(t). Notably, using the previously learned s(t) signals would allow the NM-RNN to immediately switch back to solving old tasks. Although we haven't yet explored whether those signals are maintained or attempted to maintain them, this would be an interesting future direction. Re: batch training, thank you for the suggestion. To offer some more concrete details, for the rebuttal figures we trained parameter-matched NM-RNNs (N = 100, M = 20, R = 4) and LSTMs (N = 8) in the multitask setting using batching (1000 samples for each task, batch size 100). We first trained on the original 3 tasks for 100k iterations of gradient descent, then 50k iterations retraining on the new task. The NM-RNN took about 1.5 hours to train, and the LSTM took about 15 minutes. We believe this discrepancy is due to the large difference in internal state sizes. For instance, the NM-RNN’s neuromodulatory subnetwork alone is 20-dimensional while the highest dimension of any matrix in the LSTM is 8. Additionally, we want to emphasize that we did not focus on optimizing the NM-RNN for speed, but rather wanted to demonstrate its task performance compared to the LSTM. We will ensure these details are made more clear in the final version.
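The mechanism discussed throughout this thread (a fixed low-rank output-generating network whose rank-1 recurrent components are reweighted by a learned neuromodulatory signal s(t)) can be sketched in a few lines of numpy. This is our own simplified illustration using the sizes quoted in this comment (N = 100, M = 20, R = 4); the tanh/sigmoid nonlinearities, Euler discretization, and absence of task inputs are assumptions, not the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 100, 20, 4  # output-net size, NM-net size, rank (as in this comment)

# Output-generating network: rank-R recurrent factors (held fixed in the
# transfer-learning setup described above).
U = rng.normal(size=(N, R)) / np.sqrt(N)
V = rng.normal(size=(N, R)) / np.sqrt(N)
# Neuromodulatory subnetwork: its readout produces the R scaling signals s(t).
W_nm = rng.normal(size=(M, M)) / np.sqrt(M)
C = rng.normal(size=(R, M))

def step(h, z, dt=0.1):
    """One Euler step. h: output-net state (N,), z: NM-net state (M,).
    The NM readout s(t) rescales each low-rank component of the recurrent
    weights, W(t) = sum_k s_k(t) u_k v_k^T."""
    z = z + dt * (-z + np.tanh(W_nm @ z))
    s = 1.0 / (1.0 + np.exp(-C @ z))  # assumed sigmoid squashing of s(t)
    # U @ (s * (V.T @ h)) applies W(t) without materializing the N x N matrix.
    h = h + dt * (-h + np.tanh(U @ (s * (V.T @ h))))
    return h, z
```

Learning a new task by retraining only the neuromodulatory parameters (`W_nm`, `C`) while freezing `U` and `V` then amounts to finding a new s(t) trajectory over fixed dynamical motifs, and storing old s(t) signals would let the network switch back to previously learned tasks.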
Summary: The authors introduce and implement a novel biologically-inspired variant of standard recurrent neural networks (RNNs), which they evaluate on a number of tasks that require dynamics to generalize across task conditions (Measure-Wait-Go), switch between tasks/task contexts (four-task set from Duncker et al.), and to capture long-term dependencies (Element Finder Task). Their variant, the neuromodulated RNN (NM-RNN), derives from gated coupling between two networks, where the so-called neuromodulation network dynamically scales the recurrent weights of the output-generating network. The authors provide mathematical intuitions about NM-RNNs and how they relate to LSTMs, compare their performance on above-mentioned tasks, and analyse and visualize the networks' learned dynamics. Strengths: **Originality**: To my knowledge, the architecture is novel. **Quality**: The paper is very well written and analyses are thorough. The different tasks networks are evaluated on are well-motivated. **Clarity**: While the paper is well written, clarity could improve if the motivation was spelled out more clearly and if figures were self-contained (via better legends and labels). **Significance**: The work is an important contribution, but the real-world significance will depend on whether the suggested RNN architecture can be scaled, or used to generate insights about biology (depending on what the authors' main motivation is) Weaknesses: I find the main weakness of the paper to be a lack of clarity in motivation and terminology. I gather the main motivation of the paper is to "bridge the gap between [...] highly biologically-accurate models and general network models (i.e. RNNs) of neuronal activity by adding a biologically-motivated form of structured flexibility" (lines 64-65) - in order to better be able to study flexibility and generalization capabilities observed in biological neural networks. The latter is not very clear from the paper. 
The link to neuromodulation features prominently in the Introduction, but should be spelled out more clearly, especially how exactly and strongly it links to the suggested architecture (or it should be clearly stated that the inspiration is only of a loose nature). The suggested architecture effectively performs synaptic gain scaling, which is only one of many effects of neuromodulation. Streamlining terminology here might help to improve the clarity of the paper and to guide the reader. As an example, the authors mention dopamine as "a well-known example [...] implicated in motor deficits resulting from Parkinson's disease", but that statement needs to be better connected to the presented study and to why it motivates the study. Both Motivation and Discussion would benefit from an explicit positioning of the approach within the field of NeuroAI: how much does this work contribute to bringing artificial NNs closer to biological NNs, and how much does it help in using ANNs to study BNNs? In a similar vein, authors may want to discuss their work in relation to: - Auzina, I. A., Yıldız, Ç., Magliacane, S., Bethge, M., & Gavves, E. (2023). Modulated Neural ODEs. Proceedings of the 37th Conference on Neural Information Processing Systems. Retrieved from https://github.com/IlzeAmandaA/MoNODE. - Naumann, L. B., Keijser, J., & Sprekeler, H. (2022). Invariant neural subspaces maintained by feedback modulation. ELife, 11, 76096. https://doi.org/10.7554/eLife.76096 Technical Quality: 4 Clarity: 3 Questions for Authors: Conceptual: Can you clearly define what exactly is meant by **structured flexibility**? Is flexibility in standard RNNs unstructured? Specifically, the EFT analysis and visualization convincingly demonstrate structure in the sense that computations are divided between neuromodulatory and output-generating network; in the Memory/Delay/Anti/Pro tasks, this structure is imposed; but for the MWG task, the authors do not seem to analyse structure.
What have we learned about the computational implications of synaptic gain scaling? **EFT** How do the authors predict the performance of the model will change with increasing T? How did the inputs at test time differ from the inputs during training? Is the zeroing out of s(t) components a signature of overparameterization relative to task complexity? Technical: Fig. 2A: The corresponding text talks about different components k, but it is not clear what is shown in the figure (a single component across time, or different components? How do changes in s(t) come about?) Could this be a plug-in replacement for GRUs/LSTMs? If so, how would the model perform on standard sequence modeling tasks, such as sequence classification? The authors don't mention the vanishing gradient issue that RNNs severely suffer from. Did they not have this problem? Fig 3. E: please offer more interpretation for what is shown. Minor: ln. 104: "Likewise, artificial neural networks trained to solve tasks that mimic those found in neural experiments also often exhibit low-rank structure." Please provide a reference. Fig 4 A: This panel is not clear. What is shown? Please add labels to all traces and to the axes, and make clear how the subpanels’ axes relate to each other. The Memory/Anti task is not readily understandable from the description and visualization in the paper. Please explain this better to make the paper self-contained. Fig. 5C: what are the squiggly lines? Fig. 5E: not clear where trajectories begin and end; the black line is not visible. Fig. 5F: similarly hard to see/interpret. Eq. 4: Please specify what \sigma is. Fig. 3 references mixed up in ln. 208. ln. 267: "Curiously, the positive/negative relationship flips for θ ∈ (π, 2π), likely relating to the symmetric structure of sine and cosine." where can we see this? Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors adequately address the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thorough comments. Indeed, our motivation was to create a model somewhere between RNNs and biologically-accurate biophysical models. Specifically, we identified neuromodulation as a feature of biological networks that is not often modeled in RNNs. The goal of this paper is to study the impact of synaptic scaling, one specific effect of neuromodulation, on the performance and generalization of RNN models. We have discussed our motivation further in the General Rebuttal and will make this positioning more clear in the final version. Thank you for suggesting Auzina et al. and Naumann et al., we will add these papers to our related work section to highlight the use of modulatory signals in non-RNN models. To address your questions: Conceptual: By structured flexibility, we mean the introduction of additional parameters (i.e. flexibility) to a model, with these parameters having a clear motivation and interpretation (i.e. structure). In the case of our NM-RNN, the structured flexibility comes from adding a flexible neuromodulatory signal which impacts RNN dynamics in a constrained way (by scaling synaptic weights). In the EFT and multitask settings, we show that this allows the network to distribute different parts of the task across the subnetworks. In the MWG task, we show that ablating each component of the neuromodulatory signal leads to performance deficits (paper fig. 3F). In particular, ablating s_3(t) destroys the ability of the network to produce output ramps of different slopes, implying that the neuromodulatory subnetwork is involved in controlling the output interval length. In all three ablations, the general ramping shape of the output is preserved, implying that this behavior is stored in the dynamics of the output-generating network. Overall, we learned that synaptic gain scaling can improve generalization capabilities of low-rank RNNs and make them more like LSTMs. 
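To make the structured-flexibility idea described above concrete, here is a minimal numerical sketch of a neuromodulated low-rank update in the spirit of the NM-RNN: a small full-rank subnetwork produces a sigmoid signal s(t) that rescales the rank components of the output-generating network. This is not the paper's exact model; the dimensions, nonlinearities, Euler step size, and placement of s(t) on the rank components are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R, D = 20, 5, 3, 2   # output net size, NM net size, rank, input dim (placeholders)

# Low-rank recurrent weights of the output-generating network: W = U @ V.T
U = rng.normal(size=(N, R)) / np.sqrt(N)
V = rng.normal(size=(N, R)) / np.sqrt(N)
B = rng.normal(size=(N, D))               # input weights, output network
A = rng.normal(size=(M, M)) / np.sqrt(M)  # recurrent weights, NM subnetwork
C = rng.normal(size=(M, D))               # input weights, NM subnetwork
W_s = rng.normal(size=(R, M))             # readout producing the R-dim NM signal

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def step(x, z, u, dt=0.1):
    """One Euler step: s(t) in (0, 1) gates each of the R rank components."""
    z = z + dt * (-z + np.tanh(A @ z + C @ u))                 # NM subnetwork dynamics
    s = sigmoid(W_s @ z)                                       # R-dim gating signal
    x = x + dt * (-x + np.tanh(U @ (s * (V.T @ x)) + B @ u))   # modulated low-rank RNN
    return x, z, s

x, z = np.zeros(N), np.zeros(M)
x, z, s = step(x, z, np.ones(D))
```

The "structure" in this sketch is that the extra flexibility enters only through the R gate values per time step, rather than through arbitrary changes to the recurrent weights.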
More generally, we offer this model as an alternative to commonly-used low-rank RNNs, as a tool to model biological task completion with neuromodulation. EFT: Each of our models was first trained in a continual learning regime in which each step of gradient descent was performed using a randomly generated batch of input sequences. Then, at test time, test input sequences were again sampled i.i.d. from the distribution of all valid EFT sequences. (It should be noted that for main paper figs. 5C, D, and F, we randomly generated test inputs while holding the query index and/or target element value fixed.) As the sequence length T increases while keeping the NM-RNN model size, number of training batches, and training batch size fixed, we predict that model performance will degrade. The set of possible EFT input sequences grows exponentially in T, making it challenging for the model to solve the task within the same number of training iterations; indeed, this is what we observed in our preliminary simulations. However, we expect that model performance would improve if the number of training iterations and/or the model size is suitably increased. Technical: Fig. 2A: The corresponding text talks about different components k, but it is not clear what is shown in the figure (a single component across time, or different components? How do changes in s(t) come about?) This figure is meant as an illustrative example of how the neuromodulatory signal s(t) can impact dynamical timescales. In this example, s(t) is a 1-dimensional signal so k=1. The changes in s(t) are artificially imposed for illustrative purposes, to show how the decay rate of w(t) is modulated by the value of s(t). We will clarify this in the final version. Could this be a plug-in replacement for GRUs/LSTMs? If so, how would the model perform standard sequence modeling tasks, such as sequence classification? We are not arguing to replace GRUs/LSTMs in standard ML workflows. 
However, we show that they do share some capabilities, achieving LSTM-like performance on some tasks. The authors don't mention the vanishing gradient issue that RNNs severely suffer from. Did they not have this problem? This issue did not arise for the tasks that we trained on. To speculate, perhaps the NM-RNN does not encounter this issue as often due to its similarities to the LSTM. This could also be a potential reason why the RNNs are failing at the EFT. Fig 3E: please offer more interpretation for what is shown. In fig. 3E we show the 3-channel neuromodulatory signal s(t) during the MWG task. On the left, the signals are aligned to when networks receive the measure cue. On the right, the signals are aligned to when networks receive the wait and go cues. The left figure shows how s(t) responds to the measure cue and evolves throughout the task. The right panel shows how s(t) readies responses for the different interval lengths. In particular, we can see that between the wait and go cues, s_1(t) separates shorter intervals from longer ones, setting up initial dynamics for the go period when the ramp is generated. The third component s_3(t) seems to be involved with ending the output ramp, since it saturates first for the shorter intervals and then the longer ones. Minor: ln 104: we will add a citation to Schuessler et al. “The interplay between randomness and structure during learning in RNNs”, NeurIPS 2020. Fig. 4A: we have updated this figure in the PDF attached to the general rebuttal. Please let us know if additional adjustments would help to clarify. Fig. 5: The squiggly lines are the actual loss curves, while the bold lines show a moving average to indicate overall trends. We will add improved explanations and better-contrast lines to E and F. Eq. 4: \sigma is the sigmoid nonlinearity; we will clarify this in the final version. Fig. 3/ln. 208: thank you for catching this, we will amend it in the final version.
ln 267: we have added these plots to the general rebuttal PDF and will include them in the final paper fig. 4D. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed reply to my review and the improved figures. I'd like to ask the authors to include their clarifications in the final version of the paper as they deem helpful for future readers; especially the positioning within the field of NeuroAI is important. I support acceptance of the paper.
Summary: This work studies the effects of synaptic gain scaling, a neuromodulatory mechanism, on the performance of task-trained low-rank RNNs – which have been used to understand the dynamics and other finer details of neural computation. Specifically, it introduces a simple time-varying neuromodulatory mechanism implemented as a hypernetwork to modulate the weights of low-rank RNNs trained on multiple tasks. The authors draw connections between the proposed model and LSTMs by theoretically demonstrating that the neuromodulation provides for a form of gating. Finally, the authors show that the proposed model outperforms low-rank RNNs and is comparable to LSTMs in multi-task settings and tasks involving long-term dependencies. Strengths: 1. The writing is clear and the overall presentation of the paper is good – related work is adequately described, and the model formulation is clear and simple but effective. 2. The authors theoretically study the links between the proposed NM-RNNs and LSTMs and show that, under certain assumptions, NM-RNNs learn gating mechanisms similar to LSTMs, leading to their improved performance over LR-RNNs on certain tasks. To my knowledge, this theoretical analysis is novel and sound. 3. The authors perform several clearly motivated numerical experiments to analyze the computational implications of the proposed model. Particularly, they demonstrate that their model outperforms ordinary LR-RNNs in multi-task settings and in capturing long-term dependencies. Weaknesses: 1. The model is limited in its bio-realism, so it is unclear how well it would match the computations performed in the brain. Specifically, there are no testable predictions for this model of neuromodulation and no comparison with neural data (as acknowledged by the authors in the limitations), so this limits the significance of the work. 2. The model does not seem to perform as well as LSTMs in the long-term dependencies task (the loss attained by LSTMs is far lower).
Furthermore, the authors do not show comparisons with LSTMs and vanilla RNNs for the timing task and the multi-task setting. It would be important to show how the NM-RNN performs in comparison to at least the LSTMs as well in these settings. 3. On a related note to the previous point, the authors have not compared their model to existing implementations of neuromodulation-infused RNNs. Given the similarity of this work to Tsuda et al., 2021 [1] (which also seems to explain observations in neural data from Drosophila), is it possible to compare the NM-RNN to this model, and perhaps other approaches such as those introduced by Liu et al., 2022 [2]? 4. Most importantly, it seems like the scalability of this model is not well-studied and this is also related to the fact that the tasks considered are very simplistic. While these simple tasks allow for some interpretability, I think it would also be important to benchmark the NM-RNN on more complex tasks. For example, studying how it adapts under perturbation in a reaching/control task – this would be a more complex task and could also help understand the links between neuromodulation and learning. In general, moving beyond toy-ish tasks would strongly improve the experimental section of the paper. (References provided in Questions section.) Technical Quality: 3 Clarity: 3 Questions for Authors: See the Weaknesses section. Some additional questions: 1. Have the authors tried training the neuromodulatory subnetwork alone while treating the task subnetwork as a reservoir? This could potentially provide for a parameter-efficient multi-task learning framework (loosely related, see the recent paper by Williams et al., 2024 [3]). 2. Could the authors clarify what the three different NM-RNN curves are in Fig. 5C? If they are different seeds or configurations, this should be specified. 3. 
Have the authors tried using other activation functions in calculating $\mathbf{s}(\mathbf{z}(t))$, i.e., not restricting the neuromodulatory signal to values between 0 and 1 (and allowing negative values)? Could this lead to improved performance? **References:** 1. Tsuda et al. "Neuromodulators generate multiple context-relevant behaviors in a recurrent neural network by shifting activity hypertubes." bioRxiv (2021): 2021-05. 2. Liu et al. "Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators." Advances in neural information processing systems 35 (2022): 17528-17542. 3. Williams et al. "Expressivity of Neural Networks with Random Weights and Learned Biases." arXiv preprint arXiv:2407.00957 (2024). **Rebuttal update:** Confidence increased from 3 to 4. I am in favor of accepting the paper. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations have been adequately discussed in the paper (specifically, biological realism, comparison to neural data, and scalability). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your feedback. We have addressed some of your comments in the general rebuttal, in particular the second point under Weaknesses (regarding performance comparison to LSTMs and vanilla RNNs). We would also like to respond to the additional weaknesses noted. While we currently have no comparison with neural data, we believe this model can offer testable predictions of how diseases involving neuromodulator deficiencies and dysfunction impact movement and timing. As RNNs have been used to generate testable predictions of neural function, we believe our model offers a biologically meaningful extension by scaling synaptic weights as biological neuromodulation has been shown to do. We did not compare our work with the prior methods mentioned in the Related Work section because of a few key differences in motivation and implementation. The work of Tsuda et al., while similar, requires prespecification of (1) which neurons are neuromodulated and (2) the constant value that the neuromodulation takes. Our paradigm allows both of these parameters to be learned by the model. It was unclear how to set the aforementioned parameters in the Tsuda model in order to make a comparison. While Liu et al. also studied the impact of neuromodulation in RNN models, their work focused primarily on the impact of neuromodulation on training and not during task performance. Due to the difference in motivation, we did not compare to their model. We agree that training our model on more complex tasks will be an important step forward. Since vanilla RNNs are used to model more complex tasks and combinations of tasks [Driscoll], we believe our model should also be amenable to training in these situations. We appreciate the suggestion for studying reaching/control perturbation tasks and agree that training the NM-RNN in this task setup could help provide valuable insights into the relationship between neuromodulation and adaptation. 
To answer your questions: Have the authors tried training the neuromodulatory subnetwork alone while treating the task subnetwork as a reservoir? This could potentially provide for a parameter-efficient multi-task learning framework (loosely related, see the recent paper by Williams et al., 2024 [3]). We like this idea and think it would be an interesting extension for the multitask section of the paper. While we did not try random reservoir dynamics as in Williams et al., we did analyze the effect of retraining only the neuromodulatory signal while leaving the low-rank dynamical components fixed when learning a new task (paper section 5). These results could be viewed as a proof-of-concept that the NM-RNN can effectively reuse existing dynamics when learning new tasks. Could the authors clarify what the three different NM-RNN curves are in Fig. 5C? If they are different seeds or configurations, this should be specified. We apologize for the confusion—these are different hyperparameter settings for the NM-RNN, parameter-matched to the N=10 LSTM. We will update the caption of this figure in the final version to indicate the exact hyperparameters used in each curve. For reference, the settings are (M=5, N=18, R=8), (M=5, N=13, R=12), and (M=10, N=12, R=7). Our overall goal in this figure was to show that LSTMs and NM-RNNs are able to solve the task, while vanilla RNNs cannot train on it. Have the authors tried using other activation functions in calculating s(z(t)), i.e., not restricting the neuromodulatory signal to values between 0 and 1 (and allowing negative values)? Could this lead to improved performance? While this may lead to improved performance, we believe it would negatively impact the biological plausibility of the modulatory signal. We capped the neuromodulatory signal at 1 to model a saturating effect of neuromodulators on downstream synaptic strength—the assumption being that synaptic strength cannot increase indefinitely.
Similarly, we did not allow negative saturation, instead setting the floor value of the neuromodulator to 0 to indicate complete silencing of synaptic connections and preserve the sign of the connection. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal and appreciate the additional simulations with LSTMs. The authors' reasoning behind not comparing their method with Tsuda et al. and Liu et al. sounds reasonable to me. I am mostly satisfied with the other responses, and while I'd have liked to see more complex tasks/multi-task experiments, I completely understand not being able to do so due to the limited time available, and this is not a reason for me to reject the paper. Overall, I quite like the straightforwardness of this paper, the theoretical results connecting the proposed mechanism to LSTMs, and the general presentation, figures and interpretable experiments. I maintain my positive opinion and score, and I think this will be a good contribution to computational neuroscience and the audience at NeurIPS.
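The saturation argument in the rebuttal above (a sigmoid gate in (0, 1) can silence or weaken a synapse, but never flip its sign or amplify it) is easy to check numerically. This is a toy illustration, not the paper's code:

```python
import numpy as np

def modulate(w, a):
    # sigmoid gate, strictly inside (0, 1), scaling synaptic strength
    s = 1.0 / (1.0 + np.exp(-a))
    return s * w

w = np.array([1.5, -0.7, 0.0])   # synaptic weights of mixed sign
for a in (-10.0, 0.0, 10.0):     # weak, intermediate, strong neuromodulation
    wm = modulate(w, a)
    assert np.all(np.sign(wm) == np.sign(w))  # sign of every connection preserved
    assert np.all(np.abs(wm) <= np.abs(w))    # magnitude never grows (saturation at 1)
```

At the gate's floor the connection is effectively silenced (scaled toward 0), and at its ceiling the original strength is recovered, matching the biological motivation described above.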
Summary: This work mimics the synaptic plasticity observed in the brain, which is driven by neuromodulators, to develop neuro-inspired artificial neural networks. It proposes the neuromodulated NM-RNN, which has a neuromodulatory subnetwork that produces a low-dimensional output that scales the synaptic weights of a low-rank RNN. It has connections to LSTMs through similar gating mechanisms; the dynamics of the NM-RNN could even be reproduced with an LSTM. The model is better at capturing long-term dependencies. The work also demonstrates how this framework is applicable to multitask learning. Strengths: **Methods** This work focused on an important question: incorporating the nonstationarity observed in biological networks into artificial networks. It implemented a potential mechanism by which synaptic plasticity is controlled by neuromodulators. The subnetwork that outputs modulatory signals for scaling the low-rank RNN's weights is a novel design. **Evaluation** The NM-RNN has been benchmarked against multiple classic artificial neural networks (RNN, LSTM) on multiple tasks, including measure-wait-go, multitask learning, and the element finder task, to demonstrate its capability of capturing long-term dependencies. Weaknesses: **Method** 1. The design choice of rank-1 in the modulatory subnetwork should be described, and more hyperparameters should be explored. **Baselines** 1. The proposed NM-RNN model has multiple connections with the LSTM, so its novelty is limited. Meanwhile, its performance is not comparable to the LSTM, as shown in Fig. 5B. 2. The baselines are limited; e.g., comparisons with transformers, which also have input-dependent attention weights that model nonstationarity and are relevant to neural plasticity. **Evaluation** 1. The LSTM is not evaluated on the measure-wait-go and multitask learning tasks. Adding more tasks would demonstrate the effectiveness of the NM-RNN at capturing long-term dependencies. Technical Quality: 2 Clarity: 2 Questions for Authors: 1.
Why choose rank-1 for the modulatory subnetwork? 2. Why are the training dynamics (MSE loss) for all models noisy and fluctuating? Would optimization techniques (regularization, normalization, a scheduler) help with the training dynamics, and how is convergence of the model guaranteed? 3. Are there any computational efficiency gains that the NM-RNN might bring? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: No potential negative societal impact. The limited scale of the network has been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your feedback. We have addressed some of your comments from the Weaknesses section in the general rebuttal (in particular, concerns about Evaluation and Baselines). We would also like to clarify why we did not initially provide an LSTM baseline for the Measure-Wait-Go and multitask settings. The goal of this project is to offer a new model relevant to the field of computational neuroscience. Low-rank RNNs are commonly used tools in computational neuroscience due to their low-dimensional internal dynamics. We augmented the low-rank RNN with an interpretable modulatory parameter, and showed the comparative effectiveness of low-rank RNNs and NM-RNNs on tasks related to neuroscience. We also showed that NM-RNNs help to recover some of the performance gains of LSTMs. However, LSTMs are not commonly used as “brain-like” models when studying neural population dynamics. One of our contributions was to show the similarities between our more “brain-like” NM-RNN and the popular LSTM. Like LSTMs, transformers are not typically used to model neural population dynamics, so we did not include comparisons to these models. To address your questions: Why choose rank-1 for modulatory subnetwork? To clarify, the modulatory subnetwork is full-rank and the output-generating subnetwork is low-rank. In our study of the Measure-Wait-Go task we chose to present results with rank-1 and rank-3 NM-RNNs. We analyzed rank-1 networks due to their theoretical tractability, and rank-3 networks in order to compare to existing work which used rank-3 low-rank RNNs on this task [1]. For the multitask setting, we chose rank-3 NM-RNNs by sweeping over ranks and determining the rank required to consistently train on the first three tasks. For the element-finder setting we tried many combinations of rank-size parameters to compare to the LSTM. 
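One practical consequence of the low-rank parameterization discussed above: with the recurrent matrix factored as W = U V^T, a recurrent update needs only O(N*R) multiplies instead of O(N^2), which is why low-rank RNNs (and hence the output-generating subnetwork of the NM-RNN) are cheap to step. A quick numerical check, with placeholder dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, R = 100, 3                     # network size and rank (placeholders)
U = rng.normal(size=(N, R))
V = rng.normal(size=(N, R))
x = rng.normal(size=N)            # current hidden state

dense    = (U @ V.T) @ x          # O(N^2): materializes the full N x N matrix
factored = U @ (V.T @ x)          # O(N*R): two thin matrix-vector products

assert np.allclose(dense, factored)
```

The two expressions are mathematically identical (matrix multiplication is associative); only the evaluation order, and hence the cost, differs.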
Why are the training dynamics (MSE loss) for all models noisy and fluctuating? Would optimization techniques (regularization, normalization, a scheduler) help with the training dynamics, and how is convergence of the model guaranteed? We agree that the training dynamics for the Element-Finder task (EFT) are quite noisy, and would likely become smoother with the use of regularization techniques. However, our main point with this figure was to show general trends, i.e. vanilla RNNs struggle to solve EFT while some LSTMs and NM-RNNs are able to learn it consistently. In addition, we did not show the training dynamics for any of the other tasks, but can share empirically that they were much smoother than in the EFT, suggesting that the noisy learning may also be due to the relative complexity of this task. Are there any computational efficiency gains that the NM-RNN might bring? All models are run sequentially, so while there is a slight difference in computational efficiency between each individual step, the scaling laws likely don’t change. In general, you raise an interesting point that the matrix multiplication cost for running a step of a low-rank RNN is smaller than that of a vanilla RNN. So, low-rank RNNs and NM-RNNs will be faster to evaluate compared to a vanilla RNN with a similar neuron count. We will add a discussion of computational implications to the final version. Citations [1] Manuel Beiran, Nicolas Meirhaeghe, Hansem Sohn, Mehrdad Jazayeri, and Srdjan Ostojic. Parametric control of flexible timing through low-dimensional neural manifolds. Neuron, 111(5):739–753, 2023. --- Rebuttal 2: Comment: Thanks for the authors' response. I am not convinced that LSTMs and transformers are not "brain-like" while RNNs are; I view them all more as predictive models rather than mechanistic models (e.g. the Hodgkin-Huxley model). And a lot of recent works have started to use transformers to model neural population dynamics [1][2][3], and they outperform RNNs.
I think adding more comparisons with baselines, and improving the optimization will be helpful. I decided to maintain my original score. [1] Representation learning for neural population activity with Neural Data Transformers. [2] STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer. [3] Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data. --- Rebuttal 3: Comment: I just wanted to step in and provide my perspective here as another reviewer, because I believe that the papers this reviewer mentions are orthogonal to what I believe is the goal of this paper. The goal here is to propose a neuromodulatory mechanism inspired by synaptic gain scaling, as a circuit-level mechanistic model of how modulatory signals could reconfigure the dynamics of a recurrent network to perform different tasks. I view it as a mechanistic model rather than a predictive one, i.e., it describes how a mechanism that is computationally similar to what is observed in the brain enables multi-task flexibility, rather than serving as a better predictive model or advancing performance from a deep learning perspective. In short, it is trying to computationally analyze the effect of synaptic gain scaling on multi-task performance. Several works have used recurrent neural networks to describe how the brain could be performing certain task-related computations, including influential work by Yang et al. [1] and Driscoll et al. [2]. While I'm not arguing here that somehow vanilla RNNs are more bio-plausible or better brain models than LSTMs, I believe there are different levels of modeling and biological realism in the computational neuroscience literature. Circuit-level computational models such as this work eschew synapse-level biological plausibility and aim to study computations through the lens of dynamics [3], task performance, or other hallmarks of neural computation like flexibility and generalization. 
I would thus argue that RNNs are good and simple/minimal models to use here – we are aware of recurrent circuits in various brain regions [4,5,6,7], but there is no similar support for transformer-like architectures or self-attention, to my knowledge (although, on the other hand, these models have recently been used at the cognitive level of modeling [8,9]). And given that the kind of tasks being performed here are related to working memory, which relies on the prefrontal cortex [10], which is further known to contain several recurrent microcircuits [6,7], I'd argue that RNNs are better computational models to use here than transformers if the goal is to model neural computation (which I strongly believe to be the case here). Finally, and perhaps most importantly, the papers that the reviewer refers to here are not trying to model how the brain could compute, or how mechanisms in the brain could enable multi-task flexibility. Works such as [11,12,13], using transformers to model neural population activity, are harnessing advances in deep learning to serve as better representation learning methods or predictive models of behavior from neural activity, e.g. towards the goal of building better decoders for brain-computer interfaces – they are not meant to serve as models of how the brain could compute. Overall, I just wanted to provide my views here and I hope to engage in further discussion with this reviewer and/or the authors on this. **References:** 1. Yang, Guangyu Robert et al. “Task representations in neural networks trained to perform many cognitive tasks.” Nature neuroscience vol. 22,2 (2019): 297-306. 2. Driscoll, Laura N et al. “Flexible multitask computation in recurrent networks utilizes shared dynamical motifs.” Nature neuroscience vol. 27,7 (2024): 1349-1363. 3. Driscoll, Laura N et al. “Computation through Cortical Dynamics.” Neuron vol. 98,5 (2018): 873-875. 4. Douglas, Rodney J, and Kevan A C Martin.
“Recurrent neuronal circuits in the neocortex.” Current biology : CB vol. 17,13 (2007): R496-500. 5. Wang, Xiao-Jing. “Decision making in recurrent neuronal circuits.” Neuron vol. 60,2 (2008): 215-34. 6. Mante, Valerio et al. “Context-dependent computation by recurrent dynamics in prefrontal cortex.” Nature vol. 503,7474 (2013): 78-84. 7. Fuster, J M. “Memory networks in the prefrontal cortex.” Progress in brain research vol. 122 (2000): 309-16. 8. Didolkar, Aniket, et al. "Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problem Solving." arXiv preprint arXiv:2405.12205 (2024). 9. Webb, Taylor, et al. "A Prefrontal Cortex-inspired Architecture for Planning in Large Language Models." arXiv preprint arXiv:2310.00194 (2023). 10. Funahashi, Shintaro. “Working Memory in the Prefrontal Cortex.” Brain sciences vol. 7,5 49 (2017). 11. Ye, Joel, and Chethan Pandarinath. "Representation learning for neural population activity with Neural Data Transformers." arXiv preprint arXiv:2108.01210 (2021). 12. Le, Trung, and Eli Shlizerman. "Stndt: Modeling neural population activity with spatiotemporal transformers." Advances in Neural Information Processing Systems 35 (2022): 17926-17939. 13. Antoniades, Antonis, et al. "Neuroformer: Multimodal and multitask generative pretraining for brain data." arXiv preprint arXiv:2311.00136 (2023).
Rebuttal 1: Rebuttal: We would first like to thank all of the reviewers for their insightful and thorough comments on our paper. We have carefully considered your feedback and appreciate your help in both strengthening our current paper’s claims and providing future directions for us to consider. In this general rebuttal, we would like to provide answers to some common concerns as well as highlight the rebuttal figures shown in the attached PDF. First, a request shared among multiple reviewers was to compare the performance of our NM-RNN to LSTMs and vanilla RNNs in the Measure-Wait-Go (MWG) task and multitask setting. The comparisons among all four models (parameter-matched low-rank RNNs (R=3, N=106), NM-RNNs (R=3, M=5, N=100), vanilla RNNs (N=31), and LSTMs (N=15)) on the MWG task are shown in fig. A&B. In fig. A we’ve plotted the loss for 10 instances of each model type, both on linear and log scales. On trained intervals, the LSTM achieves the lowest median L2 loss due to its ability to accurately shape the output ramps (fig. B). However, on the extrapolated intervals the NM-RNN achieves the lowest median loss out of all four model types. Looking at the example outputs in fig. B, we see that this is because the NM-RNN better generalizes the slope of the output ramp to extrapolated intervals, compared to the LSTM. Overall, we see that our NM-RNN enjoys enough complexity to both train and generalize well on the task, without suffering from generalization performance losses like the LSTM. We have also trained all four model types (parameter-matched low-rank RNNs (R=3, N=100), NM-RNNs (R=3, M=20, N=100), vanilla RNNs (N=18), and LSTMs (N=8)) in the multitask setting, on both the three initial and one retrained tasks. For retraining the LSTM and vanilla RNN, we froze all recurrent and output weights and retrained only the weights which directly receive the input. The percent correct metric for 10 instances of each model is shown in fig. D. 
On the trained tasks, the NM-RNNs, vanilla RNNs, and LSTMs are able to train to high accuracy. On the retrained task, these three models still show similar high performance. While there is an outlier NM-RNN (also shown in the paper), the majority of the NM-RNNs achieve similar retraining performance to both LSTMs and vanilla RNNs. We would also like to use this general rebuttal to clarify our goals. In particular, our motivation was not to create a model that would outperform the LSTM on most tasks; rather, we aimed to create a version of an RNN which includes a biologically motivated implementation of synaptic gain scaling. One of the many known effects of neuromodulation on neural computation is synaptic scaling. This work aims to highlight how traditional low-rank RNN models of task completion may be overlooking synaptic scaling, a relevant biological factor that we show aids in performance and generalization. In developing this model we discovered similarities between our low-rank modulation and gating in LSTMs. This prompted us to further explore the relationship between the NM-RNN and LSTM. However, LSTMs are not commonly used as “brain-like” models in computational neuroscience, so we did not initially compare their performance on the neuroscience-relevant tasks. We believe this model offers a unique framework by which to study and hypothesize the impact of neuromodulation on task performance and generalization in biological networks. For example, we could use NM-RNNs trained on timing tasks to predict the effect of decreased dopamine. Rebuttal figs. C and E are both in response to Reviewer znSr. Fig. C is an updated version of paper fig. 4A. We aimed to clarify the distinctions between the four tasks in the multitask setting by adding highlights to the figure indicating separate task labels, inputs, and outputs, as well as verbal description of task goals and boxes around the separate tasks. Fig. 
E shows the flip in sign of PCs 4&5 of our example multitask NM-RNN. The network distinguishes between Pro/Anti versions of the same task by flipping the sign of PC5 as the measured angle crosses $\pi$. Pdf: /pdf/21eaa70fbe006b1daca13df9f958a3d90686db0b.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Model-free Low-Rank Reinforcement Learning via Leveraged Entry-wise Matrix Estimation
Accept (poster)
Summary: This paper proposes a leveraged matrix estimation method for low-rank policy evaluation and extends it to policy iteration as a model-free learning algorithm. The main idea is to separate the Q-matrix estimation into two phases. The first step is to use half of the sample budget to estimate the leverage scores of the Q matrix. Based on the leverage scores, a skeleton of the value matrix is sampled. Then roughly one-fourth of the sample budget is used to sample the entries in the skeleton. The remaining sample budget is used to sample the entries outside the skeleton, and the final Q-value matrix is constructed by CUR decomposition with inverse leverage-score weighting. It is proved that the leveraged matrix estimation method eliminates the commonly used incoherence assumption and guarantees an entry-wise estimation error. The effectiveness of the proposed method is demonstrated via simulation in the Appendix. Strengths: The paper is well-written and easy to follow. The theoretical investigation comes with rigorous proof. By adopting leveraged matrix estimation, the authors can relax the incoherence assumption on the low-rank Q matrix, which serves as a major advantage of the proposed method. Weaknesses: There seems to be a lack of discussion of leveraged matrix estimation methods in the matrix completion/low-rank matrix estimation literature. How does the proposed method perform when compared with other regularized policy evaluation approaches (for example, nuclear-norm/max-norm regularization)? This is crucial for assessing the significance and novelty of this work. Is it possible to include experiments on real-world datasets to enhance the significance of this work? Technical Quality: 3 Clarity: 3 Questions for Authors: Is the low-rank assumption on the Q function under any deterministic policy very restrictive? How does that compare to related works?
It would be better to replace the shaded regions in the figures (experiments) with error bars, as the shaded regions overlap each other and are hard to read. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper has a more theoretical flavor and lacks real-world experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
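The two-phase procedure described in the summary above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: entries are observed exactly rather than through a noisy shared sample budget, which isolates the leverage-score/CUR mechanics.

```python
import numpy as np

def leverage_scores(M, d):
    """Row/column leverage scores: squared row norms of the top-d
    left/right singular-vector matrices of M."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :d] ** 2).sum(axis=1), (Vt[:d, :] ** 2).sum(axis=0)

def cur_estimate(M, d, k):
    """CUR-like completion: keep the k rows/columns with the largest
    leverage scores as the skeleton, then reconstruct."""
    row_ls, col_ls = leverage_scores(M, d)
    I = np.argsort(row_ls)[-k:]            # most informative rows
    J = np.argsort(col_ls)[-k:]            # most informative columns
    C, R, W = M[:, J], M[I, :], M[np.ix_(I, J)]
    return C @ np.linalg.pinv(W) @ R       # exact when rank(W) = rank(M) = d

rng = np.random.default_rng(0)
d, S, A = 2, 40, 30
Q = rng.standard_normal((S, d)) @ rng.standard_normal((d, A))  # rank-d "Q-matrix"
Q_hat = cur_estimate(Q, d, k=3 * d)
print(np.abs(Q - Q_hat).max())  # tiny in this noiseless, exactly low-rank case
```

In the paper's setting the entries are only observed through noisy rollouts and the two phases split a sample budget; here the recovery is exact because the skeleton block `W` has the full rank of `Q`.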
Rebuttal 1: Rebuttal: Thank you for your review! Please find our responses below. **A. Comparison with other matrix estimation methods.** As mentioned in Section B.1 of our general rebuttal, our work is based on recent progress in the analysis of matrix completion methods providing entrywise guarantees [1,8,9]. It remains an open question how more global convergence guarantees (in Frobenius or spectral norm) can be utilized effectively for RL, so we compare our work only with methods providing entrywise guarantees. As you suggested, an alternative to our SVD-based approach would be to use nuclear norm minimization. The authors of [34] leveraged the guarantees for nuclear norm minimization from [9] for learning the $Q$-function. However, there are a few reasons why applying nuclear norm minimization is theoretically not straightforward in our setting: - **Approximate low-rank structure.** Since our algorithm is policy iteration-based, our estimates $Q_{\tau}^\pi$ are only approximately low-rank. The result of [9] is based on non-convex optimization, and it is unclear how this approximation error influences the final guarantee of the algorithm. In contrast, a simpler analysis using SVD allows us to explicitly express our bounds in terms of the approximation error. - **Coherence-free subspace recovery.** In our subspace recovery result (Theorem 4), we can bound the subspace error $\Vert U_{i,:} - \widehat{U}\_{i,:} O\_{\widehat{U}}\Vert\_{2}$ in terms of $\Vert U_{i,:}\Vert_2$. It is not clear whether the same can be achieved using current guarantees for nuclear norm minimization, which might instead show dependence on $\max_{i\in [S]} \Vert U_{i,:}\Vert_2$. We believe this would lead to additional incoherence constants appearing in the sample complexity of our algorithm. We refer the reviewer to Section C of our general rebuttal for a more detailed comparison with other methods employing matrix completion for RL. **B.
Real-world experiments.** We refer the reviewer to part B.1 of our response to **Reviewer WY9t** for a discussion on applying our algorithm to gym environments. Moreover, our paper serves more as a proof of concept to show that we can use active sampling to decrease the sample complexity of the problem. We believe that adapting our analysis for more practical, real-world settings by combining our sampling schemes with deep learning can lead to interesting results, but this is beyond the scope of this paper. **C. Our low rank assumption ($Q^\pi$ low rank for every deterministic policy $\pi$) is less restrictive than previously used assumptions.** We refer the reviewer to part B.3 of our general rebuttal for a discussion about our assumptions. --- Rebuttal 2: Comment: Dear reviewer, Thanks again for your review. We have answered your concerns and would be grateful to hear your thoughts. If you need further clarification or want to ask more questions, we would be happy to respond. Best regards. --- Rebuttal Comment 2.1: Comment: Dear reviewer, We thank you once again for your efforts. As the rebuttal period is nearing its end, we would appreciate if you could let us know whether our rebuttal addresses your concerns. We would be also happy to answer any further questions you may have while we still can. Thank you.
Summary: This paper considers the reinforcement learning problem with low-rank latent structure. The objective is to learn an $\epsilon$-optimal policy in a tabular setting. For this problem, they devised LoRa-PI (Low-Rank Policy Iteration), a model-free learning algorithm alternating between policy improvement and policy evaluation steps. Their main innovation lies in the latter part, where the algorithm estimates the low-rank matrix corresponding to the (state, action) value function of the current policy using the following two-phase procedure. The entries of the matrix are first sampled uniformly at random to estimate, via a spectral method, the leverage scores of its rows and columns. These scores are then used to extract a few important rows and columns whose entries are further sampled. The algorithm exploits these new samples to complete the matrix estimation using a CUR-like method. For this leveraged matrix estimation procedure, they used spikiness instead of coherence (a two-to-infinity norm assumption). Strengths: 1) According to the authors, their matrix estimation method is the first able to yield entry-wise guarantees that do not depend on the matrix coherence but on its spikiness instead. 2) The authors used the tabular setting, where entrywise estimation is important, and applied an estimation method that fits well with entry-wise estimation. As they mentioned in Lemma 1, the singular value difference could be very large, but the tabular setting lets the authors focus more on entrywise estimation. I believe this is a nice strategy for writing. 3) The authors clearly state why policy iteration is much more useful than value iteration to alleviate condition number assumptions. Weaknesses: 1) Readers might think this is another application of the low-rank estimation literature. Of course, this type of 'appropriately adapting results from other fields' study is also important in research, but I'd say this is not ground-breaking.
2) (Important) I am not sure how meaningful the transition from 'coherence' to 'spikiness' is. Maybe it is directly related to Question 1. If I understand correctly, this is the main reason for using CUR-based matrix completion instead of other methods, but I believe the authors should provide a more convincing reason (such as theorems?) to motivate the study of CUR-based matrix completion. 3) (Important) It is not clear how much improvement they made. It would be great if there were a table for low-rank RL that concisely compares the performance/assumptions of previous works. Could you add a table of references that includes all low-rank and many recent tabular RL results, to show how much improvement you made? 4) (Important) Dependence of the threshold parameter $\beta$ on the parameter $d$ or $\sigma_d(Q^\pi)$. See the questions section for details. Also, this $\beta$ seems very important, but it is written in the appendix. It feels like the authors are trying to deceive readers. Technical Quality: 2 Clarity: 2 Questions for Authors: - (Important) Is spikiness always better than coherence? The authors state that in a specific case, spikiness could be finite but coherence could be very large. Is there a case where coherence is finite but spikiness blows up? To use the phrase 'less restrictive' I think this proof is also necessary. - (Important) Table (mentioned in the weakness section) - (Important) Could you explain how your threshold on the singular values can be free from the rank $d$? In many sparse linear or low-rank estimations, the rank works as a parameter for the threshold. To remove the rank condition from the threshold, one must assume a minimum signal condition, such as $\sigma_{min}>\epsilon$ for some $\epsilon>0$. It seems that in your paper this was possible since you assume a 'minimum signal condition' in Theorem 2, $\epsilon < \sqrt{\frac{S+A}{SAd} \sigma_d (Q^\pi)}$. You don't know $d$ or $\sigma_d$, so how can you be sure about this?
One main selling point of this paper is that the learner doesn't need to know $d$ or minimum eigenvalue conditions before the game starts. (line 224-225) - Is there any great idea to extend your estimation idea to the model-based method? - This idea works well in the tabular setting. What about the non-tabular settings, such as linear RL? Let me know the difficulty of extending your result to linear RL. *** Minor points 1) Maybe using $\sigma_{min}$ instead of $\sigma_d$? Researchers frequently use $\sigma_d (Q^\pi)$ as $d$-th largest eigenvalue and $d=\max_\pi rank(Q^\pi)$. This means in the traditional notation $\sigma_d(Q^\pi)=0$. Though they've mentioned it in their related works, it would be better to change it as $\sigma_{min}$ or define $\sigma_d$ somewhere near the Notation paragraph. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful and insightful review! Please find our responses below. **A. Our low-rank matrix estimation method is novel.** Thank you for raising concerns about the novelty of our matrix estimation method. We believe this is an important point to clarify, so we have included a discussion on the novelty of our matrix completion method in Section A of the general rebuttal. **B. Spikiness is always better than coherence.** First, note that we not only substitute dependence in incoherence $\mu$ for spikiness $\alpha$ in our sample complexities but also remove the term $\mu^2$ completely (please see the table in the general rebuttal and the discussion preceding it). Moreover, the spikiness constant appears as a proxy for $\frac{\Vert Q\Vert_{\max}}{\sigma_d(Q)}$, which essentially quantifies the difficulty of recovering low-rank structure from single observations. This makes it a more intuitive parameter than incoherence, which depends on the Euclidean norm of the rows of the singular vector matrices. Assuming bounded spikiness is less restrictive than bounded incoherence because it always holds that $$ \alpha(Q) = \sqrt{SA}\frac{\Vert Q\Vert_{\max}}{\Vert Q\Vert_\textup{F}} \leq \sqrt{SA} \frac{\Vert U\Vert_{2\to\infty} \Vert Q\Vert_{\textup{op}} \Vert W \Vert_{2\to\infty}}{\Vert Q\Vert_\textup{F}} \leq \mu(Q)^2 d. $$ So bounded incoherence always implies bounded spikiness. Furthermore, note that the CUR method is not crucial for our improvement in the scaling with the incoherence constant. It is active sampling (choosing which rows and columns to sample more) that brings this improvement. However, it is unclear how to leverage active sampling for other methods besides CUR, which is why we employ it in our algorithm. **C. Comparison with literature on low-rank $Q$-learning.** We are thankful again for raising this question. We include a tabular comparison with past works in Section C of our global rebuttal. **D. 
About the singular value threshold $\beta$.** We reassure the reviewer that we included the definition of $\beta$ in Appendix B.4 (see eq. (10) line 639) only due to space constraints in the main paper. It is clearly stated in Proposition 1 where the definition of the threshold $\beta$ can be found. For the sake of clarity, we will introduce $\beta$ in the main text in the updated version of our paper. Next, we discuss why $\beta$ does not depend on $d$ or $\sigma_d$ explicitly. Note that, by definition, the threshold $\beta$ depends on the number of samples $T$ and moreover, it decays with $\sqrt{T}$. Thus, under assumptions on sample complexity (see (4) in Proposition 1), we ensure that $\beta$ is smaller than $\sigma_d(Q)$. Therefore, all relevant singular values (of which there are $d$) are preserved, and the rest (corresponding to the noise and having a norm smaller than $\beta$) are removed. Please check Appendix B.4 for a more detailed proof. It is also important to note that we do not require knowledge of $\sigma_d(Q)$ for our algorithm to work, but our guarantees are valid for a number of samples $T$ that is sufficiently large depending on $\sigma_d(Q)$. **F. Extension to model-based RL.** We refer the reviewer to [37] for entrywise guarantees for model-based RL. In that work, the recovery depends on the properties of the transition kernels and the reward matrix, and not directly on the properties of $Q$-matrix. The authors of [37] do not use a complex iterative scheme with two-phases as we have done in this paper, since in their setting, the matrices $P$ and $r$ they observe are those that should be recovered. Our setting is more challenging because we do not have direct observations of the $Q$-matrix, we employ active sampling, and we achieve coherence-free guarantees. **G. Extension to linear RL.** Although they are not directly comparable, we believe that linear RL requires a stronger assumption than the one we use in the paper. 
Specifically, in linear RL, the $Q$-function can be written as $Q(s,a) = \phi(s,a)^\top \theta$ with $\theta\in\mathbb{R}^d$ and a known nonlinear mapping $\phi \in \Phi$. In our setting, $Q(s,a) = U_{s,:} \Sigma W_{a,:}^\top = \mathrm{vec}(U^\top e_s e_a^\top W)^\top \mathrm{vec}(\Sigma)$, and we do not assume knowledge of $U$ or $W$. Therefore, we believe our work is better aligned with low-rank MDPs where neither $\theta$ nor $\phi$ are known. Note that our setting is a special case of low-rank MDPs where the functions $\phi$ have a specific structure (depending on the singular vectors $U,W$). Solving low-rank MDPs in full generality with computationally tractable algorithms remains an interesting open research question. --- Rebuttal 2: Comment: Dear reviewer, Thanks again for your review. We have answered your concerns and would be grateful to hear your thoughts. If you need further clarification or want to ask more questions, we would be happy to respond. Best regards. --- Rebuttal Comment 2.1: Comment: Thank you for your detailed rebuttal. 1) It seems the table in the common rebuttal shows clear improvement compared to the previous algorithms. It looks great that this work is free from the anchor assumption, and shows the best performance. 2) About the $\sigma_d$ dependence, I understand the authors' point, but I can't shake off a certain unease. In short, it seems that $\beta$ nominally depends only on $T$, but $T$ comes with the condition that it must be "large enough to satisfy all the conveniences." Of course, this is a common practice in RL or bandit papers, but including $\sigma_d$ makes me question whether it is really correct to say that we don't know the minimum signal condition. The extent to which $T$ can be assumed is subjective, but if "free from $d$ and $\kappa$ (and eventually $\sigma_d$)" is a selling point, I personally believe that we should be able to know in advance how large $T$ needs to be before running the algorithm.
Time is tight, but if possible, it would be helpful for my judgment if you could point out the cases where the $T$ conditions in the papers included in the rebuttal Table involve assumptions about variables that the agent cannot know. 2-1) Also, what happens when $\sigma_d$ is too small and you cannot satisfy the condition on $T$? My intuition is that a small singular value cannot affect the result much, but in your analysis, it feels like it will affect the result a lot. 3) It's also good to know that bounded coherence always implies bounded spikiness, but according to your calculation, $\alpha^2 < \mu^4 d^2$ and your sample complexity eventually depends on $\alpha^2$ according to Theorems 2 and 3, so in some terrible case, one might say your result depends on $\mu^4$, right? Instead of selling $\mu$-freeness, I'd rather say this paper rests on a different assumption than $\mu$-based studies. Maybe they assumed a setting of moderate $\mu$, while you are assuming the case of moderate $\alpha$. 3-1) It would also be great if the authors could provide a table that 'includes' $d$. I know $d \ll S, A$, but when it is multiplicative I think one should add all the parameters. Again, thanks a lot for your rebuttal. --- Reply to Comment 2.1.1: Comment: Dear Reviewer, We thank you again for your comments and feedback. We hope that our answers address your concerns and would be happy to follow up before the end of the rebuttal phase. Many thanks. --- Rebuttal 3: Comment: **1.** Thank you for recognizing the improvements made by our algorithm. Indeed, one key takeaway is that our adaptive, learning-based approach offers stronger guarantees than uniform random sampling and can even match the performance of settings with prior knowledge of the problem structure. **2.
Regarding the condition on $T$ and its dependence on $\sigma_d$ .** First of all, we wish to clarify that by saying we do not know $\sigma_d$, we mean that our algorithm does not require as input $\sigma_d$ or any lower bound on it. For example, the number of samples per epoch in the algorithms in [34] and [35] depends on $\sigma_d$, while our algorithm does not. This allows our approach to work without prior knowledge of the parameters of $Q$-matrices which previous algorithms require. This is especially challenging in iterative settings like ours, where prior knowledge would need to include the parameters of all $Q^\pi$ matrices encountered until convergence. Next, both papers ([34] and [35]) are based on the CUR approach, and we will explain why this approach requires assumptions on the number of samples $T$. In the proof of our Theorem 5 we need to make use of the following inequality: $$ \Vert \widetilde{Q}^\pi_{\tau}(\mathcal{I},\mathcal{J})^\dagger \Vert_{\mathrm{op}} = \frac{1}{\sigma_d(\widetilde{Q}^\pi_{\tau}(\mathcal{I},\mathcal{J}))} \leq \frac{1}{\sigma_d(Q^\pi(\mathcal{I},\mathcal{J})) - \Vert \widetilde{Q}^\pi_{\tau}(\mathcal{I},\mathcal{J}) - Q^\pi(\mathcal{I},\mathcal{J}) \Vert_{\mathrm{op}}} $$ (Note that for simplicity, we set $L=R=\mathrm{Id}$.) The second term in the denominator is an error term that scales with $1/\sqrt{T}$. Therefore, to ensure the expression above is positive, we require a condition of the form $T\gtrsim 1/\sigma_d^2(Q^\pi(\mathcal{I},\mathcal{J}))$. This condition is thus necessary for the analysis of any CUR-based methods, including [34]. Now, note that in [35] (e.g., Theorem 14 or Corollary 16 for finite $S,A$), the authors use a **much stronger** assumption: the discount factor $\gamma = O\left( \frac{\sigma_d^2(Q(\mathcal{I},\mathcal{J}))}{d^2 V_{\max}} \right)$, which must hold even in the high sample regime (for large $T$). In contrast, our analysis works for any value of $\gamma$. **2.1. 
Regarding small $\sigma_d$.** We agree that a very small $\sigma_d$ should not drastically affect the guarantees. If $\sigma_d$ is very small and, without loss of generality, $\sigma_{d-1}$ is sufficiently large, one could set $d_{\mathrm{eff}} = d-1$ and obtain bounds that depend on $\sigma_{d-1}$ instead. However, ignoring $\sigma_d$ introduces an approximation error that depends on the incoherence of the $d$-th singular vectors. To safely disregard the recovery of the $d$-th singular value and its associated singular vectors, we must ensure that $\sigma_d$ is small and that the $d$-th singular vector is sufficiently incoherent. This contrasts with Frobenius/spectral norm guarantees, where neglecting terms corresponding to the $d$-th singular value leads only to an additive error of $\sigma_d$, regardless of the singular vectors. We are not aware of any method that can select the rank parameter $d$ to address these issues without assuming knowledge of singular vectors or the appropriate incoherence parameters. **3. Regarding incoherence and spikiness.** While our algorithm's worst-case guarantees scale with $\mu^4$, we believe this comparison is not particularly useful. As shown in the table, in a general, non-adaptive setting, the guarantees typically scale with $\alpha^2 \mu^2$ . However, in this paper (following [35]), we demonstrate that adaptive sampling allows us to eliminate the dependence on the incoherence parameter $\mu$. On the other hand, the spikiness parameter $\alpha$ quantifies the difficulty of recovering the full low-rank structure. We believe that the dependence on $\alpha$ cannot be improved through different sampling methods and instead requires a novel matrix recovery method. We will discuss these issues further in the paper and clarify our concept of incoherent-free matrix recovery. **3.1. 
Regarding the dependence on the rank parameter $d$.** We omit explicit mention of the rank parameter $d$ because the sample complexity scales as $d^3$ in all three CUR-based settings, including ours and those in [34,35]. We believe that the nuclear norm-based method used in Theorem 21 of [34] offers slightly better scaling, with $d^2$. We will clarify this further in our revised version of the manuscript.
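The inequality $\alpha(Q) \leq \mu(Q)^2 d$ discussed in this exchange can be checked numerically. The sketch below assumes the standard incoherence definition $\mu = \max\big(\sqrt{S/d}\,\Vert U\Vert_{2\to\infty},\ \sqrt{A/d}\,\Vert W\Vert_{2\to\infty}\big)$; the paper's exact constants may differ.

```python
import numpy as np

def spikiness(Q):
    """alpha(Q) = sqrt(S*A) * ||Q||_max / ||Q||_F (always >= 1)."""
    S, A = Q.shape
    return np.sqrt(S * A) * np.abs(Q).max() / np.linalg.norm(Q)  # norm() = Frobenius

def incoherence(Q, d):
    """Assumed standard incoherence of the rank-d singular subspaces."""
    S, A = Q.shape
    U, _, Vt = np.linalg.svd(Q, full_matrices=False)
    mu_U = np.sqrt(S / d) * np.linalg.norm(U[:, :d], axis=1).max()
    mu_W = np.sqrt(A / d) * np.linalg.norm(Vt[:d, :], axis=0).max()
    return max(mu_U, mu_W)

rng = np.random.default_rng(1)
d, S, A = 3, 50, 40
Q = rng.standard_normal((S, d)) @ rng.standard_normal((d, A))  # exact rank d
alpha, mu = spikiness(Q), incoherence(Q, d)
print(alpha <= mu**2 * d)  # True: bounded incoherence implies bounded spikiness
```

The one-directional implication is exactly the rebuttal's point: the converse can fail, since a matrix can have small spikiness while one singular vector is highly concentrated.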
Summary: The work presents a policy iteration algorithm that relies on a supposed low-rank structure of the value function (Q-function). The proposed work first estimates the low-rank matrix of a given policy before performing the policy iteration step. The Leveraged Matrix Estimation (LME) algorithm learns the Q-matrix for a given policy. Given a sampling budget, LME estimates the Q-matrix in a three-step process. The policy iteration algorithm also uses LME at the policy evaluation stage. Strengths: The proposed Q-matrix estimation and policy iteration algorithms are interesting for sparse MDPs. Sample complexity bounds for the algorithm. The analysis is easy to follow. Weaknesses: It is not clear where such low-rank Q-matrices appear in the real world. Do coherence and spikiness of Q generally appear in control problems? Can the authors show any real-world application like gym environments? There is a significant body of work that leverages the rank-deficient structure of Q; it is not immediately clear what additional insights the current draft has over past work. Can you compare with the other algorithms proposed? Technical Quality: 3 Clarity: 2 Questions for Authors: See weakness Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! Please find our responses below. **A. Appearance of low-rank $Q$ matrices in real-world problems.** Low-rank $Q$ matrices have been observed practically, as shown in Section 4.1 in [43] for a wide range of environments (Atari games). Motivated by this observation, further empirical studies and the first theoretical results assuming prior knowledge of the most informative states have been presented in [34,35]. We believe that our paper is well-motivated by the fact that it is the first theoretical paper proving nearly optimal entrywise guarantees with **no prior knowledge** of the problem structure. Furthermore, we recognize that assumptions made for theoretical analysis are seldom completely true in real-world problems. For example, regarding the extensively studied framework of linear MDPs, there are very few real-world problems where such structure holds (with fixed feature vectors). Nonetheless, we believe that our analysis should be used as a proxy for real-world environments and can be practically useful even in settings with an approximately low-rank structure. **B. Coherence/spikiness always appear in real-world control problems.** Coherence and spikiness are general properties of matrices that are of constant order in very specific settings, assuming that all entries of the matrix are approximately equally informative. We believe that making RL algorithms robust and independent of specific environment-related quantities, such as incoherence, is an important challenge. In this work, we address this challenge for a class of low-rank $Q$ matrices by providing a robust algorithm that ensures low error for a wide range of matrix instances. **C. Numerical simulations in gym environments.** We refer the reviewer to part B.1 of our response to reviewer WY9t for a discussion on applying our algorithm to gym environments.
Moreover, since the focus of our work is primarily theoretical, we believe that numerical simulations should complement the main theoretical results and be applied in the most illustrative environments. Thus, we are not convinced that repeating the experiments of [35] in the gym environment would significantly strengthen our work. **D. Comparison with literature on low-rank $Q$-learning.** For a comparison with past work, we refer the reviewer to Section C of our global rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you very much for your response. I would be more inclined to see numerical evaluations as a complement to the current theoretical results; however, that is not critical for my decision. I prefer to keep my score unchanged. --- Reply to Comment 1.1.1: Comment: Thanks again for your quick follow-up; we would be happy to answer any other questions if you have more. --- Rebuttal 2: Comment: Dear reviewer, Thanks again for your review. We have answered your concerns and would be grateful to hear your thoughts. If you need further clarification or want to ask more questions, we would be happy to respond. Best regards.
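As a toy illustration of what "approximately low-rank" means for a $Q$-matrix (a hypothetical check, not an experiment from the paper or from [43]), one can measure the fraction of spectral energy captured by the top $d$ singular values:

```python
import numpy as np

def topd_energy(Q, d):
    """Fraction of the squared Frobenius norm carried by the top-d singular values."""
    s = np.linalg.svd(Q, compute_uv=False)
    return (s[:d] ** 2).sum() / (s ** 2).sum()

rng = np.random.default_rng(2)
d, S, A = 3, 100, 80
low_rank = rng.standard_normal((S, d)) @ rng.standard_normal((d, A))
Q = low_rank + 0.01 * rng.standard_normal((S, A))   # approximately rank d
print(topd_energy(Q, d))  # close to 1.0 for an approximately rank-d matrix
```

A value near 1.0 for small $d$ is the empirical signature of the low-rank structure the analysis assumes.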
Summary: In this paper, the authors consider the problem of learning an $\epsilon$-optimal policy in systems with low-rank latent structure, with guarantees that are order-optimal under weaker conditions. The proposed algorithm iterates between exploitation (policy evaluation) and exploration (policy improvement). For policy evaluation, the entries are sampled in two stages to complete matrix estimation. The adopted approach provides entry-wise guarantees with respect to the spikiness of the matrix. Strengths: - The authors have done a great job of clearly defining the problem they are trying to solve and conducting a thorough literature survey showcasing the existing work as well as how their results improve upon it. - The proposed algorithm is parameter-free, which means that it does not rely on spikiness/rank/coherence bounds to work. Similarly, the sample complexity bounds' lack of dependence on these quantities also shows the strength of these results. Weaknesses: - The notation can be hard to read, especially as l/L and r/R denote completely different things, whereas in most cases a lowercase letter denotes entries of the matrix indicated by the corresponding uppercase letter. Technical Quality: 3 Clarity: 3 Questions for Authors: - Theorem 1 shows that the Leverage Scores Estimation depends on the $d^{th}$ singular value. Wouldn't that indirectly impose constraints on the rank/spikiness of the original matrix? - In Line 285, is the subscript of $\Omega$ a typo? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Minor Comment: Please elaborate on acronyms before using them. The goal is to make the paper self-contained for a first-time reader, even though these acronyms are widely known in practice. Line 10 - CUR, Line 30 - MDP, Line 88 - ERM, MLE - Can the authors compare the simulation results in Appendix A with existing approaches akin to [35] to showcase whether their algorithm indeed works in a practical setting?
- Can the authors also showcase how their algorithm holds for different values of rank, spikiness, and condition number? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review and positive feedback! Please find our responses below. **A. We do not impose any constraints on the rank/spikiness of the original matrix.** According to Theorem 1, we need a number of samples $T = \widetilde{\Omega} \left( \frac{(S+A)}{(1-\gamma)^3} \frac{r_{\max}^2\kappa^2 SA}{\sigma_d^2(Q)} \right)$, and note that the second term can be upper bounded by $\alpha^2$ (for $\kappa,d=\Theta(1)$). Thus, the number of samples required for this theorem to hold does depend on the spikiness parameter $\alpha$. However, this does not *indirectly impose constraints on the spikiness*, since the result holds for any level of spikiness. We only require a higher sample complexity for instances with higher spikiness, which intuitively correspond to more difficult matrix instances. Furthermore, note that we do not require knowledge of the spikiness parameter $\alpha$ when sufficiently many samples are provided. **B. Numerical experiments.** - **Comparison with numerical simulations from [35].** Note that the algorithm used in the experimental section of [35] corresponds roughly to our LoRa-VI algorithm with uniform anchors presented in Appendix A.4. More precisely, choosing anchor states uniformly at random in the first stage of our algorithm recovers the algorithm in [35] (with the slight modification of using PI instead of VI, which we do for theoretical reasons, as discussed in Section 3.3). Thus, all results obtained in [35] are obtainable within our framework as well, since it covers a more general setting. - **Additional simulations.** We want to emphasize that our main contributions in this paper are theoretical. However, we conducted four different experiments to show the practical relevance of our work. We agree that experimentally testing the dependence on rank, spikiness, and condition number is an interesting direction. 
We will incorporate new experiments regarding these settings in the revised version of our paper. **C. On notation.** We thank the reviewer for giving useful feedback about our notation. Here are answers to your questions: - using $L,R$ and $\ell,r$ for different quantities: We denote an element $(i,j)$ of a matrix $M$ by $M_{i,j}$ in this paper. Thus, elements of the matrices $L$ and $R$ are denoted by $L_{i,j}$ and $R_{i,j}$, respectively. We will add a sentence explaining this in the notation section of the revised paper. - subscript in line 285: The subscript $\square$ is not a typo. We use notation $\Omega_{\square}$ to denote elements in the square matrix formed by entries $\mathcal{I}\times \mathcal{J}$ in the skeleton of the matrix. - regarding acronyms: We will add descriptions of the acronyms we use. Note that CUR is not an acronym. --- Rebuttal 2: Comment: Dear reviewer, Thanks again for your review. We have answered your concerns and would be grateful to hear your thoughts. If you need further clarification or want to ask more questions, we would be happy to respond. Best regards.
Rebuttal 1: Rebuttal: Thank you all for your efforts in reviewing our paper. We feel that some of the paper's contributions might have been overlooked, and we take the liberty of highlighting them below. **A. Our matrix completion scheme is novel.** We want to emphasize that our matrix completion method is novel and not merely an application of existing results from the matrix completion literature: our work is the first to leverage entrywise bounds for estimating leverage scores and to use them for adaptive sampling. Previously known two-phase matrix completion algorithms (such as that of [7]) only work in noiseless settings, since estimating leverage scores has been challenging without appropriate entrywise (or $\Vert \cdot \Vert_{2\to\infty}$) guarantees. We use recent advancements in establishing entrywise guarantees to prove that we can **1)** successfully estimate leverage scores and **2)** use these estimates to sample in a manner that results in incoherence-free entrywise guarantees. **B. Main contributions of our paper.** Next, we want to emphasize three crucial points on which our algorithm improves upon previously known methods: - Our proposed method enjoys **entrywise guarantees**. As discussed in [35], entrywise guarantees are crucial for RL algorithms. Handling error in the Frobenius or spectral norm does not seem to be sufficient for studying RL algorithms (see Section F.2 in [35]). - **No explicit dependence on incoherence**. As shown in the table below, we remove the explicit dependence on incoherence in the sample complexity bounds. In [34], sample complexities scale with the incoherence constant $\mu$ roughly as $\mu^6$, but by decoupling the effect of incoherence from the effect of spikiness (corresponding to terms with $\sigma_d(Q)$), their sample complexity scales as $\mu^2 \alpha^2$ (note that the spikiness constant $\alpha$ satisfies $\alpha \leq \mu^2 d$). 
In other words, our algorithm is better by a factor of $\mu^2$ than algorithms based on uniform sampling (studied in [34]), and requires the same sample complexity as the algorithm of [35], which has prior knowledge of anchor states. - Our proposed method requires **milder assumptions**. Specifically, we make two significant improvements in reducing the set of assumptions: - we do not require prior knowledge of the most informative states (anchor states), as in [35]. Such knowledge greatly simplifies the problem, essentially reducing it to estimating only the part of the matrix corresponding to the anchor states. We believe this assumption is too strong for real-world applications, and we devise a way to learn such states without any prior knowledge. - our low-rank assumption ($Q^\pi$ is low rank for every deterministic policy $\pi$) is a relaxation of the previous assumption in [34] ($r$ and $P^\pi$ are low rank for any deterministic $\pi$). We achieve this by applying a policy iteration-based algorithm instead of the previously reported algorithms based on value iteration. We refer reviewers to Section 3.3 in the paper, where we explain in detail the benefits of using PI-based methods. **C. Comparison with other methods.** Below, we provide a brief tabular overview of the most relevant algorithms, their performance, and their assumptions. For the sake of brevity, we omit the factors $\kappa,d$ and use the abbreviations NNM (nuclear norm minimization) and MC (matrix completion).

| Method | Error guarantees | Sampling model | Assumption | Sample complexity |
|:---:|:---:|:---:|:---:|:---:|
| Ours | entrywise | adaptive | bounded spikiness | $\alpha^2 (S+A)/\epsilon^2$ |
| Algorithm 1 [35] | entrywise | apriori fixed anchors | anchors apriori known | $\alpha^2 (S+A)/\epsilon^2$ |
| LR-EVI (Thm 9 [34]) | entrywise | unif. anchors | incoherence | $\mu^2 \alpha^2 (S+A)/\epsilon^2$ |
| NNM [9] (Thm 21 [34]) | entrywise | unif. anchors | incoherence | $\mu^2 \alpha^2 (S+A)/\epsilon^2$ |
| Two-phase MC [7] | exact recovery | adaptive | noiseless | not applicable |

We also note that a different type of low-rank structure has recently received significant attention, namely that the $Q$-function can be written as $Q(s,a) = \phi(s,a)^\top \theta$ for an unknown vector $\theta \in \mathbb{R}^d$ and an unknown feature mapping $\phi$ belonging to some function class $\Phi$. These methods are not directly comparable to those presented in the table above and usually do not utilize matrix completion.
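The second stage of the adaptive scheme discussed above samples entries in proportion to the estimated leverage scores, so that informative rows are observed more often. A minimal sketch of that sampling step, with hypothetical names and toy scores (not the paper's actual algorithm or estimates):

```python
import random

def sample_rows(leverage_scores, n_samples, seed=0):
    """Draw row indices with probability proportional to (estimated)
    leverage scores: high-leverage rows, which carry more information
    about the column space, are sampled more often.
    Illustrative sketch only, not the paper's implementation."""
    rng = random.Random(seed)
    total = sum(leverage_scores)
    weights = [s / total for s in leverage_scores]
    return rng.choices(range(len(leverage_scores)), weights=weights, k=n_samples)

# Toy example: row 0 carries most of the leverage mass,
# so it dominates the drawn sample.
scores = [0.90, 0.05, 0.05]
picks = sample_rows(scores, n_samples=1000)
```

In the noiseless two-phase setting this weighted sampling would use exact leverage scores; the point of the entrywise guarantees above is that estimated scores suffice.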
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Swarm Intelligence in Geo-Localization: A Multi-Agent Large Vision-Language Model Collaborative Framework
Reject
Summary: Proposed a graph-based learnable multi-agent framework. The framework consists of multiple stages: Forwarding: Election (K: answer agents; R: reviewer) -> Review -> the K agents discuss until a final conclusion is reached. Also proposed a mechanism to learn the graph connections dynamically. The major contributions introduced in the paper: (A) a new swarm-intelligence geo-localization framework, smileGeo; (B) a dynamic learning strategy; (C) a new geo-dataset (mainly for testing). Strengths: The major strengths of the proposed smileGeo framework are: (a) The learnable graph-based communication strategy seems to work well empirically. In Table 2, the authors demonstrated that it achieves better accuracy with lower average token cost. (b) The proposed method is also scalable, as shown in Table 3. (c) Used an attention-based GNN to predict optimal connections and elections, and also empirically justified the effectiveness of the attention-based GNN. (d) Also constructed simple rules for updating edges (connections) that work well in practice. Weaknesses: The major weaknesses are as follows: (a) Comparisons with baselines seem unfair. (b) Missing details of the evaluation setup, metrics, etc. Technical Quality: 3 Clarity: 3 Questions for Authors: Here are my questions to the authors: (a) Table 1 involves a comparison between open/closed-source single LVLMs and smileGeo-single. However, smileGeo appears to primarily focus on a multi-agent framework, without introducing any new single LVLM architectures. (b) The comparative results of different agent frameworks without web searching are reported in Table 2. How are 'acc' and 'tks' determined for each framework? Which types of LVLMs are employed? Did the authors aggregate all LVLMs and calculate averages per framework, or utilize the best-performing LVLM specific to each framework? It's important not to unfairly advantage one framework over others by using superior LVLMs. 
(c) Question about the comparison with LLM/LVLM-based agent frameworks: For the integration frameworks you compared in Table 2, what specific LVLMs were integrated within the LLM-Blender, LLM Debate, and smileGeo frameworks? Did you use the same LVLM combinations for the different frameworks in the comparison? Different LVLM combinations may have different underlying behavior on metrics. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes, the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Table 1 involves a comparison between open/closed source single LVLMs with smileGeo-single. However, smileGeo appears to primarily focus on a multi-agent framework, without introducing any new single LVLM architectures.** Thank you for your comments. In fact, Table 1 compares the results generated directly by the LVLM behind each individual LLM agent with the results reached when all LLM agents discuss within the smileGeo framework. The experimental design of Table 1 aims to demonstrate that each LVLM possesses a certain reasoning ability and can infer more accurate results by incorporating external information when discussing with other LVLMs. Additionally, we have included the comparative results of different agent frameworks in Table 2. We will refine the structure of the experiments to avoid any confusion. **Q2. The comparative results of different agent frameworks without web searching are reported in Table 2. It's important not to unfairly advantage one framework over others by using superior LVLMs.** Thank you for your comments. We apologize for the confusion about the experiment settings. To ensure the fairness of the experiments, all frameworks utilize the same number and types of LLM agents. The only difference lies in the architectures used for discussion among the LVLMs. We will fill in more details of the experimental settings to avoid confusion in the revised version. **2.1 How are 'acc' and 'tks' determined for each framework?** Thank you for your questions. We have clarified the meanings of 'acc' and 'tks' below Table 2: 'acc' stands for the accuracy of the framework, while 'tks' refers to the average number of tokens a framework uses per query (including image tokens). Additionally, the calculation of 'acc' is provided in Section 4.1, Evaluation Metrics. 
The number of tokens is determined by the length of the query sentences and the pixels of the images: the longer the sentence or the higher the image resolution, the more tokens there are. **2.2 Which types of LVLMs are employed?** Thank you for your questions. Each framework utilizes all the single LVLMs listed in Table 1 as different LLM agents, engages them in discussions, and summarizes the final results. **2.3 Did they aggregate all LVLMs and calculate averages per framework, or utilize the best-performing LVLM specific to each framework?** Thank you for your questions. All frameworks are designed to enable all LVLMs to reach a consensus and provide a unified answer. To prevent situations where a consensus cannot be reached, we implement a majority rule after a certain number of discussion rounds, ensuring a unified answer is recognized by the majority of agents. **Q3. Question about the comparison with LLM/LVLM-based agent frameworks:** **3.1 For the integration frameworks you compared in Table 2, what specific LVLMs were integrated within the LLM-Blender, LLM Debate, and smileGeo frameworks?** Thank you for your questions. All frameworks use the single LVLMs listed in Table 1 as distinct agents, engage them in discussions, and summarize the final results. **3.2 Did you use the same LVLM combinations for the different frameworks in the comparison? Different LVLM combinations may have different underlying behavior on metrics.** Thank you for your questions. Yes, we use the same LVLM combinations for the different frameworks in the comparison. We apologize for any confusion caused and will refine the experimental section to ensure a clearer expression. --- Rebuttal Comment 1.1: Comment: As we approach the end of the author-reviewer discussion period, we respectfully wish to check in and ensure that our rebuttal has effectively addressed your concerns regarding our paper. 
Should you have any remaining questions or need further clarification or additional experimental results, please do not hesitate to let us know. We appreciate the thoughtful reviews and the time you’ve invested in providing us with valuable feedback to improve our work. If you believe that our responses have sufficiently addressed the issues raised, we kindly ask you to consider the possibility of raising the score.
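The consensus mechanism described in the answers to 2.3 above (discussion rounds capped at a fixed number, with majority rule as the fallback) can be sketched minimally as follows; the agent callables and names are hypothetical stand-ins for LVLM calls, not smileGeo's implementation:

```python
from collections import Counter

def conclude(agents, image, max_rounds=20):
    """Query each agent per round until all agree; if no consensus is
    reached within max_rounds, fall back to a majority vote.
    Illustrative sketch: real agents would be LVLM API calls that also
    exchange discussion messages between rounds."""
    answers = []
    for _ in range(max_rounds):
        answers = [agent(image) for agent in agents]
        if len(set(answers)) == 1:  # unanimous: consensus reached
            return answers[0]
    return Counter(answers).most_common(1)[0][0]  # majority rule

# Toy agents with fixed answers: no unanimity is ever reached,
# so the majority-rule fallback decides.
agents = [lambda img: "Paris", lambda img: "Paris", lambda img: "Rome"]
result = conclude(agents, image=None)
```

The round cap bounds the token cost per query, which is what the 'tks' metric in Table 2 measures.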
Summary: This work proposes a new visual geo-localization framework with multiple LVLM (Large Vision-Language Model) agents. The agents communicate with each other to estimate the geo-location of the input image. A dynamic learning strategy is proposed to optimize the communication patterns among agents to improve efficiency. The method is evaluated on the proposed GeoGlobe dataset. Strengths: + The idea of tackling worldwide city-level geo-localization with multiple LVLM agents is very interesting. + The results are surprisingly good in the zero-shot setting, even better than powerful closed-source models. + A detailed comparison with other agent-based methods is provided. The ablation study on the number of agents is also very detailed. + The writing is easy to follow. Weaknesses: - The authors could make the geo-localization setting clearer in the introduction; for example, the paper focuses on worldwide city-level geo-localization. There are lots of different settings for the geo-localization problem, and this could be confusing for some researchers. - This paper provides a comparison with three traditional geo-localization methods, i.e., NetVLAD, GeM, and CosPlace. However, these three methods are either retrieval-based landmark matching methods or fine-grained classification-based place recognition methods. It would be better to provide a direct comparison with a worldwide geo-localization method in the city-level setting, e.g., [A]. Although I believe the LVLM-based method is better in this setting, a comparison would make it more convincing. [A] Pramanick, Shraman, et al. "Where in the world is this image? Transformer-based geo-localization in the wild." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. - There are only two qualitative results in the appendix. Given that the accuracy is over 60%, it should be easy to find successful and failed cases to demonstrate the actual output cases of the proposed method. 
It can also better illustrate how multiple agents help the geo-localization process. - There are also some existing worldwide geo-localization datasets that could be used for more comprehensive evaluation, e.g., IM2GPS3K, YFCC4K. Technical Quality: 4 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors mentioned the limitations in the checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. The authors could make the geo-localization setting more clear in the introduction, for example, the paper focuses on worldwide city-level geo-localization. There are lots of different settings for geo-localization problem and this could be confusing for some researchers.** Thank you for your suggestions. Our task aims to achieve city-level localization for any landmark worldwide, without assuming access to a limitless image database. In the revised version, we will clarify the problem of worldwide city-level geo-localization in the introduction. We will detail the differences, difficulties, and challenges of this task compared to high-precision positioning in a local area. Thank you again for the valuable feedback. **Q2. This paper provides a comparison with three traditional geo-localization methods, i.e., NetVLAD, GeM, and CosPlace. However, these three methods are either retrieval-based landmark matching methods or fine-grained classification-based place recognition methods. It would be better to provide a direct comparison with worldwide geo-localization method on city-level setting, e.g., [A]. Although I believe LVLM-based method is better at this setting, a comparison can make it more convincing.** Thank you for your advice. We have added a direct comparison with the model under the same geo-localization settings. The comparative results (accuracy in %, without web searching) on our dataset, GeoGlobe, are shown below: ||Natural|ManMade|Overall| |:--:|:--:|:--:|:--:| |NetVLAD|26.5134|28.9955|28.6047| |GeM|23.1022|25.4175|25.0749| |CosPlace|28.1688|30.2782|29.8701| |TransLocator[A]|26.1776|34.1971|32.6259| |**smileGeo**|58.6111|64.3968|63.2730| Due to the limited rebuttal time, we did our best to train the ViT-based model presented in paper [A]. 
To ensure fairness, we trained all the models for the same length of time (3 days) using the same hardware configuration, and then used the same test dataset to test each model separately to obtain the results. The results demonstrate that our method significantly outperforms the referenced method. **Q3. There are only two qualitative results in the appendix. Given that the accuracy is over 60%, it should be easy to find successful and failed cases to demonstrate the actual output cases of the proposed methods. It can also better illustrate how multiple agents help the geo-localization process.** Thank you for your suggestions. In the revised version, we will supplement the paper with detailed case studies describing both successful and failed cases. An example of a successful case is illustrated in Figure 6. While a single LVLM may not directly identify local landmarks, it possesses relevant geographical knowledge about them. Our framework can stimulate reasoning capabilities among LVLMs, resulting in correct positioning. Regarding failure cases, we will show, for example, that increasing the number of identical LLM agents in Figure 2 leads to excessive repeated and redundant information in the discussion, negatively affecting the experimental results. Additionally, we will include cases illustrating that web search results provide extra information to our LLM agent framework, achieving better outcomes than when the framework does not rely on Internet search tools. **Q4. There are also some existing worldwide geo-localization datasets that could be used for more comprehensive evaluation, e.g., IM2GPS3K, YFCC4K.** Thank you for your comments. We conducted further experiments on those datasets, and the results are illustrated in the table below (accuracy in %, without web searching). 
We also include the comparative results of the best single LLM agent, Gemini-1.5-pro, in our framework and TransLocator as illustrated in paper [A]. || IM2GPS3K | YFCC4K | |:--:|:--:|:--:| |Gemini-1.5-pro|32.1989|11.0009| | TransLocator [A]|31.0978|13.4039| |**smileGeo**|35.6690|16.0714| Although all models, including ours, have generally lower accuracy on these datasets, our method still outperforms the others. It is worth noting that the YFCC4K and IM2GPS3K datasets do not apply artificial filtering to the images, resulting in ambiguous images with almost no geographical clues, such as food photos and portraits. This issue is consistent with the problem mentioned in Section 4.1 of the paper [A] cited in Question 2 above. Therefore, in our research, we invested significant time and effort to construct a novel dataset, GeoGlobe, to better evaluate the worldwide city-level geo-localization task. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It addresses the concerns and I will keep the rating. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback. We appreciate your time and effort throughout this reviewer-author discussion stage.
Summary: The paper introduces smileGeo, a novel framework for visual geo-localization, which involves identifying the geographic location of an image. The authors argue that while Large Vision-Language Models (LVLMs) show promise in this area, their individual performance is limited. SmileGeo leverages the concept of "swarm intelligence" by enabling multiple LVLMs to collaborate and refine their location predictions through a multi-stage review process. To enhance efficiency, the framework incorporates a dynamic learning strategy that optimizes the selection of LVLMs for each image. Furthermore, the paper introduces "GeoGlobe," a new dataset designed to evaluate visual geo-localization models in open-world scenarios where many images depict locations not seen during training. Experimental results demonstrate that smileGeo outperforms existing single LVLMs and image retrieval methods, highlighting the effectiveness of collaborative learning for visual geo-localization. Strengths: * The idea of using an ensemble of networks/agents for geolocalization is interesting and novel. The authors propose a graph-based social network to enable collaboration between the agents. * The ability to search the internet and provide the agents with relevant information is interesting and improves the performance on the task of geolocalization. * The paper proposes GeoGlobe, a new dataset for benchmarking models on the task of geo-localizing landmarks. The dataset could be utilized in future for other learning based geospatial tasks. Weaknesses: * The paper only seems to tackle the problem of geolocalizing **landmark images**. While this is a challenging problem, the current literature [1, 2, 3] has already tried to address the problem of geolocalizing arbitrary ground-level images. The latter problem requires learning sophisticated geographic and visual features. I think even searching the internet cannot effectively solve the geolocalization problem for non-landmark images. 
* Limited applicability: The framework is built entirely upon the capabilities of different LVLMs (e.g. GPT4, LLaVA, etc). It seems the framework cannot generalize beyond the training data used for training LLMs. * The work fails to address the practical applications and real-life use cases of the framework. Why do we require such a framework? * The limitation and failure cases are not adequately mentioned in the paper. [1] Vivanco Cepeda, Vicente, Gaurav Kumar Nayak, and Mubarak Shah. "Geoclip: Clip-inspired alignment between locations and images for effective worldwide geo-localization." Advances in Neural Information Processing Systems 36 (2023). [2] Haas, Lukas, Michal Skreta, Silas Alberti, and Chelsea Finn. "Pigeon: Predicting image geolocations." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12893-12902. 2024. [3] Berton, Gabriele, Carlo Masone, and Barbara Caputo. "Rethinking visual geo-localization for large-scale applications." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4878-4888. 2022. Technical Quality: 4 Clarity: 3 Questions for Authors: * At present, each agent sees the same information. It might be interesting to incorporate different kinds of information that is revealed differently to the agents, such as multi-view images or panorama images. * How much compute time is used for a single inference run? * Why does the performance of some LLMs decrease with web searching? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Limitations are insufficiently addressed in the paper. The future works mentioned in the conclusion are vague and fail to specify specific future directions for the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. The paper only seems to tackle the problem of geolocalizing landmark images. While this is a challenging problem, the current literature [1-3] has already tried to address the problem of geolocalizing arbitrary ground-level images. The latter problem requires learning sophisticated geographic and visual features.** Thank you for your comments. Geo-localization of ground-level images is indeed challenging and will be a focus of our future work. Our current work primarily addresses the geo-localization of landmark images, as we mentioned in the abstract. In our work, we tackle worldwide city-level geo-localization without relying on an extensive image database, unlike the methods in previous studies [1-3], which compare similar images from a backend database. Such database reliance can limit model performance. Moreover, geo-localization requires understanding complex geographic and visual features. We leverage the knowledge and reasoning abilities of LLM agents to design a new framework. We will clarify this in the introduction and apologize for any confusion. **Q2. I think even searching the internet cannot effectively solve the geolocalization problem for non-landmark images.** Thank you for your comments. We agree that it is very difficult to locate ambiguous images, such as selfies. However, as noted in the second sentence of the abstract, the focus of our work is to perform city-level geo-localization of various landmarks (not only famous landmarks) worldwide. Additionally, we manually constructed a geolocation-based dataset, GeoGlobe, to filter out most images whose locations could not be determined. **Q3. Limited applicability: The framework is built entirely upon the capabilities of different LVLMs. It seems the framework cannot generalize beyond the training data used for training LLMs.** Thank you for your concerns. 
A single LVLM indeed faces these challenges, as many large models attempt to consume vast amounts of data for pre-training. Our motivation is to address the biases in pre-trained LVLMs by combining their strengths. We designed smileGeo using various LLM agents, and experiments show that it outperforms any single LVLM. Additionally, web search results can provide extra information, making the framework more robust. **Q4. The work fails to address the practical applications and real-life use cases. Why do we require such a framework?** Thank you for your questions. Geo-localization holds significant application value. Beyond applications such as the robot navigation mentioned in the introduction, many popular consumer-facing applications also rely on geo-localization technology, as noted in the privacy policies of apps like Twitter. By geo-locating social media pictures posted by users in real time and analyzing their mobility patterns, these applications can offer tourists personalized recommendations for attractions and itinerary planning. Along this line, our framework achieves more accurate geolocation, thereby enhancing the usefulness and precision of such applications. We will refine our explanation in the paper and apologize for any confusion caused. **Q5. The limitation and failure cases are not adequately mentioned.** Thank you for your comments. We recognize the limitation mentioned in the second point of the checklist, noting that the framework currently relies solely on Internet search tools. However, we believe the framework has potential beyond this. As stated in the conclusion, "Looking ahead, we aim to expand the capabilities of smileGeo to incorporate more powerful external tools beyond just web searching." In the revised paper, we will include failure cases in the appendix to provide readers with a better understanding of our model. 
For instance, we will demonstrate a scenario where increasing the number of identical LLM agents in Figure 2 leads to excessive repetition and redundancy, negatively affecting the results. **Q6. At present, each agent sees the same information. It might be interesting to incorporate different kinds of information that is revealed differently to the agents.** Thank you for your suggestions. Providing each LLM agent with a different view of the image is indeed an intriguing idea. However, this involves a setting different from our learning task. Allowing all LLM agents to access the complete image ensures that each agent can fully analyze the data from its unique perspective. Our proposed framework aims to integrate the memory and reasoning capabilities of different LLM agents, leading to improved accuracy in geo-localization tasks. **Q7. How much compute time is used for a single inference run?** Thank you for your questions. The 99% Response Time for smileGeo is less than 25 seconds. This efficiency is largely due to the agent selection model within our framework, which minimizes unnecessary question-answering and communication overhead with large models. Additionally, the slowest LLM agent in the framework has an average response time of less than 500ms, and the average latency for API calls within our servers in the data center is within 50ms. We also limit each question to a maximum of 20 rounds of discussion. Therefore, our model is computationally efficient and significantly outperforms other LLM agent-based frameworks. We will add this explanation to the final version. **Q8. Why does the performance of some LLMs decrease with web searching?** Thank you for your concerns. Some web search results introduce noise into geo-localization. Our web search primarily relies on the Google search engine, which inherently includes advertising URLs and content similar to our queries. 
Models with weaker reasoning abilities are more susceptible to being influenced by this noise. It is consistent with what we highlighted in Section 4.2: "Models with larger parameters demonstrate superior reasoning abilities compared to smaller models". We will include a more detailed explanation of this in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. I have a few clarifying questions: [1] What is so __unique__ about the framework that it could only be employed for geolocalization? I think the proposed framework is very general and not unique to geolocalization. [2] "A single LVLM indeed faces these challenges, as many large models attempt to consume vast amounts of data for pre-training. Our motivation is to address the biases in pre-trained LVLMs by combining their strengths." How do you ensure that the LLM agents have sufficient world knowledge that is relevant for the task of geolocalization? Are there any empirical studies to prove the statement? [3] Regarding practical application: How can the framework be used in robot navigation? Robot navigation requires fast response times for decision-making at each stage. The authors have mentioned that "99% Response Time for smileGeo is less than 25 seconds." This does not make the framework scalable for real-time applications. --- Reply to Comment 1.1.1: Comment: **Response to Q1:** Thank you for your questions. Geo-localization is a complex task that requires extensive geospatial knowledge and strong reasoning abilities. LVLMs offer a novel approach to visual geo-localization by leveraging their powerful visual question-answering (VQA) capabilities, eliminating the need for external geo-tagged image records. This motivation led us to design a discussion framework around LVLMs to fully utilize their strengths and achieve better results. 
Since different LVLMs have different memory and reasoning capabilities for geo-localization tasks, we designed an LLM agent selection module in the proposed framework, which selects the agents best suited to localizing the target image for discussion, thus improving the framework's efficiency. When the selected LLM agents cannot reach a high-confidence conclusion, they can autonomously call an internet-based geographic image search tool to obtain additional positioning information. We also appreciate your acknowledgment of our framework’s potential as a general approach that could be extended to other fields, reinforcing that the underlying concept of the LLM-based discussion framework is versatile. In the revised version, we will mention our intention to explore its application to other areas in future work. **Response to Q2:** Thank you for your questions. This was the motivation behind our first comparative experiment, where we compared different single LVLMs in Table 1. Even without retrieval assistance, most closed-source large model agents (such as GPT-4V and Gemini-1.5-pro) and some open-source large models (like Qwen-VL) achieved higher accuracy than some image retrieval-based methods (as shown in Table 3). This demonstrates that LVLMs inherently possess the ability to analyze and process geo-location data, as well as the capacity to retain geo-location knowledge. Furthermore, our framework allows LVLM agents to search the internet and obtain sufficient world knowledge directly. As mentioned in Section 4.2, "models with larger parameters, such as llava-1.6-34b, demonstrate superior reasoning abilities compared to smaller models," leading to significant improvements in accuracy and outperforming traditional retrieval-based geo-localization methods.
These experiments confirm that LLM agents, particularly closed-source large models and LVLMs with larger parameters, exhibit strong memory and reasoning capabilities for geo-localization tasks, both independently and with additional geo-location information. Additionally, many reports confirm that using LVLMs for geo-tagging has gained widespread acceptance; please see the following links [1-3]. [1] https://x.com/itsandrewgao/status/1785827031131001243 [2] https://lingoport.com/i18n-term/llm/ [3] https://www.assemblyai.com/blog/llm-use-cases/ **Response to Q3:** Thank you for your concerns. In this paper, we propose a framework that effectively addresses the geo-localization task, which could be a critical component of robot navigation. Robot navigation typically involves many stages, such as localization, trajectory planning, and execution of the planned route. Tasks like localization and path planning often prioritize accuracy over real-time processing, especially in city-level navigation, where incorrect trajectory planning can lead to significant resource wastage. Several studies [1][2] utilize LMM/LVLM agents for UAV dispatching, a specific aspect of robot navigation. While inference with LMMs/LVLMs is known to be time-consuming, the successful application of these methods indicates promising prospects for LMM/LVLM-based geo-localization. Additionally, we believe that as LLM technology advances (through quantization, distillation of open-source LLM agents, and the use of lighter, faster closed-source LLM agents such as GPT-4o-mini), our proposed framework will soon be capable of real-time responses. We will include this prospect in the revised paper. [1] Liu, S., Zhang, H., Qi, Y., Wang, P., Zhang, Y., & Wu, Q. (2023). Aerialvln: Vision-and-language navigation for uavs. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 15384-15394). [2] Zhao, H., Pan, F., Ping, H., & Zhou, Y.
(2023). Agent as Cerebrum, Controller as Cerebellum: Implementing an Embodied LMM-based Agent on Drones. arXiv preprint arXiv:2311.15033.
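The discussion protocol described in the Q7 response above (agent selection, confidence-based early exit, a web-search fallback, and a hard 20-round cap) can be sketched as follows. This is a hypothetical sketch: the `discuss` function, the agent interface `answer(image, context)`, and the confidence threshold are our own illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a capped multi-agent discussion loop.
# Only the 20-round cap is taken from the rebuttal text; everything
# else (names, threshold, agent interface) is assumed.

MAX_ROUNDS = 20             # rebuttal: "maximum of 20 rounds of discussion"
CONFIDENCE_THRESHOLD = 0.9  # illustrative value, not from the paper


def discuss(agents, image, web_search=None):
    """Run a bounded question-answering discussion among selected agents."""
    answer, confidence = None, 0.0
    for _ in range(MAX_ROUNDS):
        for agent in agents:
            # Each agent sees the full image plus the current best answer.
            answer, confidence = agent.answer(image, context=answer)
            if confidence >= CONFIDENCE_THRESHOLD:
                return answer  # early exit keeps latency low
        # Low-confidence fallback: supplement with an internet image search.
        if web_search is not None:
            image = web_search(image)
    return answer  # best effort after the round cap
```

The early-exit branch is what keeps the 99th-percentile latency bounded: most queries never reach the round cap.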
NeurIPS_2024_submissions_huggingface
2024
A probability contrastive learning framework for 3D molecular representation learning
Accept (poster)
Summary: To address the problem of potential false positives and false negatives in contrastive learning of molecules, this paper proposes a learnable weighted contrastive learning approach for molecular representation learning. The effectiveness of the proposed method is tested on the MoleculeNet and QM9 datasets. Strengths: 1. False pair labels are a genuine issue for contrastive learning. 2. The experimental results show improved performance. 3. The proposed method is clear and easy to follow. Weaknesses: 1. There are existing works on false labels in contrastive learning [1,2] and weighted contrastive learning [3,4,5]. None of these are discussed in the related works, although they are closely related to the issue the paper tries to address. This limits the novelty of this paper. 2. In the experiments, the data splitting procedure is not clear; I did not find it in the paper. In molecular tasks, data splitting may have a big effect on performance. 3. In the title and the ablation, the authors mention 3D structures. However, it seems that 3D is not a key component of the paper, and the 3D loss is taken from Uni-Mol, which is not a contribution of this paper. 4. The reason for using the Gamma prior should be justified. It is an essential step in the proposed approach and relates to technical soundness. 5. In MolCLR, the contrastive loss is computed between two augmentations Xi_1 and Xi_2, which is the same as Figure 1. But in the code, it is computed between the original sample Xi and one augmented sample Xi_1. Maybe the latter case is less affected by the false labels. 6. The major part of Figure 2 is copied from the cited papers (MolCLR, Uni-Mol, Equiformer). Is this permitted? 7. Some typos: e.g. line 285, missing "Table" before the reference. Overall, the application of advanced contrastive learning to molecular representation learning is a promising idea, but the current paper needs improvement in several aspects for publication.
[1] Debiased Contrastive Learning, NeurIPS 2020 [2] A Theoretical Analysis of Contrastive Unsupervised Representation Learning, ICML 2019 [3] CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss, NeurIPS 2023 [4] Camera Alignment and Weighted Contrastive Learning for Domain Adaptation in Video Person ReID, WACV 2023 [5] Weighted Contrastive Learning With False Negative Control to Help Long-tailed Product Classification, ACL 2023 Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for recognizing the promising potential of our application. W1: There are existing works about the false labels in contrastive learning [1,2] and weighted contrastive learning [3,4,5]. None of these are discussed in the related works while they are quite related to the issue that the paper tries to address. This limits the novelty of this paper. [1] Debiased Contrastive Learning, NeurIPS 2020 [2] A Theoretical Analysis of Contrastive Unsupervised Representation Learning, ICML 2019 [3] CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss, NeurIPS 2023 [4] Camera Alignment and Weighted Contrastive Learning for Domain Adaptation in Video Person ReID, WACV 2023 [5] Weighted Contrastive Learning With False Negative Control to Help Long-tailed Product Classification, ACL 2023 A1: We appreciate the reviewer's suggestion and will update our manuscript to include and discuss these references in the related works section. Our contribution lies in adapting the weighted contrastive loss to molecular data with a new optimization algorithm that allows for simultaneous posterior inference and model optimization, coupled with extensive evaluation from multiple aspects. Specifically, our method diverges from existing works such as [1] and [2] by focusing on a Bayesian inference framework tailored for weighted contrastive learning. Additionally, while [3], [4], and [5] address weighted contrastive losses in other domains, our approach uniquely integrates 3D molecular structures and graph-based representations. By referencing these works, we aim to frame our approach within the broader context of addressing false negatives in contrastive learning and clearly delineate our novel contributions. W2: In the experiment, how to do data splitting is not clear. I did not find it in the paper. In molecular tasks, data splitting may have a big effect on performance.
A2: We apologize for the confusion about the data splitting methodology in the paper. Here, we provide a comprehensive explanation. Chiral MoleculeNet experiments: we split all datasets with scaffold splitting, which groups molecules according to their molecular substructure. Non-chiral MoleculeNet experiments: we created a scaffold split for most datasets, except that the QM9 subtask adopts a random splitting setting following MolCLR. QM9 experiments: for the QM9 dataset (which has different downstream regression tasks than the QM9 subtask in MoleculeNet), we adhered to the random data splitting strategy employed in the Equiformer paper. We will clarify this in our paper. W3: In the title and the ablation, the author mentioned 3D structures. However, it seems that 3D is not a key component of the paper. And the 3D loss is taken from Uni-mol which is not a contribution of this paper. A3: We appreciate the reviewer’s observation and acknowledge that the 3D loss is derived from Uni-Mol. However, our contribution lies in exploring the combined use of this 3D loss with a weighted contrastive learning loss, which our ablation study in Table 5 shows enhances performance. The 3D information is vital in our work for one more reason: the calculation of our weighted contrastive loss depends on the 3D structural information, providing spatial context that complements the molecular graph structure. This enables the protein-ligand binding task (Table 8, Appendix D) and potentially many other real-world drug design tasks that other contrastive-learning-based methods, like MolCLR, cannot handle. We will ensure these points are clearly articulated in the revised manuscript. W4: The reason for using Gamma prior should be justified. It is an essential step in the proposed approach and related to the technical soundness.
A4: We use the Gamma prior because it naturally lends itself to conjugacy in the posterior, which significantly eases the posterior sampling procedure. It is also known for its flexibility in shape and scale when modeling positive continuous variables, making it suitable for sample weights in our setting. Additionally, we compared it with a baseline using binary sample weights with a Bernoulli prior in Table 1. Our results show that the Gamma prior offers superior robustness and accuracy. This advantage is likely due to the broader search space available to continuous weights, binary weights being a subset of this continuous space. W5: In MolCLR, the contrastive is done between two augmentations Xi_1 and Xi_2, which is the same as Figure 1. But in the code, the contrastive is done between the original sample Xi and one augmented sample Xi_1. Maybe the latter case is less affected by the false labels. A5: Thank you for pointing out the difference. We agree that performing contrastive learning between the original sample \(X_i\) and one augmented sample \(X_{i1}\) can potentially reduce the impact of false labels, as it introduces less randomness and maintains higher consistency. We will add a discussion in the paper to highlight the benefits of this design choice. Additionally, we will include visualizations to empirically demonstrate how our method effectively identifies and mitigates the impact of false labels. W6: The major part of Figure 2 is copied from the cited papers (MolCLR, Uni-mol, Equiformer). Is this permitted? A6: We have contacted the original authors for permission to use these images and have received approval. In addition, we will improve the quality of the figure with a similar design to enhance the presentation in the camera-ready version. W7: Some typos: e.g. line 285, missing "Table" before the reference. A7: Thank you for pointing this out. We will correct it in our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal.
But I still have concerns about the use of the Gamma prior, the copyright of the figures, and the omission of related works on weighted contrastive learning. I do like the idea, but the current submission is not ready for publication. --- Rebuttal 2: Comment: Dear Reviewer oFgE, We hope this message finds you well. We wanted to follow up on our rebuttal and ensure you had the opportunity to review our responses to your valuable feedback. We have carefully addressed each of your concerns and made significant improvements to the manuscript based on your suggestions. We would greatly appreciate it if you could take a moment to review our rebuttal and consider the clarifications and enhancements we've made. Your feedback has been instrumental in refining our work, and we hope it convinces you of the strength of our contributions. We kindly ask you to re-evaluate our paper, and if you find our improvements satisfactory, we hope you might consider raising the score. Thank you again for your time and consideration. Best regards, Authors --- Rebuttal 3: Comment: Thank you for your continued feedback. We appreciate your recognition of our idea's potential and would like to address your remaining concerns: Gamma Prior: We chose the Gamma prior for its conjugacy properties and flexibility in modeling positive continuous variables, which suit our Bayesian framework. We are open to further discussion or providing additional comparative experiments. Figure Copyright: We have obtained permissions for the figures used. We can either re-design them or provide the permissions documentation in our revised submission. Related Works: We have now included a detailed discussion of related works on weighted contrastive learning in our updated manuscript, clarifying our unique contributions. We are committed to making the necessary improvements and welcome any further guidance to ensure our work is ready for publication. Best regards, Authors
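The Gamma-prior discussion above, together with the stochastic expectation-maximization optimization the paper describes, can be illustrated with a toy skeleton: the E-step samples positive pair weights from a Gamma distribution, and the M-step takes a gradient step on a weighted surrogate objective. The posterior form, the surrogate loss, and all names here are our own simplifying assumptions, not the authors' actual update rules.

```python
import numpy as np

# Toy stochastic-EM skeleton for a Gamma-weighted contrastive setup.
# Everything below is an illustrative stand-in, not the paper's code.

rng = np.random.default_rng(0)


def e_step(sims, a=5.0, b=1.0):
    """Sample positive pair weights; weights stay strictly positive.

    A larger similarity nudges the Gamma shape upward, mimicking a
    conjugate-style posterior update (our assumption)."""
    return rng.gamma(shape=a + np.maximum(sims, 0.0), scale=1.0 / b)


def m_step(theta, weights, sims, lr=0.1):
    """One gradient step on a toy weighted surrogate objective."""
    grad = -np.mean(weights * sims)  # push toward higher weighted similarity
    return theta - lr * grad


theta = 0.0
for _ in range(5):  # alternate sampling (E) and updating (M)
    sims = np.tanh(theta + rng.normal(size=16))
    weights = e_step(sims)
    theta = m_step(theta, weights, sims)
```

The point of the sketch is the alternation itself: weights are resampled from their (assumed) posterior each round, so the model parameters are never committed to a single hard labeling of pairs.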
Summary: This paper introduces a probability-based contrastive learning framework. It regards learnable weights as variables with different distributions and automatically identifies and mitigates false positive and negative pairs via Bayesian modeling. The authors verify the effectiveness of their method on 13 out of 15 property prediction tasks in MoleculeNet and QM9, which are standard benchmarks. Strengths: (1) The probability-weighted contrastive learning mechanism seems novel to me. It is new to treat sample weights as random variables and sample optimal weights using Bayesian inference. (2) The performance is brilliant and the experiments are comprehensive, ranging from MoleculeNet to QM9. It also outperforms strong baselines such as Equiformer, Transformer-X, etc. (3) The method is presented in a clean and readable way, easy to understand. Weaknesses: (1) Algorithm A in lines 199-216 is filled with text. Can the authors present the algorithm more elegantly? (2) Some related works should be mentioned in the related work and be compared in the experiments. For example, MatchDrop [A] also noticed the false positive sampling issue in the graph data augmentation technique. They propose to keep the crucial subgraph unchanged and only move the less informative part. Can this MatchDrop augmentation be considered in your contrastive learning framework? [A] Rethinking explaining graph neural networks via non-parametric subgraph matching, ICML 2023 (3) MoleculeNet is somewhat small in size. Some larger datasets should be considered, such as PCQM4MV2. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) Why is 0.251 for Equiformer in Table 4 in bold? (2) I remember that Equiformer is purely a backbone architecture without pretraining. Therefore, it may be unfair to compare it directly in QM9. Moreover, it has an improved version Equiformer V2 [A]. [A] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations.
(3) Similarly, the strongest baseline listed in the paper, Uni-Mol, also has an improved version, Uni-Mol+ [A]. However, it is a pity that it does not release its performance on QM9 and MoleculeNet. [A] Highly Accurate Quantum Chemical Property Prediction with Uni-Mol+. (4) I would recommend the author add InstructBio[A], another interesting work as an extra baseline in Table 1 and 2, which also reports performance on MoleculeNet. [A] InstructBio: A Large-scale Semi-supervised Learning Paradigm for Biochemical Problems. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors addressed limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for recognizing the presentation and novelty of our work. Weaknesses: (1) Algorithm A in lines 199-216 is filled with text. Can the author present the algorithm more elegantly? A1: We appreciate the suggestion. We will revise the manuscript by converting the text into functions and formulas, hopefully providing a clearer and more structured presentation of the algorithm. (2) Some related works should be mentioned in the related work and be compared in the experiments. For example, MatchDrop [A] also noticed the false positive sampling issue in the graph data augmentation technique. They propose to keep the crucial subgraph unchanged and only move the less informative part. Can this MatchDrop augmentation be considered in your contrastive learning framework? [A] Rethinking explaining graph neural networks via non-parametric subgraph matching, ICML 2023 Thanks for the suggestion; we will add this paper to the related works section. However, since MatchDrop did not report results on molecular datasets, we will try to incorporate this data augmentation method and include the results in the final version of this paper. (3) MoleculeNet is somewhat small in size. Some larger datasets should be considered, such as PCQM4MV2. A3: Thanks for your suggestion. We added the following experiment on the PCQM4Mv2 dataset: starting from the checkpoint of the pretrained Uni-Mol+ small model, we adopt Atom Masking and Coordinate Masking augmentations and substitute the original 3D coordinate prediction loss with our weighted contrastive learning loss. The model is trained for 150K steps. We obtain 0.0690 valid MAE, marginally better than the 0.0696 of the original model. Using larger backbones and longer training time might help achieve better performance. Questions: (1) Why is 0.251 for Equiformer in Table 4 in bold? Thank you for catching this mistake. We will correct it in the manuscript.
(2) I remember that Equiformer is purely a backbone architecture without pretraining. Therefore, it may be unfair to compare it directly on QM9. Moreover, it has an improved version, EquiformerV2 [A]. [A] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations. We acknowledge the point. While Equiformer is indeed a backbone architecture without pretraining, our experiments on QM9 were also conducted without the use of external data: the contrastive learning was performed directly on the QM9 dataset. Please refer to the following table; our method outperforms EquiformerV2 on 6 out of 12 tasks on QM9, despite using significantly fewer transformer layers (2 vs. 6). This suggests that our approach is both efficient and effective.

| | α | ΔE | E_homo | E_lumo | μ | C_v | G | H | R^2 | U | U0 | ZPVE |
|--------------|-----------|----------|--------|--------|-----------|-----------|---------|----------|-----------|----------|----------|----------|
| EquiformerV2 | 0.050 | 29 | **14** | **13** | **0.010** | 0.023 | 7.57 | **6.22** | 0.186 | **6.49** | **6.17** | 1.47 |
| Equiformer | 0.046 | 30 | 15 | 14 | 0.011 | 0.023 | 7.63 | 6.63 | 0.251 | 6.74 | 6.59 | 1.26 |
| Ours | **0.037** | **24.2** | 21.1 | 13.7 | 0.022 | **0.022** | **6.2** | 6.31 | **0.082** | 7.22 | 9.40 | **1.09** |

(3) Similarly, the strongest baseline listed in the paper, Uni-Mol, also has an improved version, Uni-Mol+ [A]. However, it is a pity that it does not release its performance on QM9 and MoleculeNet. [A] Highly Accurate Quantum Chemical Property Prediction with Uni-Mol+. We appreciate this. We will cite the Uni-Mol+ paper in related works. Please refer to our answer to weakness 3 for the experiment validating our framework's effectiveness on top of Uni-Mol+: our method achieves 0.0690 valid MAE, marginally better than the 0.0696 of the original Uni-Mol+ model.
(4) I would recommend the author add InstructBio [A], another interesting work, as an extra baseline in Tables 1 and 2, which also reports performance on MoleculeNet. [A] InstructBio: A Large-scale Semi-supervised Learning Paradigm for Biochemical Problems. Thanks for the recommendation. We will add this paper to our experiment section. The following analysis shows that our method outperforms InstructBio across almost all subtasks in the MoleculeNet classification and regression experiments.

| | BBBP | Tox21 | ToxCast | SIDER | ClinTox | BACE | ESOL | FreeSolv | Lipo |
|-------------|----------|----------|----------|----------|----------|----------|--------------|--------------|--------------|
| InstructBio | 67.4±0.5 | 75.6±0.3 | 65.1±1.5 | 61.5±1.9 | 78.0±0.6 | 78.5±0.8 | 1.771±0.015 | 0.832±0.020 | 0.752±0.017 |
| Ours | 76.7±2.0 | 80.1±1.0 | 69.9±2.5 | 64.9±3.3 | 89.4±0.1 | 88.2±1.3 | 0.664±0.024 | 1.358±0.452 | 0.590±0.003 |

--- Rebuttal Comment 1.1: Title: Update to the Response Comment: The authors answered all my questions and conducted relevant experiments and comparisons. I have no more problems and highly recommend the AC for acceptance. Thanks for their efforts, and I have raised my score accordingly. --- Rebuttal 2: Title: Gratitude for Your Review and Enhanced Rating Comment: Dear Reviewer, Thank you for taking the time. We really appreciate your decision. Your support is invaluable in improving our work. Best regards, Authors
Summary: This paper proposes a probability-based contrastive learning framework for 3D molecular representation learning. It addresses the issue of false positive and negative pairs in existing methods. Experiments show its effectiveness, outperforming other baselines. The approach has wide applicability and can boost the performance of molecular representation learning models. Strengths: 1. To the best of my knowledge, this paper raises an intriguing question and warrants attention. 2. The approach taken to address the problem is also innovative. 3. The experiments were comprehensive and yielded significant gains. Weaknesses: There are no major flaws in this article overall, but it requires attention to detail in the writing. For example: 1. Eq.3 is supposed to be $e^{w_i^+s_i}$, not $w_i^+s_i$. Or am I misunderstanding? 2. Line 129 contains a duplicated "positive." 3. The authors should explain the function of an augmented random variable $u_i$ in the appendix, in case readers are not familiar with the method. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What is the augmentation strategy for QM9 and MoleculeNet? Are only MoleculeNet and Non-Chirality version MoleculeNet pretrained while QM9 is trained directly? 2. If we calculate a similarity based on the molecular structure using chemical software like RDKit to use as a weight, should we achieve better performance? What do you think? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See **Weaknesses** Section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for recognizing the innovation of our work. W1: Eq.3 is supposed to be $e^{w_i^+s_i}$, not $w_i^+s_i$. Or am I misunderstanding? This may be a misunderstanding; Equation 3 is correct as written. This equation is part of our Bayesian augmentation method, designed to create a joint distribution that maintains conjugacy between the priors and posteriors of the weights. W2: Line 129 contains a duplicated "positive." We appreciate your careful review and will remove the duplicated word. W3: The authors should explain the function of an augmented random variable $u_i$ in the appendix, in case readers are not familiar with the method. We will add a detailed explanation of the function of the augmented random variable $u_i$ in the appendix, providing context and clarity on its role in our method. Q1: What is the augmentation strategy for QM9 and MoleculeNet? Are only MoleculeNet and the non-chirality version of MoleculeNet pretrained while QM9 is trained directly? A1: We will add this to the appendix. For non-chiral MoleculeNet, we apply a combination of node removal and edge removal as augmentation strategies, as shown in Figure 1. For chiral MoleculeNet and QM9, we adopt two different augmentations, following Uni-Mol: Atom Masking, which randomly masks a certain percentage of atoms, and Coordinate Masking, which adds random Gaussian noise to atom coordinates. Also, QM9 is trained directly, without pretraining on a larger dataset. Q2: If we calculate a similarity based on the molecular structure using chemical software like RDKit to use as a weight, should we achieve better performance? What do you think? A2: As discussed in our common response to all reviewers, we have conducted experiments comparing different weighting strategies, including similarity-based weights derived from molecular fingerprints computed with RDKit.
Our experiments demonstrate that while this similarity-based approach shows some improvement in certain tasks, it does not outperform our proposed method in the majority of the tasks. This suggests that the added complexity and adaptability of our method provides better alignment with the diverse tasks in molecular property prediction. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer Jokw Comment: Thanks for conducting the experiment and your rebuttal; my concern has been resolved, leading me to adjust my rating from 6 to 7. --- Reply to Comment 1.1.1: Title: Gratitude for Your Review and Enhanced Rating Comment: Dear Reviewer, Thank you for taking the time. We really appreciate your decision. Your support is invaluable in improving our work. Best regards, Authors
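To make the weighted-loss discussion in this thread concrete, here is a minimal NumPy sketch of a per-anchor contrastive loss in the $e^{w \cdot s}$ form of Eq. 3 discussed above, with weights drawn from Gamma priors. The function and variable names, the temperature value, and the loss's exact normalization are our own illustrative choices, not the authors' code.

```python
import numpy as np


def weighted_nt_xent(sim_pos, sim_neg, w_pos, w_neg, tau=0.1):
    """Weighted contrastive loss for one anchor: a sketch of the
    e^{w*s} weighting discussed above. tau and names are our choices."""
    logit_pos = w_pos * sim_pos / tau
    logits_neg = np.asarray(w_neg) * np.asarray(sim_neg) / tau
    # -log( e^{w+ s+} / (e^{w+ s+} + sum_j e^{w-_j s-_j}) )
    denom = np.exp(logit_pos) + np.sum(np.exp(logits_neg))
    return -np.log(np.exp(logit_pos) / denom)


# Weights drawn from Gamma priors with the shapes reported in the paper's
# ablation (a+ = 5, b+ = 1 for positives; a- = b- = 1 for negatives).
# Note NumPy's `scale` parameter is 1/rate, so b = 1 gives scale = 1.
rng = np.random.default_rng(0)
w_pos = rng.gamma(shape=5.0, scale=1.0)
w_neg = rng.gamma(shape=1.0, scale=1.0, size=8)
loss = weighted_nt_xent(0.9, rng.uniform(-1.0, 1.0, size=8), w_pos, w_neg)
```

Down-weighting a suspected false negative (small `w_neg[j]`) shrinks its term in the denominator, which is precisely the mitigation effect the rebuttal describes for Gamma-distributed continuous weights.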
Summary: Commonly used data augmentation methods may produce false positives or negatives in learning molecular representations. This study proposes a novel probability-based contrastive learning framework to tackle false positive and negative pairs in molecular representation learning. Bayesian modeling is used to learn a weight distribution on sample pairs, automatically identifying and mitigating false pairs. This dynamic adjustment enhances representation accuracy. The model is optimized through a stochastic expectation-maximization process that iteratively refines sample weight probabilities and updates model parameters. Strengths: * Present a good analysis on the impact of false positive and negative pairs * The proposed probability-weighted CL is novel * The overall structure of the manuscript is clear and easy to follow (make the fonts in the figures bigger, they are hard to read). * The experimental results show the average performance. How about variances, an indication of robustness? Weaknesses: * Why not design the probability distribution based on the similarity score distribution, a simpler approach? The best hyperparameters (Table 6) indicate that the positive and negative pairs have quite different distributions. Maybe a even simpler approach using a similarity thresholding would work. * What are the appropriate values for a_u and b_u. * Figure 3 is confusing. Is it used to show that, before learning, some positive samples are highly similar to negative samples, which is no longer the case after learning? * Recent and better performance on the MoleculeNet datasets should be considered. [1] Fang, Yin, et al. "Knowledge graph-enhanced molecular contrastive learning with functional prompt." Nature Machine Intelligence 5.5 (2023): 542-553. [2] Hajiabolhassan, Hossein, et al. "FunQG: Molecular representation learning via quotient graphs." Journal of chemical information and modeling 63.11 (2023): 3275-3287. [3] Ren, Gao-Peng, Ke-Jun Wu, and Yuchen He. 
"Enhancing molecular representations via graph transformation layers." Journal of Chemical Information and Modeling 63.9 (2023): 2679-2688. Technical Quality: 2 Clarity: 3 Questions for Authors: * In Figure 1, which node(s) and edge(s) are removed? How is similarity calculated? Does the augmentation consider structural validity and stability? * Why do you choose the Gamma or Bernoulli distribution over other distributions? * The authors point out that i-MolCLR did something similar but claim that the proposed approach offers a better solution. Why not compare with i-MolCLR in all experiments? * The paper shows that the best setting is positive pair weights following a+ = 5 and b+ = 1, and negative pairs following a- = 1 and b- = 1. The significant difference between these two distributions is close to the threshold for decision-making. When the distributions are similar, such as positive pair weights following a+ = 1 and b+ = 1 while negative pair weights follow a- = 1 and b- = 1, performance is worse according to the ablation study. Besides, the authors should discuss the choice of a_u and b_u. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for recognizing the presentation and novelty of our work. W1: Why not design the probability distribution based on the similarity score distribution, a simpler approach? We agree that using similarity thresholding is a simpler approach. However, as the experiments showed, even with a fingerprint-based similarity calculation the issue cannot be completely resolved, and the performance gaps between this simple approach and our method are in fact still reasonably large. Please refer to our response to all the reviewers for a more detailed analysis. W2: What are the appropriate values for a_u and b_u? In the code, we set a_u = b_u = 1; here, we conduct an ablation study on the choice of a_u and b_u. Generally speaking, the choice of a_u and b_u does not influence the experimental results by a large margin; the top performance is at a_u = b_u = 5.

| a_u | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 |
|--------------|------|------|------|------|----------|------|------|------|------|
| b_u | 1 | 1 | 1 | 5 | 5 | 5 | 10 | 10 | 10 |
| Avg ROC_AUC% | 80.4 | 80.2 | 80.1 | 80.6 | **80.7** | 80.2 | 80.3 | 80.1 | 80.5 |

W3: Figure 3 is confusing. Is it used to show that, before learning, some positive samples are highly similar to negative samples, which is no longer the case after learning? Figure 3 does not illustrate the similarity of positive samples to negative samples before and after learning. Instead, it compares the distributions of similarity scores for positive and negative pairs after pre-training with the original MolCLR loss and with the proposed loss. Left plot (positive samples): it shows the distribution of similarity scores for positive samples. The proposed method (blue) produces higher similarity scores with a higher mean and lower variance compared to the original MolCLR method (orange).
This indicates that the positive samples are more consistently similar to each other with the proposed method. Right plot (negative samples): it shows the distribution of similarity scores for negative samples. The proposed method (blue) yields a lower mean and lower variance in similarity scores compared to MolCLR. MolCLR shows two peaks (around 1 and 2.7), suggesting that it incorrectly assigns higher similarity to some negative pairs. The proposed method concentrates the negative similarity scores around 1, indicating better separation of negative pairs from positive pairs. W4: Recent and better performance on the MoleculeNet datasets should be considered. We appreciate the reviewer's suggestion. We will cite these papers in our related work section. In fact, we can apply our method on top of these methods, and we will update the experiments in the final version of our paper. However, we would like to emphasize that direct comparisons with these methods may not be fair due to differences in additional information sources and data splitting methods: KANO: This method utilizes an additional knowledge graph as an information source, which provides a significant advantage. Our method does not use such external information. LineEvo: This method employs random data splitting, which can yield artificially higher performance. FunQG: This method uses a molecular graph coarsening framework that significantly reduces the graph's complexity, losing 3D information for each individual atom. Our approach retains the original graph structure. Here, we present another, fairer comparison with a recent baseline, InstructBio [1]. The following analysis shows that our method outperforms InstructBio across almost all subtasks in the MoleculeNet classification and regression experiments.
| | BBBP | Tox21 | ToxCast | SIDER | ClinTox | BACE | ESOL | FreeSolv | Lipo |
|---------------|--------------|--------------|--------------|--------------|--------------|--------------|-----------------|-----------------|-----------------|
| InstructBio [1] | 67.4±0.5 | 75.6±0.3 | 65.1±1.5 | 61.5±1.9 | 78.0±0.6 | 78.5±0.8 | 1.771±0.015 | **0.832±0.020** | 0.752±0.017 |
| Ours | **76.7±2.0** | **80.1±1.0** | **69.9±2.5** | **64.9±3.3** | **89.4±0.1** | **88.2±1.3** | **0.664±0.024** | 1.358±0.452 | **0.590±0.003** |

[1] Wu F, Qin H, Li S, et al. InstructBio: A large-scale semi-supervised learning paradigm for biochemical problems. arXiv preprint arXiv:2304.03906, 2023.

--- Rebuttal 2: Title: Rebuttal Continued Comment: Questions: Q1: Figure 1, which node(s) and edge(s) are removed? How is similarity calculated? Does augmentation consider structural validity and stability? [A1] Sorry for the confusion; we will add an explanation to our manuscript. In Figure 1, the nodes marked with red dots and the edges marked with red lines are the ones removed. When we “remove” a node, we do not entirely eliminate it from the graph. Instead, we substitute it with a special [MSK] node that does not correspond to a specific element type (like C or O). This approach is employed to avoid any changes in the topology that would occur due to node removal. Structural validity and stability are also preserved. The similarity is calculated based on the representations from our trained encoders. The reason for not using fingerprint-based similarity is that such methods are specifically designed for unmodified molecules. The introduction of [MSK] nodes leads to discrepancies when using fingerprint-based similarities, as these traditional methods cannot accommodate the presence of masked nodes effectively. Therefore, we rely on our model's learned representations, which are robust to such augmentations, to measure similarity accurately.
Q2: Why do you choose the Gamma or Bernoulli distribution over other distributions? A2: We use the Gamma prior because it naturally lends itself to conjugacy in the posterior, which significantly eases the posterior sampling procedure. It is also known for its flexibility in shape and scale for modeling positive continuous variables, which suits the sample weights in our setting. Additionally, we compared it with the Bernoulli prior, which uses binary sample weights, in Table 1. Our results show that the Gamma prior offers superior robustness and accuracy. This advantage is likely due to the broader search space available to continuous weights, with binary weights being a subset of this continuous space. Q3: The authors pointed out that i-MolCLR did something similar, yet claimed that the proposed approach offers a better solution. Why not compare with i-MolCLR in all experiments? A3: We appreciate your suggestion. Please refer to the common reply to all reviewers for the experiment using fingerprint-based similarity on the chiral MoleculeNet dataset (which is exactly the method proposed in i-MolCLR) and to Table 3 for the experiment on the non-chiral MoleculeNet dataset. We are currently extending our experiments to include the QM9 dataset, and we will incorporate these results in a future version of the paper. Q4: The paper shows that the best setting is positive pair weights following a+ = 5 and b+ = 1, and negative pairs following a- = 1 and b- = 1. The significant difference between these two distributions is close to the threshold for decision-making. When the distributions are similar, such as positive pair weights following a+ = 1 and b+ = 1 while negative pair weights follow a- = 1 and b- = 1, performance is worse according to the ablation study. Besides, the authors should discuss the choices of a_u and b_u. A4: We agree that the choice of hyperparameters in the prior distribution plays a crucial role in the performance of our method.
The significant difference between the distributions for positive and negative pairs reflects the inherent differences in their nature. This also suggests that the model requires a more pronounced distinction between these pairs to effectively learn useful representations. Experiments show that the best performance is achieved at a_u=b_u=5; please refer to our answer to W2 for experimental details. --- Rebuttal 3: Title: Follow-up on Rebuttal: Request for Further Review and Feedback Comment: Dear Reviewer T3Hf, Thank you for your thorough feedback on our work. We have carefully addressed each of your comments and provided additional analyses to clarify the points you raised. We believe our detailed rebuttal directly addresses your concerns and strengthens the validity of our approach. We kindly encourage you to review our responses at your earliest convenience, as your insights are invaluable to improving our manuscript. We appreciate your time and look forward to any further comments you might have. Best regards, Authors --- Rebuttal Comment 3.1: Title: I moved my decision from 5 to 6. Comment: The responses address some of my concerns. It will be interesting to see additional experimental results.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. We are happy that the reviewers find our work innovative and promising in general. We notice there are also some questions and concerns from several perspectives. A common issue raised by the reviewers is the comparison against directly using similarity-score thresholds to decide weight distributions, which we address below. For other specific questions raised by each reviewer, we will post our responses separately. We have also revised the manuscript by incorporating some extra results and explanations based on the reviews. We will incorporate more detailed revisions into the camera-ready version according to the responses and further discussions. We hope the reviewers can read our responses and reconsider their decisions if necessary. Thanks again for helping us make this work better. Q: Why not design the probability distribution based on the similarity score distribution? We appreciate the reviewers’ insightful suggestion to use a simpler approach based on similarity scores. To thoroughly investigate this, we designed ablation experiments using the chiral version of the MoleculeNet classification tasks and compared three different methods: 1. Bayesian Inference (Original Method): In this approach, we calculate the weights using Bayesian inference, as described in our paper. 2. Fingerprint-based Similarity: This method calculates the weights based on the similarity scores derived from molecular fingerprints, similar to the approach used in i-MolCLR. 3. Encoder-based Similarity: Here, we first extract features of data pairs using encoders and then calculate their similarity scores. These scores are then regularized to the [0,1] range. For methods 2 and 3, we compute the weights using the following formulas: $w_i^- = 1 - \lambda \times Sim(x_i, x_k)$, $w_i^+ = \lambda \times Sim(x_i, x_k)$. In our experiments, we set $\lambda = 1$.
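The weighting rule above for methods 2 and 3 is simple enough to sketch directly. Below is an illustrative Python snippet; the function name `pair_weights` and its interface are ours, not from the paper or rebuttal, and `sim` is assumed to already be regularized to $[0, 1]$ as described:

```python
def pair_weights(sim, lam=1.0):
    """Compute (w_i^+, w_i^-) for one data pair from its similarity score,
    following w_i^+ = lam * Sim(x_i, x_k) and w_i^- = 1 - lam * Sim(x_i, x_k).
    `sim` is assumed to be regularized to [0, 1] beforehand.
    """
    if not 0.0 <= sim <= 1.0:
        raise ValueError("similarity must be regularized to [0, 1]")
    w_pos = lam * sim
    w_neg = 1.0 - lam * sim
    return w_pos, w_neg

# With lam = 1 (the setting used in the rebuttal's experiments), a highly
# similar pair receives a large positive-pair weight and a small
# negative-pair weight.
```

Note that the fingerprint-based and encoder-based variants compared in the table below differ only in how `sim` itself is computed; the weighting rule is shared.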
| MoleculeNet Experiments | BBBP | Tox21 | ToxCast | SIDER | ClinTox | BACE | MUV | HIV | PCBA |
|--------------------------|-----------|---------------|---------------|---------------|-----------|---------------|---------------|---------------|-----------|
| Bayesian Inference | 76.7±2.0 | **80.1±1.0** | **69.9±2.5** | **64.9±3.3** | 89.4±0.1 | **88.2±1.3** | **82.9±3.1** | **83.0±1.7** | 82.9±0.5 |
| Fingerprint Similarity | **77.9** | 80.0 | 68.9 | 64.9 | 87.6 | 83.6 | 78.7 | 79.8 | **83.8** |
| Encoder-based Similarity | 75.2 | 79.6 | 67.8 | 58.5 | **90.4** | 82.8 | 80.5 | 81.0 | 74.5 |

From these results, we observe the following: 1. Fingerprint-based Similarity: While this method showed improvements in 2 out of 9 tasks compared to our original method, it did not perform as well overall. This indicates that while similarity-based methods are simple, they may not be flexible enough to fully capture the complexities of molecular representations required for robust performance across diverse tasks. 2. Encoder-based Similarity: This approach performed worse than both the Bayesian inference method and the fingerprint-based similarity approach, further suggesting that using a direct similarity-based method does not necessarily yield better results. These findings suggest that while simpler methods like similarity thresholding may work in some cases, they do not outperform our proposed Bayesian inference method, which is designed to dynamically adapt and better align positive and negative pairs for molecular representation learning. Thus, our Bayesian inference-based approach is essential for achieving state-of-the-art performance across various molecular property prediction benchmarks. We hope this detailed comparison addresses the reviewer’s concerns and clarifies the advantages of our proposed method.
NeurIPS_2024_submissions_huggingface
2024
Training for Stable Explanation for Free
Accept (poster)
Summary: The paper proposes a novel metric for assessing the stability of explanations in machine learning models, which is crucial for their trustworthiness. The authors introduce a method called R2ET (Robust Ranking Explanation via Thickness) designed to train models to generate stable explanations efficiently and effectively. They provide experiments across various data modalities and model architectures, showing that R2ET achieves superior stability against stealthy attacks and generalizes effectively across different explanation methods. Strengths: 1. The introduction of a new metric that aligns more closely with human perception than existing $\ell_p$ distance measures is compelling. 2. The theoretical grounding of the methods and the extensive empirical validation provide a high degree of confidence in the results. 3. The paper is clearly written. Weaknesses: 1. The paper’s discussion of explanation robustness is focused on adversarial robustness. However, explanation robustness can also be affected by other factors, such as distributional shifts [1, 2]. It would be beneficial to discuss the relationship and differences between the proposed method and these methods. 2. While the paper tests various data modalities, the focus is on small-scale datasets, such as MNIST. Could you discuss the potential limitations of the R2ET method when applied to real-world, large-scale datasets? An evaluation of the scalability of the proposed method would be beneficial. [1] Li et al. “Are Data-driven Explanations Robust against Out-of-distribution Data?” CVPR 2023. [2] Pillai et al. “Consistent explanations by contrastive learning” CVPR 2022. Technical Quality: 4 Clarity: 3 Questions for Authors: Should the authors address the identified weaknesses (listed in descending order of importance), I would be inclined to raise my rating. These revisions would enhance the paper’s contribution to the field and its practical applicability.
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The discussion of limitations in the paper could be more comprehensive. While the authors mention general applicability issues, they do not delve into specific limitations that might affect the deployment of their method in real-world settings, particularly in scenarios with highly variable and noisy data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive reviews. > To Weakness 1: The paper’s discussion of explanation robustness is focused on adversarial robustness. However, the explanation robustness can also be affected by other factors, such as distributional shifts [1, 2]. It would be beneficial to discuss the relationship and difference between the proposed method and these methods. 1. We believe that adversarial robustness (AR) and distributional shifts (DS) as highlighted in [1,2] are **distinct** because of the “perturbation budget” $\epsilon$ between the original and perturbed inputs: $\epsilon$ is **intentionally small** in AR for stealthiness, but **relatively large** in DS due to the more substantial variations inherent in out-of-distribution (OOD) samples, making DS and AR **incomparable**. Specifically, the goal of AR is to **maintain the explanations unchanged** even under *negligible* input perturbations, aiming for consistency in explanations. In contrast, DS ensures **reasonable predictions (and explanations) for OOD samples**, where a *substantial deviation* from the in-distribution samples is expected, naturally leading to **different explanations and rankings**. For example, two images of birds (e.g., Figure 2 from [1]), although from the same class, differ significantly in shape and location, leading to varied feature importance across the images. This variability is typical in DS scenarios. Conversely, in AR cases, the visual and explanatory differences are minimal, thereby requiring the explanation rankings to remain consistent. 2. Despite the fundamental differences in $\epsilon$ between AR and DS, the concepts and properties of thickness and R2ET, as detailed in Section 4, are not specific to AR but are **broadly applicable to different robustness scenarios**.
Thus, we can view them within the DS framework: specifically, the variable $\mathcal{D}$ in Definitions 4.1 and 4.2 (the definition of thickness) denotes the (unknown) distribution of the sample $x'$. Thickness can be defined under the DS framework by viewing $x\in\mathcal{X}$ as in-distribution samples and $x'\in\mathcal{D}$ as out-of-distribution samples. Furthermore, Propositions 4.4–4.7 do not rely on specific assumptions about the distribution $\mathcal{D}$ and merely limit the perturbation budget to $\epsilon$. Thus, these propositions hold even if $\mathcal{D}$ represents an out-of-distribution. This adaptability indicates the universal applicability of thickness and R2ET, allowing them to address both AR and DS challenges effectively. > To Weakness 2: While the paper tests various data modalities, the focus is on small-scale datasets, such as MNIST. Could you discuss the potential limitations of the R2ET method when applied to real-world, large-scale datasets? An evaluation of the scalability of the proposed method is beneficial. We want to point out that we also include the ROCT image dataset from *real-world medical applications* (where robust explanation is of critical importance) in our experiments. The dataset consists of images of 771x514 pixels on average, making it **comparable in scale** to well-known datasets such as ImageNet (469x387), MS COCO (640x480), and CelebA (178x218). Regarding the scalability of the R2ET method, we analyze its computational efficiency, highlighting that it requires **only two additional backward propagations**. This makes R2ET considerably more efficient (**20x faster**) than many adversarial training (AT) methods, as well as [1] and [2]. More discussion is provided in Lines 810-818. This efficiency suggests that R2ET’s approach to generating robust explanations scales to larger, more complex datasets while maintaining computational feasibility.
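To illustrate why a finite-difference scheme costs only two extra gradient evaluations (the analogue of the "two additional backward propagations" mentioned above), here is a generic NumPy sketch of a finite-difference Hessian-vector product on a toy quadratic. This is our own illustration, not the authors' implementation:

```python
import numpy as np

def hvp_finite_difference(grad_fn, x, v, eps=1e-4):
    """Approximate the Hessian-vector product H(x) @ v using two extra
    gradient evaluations (central difference):
        H v ~= (grad f(x + eps*v) - grad f(x - eps*v)) / (2*eps)
    """
    return (grad_fn(x + eps * v) - grad_fn(x - eps * v)) / (2 * eps)

# Toy check on f(x) = x^T A x, whose gradient is 2Ax and Hessian is 2A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
grad_fn = lambda x: 2 * A @ x
x = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])
approx = hvp_finite_difference(grad_fn, x, v)
exact = 2 * A @ v  # ground truth for the quadratic
```

For a quadratic the gradient is linear, so the central difference recovers $2Av$ essentially exactly; for a neural network, `grad_fn` would be a backward pass, so each estimate costs two backward propagations.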
> To Limitation: The discussion of limitations in the paper could be more comprehensive. While the authors mention general applicability issues, they do not delve into specific limitations that might affect the deployment of their method in real-world settings, particularly in scenarios with highly variable and noisy data. We recognize that the limitations discussed could be expanded to better address scenarios involving highly variable and noisy data, which fall outside the “small perturbation” setting. The theoretical analyses in Sections 4-5 still hold across varying distributions and perturbations, affirming that our approach is theoretically sound under unknown perturbations. Nonetheless, this points to a valuable direction for future research: testing and potentially adjusting R2ET to ensure robustness and reliability in more diverse and challenging environments. --- Rebuttal Comment 1.1: Title: Thank you for the reply Comment: Thank you for your reply; I am happy to raise my score. --- Reply to Comment 1.1.1: Comment: Thank you so much!
Summary: The paper aims at robust explanation of predictive models. A new concept called “feature ranking robustness” is proposed, a corresponding objective function called “explanation thickness” is defined, and an optimization algorithm, R2ET, is designed to increase the thickness during model training. Theoretical analyses covering the numerical and statistical aspects of the thickness are provided. Experimental results on diverse data modalities, including tabular, image, and graph data, demonstrate the robustness of the explanations obtained from the thickness metric. Strengths: 1) The enhancement of explanation robustness is different from prediction robustness, as higher-order gradients are involved. The submission considers a new metric based on the relative ranking of the top-k salient features rather than the distance between saliency maps over all features, thus focusing more on the important features that will be perceived by the users. 2) The theoretical analyses are relevant and novel. In particular, the authors connect thickness to certified robustness, adversarial training, and constrained optimization (Section 4.2), and the multi-objective optimization analysis helps clarify the trade-off between model accuracy and explanation robustness. The experimental results are solid, with many datasets, baselines, and metrics to verify the proposed method. 3) The paper is well-organized and concepts are clearly defined. 4) While explanation robustness has been studied previously, the paper adopts a new perspective of top-k thickness. This concept results in a new optimization algorithm and in-depth theory. In particular, formulating the explanation problem under the MOO framework and the learning-theoretic framework makes a unique and significant contribution to the XAI community. It seems that the R2ET algorithm can be applied to multiple different gradient-based explanation methods. Weaknesses: - The thickness concept has been used for prediction robustness. - Eq.
(6) involves evaluating the Hessian matrix, which can be quite time-consuming compared to WD, which does not need the Hessian matrix. Technical Quality: 3 Clarity: 3 Questions for Authors: - In Figure 2, why does the correlation indicate that thickness is a better metric of explanation robustness? More details are needed. - When applying R2ET to other explanation methods, do you need to modify the R2ET algorithm? In other words, how easy is it to apply R2ET to a broader spectrum of explanation methods? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
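To build intuition for the top-k "thickness" idea summarized in the review above, the following toy NumPy sketch computes a worst-case margin between top-k and non-top-k importance scores under a perturbation. This is our own illustrative reading of the gap notion, not the paper's exact Definition 4.1:

```python
import numpy as np

def topk_gap(importance, importance_perturbed, k):
    """Toy surrogate for ranking 'thickness': for each top-k feature of the
    original explanation, measure its margin over every non-top-k feature in
    the perturbed explanation, and return the worst (smallest) such margin.
    A positive result means the top-k set survives the perturbation.
    """
    imp = np.asarray(importance)
    pert = np.asarray(importance_perturbed)
    order = np.argsort(imp)[::-1]          # features, most important first
    topk, rest = order[:k], order[k:]
    gaps = pert[topk][:, None] - pert[rest][None, :]
    return gaps.min()

imp = np.array([0.9, 0.7, 0.2, 0.1])       # original saliency scores
pert = np.array([0.8, 0.6, 0.3, 0.1])      # scores after a small perturbation
# Here the top-2 features keep a positive margin over the rest.
```

A regularizer that pushes this margin up for the unperturbed scores would, as the review notes, make it harder for a small perturbation to reorder the top-k features.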
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive reviews. >To Weakness 1: The thickness concept has been used for prediction robustness. Their fundamental meanings differ: thickness for prediction robustness measures the distance between two decision boundaries, while explanation thickness quantifies the expected gap between two features' importance. Furthermore, unlike the “boundary thickness” for prediction robustness, our explanation thickness is inherently more complicated to optimize because gradient-based explanations are defined in terms of first-order derivatives. Thus, we try to speed up the computation and avoid adversarial training. > To Weakness 2: Eq. (6) involves evaluating the Hessian matrix and that can be quite time consuming, compared to WD that does not need the Hessian matrix. While Eq. (6) requires calculating the Hessian norm, we address the potential computational burden with finite differences. This allows us to **estimate the Hessian norm effectively with only two additional backward propagations**. Thus, compared to Vanilla and WD, R2ET remains competitive in terms of computation. R2ET is much more time-saving than many existing baselines, e.g., about **20x faster** than adversarial training. More discussion concerning time complexity is in Lines 810-818. > To Question 1: In Figure 2, why does the correlation indicate that thickness is a better metric of explanation robustness? More details are needed. The left subfigure demonstrates a high correlation between a sample's thickness and the attacker's required budget to manipulate the rankings. This high correlation signifies that samples with greater thickness demand a larger attack budget for a successful manipulation, thereby justifying thickness as a metric for evaluating explanation robustness. Conversely, the right subfigure shows a low correlation between the Hessian norm of samples and the required attack budget.
It indicates that the difficulty of manipulating explanations remains similar across samples, regardless of whether they have a large or small Hessian norm. This low correlation suggests that the Hessian norm is not a reliable indicator of explanation robustness. > To Question 2: When applying R2ET to other explanation methods, do you need to modify the R2ET algorithm? In other words, how easy is it to apply R2ET to a broader spectrum of explanation methods? R2ET is applicable to *a range of gradient-based explanation methods*—including Gradient (Grad), Gradient*Input, SmoothGrad, and Integrated Gradients (IG)—**without any modifications**, given the theoretical analysis (see Appendix A.1.2) and experimental outcomes (Section 6). In the future, we aim to extend R2ET to a broader spectrum of explanation methods *beyond gradient-based ones*. Given that the objective of R2ET directly maximizes the gaps between feature importances and minimizes the Hessian norm, R2ET should also perform effectively with other explanation methods. --- Rebuttal 2: Comment: I will keep my score after reviewing the other referees' comments and the manuscript. --- Rebuttal Comment 2.1: Comment: Thank you once again. We truly appreciate your insightful and valuable feedback on our work.
Summary: The paper describes a regularizer to add to a loss function to encourage the resulting model to have input attributions robust to ranking changes in its top-k features. That is, for an input x, the input attributions (a score for each input feature) will be similarly ordered (at least among the top-k scoring features) as compared to the input attributions of any small perturbation x' of x. The argument for using ranking, and top-k ranking in particular, to define robustness is that only the top few input features are of relevance to a human interpreting an explanation. Ranking robustness itself might be because the relative importance of features is more relevant to the human than the absolute magnitudes of the importance. To make rankings robust, the regularizer encourages gaps or "thickness" in the scores of the top-k most important features, which would naturally make it harder to reorder them via small perturbations of the inputs and therefore (with the addition of a Hessian term to encourage smoothness of gradients) of the feature importance scores (which are assumed to be gradients, or "saliency"). Analytical arguments are made as to how to implement the regularizer more efficiently. Analytical arguments are also presented for the worst-case performance of an attack on the proposed method (though why the worst case is relevant confuses me). A large collection of experiments demonstrates the use of the regularizer (termed R2ET) with various parameters, compared to other robustness methods faced with several adversarial explanation manipulation methods. These results show the proposed method is not optimal in all cases, though it is in many. Strengths: + S1. The approach includes analytical and experimental methods and seems to achieve the ranking robustness goal to some extent. + S2. Multiple adversarial attacks experimented with, including one aimed at ranking specifically. + S3.
Multiple other robustness methods experimented with, including several that approximate some of the same loss terms (i.e., the Hessian) in different ways (though the lack of experimental timing information is a shortcoming, as listed below). Weaknesses: - W1. Motivation is based on ranking robustness, but ranking robustness is not directly measured in the experimental sections. Among the explanation metrics enumerated in B.1.4, P@k only partially exposes robustness with respect to ranking, since it would miss all importance swaps that happen within the top-k features. The experimental discussion seems to be making the case that the proposed top-k thickness metric is related to P@k, but as P@k does not fully capture ranking robustness, those arguments also fail to make the connection. While I agree that optimizing for thickness would promote ranking stability, I do not find the concepts identical (i.e., there may be other or better ways to achieve ranking stability). Some experimental results may be hinting at the mismatch, where I see the regularizer without the Hessian sometimes outperforming the one with it. Suggestion: Include a more direct ranking robustness measure in the experimental results. I'm unsure what form such a measure would take. Perhaps the ranking correlation from [103] can be used for feature importance in the present paper. - W2. Computational effort reduction is not demonstrated. Part of the design seems to be to limit backward passes and other computationally costly steps. The experiments are plenty, but none seem to demonstrate this aspect. There is an analytical time complexity discussion in the "Time complexity of R2ET" paragraph. The title itself suggests that stability is free, which is a bit deceptive in that the training incurs additional costs. Suggestion: Include experimental measurements of effort in the results.
One of the strengths of the present approach is that explanation time is less costly, as saliency is relatively cheap compared with alternatives. Showing this would be good to discuss/experiment with as well. - W3. Most (or all) theoretical analysis applies only to saliency. Arguments/discussions regarding Proposition 4.6 in the appendix include the assumption that the explanation is a gradient (a saliency explanation). Saliency is defined in Preliminaries, but it is not clearly stated there that the rest of the paper assumes all explanations are of this form. As the methods target saliency, a lot of this paper depends on saliency being a good explanation method to begin with. However, plenty of works argue that it is not (see [73, 75]). Some benefits of methods like IG also share motivation with ranking robustness (that relative scores are more important than absolute ones). Suggestion: Make it clear that the analytical sections make this assumption. Also, note in the "Apply R2ET to other explanation methods" paragraph in Section 6 that the analytical conclusions do not apply to these other methods. A good addition to the limitations discussion would be that saliency itself has some problems which may warrant the use of a different explanation method, especially for the robustness problems this work attempts to solve with a regularizer. Other comments/suggestions: - C1. The intro/motivation could use some discussion of the primary desiderata in explanations and how they relate to each other. The paper is primarily based on ranking robustness, which I could say is an element of an explanation's interpretability and resistance to adversarial manipulation. From here it would be useful to say that addressing robustness could be done with adjustments to the explanation method or adjustments to the model or training.
The paper discusses the second, but a note about the benefits/drawbacks of the first would be good to include, as they introduce some of the subsequent experiments which are presently hard to fit into the big picture. 1. Want (interpretability and) resistance to adversarial manipulation. 2. Option 1: Change the explanation method (don't use saliency). Benefit: no model changes. Drawbacks: faithfulness vs. robustness tradeoff; explanation-time costs might be large. Option 2: Add a regularizer for saliency during training. Benefits: fast (or "free" as per the title) explanations, better faithfulness. Drawbacks: training-time costs, utility loss. Option n: ??? 3. We do Option 2, and here are analytical and experimental results demonstrating the strengths and the impact of the weaknesses. This chain of thought justifies the present experiments (i.e., faithfulness) and suggested experiments (i.e., costs as per Weakness W2). Some of the experiments in the Appendix might be approaching these points, though I'm having trouble connecting the dots. - C2. The "thickness" notion could be better presented in the experiments. Figure 8 in the appendix shows some saliency/importance graphs without regularization for gaps. Can you include similar pictures for regularized models? I would expect to see the top-k features have a linear decrease in importance. Seeing this (or something else) could go a long way in building intuition for the approach described in this paper. Smaller things: - The intro point about "inherent limitations of human cognition" as an argument for top-k importance doesn't apply as well to vision tasks, where our eyes look at a huge number of raw features, and necessarily so in order to resolve higher-level structures. - The top-k thickness paragraph on line 97 cites [107] as a work suggesting maintaining the ranking of features, but this work is not on feature importance. The "ranking" there is because the topic is ranking models.
- The statement "only the top-k important features in \cal{I}(x) are more relevant to human perception" is quite vague and I'm unsure how it could be formalized. Also note the grammar issue ("more relevant" than what?). - "cost computations" -> "costly computations". - "single-input DNNs": this reads like the DNNs have a single input neuron, but doesn't mean that. Rephrase? Technical Quality: 3 Clarity: 2 Questions for Authors: - Q1. My understanding of a lot of the technical discussions in the paper was limited, so the comments in the weaknesses in the prior section might not be fully justified. If this is the case, please address how the paper does not have the stated weakness. For those that are weaknesses, please comment on the plausibility of addressing them, either with my suggestions or otherwise. - Q2. Proposition 4.6 relates Equation 6 and Equation 7. Eq. 7 includes perturbations while Eq. 6 does not. I'm having trouble qualifying the perturbations to make the statement make sense. Please include some higher-level points as to why this and other theorems follow. - Q3. Some results show that the regularization without the Hessian performs better than with it, which casts some doubt on the theoretical arguments. Alternatively, it could be a mismatch between P@k and the optimization goal. Can ranking consistency be measured more directly (i.e., as per the suggestion of W1), and would it be able to explain the \H results? - Q4. How would directional scores affect the methodology? While saliency has a sign, I don't see it demonstrated in this paper with a sign. Have all attributions been absolute-valued first? Generally, an explanation with both positive and negative attribution would be more valuable, and relative to ranking, having top-k (positive) and bottom-k (negative) would be the equivalent. - Q5. I'm not following the purpose of the worst-case complexity of an adversarial attack (Section 5).
Wouldn't it make more sense to lower bound the complexity of an attack instead of upper bound it? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Only one limitation is noted in Section 7 with regards to using a surrogate of thickness. I believe there are further limitations as noted in W3 for example. Societal impacts not mentioned though addressing explanation robustness for its own sake or against adversarial perturbations is expected to have only positive societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
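The "Option 2" approach from the reviewer's big-picture list (regularize saliency during training) can be made concrete. A minimal pure-Python sketch; the hinge margin, the linear model, and all function names are our illustrative assumptions, not the paper's exact R2ET objective:

```python
def topk_gap_regularizer(saliency, k):
    """Hinge penalty that is zero when the k-th and (k+1)-th most
    important features (by magnitude) are separated by at least 1.0.
    A hypothetical surrogate for a thickness-style objective."""
    s = sorted((abs(v) for v in saliency), reverse=True)
    gap = s[k - 1] - s[k]          # gap across the top-k boundary
    return max(0.0, 1.0 - gap)

def loss(w, X, y, lam=0.1, k=2):
    """For a linear model f(x) = w . x the saliency map is just w,
    so the regularizer can be applied to the weights directly."""
    pred = [sum(wi * xi for wi, xi in zip(w, x)) for x in X]
    mse = sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)
    return mse + lam * topk_gap_regularizer(w, k)
```

For a deep model the saliency would be the input gradient, so the penalty term requires differentiating through the gradient (a second-order computation), which is where the training-time cost noted in the review comes from.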
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive reviews. Due to the character limit of the rebuttal, we have kept our responses concise. Feel free to discuss further if you have more questions or concerns. > To W1: We agree that P@k may not be the “best” or most “direct” metric for capturing ranking stability, and believe that the study of the “best” evaluation measurement may be explored in future work, which potentially calls for psychology experts or human-involved surveys. We evaluate explanation ranking robustness by P@k because 1) P@k is widely used to evaluate explanation robustness in existing works [13, 86, 72, 32, 81, 68, 23], and 2) P@k better aligns with human perception. As shown in Figure 1, only the top few features (without their importance scores or relative rankings) will be delivered to end-users in credit card applications. Furthermore, as discussed in Lines 997-1000, while there are various metrics for ranking similarity, such as cosine similarity, Pearson correlation, and the one in [103], these can be sensitive to changes in the ordering of less important features, which might not reflect human perception accurately either: they often consider the entire ranking, potentially distorting their usefulness in scenarios where only the most critical features are relevant to decision-making. > To W2: While we include an analytical discussion on the time complexity of R2ET in the Appendix, we acknowledge that we did not present empirical data comparing the actual training times, because training time is influenced by several factors such as *convergence rates, hardware specifications, and learning rates*. Instead, we compare R2ET with baselines using **similar analytical training resources** (number of backward propagations), except Exact-H and SSR which are much more costly.
Besides, we agree that demonstrating the computational efficiency of R2ET during explanation time could significantly strengthen our argument, as R2ET adds no cost to the existing gradient-based explanation methods. > To W3: In the revised version, we will clarify that the theoretical analysis applies to gradient (saliency) explanations, as initially introduced in the preliminary. Although the primary results are on saliency explanations, we extend our work by showing that R2ET could generalize to the other mentioned explanation methods, such as Grad*Inp, IG and SmoothGrad, by maximizing their lower bounds (Appendix A.1.2). The experimental results further indicate R2ET could fit more explanation methods. > To C1: Thanks a lot. We agree with your comments, and will partially revise our writing logic in the revised version according to your suggestions. > To C2: We report “thickness” in Figure 2, Table 3, and Table 5, to show the strong correlation between thickness and robustness. The gaps shown in Figure 8 are not exactly the same as the thickness. We will include more figures for R2ET, and we expect to see a significant gap between the $k$-th and $(k+1)$-th features, aligning with R2ET's objectives. > To "small things": For small thing 1: We agree that humans may resolve higher-level structures and discuss such high-resolution images in the “concept-based” explanation cases, where top-k *sub-regions* may be of more importance than others. For the others, we will revise our writing to make it clearer. > To Q1: Please refer to the response above. > To Q2: The connection is the perturbation budget $\epsilon$. Intuitively, Eq. (5) shows that an L-locally Lipschitz model’s local behavior under an $\epsilon$ manipulation can be approximated using $\epsilon$ and its high-order derivative information (such as the Hessian). A large $\epsilon$ (for a stronger manipulation) imposes a stricter restriction on the model (e.g., smaller change rates). Based on Eq. (5), R2ET in Eq.
(6) “integrates” $\epsilon$ into $\lambda_2$. On the other hand, Eq. (7) aligns the two objective functions via $\epsilon$, as shown in Lines 629-634. > To Q3: We have justified the use of P@k in response to W1. As shown in Table 3, the model with the highest thickness leads to the best P@k. Thus, we attribute the superior performance of R2ET\H to its higher thickness than R2ET. In other words, R2ET may not always achieve the best performance because of the potential mismatch between **R2ET’s optimization goal** and **thickness**. We also agree that there *may* be a mismatch between **P@k** and **“real ranking consistency”**; the latter mismatch will be one of our future research directions. > To Q4: Our analysis and methods in Sections 4 and 5 are independent of the signs of gradients. In other words, the statements and proofs will hold in the general case where saliency does or does not have a sign. In the experiments, we follow existing works and take the magnitude (absolute values) of the gradient. > To Q5: We study the “worst-case” for attackers in Sec. 5, which provides a guarantee for a successful attack on the explanation rankings. In practice, the attacker could be limited by its attack (perturbation) budget and computation resources (number of attack iterations). Thus, the guaranteed successful attack is defined by the upper bound of the attack iterations. > To limitation: We will show the assumptions more explicitly as one of the limitations, and discuss the (positive) societal impacts in the revised version. --- Rebuttal Comment 1.1: Comment: After reading the discussions I'm realizing my reading of thickness and the loss terms was slightly off, hence some of my questions were misdirected. First let me make sure I'm viewing this right now: is it the case that both top-k thickness and the R2ET optimization goal care about neither the influence within the top-k features nor the influence within the bottom non-top-k features?
In this case, your last comment that there might be "mismatch between R2ET’s optimization goal and thickness" is confusing. Was that point about thickness other than the top-k ranking thickness of Definition 4.2? The Definition 4.2 thickness seems to have a direct representation in the loss term, the one with the \lambda_1 coefficient in Equation (6). Also, I now have some deeper concerns about the importance of setting k in the methodology. The implicit goal when using some value of k is that we want a model that has k important features and n-k non-important features for each explanation. Given that explanation faithfulness is among the goals of explanations in general and in this paper, the setting of k is as much, if not more, about the problem being solved/predicted by the model than it is about the complexity/interpretability of explanations. If k is too small or too large, the regularization will force the model to do some weird things. Some tasks might not even have a fixed number k of inputs that are important uniformly (i.e. 1 feature is necessary to make a prediction for some set of instances while 2 features are necessary for a different set). Can you comment on how to deal with this aspect of the methodology? --- Reply to Comment 1.1.1: Comment: Thanks for the further discussion. > is it the case that both top-k thickness and the R2ET optimization goal care about neither the influence within the top-k features nor the influence within the bottom non-top-k features? We motivate the research based on P@k (one of the widely used ranking-based metrics), which does *not* care about the rankings within the top-k features, nor the rankings within the remaining features. Thus, we naturally do not intend to consider such relative rankings for top-k thickness and R2ET. > In this case, your last comment that there might be "mismatch between R2ET’s optimization goal and thickness" is confusing. Was that point about thickness other than the top-k ranking thickness of Definition 4.2?
The Definition 4.2 thickness seems to have a direct representation in the loss term, the one with the \lambda_1 coefficient in Equation (6). R2ET's objective in Eq. (6) and thickness in Eq. (3) in Definition 4.2 are not exactly the same. Let us go back to the *pairwise* thickness defined in Eq. (2) in Definition 4.1: the *pairwise* thickness measures explanation robustness by comprehensively capturing the surrounding gaps (by expectation and integration). The top-k thickness, defined via pairwise thickness, accumulates the pairwise thickness of all possible top-bottom pairs. Thus, an **exact** computation of top-k thickness will involve expectation and integration due to Eq. (2). R2ET optimizes *bounds* on the top-k thickness based on Eq. (5), rather than the definition of the top-k thickness directly. $h(x,i,j)$ in Eq. (6) refers to the *gap*, rather than the *thickness*. Thus, optimizing R2ET's objective function does not necessarily lead to the best thickness. > I now have some deeper concerns about the importance of setting k in the methodology. The implicit goal when using some value of k is that we want a model that has k important features and n-k non-important features for each explanation. Given that explanation faithfulness is among the goals of explanations in general and in this paper, the setting of k is as much, if not more, about the problem being solved/predicted by the model than it is about the complexity/interpretability of explanations. If k is too small or too large, the regularization will force the model to do some weird things. Some tasks might not even have a fixed number k of inputs that are important uniformly (i.e. 1 feature is necessary to make a prediction for some set of instances while 2 features are necessary for a different set). Can you comment on how to deal with this aspect of the methodology? It is a great question. We believe it is asking for a more "fine-grained" explanation robustness.
In a nutshell, we can generalize R2ET to a more "fine-grained" version, although it may depart from some practical scenarios (e.g., credit card applications only disclose a fixed number of reasons for final decisions). As discussed in Lines 107-110, we can generalize R2ET (based on precision@k) to address broader requirements and/or properties. For example, the average precision@k (AP@k) or discounted cumulative gain (DCG) takes the relative rankings/positions among the top-k (or the bottom) features into account. To incorporate these metrics, R2ET can be easily extended by assigning different weights to various feature-pair rankings. Intuitively, the key difference between R2ET (P@k) and the adaptations (based on AP@k and DCG) will be the weights of the gaps $h(x,i,j)$ in Eq. (6), and the R2ET adaptations encourage maintaining rankings between *any* pair of features. In the case of AP@k or DCG, setting $k=n$ ($n$ being the number of features) is meaningful, and R2ET models will keep reasonable gaps between *any* feature pairs. In other words, models can provide the ranking of every feature's importance. Thanks again for your time and consideration. Please let us know if you have further concerns.
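The P@k metric that anchors this whole exchange is simple to state concretely. A minimal sketch (the function names are ours, not the paper's code): the fraction of the original top-k features, ranked by magnitude, that survive in the perturbed top-k, with order inside the top-k ignored as discussed above:

```python
def precision_at_k(original, perturbed, k):
    """P@k between two importance vectors: overlap of their top-k
    feature *sets* (by |importance|), divided by k.  Relative order
    within the top-k is deliberately ignored."""
    def topk(scores):
        ranked = sorted(range(len(scores)),
                        key=lambda i: abs(scores[i]), reverse=True)
        return set(ranked[:k])
    return len(topk(original) & topk(perturbed)) / k
```

For example, `precision_at_k([0.9, 0.5, 0.1, 0.05], [0.9, 0.08, 0.5, 0.05], 2)` returns 0.5: the perturbation swapped features 1 and 2 across the top-2 boundary, so only one of the two original top features survives. The AP@k/DCG-style extensions mentioned above would correspond to replacing the unweighted set overlap with position-dependent weights.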
Summary: In this work, the authors study explanation robustness particularly for saliency-based explanations based on gradient information. They propose to use a robustness metric based on the saliency ranking of features. The central benefits claimed (taken from the introduction) for this approach are: * Relying on $\ell_p$ metrics is not a good/reliable proxy for robustness * Attacking $\ell_p$ metrics leads to an arms race between attackers and defenders In their methodology they provide an alternative called R2ET, the ranking based robustness metric, and describe how this is linked to certified prediction robustness and adversarial training. Moreover, they state the optimization problems one must solve to compute the R2ET metric. Experimentally, the authors explore tabular and image datasets and across various models compute the described R2ET metric. Strengths: The paper is well written in that the authors make their motivations and methodology clear. They also attack an interesting and worthwhile problem in explanation robustness and have put substantial time into their experiments which leads to a comprehensive evaluation of the proposed metric. Weaknesses: The two critical weaknesses I see with this work are its motivation and evaluation. Motivation: Point 1) The authors' first motivation is that explanations within a small $\ell_p$ norm can flip all of the rankings and have significantly different "top" features, thus we should use a ranking approach. However, this exhibits the fallacy of "begging the question."
Explanations within a small $\ell_p$ norm ball can only appear different when visualized with a heatmap if the values in each feature dimension are very small.^1 This is because what the heatmap shows us is a "ranking" of the highest magnitude features, thus the argument boils down to: "Because small $\ell_p$ norm explanations have a non-robust ranking, we should look at a robust ranking approach" which seems like a poor motivation given that explanations with a small $\ell_p$ norm are not the only kinds of explanations that exist unless your classifier is extremely flat. Point 2) The authors state that attacking $\ell_p$ robustness leads to an arms race, but also cite [89] which is a formal certification method that proves no attacker exists and therefore thwarts the arms race (although only for small models). If the authors wish to claim the attacker arms race as a motivation for this paper I think they need to do more to (1) show that an arms race will not exist for their method, which is not obvious to me, and (2) show that they scale far beyond [89], which it appears they already do, but this point will need to be made explicit. Point 3) I think the authors should introduce some toy experiments to show where and how their approach really benefits compared to gradient-based explanations. My intuition is that the only benefit of this approach is when the magnitude of the input gradient is small. Is this the authors' intuition as well? ^1 I suspect if the authors add values to the colormap under the right-most portion of figure 1 we will see this. Because the Technical Quality: 2 Clarity: 3 Questions for Authors: It is not impossible that my main three points in the weaknesses section stem from a misunderstanding. Do the authors think that I have missed something or misrepresented any of their points? Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The primary limitation of the paper is in its motivation.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > To Weakness 1: The authors' first motivation is that explanations within a small L-p norm can flip all of the rankings and have significantly different "top" features ... 1. **There is a misunderstanding of what the L-p norm is measuring**, so let us clarify. The “small L-p norm” is **not used to measure the magnitude of the explanation itself**, but to **quantify the distance between an original explanation and its perturbed counterpart**. Our study is driven by the observations (Fig. 1) that a “slightly manipulated” explanation (with *a small L-p distance to the original explanation*) can lead to *extremely different feature rankings* from the original ranking, altering human perception and decision-making processes. Thus, we study ranking-based metrics aligning with human perception of salient features. 2. The above response is **independent of** the magnitude of the original explanation (e.g., no matter whether the gradient’s L-p norm is large or small). As detailed in Proposition 4.5, changes in rankings depend on the joint effect of the *differences in gradient values* and the *Hessian matrix*, rather than the value of the L-p norm. 3. In our experiments, the classifiers need not be (extremely) flat since we impose no such constraints and assumptions on the explanations, and R2ET has been shown to be effective generally. > To Weakness 2: The authors state that attacking L-p robustness leads to an arms race... 1. Theoretically, Proposition 4.5 delineates that the defender, e.g., R2ET, with a proper objective function preserves the explanation rankings for **all** types of attackers with a given perturbation budget, and thus mitigates the arms race. Specifically, the proposition shows that the attackers cannot alter the model's explanation ranking if their perturbation budget is below the critical threshold (RHS of the inequality). The only chance to manipulate the ranking is to increase the attack budget.
On the other hand, the defenders, e.g., R2ET, increase the critical threshold to defend against **any type of attacker**, mitigating the arms race. In the case of the Hessian norm being nearly zero (the denominator goes to zero), it becomes practically impossible for attackers to manipulate rankings successfully. Notice that the proposition holds regardless of the “scale” of the models. 2. Experimentally, Table 1 and Appendix B.2.3 report explanation robustness facing **attackers with different objectives and different attack strategies**, respectively. For all types of attackers, R2ET typically showcases superior robustness under various attacks, indicating the mitigation of the arms race. The datasets and models involved in the experiments are significantly “larger” than those in [89], e.g., the largest feature dimension studied in [89] is **2k**, compared to **396k** in ours. In sum, we will clarify these statements more explicitly in the revised version. PS. Our proposed approach focuses on ranking-based metrics for evaluating explanation robustness, in contrast to the L-p norm-based metrics used in [89]. > To Weakness 3: I think the authors should introduce some toy experiments... Essentially, R2ET enhances explanation ranking robustness because it maximizes the gaps among features and minimizes the Hessian norm, **rather than minimizing the magnitude of the gradient**. Intuitively, wider gaps (larger absolute distances) indicate more significant distinctions in feature importance. The Hessian norm reflects the rate of change of the feature importance, and a smaller Hessian norm makes it harder for attackers (or any perturbations) to change the model behavior. Our empirical evidence shows that the key is not the magnitudes of gradients, but their differences.
Specifically, in Lines 935-943 of the appendix, we take the top-2 case of the models for Bank and COMPAS in Figure 8 and Table 7 as an example: the model for Bank showcases a smaller average gradient magnitude (approximately 0.08) yet achieves greater top-2 robustness compared to the one for COMPAS, which has a larger gradient magnitude (around 0.3). This indicates that our approach’s benefits extend beyond conditions where gradient magnitudes are minimal. We agree that toy experiments could indeed further validate and clarify the benefits of R2ET, and we will consider adding them in the revised version. > To Question: It is not impossible that my main three points in the weaknesses section stem from a misunderstanding. Do the authors think that I have missed something or misrepresented any of their points? **The most significant misunderstanding is the role of the L-p norm. We do not focus on the norm of the explanation, e.g., $|I(x)|$, nor on small-norm explanations. Instead, we investigate whether the L-p distance between the original explanation and its perturbed counterpart, e.g., $|I(x)-I(x’)|$, can serve as a suitable explanation robustness measurement.** The reviewer seems to believe that we study and restrict the L-p norm of the explanation itself, and doubts the motivation for studying gradient-based explanations with a small L-p norm. However, as Figure 1 shows, our core argument is that even a seemingly minor change (evaluated by L-p norm) in explanations could lead to extremely different explanation rankings (compared to the original explanation ranking). This underscores the necessity for a novel ranking-based approach to assess explanation robustness. **The magnitude of the gradient is not our main focus, and will only marginally impact our theoretical and experimental conclusions.** ***We would appreciate it if the reviewer could set aside this misunderstanding and re-read the manuscript in light of our response.
It would be even better if the reviewer could give more feedback and raise the scores.*** --- Rebuttal Comment 1.1: Title: Thanks for your comments Comment: I have re-read the paper after the author rebuttal and several of my concerns have been addressed. I still have a few minor concerns, but given the review period is coming to a close I will give the authors the benefit of the doubt and raise my score to reflect my overall positive impression of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to revisit our paper and for further considering our manuscript. We greatly appreciate your positive reassessment and your raising the score based on your favorable impression of our work.
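The Figure 1 phenomenon debated above, where a perturbed explanation with a tiny L-p distance from the original nonetheless flips the top-k feature set, is easy to reproduce numerically. A toy illustration (the numbers are ours, chosen so the importances are closely bunched):

```python
def lp_distance(a, b, p=2):
    """L-p distance between two explanation vectors."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def topk_set(scores, k):
    """Indices of the k features with the largest |importance|."""
    ranked = sorted(range(len(scores)),
                    key=lambda i: abs(scores[i]), reverse=True)
    return set(ranked[:k])

# Closely bunched importances: a tiny nudge reorders the top-2 set.
orig      = [0.51, 0.50, 0.49, 0.10]
perturbed = [0.49, 0.50, 0.51, 0.10]

dist = lp_distance(orig, perturbed)              # L2 distance ~0.028
flipped = topk_set(orig, 2) != topk_set(perturbed, 2)
```

Here the L2 distance between the two explanations is under 0.03, yet the top-2 set changes from features {0, 1} to {1, 2}, which is exactly the gap between L-p-based and ranking-based robustness measures that the rebuttal argues for.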
NeurIPS_2024_submissions_huggingface
2024
Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning
Accept (poster)
Summary: This paper integrates a Large Language Model (LLM) within a Sequential Monte Carlo (SMC) algorithm, where the LLM functions as both the proposal distribution (revising existing particles) and the evaluator (assessing each hypothesis in light of new data). The authors applied this method to two cognitive psychology tasks: Zendo and ActiveACRE. These tasks involve formulating latent game rules, updating the probability of these rules based on new data, and proposing new experiments to gather further data. In both domains, the Online, Hard variant of SMC consistently outperforms other SMC variants in identifying the best latent rule, while the Online, Fuzzy variant of SMC more accurately captures human data. However, compared to optimal experimentation (the InfoGain algorithm), none of the LLM-based proposers outperform the InfoGain algorithm. Furthermore, a random proposer outperforms the LLM-based methods, indicating a significant deficit in the LLM’s performance as a proposer. Strengths: I find the proposed method interesting, as it effectively demonstrates the benefits of integrating Bayesian methods with LLMs. The comparison of various LLM-based methods for problem solving and explaining human data is particularly valuable. Weaknesses: The main weakness of the work is that both the enhancement in problem-solving capability and the correspondence with human data are not convincing. While the Online, Hard variant of LLM-based SMC clearly outperforms ReAct-style LLMs, it employs LLMs in three distinct roles: as a proposer (for revising existing hypotheses), as an evaluator (for translating natural language hypotheses into code and verifying them), and as an active learner (for proposing new experiments to perform next). The performance of the LLM as an active learner, however, is particularly weak, even underperforming a random active learner (see Tables 3 and 4). 
Replacing the active learner in the Online, Hard method with a random active learner would likely significantly improve performance compared to using the LLM. Consequently, it is unclear which aspects of the pipeline benefit from the inclusion of the LLM and which aspects are detrimental. Second, the comparison between humans and models is superficial. The best-fitting Online, Fuzzy model clearly includes at least two free parameters, as detailed in Section 2.3. However, the authors did not report the values of these best-fitting parameters. Are these the only two free parameters? Did you add additional decision noise to the model to better capture human data? If so, it would be more informative to report scores such as BIC or cross-validation results rather than just the simple log likelihood of the model. Figure 7 aims to illustrate a bounded rationality model for human-likeness, suggesting that the Online, Fuzzy method best aligns with human behavior when the number of LLM calls per iteration is neither too few nor too many. However, the evidence for this claim appears weak. The bounded rationality pattern only emerges with the Online, Fuzzy method, and the authors tested only three different numbers of LLM calls per iteration (x-axis). With just three data points, it is difficult to conclusively demonstrate an inverted U-shaped relationship. Additionally, when LLMs were used with methods other than the Online, Fuzzy method, the inverted U-shaped relationship did not appear at all. Technical Quality: 2 Clarity: 2 Questions for Authors: Lines 227-229: The authors claim that “Indeed, Table 2 shows that our best-performing model surpasses [13]’s model on human data […], while our model does not perform such parameter fitting.” However, the Online, Fuzzy method does contain free parameters (specifically, the two free parameters that determine the false-positive and false-negative rates for each hard rule) that can be adjusted to better fit human data. 
Did the authors actually fit these parameters using human data? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: I believe the paper would be improved if the authors could independently comment on the three separate roles of the LLMs (i.e., proposer, evaluator, and active learner) in their proposed method. Minor comments: Line 224: “judgement” —> “judgment” Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed feedback! > Concern 1: The main weakness of the work is that both the enhancement in problem-solving capability and the correspondence with human data are not convincing. It is unclear which aspects of the pipeline benefit from the inclusion of the LLM and which aspects are detrimental. We would like to respond to this concern with two key points: 1. LLMs are necessary for natural language hypothesis generation: As discussed in the introduction, since the hypotheses are natural language, we need to leverage an LLM in our proposer. Without LLMs, having natural language hypotheses would not be possible. We also compare our proposers against a non-LLM proposer of non-linguistic hypotheses from [1] in Table 2. 2. Our experiments are controlled: While our work leverages LLMs in three distinct components: the hypothesis proposer, natural-language-to-code transpiler, and active learner, we control our experiments by keeping the transpiler and active learner fixed for all methods in the main results to perform an isolated study on the hypothesis proposer, and keeping the transpiler and hypothesis proposer fixed in the active learning ablation study (Tables 3 and 4). We address other points raised under concern 1 below. > Concern 1.1: While the Online, Hard variant of LLM-based SMC clearly outperforms ReAct-style LLMs, it employs LLMs in three distinct roles: as a proposer, as an evaluator, and as an active learner. In the main results (Tables 1 and 2, and Figure 4) where we show that the online models clearly outperform ReAct-style LLMs in both accuracy and human-likeness, all methods use the same transpiler and active learner. > Concern 1.2: The performance of the LLM as an active learner, however, is particularly weak, even underperforming a random active learner (see Tables 3 and 4).
Replacing the active learner in the Online, Hard method with a random active learner would likely significantly improve performance compared to using the LLM. We believe there is a misunderstanding here. There are three different active learners: LLM, random, and InfoGain. InfoGain (introduced in section 2.2) involves choosing an experiment that maximizes information gain from a pool of candidate experiments proposed by the experiment proposer, which can be either the LLM experiment proposer or the random experiment proposer (an ablation study for the experiment proposer is presented in Table 4). Our work advocates for the InfoGain active learner, which outperforms both the LLM and random active learners as shown in Table 3. All results, except those in Tables 3 and 4, use InfoGain with the LLM experiment proposer, the best active learner, as their active learner. > Concern 2: The comparison between humans and models is superficial. We believe this concern is caused by some misunderstandings. We clarify the misunderstandings below. > Concern 2.1: The best-fitting Online, Fuzzy model clearly includes at least two free parameters, as detailed in Section 2.3. However, the authors did not report the values of these best-fitting parameters. We would like to first clarify that the two noise parameters, $\theta = (\epsilon, \delta)$, are not free parameters; they are random variables, as discussed in section 2 and section 2.3. Since they are random variables with priors on them, they are not optimized but rather get marginalized out as shown in Equations 2 and 3. However, the Gaussian priors for the two noise random variables do have free parameters, namely the means and standard deviations. Thus, there are 4 free parameters in our fuzzy models and no free parameters in our hard models. Although in principle we could have fitted these 4 parameters, in practice, it is really expensive to do so because it involves lots of LLM calls to even evaluate a single parameter setting.
Therefore, we did not fit the free parameters; we only tried two different parameter settings for all fuzzy models, and reported the results of the setting that worked best in the main paper and the results of the other parameter setting in the appendix (Figure 8). The same parameter setting is used across all fuzzy models and reported in the appendix (A.4). > Concern 2.2: It would be more informative to report scores such as BIC. We include here a table of BIC scores which we will include in our revised appendix:

| Method | BIC |
|---|---|
| [1]’s best model | 3085 |
| Batch, Fuzzy | 3352.93 |
| Online, Fuzzy | **2988.77** |
| Batch, Hard | 5842.00 |
| Batch w/ Refinement, Hard | 6999.52 |
| Online, Hard | 10419.86 |

> Concern 3: The evidence for bounded rationality appears weak. When LLMs were used with methods other than the Online, Fuzzy method, the inverted U-shaped relationship did not appear at all. The data provided in Figure 7 is meant to show that our most human-like model is consistent with the theory of bounded rationality, and we leave it for future study to thoroughly examine whether, or the degree to which, the model is boundedly rational. However, the fact that our most human-like model, the online, fuzzy model, is the only model exhibiting the relationship is not a weakness; it is in fact the opposite: it helps strengthen our point. Specifically, we expect the inverted U-shaped relationship to appear only in models that sufficiently explain human data. We believe that other methods do not exhibit the inverted U-shaped relationship because they are not human-like enough to have the bounded rationality property. > Question 1: Contrary to lines 227-229, the Online, Fuzzy method does contain free parameters that can be adjusted to better fit human data. Did the authors actually fit these parameters using human data? We refer the reviewer to our replies to concern 2.1, which address this question. [1] Bramley et al.
2018, Grounding compositional hypothesis generation in specific instances.
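The InfoGain active learner defended above, scoring each candidate experiment by its expected information gain under the weighted particle approximation, can be sketched generically. A minimal sketch; the Bernoulli outcome model and all helper names are our simplification of the general BOED recipe, not the paper's implementation:

```python
import math

def entropy_of(w):
    """Shannon entropy of a normalized weight vector."""
    return -sum(wi * math.log(wi) for wi in w if wi > 0)

def expected_info_gain(particles, weights, experiment, likelihood):
    """EIG of a binary-outcome experiment under a weighted particle
    posterior.  `likelihood(h, experiment)` = P(outcome=1 | hypothesis h).
    EIG = prior entropy - expected posterior entropy over outcomes."""
    total = sum(weights)
    w = [wi / total for wi in weights]
    eig = entropy_of(w)
    p1 = sum(wi * likelihood(h, experiment) for h, wi in zip(particles, w))
    for outcome, p_out in ((1, p1), (0, 1.0 - p1)):
        if p_out == 0:
            continue
        post = [wi * (likelihood(h, experiment) if outcome
                      else 1.0 - likelihood(h, experiment))
                for h, wi in zip(particles, w)]
        z = sum(post)
        eig -= p_out * entropy_of([pi / z for pi in post])
    return eig
```

With two equally weighted hypotheses, an experiment whose outcome perfectly discriminates them scores log 2 nats (one bit), while an experiment whose outcome is 50/50 under every hypothesis scores zero; the active learner would pick the highest-scoring candidate from the proposer's pool.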
Summary: The authors tackle the problem of learning natural language hypotheses and collecting new experiments to enable hypothesis refinement. They propose a system that integrates LLMs and SMC/BOED to solve this problem; crucially, this method goes beyond prior work on inductive reasoning by allowing experimentation to guide hypothesis refinement. In brief, they use an LLM to propose natural language hypotheses, SMC to reweight and resample these hypotheses based on how well they explain the data, and then use SMC’s weighted particle approximation to approximate the expected information gain and select experiments. The authors then evaluate this approach on two tasks, Zendo and ActiveACRE, showing improvements over previous inductive reasoning methods and some agreement between their proposed system and human behavior on these tasks.

Strengths:
* The work presents a principled, novel approach to an interesting problem. First, this work extends previous work on inductive hypothesis generation with LLMs by accounting for experimentation. Second, I think the design choices are quite practical, interesting, and creative.
* I like the idea of considering natural language as the hypothesis space; using SMC and BOED for active experimentation/hypothesis revision is quite natural and well-motivated. SMC and BOED are well-established ideas, but leveraging them for the task of natural language hypothesis generation and experimentation is new, and the authors have a solid approach to integrating LLMs for proposing hypotheses while still guaranteeing the correctness of their procedure.
* The authors ablate the different design decisions of the system.
* The presentation of the paper was mostly clear, but I do have some suggestions.
Weaknesses: I think the tasks do a fine job of evaluating the system, but I'm not sure they're the most compelling demonstrations of the value of natural language hypotheses; they might not fully utilize the power of natural language. For example, it seems like a restricted DSL might be enough for some of these tasks, though I do note that the authors do an analysis to show the benefit of natural language. Additionally, there are a few ablations of the system that could strengthen the paper:
* Ablations that illustrate the role of a natural language hypothesis space, or at least some additional discussion of more realistic settings where this is a good design choice.
* Number of particles/number of hypotheses.
* Some analysis of particle degeneracy would be helpful.
* How does a model with vision capabilities perform on this task (e.g., GPT-4V)?

I think the presentation can be improved a bit in some places. For example, Figure 4b could probably be replaced with a summary figure; it’s a bit difficult to parse a 6x10 grid of figures. Furthermore, it was sometimes hard to understand the details of some of the baselines (without reading the hypothesis refinement/search papers). For example, I think clearly defining the batched inference with refinement baseline (in math) would be helpful in the main text, since it's a crucial method to understand.

Technical Quality: 3 Clarity: 3

Questions for Authors:
* Does LLM-SMC-S produce a consistent estimator of expectations of test functions w.r.t. the normalized posterior (this is the guarantee enjoyed by vanilla SMC)? I think this probably follows from the properly weighted property and the fact that ratios of consistent estimators are consistent. It would be good if the authors could comment on this in the paper, though.
* Does the procedure suffer from particle degeneracy? It’s fine if it does, but since this is a considerable practical challenge of SMC, I think it’s worth commenting on it.
* The derivation in A.1 has some typos.
I think there’s a missing integral sign in line 10.
* I would have liked to see some of the actual hypotheses proposed by the LLM in the main text.
* I think it would be helpful to see how Equation 6 is actually approximated, explicitly in math.
* Figure 2 was a bit confusing for someone who’s used to seeing plots in SMC papers where the circles correspond to particles; here I think the circles represent specific natural language hypotheses (e.g., 3 red balls), but at first I thought they represented particles.

Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for the positive feedback! Below we address concerns and questions.

> Concern 1: I'm not sure if they're the most compelling demonstrations of the value of natural language hypotheses/they might not fully utilize the power of natural language hypotheses. For example, it seems like a restricted DSL might be enough for some of these tasks

We fully acknowledge this concern. Our work evaluates the models on traditional, well-studied Bayesian concept learning datasets to demonstrate that using natural language representation is beneficial even in domains where crafting a DSL is still possible and also to enable in-depth analyses. We believe that our work is needed to convince the Bayesian concept learning community to consider using natural language representation, and it is also an important stepping stone towards fully wielding the power of natural language. As the reviewer pointed out, we do show the almost obvious flexibility of natural language representation in our “majority of blocks are red” experiment. We leave evaluations on more complex domains where natural language representation is a must for future work.

> Concern 2: There are a few ablations of the system that could strengthen the paper.
> - Ablations that illustrate the role of a natural language hypothesis space or at least some additional discussion of more realistic settings where this is a good design choice.
> - Number of particles/number of hypotheses
> - Some analysis of particle degeneracy would be helpful
> - How does a model with vision capabilities perform on this task (e.g., GPT-4 V)?
- Realistic settings: more abstract inductive reasoning domains, such as content recommendation and moral reasoning from the GATE paper [1].
- Number of hypotheses: while not directly ablated, the number of hypotheses is indirectly tied to the number of LLM calls per iteration, which we performed an ablation study on.
- A detailed analysis of LLM-SMC-S (which, we think, may be able to provide a probabilistic account of evolutionary algorithms) and an extension to visual data through the use of multimodal LLMs are both exciting future directions we plan to explore.

> Concern 3: It was sometimes hard to understand the details of some of the baselines

Please see our global response, where we provide more details of the baselines. They will be included in our revised paper.

> Question 1: Does LLM-SMC-S produce a consistent estimator of expectations of test functions wrt the normalized posterior (this is the guarantee enjoyed by vanilla SMC)? It would be good if authors could comment on this in the paper though.

Yes, it does. We will revise our paper accordingly.

> Question 2: Does the procedure suffer from particle degeneracy? It’s fine if it does but since this is a considerable practical challenge of SMC, I think it’s worth commenting on this.

From what we see during the experiments, particle degeneracy usually happens when the ground truth rule is discovered and included in the pool of particles. Since the ground truth rule particle has a much higher posterior than the others, the other particles tend to get killed. When the ground truth rule has not been discovered, the particles tend to remain diverse.

> Question 3: The derivation in A.1 has some typos.

Thank you for pointing it out! We will fix it.

> Question 4: I think I would have liked to see some of the actual hypotheses proposed by the LLM in the main text.

Please see our global response, where we provide actual hypothesis traces from our models. They will be included in our revised paper.
> Question 5: I think it would be helpful to see Equation 6 is actually approximated explicitly in math.

Equation 6 is actually the approximation of Equation 4, and it can be computed exactly.

> Question 6: Figure 2 was a bit confusing for someone who’s used to seeing plots in SMC papers where the circles correspond to particles

The figure does in fact represent particles with circles. The three red circles represent three particles with the same hypothesis. The colors of these circles change when their hypotheses are revised.

[1] Li and Tamkin et al. 2023. Eliciting Human Preferences with Language Models.

---

Rebuttal Comment 1.1: Title: Acknowledged and maintain my positive support

Comment: Thanks for your clarifications! I'll maintain my score and advocate/support the acceptance of this paper. I only have one presentation comment. For Proposition 1, consider making it a statement about the consistency of LLM-SMC-S? Presumably, consistency is the guarantee that people care about, and the properly weighted property is why this guarantee holds (I could be missing something though). Regarding the authors' response to the first concern: that's reasonable; consider elaborating further in the Discussion (although this is already briefly discussed).
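The particle-degeneracy question above is often made quantitative via the effective sample size (ESS) of the particle weights; a generic SMC diagnostic sketch, not tied to this paper's implementation:

```python
def effective_sample_size(weights):
    """ESS = (sum w)^2 / sum w^2 for unnormalized, nonnegative weights.

    Equals N when all N weights are equal (diverse particles) and
    approaches 1 when a single particle dominates (degeneracy).
    """
    total = sum(weights)
    return total * total / sum(w * w for w in weights)

# A common heuristic is to resample when ESS drops below half the
# particle count.
diverse = effective_sample_size([0.25, 0.25, 0.25, 0.25])     # = 4.0
degenerate = effective_sample_size([0.97, 0.01, 0.01, 0.01])  # close to 1
```

This matches the authors' observation: once a ground-truth-like particle dominates the posterior, ESS collapses toward 1.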
Summary: The paper addresses the problem of inferring rules and designing experiments based on them. To do so, they 1) propose representing rules in natural language, generated with LLMs; 2) use Monte Carlo algorithms to score them; and 3) revise these rules and propose new experiments with LLMs. They instantiate the problem in the domains of Zendo (binary rules) and ActiveAcre (abstract causal reasoning / blicket-detector-style tasks). The authors then compare human inferences with LLM inferences to show what might explain human inferences and how their method can outperform humans in inferring underlying rules.

Strengths:
- The paper is well written; I enjoyed reading it!
- The problem is well motivated, and the description of the methods is also clear.
- The authors use good baselines to compare their method with. I liked the controls and conditions used.
- I particularly liked the rule-following analysis and visualizations presented in fig 4b.
- I also liked the discussion on using formal languages vs natural language to represent rules!

Weaknesses:
- I think the description of the baselines could be made clearer. Including the prompts in the main text or using figures to describe them could be useful.
- While I liked the discussion on formal languages vs natural languages, I would have liked to see a baseline where the model implements rules in a formal language. This need not be a DSL, but a way for the model to reason with an external symbolic solver. E.g., the language could be Python (with pymc/stan/pyro), or webppl.
- Relatedly, [https://arxiv.org/abs/2402.17879](https://arxiv.org/pdf/2402.17879) seems like a relevant paper where rules are inferred and implemented in pymc.
- Finally, the two domains, while inspired by psychology experiments, are quite artificial.
They did make it possible for the authors to do an in-depth analysis (with human responses, trying different variations, and analysing failures), but adding a more realistic domain would strengthen the contribution of the paper.

Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for the positive feedback! Below, we address concerns raised by the reviewer.

> Concern 1: I think the description of the baselines could be made clearer.

Please see our global response where we provide more details of the baselines. They will be included in our revised paper.

> Concern 2: While I liked the discussion on formal languages vs natural languages, I would have liked to see a baseline where the model implements rules in a formal language. This need not be a DSL, but a way for the model to reason with an external symbolic solver. E.g., the language could be Python (with pymc/stan/pyro), or webppl.

We did not consider such a baseline because:
1. A pair of recent papers found that pure Python generation underperforms NL->Python (Ellis ‘23 NeurIPS [1], Wang ‘24 ICLR [2]).
2. We compare with the Bramley et al. [3] Zendo model, which uses a formal language.

We will mention these facts in the revision.

> Concern 3: The two domains, while inspired by psychology experiments, are quite artificial. They did make it possible for the authors to do an in-depth analysis, but adding a more realistic domain would strengthen the contribution of the paper.

We fully acknowledge this concern. Our work evaluates the models on traditional, well-studied Bayesian concept learning datasets to demonstrate that using natural language representation is beneficial even in domains where crafting a DSL is still possible and also to enable in-depth analyses. We believe that our work is needed to convince the Bayesian concept learning community to consider using natural language representation, and it is also an important stepping stone towards fully wielding the power of natural language. We do show the almost obvious flexibility of natural language representation in our “majority of blocks are red” experiment. We leave evaluations on more complex domains where natural language representation is a must for future work.
[1] Ellis 2023, Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language. [2] Wang et al. 2024, Hypothesis Search: Inductive Reasoning with Language Models. [3] Bramley et al. 2018, Grounding compositional hypothesis generation in specific instances. --- Rebuttal Comment 1.1: Comment: Thank you for your responses and addressing my concerns! I'll keep my current score.
Summary: This paper describes a model of online construction of rules that explain data, where (a) hypotheses are expressed in natural language, and (b) they are updated by proposing experiments (inputs to the ground-truth rule). The method consists of an extension of SMC, where experiments and hypothesis revisions are proposed by an LLM; the best experiment is chosen by an approximate information-gain criterion. The paper considers both fuzzy and hard rules, and finds that hard rules perform best in terms of task completion (and the model outperforms previous methods, especially those that propose a batch of hypotheses rather than inferring them online). Moreover, the fuzzy-rules model best fits human behavior.

Strengths: The paper is clear and well motivated. The setup of online inference, where the agent is allowed to propose experiments and observe results, is interesting and new in this line of work. Though the initial domains studied in this setup are simple, they allow us to understand the meat of the problem more clearly. I can imagine other more realistic domains, such as ones based on code debugging or data analysis, that might benefit from the ideas here. The paper is careful in its choice of experiments. In Zendo and ActiveAcre, the paper compares with recent, relevant baselines: ReAct (interactive, but LLM only), batch inference (static), batch with refinement (as done in prior work), or online inference. It also accomplishes the goal of obtaining a decent fit to human data, which previous related papers did not attempt, providing a candidate cognitive model besides the best-performing model.

Weaknesses: While the domains used here are interesting for a first study, they are still relatively simple. Most compelling would be to find a task with a richer space of hypotheses and experiments, perhaps also making better use of the common-sense priors embedded in LLMs.
The paper also doesn't provide much qualitative insight into what behavioral differences arise from doing online experiments compared to only doing batched inference, perhaps with refinement. While it might be hard to describe general differences, having a few example traces in the appendix (e.g., showing example scenarios where batched inference falls short, or outperforms the online model) would help us form some qualitative hypotheses about where the gains are coming from.

Technical Quality: 4 Clarity: 4

Questions for Authors:
* Are any of the 10 Zendo predicates particularly unlikely to be recovered? I'd be curious to see an example of the model recovering 'a red is bigger than all non reds', for instance, since this seems like a quite unlikely hypothesis to propose by itself.
* Is the hardcoded random experiment proposer biased in any useful way? It would be good to describe it at least in the appendix. Again, for most of the Zendo rules, I'd guess that most single random experiments would be highly non-informative.
* In the result in Table 4, with a single proposal, there's nothing for the InfoGain reranking to do. So should the first column be interpreted as two random repeats of the same exact model?
* Appendix A1: missing an integral sign in (10)?

Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes, adequately addressed. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for the positive feedback! Below, we address concerns and questions raised by the reviewer.

> Concern 1: While the domains used here are interesting for a first study, they are still relatively simple.

We fully acknowledge this concern. Our work evaluates the models on traditional, well-studied Bayesian concept learning datasets to demonstrate that using natural language representation is beneficial even in domains where crafting a DSL is still possible and also to enable in-depth analyses. We believe that our work is needed to convince the Bayesian concept learning community to consider using natural language representation, and it is also an important stepping stone towards fully wielding the power of natural language. We do show the almost obvious flexibility of natural language representation in our “majority of blocks are red” experiment. We leave evaluations on more complex domains where natural language representation is a must for future work.

> Concern 2: The paper also doesn't provide much qualitative insight into what behavioral differences arise from doing online experiments compared to only doing batched inference

Please see our global response where we provide actual hypothesis traces from our models. They will be included in our revised paper.

> Question 1: Are any of the 10 Zendo predicates particularly unlikely to be recovered? I'd be curious to see an example of the model recovering 'a red is bigger than all non reds', for instance

There are two Zendo rules (out of ten) that are unlikely to be recovered by our models: ‘a red is bigger than all non reds’ and ‘all are blue or small’. We note that human participants from Bramley et al. [1] also perform very poorly on these two rules, as shown in [1]'s Figure 4.

> Question 2: Is the hardcoded random experiment proposer biased in any useful way? It would be good to describe it at least in the appendix.
> Again, for most of the Zendo rules, I'd guess that most single random experiments would be highly non-informative.

We did try to come up with a random experiment proposer that would produce useful experiments. Here is a description of it, which we will include in the appendix:
- Sample the number of cones uniformly from 1-5.
- Sample color, size, orientation, and groundedness uniformly.
- Sample the number of touchings from Geometric(p=0.3) - 1.
- Then, sample random pairs of blocks and make them touch until we have the specified number of touchings.

> Question 3: In the result in Table 4, with a single proposal, there's nothing for the InfoGain reranking to do. So should the first column be interpreted as two random repeats of the same exact model?

The rows in the first column still use different experiment proposers, so they are not random repeats.

> Question 4: Appendix A1: missing an integral sign in (10)?

Thank you for pointing it out! We will fix it.

[1] Bramley et al. 2018, Grounding compositional hypothesis generation in specific instances.

---

Rebuttal Comment 1.1: Comment: Thanks for the response. I appreciate the clarifications. I think commenting in the paper about what is hard about the two hardest Zendo rules would be useful to draw attention for future work.
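The proposer described above could be sketched as follows (the attribute vocabularies and scene encoding are illustrative assumptions, not the authors' exact implementation):

```python
import random

COLORS = ["red", "green", "blue"]           # illustrative attribute values
SIZES = ["small", "medium", "large"]
ORIENTATIONS = ["upright", "lying", "upside-down"]

def random_scene(rng=random):
    n = rng.randint(1, 5)  # number of cones, uniform on 1..5
    cones = [{
        "color": rng.choice(COLORS),
        "size": rng.choice(SIZES),
        "orientation": rng.choice(ORIENTATIONS),
        "grounded": rng.random() < 0.5,
    } for _ in range(n)]
    # Number of touchings ~ Geometric(p=0.3) - 1, i.e., failures before the
    # first success, so the support starts at 0.
    touchings = 0
    while rng.random() > 0.3:
        touchings += 1
    # Sample random pairs of blocks until the target number of touchings is
    # reached (capped by the number of distinct pairs).
    pairs = set()
    max_pairs = n * (n - 1) // 2
    while len(pairs) < min(touchings, max_pairs):
        i, j = rng.sample(range(n), 2)
        pairs.add((min(i, j), max(i, j)))
    return cones, sorted(pairs)
```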
Rebuttal 1:

Rebuttal: We thank all the reviewers for their detailed feedback. We have posted responses to each reviewer's individual comments. Here, we address some common concerns raised by the reviewers.

> Common concern 1: Want to see qualitative differences, i.e., actual hypotheses proposed by the models

Below are example hypothesis traces of the online, batch with refinement, and batch (without refinement) models on the Zendo task with the rule “a blue touch a red”. We will include them in our revised paper.
- Batch: 'Blocks must touch at least one other block' is proposed but is immediately falsified by an existing experiment in which a scene with no blocks touching is negative.
- Batch with refinement: ‘Blue blocks must not touch green blocks’ is first proposed and then immediately refined into ‘Blue blocks must not touch blocks of any color other than red’, since there is an existing data point where a scene with a blue touching a blue is negative. This hypothesis later gets falsified, without an opportunity to be refined since the model is not online, when a scene with no blocks touching turns out to be negative.
- Online: ‘There must be a blue block’ is proposed and added to the pool of particles. Since it has a higher prior than the other particles (it is shorter), it keeps surviving while the others get killed, despite some conforming with the data. Upon seeing a scene with a blue touching a green being negative, the particle ‘There must be a blue block’ is perturbed into ‘there must be a blue block touching a red block’.

As you can see, the batch-without-refinement model lacks the ability to propose hypotheses that are consistent with existing data. The batch-with-refinement model fixes this issue but lets many of these almost-correct hypotheses get falsified without a chance to refine them upon seeing new experiments. The online models are the only models whose hypothesis proposal is affected by the prior.
As defined by the prior, shorter hypotheses are preferred over longer hypotheses (the longer ones get killed), given that both explain the data equally well. Once shorter hypotheses get falsified by a new experiment, they get perturbed into longer hypotheses that fit the data better. While in principle the batch-with-refinement model could also do the ‘there must be a blue block’ -> ‘there must be a blue block touching a red block’ transition like the online model, the influence of the prior in hypothesis proposal and the ability to revise existing hypotheses upon seeing new experiments are what differentiate online models from batch-with-refinement models.

> Common concern 2: Baselines are not described in detail. It is hard to understand the baselines without reading prior work.

We include more baseline details below and will include them in our revised paper. In probabilistic inference terms, both batch with and without refinement correspond to importance sampling: $p(h|x_{1:t},y_{1:t}) = E_{p(h'|x_{1:t},y_{1:t})} [1[h = h']] = E_{q(h'|x_{1:t},y_{1:t})} [\frac{p(h'|x_{1:t},y_{1:t})}{q(h'|x_{1:t},y_{1:t})}1[h = h']]$. The difference between the two baselines, batch without and with refinement, lies in how $q(h'|x_{1:t},y_{1:t})$ is constructed.
- Batch: $q(h|x_{1:t},y_{1:t}) = U(LLM(x_{1:t},y_{1:t}))$, where $LLM(...)$ prompts an LLM to return a list of hypotheses.
- Batch with refinement: $q(h|x_{1:t},y_{1:t}) = U(Refined\text{-}LLM(x_{1:t},y_{1:t}, None, 0))$, where $Refined\text{-}LLM$ is defined as follows. First, let $s(h, x_{1:t}, y_{1:t}) = \frac{1}{t} \sum_{i=1}^t \mathbb{1}[h(x_i) = y_i]$; this simply scores the fraction of data points in $x_{1:t}, y_{1:t}$ on which $h$ makes correct predictions.
function $Refined\text{-}LLM(x_{1:t}, y_{1:t}, h, k)$:
&nbsp;&nbsp;&nbsp;&nbsp; $H = LLM\text{-}with\text{-}h(x_{1:t},y_{1:t}, h)$ &nbsp;# prompts the LLM to refine $h$
&nbsp;&nbsp;&nbsp;&nbsp; if $k=K$:
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return $\emptyset$
&nbsp;&nbsp;&nbsp;&nbsp; else if $\exists h' \in H, s(h', x_{1:t}, y_{1:t}) = 1$:
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return $\{h' \in H \mid s(h', x_{1:t}, y_{1:t}) = 1\}$
&nbsp;&nbsp;&nbsp;&nbsp; else:
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $h^* = argmax_{h' \in H} \, s(h', x_{1:t}, y_{1:t})$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return $Refined\text{-}LLM(x_{1:t}, y_{1:t}, h^*, k + 1)$

where $K$ is the number of refinements allowed.

> Common concern 3: The domains, while interesting and good for a first study, are relatively simple

We fully acknowledge this concern. Our work evaluates the models on traditional, well-studied Bayesian concept learning datasets to demonstrate that using natural language representation is beneficial even in domains where crafting a DSL is still possible and also to enable in-depth analyses. We believe that our work is needed to convince the Bayesian concept learning community to consider using natural language representation, and it is also an important stepping stone towards fully wielding the power of natural language. We do show the almost obvious flexibility of natural language representation in our “majority of blocks are red” experiment. We leave evaluations on more complex domains where natural language representation is a must for future work.

[1] Qiu et al. 2024, Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

[2] Wang et al. 2024, Hypothesis Search: Inductive Reasoning with Language Models.
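The importance-sampling view of the batch baselines above can be made concrete in a few lines; in this sketch the `propose` callable standing in for the LLM is a hypothetical placeholder, and with a uniform proposal over the LLM's returned hypotheses each weight reduces to the unnormalized posterior:

```python
from collections import defaultdict

def batch_posterior(hypotheses, unnorm_posterior):
    """Self-normalized importance sampling with a uniform proposal over the
    proposed hypotheses: weight each hypothesis by its unnormalized
    posterior (prior times likelihood of the data) and normalize.
    """
    weights = defaultdict(float)
    for h in hypotheses:
        weights[h] += unnorm_posterior(h)
    z = sum(weights.values()) or 1.0
    return {h: w / z for h, w in weights.items()}

# Hypothetical usage, with `propose` querying an LLM for candidate rules:
# posterior = batch_posterior(propose(data),
#                             lambda h: prior(h) * likelihood(h, data))
```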
NeurIPS_2024_submissions_huggingface
2024
No-regret Learning in Harmonic Games: Extrapolation in the Face of Conflicting Interests
Accept (spotlight)
Summary: The paper looks at how no-regret learning algorithms behave in harmonic games, which model situations where players have conflicting interests. This is different from the often-studied potential games, where interests are shared. The authors show that in continuous time, FTRL dynamics are Poincaré recurrent and do not converge. In discrete time, FTRL can cause never-ending cycles of best responses. But, by adding an extrapolation step (FTRL+), they show that learning can reach a Nash equilibrium from any starting point, ensuring constant regret and giving important insights into harmonic games.

Strengths: The paper stands out for its novel exploration of no-regret learning in harmonic games, which are less studied compared to potential games. The use of FTRL+ to ensure convergence in these games is a creative extension of existing algorithms. Furthermore, the result on Poincaré recurrence in Theorem 2 is insightful and interesting. The statements and proofs of the results are clear and rigorous, and the figures and presentation aid clarity.

Weaknesses: Empirical results beyond the matching pennies example could strengthen the work. The proofs are somewhat dense and challenging to follow. A stronger differentiation from the work by Legacci et al. would be beneficial.

Technical Quality: 4 Clarity: 4

Questions for Authors: How do the choices of parameters $\lambda_i$ and $\eta_i$ affect the performance of the FTRL+ algorithm? How might one select these parameters for different types of harmonic games? Could you expand on the types of harmonic games tested in your experiments? Are there benchmarks the algorithm could be compared against?

Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Limitations are sufficiently covered given the theoretical nature of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer aEEW, Thank you for your strong positive evaluation and encouraging remarks! We reply to your questions and comments below: > Empirical results beyond the matching pennies example could strengthen the work. Duly noted. We have included a pdf in our global rebuttal with additional figures in a series of $2\times 2 \times 2$ harmonic games that better illustrate the differences between FTRL and FTRL+. > The proofs are somewhat dense and challenging to follow. Point well taken. Being fully aware of the NeurIPS reviewing load and the time that it takes to review a paper, we tried to make our proofs as concise and straight-to-the-point as possible, but we understand that we may have overshot our original target. We will be happy to expand our proof sketch in the main part of the paper and provide more explanations and intuition along the way in the first revision opportunity. > A stronger differentiation from the work by Legacci et al. would be beneficial. To provide the necessary context, Legacci et al. [32] showed that the replicator dynamics – a special case of the FTRL dynamics with entropic regularization – are Poincaré recurrent in uniform harmonic games (a subclass of harmonic games for which the uniform distribution is always a Nash equilibrium). In the continuous-time setting, our paper's contribution is that *all* FTRL dynamics are recurrent in *all* harmonic games. The way that [32] obtained their result hinges on a special mathematical construct – the so-called *Shahshahani metric* – which is essentially "mandated" by the structure of the replicator dynamics. The key property of this metric is that incompressibility of the replicator field is equivalent to the underlying game being uniformly harmonic. 
As such, a plausible first step to extend the results and analysis of [32] would be to see if a suitably "adjusted" variant of the Shahshahani metric could provide an equivalence between incompressibility and harmonicity under a different measure; however, all our efforts to establish such an equivalence failed. Likewise, if we work only with *uniform* harmonic games, it is still not easy to find an analogue of the Shahshahani metric that provides a link between incompressibility of the associated FTRL dynamics and (uniform) harmonicity (at least, all our efforts to do so failed). Because of this, the "incompressibility" approach of [32] does not seem generalizable -- at least, not in a straightforward way.

**TLDR:** We see no way of extending the incompressibility approach of [32] to more general FTRL dynamics and/or non-uniform harmonic games.

We will be happy to include a version of this discussion when it is again possible to revise the paper, to better highlight the differences between our dualization approach and the Riemannian incompressibility approach of [32].

> How do the choices of parameters $\lambda_i$ and $\eta_i$ affect the performance of the FTRL+ algorithm? How might one select these parameters for different types of harmonic games?

Great question! The main things to keep in mind are as follows:
- For $\eta_i$: in principle, $\eta_i$ should be chosen as large as possible, as larger values of $\eta$ lead to lower regret in (16) and to faster convergence to Nash equilibrium. The bound of L307 provides an upper limit on $\eta_i$, so, from an operational standpoint, $\eta$ should be taken as high as allowed by this limit.
- For $\lambda_i$: the precise dependence is determined by Eq. (D.9) in Appendix D. We provided a rough upper bound at that point to simplify the end expression.
In principle, taking $\lambda = 1$ leads to better bounds, but this comes at a cost of two queries per iteration, so there is still a trade-off involved (but one which is harder to quantify exactly). > Could you expand on the types of harmonic games tested in your experiments? The payoffs of each game appear in the vertices of the phase portrait (in the paper as well as the rebuttal pdf). The games themselves were generated by randomly selecting a game with integer payoffs from the subspace of games satisfying the harmonicity condition of Definition 1. > Are there benchmarks the algorithm could be compared against? The most widely used benchmarks that we are aware of are the three algorithms presented in Eqs. (14a,b,c) in our paper. Of these, (14a) is FTRL, which diverges (we plotted this throughout for comparison); (14b,c) are both special cases of FTRL+, and their trajectories are almost indistinguishable, so we only used the optimistic variant of the algorithm to minimize visual clutter. --- Please let us know if you have any follow-up questions, and thank you again for your time and positive evaluation! Kind regards, The authors --- Rebuttal Comment 1.1: Comment: I thank the authors for their extensive response and clarifications. While I will leave my positive score of 7 as is, given the author's response I can increase my confidence. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for your time, input, and positive evaluation! Kind regards, The authors
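The qualitative gap between FTRL and its extrapolated variants discussed above can be reproduced in a few lines with multiplicative weights in matching pennies, the paper's running example (a minimal sketch with an illustrative step size and the optimistic-gradient correction $2g_t - g_{t-1}$; not the authors' exact FTRL+ implementation):

```python
import math

A = [[1.0, -1.0], [-1.0, 1.0]]  # matching pennies payoffs for player 1 (zero-sum)

def grads(x, y):
    # Player 1 ascends x^T A y; player 2 descends, so its payoff vector is -A^T x.
    gx = [A[i][0] * y[0] + A[i][1] * y[1] for i in range(2)]
    gy = [-(A[0][j] * x[0] + A[1][j] * x[1]) for j in range(2)]
    return gx, gy

def mwu(p, g, eta):
    # entropic FTRL step: multiplicative weights on the payoff vector g
    w = [pi * math.exp(eta * gi) for pi, gi in zip(p, g)]
    s = sum(w)
    return [wi / s for wi in w]

def run(extrapolate, steps=1000, eta=0.1):
    x, y = [0.7, 0.3], [0.5, 0.5]
    gx_prev, gy_prev = grads(x, y)
    for _ in range(steps):
        gx, gy = grads(x, y)
        if extrapolate:
            # optimistic correction: use 2*g_t - g_{t-1} instead of g_t
            x = mwu(x, [2 * a - b for a, b in zip(gx, gx_prev)], eta)
            y = mwu(y, [2 * a - b for a, b in zip(gy, gy_prev)], eta)
        else:
            x = mwu(x, gx, eta)
            y = mwu(y, gy, eta)
        gx_prev, gy_prev = gx, gy
    # L1 distance to the unique fully mixed Nash equilibrium (1/2, 1/2)
    return abs(x[0] - 0.5) + abs(y[0] - 0.5)
```

With these illustrative settings, `run(True)` converges toward the mixed equilibrium while `run(False)` spirals outward, mirroring the divergence/convergence dichotomy established in the paper.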
Summary: The paper studies the behavior of no-regret dynamics, and in particular follow-the-regularized-leader (FTRL), in harmonic games--the strategic counterpart of potential games. They establish the following main results: i) the continuous-time version of FTRL is Poincare recurrent; ii) an extrapolated version of FTRL (FTRL+), which includes the optimistic and mirror-prox variant, converges to a Nash equilibrium; iii) under FTRL+ each player attains $O(1)$ regret. Strengths: The paper provides a clear and compelling motivation for investigating no-regret dynamics in harmonic games. In particular, the decomposition of Candogan et al. shows that, together with potential games, harmonic games constitute the basic building blocks of any game. Further, harmonic games generalize zero-sum games with a fully-mixed equilibrium, the latter class having received extensive attention in the literature. The results obtained in the paper make a concrete step towards better understanding no-regret dynamics in harmonic games, which I believe is an important contribution and will be welcomed by the learning in games community at NeurIPS. I also believe that the paper may lead to interesting follow-up work on harmonic games. Overall, the main message of the paper is compelling. The writing of the paper is also exemplary and of really high quality. The main body does a great job at providing high-level sketches of the main technical ideas, and the appendix is also very well organized and polished. I did not detect any notable issues with the soundness of the claims. Weaknesses: Regarding the significance of the results, it can be argued that some of the results are somewhat incremental based on existing results. The fact that FTRL is Poincare recurrent is perhaps not that surprising conceptually given the recent paper of Legacci et al. 
which shows this for the special case of replicator dynamics; I do understand though that the extension requires some new ideas, as the authors discuss. Moreover, the fact that FTRL+ attains constant regret is also not particularly surprising. To put that result into better context, I would recommend citing the paper of Daskalakis, Fishelson and Golowich (NeurIPS 2021) that shows $O(\log^4 T)$ regret for any game; as far as I understand, the only improvement pertains to logarithmic factors in $T$, which is perhaps not very significant. But I do not believe that the points above constitute a basis for rejection. Technical Quality: 4 Clarity: 4 Questions for Authors: No further questions. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer sMJW, Thank you for your positive evaluation and your input! We reply to your questions and remarks below: > The fact that FTRL is Poincare recurrent is perhaps not that surprising conceptually given the recent paper of Legacci et al. which shows that for the special case of replicator dynamics; I do understand though that the extension requires some new ideas, as the authors discuss. From a conceptual standpoint, we agree that it is plausible to expect that FTRL is recurrent in harmonic games, given that the replicator dynamics (a special case of FTRL dynamics) are recurrent in uniform harmonic games – the result of Legacci et al. [32]. At the same time, we were genuinely surprised that the proof technique of [32] could **not** be extended along either direction – that is, to prove that FTRL is recurrent in *uniform* harmonic games, or that the replicator dynamics are recurrent in *general* harmonic games. The reason for this is that the result of [32] revolves around a special mathematical construct – the so-called *Shahshahani metric* – which is essentially "mandated" by the choice of [32] to work with the replicator dynamics. The key property of this metric is that incompressibility / preservation of volume under the replicator dynamics is equivalent to the underlying game being uniformly harmonic. As such, a first step to extend the results of [32] would be to see if a suitably "adjusted" variant of the Shahshahani metric could provide an equivalence between incompressibility and harmonicity under a different measure – however, all our efforts to establish such an equivalence failed. Likewise, if we work with only *uniform* harmonic games, it is still not easy to find an analogue of the Shahshahani metric that makes a given, non-replicator FTRL scheme incompressible (at least, all our efforts to do so failed). 
Because of this, the "incompressibility" approach of [32] does not seem generalizable – at least, not in a straightforward way. **TLDR:** Even though, conceptually speaking, the recurrence of FTRL may not seem surprising, the failure of the incompressibility approach of [32] in more general settings and the apparent need to take a radically different approach does seem surprising (at least to us), and we believe that it carries important insights as to which techniques are more suitable for the analysis of general harmonic games. > Moreover, the fact that FTRL+ attains constant regret is also not particularly surprising. To put that result into better context, I would recommend citing the paper of Daskalakis et al, that shows $\log^4 T$ regret for any game. First off, thanks for bringing up the paper of Daskalakis et al! We should have cited and discussed this paper, this was an omission on our end. Now, as to the significance of the polylog factor: if $T$ ranges between $10^3$ and $10^6$ (a reasonable range for online learning applications), the factor $\log^4 T$ ranges roughly between $2000$ and $35000$. Thus, ceteris paribus, shaving off the $\log^4 T$ factor could result in a gain between $3$ and $4$ orders of magnitude, depending on the context. We believe that this gain can be significant in several applications – though we freely acknowledge that there is a certain degree of subjectivity as to which gains are considered significant in which setting. Going beyond this point, we should clarify that, in our view, the true importance of the constant regret guarantee of FTRL+ is the associated path length bound (D.6) which, in turn, plays a crucial part in establishing the convergence of FTRL+ to equilibrium. This would not be possible with a non-constant bound (at least, not without a fairly different technical approach), a fact which, from our standpoint, further adds to the significance of an $\mathcal{O}(1)$ versus an $\mathcal{O}(\mathrm{polylog} T)$ bound. 
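The figures quoted above can be verified directly (assuming natural logarithms, the usual convention in regret bounds):

```python
import math

# Size of the polylog factor over a typical online-learning horizon.
for T in (10**3, 10**6):
    print(f"T = {T:>9}: log^4 T = {math.log(T)**4:.0f}")
# log^4(1e3) ≈ 2277 and log^4(1e6) ≈ 36431, i.e. roughly the
# 2000-35000 range cited in the rebuttal.
```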
--- Please let us know if you have any follow-up questions, and thank you again for your time and positive evaluation! Kind regards, The authors --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. I have no further questions, and I maintain my positive evaluation. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for your time, input, and positive evaluation! Kind regards, The authors
Summary: This paper studies multi-agent no-regret learning dynamics in general harmonic games, which is the strategic complement of potential games. The paper's main contributions are the convergence properties of the family of "Follow-the-regularized-leader" (FTRL) algorithms and its variants in general harmonic games, significantly extending previous results in two-player zero-sum games with fully mixed equilibrium and uniformly harmonic games. Specifically, the main results are (1) the continuous-time dynamics of FTRL are Poincaré recurrent, (2) the discrete-time dynamics of FTRL diverges, but FTRL with an extrapolation step (called FTRL+) guarantees $O(1)$ individual regret and global asymptotic last-iterate convergence/point convergence to a Nash equilibrium. These results show that potential and harmonic games are complementary not only from the strategic but also from the dynamic viewpoint. Strengths: 1. This paper is very well-written and easy to follow. I appreciate that the main body clearly explains the main ideas, and the appendix provides rigorous and detailed proof. 2. The problem of no-regret learning dynamics in games is relevant. This paper contributes to the area by establishing the previously unknown convergence properties of the FTRL dynamics in general harmonic games, including Poincaré recurrence of continuous-time FTRL, last-iterate convergence, and constant regret of FTRL+. The results and techniques in this paper shed light on the further development of learning in harmonic games. Weaknesses: I do not see any major weakness in the paper. Minor comments: Theorem 3/4: $m_i$ is used to choose the step size but has not been defined? The definition is in the appendix but a pointer should be given in the main body. Line 314: "Similar bounds have only been established for optimistic methods in two-player zero-sum games [25]". 
[25] established constant regret bounds for all variationally stable games, a class of multi-player games that includes two-player zero-sum games. This should be acknowledged. [25] Hsieh, Yu-Guan, Kimon Antonakopoulos, and Panayotis Mertikopoulos. "Adaptive learning in continuous games: Optimal regret bounds and convergence to nash equilibrium." In Conference on Learning Theory. 2021 Technical Quality: 4 Clarity: 3 Questions for Authors: 1. $m_i$ is used to choose the step size for convergence of FTRL+. Is there an efficient way to estimate an upper bound of $m_i$? 2. This paper left the rate of convergence for FTRL+ as an open question. I would like to know if similar results hold for optimistic online mirror descent (OOMD)-type algorithms in harmonic games. I think proving convergence rates for OOMD algorithms, especially OGDA, might be more promising than FTRL+. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer gj66, Thank you for your strong positive evaluation and encouraging remarks! We reply to your questions and comments below: > Theorem 3/4: $m_i$ is used to choose the step size but has not been defined? The definition is in the appendix but a pointer should be given in the main body. $m_i$ is actually defined in L140, just after the definition of a harmonic game, but we understand that it is easy to miss. We will nest it inside Definition 1 to make it easier to find – thanks for pointing this out! > "Similar bounds have only been established for optimistic methods in two-player zero-sum games [25]". [25] established constant regret bounds for all variationally stable games, a class of multi-player games that includes two-player zero-sum games. This should be acknowledged. Fair point. This was an omission on our end, we will rectify it accordingly, thanks for pointing it out! > $m_i$ is used to choose the step size for convergence of FTRL+. Is there an efficient way to estimate an upper bound of $m_i$? Good point! In general, we do not see an easy way of obtaining an upper bound for $m_i$ based on *local* knowledge alone. However, as far as the step-size tuning is concerned, we believe that an AdaGrad-type step-size would obviate the need to estimate $m_i$, as it obviates the need to know the Lipschitz modulus of the players' payoff functions in zero-sum games. As we state in the paper's concluding remarks, this is a very fruitful direction for future research, and one which we intend to undertake in future work. > This paper left the rate of convergence for FTRL+ as an open question. I would like to know if similar results hold for optimistic online mirror descent (OOMD)-type algorithms in harmonic games. I think proving convergence rates for OOMD algorithms, especially OGDA, might be more promising than FTRL+. 
We are not aware of *any* convergence rates for harmonic games (or even asymptotic convergence results for that matter); to the best of our knowledge, our paper provides the first convergence result in the literature for general harmonic games. Now, we agree that optimistic GD might be easier to analyze than FTRL+ in terms of rates, and it would be a natural first step. However, when the players' regularizer function is steep (in the sense that the induced choice map takes values in the interior of the strategy simplex), optimistic FTRL and optimistic mirror descent are equivalent, so all the difficulty would already be present in the OptMD case. Still, beyond the "steep" regime, it may well be that the "primal-dual" formulation of mirror descent (versus that of dual averaging) could be more amenable to a rate analysis. At the current state of our knowledge for learning in harmonic games, it is very difficult to tell without doing the full analysis. --- Thank you again for your time and positive evaluation – and please let us know if you have any follow-up questions! Kind regards, The authors --- Rebuttal Comment 1.1: Title: Response by the Reviewer Comment: Thank you for the response! I agree that developing algorithms that require no knowledge of $m_i$ is an important question. I will keep my score. Additional Comment: I think providing more examples of Harmonic games would be helpful. Are there other natural families of games (beyond that obtained by the decomposition or 2p0s game with fully mixed NE) that are harmonic? --- Reply to Comment 1.1.1: Comment: Thank you for your continued input and support! Regarding your question: other natural families of games that are harmonic include the class of cyclic games (Hofbauer & Schlag, 2000), the Dawkins variants of the Battle of the Sexes (Smith & Hofbauer, 1987), crime-deterrence games (Cressman & Morrison, 1998), etc. 
[To be clear, these families of games predate the introduction of the term "harmonic game" in the literature (which was due to Candogan et al., 2011), but they were all seen to be harmonic once the notion was introduced] We will be sure to include these examples in the first revision opportunity, thanks for bringing up the question! Kind regards, The authors --- ### **References** - R. Cressman and W.G. Morrison. *On the evolutionary dynamics of crime.* The Canadian Journal of Economics/Revue canadienne d’Economique, 31(5):1101–1117, 1998. - J. Hofbauer and K.H. Schlag. *Sophisticated imitation in cyclic games.* Journal of Evolutionary Economics, 10(5):523–543, 2000. - J.M. Smith and J. Hofbauer. *The “battle of the sexes”: A genetic model with limit cycle behavior.* Theoretical population biology, 32(1):1–14, 1987.
Summary: The contributions of this paper are two-fold: i) They prove that continuous FTRL dynamics for general harmonic games are Poincaré recurrent and hence do not converge. Their result generalizes the original result of Mertikopoulos, P., Papadimitriou, C. H., and Piliouras, G., 'Cycles in adversarial regularized learning,' In SODA ’18: Proceedings of the 29th annual ACM-SIAM Symposium on Discrete Algorithms, 2018, for zero-sum games with an interior equilibrium (as a trivial example of harmonic games), and the result of Legacci, Davide, Panayotis Mertikopoulos, and Bary Pradelski, 'A geometric decomposition of finite games: Convergence vs. recurrence under no-regret learning,' arXiv preprint arXiv:2405.07224 (2024), for uniform harmonic games to general harmonic games by volume-preserving flow arguments. ii) Moreover, they show that the regret of general harmonic games for the class of Extrapolated FTRL algorithms (discrete-time dynamics) is constant in time ($T$). Strengths: - The contributions of this paper are solid and interesting. This paper addresses fundamental problems in harmonic games. - I enjoyed reading the paper. It is well-written, especially the introduction to harmonic games in the appendix and the new viewpoint of Mixed characterization of harmonic games and its role on upper bounding the path length of the no-regret dynamics. Weaknesses: - Just some missing citations on no-regret learning for games, e.g., -- Daskalakis, Constantinos, Alan Deckelbaum, and Anthony Kim. "Near-optimal no-regret algorithms for zero-sum games." Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms. \ -- Chen, Xi, and Binghui Peng. "Hedging in games: Faster convergence of external and swap regrets." Advances in Neural Information Processing Systems 33 (2020). \ -- Piliouras, Georgios, Ryann Sim, and Stratis Skoulakis. "Beyond time-average convergence: Near-optimal uncoupled online learning via clairvoyant multiplicative weights update." 
Advances in Neural Information Processing Systems 35 (2022). \ ... __Minor Suggestions__ - Please emphasize in the claims that the regret bounds entailed are constant in $T$ and not potentially in the dimension of the action sets $\mathcal{A}$, since $H_i$ is not constant in $|\mathcal{A}_i|$. - Regarding the notation for $\nu_i(x)$, maybe consider changing it to $\nu_i(x_{-i})$ to improve readability. - In line 268, $x_{n + 1}$ instead of $x_{n}$? - In line 277, "and" before "which" seems to be a typo. Technical Quality: 3 Clarity: 3 Questions for Authors: I do not have any particular question (as the results shown in this paper are not unexpected) except that could authors provide some intuition on the conjecture that convergence to Nash Eq. of no-regret learnings for general harmonic games would be linear beyond the work of Wei, C.-Y., et al. Linear last-iterate convergence in constrained saddle-point optimization. In ICLR ’21: Proceedings of the 2021 International Conference on Learning Representations, 2021. ? (Non-asymptotic version of Theorem 4) Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Not applied. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 1Wrk, Thank you for your strong positive evaluation and encouraging remarks! We reply to your questions and comments below: > Just some missing citations on no-regret learning for games [suggestions follow] Thanks for the pointers, we were aware of some but not all of the references you provided -- we will discuss them accordingly. > Please emphasize in the claims that the regret bounds entailed are constant in $T$ and not potentially in the dimension of the action sets $A$, since $H$ is not constant in $A$. Duly noted - we will note this dependence explicitly to avoid any misunderstandings. Thanks for bringing this up! > Regarding the notation for $v_i(x)$, maybe consider changing it to $v_i(x_{-i})$ to improve readability. You mean in order to make explicit the non-dependence on $x_i$, correct? This is an excellent suggestion - we will run through the paper with a fine-toothed comb to make sure it does not create any clashes of notation and adjust things accordingly. Thanks! > [Typos] Will fix those, many thanks for the careful read! > Could the authors provide some intuition on the conjecture that convergence to Nash Eq. of no-regret learnings for general harmonic games would be linear beyond the work of [Wei et al.]? (Non-asymptotic version of Theorem 4) Our intuition comes from the fact that the harmonic measure always gives rise to a fully mixed equilibrium. Since this *specific* equilibrium of the game lies in the interior of the simplex, the weighted sum in the definition of a harmonic game formally looks quite similar to the condition needed to establish metric subregularity in the work of [Wei et al.]. Our conjecture reflects our (optimistic :-) ) belief that this formal similarity could be leveraged to yield a linear convergence rate result in the spirit of [Wei et al.]. 
However, at this stage, there are too many unknown variables - and metric subregularity can be tricky in problems without a convex structure to rely on - so, for the moment, this is purely conjectural. --- Please let us know if you have any follow-up questions, and thank you again for your time and positive evaluation! Regards, The authors --- Rebuttal 2: Title: Discussion on Log T for general sum games Comment: Thank you very much for your detailed reply, especially regarding the intuition behind possible linear convergence. __This paper makes a solid and clear contribution, which is not surprising as harmonic games are a potential generalization of zero-sum games. Therefore, it was expected to observe constant regret. As a result, I would like to keep my score (accept) and thank the authors for their good work.__ Lastly, I would like to ask the authors to please add a discussion on how their work improves the log/poly log $T$ for general-sum games, referencing the works of: - Piliouras, Georgios, Ryann Sim, and Stratis Skoulakis. "Beyond time-average convergence: Near-optimal uncoupled online learning via clairvoyant multiplicative weights update." Advances in Neural Information Processing Systems 35 (2022). - Daskalakis, Constantinos, Maxwell Fishelson, and Noah Golowich. "Near-optimal no-regret learning in general games." Advances in Neural Information Processing Systems 34 (2021): 27604-27616. - Farina, Gabriele, et al. "Near-optimal no-regret learning dynamics for general convex games." Advances in Neural Information Processing Systems 35 (2022): 39076-39089. to constant regret in $T$ only for harmonic games as a special class of general-sum games. This will clarify the scope of the contribution of this work and align it more closely with the existing literature. --- Rebuttal Comment 2.1: Title: Thank you Comment: Thank you for the added pointers on $\log T$ regret, they are very helpful! 
Thank you also for your time, your continued input and support, all greatly appreciated. Kind regards, The authors
Rebuttal 1: Rebuttal: Dear reviewers, dear AC, We are grateful for your time, comments, and positive evaluation! To streamline the discussion phase, we replied to each of your questions and comments in a separate rebuttal below, and we will integrate all applicable points in the next revision opportunity. To better illustrate the differences between FTRL and FTRL+, we have included in this global reply a pdf with further experiments for $2\times 2\times 2$ games, with different initializations in different harmonic games. Thank you again for your input and encouraging remarks, and we are looking forward to continuing this constructive exchange during the discussion phase. Kind regards, The authors Pdf: /pdf/d61818131ca68668d09debe0b70866a8bf79a3b7.pdf
NeurIPS_2024_submissions_huggingface
2024
HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data
Accept (poster)
Summary: The paper proposes an attention based early-fusion method for multi-modal learning. Specifically, the paper uses perceiver-style iterative updates to a latent vector using a fusion layer. The fusion layer sequentially updates the latent vector using cross-attention with each of the modalities. The approach supports cross-modal interactions and provides a way to handle missing modalities at inference time. The authors compare the approach against existing baselines for fusing biomedical modalities and show promising results on the survival prediction task. Strengths: The paper is well written and provides good context on existing methods. The proposed approach is simple and explained clearly. The iterative style fusion is useful as it can support cross-modal interactions without exploding the embedding dimension and can also handle missing modalities at inference time. Weaknesses: It's unclear how much the iterative fusion is helpful as the parameters are shared. Can the authors provide some more information about when the iterative fusion is useful and how many iterations are typically needed? The proposed iterative style updates to the latent vector seem similar to an LSTM style aggregation. In many VLM and multimodal papers (https://arxiv.org/pdf/2405.09818, https://arxiv.org/pdf/2304.08485), the modality specific tokens are projected to a shared latent space and are aggregated through self-attention across the shared space. This provides more opportunity for cross modal interactions than iterative cross-attention. Have the authors compared with such approaches? The results are shown for a specific task (survival prediction) and specific modalities where the datasets are small and models are prone to overfitting. It's unclear if this approach works well for problems which need more complex interactions and where dataset sizes are larger. 
Technical Quality: 3 Clarity: 3 Questions for Authors: It's unclear how the features from different patches in the WSI are aggregated. What is the embedding dimension? Do the early-fusion models have more parameters? If so, are they more prone to overfitting on these datasets? Since the latent array is updated sequentially with information from different modes, how much does the order of fusing modalities impact the result? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Given the weaknesses and open questions, I'm basing my decision on these, but would be open to changing it post-rebuttal once the questions and weaknesses are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Comments on weaknesses:** **Usefulness of iterative fusion:** In short, having multiple fusion layers (i.e., iterations) helps to prevent overfitting. During development, we found that the modality-specific updates only learn how to operate on the latent embedding $S$ directly after the previous modality update if you only have one fusion layer. Increasing the number of fusion layers means that $S$ is updated with all modalities multiple times such that information from each modality becomes part of the context vector for every other modality. We share the weights between the fusion layers to prevent layer specificity such that the modality-specific updates become context-agnostic. This is why the model also delivers robust performance when skipping a modality at inference time (Table 2). Regarding the number of fusion layers needed, we ran some initial ablation studies during development, where we did not find a clear pattern on the exact number of fusion layers needed, but found that $d>1$ was an effective regulariser. **Cross-modal interactions & other architectures:** Using the shared latent space projection in a “monolithic” architecture is something that we originally considered and decided against for three main reasons: - **Don’t align what shouldn’t be aligned:** First, we would like to point out a crucial distinction of multimodal modelling in the Vision & Language (V&L) domain and on biomedical modalities beyond V&L. - **Vision & Language** models rely on the assumption that the same semantic concept is explicitly expressed across modalities (e.g., an image and its description). In other words, both modalities follow the same training distribution and V&L models consequently align the latent representations to reflect this distribution across both modalities, such as through the contrastive objectives in the linked Chameleon paper. - For **Biomedical modalities beyond V&L**, we don’t want to make this assumption. 
Often, the signal from different modalities (e.g., morphological features, genomics, and transcriptomics) follows different distributions and we want to maintain this separation to build a more predictive model. If we used projections into the same latent space combined with contrastive objectives, this would align the signal from the separate distributions. This defeats the purpose of cross-modal learning for these modalities as it becomes more difficult to learn from both distributions. - **We have benchmarked** several approaches which leverage projection and alignment in the same latent space. The MCAT baseline also aligns both modalities directly through the “genomics-guided cross-attention” where the dot product of the WSIs and the multiomic data is minimised. Another example is the Perceiver (Early Fusion) baseline which is first combining the input modalities to a common tensor representation and then passes them through multiple cross-and self-attention layers. HEALNet **consistently outperforms** these methods (Table 1). - **Scaling up to many modalities:** The iterative attention mechanism alleviates the use of fusion operators that can result in high-dimensional latents (e.g., Kronecker product [1]). This allows for combining many modalities while preserving the structural information of each as well as learning cross-modal interactions. - **Improved handling of missing modalities:** Another benefit of an iterative architecture instead of a monolithic one is that each layer pass does not require all modalities at the same time. **Larger datasets:** We would like to point out that for cancer pathology tasks The Cancer Genome Atlas (TCGA) contains by far the largest patient cohorts available in the public domain with each cancer cohort being over 500GB in size (L192-198). 
To show HEALNet’s performance on larger sample sizes, we ran some additional experiments comparing HEALNet to a subset of baselines (due to resource constraints in the rebuttal period) on a disease classification (ICD9) and patient mortality (MORT) prediction tasks of MIMIC-III (n=32,616). The modalities in MIMIC are tabular EHR features and time series on vital signs and test results. The results are shown in the general comment of the rebuttal. ### **Responses to questions** - **[…]unclear how the features from different patches in the WSI aggregated? […]** - The final embedding dimension is a 2D Vector of size `num_patches x encoding_dim` . The `encoding_dim` refers to the dimensions of the Kather100k pre-trained encoder, which results in a 2048-dimensional feature vector per patch (see L195). - **Do the early-fusion models have more parameters? […]** - This highly depends on the early fusion method used. A commonly used early fusion methods is the Kronecker product [1] which is constructing a high-dimensional input tensor, meaning that models using Tensor fusion would require a higher number of parameters in the earlier layers than HEALNet. However, other early fusion models use aggregation methods (e.g., concatenation and averaging across excess channels, as done in the Perceiver). In these cases, the parameters required in the early layers would be lower than HEALNet’s, but notably this approach also collapses an entire dimension, which may remove salient signal. - **[…] how much does the order of fusing modalities impact the result?** - During development of HEALNet, we performed preliminary analyses on the influence of the order of modalities on the downstream performance: We did not find any significant difference. Since all modalities become the context of each other’s update as long as $d>1$, the order of the modalities does not matter. References: [1] Zadeh, A. et al (2017). Tensor fusion network for multimodal sentiment analysis. 
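For concreteness, here is a minimal numpy sketch of the iterative fusion scheme described in the rebuttal above. This is our illustration, not the authors' implementation: the shapes, the single-head attention, and the residual update are all assumptions. It shows the three properties discussed: each modality keeps its own Q/K/V projection, the fusion-layer weights are shared across iterations, and a missing modality can simply be skipped at inference time.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d_lat = 16
S0 = rng.normal(size=(8, d_lat))             # shared latent array
modalities = {                               # structurally different inputs
    "wsi":   rng.normal(size=(32, 24)),      # e.g. 32 patch embeddings, dim 24
    "omics": rng.normal(size=(1, 40)),       # e.g. one tabular feature vector
}
# One Q/K/V projection per modality (no flattening to a common shape needed);
# the same weights are reused by every fusion layer (shared-weight iterations).
theta = {name: (rng.normal(size=(d_lat, d_lat)) / np.sqrt(d_lat),          # W_q
                rng.normal(size=(X.shape[1], d_lat)) / np.sqrt(X.shape[1]),# W_k
                rng.normal(size=(X.shape[1], d_lat)) / np.sqrt(X.shape[1]))# W_v
         for name, X in modalities.items()}

def fusion_pass(S, inputs):
    """One fusion layer: latent queries attend to each available modality."""
    for name, X in inputs.items():
        Wq, Wk, Wv = theta[name]
        Q, K, V = S @ Wq, X @ Wk, X @ Wv
        S = S + softmax(Q @ K.T / np.sqrt(d_lat)) @ V  # residual cross-attention
    return S

S = S0
for _ in range(3):                           # d > 1 fusion iterations, shared weights
    S = fusion_pass(S, modalities)
print(S.shape)                               # (8, 16); feed to a task head

# Missing-modality inference: skip the absent modality's update entirely.
S_partial = fusion_pass(S0, {"omics": modalities["omics"]})
```

Because every modality's update conditions on a latent that already contains the other modalities' signal (for d > 1), the order of the updates matters little, which matches the authors' observation in the rebuttal.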
--- Rebuttal Comment 1.1: Comment: >having multiple fusion layers (i.e., iterations) helps to prevent overfitting Thanks, I think it makes sense that this helps reduce the influence of the order of modalities. The choice of weight sharing makes sense with this design; I think it's worth highlighting this in the paper. >Don’t align what shouldn’t be aligned While I do agree the VL models are aligned, the idea of projecting to a shared representation space in which interactions happen is also being used in the cross-attention interaction, although the iterative fusion does reduce dependency slightly. If the modalities are complementary it's unclear why a concatenation approach doesn't work as well, while it may not handle missing modalities. Not sure I understand the perceiver early fusion setup well. Are the tokens from different modalities concatenated and passed through iterative fusion? Wouldn't it have similar parameters to HEALNet then (the Q,K,V projection dims remain similar)? > The final embedding dimension is a 2D Vector of size num_patches x encoding_dim This is indeed a large sequence length. --- Rebuttal 2: Comment: Thank you very much for responding to our rebuttal! Per your suggestion, we will further highlight the role of having multiple fusion layers in the manuscript. We briefly wanted to clarify the open points below: > “If the modalities are complementary it's unclear why a concatenation approach doesn't work as well, while it may not handle missing modalities.” We agree that, in principle, the concatenation approach would work well if all modalities had the same tensor dimensions. However, if we are processing structurally different modalities (e.g., 1D vs 2D vs 3D tensors), one would need to flatten or aggregate the excessive dimensions to ensure shape consistency for the projection. This leads to two problems: 1) we may end up with excessively large 1D tensors which 2) remove spatial signal if we, e.g., flatten an image representation. 
In practice, we cannot always assume shape consistency between our modalities and the presence of all modalities for every sample (especially in clinical practice) - both of these aspects informed our design choices. We touch on these aspects in L89 & L111-114. > “Are the tokens from different modalities concatenated and passed through iterative fusion? Wouldn't it have similar parameters to HEALNet then (the Q,K,V projection dims remain similar)?” Contrary to the Perceiver, in HEALNet each modality has its *own* Q,K,V projection (as indicated in Figure 1), which allows us to keep the input data intact irrespective of the tensor dimensions. That is, HEALNet does not need to aggregate excessive dimensions, which helps to maintain the structural signal of each modality. --- Rebuttal Comment 2.1: Comment: Thanks to the authors for addressing my questions. The method provides a simple way to aggregate information from different modalities without losing their structural information. However, it's unclear how the results shown here translate to other problems. Based on this I am updating my score from 4 to 5.
Summary: The authors present HEALNet as an early fusion approach for integrating different data modalities. HEALNet utilizes an end-to-end training process with an additive method for combining modalities, rather than handling them in parallel. This strategy enables HEALNet to scale and adapt effectively to datasets of varying sizes and characteristics without requiring explicit pre-training. Strengths: - The idea of the authors to implement an additive approach to combine modalities instead of processing them in parallel is promising. Weaknesses: - The methodology is not clearly written and is difficult to understand. - The presentation also falls short. For instance, Figure 3 features a bar plot that is unreadable. - Additionally, the color scheme in the attention map seems unusual; typically, a blue-red gradient is used as it is more intuitive. - The visualization analysis is lacking. There is only one figure displaying the model's inspection capabilities, which features a WSI attention map. - The iterative attention mechanism, with multiple steps and updates, can be computationally expensive. This could be particularly challenging when dealing with very high-dimensional data or when scaling up to larger datasets. - There is no ablation study regarding the number of fusion layers. The number of fusion layers is a critical parameter that can significantly affect the computational efficiency of the model. Technical Quality: 3 Clarity: 2 Questions for Authors: - Could you include metrics such as Floating Point Operations (FLOPs) and the number of trainable parameters to provide a more comprehensive understanding of the computational cost and efficiency of the HEALNet approach?
- We have seen in recent papers [Transcriptomics-guided Slide Representation Learning in Computational Pathology](https://arxiv.org/html/2405.11618v1) that in cases where representations come from different sources or modalities, the different representations are aligned in a meaningful way to facilitate effective learning and integration. How does your model handle discrepancies in feature scales or semantic differences between modalities? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The methodology needs to be clearer. The dimensions of the cross-attention layers are not clearly defined, which complicates understanding of how these layers align and integrate features from modalities of differing dimensions. Additionally, the visualizations and illustrations require significant improvement, as current figures, such as bar plots and attention maps, are difficult to interpret and hard to read. Enhancing these visual elements and providing clearer explanations of the methodology will improve the quality of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Comments on weaknesses** - **“methodology is not clearly written and difficult to understand”**: We would like to point to the step-by-step methodology, which is discussed in detail in Section 3, illustrated in Figure 1, and presented as pseudocode in Appendix A. We believe this to be a very detailed description of HEALNet. Nevertheless, we would be grateful if the reviewer could specify which parts of the methodology are unclear, so we can further improve the clarity of our manuscript. - **“[…] Figure 3 features a bar plot that is unreadable”**: We would kindly like to point out that the aim of Figure 3 is to provide a high-level *illustration* of the general inspection capabilities of HEALNet that help users validate the model, as well as understand the results. We mention this in the caption of Figure 3. In this work, Figure 3 is meant to showcase the capabilities of HEALNet, rather than to deep dive into the biology of UCEC. The size/formatting of Figure 3 is due to page limitations. Still, Figure 3 is of sufficiently high resolution that the individual feature names can be read in the PDF version of the paper. Nevertheless, we appreciate the feedback and will provide a larger version of the image in the Appendix. - **“iterative attention mechanism […] can be computationally expensive”**: The computational cost of the iterative attention mechanism is alleviated by sharing weights across the fusion layers, which makes the number of parameters independent of the number of fusion layers. It is worth noting that the existing tasks of the paper already deal with very high-dimensional datasets (see L181 onwards). To show that HEALNet also works for datasets with a larger sample size, we ran some additional experiments comparing HEALNet to relevant baselines on disease classification (ICD9) and patient mortality (MORT) prediction tasks of MIMIC-III (n=32,616). The table below shows the AUC and Macro AUC for the respective tasks.
The modalities in MIMIC are tabular electronic health record features and time series on vital signs and test results. | Model/Dataset | ICD9 (n=32,616) [AUC] | MORT (n=32,616) [MacroAUC] | | --- | --- | --- | | Uni-modal (EHR) | 0.731 ± 0.023 | 0.658 ± 0.001 | | Uni-modal (Time Series) | 0.700 ± 0.013 | 0.715 ± 0.016 | | Perceiver (Early) | 0.733 ± 0.028 | 0.723 ± 0.015 | | Porpoise (Late) | 0.628 ± 0.020 | 0.617 ± 0.015 | | HEALNet | 0.767 ± 0.022 | 0.748 ± 0.009 | - **“[...] number of fusion layers is a critical parameter that can significantly affect the computational efficiency of the model”:** As previously mentioned, the number of fusion layers does not affect the computational efficiency of the model due to the implementation of weight sharing. ### **Responses to questions** - **Handling discrepancies in feature scales and semantic differences between modalities:** We would like to point out a crucial distinction between multimodal modelling tasks in the “standard” Vision & Language (V&L) setting and ones over Biomedical modalities (beyond V&L): - Current **Vision & Language** approaches rely on the assumption that the same semantic concepts are expressed within the modalities (e.g., an image and its text description). Moreover, these semantic concepts are typically explicit (e.g., an image with a bird and a description that includes ‘bird’). Therefore, both modalities follow the same training distribution. In turn, V&L approaches align the latent representations to reflect that distribution across both modalities, and leverage these explicitly expressed semantic concepts through contrastive objectives, such as the one in the linked Tangled paper [1]. - For **Biomedical modalities beyond V&L**, we generally cannot make such assumptions. Oftentimes, the signal from different modalities (e.g., morphological features, genomics, and transcriptomics) follows different distributions, and we want to maintain that separation to build a more predictive model.
If we used projections into the same latent space combined with contrastive objectives, this would likely mute the signal from the separate modalities. This defeats the purpose of cross-modal learning for these modalities, as it becomes more difficult to learn from both distributions. It is worth noting that while we separately align the latent representation to each modality, we never align the modality representations with each other, to prevent such a muting effect. Moreover, on a broader note, if we were to make a similar assumption as in [1], such an approach would be applicable and scale only to two modalities - which is the case in [1], where the authors learn from WSI and gene expression profiles only. HEALNet, on the other hand, is robust and can readily scale to many different modalities of any size, as shown in the MIMIC experiments above. - While we appreciate and thank the reviewer for the reference [1], we would like to point out that paper [1] was published just three days before the NeurIPS deadline. Nevertheless, we have benchmarked some approaches that perform a projection and alignment in the same latent space. Specifically, the MCAT [2] baseline also aligns both modalities directly through the “genomics-guided cross-attention”, where the dot product of the WSIs and the multiomic data is minimised. Another example is the Perceiver (Early Fusion) [3] baseline, which first combines the input modalities into a common tensor representation and then passes it through multiple cross- and self-attention layers. Note that HEALNet consistently outperforms both of these methods, as shown in Table 1 and Section 5. [1] Jaume et al. (2024) Transcriptomics-guided Slide Representation Learning in Computational Pathology [2] Chen, R.J. et al. (2021). Multimodal co-attention transformer for survival prediction in gigapixel whole slide images. [3] Jaegle, A., et al. (2021) Perceiver: General perception with iterative attention.
--- Rebuttal Comment 1.1: Comment: I have reviewed the authors' rebuttal and appreciate the clarifying comments. However, my question regarding the inclusion of metrics such as Floating Point Operations (FLOPs) and the number of trainable parameters remains unanswered. So does my question about the number of iterations required. Therefore, I intend to maintain my score. --- Rebuttal 2: Comment: Thank you for responding to our rebuttal. In our initial response we briefly discuss the computational complexity of HEALNet, and specifically the role of the fusion layers (as per the last weakness point). We also added new experiments to show that HEALNet is able to scale seamlessly to larger datasets, where the computational cost of the iterative attention mechanism is alleviated by sharing weights across the fusion layers. This makes the number of parameters independent of the number of fusion layers (i.e., iterations). More formally, for any multimodal model, it is important to scale with both the number of samples $n$ and the number of modalities $m$. For instance, MCAT (our most competitive baseline) is natively designed for two modalities. To scale this approach to $m>2$, one would need to calculate the modality-guided cross-attention for all unique pairwise combinations ${m \choose 2} = \frac{m(m-1)}{2}$, which is $\mathcal{O}(m^2)$. A **key advantage of HEALNet’s** sequential setup is that it scales linearly. Since the cross-attention and SNN layers used within the fusion layers scale with $\mathcal{O}(n)$, each fusion layer has a complexity of $\mathcal{O}(mn)$, such that the runtime then mainly depends on the number of fusion layers $d$. Note that in HEALNet we consider $d$ to be a hyperparameter, with the values provided in Appendix E. We plan to post the number of parameters of HEALNet and the baselines on the KIRP dataset later today.
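The pairwise vs. sequential scaling argument can be illustrated with a toy count of fusion updates per layer (illustrative helper names, not from the paper):

```python
from math import comb

def pairwise_updates(m):
    # MCAT-style pairwise co-attention: one update per unique modality pair.
    return comb(m, 2)  # m(m-1)/2, i.e. O(m^2)

def sequential_updates(m):
    # HEALNet-style sequential fusion: one update per modality per fusion layer.
    return m  # O(m)

# Identical fusion cost classes at m = 2; the gap grows quadratically beyond that.
assert pairwise_updates(6) == 15 and sequential_updates(6) == 6
```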
We acknowledge that the actual runtime will depend on several other factors, such as the size of the modality-specific embeddings within each model. Since we did not measure FLOPs at this time, we provide a Big-O analysis of our method with respect to the other baselines. Assuming we use the same encoders for each model, the time complexity for each fusion operation is summarized below. | Model | Time Complexity with number of modalities $m$ and samples $n$ | | --- | --- | | Perceiver (early, concat) | $\mathcal{O}(nm)$ | | MultiModN (sequential) | $\mathcal{O}(nm)$ | | MCAT (intermediate) | $\mathcal{O}(m^2n)$ | | Porpoise (late, Kronecker) | $\mathcal{O}(m^2n)$ | | HEALNet | $\mathcal{O}(nm)$ | We hope that this clarifies the remaining concerns and will be reflected in your updated score. --- Rebuttal Comment 2.1: Comment: I have carefully considered the authors' rebuttal and appreciate the additional clarity provided. As a result, I am updating my score to 5. --- Reply to Comment 2.1.1: Comment: Thanks for acknowledging our response and your positive outlook - we are glad that we were able to further clarify our work. We would appreciate it if you could update your increased positive score, as stated in your last comment, in the system before the deadline of the discussion period.
Summary: This paper presents HEALNet, a method for end-to-end multimodal fusion of mixed-type biomedical data. In contrast to methods like feature concatenation or Kronecker product fusion, HEALNet employs an iterative cross-attention structure that operates on the raw input modalities, representing a hybrid between early and intermediate-fusion approaches. This allows for the necessary cross-modal interaction, while natively handling missing modalities at test time and aiding interpretability (by operating on raw data rather than embeddings). HEALNet outperforms existing state-of-the-art multimodal fusion methods on survival analysis from histopathology and multi-omic data. Strengths: - The problem motivation is very clear, and the explanation of related prior work is thorough. Care is taken to properly inform the reader of relevant background so that the authors can demonstrate why the proposed method is unique. - The structure and logical flow of the paper is excellent. Tables and illustrations are of high quality throughout. - The proposed method is interesting and is designed to solve domain-specific problems in biomedical multimodal fusion such as handling missing modalities and interpretability, while still enabling powerful cross-modal learning. - The experiments appear to be soundly conducted, and results are convincing against competitive relevant baselines. Weaknesses: - Treatment of hyperparameters is somewhat confusing. It is unclear if Bayesian hyperparameter optimization (HPO) was applied to all methods or just to HEALNet. If HPO was applied to all methods, then why do their hyperparameters not appear in Table 5? These are important details to ensure a fair comparison between methods. - The latent update in Equation 2 is unclear, and this seems to be a critical component of the proposed method. What *exactly* is this function $\psi(\cdot)$? 
Also, there is an additional parameter $\rho$ to $\psi(\cdot)$ in the pseudocode that is not present in Equation 2. This needs to be clarified. Technical Quality: 4 Clarity: 4 Questions for Authors: - Was Bayesian HPO applied to all methods or only HEALNet? Why does only HEALNet appear in Table 5? Please clarify this and which exact hyperparameters underwent optimization; for instance, it is confusing that all shared hyperparameters appear to be identical except the L1 regularization term. - What exactly is the update function $\psi(\cdot)$? What is the parameter $\rho$ in the pseudocode, and why is it not present in Equation 2? Minor comments: - L18: Unnecessary to use MMML acronym if never used again - L83: “additive approach to combining modalities (rather than handling them in parallel).” It is unclear what is meant by “additive” – sequential? - L118: “fusing operators” -> “fusion operators” - L125: Remove semicolon - L145: I would write “…Learning Network and its key component, (b) the early fusion layer (as given in Equation 3).” - Eq 4: Use \left[ and \right] to make square brackets larger - L186: Can omit “contains” - L187: Consider citing https://ieeexplore.ieee.org/abstract/document/10230356/, which addresses multimodal fusion methods to prevent overfitting in multimodal fusion. This is particularly relevant since this paper also considered fusion of histopathology imaging with tabular data, and forms results in contrast to the parameter-intensive Kronecker product as well. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Several limitations are adequately addressed in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Responses to Weaknesses and Questions** **Hyperparameter optimization:** We would like to clarify that we ran the Bayesian hyperparameter optimisation for all baselines. The shared parameters in Table 5 were run equally for all baselines from Table 1. We acknowledge that Table 5 currently does not show the final hyperparameters of the baselines, but we will add these for the final manuscript. Please see the relevant hyperparameters that we tuned for the baselines below. Note that this choice was mostly informed by the original implementations of the respective papers. - MCAT/MOTCAT/Porpoise: - `model_size_omic` : Embedding size of early layers for the omic data (pre-set choices between ‘large’ and ‘small’ from original papers) - `model_size_wsi` : Same as model_size_omic, but for the WSI embedding - `dropout` : Dropout applied after each layer of original implementation - `fusion_method` : choice between concatenation and bilinear fusion - Perceiver (early): - `layers` : total number of iterations - `latent_dims` : dimensions of latent bottleneck - `attention_heads` : number of attention heads - `attn_dropout` : dropout after each self-attention layer - `ff_dropout` : dropout after each feedforward pass - MultiModN: - `embedding_dims` : size of latent embedding **Latent Update in Eq. 2:** - **Clarification on $\psi(\cdot)$**: In the main body, the update function $\psi(\cdot)$ represents the learnable function (i.e., the cross-attention update) rather than a fixed closed-form expression. In the pseudocode in Algorithm 2, given the attention matrix $attn \in \mathbb{R}^{b \times i \times j}$ for batch size $b$, number of queries $i$, and keys/values $j$, for modality $m$ at time step $t$, and the value matrix $v \in \mathbb{R}^{b \times j \times d}$ for the dimensions of the modality value vector $d$, the update mechanism is $\psi(S_t, a^{(t)}_m) = \sum_j attn_{b,i,j} \cdot v_{b,j,d}$. We will make this part clearer in the updated version.
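As an illustration, the attention-weighted update described above amounts to a batched matrix product. A minimal NumPy sketch follows; the learned Q, K, V projections that would produce `attn` and `v` are omitted, and the helper name `psi` is ours:

```python
import numpy as np

def psi(attn, v):
    """Attention-weighted value sum: S'[b, i, d] = sum_j attn[b, i, j] * v[b, j, d]."""
    return np.einsum("bij,bjd->bid", attn, v)

# Toy shapes following the rebuttal's notation: batch b, queries i, keys/values j, dim d.
b, i, j, d = 2, 4, 6, 8
attn = np.random.rand(b, i, j)
attn /= attn.sum(axis=-1, keepdims=True)   # rows sum to 1, as after a softmax
v = np.random.rand(b, j, d)
S_next = psi(attn, v)
assert S_next.shape == (b, i, d)
assert np.allclose(S_next, attn @ v)       # equivalent to a batched matrix product
```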
- **Clarification on $\rho$:** Thanks for catching this. It is a leftover notation (typo) that we missed in the last version of the manuscript. We will correct this accordingly. --- Rebuttal Comment 1.1: Comment: I acknowledge that I have read the authors' rebuttal and thank them for the clarifying comments. I would encourage the authors to be as specific as possible when justifying hyperparameter choices, even if this material is relegated to the appendix. I will maintain my original score.
Summary: The authors present a multi-modal fusion architecture, named HEALNet, which aims to preserve modality-specific structural information and capture cross-modal interactions in a shared latent space. HEALNet enables intuitive model inspection by learning directly from raw data inputs instead of opaque embeddings. The iterative paradigm can skip modality update steps to handle missing modalities. The authors validate the framework through multi-modal survival analysis on four cancer datasets. Strengths: 1. The paper is well-organized and clearly written, with a logical flow of information that is easy to follow. 2. The iterative paradigm's ability to skip modality update steps to handle missing modalities is a practical approach. Weaknesses: 1. HEALNet shares a very similar structure with Perceiver. The authors should highlight the innovative parts more clearly to help readers understand which aspects are novel and which are based on related work. 2. The evaluation is limited to survival analysis. Given that this is a multi-modal fusion framework, it would be beneficial to perform additional tasks, such as classification or regression, to better evaluate its performance. 3. The sample sizes for the four datasets used are relatively small. Instead of 5-fold repeated random subsampling, more repetitions should be conducted, and confidence intervals should be reported to ensure the reliability of the results. 4. In addition to the concordance index, other metrics or plots such as cumulative proportion surviving plots should be reported to better visualize the results. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Higher performance was reported in the compared paper MOTCat [1]. Why did this happen? 2. Since the modalities are different, what is the rationale for sharing weights across all fusion layers in HEALNet? How is modality-specific structural information preserved during the fusion process? 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: 1. The evaluation of HEALNet is limited to survival analysis, which may not fully capture its potential across different biomedical tasks. 2. The small sample sizes and limited number of datasets may affect the generalizability of the results. 3. The absence of a comprehensive set of performance metrics limits the understanding of the model's strengths and weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Comments on Weaknesses** **Clarification on Novelty:** As you rightly point out, the general idea of iterative cross-attention with latent state passing has been previously suggested, which we acknowledge as a starting point for HEALNet’s architecture (L89, L113, L151). While other iterative modelling architectures (such as the Perceiver) handle a range of uni-modal tasks, HEALNet introduces a novel way to leverage iterative attention for learning effective multi-modal representations. Concretely, the Perceiver architecture requires concatenation of the input modalities. This becomes particularly problematic for modalities with misaligned dimensions and channels, as you either need to flatten all tensors or aggregate across channels, both of which trade off information for shape compatibility. The Perceiver paper demonstrates multimodal capabilities by concatenating audio and video embeddings, reducing the dimensions of the video, and treating them as a single modality thereafter; this is identical to the Early Fusion baseline shown in Table 1, which HEALNet outperforms. HEALNet learns modality-specific latent updates to ensure that the modality shapes can be kept intact. Additionally, learning modality-specific updates becomes valuable in the handling of missing modalities. In such scenarios, an early fusion approach like the Perceiver would need to impute the missing values, which is noisy. In contrast, HEALNet’s setup allows it to skip the missing updates at train and inference time. **Classification and Regression tasks:** We agree that the method is general, which was a key design objective of the architecture. The focus on survival analysis in this paper was chosen for two main reasons. First, this paper focuses on multimodal data in the context of biomedical tasks.
In these domains, most tasks that are a) clinically relevant and b) have sufficient data available involve time-to-event labels, where the outcome of some study subjects is unobserved (right-censored data). Neither classification nor regression on these task labels is suited to handle time-to-event data or the concept of censoring accurately. Second, it is worth noting that the survival analysis task was chosen to ensure comparability with relevant baselines. Under the hood, the survival analysis implementation uses a classifier to predict the hazard bins of a proportional hazards survival model. Architecturally, the only thing that would change to adapt to a classification task is the very final layer (”Head”) in Figure 1A. **Sample size**: In the field of cancer pathology, The Cancer Genome Atlas (TCGA) contains by far the largest patient cohorts. It is worth noting that, given the high dimensions of the whole slide images (see L185), even a single cancer cohort (e.g., TCGA-BLCA) contains over 500GB of data. This is also the case for the previously published studies we benchmarked. **Confidence Intervals:** We reported standard deviation instead of confidence intervals for two reasons. First, the target variable does not follow a t-distribution ([see this plot](https://filetransfer.io/data-package/djTYYGjo#link)), which is a crucial assumption for the valid interpretation of confidence intervals. As such, reporting standard deviations across the repeated random sub-sampling folds is a more valid estimate of the model’s confidence. Second, using standard deviations is in line with the benchmarked methods, which improves the comparability when directly contrasting the approaches. **Metrics beyond c-Index:** We agree that Kaplan-Meier plots provide an in-depth representation of the survival probabilities on the test set, but the actual visual differences are often subtle and barely detectable by the naked eye.
Therefore, the c-Index coupled with the standard deviation across folds provides a better way of comparing results, as it is also commonly found in the published baselines. ## **Responses to questions** **Higher performance reported in the MOTCat [1] paper:** To make the results across experimental setups comparable, we re-ran all experiments under the same experimental conditions and the tunable sets of hyperparameters from the original paper, which we show in Table 1. The different results can be explained by the fact that this paper’s experimental setup is quite different from the one reported in [1]. While [1] predicts disease-specific survival (DSS), HEALNet is trained to predict general survival (GS). General survival does not filter out case deaths unrelated to the disease and is therefore a noisier prediction task. This filtering is evident from the smaller sample sizes on the same cancer cohorts in [1]. Getting higher c-Index results is a general trend we observed in studies predicting DSS instead of GS [2]. HEALNet follows the same experimental setup, evaluation, and cohort selection as in several multimodal benchmarks (Porpoise, MCAT). **Rationale for weight sharing**: We would like to clarify two points: First, the weights are only shared between the sets of modality-specific weights across the fusion layers. For example, the cross-attention update for modality 1 and 2 would be shared across all the fusion layers, but no weights are shared between modalities 1 and 2 within or across fusion layers. Second, the weight sharing helps to 1) improve the parameter efficiency of the model and prevent exploding parameter size with a high number of layers and modalities, and 2) prevent overfitting, as we found empirically. [1] Xu, Y., Chen, H., 2023. Multimodal Optimal Transport-based Co-Attention Transformer with Global Structure Consistency for Survival Prediction, in: 2023 IEEE/CVF International Conference on Computer Vision (ICCV).
https://doi.org/10.1109/ICCV51070.2023.01942 [2] Jaume, G., et al., 2023. Modelling dense multimodal interactions between biological pathways and histology for survival prediction. *arXiv preprint arXiv:2304.06819*. --- Rebuttal 2: Title: A gentle reminder Comment: Dear reviewer `a9ao`, Thank you again for your thoughtful review and positive attitude toward our work! We have addressed all of your questions and concerns. As the discussion period is drawing to a close, we are keen on getting your feedback. We are looking forward to your response. Thank you for your time. Kind regards, The Authors --- Rebuttal 3: Comment: Dear AC, We really appreciate your time and effort in managing our submission. Even though we addressed all of reviewer `a9ao`'s questions and concerns, we've noted that they have neither responded to nor acknowledged our rebuttal thus far, even though they showed a positive outlook on our work. We believe further interaction can be beneficial, and we are keen to discuss any remaining clarifications in the final hours of the discussion period. We would appreciate your assistance in reminding the reviewer to respond to/acknowledge our rebuttal. Kind regards, Authors
Rebuttal 1: Rebuttal: ### **General comment** We would like to thank all reviewers for their time and insightful comments. We are encouraged that you acknowledge HEALNet as having an excellent structure and logical flow, sound experiments, and an interesting methodological contribution, as well as being clear, thorough, and unique. We also thank the reviewers for picking up on minor comments and typos, which we will incorporate in the final manuscript. In the reviewer comments, we provide detailed answers to your questions with additional explanations of the raised issues. ### **Additional experiments** We would like to particularly highlight additional experiments that we ran based on the comments around scaling the method to larger datasets (i.e., larger sample size). To address this, we ran HEALNet and some relevant baselines on disease classification (ICD9) and patient mortality (MORT) prediction tasks of MIMIC-III (n=32,616). The table below shows the AUC and Macro AUC for the respective tasks. The modalities in MIMIC are tabular electronic health record features and time series on vital signs and test results. | Model/Dataset | ICD9 (n=32,616) [AUC] | MORT (n=32,616) [MacroAUC] | | --- | --- | --- | | Uni-modal (EHR) | 0.731 ± 0.023 | 0.658 ± 0.001 | | Uni-modal (Time Series) | 0.700 ± 0.013 | 0.715 ± 0.016 | | Perceiver (Early) | 0.733 ± 0.028 | 0.723 ± 0.015 | | Porpoise (Late) | 0.628 ± 0.020 | 0.617 ± 0.015 | | **HEALNet** | **0.767 ± 0.022** | **0.748 ± 0.009** | We did not have time to run all benchmarks given the time and resource constraints during the rebuttal, but we plan to add the full experimental extension to the final manuscript.
NeurIPS_2024_submissions_huggingface
2024
Local and Adaptive Mirror Descents in Extensive-Form Games
Accept (poster)
Summary: The submission considers the problem of learning epsilon-Nash equilibria from trajectory feedback in zero-sum extensive-form games. The submission focuses on developing an approach that avoids importance sampling over action sequences. Strengths: The submission is well-written, and the problem and solution are reasonable. The experiments are reasonable choices to demonstrate the efficacy of the solution. Weaknesses: My main criticism of the submission is that it purports to be motivated by solving large games. However, using a fixed sampling policy is untenable for learning good policies in large games, as the target policy and the behavioral policy will become too far apart. Model-free approaches to learning equilibria in large games (i.e., [1, 2]) learn on policy. It would be good for the submission to discuss this discrepancy and how it can be resolved in greater detail. I'll mention that the submission does already acknowledge this point and include some discussion at the end. Also, it is worth noting that [2] showed empirical convergence for an on-policy algorithm with trajectory feedback and no importance sampling over action sequences. It would be interesting for the submission to discuss the feasibility of providing theoretical grounding for those results. [1] From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization (2020) [2] A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games (2023) One other comment I had: > Existing procedures suffer from high variance due to the use of importance sampling over sequences of actions (Steinberger et al., 2020; McAleer et al., 2022). This claim and citation is a bit confusing. The whole point of McAleer et al. (2022) was to address this issue. The submission is citing a paper that refutes the claim of the sentence.
Technical Quality: 3 Clarity: 3 Questions for Authors: I guess I'd be curious to hear the authors' thoughts on whether providing grounding for [2]'s results is a promising direction. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I'll mention a limitation of my own review here: I did not confirm the correctness of the theory, so I'll defer to the other reviewers on that point. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
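For context on the variance issue debated in this thread, a toy simulation (ours, purely illustrative and not from the paper) shows why importance sampling over action sequences is problematic: the trajectory weight is a product of H per-step likelihood ratios, so the estimator's second moment grows geometrically with the horizon. The two-action policies `pi`/`mu` and the single-rewarding-trajectory payoff below are made-up assumptions chosen to keep the example small.

```python
import random

def is_stats(H, n_samples, pi=0.9, mu=0.5, seed=0):
    """Importance-sampling estimate of a target policy's expected reward,
    sampling length-H trajectories from a behavior policy.
    Target picks action 0 w.p. pi, behavior w.p. mu; reward is 1 iff all
    H actions are 0 (a single rewarding trajectory). Returns (mean, variance)
    of the per-trajectory weighted reward w * r."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_samples):
        w, hit = 1.0, True
        for _ in range(H):
            a0 = rng.random() < mu                        # sample from behavior policy
            w *= (pi / mu) if a0 else ((1 - pi) / (1 - mu))  # per-step likelihood ratio
            hit = hit and a0
        vals.append(w if hit else 0.0)
    mean = sum(vals) / n_samples
    var = sum((v - mean) ** 2 for v in vals) / n_samples
    return mean, var

m2, v2 = is_stats(2, 200_000)   # true value 0.9**2 = 0.81
m6, v6 = is_stats(6, 200_000)   # true value 0.9**6 ≈ 0.531
# The estimator stays unbiased for every H, but the per-sample variance
# blows up with the horizon: this is exactly the effect that fixed-sampling
# schemes (and history-value approaches like ESCHER) aim to avoid.
```

The second moment here is $(\pi^2/\mu)^H$, so even in this two-action toy the variance grows from roughly 2 at $H=2$ to roughly 18 at $H=6$.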
Rebuttal 1: Rebuttal: We thank Reviewer HP7A for the overall positive review and the interesting references. We would like to answer the remarks and the question below. > My main criticism of the submission is that it purports to be motivated by solving large games. However, using a fixed sampling policy is untenable for learning good policies in large games, as the target policy and the behavioral policy will become too far apart. Model-free approaches to learning equilibria in large games (i.e., [1, 2]) learn on policy. It would be good for the submission to discuss this discrepancy and how it can be resolved in greater detail. I'll mention that the submission does already acknowledge this point and include some discussion at the end. Indeed, similarly to [3], we do not expand too much on this point, as the best choices may not be motivated by theory. While the balanced policy gives the best rates, it may not be the optimal choice in practice. As we experimented a bit with the sampling policy, we realized that updating the sampling policy from time to time with the current average one also provides good empirical results. We conjecture that for practical applications, the choice would not matter that much, as long as it decently explores the tree. This is why we mention the idea of restarting and taking the average policy as the new sampling one, as it achieves this purpose. We did not want to delve too much into this matter, as we think that this question would be better answered by more practical works, but we propose to add these insights to the paper. > Also, it is worth noting that [2] showed empirical convergence for an on-policy algorithm with trajectory feedback and no importance sampling over action sequences. It would be interesting for the submission to discuss the feasibility of providing theoretical grounding for those results. 
We thank Reviewer HP7A for pointing out these empirical results and we will add this reference to our literature review. Before answering, we would like to point out some differences between their theoretical setting and ours. First, they have a strong monotonicity assumption for the underlying operator $G$ they consider (in our case, $G$ would be $G:(\mu,\nu) \mapsto (-\ell^\nu,\ell^\mu)$). This strong monotonicity assumption does not hold for their experiments, as the operator $G$ is linear with respect to the realization plan, hence they need to add some regularization, parameterized by the constant $\alpha$. Second, they consider a full feedback setting in theory, observing the outcomes of all possible trajectories at each episode. Despite these differences, we found that their approach can still be adapted to our setting. An entropic regularization with the Kullback-Leibler divergence allows for the use of an importance sampling estimator, which gives an unbiased estimator of $G$ (formerly provided by the full feedback). The rate is no longer linear, but only $\mathcal{O}(T^{-1/2})$ for the exploitability with a fixed regularization. Then, as mentioned in [2] in the commentary of Figure 2, the regularization can be decreased over time to converge to the real problem, but we think only a rate of $\mathcal{O}(T^{-1/4})$ for the true exploitability can be achieved with this approach, by taking $\alpha$ also of order $\mathcal{O}(T^{-1/4})$. However, this approach again introduces importance sampling over action sequences to deal with the trajectory feedback. If the regularization is taken such that it compensates for the importance sampling term, then we obtain their empirical approach for dealing with the trajectory feedback. This still implies the need for an analysis with a regularization that varies with time, as the current policies will change. 
We think that the fact the analysis is done iteratively (as in their Theorem 3.4) rather than being based on the regrets helps in this regard, but we do not know yet if it is possible to provide some theoretical guarantees for this approach. > One other comment I had: "Existing procedures suffer from high variance due to the use of importance sampling over sequences of actions (Steinberger et al., 2020; McAleer et al., 2022)." This claim and citation are a bit confusing. The whole point of McAleer et al. (2022) was to address this issue. The submission is citing a paper that refutes the claim of the sentence. We cited McAleer et al. (2022) as it raises the issue and tries to solve it using the fixed sampling framework considered in our submission, but with a CFR-based algorithm. As explained later in the introduction, existing procedures *not relying on fixed sampling* suffer from the high variance issue. We acknowledge this sentence could be a bit confusing and we will replace it with: "As noted by McAleer et al. (2022), most of the existing procedures suffer from high variance due to the use of importance sampling over sequences of actions." [3] McAleer, S., Farina, G., Lanctot, M., and Sandholm, T. ESCHER: Eschewing importance sampling in games by computing a history value function to estimate regret, International Conference on Learning Representations, 2022 --- Rebuttal Comment 1.1: Title: Response Comment: > We thank Reviewer HP7A for pointing out these empirical results and we will add this reference to our literature review. Before answering, we would like to point out some differences between their theoretical setting and ours. First, they have a strong monotonicity assumption for the underlying operator $G$ they consider (in our case, $G$ would be $G:(\mu,\nu) \mapsto (-\ell^\nu,\ell^\mu)$). This strong monotonicity assumption does not hold for their experiments, as the operator $G$ is linear with respect to the realization plan, hence they need to add some regularization, parameterized by the constant $\alpha$. 
Second, they consider a full feedback setting in theory, observing the outcomes of all possible trajectories at each episode. There may be a misunderstanding about the point I was making here. I was specifically pointing to their experiments with black box feedback (not full feedback) in Figure 13. The reference provides no theory for these experiments, but I think they are still interesting because they seem to be converging 1) on policy, 2) without importance sampling over sequences, 3) using black box feedback. I would be curious to hear any speculation the authors may have on whether or not deriving guarantees for algorithms having those properties is a promising direction for this literature. --- Reply to Comment 1.1.1: Comment: Sorry, our response to this remark may not have been very clear. The first two paragraphs were about their theoretical setting (with a regularized full feedback and on-policy), but the third paragraph was indeed referring to these experiments with trajectory feedback, trying to bridge the gap between the two. We think that proving guarantees for their algorithm would be a very good contribution, but we do not know yet if it is doable (maybe this would not work well on carefully engineered games). The main difficulty is the regularization: this kind of update can make sense theoretically if the regularization is chosen to compensate for it at each iteration. However, this also implies that the solution of the regularized game changes over time, which is hard to analyze.
Summary: The paper studies extensive-form games under the fixed sampling policy framework. It proposes an algorithm based on online mirror descent. The paper gives near-optimal regret bounds for the proposed algorithm under different learning rate settings. The algorithm is justified with experiments. Strengths: 1. The paper has a detailed introduction and literature review, with a clear description of the problem setting and framework. The paper also makes sufficient comparisons to existing works under the same or different frameworks. 2. The result has optimal dependency on $T$ and near-optimal dependency on other game-related parameters. The removal of the importance sampling term in the fixed-rate setting is a good contribution. 3. The paper justifies its theory with experiments. It makes comparisons with a benchmark method and the experimental results look convincing. Weaknesses: 1. The framework itself is still confusing. While the authors made some discussion of the framework in section 2.2, the advantage of the fixed sampling policy seems not reflected in the paper's theoretical result. And in terms of the regret, the paper's result doesn't improve the regret of simultaneous regret minimization procedures. 2. On the explanation of the theory, the paper can be improved by adding comparisons to more benchmarks. The paper only compares the result with the result of simultaneous regret minimization procedures, and the conclusion is that the bound matches but doesn't improve the existing result. It could be helpful to understand the paper's result by comparing previous results under the same fixed sampling policy framework. Technical Quality: 3 Clarity: 3 Questions for Authors: From the theorems, the fixed and adaptive rate bounds have the same dependency on $T$. Is it correct to understand that the motivation of the adaptive rate is only to remove the dependency on the importance sampling term $\kappa$? 
When the balanced policy is used as the sampling policy, $\kappa$ is reduced to a game-dependent term. In this case, is it correct to think of the adaptive rate setting as unnecessary? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer yah9 for the overall positive review. We would like to answer the points made below. > The framework itself is still confusing. While the authors made some discussion of the framework in section 2.2, the advantage of the fixed sampling policy seems not reflected in the paper's theoretical result. And in terms of the regret, the paper's result doesn't improve the regret of simultaneous regret minimization procedures. The optimal rate in the online case is $\tilde{\mathcal{O}}(H(A_X+B_Y)/\epsilon^2)$ and is attained up to constant and logarithmic factors in [1] (note that the proof of the lower bound also works in the fixed sampling framework). For this reason, we cannot hope for theoretical improvements in this setting. The purpose of fixed sampling is to remove the importance sampling term, which does not appear in the leading theoretical regret term but which would make (in practice) function approximations fail. Removing this problematic term requires different ideas and analysis. > On the explanation of the theory, the paper can be improved by adding comparisons to more benchmarks. The paper only compares the result with the result of simultaneous regret minimization procedures, and the conclusion is that the bound matches but doesn't improve the existing result. It could be helpful to understand the paper's result by comparing previous results under the same fixed sampling policy framework. We are not aware of other papers that work within the fixed sampling framework with trajectory feedback besides those we have mentioned. If Reviewer yah9 can point to additional references, we would be pleased to include them. Consequently, we primarily compare our theoretical results with those of [2]. Reference [3] only achieves a convergence rate of $\mathcal{O}(1/\epsilon^2)$ with respect to $\epsilon$. In the introduction, we discuss the optimal rate already achieved in the online setting by [1]. 
We will make an effort to reference these results more explicitly in our discussion. > From the theorems, the fixed and adaptive rate bounds have the same dependency on $T$. Is it correct to understand that the motivation of the adaptive rate is only to remove the dependency on the importance sampling term $\kappa$? When the balanced policy is used as the sampling policy, $\kappa$ is reduced to a game-dependent term. In this case, is it correct to think of the adaptive rate setting as unnecessary? The motivation behind the adaptive rate, in addition to being a more natural choice, is mostly practical: it does not require the balanced policy to be computed beforehand and the horizon $T$ to be known. Theoretically, the adaptive rate is indeed unnecessary, as using the balanced policy already allows for the best rate with respect to the game parameters. Nevertheless, considering the practical advantages of an adaptive policy, we wanted to also provide some theoretical guarantees for it. Our initial thought was that we could obtain the same optimal rate. However, we did not manage to prove it and now strongly believe it is not possible. Optimality with regard to the $\epsilon$ dependency was possible nonetheless under some quite general assumptions (with fewer restrictions on the fixed sampling policy and the regularizer), which is, in our opinion, already a valuable result. [1] Fiegel, C., Ménard, P., Kozuno, T., Munos, R., Perchet, V., and Valko, M. Adapting to game trees in zero-sum imperfect information games, International Conference on Machine Learning, 2023 [2] Bai, Y., Jin, C., Mei, S., and Yu, T. Near-optimal learning of extensive-form games with imperfect information, International Conference on Machine Learning, 2022 [3] McAleer, S., Farina, G., Lanctot, M., and Sandholm, T. ESCHER: Eschewing importance sampling in games by computing a history value function to estimate regret, International Conference on Learning Representations, 2022
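For readers connecting the regret and sample-complexity viewpoints in this exchange, the $\tilde{\mathcal{O}}(H(A_X+B_Y)/\epsilon^2)$ rate follows from the standard online-to-batch conversion for self-play regret minimization; this is our sketch, with the precise constants and logarithmic factors as in the cited papers:

```latex
R_T^{\mu}, R_T^{\nu} = \tilde{\mathcal{O}}\!\big(\sqrt{H(A_X + B_Y)\,T}\big)
\;\Longrightarrow\;
\operatorname{Exp}(\bar\mu_T, \bar\nu_T) \;\le\; \frac{R_T^{\mu} + R_T^{\nu}}{T}
\;=\; \tilde{\mathcal{O}}\!\Big(\sqrt{\tfrac{H(A_X + B_Y)}{T}}\Big),
```

so the exploitability of the average policy profile drops below $\epsilon$ once $T = \tilde{\mathcal{O}}\!\big(H(A_X+B_Y)/\epsilon^2\big)$, matching the lower bound up to logarithmic factors.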
Summary: I reviewed an earlier version of this paper. While I have some additional comments, my review is largely similar to my previous review, since the paper has only a few minor edits relative to the previous version, as far as I can ascertain. The paper introduces algorithms designed to approximate Nash equilibria in zero-sum imperfect information games (IIG) with trajectory feedback. Specifically, the authors focus on the fixed-sampling framework. Past approaches have struggled with significant variance, largely due to the utilization of importance sampling across action sequences (McAleer et al. (2022) provide an algorithm that doesn't require importance sampling, but their techniques don't generalize to non-scale invariant regret-minimization-based algorithms such as OMD). The proposed approach employs an adaptive Online Mirror Descent (OMD) algorithm. This involves using adaptive learning rates along with a dilated regularizer. The paper demonstrates that this method ensures a convergence rate of $\tilde{\mathcal{O}}(T^{-1/2})$ with high probability and exhibits a near-optimal dependence on the game parameters, when employed with appropriate choices of sampling policies. Strengths: The paper provides a strong technical contribution in producing an adaptive trajectory-feedback-based adaptive OMD algorithm that doesn't rely on importance sampling and generalizing DS-OMD to use time-varying regularization. The paper provides empirical evidence for the convergence and variance of their approach, compared to other approaches in the literature, and also provides code for their algorithm. Weaknesses: The paper could delineate its contributions better. As noted by most reviewers in a previous submission, the similarity to the regret circuit decomposition of CFR is apparent (and noted by the authors). Still, the difference could be further highlighted in the contributions, especially since this was confusing for several reviewers last time. 
A note was made by the authors last time regarding the interpretation of their method as regularization at the global level (whereas CFR doesn't have this sort of interpretation). While it is mentioned now that the proposed algorithm is an instance of GDS-OMD, I think these things could be further emphasized throughout the paper. Along the same lines, one of the reviewers pointed out last time that things like $h_t$ and $q_t$ are not defined in the main body, and as far as I can tell, this is still an issue, even though the authors indicated that this would be clarified (even if not at the detailed technical level as done in the appendix) in the main body. It would be good for the authors to implement these changes, given that they assured the reviewers they would do this to improve the presentation. One of the reviewers noted last time that the motivation for variance reduction is unclear. I agree with this. While the reviewers mention that variance reduction becomes a concern for performance in function approximation settings, it would be nice to see this in their experiments (since the current empirical evidence for the provided algorithm doesn't seem to indicate that there is much gain in performance beyond the variance reduction with the current approach). Technical Quality: 3 Clarity: 2 Questions for Authors: No specific questions beyond the weaknesses mentioned above. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first thank Reviewer 1VRY for taking the time to review our submission and for suggesting improvements. We address them below. > The paper could delineate its contributions better. As noted by most reviewers in a previous submission, the similarity to the regret circuit decomposition of CFR is apparent (and noted by the authors). Still, the difference could be further highlighted in the contributions, especially since this was confusing for several reviewers last time. A note was made by the authors last time regarding the interpretation of their method as regularization at the global level (whereas CFR doesn't have this sort of interpretation). While it is mentioned now that the proposed algorithm is an instance of GDS-OMD, I think these things could be further emphasized throughout the paper. LocalOMD indeed enjoys at the same time a local interpretation (as CFR) and a global interpretation (as the application of GDS-OMD updates). This is actually a unique feature of our algorithm. As noted by Reviewer 1VRY, we mention this particularity several times: - At the beginning of the contribution section (l. 81), pointing out the similarity to a paper studying a CFR approach. - In the local loss paragraph (l. 251), dedicated to this idea. - For justifying the use of the adaptive rates (l. 273). 
We propose to add to the conclusion the following sentence: “LocalOMD enjoys simultaneously two interpretations: one as a Mirror Descent type algorithm working at the global level, with a single update performed at each iteration over the whole tree; and one as regret minimizers working locally at each information set, which makes it very similar to a CFR algorithm despite a fundamentally different approach.” > Along the same lines, one of the reviewers pointed out last time that things like $h_t$ and $q_t$ are not defined in the main body, and as far as I can tell, this is still an issue, even though the authors indicated that this would be clarified (even if not at the detailed technical level as done in the appendix) in the main body. It would be good for the authors to implement these changes, given that they assured the reviewers they would do this to improve the presentation. Indeed, the notations $h_t$ and $q_t$ are only defined in Algorithm 2 in the main paper, and not directly in the body. We tried to include them in the main text after the last submission with some necessary explanations of where these two terms come from to avoid further confusion. In particular, we would need to reproduce appendix D.1 of [1] or appendix F of [2] about obtaining a practical implementation of the GDS-OMD update. However, this would obfuscate the main message of this paper with technical details already appearing in the literature. That is why we finally decided to skip them. If a reviewer thinks that those explanations should be inserted here from existing papers, we could use the extra page to include them nonetheless. > One of the reviewers noted last time that the motivation for variance reduction is unclear. I agree with this. 
While the reviewers mention that variance reduction becomes a concern for performance in function approximation settings, it would be nice to see this in their experiments (since the current empirical evidence for the provided algorithm doesn't seem to indicate that there is much gain in performance beyond the variance reduction with the current approach). The high scale of the rewards is an issue that currently prevents the practical implementation of Mirror Descent algorithms with trajectory feedback: as explained at l. 258, the reward can be of order $\mathcal{O}(A_X)$, which is incompatible with function approximation settings. The main argument favoring the fixed sampling framework is that it allows the rewards to stay of order $\mathcal{O}(HA)$, which becomes manageable. We have included experiments demonstrating that this approach does indeed reduce variance in relatively small games. However, this issue of high variance is less significant in the tabular setting. Additionally, demonstrating that this modification enables function approximation with the trajectory-based Mirror Descent approach would require a separate paper, given the extensive technical adjustments needed to adapt a tabular algorithm to the function approximation setting, which is beyond the scope of our current submission. [1] Bai, Y., Jin, C., Mei, S., and Yu, T. Near-optimal learning of extensive-form games with imperfect information, International Conference on Machine Learning, 2022 [2] Fiegel, C., Ménard, P., Kozuno, T., Munos, R., Perchet, V., and Valko, M. Adapting to game trees in zero-sum imperfect information games, International Conference on Machine Learning, 2023 --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response; my understanding of your paper and contribution has improved and I have raised my score. I hope that the authors consider the suggestions provided by reviewers from past submissions and the current submission. 
Going back to suggestions pointed out in a review of an earlier submission, even the very minor point of the stylization of the game "Liar's Dice" was not corrected, even though this was acknowledged by the authors as something to be fixed. I only bring this up because, as I noted in my review, there was not much change in this submission relative to the last submission that I reviewed (indeed, this is why I pointed out "weaknesses" corresponding to discussion between past reviewers and the authors); I hope that if the paper gets accepted, the authors end up implementing the changes they suggest they will in their rebuttals. > “LocalOMD enjoys simultaneously two interpretations: one as a Mirror Descent type algorithm working at the global level, with a single update performed at each iteration over the whole tree; and one as regret minimizers working locally at each information set, which makes it very similar to a CFR algorithm despite a fundamentally different approach.” I agree that would make a good addition to the conclusion. > That is why we finally decided to skip them. If a reviewer thinks that those explanations should be inserted here from existing papers, we could use the extra page to include them nonetheless. Makes sense. I agree that it is not necessary to include them in the main body to prevent the obfuscation of the contribution of this paper.
NeurIPS_2024_submissions_huggingface
2,024
Noisy Label Learning with Instance-Dependent Outliers: Identifiability via Crowd Wisdom
Accept (spotlight)
Summary: The paper addresses the problem of learning with noisy labels by considering a model where instance-dependent confusion matrices occur occasionally across the samples, and the rest of data share a common nominal confusion matrix. The paper claims two main contributions: (1) showing that a single confusion matrix is insufficient to identify outliers and proposing a crowdsourcing strategy with a column sparsity constraint to overcome this, and (2) presenting an end-to-end, one-stage learning loss that is differentiable and easily optimized. These contributions are validated through experiments that demonstrate improved testing accuracy. Strengths: ● The problem setting, considering the instance-dependent noisy samples as outliers, is realistic and really interesting. ● The paper posits that existing methods relying on sparsity priors are insufficient for outlier detection and proposes a novel crowdsourcing strategy as a solution with theoretical grounding and generalization guarantees. Weaknesses: Lack of large-scale datasets in experiments Technical Quality: 3 Clarity: 3 Questions for Authors: Can the authors discuss any limitations their method might have in scenarios of highly imbalanced noise? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __[Regarding Large-Scale Datasets]__ We agree with the reviewer on this point. In the current manuscript, we have tested our methods against baselines on a dataset of 50,000 samples in both the machine-annotation and real-data settings. We did not use a larger data size due to resource limitations and also because our focus has been on the design-principle side. Nonetheless, to examine the proposal’s scalability, we have conducted an experiment on the full SVHN dataset, which contains more than 600,000 images. In this experiment, $K=10$. We used $M=3$ machine annotators as we defined in the manuscript. The error rates of the annotators are 0.30, 0.36, and 0.32. We also included the performance of $\texttt{GeoCrowdNet(F)}$ and $\texttt{MaxMIG}$ for comparison. The result is presented in _Table 6 in the attached pdf_. Although we have not been able to sufficiently tune our method’s parameters due to time limitations, the preliminary result already looks promising. __[Regarding Imbalanced Noise]__ Thank you for this question. Please note that Table 1 in the manuscript addresses the imbalanced noise setting, where annotators have different annotation noise rates. Table 4 of Appendix G presents the individual noise rates for each scenario. As shown in the table, in the high-level noise case, the noise rates for the three annotators are 26%, 69%, 57% for CIFAR-10, and 15%, 25%, 69% for STL-10, respectively. The results in Table 1 show that our method is robust to such highly imbalanced noise settings, outperforming the second-best baseline by approximately 3% on average.
Summary: The paper investigates the challenge of instance-dependent noisy labels, which are modeled using an instance-dependent confusion matrix reflecting annotator errors. Traditional approaches assume a consistent confusion matrix across instances, which simplifies the problem but is unrealistic. This study models the instance-dependent confusion matrices as outliers. The authors propose using a crowdsourcing strategy with multiple annotators and a specialized loss function to effectively detect outliers and identify the target classifier. The method is supported by extensive theoretical results. Experimental results also confirm the efficacy of the proposed method. Strengths: 1. The work aims at an important problem and the proposed solution is solid. 2. The presentation is clear and the proposed method is supported by thorough theoretical results. 3. Experiments on real-world label noise such as CIFAR-N validate the effectiveness of the proposed method. Weaknesses: 1. Theorem 3.6 requires $S\rightarrow \infty$. However, we know that if we have sufficiently many annotators, the aggregated noise rate could be 0, making the result trivial. It would be interesting to show the result with finite $S$. 2. The organization of the proposed-method section could be polished. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Does this method heavily rely on multiple annotators? If there is only one label for each example, is there any approximation method to generate extra labels to simulate multiple annotators? 2. Would existing estimators for noisy labels, such as [R1], provide a good initialization for the variable $\mathbf A_m$ in the proposed loss function? [R1] Unmasking and improving data credibility: A study with datasets for training harmless language models. ICLR 2024. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __[Regarding $S \to \infty$ in Theorem 3.6]__ We agree that an analysis with finite $S$ is more desirable (as we did in Theorem 3.5). Nonetheless, we argue that even with infinite $S$, the result in Theorem 3.6 is meaningful and nontrivial. Let us explain. First, the aggregated noise will not disappear when $S \to \infty$. Note that $S$ represents the number of annotations (not the number of annotators). When ($M,N$) are fixed, $S$ can still go to $\infty$ as sampling for annotation is assumed to be with replacement. Recall that $\textbf{y}_n^{(m)} \sim \textbf{p}_n^{(m)} = \textbf{A}_m \textbf{f}(\textbf{x}_n) + \textbf{E}_n^{(m)}$. Even when there are no outliers, if $\textbf{A}_m$ is heavily non-diagonal, $\textbf{p}_n^{(m)}$ is going to be very different from $\textbf{f}(\textbf{x}_n)$, and thus $\textbf{y}_n^{(m)}$ is always going to be wrong no matter how large $S$ is. One possibility of using more annotators to attain zero aggregated noise is when $\sum_{m=1}^M \textbf{A}_m = \textbf{I}$, but this requires that all the annotator errors are zero mean across $m$---which might be too special to assume. Majority voting also does not help if most $\textbf{A}_m$ are not close to diagonal. Second, the identifiability analysis is not trivial and the result has good practical implications. Note that $S \to \infty$ makes the equality $\textbf{p}_n^{(m)} = \textbf{A}_m \textbf{f}(\textbf{x}_n) + \textbf{O}_n^{(m)}$ hold for every $m$ and $n$. But this equality per se still does not guarantee the identifiability of any of $\textbf{A}_m$, $\textbf{f}(\textbf{x}_n)$, or the outlier $\textbf{O}_n$. The point made by Theorem 3.6 is that our formulation with a volume maximization constraint can identify all three terms. This theorem gives a useful insight: when we have sufficiently large $S$, using the outlier sparsity and volume regularization (as shown in lines 227-228, page 6) is expected to give good performance. 
We believe this insight itself can benefit practical implementations. __[Organization of the Proposed Method Section]__ We will do a thorough proofreading and polish the section. __[Regarding Multiple Annotators and the One-Label-Per-Item Case]__ This is an interesting question. The method does require the existence of multiple annotators, but it does not require each item to be labeled by all annotators (see the settings of Lemma 3.1 and Theorem 3.5). Therefore, even if each item is annotated by only one annotator, the method still works under reasonable conditions. To show this, we present some empirical results in _Tables 3 and 4 of the attached pdf_. The results suggest that the proposed method still outperforms the competing baselines in this setting. __[Initialization for A_m]__ Good initialization techniques are always appreciated. We will add a discussion in the revised version on initializing $\textbf{A}_m$ using existing methods such as those in [R1]. In _Table 5 of the attached pdf_, we also compared the performance of our approach using different initialization strategies. In particular, we compare the machine annotator experiment on CIFAR-10 using different initialization strategies: (i) $\textbf{A}_m$ initialized using identity matrices and (ii) $\textbf{A}_m$ initialized by the $\texttt{GeoCrowdNet(F)}$ method. We also include the performance of $\texttt{GeoCrowdNet(F)}$ for reference. We observe a slight improvement (around 0.1-0.4%) in accuracy when using initialization from $\texttt{GeoCrowdNet(F)}$, at the cost of training $\texttt{GeoCrowdNet(F)}$ for 10 epochs.
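The rebuttal's point that more annotations cannot remove the aggregated noise can be seen in a minimal simulation (ours, purely illustrative; the $2\times 2$ confusion matrix below is a made-up example, not from the paper): when the column of $\textbf{A}_m$ for the true class puts most of its mass off the diagonal, majority voting over arbitrarily many repeated annotations converges to the wrong label.

```python
import random
from collections import Counter

def annotate(true_class, A, rng):
    """Draw one noisy label: entry A[k][true_class] is P(observed = k | true class)."""
    r, acc = rng.random(), 0.0
    for k in range(len(A)):
        acc += A[k][true_class]
        if r < acc:
            return k
    return len(A) - 1

# Heavily non-diagonal confusion matrix: true class 0 is mislabeled as 1 w.p. 0.8.
A = [[0.2, 0.1],
     [0.8, 0.9]]

rng = random.Random(0)
votes = Counter(annotate(0, A, rng) for _ in range(10_000))
majority = votes.most_common(1)[0][0]
# As the number of annotations grows, the majority label concentrates on 1,
# not on the true class 0: identifiability must come from structure (multiple
# annotators, sparsity, volume maximization), not from more samples per item.
```

This is exactly the scenario where, per the rebuttal, $S \to \infty$ does not trivialize the result and the identifiability argument carries the weight.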
Summary: The paper further extends the estimation of the transition matrix in label-noise learning. In previous studies, label noise is often assumed to be class-dependent. Hence, one can use the noisy-label data and apply loss correction to learn a good classifier. The paper extends this modelling approach by considering most samples to have class-dependent label noise and some samples to have class- and instance-dependent noise. Those with instance-dependent label noise are then treated as outliers in the newly proposed modelling approach. Despite this modelling, the solution is still non-identifiable. To overcome that, the paper incorporates multiple labels from multiple annotators, so that one can solve for the nominal transition matrix and the classifier of interest. The paper also provides theoretical results to guarantee that the estimation holds under a certain confidence level. Empirical results show that models trained with the new loss function perform competitively with prior methods. Strengths: - The paper thoroughly presents the motivation and the formulation of its modelling approach. The paper also connects to previous studies when formulating noisy-label learning with instance-dependent noise as outliers. In general, the paper is well-written and easy to follow. - The main contribution of the paper is to extend the modelling that considers only a class-dependent transition matrix by adding *perturbations* induced by instance-dependent label-noise samples. The paper also analyses the theoretical guarantee when estimating the classifier of interest in terms of PAC-style formulae. Weaknesses: The major weakness of the paper may lie in the assumption that the majority of samples share the same nominal transition matrix, while a few have their own transition matrices. However, this does not detract much from the contribution of the paper.
Another weakness is the reliance on multiple annotators, while conventional noisy-label learning does not consider any additional labels. Of course, some studies (e.g., [43]) show that using a single noisy label results in non-identifiability unless additional constraints are exploited. Nevertheless, the multiple-annotator setting is closer to current practice. The downside is that there are not many multi-rater learning datasets for benchmarking. ### Minors - Line 76: typo "traget" -> target - The abbreviation "NMF" at line 100 is not defined. - Line 111: $E(x_{n})$ is a matrix. Thus, please make it clear what is meant when specifying $E(x_{n}) > 0$. Technical Quality: 3 Clarity: 3 Questions for Authors: I do enjoy reading the paper. It is coherent and easy to follow. I do not have any questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As mentioned in the weakness section above and in the conclusion of the paper, the paper only considers a few samples with instance-dependent label noise, while the remaining samples have class-dependent label noise, to reduce the complexity of the modelling and analysis. Such an assumption may not always hold in all settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful comments. __[Regarding the Lack of Public Multi-rater Datasets]__ We agree. It would benefit the community if more public multi-rater datasets were available. We are working on creating such data using Amazon Mechanical Turk (AMT), but the size of the data is still limited by resources/cost, and it might be beyond the timespan of this submission. A relevant remark is that, to conduct experiments under realistic multi-annotator settings, we considered machine annotators in our experiments (see our settings in Sec. 5.1 and Appendix G.2). Note that a machine annotator is a classifier (e.g., an SVM, kernel SVM, or KNN-based classifier) trained using limited data. Hence, they all make mistakes when "labeling" data. The setting is of interest because: (i) integrating the annotations from such erroneous classifiers is called ensemble learning, which is a special case of noisy-label learning and crowdsourcing---and is a real application in practice; (ii) the classifiers make mistakes according to the hardness of each item, and thus the setup well represents instance-dependent errors. We will release the code for making such machine annotator-based datasets, so that the community can easily create multi-rater scenarios with specified error rates, missing-label proportions, etc., to facilitate reproducible research in an economical way. __[Minor Points]__ Thanks for the attentive reading. We will fix the typos. In particular, $\textbf{E}^{\natural}(\textbf{x}_n)>0$ should be $\textbf{E}^{\natural}(\textbf{x}_n) \neq \textbf{0}$, that is, the matrix is not an all-zero matrix. --- Rebuttal Comment 1.1: Title: Comments by Reviewer TTRF Comment: Thank you, the authors, for discussing the concerns I raised in my initial review.
Regarding the public datasets, there are a number of available datasets that could be used in noisy-label learning and multi-rater learning: - 10 datasets mentioned in the NeurIPS 2022 paper: *Is one annotation enough? - A data-centric image classification benchmark for noisy and ambiguous label estimation* - *dopanim: A Dataset of Doppelganger Animals with Noisy Annotations from Multiple Humans*, available on Zenodo with DOI 10.5281/zenodo.11479590 In addition, I have just found another (preprint) paper investigating identifiability in noisy-label learning: *Towards the Identifiability in Noisy Label Learning: A Multinomial Mixture Approach*. In that paper, they showed that at least $2C - 1$ noisy labels per sample are needed to make the label-noise problem identifiable, where $C$ is the number of classes. This is, to me, more practical than [43], because the result of 3 noisy labels per sample in [43] is too optimistic, although in their paper they also mention that it must be larger than 3 (without explicitly saying how many). Hence, I think it is worth adding that to the discussion, especially since the authors have already shown that the more annotations, the better the estimation. In summary, I believe that this is a good paper in the field of noisy-label learning.
Summary: This paper studied the identifiability problem of instance-dependent label noise with multiple annotators. To achieve identifiability, this work first claimed that only a proportion of all instances may have a labeling difficulty that significantly deviates from the general population. Then, it connected the problem to the uniqueness of non-negative matrix factorization under mild assumptions. Inspired by the identifiability result, this work proposed an end-to-end, one-stage method to learn from crowds by identifying instance-dependent label noise. Experiments on multiple datasets with machine annotations and human annotators showed the effectiveness of the proposed method. Strengths: 1. This work provided some interesting identifiability results for learning from crowds, which may inspire further work. 2. The writing is excellent, making it easy to understand. 3. The proposed method is end-to-end and one-stage, which is nice for applications. 4. The case study in Figure 2 clearly showed the rationality of the results. Weaknesses: 1. As this work claimed, the number of annotators is important for identifying the instance-dependent label noise; are there experimental results verifying this analysis? 2. For human annotations, the annotations are usually sparse. Does the annotation sparsity level influence the identifiability and the performance of the proposed method? I suggest the authors run some experiments as in existing works to clarify it [1-3]. 3. The meaning of some terms, like "outliers", "the model of interest", "neural systems", is not clear when just reading the abstract. Besides, since the sparsity prior on the outliers is a basic assumption of this work, it should be referred to in the abstract. [1] Label correction of crowdsourced noisy annotations with an instance-dependent noise transition model. NeurIPS 2023 [2] Coupled confusion correction: learning from crowds with sparse annotations.
AAAI 2024 [3] Trustable co-label learning from multiple noisy annotators. TMM 2023 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The meaning of some terms, like "outliers", "the model of interest", "neural systems", is not clear when just reading the abstract. 2. It seems that there is a symbol that is not displayed in the restrictive condition of Eq.(6). 3. What does $e^\star_n$ mean in Line 129? 4. What does "over-canceling” mean in Line 211? 5. I suggest placing the learning-from-crowds methods after the learning-from-single-annotator methods in Table 1 and 2. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __[Regarding Impact of the Number of Annotators]__ In Figure 3 of supplementary Section H.1, we presented experiments showing the impact of the number of annotators on outlier identification. The results indicate that increasing the number of annotators improves the detection of instance-dependent samples as well as the classification performance. We will add clearer pointers to this result in the supplementary material. __[Regarding "Sparse Annotations"]__ Thanks for this suggestion. We would like to remark that our identifiability analysis does cover the "sparse annotation" case—see Eq. 10, Lemma 3.1 and Theorem 3.5, where the bounds depend on $S$, i.e., the number of observed labels. Here, $S$ can be much smaller than $NM$, where $N$ is the number of data items and $M$ is the number of annotators. We also thank the reviewer for providing the papers addressing sparse annotations. We will discuss them in "Related Works". The interest in sparse annotations also attests to the importance of our theoretical result in Theorem 3.5. In our real-data experiments, the LabelMe dataset has many missing labels (please see Table 2 of the manuscript). To be specific, only 2547 annotations are obtained from 59 workers for 1000 images, which implies that 95% of the annotations are missing ($S/(NM) \approx 0.05$). Our approach scores the best classification performance in this case, as shown in Table 2 of the manuscript. To further analyze the effect of missing annotations, we followed the reviewer's suggestion and conducted the following synthetic-data experiments for various levels of sparsity on the CIFAR-10 dataset—see _Tables 1 and 2 in the attached pdf_. The results show that the proposed method is more robust to the negative effect of missing annotations, relative to the best-performing baselines, namely $\texttt{MaxMIG}$ and $\texttt{GeoCrowdNet}$. __[Regarding Abstract]__ Thank you for your suggestions.
We agree that the terminologies can be simplified/unified to improve clarity. Mentioning the sparsity prior also makes sense to us. We will revise our abstract accordingly. __Questions:__ __[Q1]__ Agreed. We will revise accordingly. __[Q2]__ The symbol $\mathbb{I}[E]$ in the constraint denotes the indicator function, whose value is 1 if the event $E$ happens and 0 otherwise. We defined this notation in Table 3 in Appendix A. To avoid confusion, we will make this clearer in the main text. __[Q3]__ $(\textbf{A}_n^\star, \textbf{e}_n^\star, \textbf{f}^\star)$ denotes an optimal solution of Problem (6). We will make this clearer. __[Q4]__ Ideally, we hope to cancel/exclude exactly the set of outliers from the process of learning $\textbf{f}$. This would require us to set $C=\|\mathcal{I}\|$ in Problem (6). Nonetheless, the exact number $\|\mathcal{I}\|$ is hard to estimate. Our analysis in Theorem 3.6 shows that even when $C$ is overestimated (i.e., $C > \|\mathcal{I}\|$), our method can still cancel out all the outliers. As a price to pay for overestimating $C$, there will be $C - \|\mathcal{I}\|$ nominal data samples also identified as outliers and excluded from learning $\textbf{f}$. This is what we meant by "over-canceling". Thanks for pointing out this ambiguity. We will rephrase this sentence to make the discussion clearer. __[Q5]__ Thank you for your suggestion. We will do the rearrangement in the revised version.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their attentive reading and valuable comments/suggestions. We have replied to each reviewer in their corresponding sections. Here we present a summary of the major comments and our responses. __[Reviewer okUb]__ Reviewer okUb suggested investigating the impact of annotation sparsity and the number of annotators. We clarified that the effects of both annotation sparsity and the number of annotators are covered by our analysis and some existing experiments. To further examine these aspects, we have additionally run more experiments under various sparsity levels and presented the results in _Tables 1 and 2 in the attached pdf_. We also clarified a number of notation and terminology questions. __[Reviewer TTRF]__ Reviewer TTRF made a comment regarding the lack of public multi-rater datasets in this field. We followed up with some thoughts and offered our perspectives. We highlighted the benefits of including machine annotators in our experiments and their usefulness in facilitating reproducible, realistic experiments in an economical way. __[Reviewer FFyM]__ First, reviewer FFyM raised concerns about the significance of the infinite-sample analysis in Theorem 3.6. In our response, we clarified the merit of Theorem 3.6 in the infinite case, when the number of annotations goes to infinity. We pointed out that the aggregated noise would not be canceled out by simply letting $S \to \infty$ and explained the practical implications and significance of the identifiability analysis of Theorem 3.6. Second, reviewer FFyM asked about the importance of multiple annotators and suggested considering a one-label-per-sample case. We clarified that our method is able to work with such single-label data and provided additional results in _Tables 3 and 4 in the attached pdf_.
Lastly, we followed reviewer FFyM’s suggestion on trying different initializations, and provided additional empirical study (in _Table 5 of the attached pdf_). __[Reviewer BkAV]__ Reviewer BkAV made a comment about experiments using large-scale datasets. We replied and offered our perspectives on these points. To address this comment, we conducted an experiment on the full SVHN dataset which contains more than 600k images. The result is presented in _Table 6 in the attached pdf_. In addition, reviewer BkAV asked about the performance of the proposed method under imbalanced annotation noise. We pointed out that we had already considered such a setting in our experiments and provided a pointer to the pertinent results in the appendix. Pdf: /pdf/ef974c16f9800e1bd820c847e5c710a8b1d1a32f.pdf
NeurIPS_2024_submissions_huggingface
2024
LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language
Accept (poster)
Summary: This paper focuses on the setting where the goal is to use LLMs to make numerical predictions, as these models can be grounded by the provision of side information (given in text) as well as other information that may be learned during their pretraining. The authors propose LLM Processes, an approach to applying LLMs to numerical tasks, such as density estimation and multivariate time-series tasks. In computing their densities over continuous-valued numerical inputs, they employ binning as is done in LLMTime, although they do not rescale to remove decimal points. They also add an additional terminal special token (<t>). In selecting their input prompt formats, they use separators between each x, y and (x, y) pair, order points by distance to the current point, scale the y values to be closer to [0, 1] and do not incorporate -1 values. On many 1D synthetic tasks, LLMP demonstrates matching or better performance when compared to GPs with an RBF kernel. Their approach outperforms the most related baseline of LLMTime in terms of both NLL and MAE in predictions on a Weather time-series benchmark. Their experiments also outperform a GP (with RBF kernel) on a multidimensional input/output task of simultaneous temperature, rainfall and wind-speed regression. They also provide experiments looking at the ability of LLMP to handle textual information (alongside numerical inputs); they provide descriptions of the numerical features, which cannot be used in baseline forecasting approaches that do not understand text. Strengths: 1. Good empirical results showing that the proposed approach improves over the baselines of LLMTime and a GP with an RBF kernel across a variety of univariate and multivariate forecasting tasks. 2. Ablations supporting particular choices for input formatting and scaling Weaknesses: Slightly unclear in the evaluation; please see questions below. Technical Quality: 4 Clarity: 3 Questions for Authors: 1.
It’s not immediately clear to me what the takeaways from Figure 9 are. What are the axes for each of the subplots? 2. My current understanding is that incorporating some additional textual information should meaningfully change the predictive distribution, which seems to occur. Is there any way to measure how correct this change in distribution is? 3. For instance, in Figure 9d, it seems that the median from 5 random samples seems to be roughly constant (around 2.5-3), which does not seem to match the ground truth values in 9f. Is this case just a failure of the LLM Process? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes, limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for taking the time to review our paper and for the positive words about our contributions. We address your questions below. Figure 9 is intended to demonstrate how predictions from an LLMP can be influenced by conditioning on various scenarios communicated via text prompts to the model. This experiment is a qualitative investigation with no specific ground truth information. However, we hope to show that a prompt containing two synthetic initial data points plus some descriptive text will yield samples that are meaningful. We provide units for the axes in the prompts (detailed in Appendix J.1) as follows: - Figure 9b) “The following are daily temperature measurements from Montreal in January in degrees Celsius” - Figure 9c) “The following are daily temperature measurements from Montreal in May in degrees Celsius” - Figure 9d) “In the following series, the first number is the number of Months from January and the second is the Monthly precipitation measurements in inches from San Diego, CA” - Figure 9e) “In the following series, the first number is the number of Months from February and the second is the Monthly precipitation measurements in inches from Singapore” We will add these units to the plots in the next revision. You are correct that the median response in Figure 9d) does not match the ground truth in Figure 9f) particularly well. They match in the sense that both figures indicate that the precipitation is quite low all year round in Singapore, but it does not match the minor rising and falling trend. We think that this can be attributed to the fact that since the synthetic datapoints also do not match actual values very well, the LLMP had difficulty reconciling this mismatch between values and text information. We included this experiment primarily to demonstrate that the model is aware of this seasonal trend in San Diego vs. 
Singapore (Singapore has one of the least seasonal trends in precipitation on earth) and that it can reasonably modify the LLMP's predictive distribution on the same collection of data values by varying the prompts. Different from Figure 9, Figure 10 is a quantitative investigation of how incorporating textual information can improve predictive performance. In this experiment, we measure how correct this change in predictive distribution is by examining improvements in the Negative Log-Likelihood (NLL) and the Mean Absolute Error (MAE) given by the predictive distribution when text is included in the prompt versus when it is excluded. Figure 10 shows that both these metrics improve (i.e., are lower) when conditioning on text. Please let us know if you have any more comments or questions. If we have adequately addressed your concerns, we kindly ask that you consider revising your score. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: Thank you for the clarifications about the evaluation and the meaning of the axes in the figures. I'll maintain my stance of a weak accept; I think there are useful comparisons to prior work and ablations that illuminate both the abilities and failures of the application of LLMs to numerical prediction.
Summary: This paper proposes using LLMs to model joint distributions over numerical outputs while conditioning on potentially multiple covariates per data point. This is achieved by tokenizing a series of input-output pairs and then decoding corresponding outputs for another input. Numerical outputs are obtained by treating the number as a sequence of digits, each of which is represented with a token. The method is empirically validated, with additional experiments showcasing the capability of conditioning on additional contextual information, such as text describing the process. Strengths: The problem statement is innovative, and the paper clearly investigates many different ablations and alternatives. Extensive empirical results are shown, investigating different failure modes and comparing performance to traditional methods. The method demonstrates great versatility and interesting applications. Additionally, the paper itself is very well-written and clear in its presentation. Weaknesses: The main weakness in the approach that I potentially see is the runtime. From what I could tell, there are no runtime results in either the main paper or the appendix. For processing a sequence of input-output pairs, what is the general runtime exhibited, and how does it change as the sequence grows? Technical Quality: 4 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors extensively discussed the limitations and societal impact of the model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
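For intuition on the digit-wise likelihood described in this summary: a numeric string's probability mass is the product of its per-digit token probabilities, and dividing that mass by the bin width implied by the number of decimals turns it into a density. The sketch below is a simplified illustration (the uniform-within-bin assumption and the per-digit log-probabilities are made up, not the paper's exact recipe):

```python
import math

def value_logprob(digit_logprobs, n_decimals=2):
    """Sum per-digit token log-probs to get the log-mass of the numeric
    string, then convert mass to density by dividing by the bin width
    (assumed uniform within a bin of width 10**-n_decimals)."""
    log_mass = sum(digit_logprobs)
    return log_mass - math.log(10.0 ** (-n_decimals))

# e.g. the string "3.14": three digit tokens, each with an assumed
# log-probability of log(0.5) under the LLM.
lp = value_logprob([math.log(0.5)] * 3, n_decimals=2)
print(round(lp, 3))  # 2.526 (a log-density, so positive values are fine)
```

Note that log-densities can legitimately exceed zero once the bin-width correction is applied, which is why NLL comparisons against continuous baselines such as GPs are meaningful.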
Rebuttal 1: Rebuttal: Thanks for taking the time to review our paper and for the positive words about our contributions. We address your questions below. **The main weakness in the approach that I potentially see is the runtime. From what I could tell, there are no runtime results in either the main paper or the appendix. For processing a sequence of input-output pairs, what is the general runtime exhibited, and how does it change as the sequence grows?** We agree that this is the main weakness of our method. We view the main use case of LLMPs as being when the benefits of incorporating textual information into your regression problem outweigh the significant computational expenses involved. Appendix G "Additional Implementation Details" does contain some example runtimes. It states that processing times vary as a function of: - the GPU used, - the length of the prompt, - the number of target points queried, - the number of tokens required to be generated for a particular target point, - the number of samples taken at each target point, - and whether independent or autoregressive sampling is used. However, Appendix G does not specifically address your questions, so below is some new data that hopefully will give you a more tangible understanding of runtime performance. Note that this new data is quantitative as opposed to general "Big O" characterizations. The table below shows the time to load the LLM into GPU memory, the time for the LLM to generate all samples at all target points, and the time to compute the probability distribution over the true target points. All runs used the Llama-2-7B LLM and were executed on an NVIDIA 3090 GPU with 24GB of memory with a batch size of 10. All times are in seconds.
| Function | Model | Load (s) | Sample (s) | Likelihood (s) | |:--------------------------------------------------|:--------:|:----------:|:------------:|:------------------------:| | Quadratic - 10 Training Points, 40 Target Points | I-LLMP | 5 | 81 | 1 | | Quadratic - 10 Training Points, 40 Target Points | A-LLMP | 5 | 170 | 3 | | Quadratic - 50 Training Points, 40 Target Points | I-LLMP | 5 | 259 | 4 | | Quadratic - 50 Training Points, 40 Target Points | A-LLMP | 5 | 354 | 7 | From the table, we can see that the longer the prompt, the longer the computation time for each target point. For independent sampling (I-LLMP), the prompt length is constant and is only a function of the number of training points as each target point is processed independently. For autoregressive sampling (A-LLMP), the prompt length is a function of both the number of training points and the number of target points since each target point is appended to the prompt as it is sampled. We will add these details in the next revision. --- Rebuttal 2: Comment: Thank you for the response to my question, this does indeed directly communicate the type of information I wanted to see regarding this issue. As far as the paper, I do still feel strongly that this is innovative work with a great deal of experimental results backing the empirical findings. As such, I maintain my original score.
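The scaling pattern in the table above can be reasoned about with a back-of-the-envelope model: I-LLMP reprocesses a fixed-length context for every target, while A-LLMP's context grows by one point per sampled target, so A-LLMP's relative overhead shrinks as the number of training points grows. The per-point token count below is an assumed constant; only the scaling behaviour matters:

```python
TOKENS_PER_POINT = 8  # hypothetical tokens per "(x, y)" pair, incl. separators

def prompt_tokens(n_train: int, n_targets: int, autoregressive: bool) -> int:
    """Total context tokens processed across all target queries."""
    total = 0
    for t in range(n_targets):
        # A-LLMP appends each sampled target to the prompt as it goes.
        context = n_train + (t if autoregressive else 0)
        total += context * TOKENS_PER_POINT
    return total

for n_train in (10, 50):
    i_llmp = prompt_tokens(n_train, 40, autoregressive=False)
    a_llmp = prompt_tokens(n_train, 40, autoregressive=True)
    print(n_train, i_llmp, a_llmp, round(a_llmp / i_llmp, 2))
# With 10 training points A-LLMP processes ~2.95x the tokens of I-LLMP;
# with 50 training points the ratio drops to ~1.39x, qualitatively
# consistent with the measured sample times (170/81 vs 354/259).
```

This is only a token-count proxy; actual wall-clock time also depends on generation length and batching, as the rebuttal notes.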
Summary: This paper investigates the regression problem in large language models via in-context learning. The authors evaluate a variety of regression tasks such as forecasting and time-series prediction, multi-dimensional regression, and more. They look into prompt engineering, exploiting both numerical examples and their textual explanations to elicit coherent predictive distributions. Strengths: -The problem of investigating the regression capabilities of language models via in-context learning is very interesting and important. -The paper provides a large amount of experimental results and research work. Weaknesses: -Some parts of the paper were unclear to me, especially the experimental part. -Though the related works are covered well in terms of citations, the actual results are not compared to the previous ones. See my questions below. Technical Quality: 4 Clarity: 3 Questions for Authors: If I correctly understand, the experimental part reports results on both training and prompting only. In this case, make these two paradigms clearer in the organization of the experimental results. The paper that is cited [21]: their experiments should be comparable to this work; why are the results not compared? While you apply more technical variations here, what is the main message and your new contribution compared to them? Can you put their results and yours side by side and compare them? [21] From words to numbers .... Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for taking the time to review our paper and posing insightful questions. Answers to your questions are below: **If I correctly understand, the experimental part reports results on both training and prompting only. In this case, make these two paradigms clearer in the organization of the experimental results.** Apologies if the organization of the experimental results section was not clear. We decided to organize our experiments into two main sections: the first section examines how LLM Processes (LLMPs) perform on purely numerical tasks, while the second section examines the influence of conditioning LLMPs on text and similar examples in-context. Note that for our proposed methods (I-LLMP and A-LLMP), no training is ever involved. These methods use prompting only. Only the Gaussian Process hyperparameters are trained, for comparison with LLMPs. We will clarify this point at the start of the experimental sections in the next revision of the paper. **The paper that is cited [21]: their experiments should be comparable to this work; why are the results not compared? While you apply more technical variations here, what is the main message and your new contribution compared to them? Can you put their results and yours side by side and compare them?** The primary reason we did not compare to the results in the "From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples" paper is that it was released on arXiv on April 11, 2024, only a little more than a month before the NeurIPS submission deadline of May 22, 2024. By the time we became aware of the paper, we had finished running our experiments and had started writing up the paper. However, below are the results of a new experiment where we compare our results to theirs on the Original #1 dataset (see Section 4.2 in the "From Words to Numbers" paper).
The experimental set-up is as follows: there are 100 trials, with each trial consisting of 50 training points and a single target point. The training and target points for each trial are randomly generated using the function described in Equation 2 of the "From Words to Numbers" paper. We use the code from their paper to generate the data and evaluate their approach, and compare it to ours using identical numerical data. We use the Llama-2-7B LLM for both methods to ensure a fair comparison. Our method (I-LLMP) achieved a lower Mean Absolute Error (MAE) on 78 of the 100 trials when compared to their method. When the errors are averaged over the 100 trials, our average error was 0.836 and theirs was 3.137. These results indicate that our LLMP approach is clearly superior to the approach employed in the "From Words to Numbers" paper. This is due to two factors: a) we sort the training points according to distance to the current target point when creating the prompt, whereas they do not, and b) we form a distributional estimate for the predicted point and then take the median sample value as the best estimate, whereas they generate a single point estimate. We will include this comparison in the updated version of our paper. In our paper, we also compare to the LLMTime method in Figure 5 and to Gaussian Processes in Table 1 and Figure 7. These results show that we significantly outperform LLMTime, especially in terms of negative log-likelihood (NLL), and are better overall when compared to Gaussian Processes on a wide variety of functions. While we also show that LLMs are capable of non-linear regression, our contributions go significantly further than both the "From Words to Numbers" paper and LLMTime: - Our primary contribution is to condition on problem-relevant text and demonstrate how this can improve prediction performance. Their work does not consider this.
- Our work presents how to elicit full numerical predictive distributions from LLMs using two different methods (sampling-based and logit-based), while their method employs only point estimates for predictions. - We perform a comprehensive analysis of various prompt formats, including ordering and scaling. They do no such experimentation and use only a single fixed prompt format. - We present a novel auto-regressive LLM sampling approach that yields superior results to the independent sampling approach used in their paper. - We compare our results to Gaussian Processes, viewed as the gold standard for probabilistic regression, while they compare only to simpler approaches. Please let us know if you have any more comments or questions. If we have adequately addressed your concerns, we kindly ask that you consider revising your score. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my question and adding the new results. I have raised my score.
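The two design choices credited in this rebuttal (ordering context points by distance to the query, and taking the median of sampled completions as the point estimate) can be sketched as follows; the separators and number formatting here are hypothetical stand-ins, not the paper's exact prompt format:

```python
import statistics

def build_prompt(train, x_target, sep=", ", pair_sep="\n"):
    """Order context pairs so points closest to the query come last,
    then leave the final y blank for the model to complete."""
    ordered = sorted(train, key=lambda pt: -abs(pt[0] - x_target))
    lines = [f"{x:.2f}{sep}{y:.2f}" for x, y in ordered]
    lines.append(f"{x_target:.2f}{sep}")  # the LLM fills in this y value
    return pair_sep.join(lines)

def point_estimate(samples):
    """Median of sampled completions: robust to occasional wild samples."""
    return statistics.median(samples)

train = [(0.0, 0.1), (1.0, 1.1), (2.0, 3.9)]
print(build_prompt(train, 1.5))
print(point_estimate([1.9, 2.1, 2.0, 5.0, 1.8]))  # 2.0 despite the 5.0 outlier
```

The median is what makes the distributional estimate robust: a single wild completion shifts the mean substantially but leaves the median untouched, which plausibly accounts for part of the MAE gap reported above.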
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for the time and effort they put into reading and commenting on our paper. Our work presents a new zero-shot approach for generating probabilistic predictions with LLMs using plain language to augment numerical data. Reviewers believe that our work is both important and innovative, with Reviewer Qkoz stating that the “problem statement is innovative” and Reviewer dL9w saying that “The problem of looking into the regression capabilities of language models in their in-context learning is very interesting and important.” All reviewers agreed that the experiments, ablation studies, and competitive performance comparisons were extensive and comprehensive. Reviewer Qkoz added that “The method demonstrates great versatility and interesting applications.” We believe that we have addressed all of the reviewers’ questions and concerns in the rebuttal. In addition, we added two new experiments in response to some of the questions raised. The first new experiment shows that our LLM regression approach is superior to the one described in the recent “From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples” paper, and the second new experiment provides processing times for a representative use case. Please let us know if there are any further questions.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Preventing Model Collapse in Deep Canonical Correlation Analysis by Noise Regularization
Accept (poster)
Summary: This paper reveals a new observation in multi-view representation learning, that is, the performance of DCCA-based methods will gradually decrease as training progresses. The authors explore the possible reasons from the rank of weights and conclude that the Correlation Invariant Property is the key to preventing this problem, and propose NR-DCCA. Experiments also show the effectiveness of the proposed NR-DCCA. Strengths: 1. The perspective of observation is novel. 2. The method proposed by the author is simple and effective. 3. The theoretical analysis seems sound. Weaknesses: 1. In fact, the authors call this degradation phenomenon model collapse, which is not very accurate. Collapse should be very extreme. The performance degradation is a bit like an overfitting problem. 2. The method is effective, but it cannot be denied that it is too simple. 3. The legend may be better if it is a vector diagram. 4. The authors' theoretical analysis is based on Gaussian white noise. Is it the same for other types of noise? 5. Is NR loss also universal and effective on other DCCA methods? I think this is very important because it avoids the risk that DCCA is a special case. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments and feedback. In addition to the general response, we address your itemized concerns here. >In fact, the authors call this degradation phenomenon model collapse, which is not very accurate. Collapse should be very extreme. The performance degradation is a bit like an overfitting problem. Your question is very helpful in understanding the model collapse of DCCA. Previous studies have referred to weights being redundant and low-rank as an over-parametrization issue [1,3], and the impaired feature quality and downstream task performance caused by this issue is called collapse [2]. Similarly, in this paper, we call the degradation of DCCA on the downstream task model collapse. It is worth noting that this is not the same as overfitting in traditional supervised learning: the collapse of DCCA is not caused by memorizing the training data and thereby losing generalizability. [1] Wang, Zhennan, et al. "MMA regularization: Decorrelating weights of neural networks by maximizing the minimal angles." *Advances in Neural Information Processing Systems* 33 (2020): 19099-19110. [2] Jing, Li, et al. "Understanding dimensional collapse in contrastive self-supervised learning." *arXiv preprint arXiv:2110.09348* (2021). [3] Barrett, David GT, and Benoit Dherin. "Implicit gradient regularization." *arXiv preprint arXiv:2009.11162* (2020). >The method is effective, but it cannot be denied that it is too simple. Our NR method is simple and intuitive, yet it rests on a number of theoretical guarantees, including why NR guarantees the CIP property that constrains the behavior of the network (Theorem 1), and why NR guarantees that features do not degrade (Theorem 2). Our NR method can also be generalized to other methods such as DGCCA. We believe a simple method is a plus, rather than a minus, as it can be used with different types of DCCA very easily and can be widely adopted as a plug-and-play module.
>The legend may be better if it is a vector diagram. We are grateful for your suggestion; we will improve the quality of the figures in the next version. >The authors' theoretical analysis is based on Gaussian white noise. Is it the same for other types of noise? Your question is very helpful in understanding NR. From our theoretical analysis, the most important feature of the noise is that the sampled noise matrix is full-rank. Therefore, continuous distributions such as the uniform distribution can also be applied in NR, which demonstrates the robustness of the NR method. Here are our experiments on synthetic datasets (mean and variance are computed over multiple datasets with different common rates):

| Epoch | DCCA | NR-DCCA (Gaussian Noise) | NR-DCCA (Uniform Noise) |
| :---- | :---- | :---- | :---- |
| 100 | 0.284 +/- 0.012 | 0.295 +/- 0.005 | 0.291 +/- 0.004 |
| 800 | 0.137 +/- 0.028 | 0.313 +/- 0.004 | 0.313 +/- 0.005 |
| 1200 | 0.106 +/- 0.027 | 0.312 +/- 0.005 | 0.316 +/- 0.005 |

As can be seen, their effects on preventing model collapse and improving the performance of DCCA are almost identical. >Is NR loss also universal and effective on other DCCA methods? I think this is very important because it avoids the risk that DCCA is a special case. We strongly agree with you that it is important that the NR method works for other DCCA methods. In particular, we test NR-DGCCA in this paper and compare it with other methods such as DGCCA. Please see Appendix A.8 (DGCCA and NR-DGCCA) in our paper. Moreover, we have added two new DCCA methods, DCCA_EY and DCCA_GHA, which replace the matrix factorization in CCA with matrix operations, and in follow-up work we can check whether NR can be used with them [1]. [1] Chapman, James William Harvey, Ana Lawry Aguila, and Lennie Wells. "A Generalized EigenGame With Extensions to Deep Multiview Representation Learning."
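The full-rank property this reply relies on can be checked numerically: noise sampled from any continuous distribution yields a full-rank matrix with probability 1. A minimal sketch, assuming NumPy; the matrix sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dims = 50, 20

# Noise drawn from a continuous distribution gives a full-rank
# matrix with probability 1, whether Gaussian or uniform -- the
# property the theoretical argument rests on.
gaussian_noise = rng.standard_normal((n_samples, n_dims))
uniform_noise = rng.uniform(-1.0, 1.0, size=(n_samples, n_dims))

print(np.linalg.matrix_rank(gaussian_noise))  # 20
print(np.linalg.matrix_rank(uniform_noise))   # 20
```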
--- Rebuttal Comment 1.1: Comment: I appreciate the effort put into the rebuttal, which addressed some of my concerns. After reading the other reviews and replies, I have decided to raise my score accordingly. --- Reply to Comment 1.1.1: Comment: Thanks very much for your recognition of our work!
Summary: The authors propose NR-DCCA, a novel approach to Deep Canonical Correlation Analysis (DCCA) equipped with noise regularization, in order to prevent DCCA-based methods from model collapse in the multi-view representation learning (MVRL) task. First, the authors analyze the difference between Linear CCA and DCCA and draw a conclusion on the cause of model collapse in DCCA. They then propose the NR-DCCA method, which integrates noise regularization into DCCA. Theoretical analysis is also provided to further illustrate the effect of noise regularization. Experimental results show the effectiveness of NR-DCCA. Strengths: 1. Originality: Although noise regularization is not a novel approach, the authors point out the key difference between Linear CCA and DCCA, which may be the cause of model collapse in DCCA, and successfully use noise regularization to alleviate this issue. Therefore, I think this paper is innovative enough. 2. Quality: The paper is of high quality for several reasons. First, it is meaningful to find the key difference between Linear CCA and DCCA and its effect, which can help researchers gain a deeper understanding of CCA and DNN-based methods. Second, sufficient theoretical analyses are provided; to my knowledge, these proofs are correct. Third, the experiments are well organized and explained. 3. Clarity: The writing style is good and the core ideas are easy to understand. 4. Significance: The significance of the paper is not that considerable compared to its originality and quality. However, it is still good. Weaknesses: 1. The authors hypothesize that the root cause of model collapse in DCCA is that the weight matrices of the network are not guaranteed to be full-rank, which possibly leads to overfitting. The experimental results are good and I think this hypothesis is probably right to some extent. However, as far as I know, the relation between the full-rank property of weight matrices and the overfitting phenomenon is still underexplored.
The authors do not provide enough proof to support this opinion. Besides, ‘full-rank’ is not a precise (even misleading) expression if the authors choose to use NESum as the metric to evaluate the redundancy of weight matrices. 2. The theoretical proof of Theorem 2 only shows that noise regularization does not cause a serious degradation of DCCA’s performance. However, why noise regularization works in DCCA still remains unclear. The authors just write that they try to enforce DCCA to mimic the behavior of Linear CCA, but no further illustration is provided. 3. Some texts in the figures are too small. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why can noise regularization enforce DCCA to mimic the behavior of Linear CCA and therefore prevent DCCA from model collapse? 2. The authors need to polish their expressions. According to the math formula of NESum, a full-rank matrix with considerably different eigenvalues can still have a low NESum score. For example, diag(1000,1,1) has a lower NESum than diag(2,1,0), even though the former is full-rank and the latter is not. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments and feedback. In addition to the general response, we address your itemized concerns here. >However, as far as I know, the relation between the full-rank property of weight matrices and the overfitting phenomenon is still underexplored. The authors do not provide enough proof to support this opinion. Thanks for your comment. There is theoretical work devoted to the phenomenon that weight matrices tend to become low-rank in overfitting networks [1,2]. In the field of representation learning, [1] mentions that due to the interaction between network weight layers, the weight matrix collapses (becomes low-rank), which in turn affects the quality of the representation. We can observe very clearly that the performance of DCCA decreases with training while its NESum decreases rapidly, so we hypothesize that weight-matrix collapse (low rank) also occurs in DCCA. More theoretical work, such as analyzing how the layers in DCCA interact with each other and why they become low-rank, is also very important, and we leave it for future work. In this paper, we mainly provide practical solutions to model collapse, and the theoretical analysis is for justification purposes. [1] Jing, Li, et al. "Understanding dimensional collapse in contrastive self-supervised learning." *arXiv preprint arXiv:2110.09348* (2021). [2] Barrett, David GT, and Benoit Dherin. "Implicit gradient regularization." *arXiv preprint arXiv:2009.11162* (2020). >Besides, ‘full-rank’ is not a precise (even misleading) expression if the authors choose to use NESum as the metric to evaluate the redundancy of weight matrices. Indeed, there is a difference between the notion of the rank of a matrix and that of NESum. NESum was originally proposed as a measure of whether the eigenspace is dominated by a small number of very large eigenvalues. We simply follow the previous research and use this metric to measure the eigenvalue distribution and redundancy of the matrix.
In some cases where the eigenvalues drop suddenly, e.g., NESum(2, 1, 0.1) = 1 + 0.5 + 0.05 = 1.55 (drop 1 -> 0.1) versus NESum(2, 2, 0) = 1 + 1 + 0 = 2 (drop 2 -> 0), the full-rank matrix has the smaller NESum. In order to better analyze the weight matrices in DNNs, we add the average cosine similarity between the dimensions of the weight matrices as another measure of redundancy. Taking DCCA and NR-DCCA on the synthetic datasets as an example:

| Epoch | NESum (DCCA) | NESum (NR-DCCA) | Cosine Similarity (DCCA) | Cosine Similarity (NR-DCCA) |
| :---- | :---- | :---- | :---- | :---- |
| 100 | 0.260 | 0.227 | 0.074 | 0.070 |
| 800 | 0.149 | 0.330 | 0.093 | 0.052 |
| 1200 | 0.141 | 0.354 | 0.098 | 0.050 |

It can be seen that the cosine similarity and NESum metrics move in sync: weight redundancy in DCCA increases (cosine similarity increases, NESum decreases) while NR-DCCA redundancy decreases (cosine similarity decreases, NESum increases). In particular, we visualize the correlation matrix and the eigenvalues of the first linear layer in DCCA and NR-DCCA. Again, it can be seen that the redundancy of the weight matrices in DCCA is rising (**please see the pdf material**). >The theoretical proof of Theorem 2 only shows that noise regularization does not cause a serious degradation of DCCA’s performance. However, why noise regularization works in DCCA still remains unclear. The authors just write that they try to enforce DCCA to mimic the behavior of Linear CCA, but no further illustration is provided. Your suggestions are very helpful for the understanding of NR. First of all, the key difference between DCCA and Linear CCA is that the weight matrices in DCCA are not guaranteed to be full-rank during optimization. Combined with the previous research, we believe that this difference leads to model collapse in DCCA, and therefore we need to design a method to constrain the behavior of DCCA.
As for the full-rank property of Linear CCA, we prove through Theorem 1 that it is equivalent to CIP. So we design NR, through which DCCA acquires the CIP property, and its behavior is constrained to be consistent with that of Linear CCA. Theorem 2 shows, at the feature level, that NR guarantees the generated features will not degenerate (in terms of reconstruction and denoising). >Some texts in the figures are too small. We apologize for the small size of the text in the figures. Due to space constraints, we put too many images under the same figure; this will be improved in the next version. >Why can noise regularization enforce DCCA to mimic the behavior of Linear CCA and therefore prevent DCCA from model collapse? Thanks for your question. The key difference between DCCA and Linear CCA is that the weight matrices in DCCA are not guaranteed to be full-rank during optimization. Combined with the previous studies, we believe that this difference leads to model collapse in DCCA, and therefore we need to design a method to constrain the behavior of DCCA. As for the full-rank property of Linear CCA, we prove through Theorem 1 that it is equivalent to CIP. So we design NR, through which DCCA acquires the CIP property, and its behavior is constrained to be consistent with that of Linear CCA. >The authors need to polish their expressions. According to the math formula of NESum, a full-rank matrix with considerably different eigenvalues can still have a low NESum score. For example, diag(1000,1,1) has a lower NESum than diag(2,1,0). Your suggestion is valuable, and we have added the cosine similarity between the elements of the weight matrix as an indicator of weight redundancy.
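For concreteness, the two redundancy metrics discussed in this exchange can be sketched as follows. This is a minimal illustration assuming NumPy: NESum is reconstructed from the worked examples above (sum of eigenvalues normalized by the largest), not taken from the authors' code, and the cosine-similarity definition is our reading of "average cosine similarity between dimensions".

```python
import numpy as np

def nesum(eigenvalues):
    # NESum as used in the worked examples above: the sum of the
    # eigenvalues, each normalized by the largest eigenvalue.
    ev = np.asarray(eigenvalues, dtype=float)
    return float(np.sum(ev / ev.max()))

def mean_abs_cosine_similarity(weights):
    # Average |cosine similarity| between rows of a weight matrix,
    # the extra redundancy measure proposed in this reply: higher
    # values mean more redundant (more collinear) weight rows.
    W = np.asarray(weights, dtype=float)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    sim = np.abs(W @ W.T)
    n = W.shape[0]
    # Average over the off-diagonal entries only.
    return float((sim.sum() - n) / (n * (n - 1)))
```

This also reproduces the reviewer's point: `nesum([1000, 1, 1])` is about 1.002, below `nesum([2, 1, 0])` = 1.5, even though only the former corresponds to a full-rank matrix.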
Summary: The paper proposes a noise regularization term to prevent the collapse issue found in deep canonical correlation analysis (DCCA) methods. The term makes DCCA behave in a way similar to linear CCA, which is robust to collapse by definition, thereby making DCCA with the regularization robust to the collapse issue. Strengths: 1. Clear and consistent performance improvement by the proposed method in both synthetic and practical settings. 2. The proposed method sounds reasonable: with the proposed regularization, DCCA mimics linear CCA in terms of collapsing. Since the latter is resilient to collapse by definition, NR-DCCA becomes robust to collapse by mimicking linear CCA. Weaknesses: 1. Readability. It is quite hard to find the exact configuration used for the real-world experiments, such as which encoders are used for DCCA. The content in the appendix is huge, but it is not well organized. 2. The degree of model collapse will depend on the model complexity (e.g., the number of parameters in the MLP), but there is no clear analysis of this aspect. Would the proposed noise regularization prevent the collapse issue under any degree of MLP complexity? Technical Quality: 3 Clarity: 1 Questions for Authors: 1. In Line 137, the setting assumes the same sample size for all the datasets X_k. Is it necessary, and how about other CCA methods? 2. What are the practical applications of the CCA methods? Despite the highly theoretical nature of the work, I think it would be better if there were at least one line of comment that explains a practical use case of the CCA methods in the introduction or appendix for readers who are not familiar with the field. 3. In Line 137, what are actual examples of X_k in practice? 4. The experiment setting is not self-contained. What is the exact protocol from Hwang et al. (2021) in Line 250? It should be given in the appendix. 5. What is the deep network $f$ used for DCCA? 6.
What if the features in X_k are the representations from foundational pre-trained encoders but possibly of different vector dimension across different views? 7. The NESum recovers after epoch 600 in Figure 3 for DCCA_PRIVATE. Why is that? Yet based on (d), the model seems to collapse. Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: Please refer to 'Weaknesses' Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments and feedback. In addition to the general response, we address your itemized concerns here. >Readability (Weakness 1) We apologize for not specifying the structure of the MLP in the paper. All our MLPs use the Leaky ReLU activation function. The first linear layer is feature_dim * hidden_dim and the final linear layer is hidden_dim * embedding_dim. For the synthetic dataset, which is simple, we have only one hidden layer with a dimension of 256. For the real-world datasets, we use MLPs with three hidden layers, and the dimension of the middle hidden layer is 1024. To enhance reproducibility, we will release all the experiment settings and source code after the blind review process. >Weakness 2 Thanks a lot for your suggestion, which is very reasonable. It is essential to test the robustness of NR to model complexity, and we supplemented our synthetic datasets with the following experiment (hidden_dim = 128):

| Epoch/R2 | DCCA (1 hidden layer) | NR-DCCA (1 hidden layer) | DCCA (2 hidden layers) | NR-DCCA (2 hidden layers) | DCCA (3 hidden layers) | NR-DCCA (3 hidden layers) |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| 100 | 0.284 +/- 0.012 | **0.295 +/- 0.005** | 0.161 +/- 0.013 | **0.304 +/- 0.006** | 0.071 +/- 0.084 | **0.299 +/- 0.010** |
| 800 | 0.137 +/- 0.028 | **0.313 +/- 0.004** | -0.072 +/- 0.071 | **0.307 +/- 0.005** | -0.975 +/- 0.442 | **0.309 +/- 0.005** |
| 1200 | 0.106 +/- 0.027 | **0.312 +/- 0.005** | -0.154 +/- 0.127 | **0.303 +/- 0.006** | -1.412 +/- 0.545 | **0.308 +/- 0.006** |

Since the synthetic data is simple, increasing the depth of the network makes both DCCA and NR-DCCA less effective, but the ability of NR to prevent model collapse is still maintained. We can clearly see that as the network depth increases, DCCA collapses more severely, while NR remains effective. >Q1 (In Line 137) The questions you mentioned are very helpful in understanding CCA.
The CCA family is not able to deal with missing views (missing data for a view at some sample points); it requires that the data for every view of each sample be complete. The MVRL domain has a specialized approach for the missing-view scenario, called partial multi-view learning, which is not the focus of this paper. >Q2 What are the practical applications of the CCA methods? Your suggestions are very reasonable, and we will add various practical application scenarios for the CCA methods in the appendix: The application of CCA mainly focuses on multi-view data. Multi-view data may come from multiple domains, such as computer vision, natural language, speech, and so on [1]. The features of different views are utilized to obtain a unified representation that captures the correlation between the views [2], which can be used in various downstream tasks such as classification, retrieval, clustering, and dimension reduction [2]. [1] Yan, Xiaoqiang, et al. "Deep multi-view learning methods: A review." *Neurocomputing* 448 (2021): 106-129. [2] Hardoon, David R., Sandor Szedmak, and John Shawe-Taylor. "Canonical correlation analysis: An overview with application to learning methods." *Neural Computation* 16.12 (2004): 2639-2664. (No room for more references) >Q3 In Line 137 We apologize for the potential misunderstanding. Take the Caltech101 dataset as an example: the training set has 6400 images, and each image is fed to three different feature extractors, producing a 1,984-d HOG feature, a 512-d GIST feature, and a 928-d SIFT feature. Then for this dataset, X_1 is 1984*6400, X_2 is 512*6400, and X_3 is 928*6400. >Q4 The experiment setting We apologize for the misunderstanding and will add the protocol of Hwang et al. (2021) in the appendix.
Our experiment setting is the same: on the training set, we only utilize the multi-view features for multi-view learning and train the encoder to capture the correlation between the views. Subsequently, using the trained encoder, the test-set features are projected into a new space, and the resulting features are used as the multi-view representation. This representation is then used in the downstream classification task, and we report the average classification metric (F1 score) over 5-fold cross-validation using a linear SVC classifier. >Q5 What is the deep network used for DCCA? Again, we apologize for not specifying the structure of the MLP in the paper. The first linear layer is feature_dim * hidden_dim. All our MLPs use the Leaky ReLU activation function. For the synthetic dataset, which is simple enough, we have only one hidden layer with a dimension of 256. For the real-world datasets, we use MLPs with three hidden layers, and the dimension of the middle hidden layer is 1024. >Q6 What if The questions you mentioned are very helpful in understanding CCA, and this is exactly what CCA methods try to solve: how to utilize features generated by different pre-trained foundational encoders. DCCA uses an MLP for each view, where the first linear layer has dimension feature_dim * hidden_dim, and the structure of the MLP after the first layer is the same across views. In this way, the MLPs project different features onto a feature space of the same size. >Q7 The NESum Your question is very helpful for understanding model collapse. We believe that the addition of Dropout to DCCA_PRIVATE, a common regularization technique, does to some extent prevent redundancy among network weights (NESum rises), but its effect is clearly not stable for DCCA. In particular, as seen in (a), the performance variance across datasets is very large, which means DCCA_PRIVATE is highly dependent on the dataset and does not generalize.
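The per-view MLP design described in the Q5/Q6 answers (only the first linear layer depends on each view's own feature dimension) can be sketched as follows. This is a minimal NumPy stand-in with untrained random weights, not the authors' training code; the view names mirror the Caltech101 example above, while the embedding dimension here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_view_encoder(feature_dim, hidden_dim, embedding_dim):
    # One MLP per view: only the first linear layer depends on the
    # view's own feature_dim, so views of different sizes are all
    # projected into an embedding space of the same size.
    W1 = 0.01 * rng.standard_normal((feature_dim, hidden_dim))
    W2 = 0.01 * rng.standard_normal((hidden_dim, embedding_dim))
    def encode(X):  # X: (n_samples, feature_dim)
        z = X @ W1
        h = np.maximum(z, 0.01 * z)  # Leaky ReLU
        return h @ W2
    return encode

# Caltech101-style views with different feature dimensions.
view_dims = {"HOG": 1984, "GIST": 512, "SIFT": 928}
encoders = {name: make_view_encoder(d, 1024, 300)
            for name, d in view_dims.items()}
```

Each encoder maps its view's samples to a common 300-d space, after which a (G)CCA loss could be applied across the views.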
Summary: This work focuses on multi-view learning. Specifically, it studies deep canonical correlation analysis and its variants. The study observes the issue of model collapse and proposes a regularization learning strategy to relieve the problem, thereby addressing the challenging early-stop decision. Strengths: 1. Multi-view learning is a critical research topic in the machine learning field, which is valuable to explore. 2. The model collapse issue matters in multi-view learning due to the challenging early-stop decision. 3. Overall, the writing is easy to follow. Weaknesses: 1. Adding regularization has been fully explored in different machine learning scenarios. To this end, the proposed method lacks research novelty, which may diminish the paper's contribution. 2. The compared methods are relatively old; adding more recent publications for comparison would help support the draft. 3. The numerical results only contain some small datasets; using larger-scale ones would be helpful, especially in the current large-scale learning era. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses section for reference. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments and feedback. In addition to the general response, we address your itemized concerns here. >Adding regularization has been fully explored in different machine learning scenarios. To this end, the proposed method lacks research novelty, which may diminish the paper's contribution. In the machine learning area, adding regularization is a common setup. However, the exploration of new regularization approaches is always needed. To be specific, **our motivation is new**: we demonstrate for the first time the difference in behavior between DCCA and Linear CCA, and our regularization forces DCCA to mimic the behavior of Linear CCA in order to take on the CIP property, thus mitigating the model collapse problem of DCCA. **Our approach is new**: noise regularization previously had to rely on the Autoencoder architecture to implicitly regularize the network, whereas we rely on the (G)CCA loss to regularize the network with noise from a different perspective, which opens up the possibility of wider use of noise regularization. >The compared methods are relatively old; adding more recent publications for comparison would help support the draft. Thank you for your suggestion. We add two more methods, DCCA_EY and DCCA_GHA, the two latest DCCA-based methods, for comparison [2]. Due to time constraints, we conducted experiments on synthetic datasets (mean and variance are computed over multiple datasets with different common rates). DCCA_EY and DCCA_GHA use an efficient algorithm that replaces the matrix eigenvalue decomposition in the CCA loss with inter-matrix operations. This algorithm is fast and does not depend on a large batch size (no gradient bias). In terms of effectiveness, they are no better than DCCA in the quality of the generated features (we used a large batch size of 2000 for all methods). Model collapse still occurs, but they collapse more slowly than DCCA.
| Epoch/R2 | DCCA | NR-DCCA | DCCA_EY | DCCA_GHA |
| :---- | :---- | :---- | :---- | :---- |
| 100 | 0.284 +/- 0.012 | **0.295 +/- 0.005** | 0.187 +/- 0.005 | 0.209 +/- 0.005 |
| 400 | 0.206 +/- 0.025 | **0.310 +/- 0.005** | 0.266 +/- 0.008 | 0.273 +/- 0.010 |
| 800 | 0.137 +/- 0.028 | **0.313 +/- 0.004** | 0.248 +/- 0.009 | 0.272 +/- 0.010 |
| 1200 | 0.106 +/- 0.027 | **0.312 +/- 0.005** | 0.214 +/- 0.012 | 0.256 +/- 0.011 |

[1] Hwang, HyeongJoo, et al. "Multi-view representation learning via total correlation objective." *Advances in Neural Information Processing Systems* 34 (2021): 12194-12207. [2] Chapman, James William Harvey, Ana Lawry Aguila, and Lennie Wells. "A Generalized EigenGame With Extensions to Deep Multiview Representation Learning." [3] Ke, Guanzhou, et al. "Rethinking Multi-view Representation Learning via Distilled Disentangling." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024. >Numerical results only contain some small datasets; using larger-scale ones would be helpful, especially in the current large-scale learning era. Thanks for your suggestion. The largest dataset used by current MVRL methods is the PolyMnist dataset, with 5 views, 60,000 images per view, and 10 categories. However, it is indeed necessary to test MVRL in larger and more complex scenarios. To this end, we constructed 2 views of data on Cifar100 (100 categories, 60,000 images) using the CLIP and BLIP pre-trained image feature encoders, respectively. Using the same PolyMnist experimental settings as in the paper, except that the embedding dimension was changed to 400, the 5-fold F1 score results are as follows:

| Epoch/F1_score | DCCA | NR-DCCA | Concat |
| :---- | :---- | :---- | :---- |
| 50 | 0.749 | 0.752 | 0.733 |
| 500 | 0.672 | 0.753 | |

We can see that although DCCA starts with better results than Concat, it quickly collapses, while NR-DCCA shows stable performance.
As to whether DCCA can be pre-trained for large-scale multimodal data like CLIP, this is an interesting question that we leave for future work.
Rebuttal 1: Rebuttal: We thank all reviewers for their questions and constructive feedback. In this general response, we address the five core issues: **More DCCA methods**, **Larger MVRL experiments**, **Effects of MLP structures**, **New metric of weight redundancy**, and **Contributions**. The image quality mentioned by the reviewers will be fixed in the next version. **More DCCA methods** In this paper, we focus on the family of DCCA methods, and we add DCCA_EY and DCCA_GHA, the two latest DCCA-based methods [1]. Due to time constraints, we conducted experiments on synthetic datasets (mean and variance are computed over multiple datasets with different common rates):

| Epoch/R2 | DCCA | NR-DCCA | DCCA_EY | DCCA_GHA |
| :---- | :---- | :---- | :---- | :---- |
| 100 | 0.284 +/- 0.012 | **0.295 +/- 0.005** | 0.187 +/- 0.005 | 0.209 +/- 0.005 |
| 400 | 0.206 +/- 0.025 | **0.310 +/- 0.005** | 0.266 +/- 0.008 | 0.273 +/- 0.010 |
| 800 | 0.137 +/- 0.028 | **0.313 +/- 0.004** | 0.248 +/- 0.009 | 0.272 +/- 0.010 |
| 1200 | 0.106 +/- 0.027 | **0.312 +/- 0.005** | 0.214 +/- 0.012 | 0.256 +/- 0.011 |

DCCA_EY and DCCA_GHA utilize an efficient algorithm that replaces the matrix eigenvalue decomposition in the CCA loss with inter-matrix operations. This algorithm is fast and does not depend on a large batch size (no gradient bias). Model collapse still occurs, but they collapse more slowly than DCCA. [1] Chapman, James William Harvey, Ana Lawry Aguila, and Lennie Wells. "A Generalized EigenGame With Extensions to Deep Multiview Representation Learning." **Larger MVRL experiments** The largest dataset used by current MVRL methods is the PolyMnist dataset, with 5 views, 60,000 images per view, and 10 categories. However, it is indeed necessary to test MVRL in larger and more complex scenarios. To this end, we constructed 2 views of data on Cifar100 (100 categories, 60,000 images) using the CLIP and BLIP pre-trained image feature encoders, respectively.
Using the same PolyMnist experimental settings as in the paper, except that the embedding dimension was changed to 300, the average 5-fold F1 score results are as follows:

| Epoch/F1_score | DCCA | NR-DCCA | Concat |
| :---- | :---- | :---- | :---- |
| 50 | 0.749 | 0.752 | 0.733 |
| 500 | 0.672 | 0.753 | |

We can see that although DCCA starts out with better results than Concat, it quickly collapses, while NR-DCCA shows stable performance. **Effects of MLP structures** It is essential to test the robustness of NR to model complexity, and we supplemented our synthetic datasets with the following experiment (hidden_dim = 128):

| Epoch/R2 | DCCA (1 hidden layer) | NR-DCCA (1 hidden layer) | DCCA (2 hidden layers) | NR-DCCA (2 hidden layers) | DCCA (3 hidden layers) | NR-DCCA (3 hidden layers) |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| 100 | 0.284 +/- 0.012 | **0.295 +/- 0.005** | 0.161 +/- 0.013 | **0.304 +/- 0.006** | 0.071 +/- 0.084 | **0.299 +/- 0.010** |
| 800 | 0.137 +/- 0.028 | **0.313 +/- 0.004** | -0.072 +/- 0.071 | **0.307 +/- 0.005** | -0.975 +/- 0.442 | **0.309 +/- 0.005** |
| 1200 | 0.106 +/- 0.027 | **0.312 +/- 0.005** | -0.154 +/- 0.127 | **0.303 +/- 0.006** | -1.412 +/- 0.545 | **0.308 +/- 0.006** |

Since the synthetic data is simple, increasing the depth of the network makes both DCCA and NR-DCCA less effective, but the ability of NR to prevent model collapse is still maintained. We can clearly see that as the network depth increases, DCCA collapses more severely, while NR remains effective. **A new metric of weight redundancy** There is a difference between the notion of the rank of a matrix and that of NESum. NESum was originally proposed as a measure of whether the eigenspace is dominated by a small number of very large eigenvalues.
To better analyze the weight matrices in DNNs, we add the average cosine similarity between the dimensions of the weight matrices as another measure of redundancy, using DCCA and NR-DCCA on the synthetic datasets as an example:

| Epoch | NESum (DCCA) | NESum (NR-DCCA) | Cosine Similarity (DCCA) | Cosine Similarity (NR-DCCA) |
| :---- | :---- | :---- | :---- | :---- |
| 100 | 0.260 | 0.227 | 0.074 | 0.070 |
| 800 | 0.149 | 0.330 | 0.093 | 0.052 |
| 1200 | 0.141 | 0.354 | 0.098 | 0.050 |

It can be seen that the cosine similarity and NESum metrics move in sync: weight redundancy in DCCA increases (cosine similarity increases, NESum decreases), while redundancy in NR-DCCA decreases (cosine similarity decreases, NESum increases). In particular, we visualize the correlation matrix and the eigenvalues of the first Linear layer in DCCA and NR-DCCA. Again, it can be seen that the redundancy of the weight matrices in DCCA rises (**please see the pdf material**).

**Contributions** Our NR method is intuitively efficient, and it rests on a number of theoretical guarantees, including why NR guarantees the CIP property that constrains the behavior of the network (Theorem 1), and why NR guarantees that features do not degrade (Theorem 2). Moreover, our motivation is new: for the first time, we explicitly demonstrate the difference in behavior between DCCA and linear CCA in an attempt to explain and alleviate the problem of model collapse in DCCA. Secondly, our NR method is new. While noise regularization previously had to rely on the autoencoder architecture to implicitly regularize the network, we now rely on the (G)CCA loss to regularize the network from another perspective using noise, which opens up the possibility of wider use of noise regularization. Overall, we would like to thank the reviewers once again for your detailed and effective suggestions.
Improving and supplementing the experiments based on your suggestions will undoubtedly make our paper more convincing. We hope that our response has adequately addressed your concerns. Pdf: /pdf/02e99c32071af59d1f70dc2ee1e85ef463829a57.pdf
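As an aside for interested readers, the two redundancy measures discussed in the rebuttal above (NESum and the average cosine similarity between weight-matrix dimensions) can be sketched as follows. This is an illustrative sketch, not the authors' code; in particular, the exact normalization of NESum used here (eigenvalue sum divided by the dimension times the largest eigenvalue of the correlation matrix) is an assumption consistent with the description that a spectrum dominated by a few large eigenvalues yields a low NESum.

```python
import numpy as np

def nesum(W):
    # Eigenvalues of the correlation matrix between the columns of W.
    # NESum is read here (an assumption) as the eigenvalue sum divided by
    # d times the largest eigenvalue, so a spectrum dominated by a few
    # large eigenvalues gives a value near 1/d, and a flat spectrum gives 1.
    eig = np.linalg.eigvalsh(np.corrcoef(W, rowvar=False))
    return float(eig.sum() / (eig.max() * len(eig)))

def avg_cosine_similarity(W):
    # Mean absolute pairwise cosine similarity between the columns
    # ("dimensions") of W; higher means more redundant dimensions.
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    S = np.abs(Wn.T @ Wn)
    d = S.shape[0]
    return float((S.sum() - d) / (d * (d - 1)))  # off-diagonal mean
```

On a weight matrix with near-duplicate columns, `avg_cosine_similarity` rises toward 1 while `nesum` falls, matching the direction of the redundancy trend reported in the table.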
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Initializing Services in Interactive ML Systems for Diverse Users
Accept (poster)
Summary: The paper introduces a novel method for initializing machine learning services tailored to diverse user preferences. The work addresses the challenges of non-convex optimization and lack of pre-existing user preference data before running a service; the authors propose a randomized algorithm that adaptively selects a minimal set of users for data collection. The approach guarantees a total loss close to the global optimum under mild assumptions and extends the k-means++ algorithm to a broader problem class. The results are also supported by experiments on real and semi-synthetic datasets. Strengths: The authors tackle the challenge of service initialization in the context of bandit feedback and non-convex optimization, which has not been extensively studied before. The proposed algorithm is an interesting extension of the k-means++ algorithm, designed to handle general loss families. The analysis of the proposed approach is solid. The theoretical result is supported by experimental study. The paper is well-written and clearly structured. Weaknesses: The computational complexity is not well studied, especially in large-scale settings. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you provide any data on the time performance of the algorithms in the experimental study? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough and thoughtful review of our paper. We appreciate your positive comments on both the theoretical and experimental study, as well as finding our paper well-written. Below, we hope to address your concerns regarding the computational complexity of our algorithm. We want to emphasize that our algorithm is indeed very computationally efficient, even for large-scale settings like Netflix, where recommendations are made to on the order of a billion users. Below, we provide a theoretical analysis of the computational complexity of AcQUIre and also back it up with empirical validation. Theoretical Analysis: We sketch the computational complexity of AcQUIre. Each iteration of the loop is examined as follows: Line 5: Collecting data across a large user base occurs in a distributed network for recommendation systems like Netflix, where queries can be processed in parallel. This step effectively takes O(1) time. Line 6: After aggregating the data, a new user can be sampled in O(log N) time with modern multinomial samplers (see [1], just as an example). Lines 7 and 8: These involve O(1)-time updates, as they are performed only on the selected user. Since the loop runs K times, the total cost is O(K log N). The growth in N is logarithmic, while the growth in K is linear. In practical settings like Netflix recommendation systems, K is much smaller than N. For instance, Netflix groups N=200 million users into 1300 recommendation clusters [2]. Just to compare with our baselines, the greedy and epsilon-greedy methods are O(KN) because selecting the user with the highest loss requires going over all N users. Experimental Validation: To study the effect of the number of users N on the runtime, we generate synthetic datasets in d=200-dimensional space with sizes N=10^4, 10^5, 10^6, 10^7, 10^8, 10^9 (1 billion), and report the runtimes for AcQUIre and the baselines in Figure 2 (left) (attached in the pdf), keeping the number of services K fixed at 2000.
We observe that even with N=1 billion users, AcQUIre finishes running in less than 300 sec (5 minutes), whereas the greedy and epsilon-greedy methods take >10^5 sec (~1 day) even for N=10 million users. To study the effect of the number of services K on the runtime, we generate a dataset with N=5 million users in d=200-dimensional space and report the runtimes in Figure 2 (right) as we vary K from 500 to 5000 in steps of 500. We find that even with 5000 services, AcQUIre finishes running in <900 sec (15 minutes), whereas the runtimes for greedy and epsilon-greedy are in the range of 10^5 sec (~1-3 days). We hope this addresses the reviewer’s concerns and demonstrates the practical efficiency of our method for large-scale recommendation systems. [1] Bringmann, Karl, and Konstantinos Panagiotou. "Efficient sampling methods for discrete distributions." In Automata, Languages, and Programming: 39th International Colloquium, ICALP 2012 [2] https://recoai.net/netflix-recommendation-system-how-it-works/ --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. It sounds satisfactory to me.
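For readers unfamiliar with sub-linear multinomial sampling, the O(log N) per-draw cost referred to in this rebuttal can be illustrated with the simplest scheme: precompute prefix sums of the unnormalized loss weights once, then binary-search for each draw. This is a minimal sketch under static weights, not the method of reference [1] (which covers faster and dynamic samplers); the function names are ours.

```python
import bisect
import random

def build_sampler(weights):
    # O(N) preprocessing: prefix sums of the unnormalized loss weights.
    cum, total = [], 0.0
    for w in weights:
        total += w
        cum.append(total)
    return cum

def sample(cum, rng):
    # O(log N) per draw: binary search for the first prefix sum strictly
    # exceeding u, so zero-weight entries can never be selected.
    u = rng.random() * cum[-1]
    return bisect.bisect_right(cum, u)
```

If the weights change between rounds (as the losses do in the algorithm), a Fenwick tree over the weights keeps both updates and draws at O(log N); the prefix-sum array above would need O(N) rebuilding per round.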
Summary: This paper introduces a novel method for initializing services in interactive machine learning (ML) systems tailored to diverse user preferences. The focus is on scenarios where multiple models or services are deployed, allowing users to choose the one that minimizes their personal losses. The authors highlight two primary challenges in determining optimal initial conditions for these services: the absence of user preference data prior to deployment (bandit feedback) and the presence of non-convex loss landscapes that can lead to suboptimal local solutions. To overcome these challenges, the authors propose a randomized algorithm for service initialization. They provide theoretical guarantees, demonstrating an approximation ratio for the algorithm, and present empirical results that showcase the approach's effectiveness on both real and semi-synthetic datasets. Strengths: - The proposed adaptive randomized algorithm for service initialization in interactive ML systems is a novel contribution. It extends the well-known K-means++ algorithm to a more complex setting involving diverse user preferences. - The paper provides strong theoretical guarantees, including tight bounds on total loss and a generalization of the K-means++ guarantee. This adds significant value by ensuring the robustness of the proposed method. - The empirical results on real and semi-synthetic datasets validate the algorithm's effectiveness in reducing total loss and improving service specialization. The inclusion of fairness considerations further strengthens the practical relevance of the work. Weaknesses: - The algorithm relies on specific assumptions about the loss functions (e.g., uniqueness of minimizers, approximate triangle inequalities). These assumptions, although reasonable in many cases, may not hold in all practical applications, potentially limiting the algorithm's applicability. - The empirical validation, although convincing, is limited to two datasets. 
Additional experiments on a broader range of datasets and application domains would provide stronger evidence of the method's effectiveness and generalizability. Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the performance of optimization algorithms after using your proposed initialization method? Have you empirically compared that performance with other existing initialization approaches? - Could you elaborate on the potential impact of violating the assumptions made about the loss functions? How robust is your algorithm to deviations from these assumptions in practice? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review, and for finding our tight bounds on a novel setup to be a significant contribution, as well as highlighting the practical impacts of our fairness considerations. Below, we hope to address some of the reviewer’s questions: **Question 1 (Performance of optimization algorithms after initialization):** We thank the reviewer for this question. Once a set of services is initialized, indeed with more user interactions, the provider updates the services on new data to improve the quality (indicated by the reduction in total loss). To evaluate the importance of initialization, we conducted experiments using two different optimization algorithms: **Generalized k-means:** The services are iteratively updated by training each service on the current group of subpopulations selecting it. After updating the service parameters, the subpopulations reselect their best service. This process repeats until convergence. **Multiplicative weights update [1]:** Similar to k-means, but each subpopulation can have users choosing different services simultaneously. Both generalized k-means and the multiplicative weights update guarantee that the total loss reduces over time [1]. In our experiments, we initialized a set of services using our proposed initialization scheme, AcQUIre, and other baseline methods. We then let both optimization algorithms run until convergence. We plotted the total loss values as a function of the number of iterations (see Figure 3 in the attached PDF). Our results demonstrate that our initialization method, AcQUIre, leads to: **(A) Faster Convergence:** The optimization algorithms converge more quickly with our initialization method compared to other baselines. **(B) Lower Final Loss:** Initializing with our method converges to lower losses, whereas other initialization schemes are prone to getting stuck in suboptimal local minima.
These findings highlight the significance of a robust initialization strategy. By starting with a better initial configuration, the optimization algorithms can more effectively and efficiently reach higher-quality solutions. The reviewer's comments underscore the importance of this aspect, and we believe our empirical comparisons provide strong evidence of the advantages of our proposed method. **Question 2 (Robustness of AcQUIre to violating assumptions):** We appreciate the reviewer’s attention to the potential impact of violating assumptions underlying our proofs. We conduct a new set of experiments, where we use a 2-layer neural network with ReLU activations that takes as input the user features and outputs their score. We still use the standard squared prediction error. However, note that in this modeling scenario, the loss no longer satisfies our assumptions in the parameters of the neural network. Given a user’s features and true score, since there are no unique minimizers, we run gradient descent to compute a local minimizer and then use this trained neural network as a service to predict other users’ scores. We run AcQUIre and other baselines under this modeling and report our results in Figure 4 (see attached pdf). We find that the performance of AcQUIre, even when the assumptions are violated, is nearly the same as under a model that satisfies the assumptions. Additionally, we would like to emphasize that the implementation of our algorithm itself does not rely on these assumptions. It only requires the loss values to be observable. Therefore, one can model very complex prediction models and only supply the loss values of these models to our algorithm. [1] Dean, Sarah, et al. "Emergent specialization from participation dynamics and multi-learner retraining." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
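The generalized k-means procedure described in this rebuttal (each service retrains on the users currently selecting it, then users reselect their best service) can be sketched for the special case of squared loss, where "training a service on its users" reduces to taking their mean. This is an illustrative sketch under that assumption, not the authors' implementation; the function name is ours.

```python
import numpy as np

def generalized_kmeans(X, init_services, iters=20):
    # X: (N, d) user preference vectors; services: (K, d) parameters.
    # With squared loss, "train each service on the users currently
    # selecting it" reduces to taking their mean, so the total loss
    # never increases across iterations (Lloyd-style descent).
    services = init_services.astype(float).copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - services[None, :, :]) ** 2).sum(-1)  # (N, K)
        choice = d2.argmin(1)            # each user picks its best service
        for k in range(len(services)):
            members = X[choice == k]
            if len(members):             # empty services keep old parameters
                services[k] = members.mean(0)
    loss = float(((X - services[choice]) ** 2).sum())
    return services, choice, loss
```

The rebuttal's point is that the final loss of this loop depends heavily on `init_services`, which is what a good initialization scheme supplies.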
Summary: 1) This paper introduces a new algorithm to efficiently initialize a system providing K services to N users (K << N) where user preferences are unknown beforehand and the system iteratively learns about user preferences as the services are recommended, e.g., Netflix movie recommendations. 2) The proposed method is inspired by the k-means++ algorithm and applies it in a more general setting. The authors also provide a theoretical proof that the system initialized in this way will achieve a worst-case loss not exceeding a log multiplier on the optimal loss for this system. 3) AcQUIre and its modification to preserve fairness across subpopulations (Fair AcQUIre) are the main algorithms presented in this paper, with experiments ablating the effect of different user selection strategies used in AcQUIre. 4) Experiments on the census and MovieLens datasets are provided. Strengths: 1) The paper is well-written and the authors have done a great job introducing the problem with sufficient notation and related work. 2) The problem introduced is very relevant and encourages future research in this direction. Weaknesses: 1) In the "Movie Recommendation" experiment, users are divided into N=1000 subpopulations based on the similarity of their movie ratings, and all experiments are then conducted to achieve minimal excess error w.r.t. these subpopulation groups. However, in a typical setting where the proposed method might be applied, there's no such prior data to group users conveniently, so an ablation on user clustering methods prior to applying AcQUIre would further boost its effectiveness.
2) The sizes of the datasets are small enough to make the computations practical, but the method is actually expensive depending on the choice of K the system needs to reach the desired loss on the population of N users, so some discussion around this aspect would be useful, with the authors going deeper into practical deployments of this method in a system like Netflix recommending movies to a billion users. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) It is unclear how line 7 of the proposed method ("New service: Query user l’s preference") would be implemented in a real-world setting. In a large-scale distributed recommendation system such as Netflix, where user preferences are being collected on a subset of recommended services in parallel as soon as the system is deployed, it's unclear how to get a specific user's preferences, so some clarification here would be useful. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: There are no negative societal consequences of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review, and for finding our problem setup novel with the potential to encourage future research in this direction. We appreciate the thoughtful questions the reviewer asked, and believe that our findings and explanations for these questions will further strengthen our paper. Here are our responses: 1. **Subpopulation grouping (Weakness 1):** We appreciate the reviewer's insight regarding the practical application of our method where prior data for grouping users may not be available. To address this, we have conducted additional experiments to evaluate the dependence of AcQUIre's performance on the quality of grouping. Specifically, we initially form clusters based on prior data, then shuffle x% of the users into incorrect clusters, varying x from 100% (completely random grouping, i.e., no prior data) to 66%, 33%, and 0% (accurate grouping based on prior data). Our results demonstrate that AcQUIre consistently outperforms the baselines across all values of x (see Figure 1 in attached pdf). Additionally, Figure 1 (right) shows that AcQUIre with different qualities of user grouping outperforms all the baselines with accurate grouping based on prior data. Also, the performance deterioration as x varies from 0 to 100 is very small, demonstrating AcQUIre’s robustness to the accuracy of the prior data used to form groups. 2. **Time Complexity (Weakness 2):** We appreciate the reviewer’s concern regarding the computational efficiency of our proposed algorithm, AcQUIre. We want to emphasize that despite these concerns, our algorithm is indeed very computationally efficient, even for large-scale settings like Netflix, where recommendations are made to a billion users. Below, we provide a theoretical analysis of the computational complexity of AcQUIre and also back it up with empirical validation. Theoretical Analysis: We sketch the computational complexity of AcQUIre.
Each iteration of the loop is examined as follows: Line 5: Collecting data across a large user base occurs in a distributed network for recommendation systems like Netflix, where queries can be processed in parallel. This step effectively takes O(1) time. Line 6: After aggregating the data, a new user can be sampled in O(log N) time with modern multinomial samplers (see [1], just as an example). Lines 7 and 8: These involve O(1)-time updates, as they are performed only on the selected user. Since the loop runs K times, the total cost is O(K log N). The growth in N is logarithmic, while the growth in K is linear. In practical settings like Netflix recommendation systems, K is much smaller than N. For instance, Netflix groups N=200 million users into 1300 recommendation clusters [2]. Just to compare with our baselines, the greedy and epsilon-greedy methods are O(KN) because selecting the user with the highest loss requires going over all N users. Experimental Validation: To study the effect of the number of users N on the runtime, we generate synthetic datasets in d=200-dimensional space with sizes N=10^4, 10^5, 10^6, 10^7, 10^8, 10^9 (1 billion), and report the runtimes for AcQUIre and the baselines in Figure 2 (left) (attached in the pdf), keeping the number of services K fixed at 2000. We observe that even with N=1 billion users, AcQUIre finishes running in less than 300 sec (5 minutes), whereas the greedy and epsilon-greedy methods take >10^5 sec (~1 day) even for N=10 million users. To study the effect of the number of services K on the runtime, we generate a dataset with N=5 million users in d=200-dimensional space and report the runtimes in Figure 2 (right) as we vary K from 500 to 5000 in steps of 500. We find that even with 5000 services, AcQUIre finishes running in <900 sec (15 minutes), whereas the runtimes for greedy and epsilon-greedy are in the range of 10^5 sec (~1-3 days).
We hope this addresses the reviewer’s concerns and demonstrates the practical efficiency of our method for large-scale recommendation systems. 3. **Querying new user preferences (Question 1):** In a large-scale distributed recommendation system like Netflix, user preferences are collected from initial interactions with the service. When a new set of services is deployed, user preferences can be quickly gathered by prompting users to rate or select their favorite content from a curated list. Additionally, the system can analyze immediate user behaviors, such as search queries, viewing choices, and engagement patterns, to start building a profile. Services like Netflix often also provide incentives such as a free first-month trial to increase new-user engagement. As users spend more time watching movies, Netflix gathers more data about their preferences. It is realistic that not all users will have high levels of engagement when a new set of services is provided. Our algorithm also works when only a small subset of a potentially large subpopulation interacts with the provider. Our results in Section 4 provide confidence bounds on the performance of AcQUIre in this setting (see Theorem 4.4). [1] Bringmann, Karl, and Konstantinos Panagiotou. "Efficient sampling methods for discrete distributions." In Automata, Languages, and Programming: 39th International Colloquium, ICALP 2012 [2] https://recoai.net/netflix-recommendation-system-how-it-works/ --- Rebuttal Comment 1.1: Comment: Thanks for addressing each of my questions with detailed experiments. I've gone through the new analysis presented and the attached references. In light of the supporting evidence presented I've raised my score.
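Since AcQUIre is described throughout as an extension of k-means++ that samples the next user with probability proportional to its current loss, the special case of squared loss (classic D² seeding) gives a feel for the selection rule. This is a minimal sketch with hypothetical names, not the paper's algorithm, which additionally handles bandit feedback and general loss families.

```python
import numpy as np

def acquire_style_init(X, K, rng):
    # k-means++-style seeding: the first service copies a uniformly random
    # user; each subsequent service copies a user sampled with probability
    # proportional to its current loss (here, squared distance to the
    # nearest already-chosen service).
    services = [X[rng.integers(len(X))]]
    for _ in range(K - 1):
        d2 = np.min([((X - s) ** 2).sum(1) for s in services], axis=0)
        services.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(services)
```

Sampling, rather than greedily picking the highest-loss user, is what yields both the approximation guarantee and the O(K log N) total cost discussed in the complexity analysis above.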
null
null
Rebuttal 1: Rebuttal: We thank all the reviewers for their detailed reviews. We deeply appreciate the questions and insights the reviewers gave in their reviews. We attach below a set of empirical studies that we hope answer the questions the reviewers had. Please let us know if we can answer any more questions. Thanks! Pdf: /pdf/e11495ed830466fa2aa1bcd7925bd02c6761ffe6.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Time-FFM: Towards LM-Empowered Federated Foundation Model for Time Series Forecasting
Accept (poster)
Summary: The authors study an important problem of time series forecasting considering the increasing concerns of privacy and copyright. The paper proposes a novel LLM-empowered federated time series forecasting model with three main components, i.e., modality alignment, prompt adaption, and personalized strategy. Strengths: 1. The paper is well-written and easy to follow. 2. The authors propose Time-FFM, a novel LLM-empowered federated foundation model for time series forecasting. Time-FFM encompasses a cross-modality module, a prompt adaption module, and a personalized federated training strategy. 3. Extensive experiments show the effectiveness of the proposed method. Weaknesses: 1. We often run federated learning on edge devices, which have severely limited data-processing capability. Time-FFM is difficult to deploy in real-world scenarios. Additionally, the high latency of LLMs during inference may impede their applicability for real-time time series forecasting. This presents a practical challenge for deployment in resource-limited FL environments. Moreover, it requires at least a GPU to run an LLM, which is impractical and unrealistic in real-world FL scenarios. 2. TimesFM [r1], a centralized foundation model for time series forecasting from Google, was accepted by ICML 2024. It was released on arXiv last October. Google possesses various kinds of data from a variety of domains without any copyright and privacy concerns, which conflicts with your motivation. [r1]. Das, et al., A decoder-only foundation model for time-series forecasting, ICML, 2024. 3. More detailed explanations are needed regarding how the proposed method addresses data heterogeneity across clients. 4. It would be better to improve and clarify the technical depth. Patching and channel-independent mechanisms are widely used in existing time series forecasting methods. In addition, the prompt adaption seems to be another version of an existing technique [22].
The projection head is also simple and straightforward. Further, the training process of Time-FFM is based on FedAvg. 5. It is suggested to elaborate more on the personalized strategy, including the global encoder. In addition, it is also confusing how to deal with the conflicts between generalization and personalization. 6. It would be better to convert TimesNet, DLinear, and PatchTST into their federated versions with FedAvg and compare these federated versions with the proposed Time-FFM. Additionally, it would be better to include a case study to intuitively show how accurately the proposed Time-FFM can predict time series. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Are the practical deployment challenges considered? 2. How to solve the bottleneck problem of limited FL resources? Could you provide more evidence? 3. How to demonstrate the rationality of the prompts generated by the LLM? How to justify that the generated prompt is related to the corresponding domain? It would be better to provide more case studies on all datasets for a more comprehensive evaluation. In addition, it is especially encouraged to provide a corresponding theoretical analysis. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The main limitation is the practicality of the proposed Time-FFM. In addition, please see the weaknesses and questions for other limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
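For context on the patching and channel-independent mechanisms mentioned in the weaknesses above (both standard in recent time series forecasters such as PatchTST), a minimal sketch: channel independence treats each variate as a separate univariate series, and patching slices each series into overlapping windows that serve as token inputs. The names and shapes here are illustrative, not Time-FFM's code.

```python
import numpy as np

def patchify(series, patch_len, stride):
    # series: (C, L) multivariate series. Channel independence treats each
    # of the C channels as its own univariate series; patching then slices
    # each channel into overlapping length-patch_len windows (the tokens
    # that a transformer backbone would embed and attend over).
    C, L = series.shape
    starts = range(0, L - patch_len + 1, stride)
    return np.array([[series[c, s:s + patch_len] for s in starts]
                     for c in range(C)])   # (C, num_patches, patch_len)
```

Patching shortens the token sequence by roughly the stride factor and gives each token local temporal context, which is why it is shared across so many of the methods cited in this review.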
Rebuttal 1: Rebuttal: We are very grateful for your recognition of the quality of our paper. We have made detailed responses to thoroughly address your concerns. > W1 & Q1 & Q2: Concerns on the applicability in the real world. We are sorry for not covering the deployment of Time-FFM in the manuscript. Since we make Time-FFM a "domain"-level time series forecaster, we deem that federated training participants are more likely to be **edge clouds with abundant computing resources**, instead of resource-limited edge devices. After training, Time-FFM can also be deployed at edge clouds for forecasting tasks. Hence, we think that resource limitation might not be a bottleneck for the deployment of Time-FFM, and it is realistic to deploy Time-FFM in real-world FL scenarios. > W2: TimesFM, as a centralized foundation model, conflicts with your motivation. Compared with the modality of text, time series are more domain-specific and (commercially) copyright-sensitive, i.e., private knowledge may be inferred from historical time series readings, especially in the finance and healthcare domains. Hence, it is of great significance to take data privacy into consideration in the construction of time series foundation models. Moreover, a multitude of public data cannot even be adopted for pre-training foundation models due to data license restrictions. Some foundation models, like MOIRAI, have released large-scale public time series data merely for research instead of commercial use. Hence, our work uniquely **bridges the gap between foundation models and federated learning**, which not only enhances the privacy and applicability of foundation models in sensitive domains but also opens up new avenues for leveraging rich, yet previously inaccessible, time series data for advanced predictive analytics, addressing a crucial need in the (commercial) field. > W3&W5: More details on how to tackle data heterogeneity across clients.
We aim to learn the underlying commonalities present in time series data, even though cross-domain time series data are highly heterogeneous. In the Federated Averaging framework, all domains learn a single shared model, which would **perform poorly on each specific domain**. If we simply train a dedicated model for each domain using its own (possibly small) local data, the domain-specific models may not generalize well to unseen domains. Therefore, we need to **strike a balance between generalization and domain-specific prediction**. We devise a personalized federated strategy with a global encoder and personalized heads. The globally shared encoder can learn the **common temporal embeddings or representations across domains**, which is in line with the success of centralized deep learning, i.e., the sharing of global feature representations among data [1][2]. The personalized heads enable **domain-specific decoders**, accounting for yielding prediction results that fit the local distributions. [1] LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 2015. [2] Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. IEEE TPAMI, 2013. > W4: It would be better to improve and clarify the technical depth. Thanks for your insightful concerns about our contributions. We aim to propose a language model-empowered federated foundation model for time series forecasting. - Given the differences in dimensionality and horizon, we introduce the modality alignment module encompassing the channel-independent and patching techniques, which follows the track of GPT4TS, Time-LLM, MOIRAI, etc. - For bootstrapping the pre-trained GPT2 backbone for cross-domain time series reasoning, we propose to **adaptively construct prompts from how to understand patch tokens**, rather than from rigid domain instructions as in Time-LLM and UniTime.
- Due to cross-domain time series heterogeneity, we devise a personalized federated strategy (different from Federated Averaging, which aims at learning a global prediction model), with a **global encoder and personalized prediction heads**. In conclusion, we propose the first federated foundation model for time series forecasting, adaptively generating domain-specific prompts and tackling time series heterogeneity for general-purpose learning and personalized prediction. > W6: It would be better to convert TimesNet, DLinear, and PatchTST into their federated versions. Thanks for your valuable suggestions. We report the performance comparison in **Table 3 of the PDF in the "Author Rebuttal"**. In the modified version, we will supplement the experimental results and the prediction case study to make the effectiveness of Time-FFM more convincing. > Q3: The rationality of the generated prompts. We appreciate your attention to the rationality of prompts. We want to adaptively select the "optimal" text prototypes as prompts according to the domain-specific patch tokens via the prompt adaption module. We think the prompts indicate how the LM backbone understands the domain temporal patterns (conveyed by patch tokens) and can thus bootstrap domain-specific time series reasoning, though the prompts may conflict with how we humans understand the patch segment [1]. We evaluate the effectiveness of prompts in Table 5. We have the key observation that the prompts can improve the forecasting performance (by comparing A.1 and A.2); the proposed adaptive prompt outperforms the domain instruction (by comparing A.1 and A.3). Furthermore, the showcase in Fig. 3 illustrates that different domain datasets correspond to different text prototypes (i.e., prompts). Hence, we think the ablation results and showcase both reflect the rationality of prompt adaption.
In the modified version, we will follow your suggestion to provide more case studies and theoretical analysis to make it more convincing. [1] Sun, C., Li, H., Li, Y., & Hong, S. TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for Time Series. In ICLR 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the response, which addresses most problems. However, there are still some minor points to be clarified. 1. If we consider the clients as edge clouds, which are often possessed by big companies, such as Google, many kinds of time series data are supposed to be included in these edge clouds. In such situations, training an LLM-based FL method would be less important, considering that data access restriction is not a big problem. 2. It would be better to explain the personalized federated strategy in detail for better understanding. 3. It is suggested to improve the draft by adding a section (or contents) to clarify the technical depth according to the response to W4. --- Rebuttal 2: Title: Response to Reviewer 7qPS Comment: We are happy that our responses have effectively addressed your concerns. We would like to express our gratitude again for taking the time to review our paper and provide us with such detailed and invaluable suggestions. >C1. If we consider the clients as edge clouds, which are often possessed by big companies, such as Google, many kinds of time series data are supposed to be included in these edge clouds. In such situations, training an LLM-based FL method would be less important, considering that data access restriction is not a big problem. Thanks for providing such an insightful point. We will provide analysis from the perspectives of the necessity of federated learning and the adoption of the LLM-based backbone. **Necessity of Federated Learning.** Actually, even in big companies on the scale of Google, there exist data restrictions across different departments.
The time series data, especially from different business scenarios, can hardly be transmitted directly to a centralized cloud server for training, given the **data volume** and **access limitations**. Hence, a foundation model trained in the FL paradigm may be an optimal choice. On the other hand, different (big or small) companies can be united in a community to construct an in- or cross-domain foundation model. **Adoption of LLM.** We adopt the first 6 transformer layers of GPT2 as the backbone. This also follows the track of training-from-scratch foundation models, such as MOIRAI and TimesFM, where multiple transformer layers are also adopted as the backbone. In our paper, we merely initialize the backbone with the pre-trained GPT2 parameters, which achieves performance comparable to full-tuning or fine-tuning. As a matter of fact, with sufficient large-scale time series data available, we can train our Time-FFM from scratch without initializing from GPT2 parameters, like Chronos. >C2. It would be better to explain the personalized federated strategy in detail for better understanding. Thanks for your valuable suggestion. **Firstly, we want to clarify that domain personalization is a necessity.** The foundation model can learn the underlying commonalities in time series data, but may not guarantee the prediction performance for a specific domain if we directly adopt the Federated Averaging framework to learn shared global model parameters for each domain. In centralized training mode, by contrast, the pre-trained model needs to be fine-tuned first and then adopted for specific forecasting tasks to ensure (personalized) prediction performance. **Secondly, we analyze the rationality and present the technical details of the designed personalized strategy.** We are inspired by the success of centralized learning, where heterogeneous data are projected to representations in a shared latent space.
Hence, we can learn a global encoder to enable a shared representation space over different domains and dedicated local prediction heads for generating personalized prediction results in line with local distributions. We will modify the manuscript according to the responses to W3 and C2, and supplement a case study to illustrate the rationality of the personalized strategy. We sincerely hope such a revision will be taken into consideration. >C3. It is suggested to improve the draft by adding a section (or contents) to clarify the technical depth according to the response to W4. We will carefully follow your suggestion to clarify the technical depth in the revised version to make the novelty of the proposed Time-FFM more convincing. --- Rebuttal Comment 2.1: Comment: Thanks for the response, which makes sense and addresses all my questions. --- Reply to Comment 2.1.1: Title: Gratitude to Reviewer 7qPS Comment: We are happy that our responses have fully resolved your concerns. We would like to thank you for your professional review work, constructive comments, and valuable suggestions on our manuscript.
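The personalized federated strategy described in this rebuttal (a shared encoder aggregated across domains, with each domain's prediction head kept local) can be sketched as follows. This is an illustrative sketch under assumed parameter names, not the authors' actual implementation:

```python
import numpy as np

def fed_round(client_models):
    """One communication round: FedAvg over the shared encoder parameters only;
    each domain's personalized prediction head never leaves the client."""
    global_encoder = np.mean([m["encoder"] for m in client_models], axis=0)
    for m in client_models:
        m["encoder"] = global_encoder.copy()  # adopt the aggregated global encoder
        # m["head"] stays local and personalized
    return client_models

# three toy domain clients with distinct encoder/head parameters
clients = [{"encoder": np.full(4, float(i)), "head": np.full(2, float(i))}
           for i in range(3)]
clients = fed_round(clients)
```

After a round, every client holds the same encoder (the mean of the three), while the heads remain domain-specific, which is what allows personalized prediction on top of shared temporal representations.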
Summary: This paper proposes a federated foundation model for time series forecasting. This foundation model is trained in a distributed setting, with global shared parameters in a server, and domain-/dataset-specific parameters in clients. This allows the authors to personalize their predictions to local domain-specific data, while still learning general time series patterns. The global parameters belong to the prompt adaptation and modality alignment modules, whereas the local parameters include word embeddings, frozen LLMs, and client-specific prediction heads. These methods are then compared with federated fine-tuning, other LLM adaptation methods for time series forecasting, and specialized time series models on standard time series forecasting datasets. Strengths: 1. The paper is well written and easy to understand. 2. The authors conduct multiple experiments including ablation experiments, which are well executed. Weaknesses: 1. **Missing literature:** The authors do not cite or discuss three bodies of work which are pertinent to this discussion: (1) literature on time series foundation models, including models such as LagLLama [1], MOIRAI [2], TimesFM [3], Moment [4], Chronos [5] etc., (2) recent time series forecasting methods such as iTransformer [6], or older techniques such as N-BEATs [7] and N-HITS [8] etc. which have been shown to be strong time series forecasters, and (3) literature on federated learning for time series data. 2. **Rationale and contributions:** I am not sure about the motivation behind the work. I feel that foundation modeling and federated learning seem at odds with each other, at least in this case. The authors argue that data owners may not be willing to share their data. But most time series foundation models, or even LLMs and LLVMs, are trained on public data, and later adapted to private data, so I don't see the value proposition of such an approach.
Also I am not sure how this is a foundation model, as it doesn't solve multiple tasks, nor is it trained on multiple large-scale datasets. Secondly, why use large language models for time series forecasting, when studies have shown that there is sufficient data to pre-train foundation models from scratch [4], and that LLMs may not be better at forecasting time series data despite their significant computational cost [9]. Finally, I am not sure what the contributions of the work are. It seems like most of the work is built on architectural design choices from PatchTST, TimeLLM and Federated Averaging. Also, the personalization aspect sounds interesting, but can be addressed by fine-tuning foundation models, or custom-built forecasting methods trained on specific datasets. 3. **Baselines and comparisons:** The authors do not compare their methods with any recent time series foundation model, or recent time series forecasting models such as iTransformer, and old but performant methods such as N-BEATs / N-HITS. Recent studies such as [] have shown that older methods get the better of most recent methods in many settings. ### References 1. Rasul, Kashif, et al. "Lag-llama: Towards foundation models for time series forecasting." arXiv preprint arXiv:2310.08278 (2023). 2. Woo, Gerald, et al. "Unified training of universal time series forecasting transformers." arXiv preprint arXiv:2402.02592 (2024). 3. Das, Abhimanyu, et al. "A decoder-only foundation model for time-series forecasting." arXiv preprint arXiv:2310.10688 (2023). 4. Goswami, Mononito, et al. "Moment: A family of open time-series foundation models." arXiv preprint arXiv:2402.03885 (2024). 5. Ansari, Abdul Fatir, et al. "Chronos: Learning the language of time series." arXiv preprint arXiv:2403.07815 (2024). 6. Liu, Yong, et al. "itransformer: Inverted transformers are effective for time series forecasting." arXiv preprint arXiv:2310.06625 (2023). 7. Oreshkin, Boris N., et al.
"N-BEATS: Neural basis expansion analysis for interpretable time series forecasting." arXiv preprint arXiv:1905.10437 (2019). 8. Challu, Cristian, et al. "Nhits: Neural hierarchical interpolation for time series forecasting." Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 6. 2023. 9. Tan, Mingtian, et al. "Are Language Models Actually Useful for Time Series Forecasting?." arXiv preprint arXiv:2406.16964 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: I do not have any specific questions from the authors. But I would appreciate some clarity on the motivation and rationale behind the method. Please see my arguments above. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed some limitations of their work. I believe that the authors should mention the computational cost of their method as a limitation. Each dataset / domain / client has a large language model backbone + other parameters, which makes their method computationally expensive. ---- After rebuttal: discussions on legality of using non-permissively licensed datasets to train FMs and notions of privacy need to be discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply grateful for the insightful review you have provided for our manuscript. We have made the following response. > W1 & W3: Missing literature on **(1)** time series foundation models, **(2)** deep learning methods, and **(3)** federated learning methods. We greatly appreciate the suggested literature, which is of great value for improving our manuscript. We will cite these works and report the corresponding performance comparison in the revision. **For (1):** Works in **(1)** aim to construct a time series "corpus" and then train a time series foundation model from scratch. The performance comparison is reported in **Table 1 of the attachment PDF in the "Author Rebuttal"**. **For (2):** We compare multiple time series forecasting methods in the manuscript and are sorry for not covering the relevant methods you mentioned. We report the performance comparison in **Table 2 of the attachment PDF in the "Author Rebuttal"**. **For (3):** There is no existing literature focusing on building federated foundation models for time series forecasting. Therefore, we merely compare against the federated fine-tuning methods in **TY1**. > W2.1: Rationale and contributions: **(1)** The motivation behind the work. **(2)** How this is a foundation model. **For (1):** We agree with you that LLMs or LLVMs are trained on public data. Compared with the text modality, time series data are more domain-specific and (commercially) copyright-sensitive, i.e., **private knowledge may be inferred from historical time series readings**, especially in the finance and healthcare domains. Hence, it is of great significance to take data privacy into account when constructing time series foundation models. Moreover, a multitude of public data cannot even be adopted for pre-training foundation models due to data license restrictions, such as Kaggle public datasets. Some foundation models, like MOIRAI, have released large-scale public time series data **merely for research instead of commercial use**.
Hence, our work uniquely **bridges the gap between foundation models and federated learning**, which not only enhances the privacy and applicability of foundation models in sensitive domains but also opens up new avenues for leveraging rich, yet previously inaccessible, time series data for advanced predictive analytics, addressing a crucial need in the (commercial) field. **For (2):** We design Time-FFM as a foundation model for the **time series forecasting task**, like MOIRAI and Lag-Llama. Due to the absence of public time series repositories, we merely train Time-FFM on 8 benchmark datasets. **Actually, Time-FFM can be trained on the large-scale time series archives built in MOIRAI or Lag-Llama** and then applied to downstream forecasting tasks. > W2.2: Why use LLMs. In our manuscript, we adopt the first 6 Transformer layers of the pre-trained GPT2 in Time-FFM, while in the foundation models trained from scratch (MOIRAI, Lag-Llama, MOMENT and TimesFM), stacked Transformer layers are also adopted as the encoder backbone. We simply freeze all parameters of the Transformer layers, which achieves comparable performance to fine-tuning or full-tuning (observed from Table 6 in the manuscript). Actually, with sufficient large-scale time series data available, we can train our Time-FFM from scratch without initializing from GPT2 parameters, like Chronos. > W2.3: **(1)** The contributions of the work and **(2)** necessity of personalized heads. **For (1)**: We aim to propose an LM-empowered federated foundation model for time series forecasting. - Given the differentiation of dimensionality and horizon, we introduce the modality alignment module encompassing the channel-independent and patching techniques, which follows the track of GPT4TS, Time-LLM, MOIRAI, Moment, etc.
- For bootstrapping the pre-trained GPT2 backbone for cross-domain time series reasoning, we propose to **adaptively construct prompts from how the LM understands patch tokens**, rather than **from rigid domain instructions as in Time-LLM and UniTime**. - Due to cross-domain time series heterogeneity, we devise a personalized federated strategy (different from Federated Averaging, which aims at learning a global prediction model), with a **global encoder and personalized prediction heads**. In conclusion, we propose the first federated foundation model for time series forecasting, adaptively generating domain-specific prompts and tackling time series heterogeneity for general-purpose learning and personalized prediction. **For (2)**: Our proposed personalized strategy is coherent with the federated training process. Each domain participant merely uploads the updated parameters of the encoder, **without additional modification of the local optimization process**. Upon finishing federated training, a global encoder and multiple heads (one for each domain) are obtained. When applying to downstream forecasting tasks, the global encoder and in-domain heads can be frozen and directly adopted, avoiding the process of fine-tuning. > L1: Limitations on computational cost. We agree with you that the computational cost is potentially high even though we freeze all parameters of the LM backbone. In the revised version, we will follow your suggestion by supplementing this limitation. Furthermore, since Time-FFM works as a "domain"-level time series forecaster, we think that federated training participants are more likely to be edge clouds with abundant computing resources, instead of resource-limited terminal devices. After training, Time-FFM can also be deployed at edge clouds for forecasting tasks. Hence, we think that resource limitations might not be a bottleneck for the deployment of Time-FFM in the real world. We will carefully incorporate your comments in the revised paper.
Considering the encouraging comments from Reviewers LqRs, PDR4, and 7qPS, we believe our research findings are worth sharing with the research community. We sincerely hope that a revision is still considered. --- Rebuttal Comment 1.1: Title: Kindly Request for Reviewer's Feedback Comment: Dear Reviewer uvxC, Since the end of the author/reviewer discussion period is coming soon, may we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score. Should you have any further advice on the paper and/or our rebuttal, please let us know and we will be more than happy to engage in more discussion and paper improvements. Thank you so much for devoting time to improving our paper!
Summary: This paper introduces TIME-FFM, a Federated Foundation Model for Time Series Forecasting. TIME-FFM comprises (1) Modality Alignment, which aligns time series patches with text tokens; (2) Prompt Adaption, which learns the text prompts for an input time series; (3) an LM backbone; (4) a Prediction Head for domain-specific output. Experiments are conducted on several benchmark datasets to demonstrate the effectiveness of the proposed method. Strengths: 1. Federated learning for time series foundation models is an interesting and promising direction. This paper presents the first attempt in this direction. 2. Compared with SOTA federated methods (the TY1 group), the proposed method TIME-FFM could outperform the baselines. 3. The writing is clear and easy to follow. Weaknesses: 1. The overall novelty is limited. Modality Alignment, Prompt Adaption, and different prediction heads have been explored by previous methods. 2. Compared with SOTA methods from TY2, the proposed TIME-FFM does not have a significant improvement. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How do you obtain $E'$ from $E$? 2. Have you tried personalized Modality Alignment and Prompt Adaption for different domains? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to you for providing constructive feedback on our paper. We have addressed the specific concerns as detailed below. > W1: The overall novelty is limited. Modality Alignment, Prompt Adaption, and different prediction heads have been explored by previous methods. Thanks for your insightful concerns about our contributions. We aim to propose a language model-empowered federated foundation model for time series forecasting. - Given the differentiation of dimensionality and horizon, we introduce the modality alignment module encompassing the channel-independent and patching techniques, which follows the track of GPT4TS, Time-LLM, MOIRAI, Moment, etc. - For bootstrapping the pre-trained GPT2 backbone for cross-domain time series reasoning, we propose to **adaptively construct prompts from how the LM understands patch tokens, rather than from rigid domain instructions** as in Time-LLM and UniTime. - Due to cross-domain time series heterogeneity, we devise a personalized federated strategy (different from Federated Averaging, which aims at learning a global prediction model), with a **global encoder and personalized prediction heads**. In conclusion, we propose the first federated foundation model for time series forecasting, adaptively generating domain-specific prompts and tackling time series heterogeneity for general-purpose learning and personalized prediction. > W2: Compared with SOTA methods from TY2, the proposed TIME-FFM does not have a significant improvement. The methods in TY2 are centralized methods, which intrinsically outperform federated methods with the same model structures. However, our proposed Time-FFM even outperforms the SOTA methods in TY2, which further indicates the effectiveness of the devised prompt adaption module and the personalized federated learning strategy. > Q1: How do you obtain $E'$ from $E$? This is accomplished through a simple linear projection.
Specifically, given $E \in \mathbb{R}^{V \times D}$, we learn a weight matrix $W \in \mathbb{R}^{V\times V'}$ to identify a small set of text prototypes $E' \in \mathbb{R}^{V'\times D}$. We will revise the paper accordingly and include this technical detail in the revised version. > Q2: Have you tried personalized Modality Alignment and Prompt Adaption for different domains? Thank you for such an insightful suggestion. **A.6 Time-FFM-D** in Table 5 denotes the distributed version of Time-FFM, i.e., personalized modules of Modality Alignment, Prompt Adaption, and Prediction Head for each domain. As depicted in Table 5, the performance of **A.6** is inferior to that of **A.1 Time-FFM**, which further indicates that constructing a unified model across domains outperforms training dedicated prediction models for each domain. --- Rebuttal Comment 1.1: Title: Kindly Request for Reviewer's Feedback Comment: Dear Reviewer PDR4, Since the end of the author/reviewer discussion period is coming in one day, may we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score. Should you have any further advice on the paper and/or our rebuttal, please let us know and we will be more than happy to engage in more discussion and paper improvements. Thank you so much for devoting time to improving our paper!
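The linear projection described in the rebuttal above, together with a prompt-adaption step, can be sketched as follows. The shapes follow the rebuttal ($E \in \mathbb{R}^{V\times D}$, $W \in \mathbb{R}^{V\times V'}$, $E' \in \mathbb{R}^{V'\times D}$); the softmax-based selection of prototypes per patch token is our assumption for illustration, not necessarily the exact mechanism in the paper:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
V, Vp, D, N = 1000, 10, 32, 6     # toy sizes: vocab, #prototypes, embed dim, #patch tokens

E = rng.normal(size=(V, D))       # frozen pre-trained word embeddings
W = rng.normal(size=(V, Vp))      # learnable weight matrix identifying the prototypes
E_prime = W.T @ E                 # text prototypes E' in R^{V' x D}

# prompt adaption (assumed form): weight prototypes by similarity to patch tokens
patches = rng.normal(size=(N, D))
scores = softmax(patches @ E_prime.T / np.sqrt(D))
prompts = scores @ E_prime        # one adaptive prompt vector per patch token
```

Because `W` is trained end-to-end, the prototypes can specialize so that different domain inputs attend to different prototypes, matching the showcase behavior described for Fig. 3.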
Summary: The paper introduces TIME-FFM, a federated foundation model aimed at addressing the challenges of time series forecasting due to data scarcity and privacy concerns. The approach involves transforming time series data into text tokens, leveraging pretrained language models (LMs) for analysis, and using a personalized federated training strategy. Strengths: 1. Innovative Approach: The idea of transforming time series data into text tokens and leveraging LMs is creative and could potentially address the data scarcity issue effectively. 2. Addressing Privacy Concerns: The use of federated learning helps in alleviating privacy concerns and encourages data sharing without compromising sensitive information. 3. Personalized Training Strategy: The personalized federated training strategy, which combines global encoders with local prediction heads, is a thoughtful approach to handling data heterogeneity across domains. Weaknesses: 1. Algorithm Description: On page 6, the description of the algorithm seems to contain an error where Line 6 should actually be Line 5. Please carefully check the manuscript. 2. Experimental Results Discrepancy: On page 6, the value of Baseline PatchTST on ETTm1 in Table 1 differs from the values reported in reference 25. You need to provide a detailed explanation of your simulation results and why these differences occur. 3. Discussion on Future Directions: The article could benefit from a more in-depth discussion of the future directions and challenges for the proposed method. [25] Xu Liu, Junfeng Hu, Yuan Li, Shizhe Diao, Yuxuan Liang, Bryan Hooi, and Roger Zimmermann. Unitime: A language-empowered unified model for cross-domain time series forecasting. In Proceedings of the ACM Web Conference 2024, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: Include a more detailed discussion on the potential future developments and challenges for TIME-FFM, highlighting areas for further research and potential improvements.
Provide a detailed explanation of the simulation results in Table 1, particularly regarding the discrepancies with reference 25. This will help in understanding the validity and reliability of your experimental results. Correct the algorithm description to ensure that Line 6 reflects the appropriate content, as it currently appears to match what should be in Line 5. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the described weakness above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our sincere thanks for the detailed and thoughtful review of our manuscript and for the encouraging appraisal of our work. We have addressed the specific concerns and respond to the constructive recommendations in detail as follows. > **W1. Algorithm Description**: On page 6, the description of the algorithm seems to contain an error where Line 6 should actually be Line 5. Please carefully check the manuscript. Thanks for your careful reading. Lines 1-5 in the Algorithm describe the process of global execution, and the server obtains the global model parameters in Line 5. Line 6 annotates that the following lines are meant "for local training". We are sorry for the confusion and will polish the description in the revised version. > **W2. Experimental Results Discrepancy**: On page 6, the value of Baseline PatchTST on ETTm1 in Table 1 differs from the values reported in reference 25. You need to provide a detailed explanation of your simulation results and why these differences occur. As described in the footnote of Page 5, we modify PatchTST into a "unified" version as per Reference [25]. Table 1 reports the averaged evaluation results over 4 prediction windows. The MSE and MAE values of PatchTST on the ETTm1 dataset are "0.971, 0.629". In Reference [25], the averaged evaluation results of PatchTST in the type of "Models Trained Across Datasets" on ETTm1 are also "0.971, 0.629". Therefore, our simulation results agree with Reference [25]. > **W3. Discussion on Future Directions**: The article could benefit from a more in-depth discussion of the future directions and challenges for the proposed method. We sincerely appreciate your valuable suggestion. **For challenges**: We aim to propose an LM-empowered federated foundation model for time series forecasting, which is technically non-trivial, considering the following aspects.
- **Heterogeneous inputs**: Cross-domain time series data input into the foundation model are heterogeneous in terms of dimensions and historical readings, posing evident difficulties for modality alignment. - **Rigid instructions as prompts**: Existing prompts bootstrap LMs for time series reasoning via rigid domain-specific instructions, rather than the understanding of LMs, exhibiting poor robustness for unseen domains. - **Conflicts between generalization and personalization**: The ideal foundation model needs to learn the common temporal representations across domains and simultaneously enable personalized prediction for domain-specific inputs. **For future directions**: Future directions include two aspects. - Thanks to the adopted "personalized prediction heads", future research may focus on constructing a unified foundation model over different time series analysis tasks, such as classification, anomaly detection, and forecasting. - Further research should explore how to train a foundation model from scratch based on existing and potential large-scale time series repositories. We will follow your suggestions and supplement more in-depth challenge analysis and future directions in the modified version. --- Rebuttal Comment 1.1: Title: Request for Reviewer's Feedback Comment: Dear Reviewer LqRs, Since the end of the author/reviewer discussion period is coming soon (in one day), may we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score. Should you have any further advice on the paper and/or our rebuttal, please let us know and we will be more than happy to engage in more discussion and paper improvements. Thank you so much for devoting time to improving our paper!
--- Reply to Comment 1.1.1: Title: Kindly Request for Reviewer's Acknowledgment Comment: Dear Reviewer LqRs, As the discussion phase is about to end and we have really been trying our best to resolve your concerns, could you please acknowledge whether your concerns are addressed? If so, please reconsider the rating; if not, we are very happy to resolve your further concerns. Thanks for your time.
Rebuttal 1: Rebuttal: We commence by thanking the four reviewers for their thoughtful and constructive comments. We are really encouraged to see that the reviewers appreciate some positive aspects of our paper, such as technical quality (**Reviewers LqRs, PDR4, uvxC, and 7qPS**) and presentation (**Reviewers LqRs, PDR4, uvxC, and 7qPS**). Your expertise significantly helps us strengthen our manuscript. We are sorry for the several unclear parts and weaknesses mentioned by the reviewers and endeavor to respond to each comment. We sincerely hope that the responses can resolve the reviewers' concerns. We present a brief introduction to the responses as follows. - In response to the feedback from **Reviewers PDR4, uvxC, and 7qPS**, we have clarified the novelty and contributions of our paper. - In response to feedback from **Reviewer LqRs**, we have checked the details of the algorithm description and numerical results, and provided an in-depth analysis of the challenges and future directions. - In response to feedback from **Reviewer PDR4**, we have clarified the experimental details and analysis of experimental results. - In response to feedback from **Reviewer uvxC**, we have compared the performance with the suggested literature and analyzed the potential limitations. - In response to feedback from **Reviewer 7qPS**, we have analyzed in detail the potential problems in practical application and supplemented the suggested performance comparison and analysis to validate the effectiveness of the proposed Time-FFM. In the attachment PDF, we report the supplemented experimental results for **Reviewers uvxC and 7qPS**. Pdf: /pdf/a0cc173d28ef5f47230f16d21d427af07626e5c0.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Schur Nets: exploiting local structure for equivariance in higher order graph neural networks
Accept (poster)
Summary: This paper proposes Schur layers. Schur layers are meant to be used in higher-order MPNNs and are based on respecting local automorphism equivariance, however without the need for explicitly computing all automorphisms. In experiments, this method achieves state-of-the-art results on ZINC. Strengths: - **(S1 - Significance)** Schur Layers are a general architecture and can thus be used in many higher-order architectures. This means that (theoretically) interesting properties of Schur Layers can be used to improve many other architectures. - **(S2 - Significance)** In experiments on the ZINC dataset, SchurNets seem to consistently outperform other methods. - **(S3 - Novelty)** The approach does seem to be entirely novel, albeit based on some well-known mathematics. - **(S4 - Clarity & Quality)** Sections 1 and 2 are very well written. Especially Section 2, which introduces MPNNs based on the way the authors understand equivariance, presents an extremely interesting way of thinking about GNNs. Weaknesses: - **(W1 - Clarity)** The theorems are difficult to understand and no intuitive explanation is given. To me it is not clear what the theoretical contributions of this paper mean. - **(W2 - Significance)** It seems to me as if the theoretical contributions are quite small. However, I am unable to really verify this due to (W1). - **(W3)** The paper is not reproducible as the code is not supplied ("We will also make our code publicly accessible if the paper get accepted"). - **(W4 - Quality)** The experiments are lacking: - **(W4.1)** All experiments are performed on a single dataset (ZINC12k). This dataset in particular has been used in a lot of GNN papers and by now the community is probably overfitting to this specific dataset.
- **(W4.2)** The advantages of SchurNet are small, often only making a difference in the third digit of the MAE (for example in Table 3 the best SchurNet achieves $0.064 \pm 0.004$ versus a higher-order MPNN's $0.071 \pm 0.02$, that is a difference of 0.007). - **(W4.3)** Replacing layers in higher-order MPNNs with Schur layers is in my opinion the most exciting application of this approach. Unfortunately, the authors only experiment with this for a single model ($P$-tensors). Overall, I think that SchurNets could be an interesting and useful method to be used with higher-order GNNs. However, I think this paper requires more work and thus vote to reject. In particular: (1) the writing needs to be improved and better explanations are needed for the theory; and (2) the paper needs more experiments on diverse datasets with more models. Technical Quality: 3 Clarity: 2 Questions for Authors: - **(Q1)** What is the (intuitive) meaning behind Theorem 1, Theorem 2 and Theorem 3? - **(Q2)** As far as I understand it, Schur Layers are equivariant under the automorphisms. Does that mean that there is some limit to their expressivity? Otherwise, you would be solving the NP-hard automorphism problem? ## Miscellaneous - **(M1)** In my opinion, Table 5 is important enough to your narrative to warrant being in the main paper. - **(M2)** Missing citation in line 186. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments on the paper. QUESTION 1. What is the intuition behind Lemma 1 through Theorem 3? The theory of equivariant neural nets usually proceeds by considering the action of a group on the space that the output of a given layer of the network lives in, and decomposing this space into an orthogonal sum of subspaces that transform independently, i.e. subspaces that are invariant under the action. Then one finds operators on each subspace that are invariant and shows that any linear combination of these operators is a valid equivariant map. All this usually involves irreducible representations of the group, generalized Fourier transforms and other abstract mathematical tools. In contrast, Section 4.1 of our paper shows that in the case of equivariance to the automorphism group of subgraphs, essentially the same can be accomplished simply by finding the eigenspaces of the graph Laplacian. To the best of our knowledge, this has not been done before, and we believe that it is of independent interest. So, in summary, the intuition is that the outputs of the subgraph layers decompose into subspaces that transform independently under elements of the automorphism group; the specific decomposition is reflective of the structure of the automorphism group, but still, the decomposition (up to the expressiveness gap discussed below) can be found simply from the graph Laplacian. We think this result is not obvious, but it can be visualized intuitively, and it is an important contribution of the paper. Following your comments, we will try to make Section 4.1 less dry and more intuitive. Specifically, some diagrams might help visualize the decomposition into subspaces. QUESTION 2. Yes and no. There is indeed an interesting connection to graph isomorphism, but as we also mention in the response to Reviewer uFzV, it is not that simple.
As we admit in the Limitations, Schur layers are not guaranteed to be the most general automorphism-equivariant linear layers. Specifically, following the proof of our theorems, there might be a gap in expressivity if the eigenvalues corresponding to distinct invariant subspaces coincide. However, the literature on graph isomorphism, among other sources, suggests that, barring extra symmetries, this is quite exceptional for moderate-sized graphs. So for practical purposes Schur layers are almost as good as the full group theoretical approach (which would be infeasible to implement in a practical GNN). Regarding using Schur layers for finding the automorphism group, that is unfortunately not so easy. Schur layers can generate automorphism-equivariant maps without having to find the automorphism group explicitly. That does not mean that we could somehow reconstruct the automorphism group itself from the equivariant maps. FURTHER COMMENTS - We acknowledge that the experimental results are limited. We were pressed for time and limited in computational resources before the deadline. We have since conducted experiments on several other standard benchmarks and we see a consistent improvement when we add Schur layers (see global rebuttal). - The reason we mostly focus on comparing performance to P-tensors is that P-tensors [Hands et al., AISTATS 24] is currently the best performing (and in many ways most general) prior model amongst subgraph-based higher-order GNNs. Specifically, on ZINC 12K, according to the results reported in [Hands et al., AISTATS 24], they represent the state of the art. Moreover, Schur layers work best in tandem with a broader higher-order message passing architecture, and P-tensors are one of the most general such frameworks. So it is natural to focus our experiments on contrasting higher-order message passing with or without Schur layers. - Finally, it is not uncommon for the competition between GNNs to come down to just the 1% level in accuracy.
This is a crowded field, and experience shows that even very simple architectures can bring down the error to ~90% of the state of the art. Recent works focus on extracting more subtle information from graphs that is responsible for the last few percent of improvement. Admittedly, the competition on the standard benchmarks is getting saturated. In future work we want to apply Schur Nets to practical chemistry problems where the flexibility of the framework and its ability to capture chemically relevant structures (functional groups) will be the most important. First, however, we need to validate the architecture by showing its performance on the benchmarks. - Reproducibility. As we explain in the response to Reviewer uFzV, Schur layers themselves are not difficult to implement. However, because of the irregular nature of information flow between subgraphs, the rest of the higher-order GNN framework is quite involved, especially if all the messages need to be computed in parallel on the GPU. This forced us to write a separate C++/CUDA software library for general higher-order message passing with a PyTorch interface. The library is available on GitHub, but we cannot share a link to it here without jeopardizing anonymity. Once the injunction for anonymity is lifted, the link to the library will be added to the paper and naturally we will also make the training scripts public. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers and the thorough rebuttal. The additional experiments clearly strengthen this paper (especially the performance on MOLHIV). Furthermore, I find the explanations of the theorems really intriguing and am looking forward to seeing visualizations of them. Overall, if the authors manage to improve their explanations I am not against acceptance and have thus increased my score accordingly.
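The Question 1 intuition from the rebuttal above can be checked numerically in a few lines. This is a minimal sketch of our own (not code from the paper): for any automorphism of a graph, the permutation matrix commutes with the Laplacian, so it also commutes with every eigenspace projector of the Laplacian; in other words, the eigenspaces transform independently under the automorphism group.

```python
import numpy as np

# adjacency and Laplacian of the 6-cycle
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(1)) - A

# rotation by one position is an automorphism of the cycle
P = np.eye(n)[[(i + 1) % n for i in range(n)]]
assert np.allclose(P @ L @ P.T, L)

# group eigenvectors into eigenspaces by (rounded) eigenvalue
vals, vecs = np.linalg.eigh(L)
projectors = []
for v in np.unique(np.round(vals, 8)):
    U = vecs[:, np.isclose(vals, v)]
    projectors.append(U @ U.T)  # orthogonal projector onto one eigenspace

# each eigenspace projector commutes with the automorphism,
# i.e. the eigenspaces transform independently under Aut(G)
for Pi in projectors:
    assert np.allclose(Pi @ P, P @ Pi)
```

For the 6-cycle the Laplacian spectrum is {0, 1, 1, 3, 3, 4}, so the six-dimensional node space splits into four eigenspaces, and each is preserved by every rotation and reflection of the cycle.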
Summary: This paper introduces Schur layers, a novel approach in graph neural networks (GNNs) that enhances expressive power by leveraging spectral graph theory. Traditional GNNs struggle with capturing complex local graph structures due to their reliance on full permutation equivariance, which is overly restrictive and computationally intensive. Schur layers address this issue by computing basis functions directly from the graph Laplacian, circumventing the need for explicit enumeration of automorphism groups. This method is shown to significantly improve GNN performance, particularly in scenarios like molecular data analysis where local subgraph structures (e.g., cycles) are crucial. Strengths: 1. This paper makes a significant contribution by introducing Schur layers, which offer a novel approach to enhancing the expressive power of GNNs through higher order message passing. 2. The theoretical framework and mathematical rigor underpinning Schur layers are robust and well-explained. Weaknesses: 1. Experimental Evaluation: Limited empirical evaluation on diverse datasets and benchmarks undermines broader applicability and comparison with existing methods. 2. Contextualization: The paper could better situate its contributions within the broader landscape of GNN research, particularly in comparison with other higher order message passing techniques. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How does the computational complexity of Schur layers compare to traditional GNN approaches in terms of training time and memory usage? 2. How sensitive are the performance gains of Schur layers to different types of molecular datasets beyond the benchmarks used in the paper, and how would they perform in varied experiments using larger datasets such as OGB datasets and different tasks like link prediction to validate the model's versatility and resilience? 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have addressed several limitations, such as the spectral approach not ensuring the finest decomposition into invariant subspaces, potentially limiting the generality of automorphism-equivariant linear maps and the theoretical expressivity gap this might cause. However, a more detailed discussion of the method's limitations in terms of computational overhead and potential impact on large-scale graph datasets would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: QUESTION 1. In short: it is not the Schur layers themselves that make this architecture more expensive than classical GNNs, but rather the higher-order message passing itself (like in P-tensors). The natural thing to compare Schur layers to are the vanilla "linmaps" operations. In the case of a subgraph with $m$ nodes represented by a first-order permutation-equivariant tensor with $c$ channels (i.e., $X \in \mathbb{R}^{m \times c}$), the cost of the linmaps operation followed by a linear layer mixing the channels is $O(mc+mc^2)$, whereas the cost of the Schur layer would be $O(m^2 c+ mc^2)$. Since the number of channels is typically much larger than the size of the subgraph, these are essentially the same. In general, higher-order GNNs are more expensive because (a) there are potentially many more overlapping subgraphs than vertices, (b) in the case of $k$th-order message passing between subgraphs of size $m$, the size of the messages scales with $m^k$, and (c) the way that each pair of subgraphs communicates depends on exactly how many vertices and which vertices they overlap in, so the communication protocol cannot be reduced to a simple gather/scatter operation of the type that PyTorch Geometric performs. Since the subgraphs used in practice are still relatively small (think $m=6$), it is really the last point that constrains speed. To get around it we spent months architecting a C++/CUDA-based higher-order message passing library with which we could run our experiments only 3-4 times slower than conventional GNNs. Unfortunately we can't give a pointer to this library without jeopardizing anonymity. As for memory, this is again a general higher-order GNN issue. Instead of just storing a $c$-element vector at each vertex, we now store $m\times c$ matrices or $m\times m\times c$ tensors. For the types of small-molecule datasets that we mostly experimented with this is still not an issue.
The practical issue is that efficient higher-order message passing on GPUs involves precomputing various control data structures, and for large datasets we can't store all of these in the GPU's RAM, while moving information back and forth between the GPU and system memory is slow. QUESTION 2. We did some more experiments on the TU datasets and the OGB-HIV dataset. Results are attached in the general rebuttal. We observe a consistent improvement by just adding the Schur layer (replacing linmaps), which shows the robustness of the performance gain. Since Schur layers are strictly more expressive than layers which cannot take the local automorphism group into account (as a form of side-information), it makes sense that they should boost performance, but it is reassuring to see that this is indeed consistently the case. On a more general level, higher-order GNNs, and Schur nets in particular, are really not just a specific architecture but a class of architectures: the theory leaves open the question of how the subgraphs are selected and which subgraphs should communicate with which others. Implicit in your question is whether these choices should change when we move to different types of datasets or to a different task, e.g., link prediction. This is a very valid question, but we think it cannot be answered in just one paper. It will take the community some time to fully explore the vast design space of higher-order networks. One thing that we are specifically working on is finding out whether, in chemistry datasets, including subgraphs corresponding to actual functional groups is helpful. COMMENTS. Thank you for appreciating that this paper is not just about a specific design variation on GNNs, but a general connection between spectral graph theory and automorphism-group equivariance, which, to the best of our knowledge, has not appeared in the literature previously.
Automorphism group equivariance is important, but few papers have explicitly exploited it in GNNs. Our hope is that the theory outlined in this paper will make it a little less daunting to incorporate automorphism group equivariance in GNNs.
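The Question 1 cost comparison above can be checked with quick back-of-the-envelope arithmetic. The concrete values $m=6$, $c=64$ are our own illustrative assumptions, not figures from the paper:

```python
m, c = 6, 64  # assumed subgraph size and channel count (illustrative only)

# linmaps followed by channel mixing: O(mc + mc^2)
linmaps_cost = m * c + m * c * c
# Schur layer (projector multiplies) followed by channel mixing: O(m^2 c + mc^2)
schur_cost = m * m * c + m * c * c

print(linmaps_cost)                          # 24960
print(schur_cost)                            # 26880
print(round(schur_cost / linmaps_cost, 2))   # 1.08
```

With $c \gg m$, the shared $O(mc^2)$ channel-mixing term dominates both counts, which is why the Schur layer adds only a few percent of overhead in this regime.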
Summary: This paper introduces Schur Net, an architecture designed to attain subgraph equivariance without fully determining the automorphism group. Utilizing spectral graph theory, Schur layers incorporate equivariant side-information from local structures to improve expressiveness. The authors have confirmed Schur Net's effectiveness through experiments conducted on the ZINC dataset. Strengths: - Efficiency. The proposed Schur Nets attain subgraph equivariance without the necessity to identify the full automorphism group of the subgraph, rendering it more efficient than the group theoretic approach. - Theory. The authors have demonstrated the first-order permutation equivariance of Schur layers and have extended this to include higher-order permutation equivariance. - Experiments. Schur Nets have shown impressive performance on the ZINC dataset. Weaknesses: - Presentation. This paper suffers from numerous spelling and grammar errors, as well as incorrect citations. The authors are advised to carefully proofread their work to amend these mistakes. Additionally, the theoretical background part is challenging to comprehend. The authors should work to simplify this part, incorporating only the necessary parts for the main results in the text and elaborating details in the appendix. - Theory. The authors have indicated the use of only first-order activations; thus, it appears that only Corollary 1 is applied as the form of Schur Nets in the experiments. If this is the case, including Theorem 3 in the main text is unnecessary. The outcomes of Corollary 1 seem trivial, offering limited contribution. - Expressiveness. Schur layers, as the paper admits, cannot express all equivariant linear maps, which constrains their expressiveness. The conclusion asserts that their method is "almost as expressive" as the group theoretic approach without providing supporting theorems. 
Furthermore, the paper lacks a thorough discussion on the precise expressiveness of Schur Nets and how they compare to other models. - Missing Related Works. The absence of a related works section hinders the reader's ability to compare this study with prior research. The authors should introduce a section on related works and highlight the novelty of their study in comparison. - Experiments. The scope of the experiments is too narrow, being limited to the ZINC dataset without a robust set of baseline models for comparison. Despite the paper's subpar presentation, I acknowledge that my understanding may be flawed, and I am receptive to the authors' clarifications. Nevertheless, I urge the authors to enhance the paper's clarity and accessibility. Technical Quality: 2 Clarity: 1 Questions for Authors: How were Schur layers implemented in the experiments? Was equation (8) simply applied to the features of the subgraphs? Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: The limitations are outlined in Section 6, where the authors recognize the limited expressiveness of Schur Nets, the narrow range of experiments conducted, and the use of first-order activations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and several suggestions that are very much on point. QUESTION 1. Yes, in our experiments we used first-order Schur layers, which just use equation (8). What we like about our approach is that it bypasses all the group theory and reduces to something so simple. The rest of the network, however, which involves message passing between different subgraphs, is technically quite complicated because it has to account for how many vertices any pair of subgraphs have in common, etc., as is generally the case in higher-order GNNs. In the main body of the paper we decided to focus only on Schur layers, because that is the novelty relative to other higher-order message passing papers. Maybe we should emphasize that what gives our architecture its power is the combination of Schur layers with higher-order message passing. Schur layers on their own, if they could only communicate via scalars, would not give very good performance. Conversely, higher-order message passing on its own has already been shown to give state-of-the-art results, but we show (see also our more recent experiments mentioned in the global rebuttal) that taking the automorphism group into account with Schur layers consistently improves performance further. FURTHER COMMENTS Thank you for pointing out the need to proofread the paper and increase the clarity of some sections. It is also a good idea to add a "Related works" section; the reason we decided to cite the literature inline was mostly to save space, but if the paper gets accepted, the extra page available in the camera-ready version would allow us to add a separate section for this purpose. - Theory. While equation (8) is simple, we beg to differ about it being trivial. Since the early days of GNNs, researchers have proposed building GNNs based on the "graph Fourier transform", i.e., some kind of expansion of functions in the eigenvectors of the Laplacian.
However, these architectures were usually motivated by an analogy with convolution in Euclidean space. Somewhat independently, people studied GNNs from the point of view of equivariance alone and proposed corresponding architectures. Corollary 1 and Theorem 3 bring these two threads of work together, and show that equivariance to the automorphism group (of subgraphs), which seems like a complicated algebraic issue, can in fact be enforced via the Laplacian. The proof not only shows that this approach is equivariant but also why it is equivariant: because the eigenspaces of the Laplacian MUST transform independently under permutations. The only case in which the spectral approach is weaker than a fully group theoretical approach (which to our knowledge has never been implemented) is when some of the eigenspaces further split into smaller invariant subspaces, i.e., when some of the eigenvalues of the Laplacian "accidentally" coincide. This gives a whole new interpretation to spectral GNNs and suggests that the reason they could achieve relatively good performance was maybe not so much the analogy with convolution but simply that they could take the automorphism group of the graph into account. Of course the actual setting in our case is different because we apply Schur layers at the subgraph level rather than at the level of the whole graph. In summary, the reason that we included the theoretical results, including the proofs, in the main body of the paper in full detail is that we think they are of independent importance. The proofs themselves shed light on a new connection between spectral graph theory and equivariance. Potentially this could be exploited in other settings as well. Further, it can be the basis of theoretical studies of expressivity (see below). - Limitations of expressiveness.
We mainly mention the fact that Schur layers cannot necessarily account for all equivariant linear maps because we think that studying in what cases they can or cannot is an interesting theoretical research question. As mentioned above the "gap" only arises if some of the eigenvalues of the Laplacian coincide, while the corresponding subspaces are not, in fact, irreducible invariant spaces of the action of the automorphism group. We know from the literature on e.g. graph isomorphism that in the typical case this does not happen: unless there is some specific symmetry involved, the eigenvalues of moderate to large graphs are typically distinct. In fact, for most graphs they are sufficient to distinguish between non-isomorphic graphs. So studying this gap comes down to studying the "special" graphs that defeat the Schur layers and possibly there is an interesting connection to graph isomorphism. However, we felt that this would go beyond the scope of the present paper. - Regarding further experiments please see the "global rebuttal". --- Rebuttal Comment 1.1: Comment: I appreciate the detailed rebuttal; however, my concerns regarding the paper persist. Primarily, the paper's presentation requires refinement. The theoretical sections (Section 3 and the initial part of Section 4) are hard to follow, and their connection to the proposed Schur layer is ambiguous. I recommend that the authors condense these sections to include only essential results. Additionally, the paper's theoretical contributions appear limited. Lemma 1, which seems rather obvious, is essentially Lemma 3.1 from [1], and Corollary 1 is a direct consequence of Lemma 1. Although Theorem 3 extends this to higher-order equivariance, it lacks experimental validation. I would also advise the authors to perform more experiments on larger datasets beyond the TU datasets. For these reasons, I am inclined to retain my original score. [1] Babai, L., Grigoryev, D. Y., & Mount, D. M. (1982, May). 
Isomorphism of graphs with bounded eigenvalue multiplicity. In Proceedings of the Fourteenth Annual ACM Symposium on Theory of Computing (pp. 310-324). --- Reply to Comment 1.1.1: Comment: Thank you for your response! For experimental validation on other (relatively large) datasets, please see our result on OGBG-MOLHIV (81.6 ± 0.295) as reported in the global rebuttal and the detailed results table posted to Reviewer EFuW. Further comments: 1. Thank you so much for bringing the [Babai et al., 1982] paper to our attention. Lemma 1 by itself is indeed almost trivial (that's why we called it a Lemma, not a Theorem). However, the [Babai et al.] paper is still an important reference because it underlines the potential connection to the literature on graph isomorphism. Specifically, there might be results in that paper or follow-up publications that allow us to put a bound on the potential expressivity gap between the group theoretical and the spectral approaches. We will look into that and potentially add a discussion to the Appendix. Thank you! 2. The intended message of our paper is that something which other papers attempted to do in a complicated way with irreducible representations, etc., can be done (almost) equally well in a simple way using just the eigendecomposition. From this point of view, in our opinion, the fact that Lemma 1 and Corollary 1 are technically simple is more of a strength than a weakness. We do appreciate your point about presentation, however. We certainly didn't mean to make the description of the group theoretic approach look more complicated than it needs to be, so we will rework Sections 3 and 4 to make them as simple and readable as possible. We appreciate that for readers who are not already familiar with the group theoretic approach, Section 3 might be too dense, so we will add more background information in the Appendix and remove unnecessary details from the main body of the paper.
We will also add a description of how Linmaps (from [Maron et al., 2019]) work, and provide illustrative examples. 3. Regarding the gap in expressiveness mentioned in your original review, please see the table in our new global comment comparing the number of invariant spaces produced by the group theoretical vs. the spectral approach. We found that for the specific subgraphs we considered there is, in fact, no gap.
Summary: This paper proposes to define permutation-equivariant functions on subgraphs via spectral theory, as opposed to the more traditional 'equivalence classes' of permutations [Maron et al.]. The paper lays out the theory behind producing equivariant maps via the eigendecomposition of the subgraph's Laplacian and the permutation invariance of the eigenspaces, which, along with standard arguments from representation theory, produces equivariant functions, although not all of them. The validity and robustness of the approach is further exemplified in the experiments on benchmark tasks and datasets, comparing this approach to [Maron et al.]. [Maron et al.] Maron et al., Invariant and Equivariant Graph Networks, NeurIPS 2019 Strengths: 1. This paper proposes a novel method that can significantly reduce the exponential search space of permutation-equivariant layers between overlapping subgraphs in the P-tensor formalism. 2. This can be extended to Subgraph GNNs and other higher-order networks 3. Some experimental verification of good performance Weaknesses: 1. The background is not clearly laid out, especially that of the P-tensor formalism. Without referencing back to these papers it wasn't possible for me to understand this manuscript. I suggest adding background in the supplementary material. 2. Other than the spectral convolution being permutation equivariant, there are no theoretical guarantees on its expressivity (in terms of Weisfeiler-Leman or Subgraph GNNs). Technical Quality: 4 Clarity: 3 Questions for Authors: 1. I read the background but I still don't understand what this means in line 337: "In table 1, we see that Schur layer outperforms Linmaps under different cycles sizes, especially when only cycle 5 and 6 are considered." Do you choose as your neurons only cycles of a certain size and then compute the equivariant layers in your proposed spectral approach?
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and questions. QUESTION 1. We experimented with various architectures, but in the final experiments we just used subgraph-layers corresponding to cycles, edges, and vertices. The cycles were of size 5 and 6, corresponding to the aromatic rings that commonly occur in molecules. The experiments show that applying the Schur-layer idea to just the cycles alone consistently improves performance over the baseline P-tensors model. The P-tensors model is the natural baseline here since it is currently the state-of-the-art model amongst subgraph-based higher-order GNNs. The experiments (both those that are in the original paper and those that we have done since) show that automorphism-group-aware convolutions with Schur layers within subgraphs robustly improve performance over the more traditional local convolutions that ignore the automorphism group of the subgraphs. QUESTION 2. There might be a slight misunderstanding here. The naive approach would not involve an exponential number of layers. It would involve first finding the automorphism group of each subgraph (a complicated combinatorial problem, but one that could be done in precomputation) and then symmetrizing over the automorphism group in each layer, either by (a) enumerating each of its elements or (b) using the representation theory of the specific group. Technically, both finding the automorphism group and enumerating over its elements are NP-hard in the size of the subgraph. The real problem is not just the computational cost but how complicated it would be to consider all potential automorphism groups, generate the irreducible representations, and so on. We are not aware of any higher-order GNN paper that seriously proposed doing this. Therefore, we cannot directly compare the running time of our model with this. What our paper shows is that all of this can be bypassed with a simple spectral graph theory "trick".
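That trick can be sketched in a few lines. This is a hedged illustration under our own simplifications (it is in the spirit of, but not a verbatim transcription of, the paper's equation (8)): build projectors onto the Laplacian eigenspaces of the subgraph, apply one learnable channel-mixing matrix per eigenspace, and the resulting map is automatically equivariant to every automorphism of the subgraph, without ever computing the automorphism group.

```python
import numpy as np

rng = np.random.default_rng(0)

def eigenspace_projectors(A, tol=1e-8):
    """Projectors onto the eigenspaces of the graph Laplacian of A."""
    L = np.diag(A.sum(1)) - A
    vals, vecs = np.linalg.eigh(L)  # eigenvalues sorted ascending
    projectors, i = [], 0
    while i < len(vals):
        j = i
        while j < len(vals) and vals[j] - vals[i] < tol:
            j += 1
        U = vecs[:, i:j]
        projectors.append(U @ U.T)
        i = j
    return projectors

def schur_layer(X, projectors, weights):
    # one learnable channel-mixing matrix W_i per eigenspace projector P_i
    return sum(P @ X @ W for P, W in zip(projectors, weights))

# 6-cycle subgraph with c = 4 feature channels
n, c = 6, 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
projs = eigenspace_projectors(A)
weights = [rng.standard_normal((c, c)) for _ in projs]
X = rng.standard_normal((n, c))

# rotating the cycle is an automorphism; the layer commutes with it
P = np.eye(n)[[(i + 1) % n for i in range(n)], :]
assert np.allclose(schur_layer(P @ X, projs, weights),
                   P @ schur_layer(X, projs, weights))
```

The equivariance follows because any automorphism permutation commutes with the Laplacian and hence with each eigenprojector, so it passes through the layer; no enumeration of the automorphism group (here, the dihedral group of the cycle) is ever needed.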
A huge benefit of our approach is its flexibility. We can easily change to other subgraphs just by changing the subgraph template; the Laplacian, etc., are computed automatically, and there is no need to search for the automorphism group, somehow derive its irreducible representations, and so on, nor is there a need to sum over all group elements, which would have $O(m!)$ complexity in principle, where $m$ is the size of the subgraph. In practical terms, as we also explain in our response to Reviewer fyBT, the biggest computational bottleneck in higher-order GNNs is orchestrating the message passing process in parallel across all subgraphs in a given layer, given that the subgraphs overlap with each other in different ways. To facilitate this we had to write a separate library, complete with specialized CUDA kernels and a considerable amount of engineering. In practice, using this library, first-order message passing, as in our experiments, is only 3-4 times slower than conventional GNNs. The actual Schur layers only make up a small fraction of the run time. If we summed over all elements of the automorphism group, however, we estimate that the automorphism-group-equivariant layers would be about 100 times slower, but for the reasons cited above such an approach would be very clunky to implement anyway. QUESTION 3. There might be another slight point of confusion here. First of all, "Linmaps" is not the same as [Maron et al.]'s original model, since all the layers in [Maron et al.]'s model operate on the entire graph, whereas we consider subgraph-based models where the Linmaps happen at the level of subgraphs. The reason that [Maron et al.] is cited is that we apply the same linear transformations at each subgraph as they did at the level of the entire graph, and they were the first to enumerate all such possible linear transformations.
Secondly, the problem with "Linmaps" (as implemented in, for example, P-tensors) is not that they are expensive but that they are not expressive enough, because they fail to take into account the automorphism group of the subgraph. Schur layers are a plug-in replacement for Linmaps that can take into account the automorphism group, which is basically a form of equivariant side information (see Sections 3 and 4). Fortunately (see the response to fyBT), thanks to the spectral graph theory trick, the added computational cost is minimal and in practice pales in comparison to the cost of using higher-order message passing in the first place (please see the response to fyBT for the theoretical complexity). In summary, we have a more expressive model and achieve a consistent improvement over other higher-order models like P-tensors without any appreciable extra computational cost. The reason we did not manage to get results on the full ZINC dataset is primarily a memory issue. As we explain, to achieve speed, we need to cache a variety of control data structures on the GPU, and on the full ZINC dataset we simply ran out of memory. We are presently working on removing this limitation from the software by improving the engineering of the backend. FURTHER COMMENTS. - We appreciate your comment about the paper not being self-contained because it doesn't explain the mechanics of higher-order message passing, especially since this is a relatively new formalism. In the main paper we just wanted to concentrate on what is novel, which is the Schur layer. We will add background information on P-tensors, etc., to the appendix. - See the global rebuttal for further results on the performance boost of Schur layers; unfortunately these only came after the deadline. - Thank you for bringing [Feng et al.] to our attention; we were not aware of it and the results on ZINC 12K are very impressive! One fundamental difference is that theirs is a second-order model.
While we have worked out the theory of higher-order Schur layers, we have not used them in our experiments. This paper gives a strong motivation to explore second-order Schur layers. --- Rebuttal Comment 1.1: Title: Response Comment: I highly appreciate your detailed rebuttal. 1. What I'm missing is that if you only restrict to automorphisms of cycles and other simple subgraphs, isn't it computationally feasible to know the decomposition into irreducible representations? Isn't it the decomposition of the representation of the cyclic group acting on the vectors on the subgraph (`neuron')? 2. Thank you for the clarification. Yes, upon looking it over again, the number of linear equivariant mappings you have is always larger, as the automorphism group of the subgraph is a subgroup of $S_m$, where $m$ is the subgraph size. 3. "cache a variety of control data structures on the GPU": By this you mean the eigendecompositions of the subgraphs? 4. I agree with the comment re [Feng et al.]. I'm still debating whether this merits a score increase. Is it possible to put together all the experimental results in a more readable format like a PDF? Or just a succinct reply with highlights of the best experimental claims from both the paper and the rebuttal. Thank you --- Reply to Comment 1.1.1: Comment: Thanks so much for the quick response to our rebuttal. 1. For cycles the relevant group is not the cyclic group but the dihedral group, which is a little more complicated because it also has some two-dimensional irreducible representations. Table 4 shows that adding cycles with {1,2,3} branches hanging off them can improve performance, and here the automorphism group is different (see the paragraph titled "Flexibility" in the Experiments section). Without the Schur layer formalism we would have to work out the irreducible representations of all of these groups separately by hand.
However, for the "production run" used to generate the results of Table 5, we ended up just using cycles, so you make a fair point. Our primary goal with the paper was to develop the general machinery for automorphism-group-aware message passing in higher-order subgraph neural networks. After we had done that and written the corresponding software, we started working on validating the approach, trying to show that adding automorphism-group-aware operations can improve the state of the art in empirical results on benchmark datasets. What we had in mind was adding subgraphs corresponding to actual functional groups. We found that just by adding cycles we can already beat the other papers on ZINC 12K (with the exception of [Feng et al.], which you kindly pointed out but we didn't know about). In addition to branched cycles, we also did some experiments on star-shaped subgraphs, for example, but in terms of producing results for this conference deadline we just concentrated on cycles in the end, because that seemed like the most direct path to competing with the other algorithms. Our longer-term goal is to explore much more adventurous applications of this framework. In summary, you are right: if all that we were interested in were cycles and first-order message passing, then we could have developed an architecture specialized to just that, and that would perhaps have been easier. It would probably be a rather incremental contribution to the field, though. 3. We wish it were just the eigenvectors that needed to be cached. The main technical difficulty in GNNs is parallelizing the message passing step on the GPU, given that the graph structure is not regular. Most of the community uses PyTorch Geometric, which solves this problem for classical message passing GNNs by reducing the message passing step to one big "scatter" operation.
The problem with using this for higher order message passing is that in architectures like ours, for a given pair of sending and receiving subgraphs, the exact form of the message passing map between them depends on how many vertices they have in common and which vertices those are. So effectively the message passing operation is different for each pair of (sending, receiving) subgraphs. To solve this problem and be able to train our model at a comparable speed to other GNNs, we had to write specialized CUDA kernels and ultimately a separate library. The kernels take as input the source subgraph-layer, a data structure specifying which subgraph communicates with which subgraph (in most of our experiments this is just determined by whether they overlap or not), as well as a data structure that specifies for each pair of (source, destination) subgraphs which vertices they share. The latter two data structures are relatively expensive to compute because they need to be created on the CPU, so it makes sense to cache them (on the GPU). This is what can lead to memory issues. It also complicates the batching process, because unlike in PyTorch Geometric, we can't just batch the graphs by merging them into one big graph. We are working on relieving the memory issue by updating the library so that it can move these data structures in and out of GPU memory flexibly, but using a preloading strategy so as to hide the latency of the memcpy calls. The overall goal is that the library should make higher order GNNs just as easy to use as regular message passing networks, hiding all these details on the backend. We should also note that the library can also do second order message passing as described in Theorem 3; we just haven't had a chance to experiment with that yet. ---- I don't think we can submit PDFs at this stage, but we will compile the additional results in table format and share them soon.
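For readers unfamiliar with the scatter formulation, here is a minimal numpy sketch of classical sum-aggregation message passing reduced to a single scatter-add (our own illustration, not PyTorch Geometric's implementation; the function name is hypothetical, and real implementations use optimized GPU kernels):

```python
import numpy as np

# Minimal sketch: classical message passing as one gather plus one scatter-add.
# edge_index[0] holds the source node of each directed edge, edge_index[1]
# holds the destination node.
def scatter_message_pass(x, edge_index):
    src, dst = edge_index
    messages = x[src]              # gather: one message per directed edge
    out = np.zeros_like(x)
    np.add.at(out, dst, messages)  # scatter-add messages into receiving nodes
    return out

# Triangle graph on nodes {0, 1, 2}, with both directions of every edge.
x = np.array([[1.0], [2.0], [4.0]])
edge_index = np.array([[0, 1, 1, 2, 0, 2],
                       [1, 0, 2, 1, 2, 0]])
agg = scatter_message_pass(x, edge_index)  # each node sums its neighbours
# agg == [[6.], [5.], [3.]]
```

In the higher order setting described above this single-scatter reduction breaks down, because the linear map applied along each (source, destination) pair depends on the shared vertices, which is what the specialized CUDA kernels handle.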
Rebuttal 1: Rebuttal: We thank all the reviewers for their careful reading of our paper and thoughtful comments. Points raised by individual reviewers are addressed in the individual responses; here we would like to make some general comments. - First of all, we stress that our paper has two separate aims: 1. Making a general theoretical contribution to the literature by pointing out a connection between spectral graph theory and the theory of equivariant GNNs, specifically, how the eigendecomposition of the graph Laplacian can "mimic" the decomposition into irreducible subspaces that is at the heart of the mathematically more sophisticated group theory based approach to equivariance. 2. Making a practical contribution by showing how the theoretical results can be used to easily "upgrade" higher order subgraph-based GNNs to take into account the automorphism group of subgraphs as "side information". This upgrade makes the GNNs strictly more expressive at little additional computational cost. - Several reviewers ask about the potential gap in expressivity related to the fact that in principle the Laplacian-based decomposition into invariant subspaces might be coarser than the group theory based approach. We explain that finding the cases in which this happens is quite a deep theoretical question that goes beyond the scope of the present paper. In practical cases with moderate-sized subgraphs, however, we argue that there is little or no gap. - Regarding the comments about the limited scope of the experiments, we have now conducted further experiments, and found that they confirm that taking the automorphism group of subgraphs into account with Schur layers boosts the performance of higher order subgraph neural networks.
For example, on the classic TUDatasets we find:

| Dataset | *Linmaps* | SchurLayer |
|-----------|-----------|-------------|
| Proteins | 74.7±3.8 | **75.4±4.8** |
| MUTAG | 89.9±5.5 | **90.9±4.7** |
| PTC\_MR | 61.1±6.9 | **64.6±5.9** |
| NCI1 | 82.1±1.8 | **82.7±1.9** |

Importantly, on the OGBG-MOLHIV dataset we achieve an ROC-AUC of 81.6% ± 0.295, whereas a corresponding P-tensor model without Schur layers only achieves 77.925% ± 2.461. Please note that this result is highly significant on its own terms; in particular, it beats all other competing models (GINE, PNA, HIMP, CIN, SUN, etc.) cited in the P-tensor paper [Hands et al, 2024]. - Several reviewers also ask about the computational cost of our model. In addition to our detailed comments about computational complexity and the practical challenges of implementing higher order message passing, here are some illustrative single-GPU per-epoch wall-clock runtimes:

| Dataset | Linmaps | SchurLayer |
|-----------|---------|------------|
| ZINC-12k | 25.4s | 27.6s |
| NCI1 | 9.5s | 11.5s |

**Table 2**: Runtime per epoch with hyper-params num_layers = 4, rep_dim = 128, dropout = 0.0, batch_size = 256, cycle_sizes = 3,4,5,6,7,8. The implementation of Linmaps and SchurLayer is based on our internal software.
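To make the first aim above concrete, here is a small numpy sketch (our own illustration, not part of the paper's code) of why the Laplacian eigendecomposition can stand in for the group-theoretic decomposition: any subgraph automorphism commutes with the graph Laplacian, so each Laplacian eigenspace is an invariant subspace of the automorphism group.

```python
import numpy as np

# Build the combinatorial Laplacian of the 5-cycle C_5.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# A cyclic rotation is an automorphism of C_5; its permutation matrix
# commutes with the Laplacian.
P = np.roll(np.eye(n), 1, axis=0)
assert np.allclose(P @ L, L @ P)

# Consequently every Laplacian eigenspace is mapped to itself by the
# automorphism: P v is again an eigenvector with the same eigenvalue.
vals, vecs = np.linalg.eigh(L)
v = vecs[:, 1]
assert np.allclose(L @ (P @ v), vals[1] * (P @ v))
```

As discussed in the rebuttal, in principle this decomposition can be coarser than the full decomposition into irreducibles, but for moderate-sized subgraphs the gap is argued to be small.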
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations
Accept (oral)
Summary: In this paper, the authors propose a novel Pfaffian-based anti-symmetrization method to represent the generalized wavefunction in quantum chemistry. Unlike the traditional Slater determinant-based anti-symmetrization, the Pfaffian-based method offers greater flexibility in selecting the number of orbitals. This flexibility is crucial for generating a universal wavefunction representation for molecular systems with varying numbers of electrons. The authors also introduce a new pretraining scheme that addresses the rotational symmetry in Hartree-Fock (HF) solutions, thereby improving the quality of the initial guess in their method. They validate the effectiveness of their approach through experiments on atomic systems and the TinyMol dataset. Strengths: 1. To the reviewer's knowledge, this is the first study that uses the Pfaffian as an anti-symmetrization method in the area of solving the Schrödinger equation with neural networks. 2. The proposed pretraining scheme effectively avoids orbital disorder in HF solutions, ensuring that the initial guess for the wavefunction is more accurate and stable. This scheme can be integrated with other methods for solving the Schrödinger equation, facilitating further development and refinement in this area. 3. The performance of the proposed method is not affected by uncorrelated data, indicating its potential to incorporate more training data to improve the transferability of the proposed method. Weaknesses: 1. The primary techniques in this paper are the Pfaffian anti-symmetrization method and the associated pretraining scheme. However, in the TinyMol dataset evaluation, the equivariant part of the proposed method differs from that of the baseline, making it difficult to attribute the performance difference solely to the new anti-symmetrization method. Additionally, the performance of the proposed method is not consistently superior to the baseline.
The reviewer recommends an ablation study to clearly demonstrate the effectiveness of the proposed method. 2. To handle odd numbers of electrons, the authors introduce a learnable padding vector to prevent Pfaffian collapse. While this approach appears effective in in-distribution experiments (e.g., affinity and ionization potential of atomic systems), the reviewer is concerned about its efficacy in out-of-distribution scenarios, particularly when generalizing to unseen systems with different electron numbers. Technical Quality: 2 Clarity: 3 Questions for Authors: There are other ways to construct a skew-symmetric matrix, such as $\Phi_1^{\top}\Phi_2-\Phi_2^{\top}\Phi_1$, where $\Phi_1,\Phi_2\in\mathbb{R}^{N_{0}\times N}$ are the output of different equivariant networks. The reviewer is curious about why the authors choose $\Phi^{\top}A\Phi$ in Neural Pfaffians. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: See weakness part Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their invaluable feedback and hope to address their concerns. Firstly, we would like to highlight the broad range of new experimental evidence we present in the general comment. The following details how the experiments relate to the reviewer's concerns. **Ablation studies**\ To better isolate the contribution of our Neural Pfaffians, we performed several new ablation studies on the TinyMol dataset. In particular, we also trained Globe (+ Moon) [1] on the TinyMol dataset and combined our Neural Pfaffians with FermiNet [2] and PsiFormer [3]. Globe shares the same embedding technique as our Neural Pfaffian and, thus, only differs in how the fermionic antisymmetry is enforced. The results of Globe are depicted in Figure 1 of the general response. There, we can see that while Globe starts similarly to our Neural Pfaffians, it cannot reach similar accuracies and converges to significantly higher, i.e., worse, energies. Since the Neural Pfaffian is not restricted to Moon as an embedding technique, we perform an ablation study where we replace Moon with FermiNet and PsiFormer, respectively. The convergence plots are depicted in Figure 2 of the general response. There, we can see that our Neural Pfaffian outperforms TAO and Globe, independent of the choice of embedding model. However, consistent with [1], Moon performs better for generalized wave functions. Additionally, we would like to point the reviewer to the ablation study in Appendix F, where we replace the Pfaffians with AGPs $\Psi=\det(\Phi_\uparrow\Phi_\downarrow^T)$, a special case of the Pfaffian that is faster to evaluate. The rest of the network stays the same. There, we find that AGPs are significantly faster to compute, though they cannot match the accuracy of our Neural Pfaffians. **Comparison to CCSD(T)**\ Our NeurPf does not match the CCSD(T) energies in Figure 5 of the paper, as convergence typically requires 100k-200k steps [2,3].
We only trained for 32k steps to match the setup from [4]. To better illustrate final convergence, we added an evaluation to Table 1 of the general response, where we train a NeurPf for 128k steps. Our NeurPf comes within chemical accuracy ($\leq 1.6mE_h$) or surpasses CCSD(T) even on the larger dataset. **Odd numbers of electrons**\ We are happy to present an alternative solution to deal with odd numbers of electrons in the general comment and would appreciate the reviewer's opinion on this. In short, instead of appending a learnable vector to the orbital matrix, we pad the orbitals $\Phi$ in both dimensions with an identity block to obtain $\hat{\Phi}=\begin{pmatrix}\Phi&0\\\\0&1\end{pmatrix}$. Additionally, we also pad the antisymmetrizer $A$ to $\hat{A}=\begin{pmatrix}A&1\\\\-1&0\end{pmatrix}$ such that one obtains $\text{Pf}(\hat{\Phi} \hat{A}\hat{\Phi}^T)\propto\det\Phi$ if $\Phi$ is square. We repeat the experiment with this new formulation on the second-row elements experiment in Figure 6 of the general comment. We find little difference. Going forward, we will adopt the new padding technique as it requires no additional parameters. **Construction of skew-symmetric matrix**\ This is a great point raised by the reviewer. We agree with the reviewer that there are various ways of parametrizing skew-symmetric matrices. We decided to go with $\Phi A\Phi^T$ as a general construction. For instance, $A=\begin{pmatrix}0&I\\\\ -I&0\end{pmatrix}$ and $\Phi=(\Phi_1 \hspace{1em} \Phi_2)$ corresponds to the reviewer's suggestion $\Phi A\Phi^T=\Phi_1\Phi_2^T - \Phi_2\Phi_1^T$. We experimentally verify the advantage of having $A$ being fully learnable in Figure 3 of the general response, where we compare our approach to a fixed non-learnable $A$. The results indicate that having $A$ as learnable grants an accuracy benefit in the later stages of training. 
During the development of our method, we experimented with several other parametrizations, e.g., $A$ being learnable block-diagonal or other fixed forms like $A=\text{diag}\left(\begin{pmatrix}0&I\\\\-I&0\end{pmatrix}, ...\right)$. Still, all of these resulted in very similar training trajectories to the one depicted in Figure 3 of the general response. We also experimented with parametrizing the skew-symmetric matrix for the Pfaffian directly via pair-orbitals $\text{Pf}(A)$ with $A_{ij}=\phi(h_i, h_j) - \phi(h_j, h_i)$ but found this to be numerically unstable for molecular systems, especially with heavier atoms. To better communicate this, we will add this experiment among all the other new experimental evidence to the paper. **Final remarks**\ We hope to have addressed the reviewer's concerns and were able to isolate our contribution better with our additional experimental evidence. We are happy to discuss further concerns and look forward to an engaging discussion. [1] Gao et al. "Generalizing Neural Wave Functions"\ [2] Pfau et al. "Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks"\ [3] von Glehn et al. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry"\ [4] Scherbela et al. "Towards a transferable fermionic neural wavefunction for molecules" --- Rebuttal Comment 1.1: Comment: Thank you for your response. It has adequately addressed my concerns, so I'd like to increase the score to 6.
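The padding argument above can be checked numerically. Below is a small sketch (our own illustration using a naive cofactor-expansion Pfaffian, not the implementation from the paper) verifying both the identity $\text{Pf}(BAB^T)=\det(B)\text{Pf}(A)$ and that the identity-padded $\hat{\Phi},\hat{A}$ reduce the Pfaffian to $\det\Phi$ up to the constant $\text{Pf}(\hat{A})$:

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of a small skew-symmetric matrix via cofactor expansion."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(A[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(0)

# Pf(B A B^T) = det(B) Pf(A) for skew-symmetric A.
M = rng.normal(size=(4, 4))
A = M - M.T
B = rng.normal(size=(4, 4))
assert np.isclose(pfaffian(B @ A @ B.T), np.linalg.det(B) * pfaffian(A))

# Identity padding for odd sizes: with Phi_hat = [[Phi, 0], [0, 1]] and
# A_hat = [[A, 1], [-1, 0]], Pf(Phi_hat A_hat Phi_hat^T) is proportional
# to det(Phi) when Phi is square.
Phi = rng.normal(size=(3, 3))
M3 = rng.normal(size=(3, 3))
A3 = M3 - M3.T
Phi_hat = np.block([[Phi, np.zeros((3, 1))],
                    [np.zeros((1, 3)), np.ones((1, 1))]])
A_hat = np.block([[A3, np.ones((3, 1))],
                  [-np.ones((1, 3)), np.zeros((1, 1))]])
lhs = pfaffian(Phi_hat @ A_hat @ Phi_hat.T)
assert np.isclose(lhs, np.linalg.det(Phi) * pfaffian(A_hat))
```

The cofactor expansion is exponential in the matrix size and only meant for illustration; production codes use $O(n^3)$ tridiagonalization-based algorithms.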
Summary: This paper proposes a new ansatz (Neural Pfaffian) for parameterizing wave functions. The new ansatz improves the expressive power by making it possible to increase the number of orbitals. It is also beneficial to tasks like generalization between different systems. The effectiveness is demonstrated with plenty of experiments. Strengths: - The paper targets important tasks in physical sciences. It is laudable that the authors not only address ground-state energy calculation but also consider ionization energy and generalization among different systems, which are of more practical significance. - Handling anti-symmetry with the Pfaffian is a very novel and clever idea! How to enforce anti-symmetry is one of the most important problems in this task, and for decades we have not moved far beyond the original Slater determinants. This work provides rich insights and opens up great opportunities for future research. - The authors present comprehensive empirical studies to demonstrate the advantages of their model. Weaknesses: - As mentioned in L250-258, Neural Pfaffian is slower than a comparably-sized wave function with Slater determinants. It would be better if the authors could add training-convergence plots with time instead of iterations when comparing Neural Pfaffian with baseline methods. - The metric of summing over all energies in a dataset, as in Fig 7, is quite weird. I do not think this metric makes much sense. Additionally, it is possible that some systems with large absolute energy values dominate the curve. It is not convincing enough to showcase the advantage in **most** systems in the dataset. - There are several typos. I highly recommend the authors read the paper thoroughly again to fix all the typos. Just to list some of them: 1. L23: It should be $ \langle \Psi | \hat{H} | \Psi \rangle$. 2. L120: ‘inferring’. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. 
Is it obvious that Neural Pfaffian is better than the sum of parameterized orbital determinants $\sum_k |(\phi_{i,k}(r_j;r_{-j}))_{i,j}|$ that has a similar number of parameters? By ‘better’, I mean both expressive power (i.e. the least energy that the model can attain with sufficiently long optimization time) and convergence (i.e. the energy achieved within a fixed time period/number of iterations). 2. Regarding the ablation study on envelopes, it turns out that the full envelope (green) is less costly per iteration than the red and yellow curves. This is weird because the model with the efficient envelope contains fewer computations. Furthermore, the red line reaches lower energy than the green line. This is also weird because the model/wave function with the full envelope is richer in expressiveness and thus has the potential to attain a lower energy. Please correct me if I misunderstood this part. 3. Could the authors give a possible explanation for why Neural Pfaffian’s result gets significantly worse in the 2-hydrogen case? Intuitively, this setting is easier than the case with a larger number of hydrogens. 4. Regarding L157, when the paper addresses systems with odd numbers of electrons, the approach of concatenating an additional learnable vector appears unnatural. The structure of the wave functions would change significantly when a system gains or loses an electron. Overall, I enjoyed reading this work and vote for a strong accept. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted by the reviewer's positive feedback and want to address the remaining concerns. Firstly, we would like to highlight the broad range of new experimental evidence we present in the general comment. The following details how the experiments relate to the reviewer's concerns. **Time convergence plot**\ For our new ablation studies with NeurPf (+ Moon/PsiFormer/FermiNet) and Globe (+ Moon), we added convergence plots regarding compute time in Figure 10 of the general response. One can see that despite the additional computational overhead through the Pfaffian, our NeurPf is approximately as fast as Globe with the same embedding. This comes mainly from the fewer operations, as we do not require Globe's extra message-passing steps from atoms to orbitals. **Total energy**\ We agree with the reviewer that the total energy of all elements in the training set is an imperfect measure. For the ablation study, we decided to plot the total energy as this represents the optimization objective and is less noisy than individual molecular energies. Further, all energies are within the same order of magnitude ($-78.5E_h$ to $-114.49E_h$). Nonetheless, to better communicate the error per molecule, we added Table 1 in the general response to attribute the error on a per-molecule basis. Generally, models that perform well on one of the molecules also perform well on the others. We also would like to highlight the results in Figure 8 of the manuscript, where we break down the error per molecule with error bars indicating the different structures for finer details. Figure 8 shows that our Neural Pfaffian results in more consistent relative energies than TAO. **Pfaffian vs. determinant**\ Great question; we will extend the Appendix with the following discussion to clarify this. 
While it is a simple result to show that a Pfaffian can generalize Slater determinants, i.e., a Pfaffian can represent every Slater determinant, it is non-obvious that Pfaffians are naturally better in convergence or expressiveness. Empirical evidence in classical QMC [1] finds little to no improvements in molecular systems. However, in non-molecular systems, Pfaffians had greatly improved accuracy [2]. In our new ablation study in Figure 3 of the general response, we replace the learnable $A$ in $\text{Pf}(\Phi A\Phi^T)$ with a fixed one and investigate the impact on convergence. Here, we find that the parametrization is an essential factor in the accuracy of our Neural Pfaffians. In summary, it is unclear whether Pfaffians are generally better suited for modeling molecular systems. However, as we demonstrate in this work, they can achieve identical accuracy and are well-suited for generalized wave functions. **Envelopes**\ We appreciate the keen eye of the reviewer; the classical envelopes are indeed faster than our memory-efficient envelopes. While our envelopes and Pfau et al. [3] aim to reduce memory requirements, they require more operations. In particular, the full envelopes require $O(N_bN_dN_nN_e^2)$ operations. In contrast, our memory efficient envelopes require $O(N_bN_dN_nN_\frac{\text{env}}{\text{atom}}N_e^2)$ (for $N_o=N_e$) operations where $N_b$ is the batch size, $N_d$ is the number of determinants, $N_n$ the number of nuclei, $N_e$ the number of electrons and $N_\frac{\text{env}}{\text{atom}}$ is the number of envelopes per atom in our memory-efficient envelopes. Our envelopes primarily reduce the memory from $O(N_bN_dN_nN_e^2)$ for the full envelopes to $O(N_bN_dN_nN_\frac{\text{env}}{\text{atom}}N_e)$ where $N_\frac{\text{env}}{\text{atom}}\ll N_e$. We most likely attribute the empirical performance to the increased number of wave function parameters. 
While the $\sigma$ tensor is reduced from $N_d \times N_n \times N_e$ to $N_d \times N_n \times N_\frac{\text{env}}{\text{atom}}$, the $\pi$ tensor is enlarged from $N_d \times N_n \times N_e$ to $N_d \times N_n \times N_\frac{\text{env}}{\text{atom}} \times N_e$. For instance, for $N_e=20, N_n=5, N_d=16, N_\frac{\text{env}}{\text{atom}}=8$, we get the following parameter counts:

| | $\sigma$ | $\pi$ | Total |
|-|-|-|-|
| full | 1600 | 1600 | 3200 |
| our | 640 | 12800 | 13440 |

We will update Appendix A to better reflect memory and compute requirements in the context of the full envelopes. **Hydrogen chain results**\ We agree with the reviewer that the H2 case is the simplest of all structures. However, it is arguably the most distinct from the other structures, as the chain has no "middle" elements. We hypothesize that, due to this, it has the lowest accuracy, as no fine-tuning has been performed in this experiment. **Odd numbers of electrons**\ We are happy to present an alternative solution to deal with odd numbers of electrons in the general comment and would appreciate the reviewer's opinion on this. In short, instead of appending a learnable vector to the orbital matrix, we pad the orbitals $\Phi$ in both dimensions with an identity block to obtain $\hat{\Phi}=\begin{pmatrix}\Phi&0\\\\0&1\end{pmatrix}$. Additionally, we also pad the antisymmetrizer $A$ to $\hat{A}=\begin{pmatrix}A&1\\\\-1&0\end{pmatrix}$ such that one obtains $\text{Pf}(\hat{\Phi} \hat{A}\hat{\Phi}^T)\propto\det\Phi$ if $\Phi$ is square. We repeat the experiment with this new formulation on the second-row elements experiment in Figure 6 of the general comment. We find little difference. Going forward, we will adopt the new padding technique as it requires no additional parameters. **Final remarks**\ We will make sure to correct any typos in our manuscript. We hope to have adequately addressed the reviewer's concerns and questions. We welcome any additional feedback from the reviewer and eagerly await their response.
[1] Bajdich et al. "Pfaffian pairing and backflow wavefunctions for electronic structure quantum Monte Carlo methods"\ [2] Kim et al. "Neural-network quantum states for ultra-cold Fermi gases"\ [3] Pfau et al. "Natural Quantum Monte Carlo Computation of Excited States" --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I will keep my score.
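For reference, the envelope parameter counts tabulated in the rebuttal above can be reproduced with a few lines of arithmetic (a sketch with assumed variable names; the tabulated numbers correspond to $N_n=5$):

```python
# Sketch (assumed variable names) reproducing the envelope parameter counts
# discussed above, for N_e=20, N_n=5, N_d=16 and 8 envelopes per atom.
N_e, N_n, N_d, N_env_per_atom = 20, 5, 16, 8

full_sigma = N_d * N_n * N_e               # sigma tensor, full envelopes
full_pi = N_d * N_n * N_e                  # pi tensor, full envelopes
our_sigma = N_d * N_n * N_env_per_atom     # reduced sigma tensor
our_pi = N_d * N_n * N_env_per_atom * N_e  # enlarged pi tensor

counts = (full_sigma, full_pi, our_sigma, our_pi)
# counts == (1600, 1600, 640, 12800), totals 3200 vs. 13440
```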
Summary: In this paper, the authors propose using Pfaffians instead of Slater determinants to learn a generalized neural wave function so that the enforcement of permutation antisymmetry can be better addressed. The empirical study shows that a single proposed model can generalize to various systems (second-row elements) with chemical accuracy. For the nitrogen potential surface prediction, the proposed model outperforms the previous work Globe. The proposed model outperforms CCSD(T) and the previous work TAO on the TinyMol dataset, which contains hundreds of samples. Strengths: - By replacing Slater determinants, the proposed method avoids discrete and manual orbital selections, and hence is overparameterized, fully learnable, and applicable to any molecular system. - Techniques such as memory-efficient envelopes, pretraining by Hartree-Fock, and generalization are investigated to improve the efficiency and application of the proposed method. Weaknesses: I am not capable of discovering any weaknesses beyond the ones listed in the Limitation section, or the limitations of the entire neural wave function domain. Technical Quality: 4 Clarity: 4 Questions for Authors: - In Fig. 3, why don't you train the model for as many steps as FermiNet did? - There are other works on generalized wave functions mentioned in the related-work section, such as PESNet. Why aren't they compared in the experiments? - If I understand correctly, for the large molecules in Fig. 5, none of the neural wave functions outperforms the CCSD(T) baseline? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors provided a section describing the limitations of the current work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on our manuscript and would like to take the opportunity to address the few concerns that were raised. Firstly, we would like to highlight the broad range of new experimental evidence in the general comment. The following details how the experiments relate to the reviewer's concerns. **Comparison to Globe**\ To include more baselines in our empirical evaluation, we trained Globe [1] on both TinyMol datasets and plotted the convergence in Figure 1 of the general response. While Globe is initially close to our NeurPf, it converges to higher, i.e., worse energies. **Comparison to PESNet**\ We want to stress that PESNet [2] can only perform generalization across different geometric configurations, while Neural Pfaffians tackle the more complex problem of generalization across arbitrary molecular compounds. Nonetheless, we are happy to add comparisons of PESNet on the N2 energy surface. We also added FermiNet [3] from [4] to cover a broader range of methods. However, it should be noted that Globe (Ethene) and ours (Ethene) have been trained on a more challenging augmented dataset, while PESNet is only optimized on the N2 energy surface directly. For FermiNet, each structure is optimized independently. The results in Figure 8 of the general response demonstrate NeurPf's high accuracy on energy surfaces. **Comparison to CCSD(T) CBS**\ In Figure 5 of the manuscript, we replicated the setting from [5] and, thus, only trained for 32k steps. However, neural wave functions typically require between 100k and 200k steps to converge [3,6]. Therefore, we extend the training of our Neural Pfaffian to 128k steps and compute energies for the converged model. The results are displayed in Table 1 of the general response. There, our long-trained NeurPf significantly outperforms CCSD(T) CBS on 3 of the 4 large molecules while being within chemical accuracy ($\leq 1.6mE_h$) on the last one.
**Extended training on second-row elements**\ As the reviewer suggested, we trained our Neural Pfaffian for 200k steps on the second-row elements in Figure 6 of the general response. These results strongly suggest that a single Pfaffian can learn the ionization potentials with higher accuracy with additional training. **Final remarks**\ We hope to have answered the reviewer's questions and look forward to an engaging discussion. We appreciate any further feedback and questions from the reviewer. [1] Gao et al. "Generalizing Neural Wave Functions"\ [2] Gao et al. "Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions"\ [3] Pfau et al. "Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks"\ [4] Fu et al. "Variance extrapolation method for neural-network variational Monte Carlo"\ [5] Scherbela et al. "Towards a transferable fermionic neural wavefunction for molecules"\ [6] von Glehn et al. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry"
Summary: This paper proposes NeurPf, a novel approach that replaces the standard determinant structure with a Pfaffian-based structure that allows systems of varying sizes to be represented with a single neural wave function. The key idea of the new ansatz is that given a large enough skew-symmetric matrix $A$, we have $\text{Pf}(BAB^\top) = \det(B) \text{Pf}(A)$ for invertible matrix $B$, allowing anti-symmetry to be broadcast from the neural orbitals to the final output. This makes it possible to set the number of output orbitals $N_o$ to a fixed value that is independent of the system size $N_e$ (so long as $N_e \leq N_o$). Several details on implementing the NeurPf, including architecture selection, envelopes, and computation, are discussed. Further, the authors ran experiments on the second-row atoms and small molecular systems to verify the effect of the proposed method. Strengths: - I think the major contribution, i.e., using the Pfaffian to overcome the size consistency issue for varying systems, is quite novel and compelling. It is not only a direct generalization of the Slater determinant (which means that it can be directly applied to most existing architectures), but it allows systems with varying sizes to be represented with one set of parameters. - The paper is well-written and organized, making it easy to follow and understand. - The experiments that the authors consider, while not on large enough systems (as will be discussed later), indeed demonstrate the potential of the proposed method. I especially like the joint training experiments on all atoms of the second row, which serve as solid evidence that the proposed NeurPf ansatz can be used for multiple systems. Weaknesses: I think the following weaknesses are important to be addressed for the sake of a strong paper: - First, the authors emphasize the generalization ability of NeurPf, but it looks like all the experiments they conduct are a form of joint training (which I agree is still good to pursue). 
That said, the authors demonstrate the potential of training one network for multiple models but have not shown that the trained model can be generalized to unseen (but relevant) systems (probably, I admit, with some fine-tuning required). I think it will be necessary for the authors to demonstrate the capacity for generalization, since the advantage of NeurPf is exactly to represent multiple systems together. For instance, training on 7/8 of the second-row systems and generalizing to the rest, training on the ionized systems of some of the atoms and generalizing to the ionized systems of the rest, etc. - To me, NeurPf is, instead of a separate algorithm, more of an ansatz modification that can be applied to existing ansatzes, e.g., FermiNet, PsiFormer, LapNet, Moon, and Globe. Applying the method to all those architectures empowers them to be trained jointly on varying systems, as well. Unfortunately, the results of this combination are lacking, and the only comparison the authors present is against the method of Gao et al. 2023a. I think it will be important for the authors to show that 1. If we apply NeurPf to FermiNet, PsiFormer, LapNet, Gao et al. 2023a, and TAO, then the joint training performance is similar to separate training, while the efficiency is much better. 2. For Fig 5, the results of all existing methods other than TAO should also be presented. 3. Comparison between NeurPf on different architectures to demonstrate which one is / is not compatible with the proposed architecture. - Can the authors add a concrete analysis of the computational efficiency of NeurPf? Specifically, how does it compare to a fixed-size Slater determinant (if $N_e = N_o$)? In order to train all systems together, we have to use the largest size (which implicitly increases the computation cost of the smaller systems). How will this influence the overall efficiency? Some plots or tables (instead of a number in texts) are preferred. 
- The authors should apply NeurPf to metal atoms (whose structures are complex but similar, and whose ionization energies are important to compute) to see how well it goes. For instance, training on Na, Mg, Al, K and Ca. These systems are within 20 electrons (smaller than C(NH)_2), and should be feasible to train. The intuition is that since metals share similar electron organization structures to some extent, the proposed method should be able to capture the similarity and hence outperform separate training. Technical Quality: 4 Clarity: 4 Questions for Authors: This is irrelevant to my rating, but right now the orbital dimension $N_o$ must be larger than the size of the largest system $N_e$. What do you think we should do as the system size $N_e$ goes up to 100, e.g. in the PsiFormer and LapNet papers? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Irrelevant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their invaluable feedback and suggestions. We hope to address their concerns. Firstly, we would like to highlight the broad range of new experimental evidence we present in the general comment. The following details how the experiments relate to the reviewer's comments. **TinyMol baselines**\ In addition to the results from [1], we also trained Globe (+ Moon) from [4] on the TinyMol datasets and plot the convergence in Figure 1 of the general response. While starting similarly to our NeurPf, Globe does not converge to the same energies but saturates at energy errors of around $9.4mE_h$ and $47.8mE_h$ on the small and large datasets, respectively. Unfortunately, we cannot provide FermiNet or PsiFormer energies for all structures as these require $\approx$ 6k A100 GPU hours each. **Embedding**\ We agree with the reviewer's suggestion to ablate the embedding method of our NeurPfs. To demonstrate that they also perform well with different embedding methods, we train them with FermiNet [2], PsiFormer [3], and Moon [4] on both TinyMol test sets. We omitted LapNet due to its similarity with PsiFormer but will happily add it in the next version of the paper. The results are depicted in Figure 2 of the general response. NeurPfs outperform Globe and TAO independently of the choice of embedding network. Consistent with the results from [4], Moon performs best among the embeddings for generalized wave functions. **Transferability**\ We agree that transferability is an exciting aspect of generalized neural wave functions but would also like to stress that it has not been the focus of this work as it typically cannot achieve chemical accuracy. Nonetheless, we are happy to present additional experiments. Like Scherbela et al. [1], we first train our NeurPf on the TinyMol training set and then transfer it to the unseen molecules in the test sets. The results are depicted in Figure 4 of the general response.
These suggest that even after pretraining, both methods still require significant fine-tuning to lower errors, but only NeurPf can reach chemical accuracy. In addition to the TinyMol results, we would like to point the reviewer to the hydrogen chain experiment in Appendix E, where we extrapolate to larger hydrogen chains without finetuning. **Joint vs. separate training**\ We compare separately optimized wave functions to our NeurPf trained on the 30 and 40 structures, respectively. Since training separate wave functions for all 70 molecules in TinyMol exceeds our computational resources ($\approx$ 6k A100 hours), we picked one structure for each of the 7 molecules. We trained a separate NeurPf for each to estimate the convergence. The results are shown in Figure 5 of the general response. At a fixed cost, our generalized wave function generally offers higher accuracy than separately optimizing wave functions. However, we also find that the additional degrees of freedom (higher ratio of parameters/molecule) and specialized optimization offer better final accuracies for separate optimization. **Computational efficiency**\ We measure the compute time of the Pfaffian operation in Figure 8 of the general response. Our Pfaffian is five times slower than the determinant. This is primarily due to optimized CUDA kernels for the latter. Note that here, we only measure the Pfaffian and determinant, not the rest of the network. We benchmark the effect of the batch composition on the time per training step in Figure 9 of the general response for different batches composed of two molecules. There, we see a small overhead for small structures, while for large $N_e$, the time per batch converges to the geometric mean of the individual structures. When training systems of different sizes, we optimize with various techniques. We work with flattened representations for the embedding network. 
For the Pfaffian operation (or determinant), we switch to sequential processing for each molecule in a batch (but batch different conformers). This also allows us to use different $N_o$ for each molecule. We want to highlight that this maintains a high level of parallelism thanks to the batch of electronic configurations per molecule. For a comparison to AGP, $\det (\Phi_\uparrow\Phi_\downarrow^T)$, we would like to point the reviewer to our ablation study in Appendix F. AGPs are a special case of Pfaffians. There, the AGP wave function is faster and reaches 32k steps in 70h compared to our Pfaffian's 95h. However, the AGP cannot match the accuracy of our NeurPf. **Number of orbitals**\ When going to large systems, the number of orbitals must increase with the number of electrons. As described in Section 4.4 and further detailed in Appendix C.3, we accomplish this by predicting a set of orbitals per atom: >[...] we grow the number of orbitals $N_o$ with the system size by defining $N_\text{orb/nuc}$ orbitals per nucleus, as depicted in Fig. 2. Thus, a system with twice the number of atoms (assuming they are the same atoms) has twice the number of orbitals, while the generalized wave function still has the same number of parameters. The computational scaling of neural network wave functions (incl. FermiNet/PsiFormer/...) to hundreds of electrons remains an issue for future work. Still, NeurPf remains well-defined independent of the system size, thanks to the orbitals growing with system size. **Final remarks**\ Again, we thank the reviewer for their detailed assessment and suggestions for improving our manuscript. We hope to have addressed their concerns and look forward to an engaging discussion period. We appreciate any further feedback or questions from the reviewer. [1] Scherbela et al. "Towards a transferable fermionic neural wavefunction for molecules"\ [2] Pfau et al.
"Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks"\ [3] von Glehn et al. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry"\ [4] Gao et al. "Generalizing Neural Wave Functions" --- Rebuttal Comment 1.1: Comment: Thanks for the reply. Some of the concerns have been addressed, but I am not sure why the authors do not reply to the concerns related to training on metal atoms. In terms of the number of electrons, these systems have approximately 20 electrons and would not require more resources. --- Reply to Comment 1.1.1: Comment: We are happy to hear that we resolved several concerns. While we intended to accommodate the metal atom experiment in our rebuttal, we found Moon numerically unstable, frequently producing NaNs. Given the already large amounts of computation spent on the other rebuttal experiments (>2000 A100 GPU hours), we could not get results for this experiment in time. Nonetheless, we are happy to provide additional experimental evidence on this now that computing resources are available again. **Intermediate results on metals**\ As our Neural Pfaffian is independent of the embedding model, we switched to PsiFormer. We then trained on the suggested atoms and their ions. Here, we show intermediate results on the ionization potentials of our Neural Pfaffian compared to the reference energies from [1] after 27k steps (current state of training):

| | Neural Pfaffian (m$E_h$) | Reference [1] (m$E_h$) | Error (m$E_h$) | rel. Error |
|:---------|---------------------------:|---------------------:|-----------------:|:-------------|
| Na | 189.143 | 188.840 | 0.303 | 0.16% |
| Mg | 278.765 | 280.975 | -2.210 | 0.79% |
| Al | 219.486 | 219.958 | -0.472 | 0.21% |
| K | 160.663 | 159.512 | 1.151 | 0.72% |
| Ca | 220.374 | 224.643 | -4.269 | 1.90% |

The Neural Pfaffian energies are averaged over the last 20% of training steps. On 3 of the 5 atoms, the ionization energies are already within chemical accuracy.
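For reference, the last two columns of the table follow directly from the first two; a quick arithmetic check (a sketch of ours, taking chemical accuracy as 1 kcal/mol $\approx 1.6\,mE_h$):

```python
import numpy as np

# Values copied from the table above (all in mE_h).
atoms = ["Na", "Mg", "Al", "K", "Ca"]
neurpf = np.array([189.143, 278.765, 219.486, 160.663, 220.374])
reference = np.array([188.840, 280.975, 219.958, 159.512, 224.643])

error = neurpf - reference                   # signed error
rel_error = np.abs(error) / reference * 100  # relative error in percent

for atom, err, rel in zip(atoms, error, rel_error):
    print(f"{atom}: {err:+.3f} mE_h ({rel:.2f}%)")

# Chemical accuracy: 1 kcal/mol ~= 1.594 mE_h; three of the five errors are below it.
n_within = int((np.abs(error) < 1.594).sum())
```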
Note that these are intermediate results, and the model is not yet converged. If the reviewer wishes, we will update the table as the training continues. In the next iteration of the paper, we will include a figure similar to Figure 3 of the manuscript for the metallic atoms. However, we cannot include it here due to the policies regarding uploading PDFs and external links. We attribute Moon's poor performance here to its focus on size-extensivity w.r.t. the number of atoms, which doesn't play a role in atomic systems. Even worse, since Moon heavily relies on message passing between nuclei and electrons, heavier nuclei create information bottlenecks. PsiFormer has no such bottleneck thanks to its self-attention between electrons. [1] J.E. Huheey et al. "Inorganic Chemistry: Principles of Structure and Reactivity"
Rebuttal 1: Rebuttal: We thank all reviewers for their invaluable feedback and great suggestions for additional experimental evaluation. We enriched our work with several ablation studies, which we present in the attached PDF. We will add all results to the manuscript. **Fig 1: TinyMol baselines**\ In addition to the TAO results, we trained Globe [1] on the TinyMol datasets and added it to our evaluation. While Globe is initially close to our NeurPf, it converges more slowly and to significantly higher, i.e., worse, energies. **Fig 2: Embedding**\ Since NeurPf is not limited to Moon, we performed additional ablations with FermiNet [2] and PsiFormer [3] as the embedding. Our Neural Pfaffians outperform Globe and TAO with any of the three equivariant embedding models. Consistent with [1], Moon is the best choice for generalized wave functions. **Fig 3: Skew-symmetric construction**\ We picked $\text{Pf}(\Phi A \Phi^T)$ as the parametrization because it generalizes Slater determinants and many alternative parametrizations. For instance, by choosing $A=\begin{pmatrix}0 & I\\\\ -I&0\end{pmatrix}$ and $\Phi=(\Phi_1 \hspace{.5em} \Phi_2)$, one obtains the parametrization suggested by Reviewer F2V3, $\text{Pf}(\Phi A \Phi^T)=\text{Pf}(\Phi_1\Phi_2^T - \Phi_2\Phi_1^T)$. We investigate the impact of $A$ being fixed vs. learnable in Figure 3. The results suggest that a learnable $A$ is a significant factor in our Neural Pfaffian's accuracy. **Fig 4: Transferability**\ We want to stress that we focus on direct optimization in this work as it currently provides the only path toward chemical accuracy in generalized wave functions. Nonetheless, we replicate the setup of TinyMol and pretrain our NeurPf on the TinyMol training set before finetuning on the test sets. The results show that any method requires significant finetuning. However, only our Neural Pfaffians can match the reference calculations.
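The identities behind the skew-symmetric construction discussed under Fig. 3 are easy to check numerically. A minimal sketch of ours (not the paper's implementation), using the standard facts that $\text{Pf}(B)^2=\det(B)$ for skew-symmetric $B$ and $\text{Pf}(\Phi A\Phi^T)=\det(\Phi)\,\text{Pf}(A)$ for square $\Phi$:

```python
import numpy as np

def pfaffian(B: np.ndarray) -> float:
    """Pfaffian of a skew-symmetric matrix via Laplace-style expansion
    along the first row; exponential cost, fine for tiny matrices."""
    n = B.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        total += (-1) ** (j + 1) * B[0, j] * pfaffian(B[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M - M.T                    # learnable skew-symmetric pairing matrix
Phi = rng.normal(size=(4, 4))  # square orbital matrix
B = Phi @ A @ Phi.T            # also skew-symmetric

# Fixed A0 = [[0, I], [-I, 0]] recovers the Pf(Phi1 Phi2^T - Phi2 Phi1^T) form:
I2 = np.eye(2)
A0 = np.block([[np.zeros((2, 2)), I2], [-I2, np.zeros((2, 2))]])
Phi1, Phi2 = Phi[:, :2], Phi[:, 2:]
```

Here `pfaffian(B)**2` matches `np.linalg.det(B)`, `pfaffian(B)` matches `np.linalg.det(Phi) * pfaffian(A)`, and `Phi @ A0 @ Phi.T` equals `Phi1 @ Phi2.T - Phi2 @ Phi1.T`, so the fixed-$A$ case is indeed the special case named above.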
**Fig 5: Joint vs separate training**\ We compare separately optimized wave functions to our Neural Pfaffian trained on the 30 and 40 structures, respectively. We plot the total number of steps on the x-axis and the mean difference to CCSD(T) CBS on the y-axis. Since training separate wave functions for all 70 molecules in TinyMol exceeds our computational resources, we picked one structure for each of the 7 molecules. We trained a separate Neural Pfaffian (with Moon) for these 7 to estimate the errors. At a fixed cost, our generalized wave function generally offers higher accuracy than separately optimizing wave functions. However, we also find that the additional degrees of freedom (higher ratio of parameters/molecule) and specialized optimization offer better final accuracies for separate optimization. **Fig. 6: Odd numbers of electrons**\ We propose a new solution to address the reviewers' concerns regarding handling odd numbers of electrons. Starting from the classical Slater determinant where $\Phi$ is square and $\Psi=\det\Phi$: Let $\Phi\in R^{N\times N}$ be the orbitals for odd $N$ electrons and $A\in R^{N\times N},A=-A^T$. For $\hat{\Phi}=\begin{pmatrix}\Phi&0\\\\0&1\end{pmatrix},\hat{A}=\begin{pmatrix}A&1\\\\-1&0\end{pmatrix},\text{Pf}(\hat{\Phi}\hat{A}\hat{\Phi}^T)\propto\det\Phi$. In our Neural Pfaffians, we generalize this to $\Phi\in R^{N\times D},A\in R^{D\times D}, \hat{\Phi}\in R^{(N+1)\times (D+1)},\hat{A}\in R^{(D+1)\times (D+1)}$. We train our new approach on the second-row elements and show the training energies in Figure 6. As suggested by Reviewer K3cB, we increased the number of training steps to 200k to match FermiNet. The results suggest little difference between appending a learnable vector and the new dimension augmentation. Since it avoids additional learnable parameters, we use the new parametrization as the default. **Fig. 7: N2 baselines**\ We added FermiNet results from [5] and PESNet [4] as reference energies.
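The proportionality for odd electron numbers can also be checked numerically. A sketch of ours, reading the `1` block in $\hat{A}$ as a column of ones (an assumption; since $\text{Pf}(\hat\Phi\hat A\hat\Phi^T)=\det(\hat\Phi)\,\text{Pf}(\hat A)$ for square $\hat\Phi$ and $\det(\hat\Phi)=\det(\Phi)$, the proportionality constant is exactly $\text{Pf}(\hat A)$, so any vector with $\text{Pf}(\hat A)\neq 0$ works):

```python
import numpy as np

def pfaffian(B: np.ndarray) -> float:
    # Laplace-style expansion along the first row (fine for tiny matrices).
    n = B.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        total += (-1) ** (j + 1) * B[0, j] * pfaffian(B[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(1)
N = 3                          # odd number of electrons
Phi = rng.normal(size=(N, N))  # square orbital matrix
M = rng.normal(size=(N, N))
A = M - M.T                    # skew-symmetric N x N; Pf(A) = 0 for odd N
v = np.ones((N, 1))            # the appended "1" block (our reading)

Phi_hat = np.block([[Phi, np.zeros((N, 1))],
                    [np.zeros((1, N)), np.ones((1, 1))]])
A_hat = np.block([[A, v],
                  [-v.T, np.zeros((1, 1))]])

pf = pfaffian(Phi_hat @ A_hat @ Phi_hat.T)
# pf equals det(Phi) * Pf(A_hat), i.e. it is proportional to det(Phi)
```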
**Fig 8: Pfaffian runtime**\ We benchmark our implementation for $\text{Pf}(\Phi A\Phi^T)$ (incl. the matrix multiplications) against the standard operation of $\det\Phi$ for 10 to 100 electrons. We implement the Pfaffian in JAX, while highly optimized CUDA kernels are available for the determinant. In summary, both share the same complexity of $O(N^3)$, but the Pfaffian is approximately 5 times slower. **Fig 9: Runtime by batch composition**\ Here, we benchmark the total time per step for a two-molecule batch. We test all combinations of two molecules with $N_e^1,N_e^2\in\{2,4,8,16,32\}$. While we find a small runtime increase when processing small molecules jointly, for larger systems, we see the runtime per step converge to the geometric mean of the individual runtimes. **Fig 10: Convergence by time**\ For NeurPf with FermiNet, PsiFormer, and Moon, in addition to Globe (+ Moon), we show convergence over wall-clock time. For any time budget, all variants of NeurPf converge to lower energies than Globe. **Tab. 1: TinyMol energies**\ We list energy differences to CCSD(T) after training Globe, TAO, and our NeurPf for 32k steps to match the setup from [6]. However, since NN-wave functions typically require 100k-200k steps to converge [2,3], we add a NeurPf trained for 128k steps. The results show that among generalized wave functions that are optimized on each of the sets, our Neural Pfaffians achieve the lowest energies in 32k steps. Once further converged, our Neural Pfaffians also reach or surpass CCSD(T) CBS on the larger structures. [1] Gao et al. "Generalizing Neural Wave Functions"\ [2] Pfau et al. "Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks"\ [3] von Glehn et al. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry"\ [4] Gao et al. "Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions"\ [5] Fu et al.
"Variance extrapolation method for neural-network variational Monte Carlo"\ [6] Scherbela et al. "Towards a transferable fermionic neural wavefunction for molecules" Pdf: /pdf/e49b64ec1c6e68612268f857581b250493ec0abd.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Is Score Matching Suitable for Estimating Point Processes?
Accept (poster)
Summary: EDIT: I have changed my score from a "weak accept" to an "accept." The paper studies the potential use of score matching (SM) to do inference when the model of interest is a Poisson point process or a Hawkes process. Because using SM in practice requires modifying the objective with integration by parts, the paper shows that for many common Poisson and Hawkes processes a core assumption needed to make this modification possible does not hold. In order to remedy the situation, the paper proposes a randomly weighted version of SM called WSM (along with an analog for Autoregressive Score Matching (ASM) called WASM), which reestablishes the validity of integration by parts for the aforementioned processes. The paper argues theoretically that their new estimators are consistent, and it provides empirical analyses to demonstrate that the new estimator outperforms the non-weighted variant and matches the performance of alternatives. It also makes an argument to select a weighting function based on some set of assumptions. Strengths: The paper points out a fundamental issue with trying to use SM for general Poisson point processes and Hawkes processes, and in addition, it recommends a weighting scheme that patches up this problem based on some reasonable assumptions. This is a valuable observation, and the proposed remedy looks reasonable. There is ample empirical work to suggest that their tweak on SM and ASM does agree with the maximum likelihood estimator (MLE). The theory provided in the paper also argues that their weighted estimator is consistent if the model is well-specified. Originality: [+] + The authors have applied a weighting scheme to SM and ASM that permits them to use integration by parts for the standard objective and prove the consistency of their new estimator under suitable regularity conditions. Quality: [+] + The empirical work is thorough and does a good job checking the accuracy of their new estimator.
Clarity: [+] + The presentation of the theory and empirical work is very neat and polished. + Assumptions are clearly laid out to ensure consistency of the new estimator. Significance: [+] + The new estimator provides the ability to do consistent semiparametric estimation for Poisson point and Hawkes processes with SM (instead of MLE), which provides a new tool for very large dimensional problems. Weaknesses: There are a couple of critiques of the paper: - While the idea of using a weighting function to enable integration by parts is a good one, it has been seen in other fields before (e.g. weighting Stein kernels in compact settings), and hence is not completely novel. - The argument in Section 5 outlining a weighting function is a great addition to the paper, but given that it is only optimizing $C_h$ and not $\Gamma(h, A, B)$, it is hard to know how close it is to the optimum. Originality: [-] - The only novel insight of the paper is that the trick of integration by parts, so often used in score matching, is not possible for general point processes, but it can be recovered by weighting the Fisher divergence a bit differently. It is not clear if this idea alone is sufficient for publication. Quality: [-] - The argument to choose their weighting function $h$ is definitely better than simply picking one. That being said, it is still not clear how close to optimal their choice of $h$ is given they are only optimizing one term in the objective ($C_h$). The paper would be a bit stronger if they could include a different (natural) weighting function to show their new choice is an improvement. Clarity: [-] - Equation 1: Can we at least have a sentence explaining that the paper will be slightly abusing notation by referring to $p(T)$ as the conditional distribution of $p$ given $N_T = |T|$? Technical Quality: 3 Clarity: 3 Questions for Authors: [Q1] Given the choice of $h$ optimizes $C_h$ but ignores $\Gamma(h, A, B)$, how sensitive is $\Gamma$ to the choice of $h$?
This could be demonstrated empirically or theoretically, but it would be good to have some analysis of this to argue why this weighting function is decent. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper. We answer your questions below. > The argument to choose their weighting function $h$ is definitely better than simply picking one. That being said, it is still not clear how close to optimal their choice of h is given they are only optimizing one term in the objective $C_{\bf h}$. The paper would be a bit stronger if they could include a different (natural) weighting function to show their new choice is an improvement. We thank the reviewer for pointing this out. To address your concern, we carry out experiments comparing three different weight functions that satisfy the criterion (Eq. 15) for AWSM on the Hawkes process. Other than the near-optimal weight function $\bf{h}^0$ introduced in our paper, we also consider a natural weight function $\bf{h}^1$ defined as, $ h_n^1(t_n)=(t_n-t_{n-1})(T-t_{n}). $ We also consider another valid weight function: the square root of $\bf h^1$, denoted as $\bf h^2$, $ h_n^2(t_n)=\sqrt{(t_n-t_{n-1})(T-t_n)}. $ All three weight functions can be applied in AWSM to recover the ground-truth parameters, though with different convergence rates. We carry out experiments on synthetic data for the exponential-decay model with the same setting as Section 6.2 in our paper. We measure their MAE for different sample sizes in Figure 1 of the attached PDF and find that $\bf h^0$ does achieve the best results among the three weight functions. We hope these additional experimental results can enhance the credibility of our paper and address your concerns. We will definitely add these results to the camera-ready version. > Given the choice of $\bf h$ optimizes $C_{\bf h}$ but ignores $\Gamma(\bf h,A,B)$, how sensitive is $\Gamma$ to the choice of $\bf h$? This could be demonstrated empirically or theoretically, but it would be good to have some analysis of this to argue why this weighting function is decent. We thank the reviewer for pointing this out.
This is indeed a tough question to which we do not yet have a satisfying answer. For specific parametric models, $\dot A_n(\mathcal T), \dot B_n(\mathcal T)$ can be computed analytically (see lines 478-480) and then $\Gamma(\mathbf{h},A,B)$ can be computed via Monte Carlo. Then we can study how sensitive $\Gamma(\mathbf{h},A,B)$ is to $\bf h$. For general models, especially when $\psi_{\theta}$ is a deep neural network like THP or SAHP, $\Gamma(\mathbf{h},A,B)$ is intractable to compute. However, heuristically speaking, our near-optimal weight function $\bf h^0$ should be a good choice even concerning $\Gamma(\mathbf{h},A,B)$. To make $\Gamma(\mathbf{h},A,B)$ small, a natural idea is to make $|h_n(\mathcal T)|$ and $|\frac{\partial}{\partial t_n}h_n(\mathcal T)|$ small. The weight function we chose and its derivative have relatively low positive powers with respect to $t_n$, therefore making $|h_n^0(\mathcal T)|$ and $|\frac{\partial}{\partial t_n}h_n^0(\mathcal T)|$ small. For the weight function $h_n^1(\mathcal T)=(T-t_n)(t_n-t_{n-1})$, the power w.r.t. $t_n$ is two. And for $h^2_n(\mathcal T)=\sqrt{(T-t_n)(t_n-t_{n-1})}$, its derivative is $\frac{\partial}{\partial t_n}h_n^2(\mathcal T)=\frac{1}{2}\frac{T-t_n-(t_n-t_{n-1})}{\sqrt{(T-t_n)(t_n-t_{n-1})}}$; the numerator is usually a bounded quantity while the denominator may be close to zero, making the derivative large. In conclusion, $\bf h^0$ is a better choice compared with $\bf h^1$ or $\bf h^2$ concerning $\Gamma(\mathbf{h},A,B)$. > Equation 1: Can we at least have a sentence explaining that the paper will be slightly abusing notation by referring to $p(\mathcal T)$ as the conditional distribution of $p$ given $N_T=|T|$? We thank the reviewer for pointing this out. Here since $N_T$ is random, by the notation $p(\mathcal T)$, we are referring to the probability density or the likelihood of observing $N_T$ events $t_1,\ldots, t_{N_T}$ in $[0,T]$.
For Poisson process, the conditional distribution of event time $t_1,\ldots, t_{N}$ given $N_T = N$ is, $ p(t_1,\ldots, t_N|N_T = N) = \frac{\prod_{n=1}^N \lambda(t_n)}{\int_{0\leq t_1\dots\leq t_N\leq T}\prod_{n=1}^{N}\lambda(t_n)dt_1\ldots dt_N}, 0\leq t_1\ldots\leq t_{N}\leq T. $ We acknowledge that our paper may not employ the rigorous notation typically used in the fields of probability and stochastic processes. Instead, we have opted for notations that we believe are accessible to readers from both the computer science and statistics communities. We deeply apologize for any confusion this may have caused. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I appreciate that you have done some extra work to demonstrate the choice of $h^0$ is better than other obvious alternatives. I am inclined to accept this paper. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you very much for your recognition and support. We will include the additional comparison of different weight functions in the camera-ready version. Thank you once again for your constructive feedback and for increasing your rating to accept.
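To make the role of the weight concrete on the simplest case (a sketch of ours, not from the paper): for a homogeneous Poisson process the conditional score is $\psi(t_n|\mathcal F_{t_{n-1}})=-\lambda$ with zero derivative, so the unweighted implicit ASM objective reduces to $\frac{1}{2}\lambda^2$ per event and is trivially minimized by $\hat\lambda=0$. The weight $h_n=(t_n-t_{n-1})(T-t_n)$ (the $\mathbf h^1$ above), which vanishes at both ends of the bounded support, instead yields $\hat\lambda=\sum_n \frac{\partial}{\partial t_n}h_n \,/\, \sum_n h_n$, which recovers the true intensity:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, T, M = 2.0, 5.0, 3000  # intensity, window length, number of sequences

S_h = S_dh = 0.0
for _ in range(M):
    t = prev = 0.0
    while True:
        t += rng.exponential(1.0 / lam_true)  # next event of the Poisson process
        if t > T:
            break
        S_h += (t - prev) * (T - t)           # h_n: zero at t_n = t_{n-1} and t_n = T
        S_dh += (T - t) - (t - prev)          # d h_n / d t_n
        prev = t

lam_awsm = S_dh / S_h  # close to lam_true; unweighted ASM would return 0 here
```

Integration by parts over each bounded interval $(t_{n-1}, T)$ shows $\mathbb E[h_n'] = \lambda^*\mathbb E[h_n]$ per event (both boundary terms vanish because $h_n$ does), which is why this ratio estimator is consistent.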
Summary: This paper studies the use of score matching for estimation in point process models and proves the incompleteness of the original score matching estimators due to the bounded support. Weighted score matching is used to address the issue, and theoretical results are established for the consistency and optimality of the proposed estimator. Experiments on different data demonstrate the effectiveness of the proposed estimator, which yields results comparable to maximum likelihood estimation. Strengths: 1. This paper theoretically shows the limitation in the use of original score matching estimators for point processes, pointing out their incompleteness and providing a solution. 2. Theoretical analysis is given to support the claim and the convergence of the proposed method. 3. The paper is well-written and easy to follow. Weaknesses: 1. As acknowledged in the limitation section of the paper, existing approaches that adopt denoising score matching on point processes are not considered. Even with the theoretical guarantee, such baselines would be necessary to include experimentally in order to fully demonstrate the superiority of the method. 2. For Hawkes processes, the problem caused by the bounded support can be solved by applying log-normalization to transform the bounded temporal domain into an unbounded one [1]. One could achieve this by removing the right bound and transforming $(0,+\infty)$ to $(-\infty,+\infty)$. Although the real data is observed within a finite window, the effect of removing the right bound should be negligible as it only affects the last event. 3. Including more metrics on event prediction, such as the MSE of event time, would be helpful, as the practical use of point process models is for prediction. [1] Lin, H., Wu, L., Zhao, G., Liu, P., & Li, S. (2022). Exploring generative neural temporal point process. Transactions on Machine Learning Research.
Technical Quality: 3 Clarity: 3 Questions for Authors: * Did the author observe any instability during the score matching training? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Included in weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper. We answer your questions below. > Denoising score matching for point processes is not considered. We thank the reviewer for pointing this out. Indeed, denoising score matching (DSM) is not considered in our paper because we mainly focus on correcting the original score matching. However, to address your concerns, we have carried out experiments on DSM. **We can conclude that DSM is inferior to AWSM in terms of both efficiency and accuracy.** We deploy DSM on THP and SAHP. We use the DSM loss as in [1]. For observed timestamps $t_n^{(m)}$ in the $m$-th sequence, we sample $L$ noise samples $\tilde t_{n,l}^{(m)}=t_n^{(m)}+ \varepsilon_{n,l}^{(m)}, l = 1,\ldots, L,$ where $\text{Var}(\varepsilon_{n,l}^{(m)})=\sigma^2$ and get the DSM objective: $ \hat {\mathcal J}(\theta)=\frac{1}{M}\sum_{m=1}^M\sum_{n=1}^{N_m}\sum_{l=1}^{L}\frac{1}{2L}[\psi_\theta(\tilde t_{n,l}^{(m)})+\frac{\varepsilon_{n,l}^{(m)}}{\sigma^2}]^2+\alpha\hat {\mathcal J}_\text{CE}(\theta), $ where $\hat {\mathcal J}_{\text{CE}}(\theta)$ is the cross-entropy loss as in lines 191-192 of our paper. We carried out experiments on synthetic data where $T$ is known. We compared the TLL, ACC, and running time of DSM with MLE and AWSM. The comparison results are shown in Table 1 in the attached PDF. DSM performs the worst among the three estimators, and AWSM is the fastest. The reasons why DSM performs poorly are as follows: 1. In terms of accuracy, as discussed in Section 5 of [2], DSM is biased when $\sigma > 0$, while our AWSM is unbiased and produces consistent estimates. Therefore, it is not surprising to see that AWSM achieves better TLL and ACC than DSM. 2. In terms of efficiency, previous work states that DSM is faster than SM because the Hessian matrix is expensive to compute, while DSM avoids this.
However, as presented in [3], ASM (or AWSM) also avoids computing the Hessian matrix by replacing it with a one-dimensional partial derivative $\frac{\partial}{\partial t_n}\psi_{\theta}(t_n|\mathcal F_{t_{n-1}})$. Therefore, DSM is not more efficient than AWSM in terms of avoiding the computation of the Hessian matrix. On the contrary, DSM can be slower than AWSM because DSM requires $L$ noise samples for each timestamp and must compute their scores. > For Hawkes processes, the problem can be solved by applying log-normalization to transform the bounded temporal domain into an unbounded one [1]. This is a misunderstanding. **Applying log-normalization does not solve the problem; we can prove that integration by parts still produces an intractable term after this transformation. Therefore, using ASM on a log-transformed sequence still results in a wrong estimate**. If one hopes to get a consistent estimate after the log transformation, a suitable weight function is still needed. This is summarised as follows.

| | ASM | AWSM |
| ---------------------------------------------------- | ---- | ---- |
| No transformation | × | √ |
| Log Transform $x_n = \log t_n$ | × | √ |
| Log Transform on interval $y_n = \log (t_n-t_{n-1})$ | × | √ |

We illustrate this using the simplest example, both theoretically and empirically. Consider a homogeneous Poisson process with intensity $\lambda ^*$ with observations $t_1, \ldots, t_{N_T}$ over $0\leq t_1\leq \ldots \leq t_{N_T}\leq T$. Now we consider: 1. First apply the log transformation to the timestamps, then use ASM. The estimate is denoted as $\hat \lambda_{\text{ASM,Log}}$. 2. First apply the log transformation to the intervals, then use ASM on the intervals. The estimate is denoted as $\hat \lambda_{\text{ASM,Log Interval}}$. For the log transformation on timestamps, we have $x_n=\log t_n$.
Therefore the conditional pdf of $x_n$ given $\mathcal F_{x_{n-1}}$ is: $ p(x_n|\mathcal F_{x_{n-1}})=\lambda \exp\left[-\lambda \exp(x_n)\right]\exp(x_n), $ and we can compute conditional score accordingly. We plug these terms into the implicit ASM objective and get the estimate. Suppose we observed $M$ sequences of $t_1^{(m)}, \ldots, t_{N_m}^{(m)}$, after derivation we get an analytical form of $\hat \lambda_{\text{ASM,Log}}$: $ \hat \lambda_{\text{ASM,Log}} =2\frac{\sum_{m=1}^M\sum_{n=1}^{N_m}\exp(x_n)}{\sum_{m=1}^M\sum_{n=1}^{N_m}\exp(2x_n)}. $ Similarly, we consider transformation on intervals $\tau_n:= t_n-t_{n-1}, n\geq 2, \tau_1:=t_1, y_n = \log (\tau_n)$ and consider the score $\psi(y_n|\mathcal F_{y_{n-1}})$ then apply ASM. The estimate in this case is: $ \hat \lambda_{\text{ASM,Log Interval}} =2\frac{\sum_{m=1}^M\sum_{n=1}^{N_m}\exp (y_n)}{\sum_{m=1}^M\sum_{n=1}^{N_m}\exp(2y_n)}. $ Both estimates are wrong and can never recover the true parameter regardless of sample size. To compare, after transformation, suitable weight functions can be added. For log transformation on timestamps, we use weight function $h_n(x_n)=(x_n-x_{n-1})(\log T-x_n)$. For log transformation on intervals, we use $h_n(y_n) = \log [T-t_{n-1}]-y_n$. The corresponding estimates are $\hat \lambda_{\text{AWSM,Log}}$ and $\hat \lambda_{\text{AWSM,Log Interval}}$. We measure the MSE of these four estimates. Results are shown in Table 2 in the attached PDF. It is easy to see that as sample size increases, MSE of $\hat \lambda_{\text{ASM,Log}}$ and $\hat \lambda_{\text{ASM,Log Interval}}$ remains large and unchanged, showing that these two estimators are wrong regardless of sample size. > Including more metrics would be helpful. We will consider adding MSE to our revised paper. > Did the author observe any instability during training? For our weighted score matching, we did not notice instability. 
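The inconsistency of $\hat\lambda_{\text{ASM,Log}}$ can even be seen in closed form for the homogeneous case (our own calculation, as an illustration): since $\exp(x_n)=t_n$, the estimator is $2\sum_n t_n/\sum_n t_n^2$, and by Campbell's theorem $\mathbb E\sum_n t_n=\lambda T^2/2$ and $\mathbb E\sum_n t_n^2=\lambda T^3/3$, so it converges to $3/T$ no matter what the true $\lambda$ is. A quick simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, T, M = 2.0, 5.0, 4000  # true intensity, window length, number of sequences

S1 = S2 = 0.0
for _ in range(M):
    n = rng.poisson(lam_true * T)
    t = rng.uniform(0.0, T, size=n)  # homogeneous Poisson: uniform times given N_T
    S1 += t.sum()
    S2 += (t ** 2).sum()

lam_asm_log = 2 * S1 / S2  # converges to 3 / T = 0.6, not to lam_true = 2.0
```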
[1] SMURF-THP: score matching-based uncertainty quantification for transformer Hawkes process [2] A connection between score matching and denoising autoencoders [3] Autoregressive score matching --- Rebuttal Comment 1.1: Comment: Thank you sincerely for the additional experiments on denoising score matching and clarifications! For the log-normalization, could you kindly provide the detailed derivation for obtaining $\hat{\lambda}\_{\text{ASM,Log}}$ and $\hat{\lambda}\_{\text{ASM,Log Interval}}$? Additionally, is it possible to express $\hat{\lambda}\_{\text{AWSM,Log}}$ and $\hat{\lambda}\_{\text{AWSM,Log Interval}}$ in closed form? --- Rebuttal 2: Comment: We greatly thank the reviewer for your constructive feedback. > For the log-normalization, could you kindly provide the detailed derivation for obtaining $\hat \lambda_{\text{ASM,Log}}$ and $\hat \lambda _{\text{ASM,Log Interval}}$? Sure, we are more than delighted to discuss this with you. For log-normalization on timestamps, since the transformed variable $x_n$ has conditional pdf, $ p(x_n|\mathcal F_{x_{n-1}})=\lambda \exp\left[-\lambda \exp(x_n)\right]\exp(x_n). $ The conditional score and its partial derivative w.r.t. $x_n$ will be, $ \psi(x_n|\mathcal F_{x_{n-1}}) = -\lambda \exp(x_n) + 1, $ $ \frac{\partial}{\partial x_n}\psi_{\theta}(x_n|\mathcal F_{x_{n-1}})=-\lambda \exp(x_n). $ Now we perform ASM on the transformed variable $x_n$. Denote $\mathcal X = (x_1,\ldots, x_{N_T})^T$ as the vector of transformed timestamps, the implicit ASM objective (Eq. 13) for $\mathcal X$ will be, $ \mathcal J_\text{ASM}(\theta)=\mathbb E_{p(\mathcal X)}[\sum_{n=1}^{N_T}\frac{1}{2}\psi^2_\theta(x_n|\mathcal F_{x_{n-1}})+\frac{\partial}{\partial x_n}\psi_\theta(x_n|\mathcal F_{x_{n-1}})]. 
$ We plug the conditional score we just computed into the above equation and get $ \mathcal J_{\text{ASM}}(\theta) =\mathbb E_{p(\mathcal X)}[\sum_{n=1}^{N_T}\frac{1}{2}\exp(2x_n)\lambda^2-\sum_{n=1}^{N_T}2\exp(x_n)\lambda ]+C, $ where $C$ is a constant not containing $\theta$. The empirical objective based on samples $t_1^{(m)},\ldots, t_{N_m}^{(m)}, m=1,\ldots, M$ is $\hat J_\text{ASM}(\theta)=\frac{1}{M}\sum_{m=1}^M\sum_{n=1}^{N_m}[\frac{1}{2}\exp(2x_n^{(m)})\lambda^2-2\exp(x_n^{(m)})\lambda]. $ This is quadratic with respect to $\lambda$, and its minimizer $\hat \lambda_{\text{ASM,Log}}$ can be computed analytically as $ \hat \lambda_{\text{ASM,Log}} = 2\frac{\sum_{m=1}^M\sum_{n=1}^{N_m}\exp(x_n^{(m)})}{\sum_{m=1}^M\sum_{n=1}^{N_m}\exp(2x_n^{(m)})}. $ For the log transformation on time intervals, the conditional pdf of $\tau_n = t_n-t_{n-1}$ is $ p(\tau_n|\mathcal F_{\tau_{n-1}})=\lambda \exp(-\lambda \tau_n), $ so the conditional pdf of $y_n = \log \tau_n$ is $ p(y_n|\mathcal F_{y_{n-1}})=\lambda \exp\left[-\lambda \exp(y_n)\right]\exp(y_n); $ it has the same analytical expression as that of $x_n$, except that $y_n$ and $x_n$ have different supports. Therefore, following exactly the same steps as for $x_n$, we can get the analytical expression of $\hat \lambda_{\text{ASM,Log Interval}}$ in our rebuttal (with a minor modification adding the superscript $(m)$). > Additionally, is it possible to express $\hat \lambda_{\text{AWSM,Log}}$ and $\hat \lambda_{\text{AWSM, Log Interval}}$ in closed form? Yes, it is possible. We take $\hat \lambda_{\text{AWSM, Log}}$ as an example; the derivation of $\hat \lambda_{\text{AWSM, Log Interval}}$ is essentially the same. First, the implicit AWSM objective in Eq. 
17 in our paper for $\mathcal X$ is $ \mathcal J_\text{AWSM}(\theta)=\mathbb E_{p(\mathcal X)}[\sum_{n=1}^{N_T}\frac{1}{2}\psi^2_\theta(x_n|\mathcal F_{x_{n-1}})h_n(\mathcal X)+\frac{\partial}{\partial x_n}\psi_\theta(x_n|\mathcal F_{x_{n-1}})h_n(\mathcal X)+\psi_\theta(x_n|\mathcal F_{x_{n-1}})\frac{\partial}{\partial x_n}h_n(\mathcal X)]. $ Plugging $\psi_\theta(x_n|\mathcal F_{x_{n-1}})$ and $\frac{\partial}{\partial x_n}\psi_\theta(x_n|\mathcal F_{x_{n-1}})$ into the empirical version of the above equation gives $ \hat J_\text{AWSM}(\theta)=\frac{1}{M}\sum_{m=1}^M\sum_{n=1}^{N_m}[\frac{1}{2}\exp(2x_n^{(m)})h_n(\mathcal X^{(m)})\lambda^2-2\exp(x_n^{(m)})h_n(\mathcal X^{(m)})\lambda - \exp(x_n^{(m)})\frac{\partial}{\partial x_n}h_n(\mathcal X^{(m)})\lambda]. $ This is still quadratic w.r.t. $\lambda$, and its minimizer is $ \hat \lambda_{\text{AWSM, Log}}=\frac{\sum_{m=1}^M\sum_{n=1}^{N_m}[2\exp(x_n^{(m)})h_n(\mathcal X^{(m)})+\exp(x_n^{(m)})\frac{\partial}{\partial x_n}h_n(\mathcal X^{(m)})]}{\sum_{m=1}^M\sum_{n=1}^{N_m}\exp(2x_n^{(m)})h_n(\mathcal X^{(m)})}. $ We choose the weight $h_n(\mathcal X)=(x_n-x_{n-1})(\log T-x_n)$, whose derivative is $\frac{\partial}{\partial x_n}h_n(\mathcal X)=\log T - 2x_n+x_{n-1}$. Plugging these into the above equation yields a closed form for $\hat \lambda_{\text{AWSM, Log}}$. --- Rebuttal Comment 2.1: Comment: Thank you for the further derivation and clarification! For the log interval case, each $\tau_i$ should follow the same exponential distribution, so $\hat{\lambda}_{\text{ASM,Log Interval}}$ should approximate the correct $\lambda$ with a large sample size. Could you please clarify why this is considered incorrect? I might have misunderstood something. --- Rebuttal 3: Comment: We greatly thank the reviewer for further engaging in the discussion. 
> For the log interval case, each $\tau_i$ should follow the same exponential distribution, so $\hat{\lambda}_{\text{ASM,Log Interval}}$ should approximate the correct $\lambda$ with a large sample size. This is not true. An exponential distribution on $[0,\infty)$ indeed has expectation $\frac{1}{\lambda}$ and second moment $\frac{2}{\lambda^2}$, so the sample average of i.i.d. exponential draws converges to $\frac{1}{\lambda}$ as the sample grows. However, this is not the case for the time intervals of a Poisson process on $[0,T]$. Considering the (normalized) numerator of $\hat \lambda_{\text{ASM,Log Interval}}$, we **cannot** draw the conclusion that $ \lim_{M\rightarrow \infty}\mathbb E_{p}[\frac{\sum_{m=1}^M\sum_{n=1}^{N_m}\tau_n^{(m)}}{\sum_{m=1}^MN_m}]=\frac{1}{\lambda}. $ We cannot invoke the LLN or CLT here, since $\sum_{m=1}^M N_m$ is random and the $\tau_n^{(m)}$ do not necessarily have the same marginal distribution, as their supports differ. To illustrate this, we compute $\frac{\sum_{m=1}^M\sum_{n=1}^{N_m}\tau_n^{(m)}}{\sum_{m=1}^MN_m}$ for $M=10^{6}, T = 1$ under different true values of $\lambda$ and show that it is not equal to $\frac{1}{\lambda}$. | | $\lambda=1$ | $\lambda = 2$ | $\lambda = 4$ | $\lambda = 8$ | | ------------------------------------------------------------ | ----------- | ------------- | ------------- | ------------- | | $\frac{\sum_{m=1}^M\sum_{n=1}^{N_m}\tau_n^{(m)}}{\sum_{m=1}^MN_m}$ | 0.3693 | 0.2836 | 0.1886 | 0.1094 | Likewise, the (normalized) denominator is not equal, or even asymptotically equal, to $\frac{2}{\lambda^2}$. Therefore, $\hat{\lambda}_{\text{ASM,Log Interval}}$ does not approximate the correct $\lambda$ even with a large sample size. We give a brief explanation as follows. For a point process on a finite window $[0,T]$, each $\tau_n$ has support $[0, T-\sum_{i< n}\tau_i]=[0, T-t_{n-1}]$ instead of $[0,\infty)$. 
Therefore, first, the $\tau_n$ have different supports and do not follow a common distribution. Second, each $\tau_n$ has a support different from $[0, \infty)$, the support of the exponential distribution. In conclusion, when discussing a point process on a finite time window, the effect of the right boundary must be taken into account; the situation is different from sampling a fixed-length time sequence over an infinite time window. --- Rebuttal 4: Comment: Thanks so much for providing concrete answers to my queries and issues! Most of my concerns have been resolved and I have increased my score. I believe it would be valuable to include a discussion on denoising score matching, as well as some recent related work on score matching with point processes [1,2]. [1] Beyond Point Prediction: Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process. [2] Exploring generative neural temporal point process. --- Rebuttal Comment 4.1: Title: Thanks Comment: Thank you for your constructive feedback and for increasing your rating to accept.
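The table in Rebuttal 3 above is easy to reproduce with a short simulation (our own sketch, not the authors' code). Since the intervals of one sequence sum to the last event time, $\sum_n \tau_n^{(m)} = t_{N_m}^{(m)}$, and $\mathbb P(t_{N}\le s)=e^{-\lambda(T-s)}$ gives $\mathbb E[t_{N}] = T-(1-e^{-\lambda T})/\lambda$, the ratio converges to $\mathbb E[t_{N}]/\mathbb E[N]$, which approaches $1/\lambda$ only in the limit $T\to\infty$:

```python
import math
import numpy as np

def mean_interval_ratio(lam, T=1.0, M=100_000, seed=1):
    """Monte Carlo estimate of sum_m sum_n tau_n^(m) / sum_m N_m on [0, T]."""
    rng = np.random.default_rng(seed)
    total_tau, total_n = 0.0, 0
    for _ in range(M):
        n = rng.poisson(lam * T)
        if n:
            # The intervals of one sequence sum to the last event time.
            total_tau += rng.uniform(0.0, T, size=n).max()
            total_n += n
    return total_tau / total_n

for lam in (1.0, 2.0, 4.0, 8.0):
    # E[t_N]/E[N] for T = 1, versus the naive exponential mean 1/lambda.
    theory = (1.0 - (1.0 - math.exp(-lam)) / lam) / lam
    print(lam, mean_interval_ratio(lam), theory, 1.0 / lam)
```

For $T=1$ this matches the table above up to Monte Carlo error, and in every case the ratio is well below $1/\lambda$, illustrating the finite-window effect.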
Summary: The paper considers the problem of utilizing score-matching approaches for point processes. The main motivations for the paper are the Poisson and Hawkes processes. Both of these model the repeated occurrence of events over a finite time interval $[0, T]$. The Poisson process models the occurrence of an event within a fixed infinitesimal interval with a fixed homogeneous rate $\lambda$, while a Hawkes process is a self-exciting process where past events increase the likelihood of future ones. Score matching is a natural approach in settings where the computation of the normalizing constant of a parameterized model is challenging. However, to ensure tractability, one utilizes an implicit approach where the score matching objective is approximated by a tractable alternative which may be easily computed through the derivatives of the approximating model. The main contribution of this work is the identification of simple scenarios, even for the simple setting of the generalized Poisson process, where the score matching objective of prior approaches fails because the regularity condition underlying them does not apply. The paper derives an expression showcasing the bias (Proposition 3.1), which is related to the evaluation of the estimated score function (for a fixed number of occurrences) at the endpoints of the interval of interest ($t = 0$ and $t = T$). The paper then proposes an alternative where a weight function $h$ is used to augment the score function loss, which ensures that these biasing terms evaluate to $0$. Furthermore, these functions may be chosen independently of the parameters of the underlying model. In their experiments, it is shown that the use of such functions still satisfies desirable properties, such as the optima corresponding to the true values of the parameters, while also allowing for a tractable score function formulation which avoids the bias incurred by prior approaches. 
This is a natural approach which suggests an avenue towards improving the performance of such models. Overall, the problem considered in the paper and the solution they identify are interesting and likely relevant to the NeurIPS community. However, I do have concerns regarding the writing of the paper. For example, there are several assumptions on the weight function in Equation 9 but it is not clear which properties are used in the proof of Proposition 3.3. For instance, Line 400 features an explicit expression for $h_1$ and $h_N$ while the expression in Equation 9 features none of these. It is my understanding that the assumptions in Equation 9 ensure that the biasing terms equate to $0$ for these elements. Furthermore, I believe that under the assumptions in Equation 9, both terms appearing in the display between Lines 399 and 400 are $0$. Please clarify whether this is in fact the case. The same issues persist for the proofs of the results in Section 4. I would like clarification from the authors on these points before updating my review. ************************************************************************************************************************************************ I thank the authors for their response and have amended my review accordingly. Strengths: See main review Weaknesses: See main review Technical Quality: 3 Clarity: 3 Questions for Authors: See main review Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See main review Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper. We answer your questions below. > For example, there are several assumptions on the weight function in Equation 9 but it is not clear which properties are used in the proof of Proposition 3.3. For instance, Line 400 features an explicit expression for $h_1$ and $h_N$ while the expression in Equation 9 features none of these. Please clarify whether this is in fact the case. The same issues persist for the proofs of the results in Section 4. We greatly thank you for pointing this out. Your understanding is correct, and we apologize for the typos. The proofs for Propositions 3.3 and 4.3 are from a previous edition, where we specified one particular weight function instead of all weight functions that satisfy the condition. **We have proofread our work and ensured that the rest of our proofs remain correct.** For Propositions 3.3 and 4.3, the main parts remain correct. Only minor modifications are needed to utilize the assumptions stated in the main text, ensuring that the proofs hold for general weight functions. We have corrected the necessary parts as follows. From lines 398 to 404, the corrected proof is as follows: Using the first two equations in Eq. 9, we have $ \int [p(t_1,\ldots, t_N)\frac{\partial \log p_{\theta}(t_1,\ldots, t_N)}{\partial t_{n}}h_n(\mathcal T)]\big\vert_{t_{n}=t_{n+1}}d\mathcal T_{-n}=0, \forall n\in [N], $ $ \int [p(t_1,\ldots, t_N)\frac{\partial \log p_{\theta}(t_1,\ldots, t_N)}{\partial t_n}h_n(\mathcal T)]\big\vert_{t_n=t_{n-1}}d\mathcal T_{-n}=0,\forall n\in [N]. $ Therefore, the first term on the right side of the second equation of Eq. 21 vanishes, and the second term equals $-\mathbb E_{p(\mathcal T)}[\sum_{n=1}^{N_T}\frac{\partial}{\partial t_n}\psi_{\theta}(t_n)h_n(\mathcal{T})+\psi_{\theta}(t_n)\frac{\partial}{\partial t_n}h_n(\mathcal T)]$. The existence of this expectation is due to the last two equations in Eq. 9. This completes the proof. 
We can see from the proof that, in Eq. 9, the first two equations ensure that the integration-by-parts trick does not produce an intractable term, and the last two equations are simply regularity conditions that ensure all terms are well-defined. Similarly, from lines 431 to 434, the corrected proof is as follows: $ \mathbb E_{p(\mathcal T)}[\sum_{n=1}^{N_T} \psi(t_n|\mathcal F_{t_{n-1}})\psi_\theta(t_n|\mathcal F_{t_{n-1}})h_n(\mathcal T)] $ $ =\sum_{n=1}^\infty \int p(t_1,\ldots t_n)\psi(t_n|\mathcal F_{t_{n-1}})\psi_\theta(t_n|\mathcal F_{t_{n-1}})h_n(\mathcal T)d\mathcal T_{1:n} $ $ =\sum_{n=1}^\infty \int p(\mathcal T_{:n-1})p(t_n|\mathcal F_{t_{n-1}})\psi_\theta(t_n|\mathcal F_{t_{n-1}})h_n(\mathcal T)\vert_{t_n=t_{n-1}}^{t_n=T}d\mathcal T_{:{n-1}} $ $ -\sum_{n=1}^\infty \int p(t_1,\ldots, t_n)[\frac{\partial \psi_\theta(t_n|\mathcal F_{t_{n-1}})}{\partial t_n}h_n(\mathcal T) + \psi_\theta(t_n|\mathcal F_{t_{n-1}})\frac{\partial h_n(\mathcal T)}{\partial t_n}]d\mathcal T_{1:n}. $ Between the second and the third line above, we omit the steps used in the derivation of Proposition 4.1 to keep the display concise. The term in the third line is eliminated using Eq. 15. For the term in the fourth line, using Lemma B.1, we have: $ -\sum_{n=1}^\infty \int p(t_1,\ldots, t_n)\left[\frac{\partial \psi_{\theta}(t_n|\mathcal F_{t_{n-1}})}{\partial t_n}h_n(\mathcal T) + \psi_{\theta}(t_n|\mathcal F_{t_{n-1}})\frac{\partial h_n(\mathcal T)}{\partial t_n}\right]d\mathcal T_{1:n}=\\ -\mathbb E_{p(\mathcal T)}[\sum_{n=1}^{N_T}\frac{\partial \psi_{\theta}(t_n|\mathcal F_{t_{n-1}})}{\partial t_n}h_n(\mathcal T) + \psi_{\theta}(t_n|\mathcal F_{t_{n-1}})\frac{\partial h_n(\mathcal T)}{\partial t_n}]. $ The existence of the expectation is ensured by the last two equations of Eq. 15. We hope the corrected proof addresses your concerns. 
--- Rebuttal 2: Title: Thanks Comment: Thank you very much for your thoughtful review and for taking the time to carefully consider our responses. We greatly appreciate your kind words and are pleased that you find our work to be well-presented, thorough, and novel. We will certainly correct the proofs in the updated manuscript, as you suggested. Your feedback is valuable in improving the quality of our work, and we are grateful for your support. Thank you once again for your constructive feedback and for increasing your rating.
null
null
Rebuttal 1: Rebuttal: We express our sincere appreciation to all reviewers for their time, effort, and insightful feedback. We are encouraged by their recognition of the significance of our work in identifying the incompleteness of the original score matching for point processes, proposing the weighted score matching method, studying its convergence rate, proposing a near-optimal weight function, and deploying the method on deep point process models. In the following, we have provided specific and direct responses to each of the primary concerns raised by the reviewers, supplemented with additional experiments that substantiate our arguments. We aim to ensure that we address all the concerns and offer clarity and reassurance where needed. Should any additional questions arise, we invite reviewers to engage in further discussion. Once again, we express our gratitude for your time and dedication in reviewing our work. Pdf: /pdf/323b6bff2a0dc2920b1d0ca6112f190d7068ad4d.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Finding good policies in average-reward Markov Decision Processes without prior knowledge
Accept (poster)
Summary: The goal of the paper is to learn near-optimal policies in average reward MDPs without prior knowledge of complexity parameters, in contrast to extensive prior work which requires knowledge of the values of these complexity parameters. They rule out the possibility of easily removing this knowledge requirement through estimation of the optimal bias span, but they show that the diameter can be estimated, and use it to give an algorithm in the generative model setting. In the online setting, they again show a negative result for achieving a span-based complexity, but build on the ideas from the generative setting to give a diameter-based complexity bound. They also propose a stopping rule and give some preliminary analysis of its performance. Strengths: The problem of learning near-optimal policies in average reward MDPs without knowledge of complexity parameters is a fundamental and important problem, and this paper makes a number of small but important steps towards either achieving this goal or demonstrating its difficulty. I am therefore confident that this work will prove to be significant, in the sense that it will lead to future work which builds off of its attempts. I think the presentation is mostly very clear and does a good job motivating the different settings and ideas. Weaknesses: Algorithm 1 is not very novel, since it combines two ingredients from prior work: an optimal algorithm for when the optimal bias span is known, and a diameter estimation algorithm. The value of including section 6.2 (the value iteration-inspired stopping rule) is not clear to me, since the algorithm is only given a proof-of-concept analysis for the generative model setting (not the motivating online setting), and furthermore it involves seemingly foreign complexity parameters and does not improve upon the other introduced methods. 
Technical Quality: 3 Clarity: 4 Questions for Authors: Line 110: In weakly communicating MDPs there is guaranteed to be a policy with optimal average reward which is unichain; however, this policy might not satisfy any of the higher-order notions of optimality (e.g., bias optimal or Blackwell optimal). Also/therefore note that multiple policies with optimal gain may not have the same bias. Thus I think a more careful definition of the optimal bias span is needed here. Also I don't understand why the optimal policy is assumed to be unique; this is clearly not true in many situations (especially in average reward settings, there may be multiple ways to reach the same optimal recurrent class) and would greatly limit the applicability of the results. It doesn't seem like this assumption is needed for anything except potentially convenience in definitions, so I think it should be removed. This is very nitpicky, but I find it a bit weird that some bounds in Table 1 do not have a listed dependence on $\log(1/\delta)$ (e.g., [27], which apparently does actually depend on $\log(1/\delta)$). Anyways, why include the dependence on $\log(1/\delta)$ for the other results if the $\tilde{O}$ notation is meant to hide log factors? Theorem 1 (impossibility of estimating $H$) uses unknown rewards, but the previously described setting assumes known mean rewards. Is it possible to get this result to work with known rewards? (One can try replacing the unknown-reward state with a sub-MDP with unknown dynamics, but the part which is unclear to me is how adding this sub-MDP would affect the bias span.) The formatting of the display for Algorithm 1 should be fixed. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: There are no significant limitations which require addressing Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
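As background for the complexity parameters discussed throughout this review thread: the diameter is $D=\max_{s\neq s'}\min_{\pi}\mathbb E^{\pi}[\text{hitting time of } s' \text{ from } s]$, and for a fully known MDP it can be computed by value iteration on expected hitting times. The sketch below is our own illustration on a hypothetical two-state chain; it is not an algorithm from the paper under review.

```python
import numpy as np

def diameter(P, tol=1e-10, max_iter=100_000):
    """Diameter of a known communicating MDP: max over goal states of the
    minimal expected hitting time, computed by value iteration.
    P is an (S, A, S) array of transition probabilities."""
    S = P.shape[0]
    worst = 0.0
    for g in range(S):                         # fix a goal state g
        h = np.zeros(S)                        # expected hitting times to g
        for _ in range(max_iter):
            h_new = 1.0 + (P @ h).min(axis=1)  # one step + best continuation
            h_new[g] = 0.0                     # already at the goal
            if np.max(np.abs(h_new - h)) < tol:
                h = h_new
                break
            h = h_new
        worst = max(worst, h.max())
    return worst

# Two-state chain: the single action switches states w.p. 0.1 and stays w.p.
# 0.9, so the expected hitting time in either direction is 10 and D = 10.
P = np.zeros((2, 1, 2))
P[0, 0] = [0.9, 0.1]
P[1, 0] = [0.1, 0.9]
print(diameter(P))  # ≈ 10.0
```

The fixed point here is the stochastic-shortest-path equation per goal state; for a non-communicating MDP the iteration diverges, consistent with $D=\infty$.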
Rebuttal 1: Rebuttal: We are thankful to the reviewer for their very constructive feedback on our paper. While the algorithms we present are combinations of previously known methods, the main point of interest of this paper is that $H$ is not the right measure of complexity, due to the impossibility of estimating it and the fact that the sample complexity does not scale well with it in the online setting. We show that combining prior work is near optimal with regard to the more meaningful complexity measure $D$. While the complexity analysis in 6.2 is only valid in the generative model, the correctness is also proved in the online setting, and it thus provides a new way of constructing online best policy identification algorithms. The assumption of a unique optimal policy is indeed merely for convenience of definitions. We can remove this assumption by redefining $H$ to be the maximum bias span over all gain-optimal policies, as we do not try to reach bias-optimality. The notation in Table 1 is meant to hide log factors in $\log(1/\delta)$, not those in $\delta$ (as $\log(1/\delta)$ is the "right" dependence in $\delta$). We will clarify this in the paper. The bound in [27] depends on $\log(1/\delta)$ and that dependence is missing in the table: this will be fixed. The other entry that currently does not have a $\log(1/\delta)$ factor is the lower bound of [12], which indeed does not depend on $\delta$. For Theorem 1, we can change the example to obtain an MDP with known rewards, and shift the difficulty to estimating transitions. We can replace state 2 by three states, (2, 2', 2''), where 2 is connected to 1 and 3 as in Figure 1, but we remove the action with reward R. Instead we add an action with reward 1/2 that transitions to 2' with probability $R$ and to 2'' with probability $1-R$. In 2' there is only one action, which goes to 2 with probability 1 and reward 1. 
In 2'' there is also one action which goes to 2 with probability 1 and reward 0. Choosing $p$ to be $(1+\varepsilon)/2\Delta$ and $R=1/2 \pm\varepsilon$ yields the result through the same reasoning as in Theorem 1. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I am not sure if the proposed solution for redefining $H$ is satisfactory, because the paper concerns the estimation of $H$ and this redefinition seems like it may affect the difficulty of this task. Again, I think assuming a unique optimal policy is a very bad assumption, as this is basically assuming that gain optimal $\implies$ bias optimal. Overall, I will maintain my score.
Summary: This paper studies the problem of learning a good policy in average-reward MDPs with finite diameter. Given previous work on the problem assuming knowledge of the optimal bias span $H$, the authors try to remove the prior knowledge and propose diameter-dependent sample complexity bounds without any prior knowledge. In the meantime, the authors present several observations about the hardness of reaching better sample complexity bounds. Strengths: I appreciate the efforts to study the sample complexity bounds without prior knowledge. However, the current result seems incremental and insufficient for an acceptance in my opinion. Weaknesses: The major concern is due to limited contribution. Two possible directions to improve this work: (1) Is $H$-dependent sample complexity reachable without prior knowledge (even allowing worse dependence on $1/\epsilon$)? It might be hard to estimate $H$ precisely according to your observations, but that does not mean a lower bound. (2) It seems the exact $D$ factor in the online case could be removed by some efficient reward-free exploration algorithms. One could assume an approximate transition model to help conduct reward-free exploration, where the target is to solve the following problem $\min_{\pi_b}\max_{\pi}\sum_{s,a}\frac{d_{\pi,T}(s,a)}{d_{\pi_b,T}(s,a)}$ ($d_{\pi,T}(s,a)$ is the occupancy distribution following $\pi$ in $T$ steps). I think the following papers might help solve the problem efficiently. Minimax-optimal reward-agnostic exploration in reinforcement learning, Li et al., 2024; Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies, Zhang et al., 2022 Technical Quality: 3 Clarity: 2 Questions for Authors: Please find my questions in the comments above. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful feedback on our paper. Our main contribution in this paper is showing that H is not the right complexity measure for best policy identification in average reward MDPs. Once that point is made, we then turn to other measures and provide an algorithm that shows that a bound depending on D instead can be attained without prior knowledge. It is true that we have not shown that there cannot be any algorithms scaling in $H$ in the generative model. However, we have seen that there are none in the current literature. Since the original lower bound of [22] scales in $D$, $H$ is impossible to estimate, and in the online setting an algorithm can reach $D$ but not $H$, we postulate that bounds scaling in $H$ are unattainable. In any case, what is certain is that the current state-of-the-art algorithms cannot actually scale in $H$, making our algorithm scaling in $D$ meaningful. We thank the reviewer for the references to reward-free exploration and will investigate how they could be used for best policy identification. --- Rebuttal Comment 1.1: Comment: Thanks for the response. Currently I would like to keep the score. I will adjust the score after discussion with the AC and other reviewers.
Summary: This paper addresses average-reward Markov Decision Processes (MDPs). In the context of the generative model, existing literature presents an $\epsilon$-optimal policy with a sample complexity of $O(SAD/\epsilon^2)$. However, this approach requires prior knowledge of an upper bound on the optimal bias span $H$. This paper initially demonstrates that accurately estimating $H$ can have arbitrarily large complexity. Subsequently, it introduces the Diameter Free Exploration (DFE) algorithm for communicating MDPs, which operates without any prior knowledge about the MDP and achieves near-optimal sample complexity for small $\epsilon$. In the online setting, the authors establish a lower bound suggesting that achieving a sample complexity polynomial in $H$ is infeasible. Furthermore, they propose an online algorithm that attains a sample complexity of $O(SAD^2/\epsilon^2)$. They also propose a data-dependent stopping rule that they believe could reduce sample complexity in the online setting. Strengths: This paper presents a complete procedure, DFE, that achieves near-optimal sample complexity for small $\epsilon$ without any prior knowledge of $H$ for average-reward MDP. This approach fills a gap in the existing literature. The finding that accurately estimating $H$ can have arbitrarily large complexity is new and interesting. This paper also presents a new finding that achieving a sample complexity polynomial in $H$ in the online setting is infeasible, setting theoretical boundaries for future research. The paper is technically sound, demonstrating rigor in the development and justification of its claims. The presentation quality of this paper is good. The authors did a great literature review on average-reward MDPs. Weaknesses: The main concern with this paper is the limited novelty and contribution of the proposed solutions. The paper's claim regarding the necessity of knowing $H$ in the algorithm from [27] is somewhat misleading. 
The referenced paper explicitly mentions that only an upper bound for $H$ is required, not precise knowledge of $H$. The technique of using an upper bound of $D$ to estimate $H$, as presented in this work, is not original. Reference [25] previously introduced this idea, diminishing the perceived innovation of the current paper’s methodology. Additionally, Algorithm 2, designed to estimate $D$, closely resembles Algorithm 4 in [21], suggesting a lack of substantial differentiation in their algorithm design. The main algorithm, Diameter Free Exploration (DFE), seems to be a straightforward combination of algorithms in [21] and [27]. The primary theoretical contribution, Theorem 2, appears to be direct. The theorem’s proof does not seem to require substantial intellectual effort, suggesting that similar results could be easily derived by others familiar with the cited works. In the online setting, the situation is similar. The online-DFE primarily integrates existing algorithms with minimal modification, and the theoretical insights it offers do not extend far beyond established results. This recombination of known techniques without substantial new insights significantly diminishes the paper's novelty and impact. Theorem 3 in this paper is also very similar to Theorem 9 in [5]. While the authors propose a data-dependent stopping rule that they believe could reduce sample complexity in the online setting, they defer its exploration to future work. While postponing this analysis is understandable, it cannot be recognized as a substantial contribution within the current paper. Technical Quality: 4 Clarity: 3 Questions for Authors: Regarding the development of DFE and online-DFE, and Theorems 2 and 4: Is it accurate to say that anyone familiar with [20], [21], and [27] could readily replicate these results? Specifically, were there unique challenges or complexities that are not immediately apparent but critical to the contributions of this paper? 
Could the authors clarify which algorithms and theorems they consider to be the main technical contributions of this paper? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback on our paper. Our main point in this paper is that H is not the right complexity measure for best policy identification in average reward MDPs. Once that point is made, we then turn to other measures and provide an algorithm that shows that a bound depending on D instead can be attained without prior knowledge. In [27] and other previous papers, the lower bound considered uses H as the main complexity measure, and the algorithm of [27] is said to have a sample complexity bound that also depends on H, provided H is given to the algorithm. As the reviewer points out, the algorithm also can take an upper bound of H and have a sample complexity that depends on the upper bound (and we use this in our paper). Nonetheless, the focus in that previous paper and others is on H: H is presented as the right complexity measure, and the need to know an upper bound on it in the algorithm is minimized (H is assumed known "for simplicity"). We argue that having an upper bound on H that actually reflects H (and not something else like D) is not feasible, and that the focus on H in the bounds in the literature is perhaps misplaced. In the generative setting, we prove that we can't first estimate H and then use it in an algorithm that takes it as a parameter (Theorem 1). In the online setting, we prove a lower bound that shows that no algorithm can have an upper bound polynomial in H and not D (Theorem 3). Then we demonstrate that a weaker complexity measure, D, is attainable. We agree that our algorithms are a combination of previous ones. The novelty in our work is the idea that since H seems to not be attainable, obtaining algorithms that depend on D is meaningful. Given that it's achievable by a combination of previous algorithmic elements, we did not reinvent a new way of doing it (the combination of those methods to obtain an algorithm scaling with D had however never been described). 
--- Rebuttal Comment 1.1: Comment: The review of reviewer NtfV has the following structure: Acknowledging that the paper makes interesting, meaningful and original contributions in a well written paper. Next, the reviewer complains that the paper builds heavily on existing work, and that the results are achieved with what they think is essentially too little effort, resulting in a verdict to reject the paper (rating of 4). As a fellow reviewer, I find this unreasonable. I think we should cherish meaningful, interesting, original findings, even if the results are obtained using tools and techniques that are well established. There is much to like about a paper besides whether it introduces entirely new ideas. And I think the lower bounds in this paper do have some interesting new twists to them, which is overlooked by the reviewer. I hope that reviewer NtfV will change their harsh rating in light of this: our field does not need to be adversarial. We all build on previous results in smaller or bigger ways.
Summary: The problem of identifying near optimal policies with high probability, either in the generative or the online setting, is considered when the state-action space is finite, and the criterion for comparing policies is how much reward they collect on average in the long run ("PAC setting"). Algorithms are compared based on their sample complexity: the number of interactions they need before they return a near-optimal policy. The main question is whether knowledge of the span of the optimal value function allows for a reduced sample complexity. Some partial answers are obtained: For the generative setting, it is shown that estimating the span itself is not tractable. Next, the more moderate goal of designing an algorithm that adapts to the diameter is achieved. For the online setting, it is shown that with or without the knowledge of H, the problem is intractable. Finally, sound algorithms are designed that control sample complexity in terms of the (possibly unknown) diameter. The paper also explains the difficulty of reducing the PAC problem to cumulative regret minimization. Strengths: Novel results, new ideas, especially with the lower bounds. Weaknesses: It was known that one can estimate the diameter; hence algorithms that adapt to the diameter are not that surprising. The PAC setting is a bit artificial. Results for the PAC setting are more interesting if they mimic the results of the other settings (fixed, known or unknown budget, simple regret, or cumulative regret); and the results in this paper just underline how unnatural the PAC setting is (the algorithm needs to know how well it does; this is nice to have, but less essential than doing well). Since there is no box to present my summary opinion, I note here that the above does not mean that there is no reason to study the PAC setting (i.e., I consider the above a really minor point). 
In fact, I find the question of whether there is a real difference between the PAC and the other settings interesting and important. Overall, I think the paper makes important and interesting contributions. Technical Quality: 4 Clarity: 4 Questions for Authors: n.a. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: n.a. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback on our paper. It is true that we do not relate the PAC setting to most other settings, but we do point out that the results in the cumulative regret setting are inapplicable here, which is quite different from the finite horizon or discounted models. An interesting new preprint, Achieving Tractable Minimax Optimal Regret in Average Reward MDPs by Boone et al., 2024, even provides a regret-minimizing algorithm that scales with $H$ without requiring prior knowledge of it, something we argue does not seem attainable in best policy identification. Looking at what happens when attempting to minimize, for example, simple regret in the average-reward setting would be interesting future work. --- Rebuttal Comment 1.1: Title: Rebuttal read Comment: I confirm I read the rebuttal and the other reviews. I still maintain this is a fine paper investigating a delicate issue in a thoughtful manner.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Theoretical Analysis of Weak-to-Strong Generalization
Accept (poster)
Summary: The paper provides a theoretical analysis of weak-to-strong generalization, a phenomenon where strong models can learn from weak supervisors and outperform them [[Burns et al 2023](https://arxiv.org/abs/2312.09390)]. The authors make precise assumptions about the nature of the strong student model family and the weak supervisor mistakes. Specifically, (1) near any datapoint (where neighborhoods are defined according to the strong student model class) that is incorrectly labeled by the weak supervisor, there should be many correctly labeled datapoints and (2) datapoints that are unlabeled (i.e. in the test set) also have a sufficient number of correctly labeled datapoints in the neighborhood. The authors make these ideas mathematically precise, and derive generalization bounds for empirical risk minimization for the strong student model. Importantly these bounds describe two phenomena: (1) pseudolabel correction, i.e. correcting for the mistakes of the weak supervisor and (2) coverage expansion, i.e. generalization to unlabeled datapoints. Finally, the authors provide a way of statistically testing their assumptions empirically, and show that they are applicable in one setting: sentiment analysis with a bag-of-words supervisor. Strengths: S1: The theoretical approach used by the authors makes sense intuitively. The authors often provide an informal intuition for the definitions and results that they derive. S2: The derived generalization bounds are to the best of my understanding novel and non-trivial. The authors provide detailed comparisons to existing generalization bounds for weak supervision. S3: The authors provide a rigorous statistical method, as well as a heuristic for testing their assumptions in practice. S4: The authors evaluate their assumptions and bounds in a simple empirical setting. 
Weaknesses: W1: While the core of the paper makes sense intuitively, the paper becomes more mathematically dense and hard to follow towards the end. W2: It is not obvious to me that the paper provides a novel empirical insight. As the authors admit in the limitations section, they do not provide a new training method with improved weak-to-strong generalization or generally make practical recommendations. However, the paper makes a valuable contribution in formally describing conditions under which we can provably get weak-to-strong generalization. W3: The empirical evaluation is limited, and only covers one simple case. It would be very interesting to apply the bound to several settings with varying weak-to-strong generalization. For example, can the bound predict the result in [[Burns et al 2023](https://arxiv.org/abs/2312.09390)] that weak-to-strong generalization doesn't work well on the reward modeling task? Does the bound provide an intuition for why that would be the case? Technical Quality: 3 Clarity: 3 Questions for Authors: Q1. See questions in W3. Q2. Which of the assumptions specifically captures the intuitive notion that the strong student should not be able to fit the mistakes of the weak model too well? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > __For example, can the bound predict the result in [Burns et al 2023] that weak-to-strong generalization doesn't work well on the reward modeling task?__ Great question! Our analysis is limited to classification for now, since we essentially assume that the class-conditional sets $\mathcal{X}_i$ have some neighborhood structure that is respected by the hypothesis class of the strong model. This type of assumption makes the most sense when $\mathcal{X}_i$ has some consistent semantic meaning. In the reward modeling case, the labels (chosen/rejected) are not semantically meaningful in the same way, so different structural assumptions are likely required. While our bounds don't explain the lack of weak-to-strong generalization in reward modeling, we are encouraged that they seem to plausibly capture these effects in classification problems. We will add reward modeling / ranking losses as an interesting direction for future work! > __Which of the assumptions specifically captures the intuitive notion that the strong student should not be able to fit the mistakes of the weak model too well?__ > > [Related question from Reviewer 3HaP] __What happens in the case of self-training when the function class of strong and weak models are the same?__ If the strong model class can exactly fit the weak model, the expansion assumption is not satisfied, as we now show. Suppose we have a weak model $\tilde{y} \in \mathcal{F}$ and aim to fit a strong model $f \in \mathcal{F}$. In this case we can have $f = \tilde{y}$ (i.e., the strong model exactly fits the weak model), so suppose that is the case. The class of sets we need to expand for pseudolabel correction is $\mathcal{M}' = \\{R(g) \cap S_i^{good} \setminus \text{mistakes}(g) : g \in \mathcal{F}\\}.$ Since $f = \tilde{y}$, $S_i^{good} \setminus \text{mistakes}(f) = S_i^{good}$. Now consider a pair $(x,x')$ with $x\in S_i^{good}$, $x' \in \mathcal{N}(x)\cap S_i^{bad}$. 
Since $x$ and $x'$ are both in $S_i$, $y(x) = y(x')$. Then we have $f(x) = \tilde{y}(x) = y(x) = y(x') \ne \tilde{y}(x') = f(x')$, so $f(x) \ne f(x')$. Thus $x$ is not in $R(f)$. Since $x$ was an arbitrary point in $S_i^{good}$, this shows $R(f) \cap S_i^{good} = \emptyset$, so $\mathcal{M}'$ contains $\emptyset$. In this case, expansion requires: $0 = P(\mathcal{N}(\emptyset) | S_i^{bad}) > c P(\emptyset | S_i^{good}) = 0,$ which is not possible. Working out this example hopefully gives more intuition for the expansion assumption, and we will include it as an example in the paper. Thanks for the question!
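The set-membership argument above can also be checked mechanically on a tiny discrete example. The sketch below is our own illustration (the toy points, weak labels, and neighborhood structure are made up, not the paper's construction): when the strong model fits the weak labels exactly, no correctly labeled point survives in the robust set.

```python
# Toy check of the argument above: when the strong model can exactly fit the
# weak labels (f = y_tilde), the robust set R(f) excludes every correctly
# labeled point that neighbors a mistake, so R(f) ∩ S_i^good is empty and
# (c, q)-expansion cannot hold. All names here are illustrative.

# Points 0..3 all belong to one class S_i; the weak labeler flips point 3.
points = [0, 1, 2, 3]
y_true = {p: 1 for p in points}
y_weak = {0: 1, 1: 1, 2: 1, 3: 0}   # point 3 is a weak-label mistake

# Toy neighborhood: every other point in S_i is a neighbor.
neighbors = {p: [q for q in points if q != p] for p in points}

f = y_weak                           # strong model fits the weak labels exactly

# R(f): points on which f is constant over the whole neighborhood.
R = {p for p in points if all(f[q] == f[p] for q in neighbors[p])}

S_good = {p for p in points if y_weak[p] == y_true[p]}
S_bad = {p for p in points if y_weak[p] != y_true[p]}

print(R & S_good)   # prints set(): no correctly labeled point is robust under f
```

Every good point neighbors the mistake at point 3, so none of them lands in $R(f)$, matching the conclusion $R(f) \cap S_i^{good} = \emptyset$ above.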
Summary: The paper provides a theoretical explanation for weak-to-strong generalization. A weak model produces pseudolabels that can have errors and may not cover the entire input space. The paper argues that current weak supervision theory fails to explain how and when the strong model can correct the pseudolabels (pseudolabel correction) and expand the coverage of pseudolabels (coverage expansion). The authors derive a new bound based on the expansion properties of the data distribution and the hypothesis class of the strong model. Their bound suggests that generalization occurs when the strong model is unable to fit the mistakes of the weak model without incurring additional errors. They provide experiments to show the expansion properties are verifiable in practice. Strengths: 1. The paper shows the gaps in the existing theory of learning from weak labels (potentially erroneous labels available for part of the input space). In programmatic weak supervision (PWS) the focus has been on aggregating several weak labeling sources and showing that learning with aggregated labels is as good as learning from clean labels when the weak labeling sources satisfy certain conditions. The paper shows shortcomings of the previous results on PWS – failing to explain pseudolabel correction and coverage expansion. 2. It provides upper bounds on the expected error of the learned strong model on the part of the space that is covered by the pseudolabels and the part that is not covered. The first bound suggests the error decreases in the covered part, implying pseudolabel correction. A non-trivial error bound is provided on the error of the strong model in the uncovered set by utilizing the expansion property, i.e., many points in the uncovered set have many correctly pseudolabeled neighbors. 3. The authors also provide a theoretical result suggesting that the expansion property is statistically checkable with finite samples but is computationally hard. 
They provide a heuristic approximation to verify the property in practice. Weaknesses: 1. It is not clear from the bounds how the size of covered sets (pseudolabel coverage) affects the results and why the results are in terms of subsets of the covered set. Is it possible to provide a final result conditioned on the entire covered set? 2. The improvements in terms of coverage and error correction are not immediately clear. For instance, the following paper [1] explains the improvement in coverage with self-training in a specific setting. It would be useful to have a discussion and instantiation in some specific settings like in [1] to understand the results better. [1] https://arxiv.org/abs/2006.11006 Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How do the bounds depend on the size of the covered points and by how much does the coverage improve? 2. Where does the complexity of the strong model's function class come into play, and how do the bounds depend on it? What happens in the case of self-training when the function class of strong and weak models are the same? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > __Is it possible to provide a final result conditioned on the entire covered set?__ Yes, we just left it out for space and because getting a combined bound with a simple functional form requires more definitions (max/min of the weak label errors $\alpha_i$, the minimum expansion parameter across all the $S_i$'s, etc). Given these definitions, the combined bound essentially follows from averaging the bounds on each $S_i$. Similarly, we can give a combined coverage expansion bound conditioned on the entire uncovered set $T$ instead of the individual covered sets $T_i$. These two combined bounds can also be averaged together to get a final combined bound on the unconditioned error $err(f,y)$. We will include combined bounds in the Appendix. > __It is not clear from the bounds how the size of covered sets (pseudolabel coverage) affects the results__ Thanks, this is a great point. We had a discussion on the role of coverage and had to cut it for space since it affects our bounds in a somewhat subtle way. The most direct way in which coverage enters the bounds would be in a combined error bound that uses $err(f,y) = err(f,y|S) P(coverage) + err(f,y | T) (1-P(coverage))$ and then applies the combined-source and combined-target bounds discussed in our reply to your previous question to $err(f,y|S)$ and $err(f,y|T)$, respectively. However, this doesn’t capture the full story. The “coverage rate” $P(S)$ also enters the picture implicitly in the $S$–$T$ expansion parameter. For a fixed neighborhood $\mathcal{N}$ (e.g., all points with similar embedding), it will be qualitatively easier to have expansion from $T_i$ to $S_i$ when $S_i$ is larger, since (informally) for each uncovered point in $T_i$, there are more possible covered neighbors in $S_i$. But increasing the coverage by including more points in $S_i$ might also affect the weak label accuracy parameters $\alpha_i$, which also affect the bounds. 
In practice, there is not a clear tradeoff between coverage and performance, as explored recently in, e.g., [37], so it qualitatively makes sense that our bounds do not prescribe a functional dependence of the error on the amount of coverage and instead allow that dependence to enter through data-dependent parameters (expansion $c$ and weak error rate conditioned on coverage $\alpha_i$). We hope this at least partially answers your question and we will include a more detailed discussion of the role of coverage in the final draft. > __[How do the bounds depend on] the complexity of the strong model's function class?__ Great question! Following related work on expansion-based bounds (e.g., [23]), our error bounds are expressed as relationships between population quantities (e.g., expansion, population error on the weak/ground-truth labels). There is no statistical estimation aspect, which is where the complexity of the strong model's function class directly enters the picture. We focus on population quantities because relating the population error on the weak and ground-truth labels is the key problem-specific component. Once these quantities are related, the sample complexity aspect is more standard and can be dealt with by applying existing generalization bounds. We discuss this topic in more detail in Appendix B.5. The strong model hypothesis class can also enter the bounds indirectly via the expansion parameter, since the amount of expansion depends on that class. A richer class for the strong model may decrease the amount of expansion. For example, if the strong model class is rich enough to exactly fit the weak labels, there is zero expansion, as worked out in our response to Reviewer cokD. At the same time, the error of the strong model on the weak labels appears as a term in the bounds ($err(f,\tilde{y} | S_i)$), and a richer class might decrease this term. 
So these two terms capture a potential tradeoff: a stronger hypothesis class may decrease the expansion, but it may also decrease the error on the weak labels. Whether this makes the bounds tighter or looser depends on the sizes of these decreases. As with the coverage $P(S)$, our bounds do not prescribe a functional form for this dependence and instead allow it to enter via data-dependent parameters. This should make the bounds flexible enough to capture seemingly conflicting empirical results, where sometimes a stronger hypothesis class works better and sometimes a weaker one works better, as seen for example in the WRENCH weak supervision benchmarks [73]. We will highlight this implicit dependence in Section 4. An interesting direction for future work is to see whether these tradeoffs can reveal how to pick a strong model hypothesis class for a given weak labeler, or pick a weak labeler for a given strong model hypothesis class. Thanks for the insightful question. > __It would be useful to have a discussion and instantiation in some specific settings like in [1] to understand the results better.__ We wanted to include an instantiation of the results for special cases in the main text, but were limited by space. Appendix C.1 has a worked example under the distributional assumptions of co-training (where each data point consists of two conditionally independent views). We can mention this as a simple example where the expansion assumptions are satisfied with good values for $c$. There are other distributional assumptions that lead to good expansion, such as the Gaussian Mixture Model style distributions studied in Oymak and Cihad Gulcu; Wei et al. [69, Example 3.4] showed that GMMs satisfy their expansion assumption, which is very related to ours. We will extend our existing discussion of Oymak and Cihad Gulcu in Appendix A to comment on its relationship to our assumptions. 
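For concreteness, the combined bound discussed in this thread just averages the conditional error bounds by the coverage rate, $err(f,y) = err(f,y|S)\,P(S) + err(f,y|T)\,(1-P(S))$. A minimal numeric sketch of that decomposition (all values below are made up for illustration, not taken from the paper):

```python
# Illustrative combination of covered/uncovered error bounds into an
# unconditional error via the coverage rate P(S). Numbers are invented.
p_cov = 0.8           # P(S): probability a point is covered by pseudolabels
err_covered = 0.05    # bound on err(f, y | S), the covered (source) error
err_uncovered = 0.20  # bound on err(f, y | T), the uncovered (target) error

# Law of total probability over covered / uncovered parts of the space.
err_total = err_covered * p_cov + err_uncovered * (1.0 - p_cov)
print(round(err_total, 4))  # 0.08
```

This also makes visible the direct way coverage enters the bound: pushing `p_cov` up shifts weight from the (typically larger) uncovered term to the covered one, while the indirect effects via expansion and the weak-label accuracies $\alpha_i$ enter through the conditional bounds themselves.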
> __What happens in the case of self-training when the function class of strong and weak models are the same?__ Please see our response to Reviewer cokD! We will include this in the paper as a worked example of a case where expansion does not hold. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have no further questions and I'll keep my current scores. --- Reply to Comment 1.1.1: Comment: Thanks for reading our reply! We tried to address all of your questions and concerns (combined bounds, the role of coverage, the role of the strong hypothesis class, results for specific settings, what happens to the bounds when the strong model can exactly fit the weak labels). Please let us know if there is anything else we can do to improve your view of the paper!
Summary: This paper proposes a theoretical framework to interpret the weak-to-strong generalization phenomenon. It shows that strong student models trained on noisy labels from weak teacher models can outperform the weak teacher models, correcting their errors and generalizing well to examples where the weak teacher models are not confident or abstain. The paper demonstrates that existing bounds for learning from noisy labels do not explain this phenomenon and derives new bounds based on assumptions about the strong model’s robustness in its neighborhood and its expansion property. An empirical study validates the generalization bound in a practical setting. Strengths: - Weak-to-strong generalization has gained renewed attention despite being well-known in weak supervision. However, there has been no theoretical analysis of why a model trained with noisy labels can be more accurate than the labels themselves. This paper’s contribution is crucial as it provides a theoretical framework for this problem. - The theory explains under what conditions weak-to-strong generalization can occur, which has practical implications. - The paper is technically solid, with reasonable assumptions for derivation. Sections 4.2 and 5 add to its practicality. - The writing is clear and the main ideas are easy to follow. Weaknesses: - The experiment is limited to only one setting, though it is already mentioned in the limitations section. This is a minor concern since the paper is primarily theoretical. Technical Quality: 3 Clarity: 3 Questions for Authors: - Adding comparisons with recent concurrent works ([1, 2, 3]) would be beneficial. I am curious about the authors' views on these papers and whether there are any conflicting points in theory or if the conclusions are well-aligned. - Including simpler or synthetic experiments could enhance the paper. The (c, q) expansion property in a practical setting is not intuitively straightforward. [1] Somerstep, Seamus, et al. 
"A statistical framework for weak-to-strong generalization." *arXiv preprint arXiv:2405.16236* (2024). [2] Charikar, Moses, Chirag Pabbaraju, and Kirankumar Shiragur. "Quantifying the Gain in Weak-to-Strong Generalization." *arXiv preprint arXiv:2405.15116* (2024). [3] Zhang, Edwin, et al. "Transcendence: Generative Models Can Outperform The Experts That Train Them." *arXiv preprint arXiv:2406.11741* (2024). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Authors adequately addressed the limitations in Conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > __Adding comparisons with recent concurrent works ([1, 2, 3]) would be beneficial.__ Thanks for pointing these out! First, we just want to note that all 3 of these works appeared on arXiv after the NeurIPS submission deadline, so we were not aware of them at the time of submission. Broadly, [1] and [2] are similar to our paper in that they make some structural assumptions under which weak-to-strong generalization can provably occur. [1] uses a framing of weak supervision as transfer learning to give pseudolabel correction results when the data is distributed according to a Gaussian Mixture Model, whereas we do not make specific distributional assumptions and also focus on coverage expansion in addition to pseudolabel correction. As discussed in our response to Reviewer 3HaP, Gaussian Mixture Models can be shown to satisfy notions of expansion. [2] is the most similar to our paper, but focuses on pseudolabel correction for regression problems, whereas our work tries to explain both coverage expansion and pseudolabel correction for classification problems—both phenomena are important in weak supervision settings that we are interested in. Additionally, the underlying assumptions and conceptual ideas are very different. In our work, we use structural notions of expansion to explain these two phenomena, and our bounds directly generalize existing results from other literature (co-training, self-training), and extend this theory to more general settings. In [2] they assume convexity of the hypothesis space for fine-tuning, and show that projections onto this space can explain pseudolabel correction effects. We feel these two explanations are very different but potentially complementary, and it would be very interesting future work to develop a common framework that incorporates both. 
[3] considers a qualitatively very different setting, showing that when the training data consists of a mixture of data generated by different experts, a strong student model can be better than the best single expert that generated the data. Their results are more related to classical work on crowdsourcing, where by observing generations from many experts the learner can outperform the best single expert. Our paper and [1] and [2] consider the case where there is a single weak teacher model (which itself may be an ensemble of many models, but this is not exposed to the learner). > __Including simpler or synthetic experiments could enhance the paper. The (c, q) expansion property in a practical setting is not intuitively straightforward.__ Thanks for the suggestion! Per our responses to the other reviewers, we will include a worked example that shows (c,q)-expansion does not hold when the strong model can exactly fit the weak model’s errors. Subject to space limitations, we can also include a version of the content in Appendix C.1, which gives a very straightforward setting (“conditionally independent views”) where the expansion assumptions hold. These examples should give more intuition for how the assumption works. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I was aware that [1, 2, 3] were published after the NeurIPS deadline, but I was curious to hear the authors' perspective, as these works seem to offer somewhat different interpretations of the same phenomenon. I thoroughly enjoyed reading this paper, and I will maintain my initial score, as I had already given it a high rating. --- Reply to Comment 1.1.1: Comment: Thanks for reading our reply and for the encouraging feedback!
null
null
Rebuttal 1: Rebuttal: Overall comments: -- Thanks to all the reviewers for their time, effort, and helpful feedback. We are encouraged that all the reviewers found our work technically sound, novel, and potentially impactful. We have replied to individual points below. We hope the reviewers will consider raising their scores if our replies address their questions and concerns!
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Online Feature Updates Improve Online (Generalized) Label Shift Adaptation
Accept (poster)
Summary: The authors of this paper focus on the task of label distribution shift in the online setting without true labels. Following the importance of improving feature extractors even at test time found in the current literature, they propose to update the feature extractor online with unlabeled instances during testing for online label shift and then refine the last classification layer with labeled instances. The proposed method is simple and intuitive, and extensive experiments over several label shift cases and benchmark datasets have shown its effectiveness. Strengths: 1. The online label shift task is an interesting problem and worth paying more attention to, and the proposed method in this paper is simple and intuitive. 2. In addition, this paper partially proves the regret convergence of the proposed method, which provides the necessary justification. 3. The experimental section and the supplementary material present experimental results that demonstrate the effectiveness of the method proposed in the article. Weaknesses: 1. The motivation of this paper is not clear. The authors are just motivated by "the potential for improving feature extractors", and hypothesize "that a similar effect can be harnessed in the context of (generalized) label shift". This may be too intuitive. Why is improving feature extractors helpful for only label shift? What can improving feature extractors gain? Are there some theoretical or empirical results to prove that? The theory of regret convergence may not be enough. 2. The novelty of the proposed method may be limited. The importance of improving feature extractors at test time under distribution shifts has already been established in the current literature, such as [1]. What is the difference between the proposed method and them? What are your main contributions in this paper? 3. What are the limitations and broader impacts? [1] Y. Sun, X. Wang, Z. Liu, J. Miller, A. Efros, and M. Hardt. 
Test-time training with self-supervision for generalization under distribution shifts. In International conference on machine learning, pages 9229–9248. PMLR, 2020. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What is the difference between (online) label distribution shift and domain adaption? Can the proposed method adapt to the domain adaption task? 2. What will happen if there are some instances related to new unseen categories during testing? And how to handle this case? Can improving the feature extractor be helpful? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See the above “*Weaknesses*” and “*Questions*” parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing insightful comments! Here are our responses. **Motivation and its justification**. We would like to answer the questions in this point separately. - "Why is improving feature extractors helpful for only label shift?" In fact, updating the feature extractor can help with other types of shift too. As discussed at lines 359-361, the existing algorithm for online *covariate* shift, which has a strong theoretical guarantee, also does not take feature extractor updates into account. We believe that by adopting online feature updates, the performance of that algorithm could be improved as well, and our work demonstrates this possibility as a first work to introduce feature extractor updates into those theoretical algorithms. - "What can improving feature extractors gain? Are there some theoretical or empirical results to prove that?" Equation (7) describes **the gain from the feature extractor improvement rather than the label shift adaptation**, because it is "almost" the best loss at time $t$ given the updated feature extractor $f_t''$ or the old feature extractor $f_0$, together with **the knowledge of the label distribution $q_t$**. On the one hand, the theoretical guarantee improves with the feature extractor when Equation (7) holds; on the other hand, in the experiments, we empirically validate that Equation (7) holds. We believe this theoretical and empirical evidence together explains how the feature extractor updates improve our target problem. We will write these arguments more explicitly in our revision! **Relation to the related work [1]**. Thanks for connecting our work to [1]. First, as cited in Section 2, we introduce it as a self-supervised feature update algorithm, which serves our motivations discussed at lines 135-141. Moreover, we discuss in Section 3.4 how it is particularly powerful for addressing online *generalized* label shift. 
We would like to clarify this point here: according to the definition of generalized label shift, it reduces to label shift once the learner knows the underlying feature mapping $h$, and online feature updates actually help learn the underlying feature mapping $h$ that makes $P(h(x_t)|y_t)$ invariant. The test-time training literature [1] and follow-up works [2, 3] found that self-supervised feature updates can explicitly align the feature space of the new distribution to that of the training distribution; they even trained only the feature extractor **without retraining the last linear layer**, which strongly supports this direct feature space alignment between the original and shifted distributions. We would like to add this explanation in Section 3.4, which we believe better explains the role of [1] in tackling online *generalized* label shift. **Our main contribution**. Our main contribution is being the first work to bring self-supervised learning into the online label shift problem, which has mainly been studied from a theoretical perspective. Our algorithm provides a way to leverage self-supervised learning that largely improves performance, as validated through the experiments, while keeping similar theoretical guarantees. **Limitations and broader impacts**. The goal of this work is to advance the robustness of models for real-world deployment. Our work contributes to the mitigation of different adverse effects of online label shift, such as out-of-date or miscalibrated models (e.g., in healthcare or finance settings, or autonomous systems such as self-driving). Adaptation to changing shifts has positive ethical implications, such as in fairness (e.g., improving the models with updated, fair data over time) or privacy (e.g., unlearning data owned by specific groups). **Difference between label distribution shift and domain adaptation**. 
Domain adaptation is a broader concept, including the case where the support of $x$ can be totally different. Label distribution shift makes the explicit assumption that $P(x|y)$ does not change. Our algorithm and previous label shift adaptation algorithms are studied under this explicit assumption, and the theoretical results are based on it. While those algorithms work well for label shift, there is no guarantee that they work well without the label shift assumption.

**How to handle unseen categories**. This is actually an interesting question! As pointed out by Reviewer aV82, [4] focuses on this special setting by estimating the proportion of unseen data in an unsupervised manner. We will add a discussion of this setting and the related work [4] to the related work section.

[1] Y. Sun, X. Wang, Z. Liu, J. Miller, A. Efros, and M. Hardt. Test-time training with self-supervision for generalization under distribution shifts. In International Conference on Machine Learning, pages 9229-9248. PMLR, 2020.
[2] Wang, Dequan, et al. "Tent: Fully test-time adaptation by entropy minimization." arXiv preprint arXiv:2006.10726 (2020).
[3] Liu, Yuejiang, et al. "Ttt++: When does self-supervised test-time training fail or thrive?" Advances in Neural Information Processing Systems 34 (2021): 21808-21820.
[4] Qian, Y. Y., Bai, Y., Zhang, Z. Y., et al. "Handling New Class in Online Label Shift." 2023 IEEE International Conference on Data Mining (ICDM). IEEE, 2023: 1283-1288.

--- Rebuttal Comment 1.1: Comment: Thanks for your responses. I have read them, and would like to keep my score.
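The label-shift correction discussed in this thread (keeping $P(x|y)$ fixed while the label marginal drifts) amounts to reweighting the classifier's soft-max outputs by the ratio of the estimated test label frequencies to the training ones and renormalizing. A minimal NumPy sketch of this standard Bayes-rule correction; the function name and toy numbers are ours, not from the paper:

```python
import numpy as np

def reweight_softmax(probs, q_t, p_0):
    """Label-shift correction of soft-max outputs.

    Under the label shift assumption (P(x|y) fixed, only P(y) drifting),
    Bayes' rule gives P_t(y|x) proportional to P_0(y|x) * q_t(y) / p_0(y).

    probs: (n, k) soft-max outputs of the source model
    q_t:   estimated test-time label marginal, shape (k,)
    p_0:   training label marginal, shape (k,)
    """
    w = probs * (q_t / p_0)                   # element-wise per-class reweighting
    return w / w.sum(axis=1, keepdims=True)   # renormalize each row

# Toy example: an undecided source model pulled toward the now-frequent class 0.
adjusted = reweight_softmax(np.array([[0.5, 0.5]]),
                            q_t=np.array([0.9, 0.1]),
                            p_0=np.array([0.5, 0.5]))
# adjusted -> [[0.9, 0.1]]
```

When the estimated marginal equals the training marginal, the correction is the identity, which matches the intuition that OLS methods should do no harm under a stationary distribution.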
Summary: This paper addresses the problem of online label shift and proposes a novel algorithm that exploits feature representation learning to enhance performance. Inspired by the test-time training literature, the proposed method uses self-supervised learning to refine the feature extraction process. The algorithm comes with strong theoretical guarantees, and experiments demonstrate its effectiveness. Strengths: The online label shift problem is a common but crucial issue in many real-world applications. This paper considers the important task of exploiting feature representations, proposing a novel algorithm that leverages self-supervised learning to refine the feature extraction process in the online label shift problem. The proposed method satisfies practical requirements while having solid theoretical guarantees that ensure its general applicability and reliability in various non-stationary learning scenarios. Experiments show the superiority of the proposed algorithm on several benchmark datasets and two distinct online shift patterns, highlighting its effectiveness and practical impact. Weaknesses: 1. The proposed method requires the storage of previous historical data for self-supervised learning (batch accumulation in the paper). This may not be feasible in certain problems with privacy concerns. 2. The experiments are primarily conducted on simulated benchmark datasets, such as CIFAR-10 and CINIC, rather than real-world applications or datasets. This limits the understanding of the method's performance in practical settings. 3. It is recommended to include some recent work on online label shift in the paper, such as [1], which addresses the appearance of new class data in the scenario of online label shift. [1] Qian, Y. Y., Bai, Y., Zhang, Z. Y., et al. "Handling New Class in Online Label Shift." 2023 IEEE International Conference on Data Mining (ICDM). IEEE, 2023: 1283-1288. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness above.
In addition, for the storage of historical data, is it feasible to apply data augmentation techniques to the limited number of data points per time stamp and use the augmented data for self-supervised learning? How does the amount of historical data stored affect the proposed algorithm? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: the authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing insightful comments! Here are our responses. **The storage of previous historical data**. This is actually an insightful point! We acknowledge this can bring additional privacy concerns, depending on the practical scenario, and we would like to discuss this in the revision. At the same time, as shown in Table 1, even without batch accumulation (batch size $\tau=1$), OLS-OFU still outperforms OLS. The parameter $\tau$ can be treated as a performance-privacy trade-off. **Dataset set-up**. Our selection of datasets and the way we simulate the shift pattern mainly follow the previous online label shift literature [1, 2]; we additionally experiment with the domain-adaptation dataset CIFAR-10C. Although we agree that experimenting with real-world shifts would be meaningful, we believe our current experiments are sufficient to demonstrate the improvement over the literature. **Related work**. Thank you for the pointer! This work specifically addresses unseen categories in the online label shift setting and is very relevant. It tackles new unseen classes by estimating the proportion of unseen data in an unsupervised manner. We will add this literature to our related work section. [1] Baby, Dheeraj, et al. "Online label shift: Optimal dynamic regret meets practical algorithms." Advances in Neural Information Processing Systems 36 (2024). [2] Bai, Yong, et al. "Adapting to online label shift with provable guarantees." Advances in Neural Information Processing Systems 35 (2022): 29960-29974. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. After reading other comments and rebuttals, I would like to keep my score.
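The batch-accumulation trade-off discussed in the thread above ($\tau=1$ storing no test history versus larger $\tau$ accumulating batches before each self-supervised update) can be sketched as a simple loop. This is an illustrative skeleton with names of our own choosing, not the authors' code; `ssl_step` stands in for one self-supervised feature-extractor update:

```python
def run_with_batch_accumulation(stream, ssl_step, tau=100):
    """Accumulate unlabeled test batches and trigger one self-supervised
    feature update every `tau` time steps, then discard the stored data.

    stream:   iterable of unlabeled test batches, one per time step
    ssl_step: callback performing an SSL update on the accumulated data
    tau:      accumulation window; the performance/privacy trade-off knob
              (tau=1 means no historical test data is ever kept)
    Returns the number of SSL updates performed.
    """
    buffer, n_updates = [], 0
    for t, batch in enumerate(stream, start=1):
        buffer.append(batch)
        if t % tau == 0:
            ssl_step(list(buffer))  # one feature-extractor update
            buffer.clear()          # drop stored test data right away
            n_updates += 1
    return n_updates

# With 10 time steps and tau=5, the SSL update fires twice, on 5 batches each.
sizes = []
n = run_with_batch_accumulation(range(10), lambda s: sizes.append(len(s)), tau=5)
# n -> 2, sizes -> [5, 5]
```

Clearing the buffer after each update is what makes $\tau$ a privacy knob: at any moment, at most $\tau$ time steps of test data are held in memory.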
Summary: This paper addresses the online label shift (OLS) adaptation problem, which involves continually adapting a pretrained offline model to test data with various and evolving label distribution shifts. The proposed method integrates existing self-supervised learning (SSL) techniques into current OLS methods, based on three proposed method-combination principles. Experimental results demonstrate that incorporating SSL methods leads to performance improvements. Strengths: 1. The proposed method encompasses various OLS baseline methods and self-supervised learning approaches, demonstrating a comprehensive range of experimental cases. 2. The proposed method consistently achieves performance improvements over the comparison methods. Weaknesses: 1. **Limited Novelty and Significance**: The proposed method appears to be a straightforward combination of existing approaches, specifically SSL and OLS methods. The three proposed principles for combining these approaches are relatively trivial: - Since SSL affects extracted features, OLS is performed first. - As SSL changes the feature extractor, the classifier is retrained. - Given that SSL requires additional training resources, it is applied after accumulating enough data. These principles seem too basic to be considered significant technical contributions. 2. **Problems in Paper Structure**: The paper has serious structural issues. The proposed method relies heavily on existing OLS methods, yet the paper lacks even a brief or detailed introduction of the related OLS methods. Only three sentences describe OLS and SSL methods. Conversely, the problem setting of online label shift, which can be summarized in one sentence as "the test data have a different label distribution from training data", is explained with excessive and redundant content, including irrelevant details about online distribution shift. As a result, the experimental section is compressed into two pages, limiting the space to present results adequately.
The conclusion is also excessively brief, reduced to just one sentence. 3. **Unreasonable Problem Setting**: The addressed problem seems unreasonable for several reasons. - **Data Privacy Concern**: Adapting the model at each time step requires storing all historical training data, testing data, and models, raising significant data privacy concerns. For instance, in the MRI example mentioned in the paper, it is questionable whether you are allowed to carry MRI data from 999 clinics just to adapt the model to the 1000th clinic. - **Adequate Offline Training Data**: The offline training data appears overly sufficient, making the problem less challenging. In the CIFAR-10 experiment, all training data are used for offline training, with the only challenge being the variation in label distribution in the test data. Without any adaptation, the model performs well on the testing data. 4. **Low Baseline Performance**: The reported baseline result without any adaptation appears too low. A ResNet-18 model on CIFAR-10 typically achieves around 93\% accuracy on test data without any special data augmentation (see https://github.com/kuangliu/pytorch-cifar), which is significantly higher than the reported 84\% and most adaptation results. 5. **Minor Typos**: - Line 83: How do you reweight a model f? Do you mean reweight the model output? - Line 179: OGD or ROGD? - Lines 232-233: "Either...or" should be used correctly. - Line 249: Should it be "is" or "as"? Technical Quality: 2 Clarity: 1 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Already addressed by Authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for providing some useful comments about data privacy concerns and some typos. However, we respectfully disagree with the criticism of the novelty, paper structure, and experimental set-up (data split and baseline performance). We reply to the weaknesses and questions point by point below.

**Novelty of the three principles**. While we agree that the resulting steps are simple, choosing them over the alternatives required careful consideration. We list the possible alternatives and reiterate why our final design stands out. 1. *Update the feature extractor first or run the original OLS first*. Through our analysis, we found that only running the original OLS first preserves the theoretical results, while the other ordering does not. 2. *Retrain the linear layer under the training distribution or another distribution*. A more natural option would be to retrain the linear layer under the most recently estimated label distribution. However, the original OLS methods require the model to be fit under the training distribution, or at least a distribution covering all classes; the test distribution is not guaranteed to have this property, since it can consist of just one class. 3. *Batch accumulation or not*. In fact, as shown in Table 1, even with batch size $\tau=1$ OLS-OFU consistently improves over OLS, so the necessity of batch accumulation could be questioned. However, as the experiments in Table 1 show, batch accumulation brings both a performance gain and better time efficiency. Moreover, through theoretical and empirical analysis, we show that our algorithm provides a way to leverage self-supervised learning that substantially improves performance, as validated through the experiments, while retaining similar theoretical guarantees.

**Paper structure**.
> "Only three sentences describe OLS and SSL methods." We actually devoted considerable space to describing the OLS and SSL methods. Given that our focus is on how our algorithm bridges the two, we included sufficient detail in the main paper and enumerated further details in the appendix. - For OLS, we kindly refer the reviewer to these passages: 1. At lines 115-125 in Section 2, original OLS methods do not update the feature extractor, and this motivates the methodology of our paper. 2. Lines 175-181 describe the conditions under which the theoretical guarantees hold for the original OLS methods, which motivated the first design choice of our algorithm. 3. Lines 187-191 describe the choice of hypothesis space in the original OLS methods, and the underlying reason for this choice in the literature motivated the second design choice of our algorithm. 4. Moreover, for completeness, we included detailed algorithms for all 5 OLS methods as Algorithms 2-5 in the appendix. - For SSL, the details are not our focus, as long as the method has the form $\ell_{\rm ssl}(S)$ for any batch of data $S$. Our algorithm OLS-OFU should work for general SSL methods, which is one of the messages we would like to convey through our experiments. Therefore, we experiment with the different SSL methods introduced at lines 304-308, with their details in Appendix E.1. > "The problem setting of online label shift, which can be summarized in one sentence as 'the test data have a different label distribution from training data', is explained with excessive and redundant content, including irrelevant details about online distribution shift." We respectfully disagree that the description of online distribution shift and online label shift is redundant.
Rather than just stating the mathematical problem, other components are also necessary when introducing a problem, so that the paper reaches a broader audience. These include the motivating example (lines 85-96), the description of the objective function in the online setting (lines 97-103), the learning setting of unsupervised adaptation with limited unlabeled batches (lines 104-110), the label shift assumption (lines 110-113), the summary of existing online label shift methods and their limitations (lines 114-128), and the introduction of online generalized label shift adaptation as a first work (lines 128-129). > "The experimental section is compressed into two pages, limiting the space to present results adequately" We believe our experimental results are strong enough to demonstrate the effectiveness of our algorithms, as reflected in the other reviews. We disagree that a two-page experimental section is a limitation, but we would appreciate any *concrete advice* to improve our result presentation. **Data privacy concern**. This is actually an insightful point. We acknowledge this can bring additional privacy concerns, depending on the practical scenario, and we would like to discuss this in the revision. At the same time, as shown in Table 1, even without batch accumulation (batch size $\tau=1$), OLS-OFU still outperforms OLS. The parameter $\tau$ can be treated as a performance-privacy trade-off. **Data split and baseline performance**. There seems to be some misunderstanding about our data split and baselines, where we basically followed the literature. For the data split, as described at line 293, our model was trained offline on 80% of the training set, with the remaining 20% split off as a validation set. We followed the code released with [1] for training $f_0$, and our "base" performance matches their numbers in Table 1, which are around 16% error on CIFAR-10. **Typos**. Thanks for catching them! We will fix the typos.
By "reweighting" a model we mean element-wise multiplying its soft-max probability vector by another vector. [1] Baby, Dheeraj, et al. "Online label shift: Optimal dynamic regret meets practical algorithms." Advances in Neural Information Processing Systems 36 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for your response. Despite the feedback from the authors, most of the problems in this paper remain. **Limited novelty and significance.** The proposed method is a simple combination of existing OLS and SSL methods. The main contribution comes from the proposed three combination principles. While we agree that the other options are not optimal, these principles are too trivial and lack significance. **Paper structure.** - Counting all the sentences the authors mentioned, there are only 5 sentences in total introducing the related work on OLS methods. - This paper has nothing to do with online distribution shift; including a detailed description of it instead of the OLS methods is unreasonable and misleading, especially given that the proposed method depends heavily on previous OLS methods. We would not mind including online distribution shift if the existing OLS methods had been thoroughly described. - One paragraph analyzing the experimental results, from lines 317 to 330, is far from enough. One sentence describing three figures is far from enough. - One sentence for the conclusion is far from enough. **Data privacy concern.** The data privacy concern is raised not only by the batch accumulation but also by the requirement of the offline training data, i.e., all the training data from previous time steps (<t). Here comes the same question: are you allowed to carry MRI data from the first 999 clinics just to adapt the model to the 1000th clinic? **Too much offline training data.** - Both training and validation sets count as data used for training.
Thus, all the training data from CIFAR-10 has been used for offline model training, leaving the addressed OLS setting not challenging at all. **Low baseline performance** - The reported low baseline accuracy has not been explained. From the included link (https://github.com/kuangliu/pytorch-cifar), ResNet-18 on CIFAR-10 achieves 93\% accuracy. From the original ResNet paper, ResNet-18 on CIFAR-10 achieves an accuracy of 91.25\%. The CIFAR-10 dataset has its own fixed testing set. Without any adaptation, the offline model outperforms almost all the reported results. This raises the question: is OLS adaptation necessary? Why wouldn't we train the offline model adequately in advance? Based on the aforementioned reasons, I would recommend a clear Reject for this submission.
Summary: This paper introduces a novel method for addressing label shifts in an online setting, where data distributions change over time, and obtaining timely labels is challenging. Unlike traditional approaches, this paper explores enhancing feature representations using unlabeled data during test time. The proposed method, Online Label Shift adaptation with Online Feature Updates (OLS-OFU), leverages self-supervised learning to refine the feature extraction process, thereby improving the prediction model. This approach is designed to maintain similar online regret convergence to existing results while incorporating improved features. Empirical evaluations show that OLS-OFU achieves substantial improvements over current methods, demonstrating that integrating online feature updates is as effective as the fundamental online label shift methods themselves. The results are consistent across various datasets and scenarios, highlighting the robustness and generality of OLS-OFU. The paper suggests that this method could be extended to more complex scenarios, such as online covariate shift and varying domain shifts over time. Strengths: - In the Problem Setting & Related Work section, the paper effectively describes the mathematical definition of online label shift (OLS) adaptation and related research. By leveraging a similar mathematical framework, the paper clearly explains the approach to solving the online generalized label shift adaptation problem, demonstrating that the proposed method is well-motivated and thoroughly explained. - The author thoroughly explores potential questions and concerns associated with the introduction of new methods for addressing the problem. 
The paper presents well-defined principles and explanations that connect these concerns to the proposed approach, demonstrating a deep and thoughtful analysis of the new methodology. - The experimental results presented in the paper effectively showcase the strong performance of the proposed OLS-OFU method compared to both baseline approaches and the existing OLS methods. The comparisons clearly illustrate that OLS-OFU achieves better results, validating the efficacy of the proposed technique. Weaknesses: - From a critical perspective, the online generalized label shift problem presented in the paper could be viewed as a sub-field of concept drift, where the relationship between x and y changes over time. Therefore, a detailed explanation of the differences between the proposed online generalized label shift scenario and traditional concept drift is required. Additionally, it would be beneficial to conduct experimental comparisons to determine whether methods developed for concept drift could be effective in the experimental settings used in this study. Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the difference between concept drift and online generalized label shift situations? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing insightful comments! Here are our responses. **Discussion of concept drift.** Yes, you are right that the (online) generalized label shift problem is a sub-field of concept drift with its own particular assumption. According to the definition of generalized label shift, it reduces to label shift once the learner knows the underlying feature mapping $h$. We explain how each component of our method addresses each part, according to the literature. 1. Online feature updates learn the underlying feature mapping $h$ that makes $P(h(x_t)|y_t)$ invariant. In the test-time-training literature [1, 2, 3], self-supervised feature updates were found to explicitly align the feature space of the new distribution with the feature space of the training distribution; they even trained only the feature extractor **without retraining the last linear layer**, which strongly supports the direct feature-space alignment between the original and shifted distributions. 2. Built upon the updated feature extractor from step 1, the problem reduces to online label shift, and we adopt a particular online label shift method, which follows the well-studied offline label shift methods. Thank you for raising this point about generalized label shifts. We will add these explanations to Section 3.4, and we believe this will improve the understanding of how our method tackles online generalized label shift. [1] Y. Sun, X. Wang, Z. Liu, J. Miller, A. Efros, and M. Hardt. Test-time training with self-supervision for generalization under distribution shifts. In International Conference on Machine Learning, pages 9229-9248. PMLR, 2020. [2] Wang, Dequan, et al. "Tent: Fully test-time adaptation by entropy minimization." arXiv preprint arXiv:2006.10726 (2020). [3] Liu, Yuejiang, et al. "Ttt++: When does self-supervised test-time training fail or thrive?"
Advances in Neural Information Processing Systems 34 (2021): 21808-21820. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. I have read them and would like to keep my score. Here are some clarifications for my score: I still have doubts regarding the experimental validity of concept drift. So, I leaned toward the accept side but stayed on the borderline.
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper addresses the problem of online (generalized) label shift, where label information is unavailable during testing and the label distributions change over time. The main contribution of this work is the proposal of a unified framework that integrates feature learning into the online learning process, enabling the method to leverage the strengths of deep models. Theoretical analysis and experiments demonstrate the effectiveness of the proposed approach. Strengths: Overall, this paper is well-motivated and makes a valuable contribution to the online label shift problem. How to effectively incorporate feature learning into the online learning process is an important question for me, and this paper provides a unified framework that demonstrates strong empirical performance. The strengths of this paper are as follows: + The paper presents a general method applicable to various online label shift approaches proposed in the literature. + The experimental results show a significant improvement in classification accuracy by incorporating the feature learning process. + The proposed method is robust to the generalized label shift problem. Weaknesses: - One of my main concerns is the theoretical analysis of the feature learning component. While Theorem 1 is commendable for demonstrating that the proposed method is comparable to the best model adapted from $f_t^{''}$, it remains unclear how effective the feature extractor obtained via self-supervised learning is (from a theoretical view). The self-supervised learning technique appears to be used as a black box, with no theoretical insight provided into its performance. - Regarding storage costs: It seems that the proposed method requires storing the training set $D_0$. Such a requirement is somewhat unfavorable in practice, as the training set $D_0$ typically consists of a large volume of data.
- Concerning Principle 1: I understand that Principle 1 is essential for achieving theoretical guarantees in the online label shift problem. However, I am uncertain whether such a requirement is necessary in practice. Should the update procedure depend on the amount of data available at each round? If we have a reasonable amount of data at each time, wouldn't it be more reasonable to update the feature extractor before the online learning process to gather more information? Conversely, if the data at each iteration is limited, I'm unsure if the difference between update procedures is significant. It would be beneficial if the authors could provide a more detailed discussion on this matter. I am happy to discuss these concerns with the authors and update my score if the questions are adequately addressed. ===post-rebuttal=== I appreciate the authors' efforts in addressing my questions and am satisfied with their feedback. I encourage the authors to incorporate the discussion from the rebuttal period into the main paper. I have raised my score to 7. Technical Quality: 2 Clarity: 3 Questions for Authors: Could you provide more theoretical insight on the feature extractor? For instance, is there any guidance on selecting the SSL method or determining the step size for performing the gradient step? - Is there any method to reduce the requirement of storing the train set $D_0$? - Could you provide more justification for Principle 1? (Please refer to the three points of concern outlined above for more details.) Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: I did not identify the negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Title: Rebuttal by Authors Comment: We would like to thank the reviewer for these meaningful questions! Here are our responses. **Theoretical insights into SSL and the choice of gradient step size**. We have to admit that theoretical study of feature learning is generally hard; instead, SSL methods in the literature are analyzed through exhaustive empirical validation. One hypothesis about the SSL methods in our paper has been validated in the literature: *they generally improve the feature representation rather than only working for specific tasks / distributions*. Equivalently, *for most data distributions, the accuracy of a classifier based on the "improved" feature representations should be higher*. Moreover, this hypothesis supports our Equation 7, which compares the losses of classifiers learned, *given* the test distribution, with or without the updated feature extractor; we also include the empirical validation of Equation 7 in the experiment section. As analyzed in Section 3.3, Equation 7 is a sufficient condition for FLHFTL-OFU having a better loss upper bound than FLHFTL. These arguments show how the hypothesis about the SSL methods, which has been validated in the literature, indicates that our OLS-OFU is a better algorithm than OLS for the specific task of online label shift. As for the choice of gradient step size, we have two empirical suggestions for practice. The first is to check the proper batch size for the SSL method offline; for example, a large batch is necessary for contrastive learning methods such as MoCo. The second is that, as suggested by our experiments, $\tau=100$ is generally good for all three SSL methods, different datasets, and different types of shifts. We recommend $\tau=100$ as a good starting point for choosing this parameter. We will explicitly add the discussion of the SSL hypothesis and the gradient step size in our revision to give more intuition behind our algorithm.
**The requirement of the training set**. We thank the reviewer for raising this insightful question! We would first like to clarify that *the step of retraining the last linear layer is the only place that needs training data in our algorithm*. Moreover, as discussed in Principle 2 and lines 248-252, retraining the last linear layer is designed only for three of the previous OLS methods (ROGD, FTH, FLHFTL); our algorithm for the two other OLS methods in the literature (UOGD and ATLAS) is independent of this step. Therefore, including feature-extractor updates via our algorithm for UOGD and ATLAS actually does not require the training data. As for our algorithm for ROGD, FTH, or FLHFTL, we further study how the amount of training data stored for online test adaptation influences the effectiveness of OLS-OFU. We evaluate OLS-OFU with $0\%-100\%$ stored training data; *$0\%$ means that we still update the feature extractor but reuse the pretrained linear classifier*. The results are reported in the following table.

| % stored training data | FTH-OFU | ROGD-OFU | FLHFTL-OFU | FTFWH-OFU |
|------------------------|---------|----------|------------|-----------|
| 100% (original) | 8% | 10.8% | 7.45% | 7.33% |
| 80% | 8.18% | 10.93% | 7.62% | 7.48% |
| 60% | 8.68% | 11.84% | 8.04% | 7.92% |
| 40% | 9.49% | 12.50% | 8.91% | 8.40% |
| 20% | 9.54% | 12.63% | 9.51% | 8.91% |
| 10% | 9.67% | 12.82% | 10.11% | 9.86% |
| 5% | 9.81% | 12.94% | 10.34% | 10.03% |
| 0% | 10.24% | 13.50% | 10.43% | 10.41% |
| OLS only | 12.04% | 13.65% | 12.02% | 11.9% |

We can observe that with less stored training data for retraining the last linear layer, the error of OLS-OFU increases gradually. However, an important finding is that even with 0\% stored training data, i.e., when we keep reusing the pretrained linear classifier together with the updated feature extractor, the error of OLS-OFU is still lower than that of OLS without feature-extractor updates.
This can actually be explained by the original test-time training papers [1, 2], where they only update the feature extractor without refining the last linear layer, and the feature updates alone still bring substantial benefit. From the results, we conclude that even if we remove the requirement of storing training data, our algorithm OLS-OFU still outperforms the original OLS; more stored training data further boosts the performance. We will add these results in our revision! [1] Y. Sun, X. Wang, Z. Liu, J. Miller, A. Efros, and M. Hardt. Test-time training with self-supervision for generalization under distribution shifts. In International Conference on Machine Learning, pages 9229-9248. PMLR, 2020. [2] Wang, Dequan, et al. "Tent: Fully test-time adaptation by entropy minimization." arXiv preprint arXiv:2006.10726 (2020). --- Rebuttal 2: Title: Rebuttal by Authors Comment: **The influence of Principle 1 in practice.** We agree that empirical evidence is important as well to illustrate the necessity of Principle 1. Therefore, we further conducted an ablation study to justify it: we compare OLS-OFU with a variant named OLS-OFU-difforder, where we run the SSL update first and OLS later (which violates Principle 1). We compare these two algorithms across all 6 previous OLS methods and two choices of batch accumulation, $\tau=1$ and $\tau=100$. The dataset is CIFAR-10, the SSL method is rotation-degree prediction, and the shift pattern is sinusoidal; we observed similar performance for other settings of dataset, SSL, and shift patterns and will include the full results in the revision.
The results are reported in the following table.

|                                | FTFWH  | FTH    | ROGD   | ATLAS  | UOGD   | FLHFTL |
|--------------------------------|--------|--------|--------|--------|--------|--------|
| OLS-OFU ($\tau=1$)             | 11.3%  | 11.2%  | 13.9%  | 11.6%  | 11.4%  | 11.2%  |
| OLS-OFU-difforder ($\tau=1$)   | 12.33% | 12.12% | 14.35% | 12.10% | 11.91% | 12.08% |
| OLS-OFU ($\tau=100$)           | 7.33%  | 8%     | 10.8%  | 10.1%  | 8.35%  | 7.45%  |
| OLS-OFU-difforder ($\tau=100$) | 7.37%  | 8.05%  | 10.82% | 10.11% | 8.36%  | 7.48%  |

We can observe that when $\tau=1$, the difference between OLS-OFU and OLS-OFU-difforder can reach $0.9\%$ (e.g., for OLS = FTH), which cannot be neglected. This means that Principle 1 is indeed important in practice when the batch is small. As for $\tau=100$, at first sight there seems to be no difference between OLS-OFU and OLS-OFU-difforder. This is because we only update the feature extractor every $\tau=100$ time steps, so Principle 1 makes no difference between OLS-OFU and OLS-OFU-difforder for the remaining 99 of every 100 steps. We therefore take a closer look at the average error of OLS-OFU and OLS-OFU-difforder over only the steps at which the feature extractor is updated and Principle 1 has been applied. Here are the numbers.

|                                | FTFWH | FTH   | ROGD   | ATLAS  | UOGD  | FLHFTL |
|--------------------------------|-------|-------|--------|--------|-------|--------|
| OLS-OFU ($\tau=100$)           | 7.23% | 7.91% | 10.81% | 10.12% | 8.33% | 7.53%  |
| OLS-OFU-difforder ($\tau=100$) | 8.02% | 8.63% | 11.19% | 10.55% | 8.80% | 8.29%  |

We can observe that OLS-OFU, where Principle 1 is applied, yields non-negligible improvements, which shows that Principle 1 is important in practice even when the batch is large. The reason is that when the feature extractor depends on the to-be-adapted test data, the subsequent estimation of the distribution of these test data in the OLS method can be less accurate, which hurts performance.
Overall, Principle 1 is important not only for the theoretical results but also in practice. We will add this further analysis to our experiment section!

---

Rebuttal 3:
Title: Rebuttal by Authors
Comment: We hope we were successful in addressing your concerns. Please let us know if you have any additional concerns. We look forward to hearing from you!
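For readers following the ablation above, the ordering that Principle 1 prescribes (OLS prediction/update on the current features first, SSL feature-extractor update afterwards) versus the ablated "difforder" variant can be sketched abstractly. This is our own illustration, not the authors' implementation; `predict`, `ols_update`, and `ssl_update` are hypothetical placeholder callables:

```python
def ols_ofu_step(model, x_batch, ols_update, ssl_update):
    """One online step obeying Principle 1: the OLS step must see features
    produced *before* this batch influences the feature extractor."""
    preds = model.predict(x_batch)      # uses the current (pre-update) extractor
    ols_update(model, preds, x_batch)   # OLS step on pre-update features
    ssl_update(model, x_batch)          # SSL feature-extractor update comes last
    return preds


def difforder_step(model, x_batch, ols_update, ssl_update):
    """The ablated variant: SSL first, so the extractor already depends on
    the very batch the OLS method is about to adapt to."""
    ssl_update(model, x_batch)
    preds = model.predict(x_batch)
    ols_update(model, preds, x_batch)
    return preds
```

The only difference is the position of `ssl_update`, which is exactly the dependency the rebuttal argues makes the OLS distribution estimate less accurate.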
Approximating mutual information of high-dimensional variables using learned representations
Accept (spotlight)
Summary: This paper explores the idea that underlying low-dimensional structure in high-dimensional data can be exploited to approximate mutual information (MI) efficiently and with a reasonable number of samples. The approach learns a low-dimensional embedding of high-dimensional data using a neural network architecture similar to an autoencoder. The MI of the resulting low-dimensional embedding is estimated using a nearest-neighbor approximation. The paper includes extensive experimental evaluation including synthetic Gaussian data, resampled non-Gaussian data, and two examples in the computational biology domain. Strengths: This is a very nicely-written paper. The problem addressed is important and the ideas and approach are interesting. While the idea is relatively straightforward, the authors provide extensive experimentation to convince the reader that their approach is useful. This reviewer particularly appreciates the inclusion of domain-specific open problems as evidence of the efficacy of the proposed approach. Weaknesses: The synthetic multivariate Gaussian data evaluation is lacking in some respects. The authors consider only a very specific form of dependence, namely linear dependence that obeys the stochastic process discussed in Sec. 3.1 (second paragraph) and that has low intrinsic dimensionality. Because Gaussians model only linear dependence, it is likely that MI estimation based on linear projections (such as Sliced MI) would perform well in this setting. Yet the experiments fail to provide comparison to Sliced MI or related methods. A more thorough set of synthetic data experiments, including those that exploit low-dimensional structure, is proposed in (Paweł et al. 2024). The motivation and justification for the proposed approach is a little misleading. In particular, the authors misstate the findings in McAllester and Stratos (2018); this reviewer suspects the intended reference is McAllester and Stratos (2020).
In that paper the authors do not demonstrate that the statistical efficiency of estimators (MINE and InfoNCE) depends on the dimension of the random variables, but rather they show a strong dependence on the value of MI. In particular they show that sample complexity scales exponentially with the value of MI. This finding is not consistent with the claim (L52) "_More generally, it has been shown that no MI estimator can be accurate without making strong assumptions about the distribution..._". In fact the estimators referred to are consistent in the infinite sample limit, but they can require a prohibitive number of samples for accurate estimates. A secondary motivation that is questionable is that the proposed method (L62) makes "_strong, yet reasonable, assumptions about data which enable tractable MI estimation._" In fact the MI of the latent embedding is not tractable. Indeed, the authors use a nonparametric estimator (KSG) to approximate the latent state MI.

**Detailed Comments**

* Sec. 3.1 : The stochastic process described seems overly complicated. Isn't it equivalent to restrict the covariance matrix for a multivariate Gaussian to a known structure?
* Fig. 3c : This figure is not very informative due to the failure of the majority of methods; perhaps consider a larger epsilon value?
* L163 : Change "MIME" to "MINE"
* Sec. 3.2 (last paragraph) : Change figure references from Fig. 3 to Fig. 4
* Sec. 4.2 : It isn't totally clear what $X_{6'}$ is referring to as it is not explicitly defined

**References**

Czyż, Paweł, et al. "Beyond normal: On the evaluation of mutual information estimators." _Advances in Neural Information Processing Systems_ 36 (2024).

McAllester, David, and Karl Stratos. "Formal limitations on the measurement of mutual information." _International Conference on Artificial Intelligence and Statistics_. PMLR, 2020.
Technical Quality: 4 Clarity: 4 Questions for Authors: See "Weaknesses" section Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: An obvious limitation of this work is not explicitly discussed. The latent dimension is introduced as a design variable that must be known (estimated) to compute the MI measure. Sensitivity to the choice of latent dimension is not directly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for this thoughtful review. Below we address specific points:

***"[...] the experiments fail to provide comparison to Sliced MI or related methods."***

We appreciate the reviewer's insight that sliced MI is likely to capture dependence well in high dimensions, particularly for Gaussian data (where CCA is sufficient [1]). We have now applied sliced MI to a subset of our multivariate Gaussian benchmark, shown in Rebuttal Figure (RF) 7. While the relative accuracy of sliced MI is near perfect, its absolute accuracy is low. This is because sliced MI is not a method to estimate MI — it is a distinct measure of dependence with a different interpretation from classical MI (as discussed in [4]). We fully agree with the broader point that the more interesting use-case of LMI is for non-Gaussian data, and we explore this next.

***"A more thorough set of synthetic data experiments, including those that exploit low-dimensional structure, is proposed in (Paweł et al. 2024)."***

We have now included an additional benchmark which considers high-dimensional distributions diversified using the transformations from [1] (RF6). The results are qualitatively similar to our Fig. 2 (LMI dramatically outperforming other methods in the high ambient dimension, low intrinsic dimension regime). In our initial submission, we had opted to develop our own "realistic" non-Gaussian benchmarking approach (Section 3.2, Fig. 4) rather than use the tasks of [1], because the maximum dimensionality studied in [1] is 25, the vast majority of tasks were $\leq 5$ dimensions, and the distributions, while diverse, are still analytically defined. Now, rather than using the specific tasks from [1], we have developed complementary high-dimensional benchmarks by applying the transforms proposed in [1] to our multivariate Gaussians from Fig. 2. This allows us to create versions of the existing Fig. 2 with "half-cube", "asinh", and "uniform marginal" distributions (as defined in [1]). Due to time constraints in the rebuttal period, we explore only the most challenging settings from Fig. 2, those with 1000 ambient dimensions and 1-9 intrinsic dimensions. We find that LMI performance is similar for untransformed data and all three transformations (RF6). We will include these results, as well as results in varying ambient dimensions, in our revised paper.

***"In particular, the authors misstate the findings in McAllester and Stratos (2018); this reviewer suspects the intended reference is McAllester and Stratos (2020). In that paper the authors do not demonstrate that statistical efficiency of estimators (MINE and InfoNCE) depend on dimension of the random variables, but rather they show a strong dependence on the value of MI."***

We are grateful that the reviewer caught our mis-citation. We will correct the reference to McAllester and Stratos (2020) and remove its citation in L48 (about the curse of dimensionality, leaving only a reference to [4]).

***"This finding is not consistent with the claim (L52) [...] In fact the estimators referred to are consistent in the infinite sample limit, but they can require a prohibitive number of samples for accurate estimates."***

Thank you for pointing this out. We will correct the statement to "*More generally, it has been shown that no technique can accurately estimate MI **from finite samples** without making strong assumptions...*" with the assumptions being, at a minimum, that $I(X; Y) < O(\log N)$ for $N$ samples.

***"In fact the MI of the latent embedding is not tractable. Indeed, the authors use a nonparametric estimator (KSG) to approximate the latent state MI."***

We apologize for the wording. We had meant tractable in the more colloquial sense; we will replace it with "feasible" for clarity.
***"[...] Isn't it equivalent to restrict the covariance matrix for a multivariate Gaussian to a known structure?"***

We agree that the process described in Section 3.1 is unsatisfyingly complicated. We'll update and simplify the exposition in the revised manuscript. It is indeed equivalent to restricting the covariance matrix, for example to the following structure:

* $\text{Cov}(X_i, Y_i) = \rho$ for $i \in \{1, \ldots, k\}$,
* $\text{Cov}(X_i, X_i) = \text{Cov}(Y_i, Y_i) = 1$ for $i \in \{1, \ldots, d\}$,
* $\text{Cov}(X_i, X_j) = \text{Cov}(Y_i, Y_j) = \text{Cov}(X_i, Y_j) = 0$ for all $i \neq j$,
* $\text{Cov}(X_i, Y_i) = 0$ for $i \in \{k+1, \ldots, d\}$.

***"Fig. 3c : This figure is not very informative due to the failure of the majority of methods; perhaps consider a larger epsilon value?"***

Thank you for this suggestion. We have included a version with $\epsilon = 0.8$ in RF5. We caution that this corresponds to an 80% error in the estimate.

***"L163 : Change "MIME" to "MINE"; Sec. 3.2 (last paragraph) : Change figure references from Fig. 3 to Fig. 4"***

Thank you for catching these typos. We will fix them in the revision.

***"Sec. 4.2 : It isn't totally clear what $X_{6'}$ is referring to as it is not explicitly defined"***

We apologize for the lack of clarity. In our revision we will define it as gene expression state in the separated well of cells on day 6 after harvest.

***"An obvious limitation of this work is not explicitly discussed. The latent dimension is introduced as a design variable that must be known (estimated) to compute the MI measure. Sensitivity to the choice of latent dimension is not directly addressed."***

We fully agree that this is an important aspect of our work which warranted more careful treatment. In our response to Reviewer 8utg, we empirically study and discuss sensitivity to latent dimension. Due to space constraints, we kindly point the reviewer to the response to 8utg, specific section denoted [common with x45D].

---

Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for meticulously addressing my points. I have read through your rebuttal. I am convinced that this work deserves publication and am willing to champion the paper.
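As context for the covariance structure discussed in this rebuttal: under that structure the ground-truth MI has the standard Gaussian closed form $I(X;Y) = -\frac{k}{2}\log_2(1-\rho^2)$ bits, which makes the synthetic benchmark easy to reproduce. A minimal NumPy sketch (our own illustration of the setup, not the authors' code):

```python
import numpy as np


def make_cov(d, k, rho):
    """Covariance of the 2d-dimensional vector (X, Y): unit marginals,
    Cov(X_i, Y_i) = rho on the first k coordinate pairs, zero elsewhere."""
    sigma = np.eye(2 * d)
    for i in range(k):
        sigma[i, d + i] = sigma[d + i, i] = rho
    return sigma


def gaussian_mi_bits(k, rho):
    # Each correlated pair contributes -0.5 * log2(1 - rho^2) bits of MI.
    return -0.5 * k * np.log2(1.0 - rho ** 2)


sigma = make_cov(d=10, k=4, rho=0.5)
np.linalg.cholesky(sigma)  # positive definite, so a valid covariance
samples = np.random.default_rng(0).multivariate_normal(np.zeros(20), sigma, size=1000)
```

With $k=4$ and $\rho=0.5$, `gaussian_mi_bits` gives roughly 0.83 bits regardless of the ambient dimension $d$, which is the invariance the benchmark exploits.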
Summary: The focus of the paper is on approximating mutual information (MI) between multidimensional variables. This problem is challenging as the approximation of the MI suffers from the curse of dimensionality. The authors propose a method that approximates the MI via an embedding in a lower dimensional space. They test their methods on embedding from proteins language model, scRNA-seq data, and toy datasets. Strengths: The approach is theoretically founded, and I appreciate that the authors also focus on making it practical. For instance, their model is intentionally simple to avoid doing large parameter sweep. The claim of the paper are well validated on toy datasets for which we can vary the dependence and ambient dimension, while having a close form solution of the MI. Weaknesses: - I would advise reporting the standard deviation in all tables (error bar in figures). - The presentation of the results could be improved, for example a few figures have labels that are not readable (e.g. Fig.7). Technical Quality: 3 Clarity: 2 Questions for Authors: - In Fig.2 , when you estimate 10 MIs, are the samples required to evaluate these MIs seen during training ? - In Fig.4 c), could you also include training time when training is required. - The results on toy datasets really highlight the benefits of the method, but not its limitations. For instance, it would be interesting to show results by varying the latent dimensions of the autoencoder. Especially, what happens if the latent dimension is smaller than $k$ ? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors address the limitation of their work at the end of the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for this review. Below we address specific questions and concerns.

***"I would advise reporting the standard deviation in all tables (error bar in figures)."***

Thank you for raising this. We apologize for omissions and will include s.d. where applicable in our revision. We have included a version of Figure 4 with s.d. as Rebuttal Figure (RF) 4.

***"The presentation of the results could be improved, for example a few figures have labels that are not readable (e.g. Fig.7)."***

We apologize for the lack of clarity. We will increase font size in our revision.

***"In Fig. 2, when you estimate 10 MIs, are the samples required to evaluate these MIs seen during training?"***

The samples used to estimate MI are *not seen* during training (for LMI, InfoNCE, MINE). This is the case for all estimates in the paper. We will clarify this in our Appendix section on implementation details. Anecdotally, we find all neural estimators (including LMI) generally perform worse when estimates are made on training data.

***"In Fig.4 c), could you also include training time when training is required."***

The "runtime" column includes training time for all methods. We will clarify this in the caption.

***[common with x45D] "[...] For instance, it would be interesting to show results by varying the latent dimensions of the autoencoder. Especially, what happens if the latent dimension is smaller than $k$?"***

Thank you for this insightful question. We did not adequately address this in our initial submission, and will include discussion in our revised manuscript. Below, we outline some general principles for choosing latent space size, give a heuristic approach to choosing latent space size, and show that even suboptimal latent space size often yields state-of-the-art performance in high-dimensional settings.

***General principles for choosing latent space size***

There is a clear tradeoff which arises when changing latent space size.
As the latent space gets larger, the capacity of the compressed representation increases, and we might expect that representation quality increases (with the caveat that representation quality can be limited by sample sparsity). However, as the latent space gets larger, the MI estimate in latent space becomes more difficult. As such, the ideal choice is the smallest possible latent space size which captures the dependence structure of the variables. In practice, this size is hard to determine rigorously.

***Heuristic approach to choosing latent space size***

From Theorem 2 of our paper, a simple extension of DPI, we know that $I(Z_x; Z_y) \leq I(X; Y)$. With the caveat that the inequality is not guaranteed to hold for the estimated $\hat{I}(Z_x; Z_y)$, we can reason that the parameter choices that maximize $\hat{I}(Z_x; Z_y)$ are likely ideal. As such, one sensible way to choose a latent space size is to try several, and use that which yields the largest estimate. This is computationally reasonable to do, and we will include this suggestion for practitioners in the Discussion of our revised paper.

***Empirically exploring sensitivity to latent space size***

In our paper, we find that a suboptimal choice of latent space size can still be effective. Every single estimate in the main text of the paper uses 8 latent dimensions per variable, despite datasets with dependence structure of varying intrinsic dimensionality. For example, in Figure 2, LMI with 8 latent dimensions performs better than existing techniques across datasets where the ideal choice ranges from 1 to 9 dimensions. Notably, even in the case where the LMI latent space is smaller than the intrinsic dimensionality of the dependence structure (the 9 intrinsic dimension column), it outperforms alternate approaches in high ambient dimensions.
Here, we will give a more direct example for multivariate Gaussian data with a 4-dimensional dependence structure, in 1000D ambient space, with ground truth MI of 1 bit. We estimate MI from 5e3 samples using various latent space sizes, and compare this to MINE and InfoNCE.

|          | LMI-2    | LMI-4    | LMI-6    | LMI-8    | MINE      | InfoNCE   |
|----------|----------|----------|----------|----------|-----------|-----------|
| Estimate | 0.295732 | 0.719762 | 0.670632 | 0.686974 | -0.000001 | -0.001417 |

The most accurate estimate comes from LMI with the optimal choice of 4 latent dimensions, but all tested choices (including the 2D latent space, which cannot fully capture the dependence) improve over MINE and InfoNCE. If we use the heuristic approach, we would choose the optimal latent space size. If we had arbitrarily chosen 8, we would be within 5% of the 4 latent dimension estimate. As another example, in real-world data, we show the sensitivity of the $I(K;T)$ estimate from the protein sequence embedding dataset described in Section 4.1 (RF3). We also visualize how model loss scales with latent space size, and show that both the estimate and model loss stabilize around 8 latent dimensions.

---

Rebuttal Comment 1.1:
Title: Follow-up
Comment: Thank you for answering my questions and presenting additional results.
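The DPI-motivated heuristic discussed in this rebuttal (try several latent sizes and keep the one whose estimate is largest) is simple to express in code. In the sketch below, `estimate_fn` stands in for a call to an MI estimator at a given latent dimension and is a hypothetical placeholder, not the authors' API; the toy numbers are the LMI-2/4/6/8 estimates reported in the table above:

```python
def pick_latent_dim(estimate_fn, candidate_dims):
    """Since I(Z_x; Z_y) <= I(X; Y) (DPI / Theorem 2 in the paper), prefer
    the latent size whose latent-space MI estimate is largest."""
    estimates = {d: estimate_fn(d) for d in candidate_dims}
    best = max(estimates, key=estimates.get)
    return best, estimates


# Toy illustration with the rebuttal's reported estimates (bits):
reported = {2: 0.295732, 4: 0.719762, 6: 0.670632, 8: 0.686974}
best, ests = pick_latent_dim(reported.get, [2, 4, 6, 8])
```

On the reported numbers this selects 4 latent dimensions, matching the intrinsic dimensionality of the synthetic example.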
Summary: This paper proposes latent MI (LMI), a method for estimating the mutual information (MI) between two high-dimensional multivariate random variables. For that, the technique uses the non-parametric MI estimator from [KSG04] on lower-dimensional latent representations that are learned by neural networks such that their MI is close to the one between the original variables. The paper provides some theoretical motivation for the proposed method. Finally, there is an experimental evaluation and comparison with other state-of-the-art methods for MI estimation, together with applications to problems in biology. [KSG04] Kraskov, A., Stögbauer, H., Grassberger, P. (2004). Estimating mutual information. Physical Review E—Statistical, Nonlinear, and Soft Matter Physics, 69(6), 066138. Strengths: 1) Pragmatic presentation of the problem setup and the proposed solution. 2) Interesting approach to MI estimation in (very) high dimensions, an open research subject, exploiting the informative low-dimensional structure of variables, which is a trendy approach (representation learning). 3) Some numerical illustrations of the proposed method, focused on the interpretability of the estimator, which is of paramount utility when it comes to applying mutual information to real-world problems. 4) Most limitations of this work are acknowledged by the authors. Weaknesses: 1) Theoretical justification for the method is rather simplistic, as acknowledged by the authors, and relies on potentially loose approximations such as the data processing inequality. 2) Overall, the proposed method consists in applying an existing estimator to pre-processed input variables, in the form of latent representations. It seems to lack a joint design, which results in two additive and independent sources of error (one from the representation, the other from the estimation itself). 3) Some minor concerns are raised in the 'Questions' field.
Technical Quality: 3 Clarity: 3 Questions for Authors: 1) While the paper only applies the LMI estimator to problems in biology, have the authors considered other applications? For instance, the MI plays a central role in supervised learning, in which the feature vector $X$ is high-dimensional (e.g. an MNIST image) while the label $Y$ consists in a few dimensions. Typically $X$ would be compressed but not $Y$, resulting in quite different latent spaces (different dimension and nature), unlike in the examples introduced in the paper. 2) Could the authors explain the choice to apply the KSG estimator to the latent representations in their LMI method? Have the authors considered combining their method with more modern methods such as MINE [BBR18]? 3) The last paragraph of Section 3.2 refers to "Fig. 3a, 3b" and "Fig 3c" while it should be to Fig. 4. Please make sure the figures are referenced properly throughout the paper. 4) Related to the previous point, please be consistent when referring to figures. It should be "Fig. #" or "Figure #" (preferably the former), but both are used. 5) Some relevant references (in my opinion) on MI estimation could be added in the introduction, e.g., [NZH19], [GVG15], [MAK20].

[BBR18] Belghazi, M. I., Baratin, A., Rajeswar, S., Ozair, S., Bengio, Y., Courville, A., & Hjelm, R. D. (2018). MINE: Mutual information neural estimation. arXiv preprint arXiv:1801.04062.

[NZH19] Noshad, M., Zeng, Y., Hero, A. O. (2019, May). Scalable mutual information estimation using dependence graphs. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2962-2966). IEEE.

[GVG15] Gao, S., Ver Steeg, G., Galstyan, A. (2015, February). Efficient estimation of mutual information for strongly dependent variables. In Artificial Intelligence and Statistics (pp. 277-286). PMLR.

[MAK20] Mukherjee, S., Asnani, H., Kannan, S. (2020, August). CCMI: Classifier based conditional mutual information estimation.
In Uncertainty in artificial intelligence (pp. 1083-1093). PMLR. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed most of the limitations of their work in the "Limitations" section of their paper. Please however see the 'Weaknesses' section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for this insightful review.

***"Overall, the proposed method consists in applying an existing estimator to pre-processed input variables, in the form of latent representations. It seems to lack a joint design, which results in two additive and independent sources of error (one from the representation, the other from the estimation itself)."***

The reviewer is correct in noting that LMI can have two sources of error: representation error, and estimation error. This is a conscious choice, made because KSG has many useful properties which variational bound neural estimators lack. So we will address it in conjunction with the later question:

***"Could the authors explain the choice to apply the KSG estimator to the latent representations in their LMI method? Have the authors considered combining their method to more modern methods such as MINE?"***

We considered the possibility of using MINE or InfoNCE as latent estimators, and showed one empirical result in Appendix A.2.2 where latent KSG outperforms latent InfoNCE. However, our choice to use KSG is deeper than this one result. The KSG estimator has a number of useful properties, most notably:

1. KSG easily decomposes into pointwise mutual information estimates. Decomposing MI estimates into their pMI values allows us to "explain" which samples contribute to an MI value. In real-world applications, this is extremely useful. We show this concretely in Sections 4.1 and 4.2, where pMI decomposition enables prediction of protein interactions (4.1) and identifying transition points during neutrophil differentiation (4.2).
2. KSG is far more sample efficient in low-dimensional settings than MINE and InfoNCE. This is well-established: see Figure 9 of [1].
3. In general, MI estimators struggle in high MI settings [2]. This phenomenon is particularly well studied for KSG, and corrections have been developed for situations with high MI [3].
We are optimistic that these corrections can be applied to LMI as well.

***"While the paper only apply the LMI estimator to problems in biology, have the authors considered other applications? For instance, the MI plays a central role in supervised learning in which the feature vector is high-dimensional (e.g. a MNIST image) while the label consists in a few dimensions. Typically $X$ would be compressed but not $Y$, resulting in quite different latent spaces (different dimension and nature) unlike in the examples introduced in the paper."***

Thank you for this interesting question. In principle, LMI easily adapts to this situation. Reusing notation from Section 2, if $X$ is our compressed variable, we must simply train networks $E_X, D_{XY}, D_{XX}$ with the loss $\mathcal{L} = \text{MSE}(X, D_{XX}(E_X(X))) + \text{MSE}(Y, D_{XY}(E_X(X)))$. Then we can estimate $\hat{I}_{KSG}(Y; E_X(X))$. Though our existing software implementation does not generalize to this kind of model, we can still input a one-dimensional variable which gets "encoded" and "decoded" into latent space, such that the network must learn the identity function. While this approach includes some unnecessary parameters, we empirically evaluate its effectiveness in the problem of measuring the mutual information between MNIST digits and their labels. That is, $I(X; L)$, where $X$ is the 784-dimensional distribution over images, and $L$ is the 1-dimensional distribution over digit identities. The ground truth here is roughly $I(X; L) \approx \log_2(10) \approx 3.3$ bits. Despite the facts that (1) we are processing images without inductive bias from convolutional layers, (2) we are "wasting" parameters by using our default architecture choices, and (3) we are not carefully treating the discrete-continuous mixture, we find that LMI works reasonably well, significantly outperforming MINE and InfoNCE. We show the results in the table below.
We will include this analysis in the Appendix of our revised paper.

|         | LMI  | MINE | InfoNCE | Ground truth |
|---------|------|------|---------|--------------|
| I(X;L)  | 2.37 | 0.02 | 0.47    | ~3.3         |

***"The last paragraph of Section 3.2 refers to "Fig. 3a, 3b" and "Fig 3c" while it should be to Fig. 4. Please make sure the figures are referenced properly throughout the paper. Related to the previous point, please be consistent when referring to figures. It should be "Fig. #" or "Figure #" (preferably the former), but both are used."***

We apologize for figure reference errors and inconsistency. Our revised paper will correct these.

***"Some relevant references (in my opinion) on MI estimation could be added in the introduction e.g., [NZH19], [GVG15], [MAK20]."***

Thank you for pointing out these relevant references which will strengthen our revised introduction.

---

Rebuttal Comment 1.1:
Title: Thank you for the rebuttal!
Comment: I thank the authors for their detailed response and the newly conducted experiments. Specifically, I believe the new analysis in a supervised learning setup offers some insights for applying the proposed method to more general problems. Naturally, this requires a more in-depth study and could be interesting for future work. The analysis could however be added to the appendix, as well as the experiments presented in the response to all reviewers. This work is overall of good quality, at the intersection of exciting topics, and could pave the way to more profound studies.
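Since the choice of KSG is central to this rebuttal thread, a compact reference sketch of the estimator may help readers: the NumPy/SciPy code below is our own illustrative reimplementation of algorithm 1 from Kraskov et al. (2004), not the authors' code, and returns MI in nats. The per-sample digamma terms are what enable the pointwise (pMI) decomposition the rebuttal highlights.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma


def ksg_mi(x, y, k=3):
    """KSG estimator of I(X;Y) in nats; x and y are (n, d_x) and (n, d_y).
    Averaging digamma(k) + digamma(n) - digamma(n_x+1) - digamma(n_y+1)
    over samples; each summand is a pointwise MI contribution."""
    n = len(x)
    z = np.hstack([x, y])
    # distance to the k-th nearest neighbor in the joint space (Chebyshev metric)
    eps = cKDTree(z).query(z, k=k + 1, p=np.inf)[0][:, -1]
    tx, ty = cKDTree(x), cKDTree(y)
    # counts of strictly-closer neighbors in each marginal space
    nx = np.array([len(tx.query_ball_point(pt, np.nextafter(r, 0), p=np.inf)) - 1
                   for pt, r in zip(x, eps)])
    ny = np.array([len(ty.query_ball_point(pt, np.nextafter(r, 0), p=np.inf)) - 1
                   for pt, r in zip(y, eps)])
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```

For correlated 1D Gaussians with correlation 0.8 the true MI is about 0.51 nats, and this sketch lands close to that with a few thousand samples; in low dimensions this sample efficiency is exactly the property the rebuttal cites.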
Summary: This paper introduces an approach to approximate mutual information (MI) by applying a nonparametric MI estimator to learned representations. The representations are learned by minimizing a weighted sum of the reconstruction loss and the prediction loss. The authors conducted experiments on both synthetic data and biological data. No higher ratings: The representation learning part has some technical flaws. No lower ratings: The evaluation methods and discussions provide new understandings and suggest some new applications. Strengths: 1. The evaluations are comprehensive with detailed discussions. The demonstrations suggest broader applications of MI in scientific domains. 2. The paper is well organized; presentations are clear and easy to follow. Weaknesses: Despite the comprehensive evaluations and discussions, the proposed approach has a main technical issue: the learned representations might not capture the dependence between $X$ and $Y$, making the LMI fail. 1. Ideally, the reconstruction loss (autoencoder) alone can preserve the information of $X$, $Y$. However, it can be very inefficient when there are many redundancies in $X, Y$. A counter-example is when $X = (U, W), Y = (V, W)$, and $U$, $V$ have much more information than $W$; 2. The prediction loss highly depends on $X$, $Y$; in the worst case, the prediction loss does not reveal any information about the dependence. A simple example is when the densities of $X$, $Y$ are symmetric. Two particular cases: (1) when $X$ and $Y$ are uniformly distributed on the unit circle; (2) when $X$ and $Y$ are generated from the Gaussian mixture $\frac{1}{2}\left(\mathcal{N}\left(0, \begin{bmatrix}1 & 1/2\\ 1/2 & 1\end{bmatrix}\right) + \mathcal{N}\left(0, \begin{bmatrix}1 & -1/2\\ -1/2 & 1\end{bmatrix}\right)\right)$. In both cases, the best predictor is simply zero due to the symmetry. 3. By combining the above examples, one can construct examples where the proposed loss does not provide informative information. 4.
However, the above counterexamples, especially for prediction loss, can never appear for Gaussian data. This makes the evaluation of Gaussian synthetic data less convincing. Technical Quality: 2 Clarity: 3 Questions for Authors: None. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have adequately acknowledged the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
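The reviewer's Gaussian-mixture example can be checked numerically: the two components' correlations cancel, so the best linear (indeed, MSE-optimal) predictor of one coordinate from the other is zero, even though the variables are clearly dependent (e.g., their absolute values are correlated). A small NumPy sketch of this check (our own illustration, not from the paper or review):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Equal-weight mixture of N(0, [[1, .5], [.5, 1]]) and N(0, [[1, -.5], [-.5, 1]])
pick = rng.integers(0, 2, size=n)
a = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
b = rng.multivariate_normal([0, 0], [[1.0, -0.5], [-0.5, 1.0]], size=n)
xy = np.where(pick[:, None] == 0, a, b)
x, y = xy[:, 0], xy[:, 1]

slope = np.polyfit(x, y, 1)[0]                 # best linear predictor: ~0
dep = np.corrcoef(np.abs(x), np.abs(y))[0, 1]  # yet |X| and |Y| are correlated
```

`slope` comes out near zero while `dep` is clearly positive, illustrating why an MSE cross-prediction loss alone cannot see this dependence.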
Rebuttal 1:
Rebuttal: Thank you for this thoughtful review. You raise one major point of concern: "***[...] the learned representations might not capture the dependence between $X$ and $Y$, making the LMI fail.***" We agree that there are situations where LMI will fail (as is broadly true for all MI estimators [2]), and aim to clearly identify them in our paper. You are correct that we had failed to consider the setting you describe, and we have now studied it more carefully. Our work has been strengthened as a result. The key new findings are:

1. As the reviewer correctly identified, certain distributions are problematic for LMI. We refer to the general class raised by the reviewer as distributions with "symmetry and exclusivity".
2. However, all other methods we evaluated also fail in practice, and thus this is a problem more general than for LMI.
3. For LMI, the failure is specific to a choice of regularization, and our paper proposed more than one regularization method (Appendix A.2.1). We now show that the alternate regularization methods recover some functionality for symmetric and exclusive variables.

In our revised paper, we will include discussion of symmetry and exclusivity in the Limitations section, and our new empirical results in the Appendix. We now address the specific points raised in more detail:

***"1. [...] A counter-example is when $X=(U, W), Y=(V, W)$ and $U, V$ have much more information than $W$;"***

The reviewer is correct that when variable-exclusive information outweighs shared information, self-reconstruction loss is insufficient — however, the regularization term can still induce $Z_x, Z_y$ that share mutual information. Thus, this scenario, representative of "exclusivity", is alone not problematic for LMI, as seen in our Figure 2 benchmark.

***"2. [...] in the worst case, the prediction loss does not reveal any information. A simple example is when $X, Y$ are symmetric. [...]"***

We deeply appreciate this key insight.
The two examples (“O” and “X”) are instances of distributions for which $\mathbb{E}[X|Y] = \mathbb{E}[X]$, meaning the MSE-minimizing predictor of $X$ is independent of $Y$. As such, the cross-prediction loss will not have a meaningful impact on $Z_y$. This “symmetry” alone is not problematic for LMI, because the self-reconstruction can still induce meaningful $Z_x, Z_y$, as seen in Rebuttal Figure (RF) 2, introduced later in the response. ***“3. By combining the above examples, one can construct examples where the proposed loss does not provide informative information.”*** We agree. While LMI can handle each above example in isolation, a combination can cause LMI to fail. Symmetry is problematic for cross-prediction, and exclusivity is problematic for self-reconstruction. Given this limitation, we now explore (1) whether other methods suffer from the same limitation, and (2) whether it can be overcome by changing the cross-prediction loss. To explore these questions, we develop a benchmark to empirically study MI estimation in cases of exclusive symmetric distributions. Our benchmark involves estimating MI from samples of symmetric distributions with increasing levels of variable-exclusive information. We choose two symmetric distributions, shown in RF1: (1) the Gaussian mixture proposed by the reviewer and (2) a noisy circle, modified from the reviewer’s suggestion to be well-behaved. To add exclusive information, we inflate each variable with independently normally distributed dimensions. The ideal estimator should not vary with the number of exclusive dimensions, as true MI is invariant. Following the reviewer’s argument, with no exclusive information, LMI should yield a good estimate (due to self-reconstruction loss), but the estimate should quickly decay to 0 as exclusive information is increased. In the benchmark, we first confirm that LMI behaves as predicted by the reviewer.
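For concreteness, the benchmark construction described above can be sketched as follows. This is an illustrative script, not our exact benchmark code: the noise level, sample count, and number of exclusive dimensions are arbitrary choices, and the script only verifies the symmetry property $\mathbb{E}[X|Y] = \mathbb{E}[X]$ on the noisy-circle pair while the variables remain strongly dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Symmetric pair (the "noisy circle"): a point on the unit circle plus noise.
theta = rng.uniform(0.0, 2.0 * np.pi, n)
x = np.cos(theta) + 0.1 * rng.normal(size=n)
y = np.sin(theta) + 0.1 * rng.normal(size=n)

# "Exclusivity": inflate each variable with independent Gaussian dimensions.
k = 8  # number of exclusive dimensions (illustrative choice)
X = np.column_stack([x, rng.normal(size=(n, k))])  # benchmark input X
Y = np.column_stack([y, rng.normal(size=(n, k))])  # benchmark input Y

# Symmetry property E[X|Y] = E[X]: conditional means of x, estimated by
# binning y, all sit near the unconditional mean of ~0 ...
edges = np.linspace(-1.0, 1.0, 11)
cond_means = [x[(y >= lo) & (y < hi)].mean()
              for lo, hi in zip(edges[:-1], edges[1:])]
max_cond_dev = max(abs(m) for m in cond_means)

# ... even though x and y are strongly dependent (x**2 is roughly 1 - y**2):
center_energy = np.mean(x[np.abs(y) < 0.2] ** 2)  # near 1
edge_energy = np.mean(x[np.abs(y) > 0.9] ** 2)    # much smaller
```

An MSE cross-prediction loss sees no usable gradient from such a pair, since the best mean predictor of `x` from `y` is a constant, even though the dependence is plainly visible in the conditional variance.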
The decay in LMI occurs quickly, already with <10 exclusive dimensions (RF2). **Result 1: other MI estimation methods suffer from the same limitation** We found that the problem is not unique to LMI – all studied estimators fail in the symmetry and exclusivity setting, with <10 exclusive dimensions (RF2). Note that in this dimensionality, MINE and InfoNCE typically perform well [1], so their failure is not due merely to the dimensionality of the problem but also due to the nature of the distributions. The estimators do not agree even in the 1D case without exclusivity, suggesting that symmetric distributions may be generally problematic for MI estimation, similar to long-tailed distributions [1]. **Result 2: handling symmetry by avoiding L2 cross-prediction loss** For LMI, failure under symmetry is an artifact of the L2 cross-prediction loss. An alternate regularization approach may be able to handle distributions with symmetry and exclusivity. We proposed two such models in Appendix A.2.1, which maximize a lower bound on $I(Z_x, Z_y)$ via MINE and InfoNCE in latent space. In our Gaussian benchmark, we found these regularizations to be less effective than cross-prediction: performance decayed more quickly with increasing intrinsic dimensions (Figure 7). Now, in the symmetry and exclusivity benchmark, we find that these alternate models, LMI-MINE and LMI-InfoNCE, perform better than LMI, but still fail with 8 exclusive dimensions. In our eyes, moderate improvement in the case of symmetry and exclusivity is outweighed by the worse scaling with intrinsic dimension, so we prefer the cross-prediction loss for general use. If a practitioner suspects that their data may be symmetric and exclusive, it may be advisable to use one of the alternate regularization methods. In our software library, this is simple to do, e.g. `lmi(X, Y, regularizer="models.AEMINE")`. ***“4. 
However, the above counterexamples [...] never appear for Gaussian data.”*** This point is now fully addressed by the additional benchmarks already discussed above. --- Rebuttal 2: Comment: Thank you for the detailed responses. I have read through the rebuttal. Despite the technical issues (failure cases), I appreciate the authors' follow-up discussions and experiments. I believe the current manuscript has value in proposing a general framework for applying non-parametric estimators to the latent representations, as the title suggested, and shows potential usages of mutual information. I have updated the score to reflect it. However, as discussed, the specific choice of the latent representations can have a big impact on the performance. I believe a good understanding of failure cases is much more useful to the community than oversold/heavily-tuned performance gains. Therefore, the manuscript could be more valuable if such failure cases were explicitly discussed, which can provide an understanding of the interaction between latent representations, data distributions, and learning algorithm designs. --- Rebuttal Comment 2.1: Comment: Thank you for your feedback. The point about “oversold performance gain” fatigue is easy to resonate with. We agree that a good understanding of the limitations of the method is important for the work to be useful. To this end, in our revision, we will explicitly discuss failure cases. We plan the following changes to the text: 1. 
Raise anticipated limitations of LMI already in the introduction of the problem 2. Add a results subsection explicitly studying failure cases of LMI (the analysis from our rebuttal) 3. Modify the abstract to emphasize dependence on learned representations 4. Update summary of failure cases in Limitations In detail: **1. Raise anticipated limitations in the introduction** We will include the following text at the end of the paragraph in L64: *The usefulness of this assumption relies on our ability to identify low-dimensional structure in data. We will propose methods for learning low-dimensional representations which are useful for MI estimation and highlight examples where the methods can still fail.* **2. Results subsection explicitly studying failure cases of LMI** We plan to include the following subsection, which reports on the analysis from RF2 and explicitly discusses failure modes. *Section 3.3: Constructing and studying problems where LMI fails* *Examining failure modes of LMI can be instructive in understanding the limitations of nonparametric MI estimation in low-dimensional representations learned by neural networks. LMI can fail when (1) its learned representations do not capture dependence structure, or (2) when KSG fails to accurately estimate MI in latent space. The limitations of KSG are quite well documented: it often fails for strongly dependent variables [3], and in high dimensions [1]. Here, we aim to identify problems where LMI learns representations which result in poor MI estimates.* *A trivial failure mode of LMI is the case where the dependence structure of input variables far exceeds the size of the LMI latent space. An example of this can be seen in Figure 3c. This limitation can be partially overcome by evaluating LMI with a latent space large enough to capture dependence structure; however, a priori knowledge of the appropriate embedding dimension is rarely possible. 
One heuristic approach (Appendix A.5.1) is to make estimates with several latent space sizes, and choose the size which maximizes the estimate.* *A more subtle failure mode occurs when learned representations do not capture mutually informative structure in the data. This can happen when certain symmetries are present in the data, such as with variables $X, Y$ for which $\mathbb{E}[X|Y] = \mathbb{E}[X]$ and $I(X;Y) > 0$, two examples of which are shown in [RF1]. For such variables, the choice of MSE-minimizing predictor of $X$ becomes independent of $Y$ and so the cross-prediction loss fails to constrain latent representations, reducing the LMI model to a pair of independent autoencoders. In these cases, LMI accuracy can degrade.* *We next construct a benchmark to illustrate this limitation, and understand if other estimators suffer from the same limitation in practice. As a benchmark, we generate samples from variables with a single pair of symmetric dimensions, and with varying numbers of independently normally distributed dimensions. An ideal estimator should not vary with the number of independent dimensions, as true MI is invariant. In the case with no independent dimensions, LMI should be accurate up to the limitations of KSG because independent autoencoders are sufficient to learn mutually informative representations. As the number of independent dimensions increases, LMI estimates should quickly degrade.* *As anticipated, LMI estimates implemented with an MSE cross-prediction loss decay quickly, approaching 0 with 8 exclusive dimensions ([RF2]). However, this behavior is not unique to LMI: similar decay was seen for all studied estimators. In this dimensionality, MINE and InfoNCE typically perform well [1], so their failure is not due merely to the dimensionality of the problem but also due to the nature of the distributions. 
The estimators do not agree even in the 1D case without independent dimensions, suggesting that symmetric distributions may be generally problematic for MI estimation, similar to long-tailed distributions [1].* [continued with paragraph about alternate regularization approaches; omitted due to space constraints] **3. Emphasize dependence on learned representations in abstract** We will adjust L12-14: *Using several benchmarks, we show that unlike existing techniques, LMI can approximate MI well for variables with $>10^3$ dimensions, **if their dependence structure is captured by the learned latent representations**.* **4. Update Limitations section** We will update the summary of failure cases in the Limitations section. We omit the text here due to space constraints.
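For concreteness, the latent-size heuristic referenced above (Appendix A.5.1) amounts to running the estimator at several latent sizes and keeping the size that yields the largest estimate. A minimal sketch follows; `estimate_mi` stands in for a call to the LMI estimator at a given latent dimension (this callable interface is illustrative, not the library's exact API):

```python
def choose_latent_dim(X, Y, estimate_mi, dims=(2, 4, 8, 16)):
    """Estimate MI at several latent sizes and keep the size that
    maximizes the estimate (the heuristic of Appendix A.5.1).

    `estimate_mi(X, Y, d)` is a stand-in for the LMI estimator run
    with latent dimension d.
    """
    estimates = {d: estimate_mi(X, Y, d) for d in dims}
    best = max(estimates, key=estimates.get)
    return best, estimates
```

The rationale is that an undersized latent space truncates dependence structure and biases the estimate downward, so among candidate sizes the largest estimate is the least-truncated one.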
Rebuttal 1: Rebuttal: We deeply appreciate the thoughtful feedback shared by the reviewers. In our responses to reviewers, we share some additional analyses, discussions, and clarifications (companion figures to responses are included in the attached .pdf). Overall, we think we have managed to address all of the concerns and questions raised. Below we summarize the most significant new content: 1. We study the failure of LMI for distributions with “symmetry and exclusivity”, show that such distributions are also troublesome for existing estimators, then propose and empirically evaluate some potential solutions. [in response to NtLr] 2. We explicitly discuss tradeoffs in the choice of latent space size, give a heuristic approach to choosing latent dimension, and discuss the consequences of choosing suboptimally. [Response to 8utg, x45D] 3. We show that LMI outperforms alternatives in an MI estimation problem when only one variable requires compression (MI of MNIST image and label). [Response to 13JG] 4. We show that the performance of LMI on multivariate Gaussian data remains qualitatively similar across a more diverse set of distributions, by constructing high dimensional analogs of the MI estimator benchmark tasks in [1]. [Response to x45D] **Below are the references used across all responses, centralized due to space constraints.** [1] Czyż, P., Grabowski, F., Vogt, J. E., Beerenwinkel, N. & Marx, A. Beyond normal: On the evaluation of mutual information estimators. *arXiv [stat.ML]* (2023). [2] McAllester, D. & Stratos, K. Formal Limitations on the Measurement of Mutual Information. in *International Conference on Artificial Intelligence and Statistics* 875–884 (PMLR, 2020). [3] Gao, S., Steeg, G. V. & Galstyan, A. Efficient estimation of mutual information for strongly dependent variables. *arXiv [cs.IT]* (2014). [4] Goldfeld, Z. & Greenewald, K. Sliced mutual information: A scalable measure of statistical dependence. 
*arXiv [cs.IT]* (2021). Pdf: /pdf/1f6295503ec84affa0f7790ad295a6f01b2d7851.pdf
NeurIPS_2024_submissions_huggingface
2024
Learning Image Priors Through Patch-Based Diffusion Models for Solving Inverse Problems
Accept (poster)
Summary: This work introduces a patch-based diffusion modeling approach to efficiently learn image priors that can be used to solve inverse problems. Particularly the model maintains memory and data efficiency due to the patch-based operating scheme. Experiments are demonstrated in both natural and medical image domains to solve various inverse problems (e.g., CT reconstruction, deblurring, etc.) based on priors learned via patches. Strengths: - Paper presents a very interesting and novel approach to learn image priors with patch-based diffusion models, and empirically demonstrates rigorous results. Weaknesses: - Some experimental clarifications are necessary, and writing & quality of figures can be improved. Technical Quality: 3 Clarity: 3 Questions for Authors: - Details regarding how the evaluation metrics PSNR and SSIM are calculated are missing (i.e., in RGB domain, or via the luminance in YCbCr domain)? - The authors' justification on using non-overlapping patches during training is somewhat not clear to me. What exactly is the negative impact of using overlapping patches during training this model? - In Figure 4, results obtained with [16] are surprisingly bad. Is the implementation modified, or did the authors implement themselves? Is it the best case scenario for this method's performance? Also on a different note, did the authors perhaps try using the patch-based diffusion approach from [23] in an unsupervised manner in this simulation? - Considering Algorithm 1's sampling loop, looks like the sampling process goes through all denoising steps without skips (sampling takes T steps)? In that case, perhaps inference time comparisons should also be demonstrated comparatively with other methods. - Visual results (e.g., in Figures 5, 6, 7) should be blended in the compiled PDF not as a PNG/JPEG, but as a high resolution image as a e.g., PDF (more professional looking figure quality needed). 
Fine differences/details that one should see when zoomed-in can disappear if figures are blended in the paper using a PNG/JPEG type of format, which loses its meaning. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Sufficiently addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. **Comment: Details regarding calculations of PSNR and SSIM are missing** The PSNR and SSIM of RGB images are calculated in the RGB domain. Data preprocessing consisting of dividing all the RGB values by 255 was done first, so all the reconstructed images have values between 0 and 1. The PSNR and SSIM values were then computed from these images. **Comment: Justification for using non-overlapping patches is unclear** Using overlapping patches during training would significantly increase the computational cost. Section A.6 details the theory by which the original score matching process (which must be done on the entire image), used to train the network, can be reduced to score matching on individual patches. In particular, in going from equation (A.6) to equations (A.7) and (A.8), the assumption that the patches do not overlap is made to bring the sum out of the norm. If overlapping patches were used, it would be necessary to backpropagate through all the terms of the sum during training. On the other hand, when non-overlapping patches are used, we can perform score matching on individual patches and only the loss of these individual patches needs to be backpropagated through the network. **Comment: Results obtained by [16] are surprisingly bad** In Figure 4, for [16], we used the code shared by the paper’s authors. However, there is a key difference between the figures we generated and the figures generated in [16]. In [16], for the best results, some portion of the training time must be spent on learning the distribution of the entire image (without patches). Then, during generation, the entire image is used as an input to the network. However, the goal of our paper was to avoid needing to input the entire image into the network at both training and generation time. 
Therefore, when running [16] in Figure 4, we trained the network only on patches of images and we generated full size images by first generating patches of the images (with positional encoding information) and then simply stitching them together. **Comment: Did the authors try using [23] in an unsupervised manner** When using [23] in an unsupervised manner, it is necessary to first train an unsupervised diffusion model (the patch-based networks from our work suffice for this) and then apply the unsupervised network to solve the inverse problem. The original paper is able to use the conditional network to enforce data consistency with the measurement, but with an unsupervised network, it is necessary to add an additional step in the inference loop that enforces data consistency. We ran experiments using DPS as the data consistency strategy, consistent with PaDIS. Further note that [23] has an additional tunable parameter, which is the amount of overlap between patches. Table 6 shows the results of this method when this parameter has been tuned, under the name Patch Averaging. The table shows the method can obtain reasonable results but is outperformed by our proposed method. Additionally, the optimal overlap parameter value requires a significant amount of overlap between patches (approximately 1/4 of the patch dimension in both x and y), which increases the number of patches per iteration whose score function must be evaluated by the network. Finally, while the empirical results are reasonable, [23] lacks mathematical justification for the procedure of averaging the predicted noises of overlapping patches, and future work is required to theoretically justify this method from a probability-distribution perspective. In the revision, we will include more visual examples of this method compared with the others. 
**Comment: Inference time comparisons should be demonstrated with other methods** Algorithm 1 indeed indicates that none of the sampling steps are skipped. For a fair comparison with other diffusion-model-based methods in Table 5, we used the same number of steps (1000) for all the methods; an increase in image quality was present with an increased number of steps for all the methods. DPS requires more time per image due to the computation of the gradient of the norm term, which involves backpropagating through the network, and predictor-corrector sampling involves two network evaluations per iteration. The average reconstruction time per image in seconds for the methods in Tables 1 and 5, for 20 view CT, is as follows:

- Baseline: 0.1
- ADMM-TV: 0.7
- Whole image diffusion: 172
- PaDIS (VE-DPS): 195
- Langevin dynamics: 98
- Predictor-corrector: 189
- VE-DDNM: 105

**Comment: Visual results should be presented as a high resolution image** The global rebuttal PDF page has some examples of higher resolution images for different CT reconstruction experiments, including 60 view reconstruction and fan beam CT. These images are displayed with higher contrast and should be more helpful in performing clinical diagnosis. --- Rebuttal Comment 1.1: Title: response to rebuttal Comment: Thanks to the authors for their rebuttal and detailed comments. The majority of my concerns are answered, and I believe this submission went through a successful rebuttal period. Thus, I also increased my score. Please include the discussions presented here in the revised PDF as well, particularly the inference time comparisons provided here, as they present a fairer comparison of the proposed algorithm. --- Reply to Comment 1.1.1: Comment: Thank you for the review and reading our rebuttal. We will make sure to include these discussions in the revised paper. Feel free to let us know if there are any remaining questions about the manuscript and we will try our best to answer.
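The non-overlapping-patch argument in the rebuttal above (bringing the sum out of the norm when going from (A.6) to (A.7)/(A.8)) rests on the squared-error loss decomposing exactly over a partition of the image, which is why per-patch backpropagation suffices. A minimal numerical check of that identity, using random stand-ins for the image and the network output (the sizes are arbitrary, and this is not the actual score-matching code):

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 8
P = 4  # patch size; P divides H and W, so patches tile the image without overlap

x = rng.normal(size=(H, W))      # stand-in for the target image
x_hat = rng.normal(size=(H, W))  # stand-in for the network output

# Full-image squared-error loss.
full_loss = np.sum((x - x_hat) ** 2)

# Sum of per-patch losses over the non-overlapping partition. Because every
# pixel belongs to exactly one patch, this equals the full-image loss.
patch_loss = sum(
    np.sum((x[i:i + P, j:j + P] - x_hat[i:i + P, j:j + P]) ** 2)
    for i in range(0, H, P)
    for j in range(0, W, P)
)
```

With overlapping patches, pixels in overlap regions would be counted more than once, the equality would fail, and the gradient of each patch term would no longer be independent of the others, which is the computational cost the rebuttal describes.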
Summary: In this work, the authors propose a novel method for learning efficient data priors for entire images by training diffusion models only on image patches. During inference the authors introduce a patch-based position-aware diffusion inverse solver, which obtains the score function for the whole image through scores of individual patches and their positional encoding, using this as the prior for solving inverse problems. Multiple experiments on CT data as well as on the CELEBA dataset are conducted. Strengths: The idea of including positional information of patches into the diffusive reconstruction process seems novel and promising. Indeed, the authors demonstrate that their proposed approach can compute the score function for entire images without needing to feed the whole image through the network. Weaknesses: While the patch-based reconstruction for diffusion models appears promising, the evaluation and experiments presented in the paper require further extension to justify publication. Additional evaluations should be conducted against other models, reconstruction methods, inverse problems, and datasets. Furthermore, a sensitivity analysis should be included to examine how the proposed approach performs with different forward operators. Additionally, many of the presented CT reconstructions exhibit significant hallucinations, which is particularly concerning in the context of medical imaging. Overall, my decision is influenced by the weak evaluation of the method. Technical Quality: 3 Clarity: 2 Questions for Authors: I would be interested to know at what number of data samples the patch-based approach outperforms the vanilla one. How would the proposed approach perform with different forward operators, i.e. different inverse problems on the same datasets? Why do the presented CT reconstructions exhibit such significant hallucinations, and how can this issue be addressed? 
Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations are only sparsely addressed in the conclusions; otherwise, the authors do not discuss limitations of the proposed approach at all. In particular, the proposed approach seems to have problems with hallucinations in the reconstructed CT images. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. **Comment: Additional evaluations should be conducted against other inverse problems, sensitivity analysis should be included with different forward operators** We conducted more experiments with different forward models: namely 60 view parallel beam CT, 180 view fan beam CT, and deblurring with a larger kernel of size 19x19. The results are shown in Table 7 and further demonstrate that our proposed method outperforms various SOTA methods for a large variety of forward models. Additionally, Table 6 contains a comparison with a wider variety of inverse problem solving methods. These comparisons with several other SOTA methods strengthen the evaluation of our method. **Comment: CT reconstructions exhibit significant hallucinations** The authors acknowledge that the images obtained by the generative models investigated, including the proposed method, for 20 view CT reconstruction show some hallucinations and artifacts. This is a natural consequence of using extreme compressed sensing with ultra-sparse views: normally, reconstructing a 256x256 image requires about π/2 × 256 ≈ 402 views, so for the 20 view experiments the measurements have been compressed by a factor of about 20. Due to this lack of information, it is very hard for any model to perform a diagnostic-quality reconstruction, though our proposed method (and the other diffusion model methods, to a lesser extent) is able to partially fill in this information through learning a strong image prior. The alternative methods that do not learn a prior perform significantly worse in terms of the shown metrics and exhibit severe blurring and artifacts. In clinical settings, it is much more common to perform patient diagnosis with CT scans consisting of hundreds of views. 
To illustrate this point, we perform experiments with 60 view CT, where our proposed method is able to obtain excellent quality images as shown in Figure B.1: essentially no artifacts are visible. (We show that our proposed method can reconstruct images from ultra-sparse views with decent image quality, which could potentially be used for other clinical applications such as patient positioning.) **Comment: What is the number of samples for which the proposed method is better** When looking at PSNR, the number of samples out of the 25 test samples in which the patch-based approach outperformed the vanilla one is as follows: 23 for 20 view CT, 25 for 8 view CT, 20 for deblurring, 16 for superresolution. We will add this information to the supplement. **Comment: Authors do not give limitations of the proposed method** One limitation of the proposed approach (and all diffusion approaches) is that they tend to be slower than optimization based approaches, plug and play methods, and model-based learning methods. Another limitation of generative modeling approaches is the potential to hallucinate, particularly when the measurements are very compressed and contain little information. This is a limitation of most generative models, as illustrated by the visual examples from the whole image diffusion model for sparse view CT. This problem can be resolved by obtaining more projection views in a CT scan, depending on the application needs. --- Rebuttal Comment 1.1: Title: Response Comment: After considering the authors' response, some of my concerns have been alleviated, prompting me to adjust my score to a borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you for the review and reading our rebuttal. Your feedback is crucial for us to improve our manuscript. Feel free to let us know if there are any remaining questions about the manuscript and we will try our best to answer.
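The view-count rule of thumb quoted in the rebuttal above (an n×n image needs roughly π/2 · n projection views for full-quality reconstruction) and the resulting compression factors are easy to compute; a small sketch:

```python
import math

def required_views(n: int) -> int:
    # Rule of thumb from the rebuttal: full-quality reconstruction of an
    # n x n image needs roughly (pi / 2) * n projection views.
    return round(math.pi / 2 * n)

full_views = required_views(256)   # about 402 views for a 256x256 image
compression_20 = full_views / 20   # ~20x compression for the 20-view setting
compression_60 = full_views / 60   # milder compression for the 60-view setting
```

This makes the rebuttal's point concrete: the 20-view experiments discard roughly 95% of the nominally required measurements, while 60 views is a far less aggressive regime, consistent with the artifact-free reconstructions reported there.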
Summary: This paper proposes a patch-based diffusion model for inverse problems, such as CT reconstruction and natural image deblurring. This method divides images into patches, reducing the size of the input data fed into the network, thereby decreasing memory consumption. Strengths: This method provides a feasible approach for dividing and merging patches to reduce boundary artifacts for diffusion models. Weaknesses: 1. The innovation is limited. The main contribution of this paper is the application of patch diffusion [1] to inverse problems. However, the experimental results are neither promising nor convincing. [1] Wang, Zhendong, et al. "Patch diffusion: Faster and more data-efficient training of diffusion models." Advances in neural information processing systems 36 (2024). 2. The CT images presented by the author in the paper and appendix are very blurry and not displayed with the correct window level and width, making it impossible to discern imaging details and lacking clinical significance. 3. Even though the CT images are very blurry, it is still evident that all the reconstructed CT images exhibit significant numbers of image artifacts (incorrect organ structures) compared to the ground truth. This is completely unacceptable for medical images. 4. The comparison methods are very limited, with only ADMM-TV, which is a very old method. There are numerous methods for CT reconstruction [2-4], natural image deblurring [5] and super-resolution [6] that the author did not compare with at all. [2] Shen, Liyue, John Pauly, and Lei Xing. "NeRP: implicit neural representation learning with prior embedding for sparsely sampled image reconstruction." IEEE Transactions on Neural Networks and Learning Systems 35.1 (2022): 770-782. [3] Wu, Qing, et al. "Self-supervised coordinate projection network for sparse-view computed tomography." IEEE Transactions on Computational Imaging 9 (2023): 517-529. [4] Chung, Hyungjin, et al. 
"Solving 3d inverse problems using pre-trained 2d diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [5] Tang, Xiaole, et al. "Uncertainty-aware unsupervised image deblurring with deep residual prior." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. [6] Wang, Longguang, et al. "Unsupervised degradation representation learning for blind super-resolution." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. I suggest the authors make more improvements for image restoration tasks (inverse problems) to control the reliability of the generated images and reduce artifacts. 2. it is necessary to compare with more SOTA methods to obtain a more comprehensive evaluation. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: 1. Due to the severe artifacts observed in the visual results (CT reconstruction) and the limited number of comparison methods, the experimental results are hardly convincing. 2. The innovation is very limited. It is based on existing diffusion models and merely applied to the domain of inverse problems (image restoration tasks). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. **Comment: Innovation is limited** The main innovation of our method is a method to formulate a diffusion based image prior from solely the patches of the image. Diffusion models are known for requiring a large amount of memory for training and inference and extending them for large scale images is a challenging problem. This work illustrates how image priors for very large images can be learned by learning priors of patches, a much less memory intensive task. This is unlike the work of [1], which ultimately still requires training on whole images as well as inputting the whole image into the network to generate images. We also demonstrate in Table 3 that our proposed method can learn a reasonable prior for dataset sizes much smaller than is normally used for training diffusion models and the advantage over whole image diffusion models becomes more pronounced for small datasets. Finally, unlike previous patch-based diffusion model papers such as [16] and [17] that can only be used for generating images, we show how our method learns a full image prior from only image patch training that can be coupled with most diffusion inverse problem solving algorithms, which is not a trivial extension based on [16] and [17]. **Comment: CT images are blurry and displayed with the wrong window level** We displayed CT images corresponding to our new CT experiments in Figure B.1 using a narrower window size of 800 to 1200 HU and with higher resolution. These images show better contrast between different organs and are more useful for obtaining a clinical diagnosis. In the revision, we will redisplay all the CT reconstruction images with this higher level of contrast. 
**Comment: CT images contain significant number of image artifacts** The authors acknowledge that the images obtained by the generative models investigated, including the proposed method, for 20 view CT reconstruction show some hallucinations and artifacts. This is a natural consequence of using extreme compressed sensing with ultra-sparse views: normally, reconstructing a 256x256 image requires about π/2 × 256 ≈ 402 views, so for the 20 view experiments the measurements have been compressed by a factor of about 20. Due to this lack of information, it is very hard for any model to perform a diagnostic-quality reconstruction, though our proposed method (and the other diffusion model methods, to a lesser extent) is able to partially fill in this information through learning a strong image prior. The alternative methods that do not learn a prior perform significantly worse in terms of the shown metrics and exhibit severe blurring and artifacts. In clinical settings, it is much more common to perform patient diagnosis with CT scans consisting of hundreds of views. To illustrate this point, we perform experiments with 60 view CT, where our proposed method is able to obtain excellent quality images as shown in Figure B.1: essentially no artifacts are visible. (We show that our proposed method can reconstruct images from ultra-sparse views with decent image quality, which could potentially be used for other clinical applications such as patient positioning.) **Comment: Comparison methods are very limited** [4] is a method that applies 2D diffusion models to solve 3D inverse problems including CT reconstruction, whereas our proposed method applies to 2D inverse problems, so it cannot directly be applied. However, the sampling algorithm it uses is the predictor-corrector sampling algorithm, which we initially compared to in Table 5 and for which we have now added a more comprehensive comparison in Table 6. 
This sampler did not perform as well as the one chosen for PaDIS (DPS) by the quantitative metrics. [2] and [3] are self-supervised methods for CT reconstruction, which means the network is trained at reconstruction time, substantially slowing down the algorithm. [5] is a deep image prior method that also requires network training at inference time. Furthermore, [2], [3], [5], and [6] are all problem-specific methods for which a generalization to other types of inverse problems would be nontrivial. Our proposed method, along with most of the methods we compared to in Table 6, is easily generalizable and can solve a wide variety of inverse problems. Furthermore, the training process for our algorithm need only happen once per dataset, and no network training is required at reconstruction time. Due to these fundamental differences between our method and [2], [3], [5], [6], we believe that comparisons with those methods would not be fair. Nevertheless, to provide a more complete evaluation of our method, we included additional comparisons with plug and play (PnP) methods and other diffusion inverse solvers in Table 6. These methods are similar to ours in that network training only needs to be done once per dataset, and the same trained network can be used for different types of inverse problems, allowing for greater flexibility. Table 6 consists of an expanded comparison between various methods. We implemented various diffusion inverse solving methods [1], [7], [19] in conjunction with our patch-based prior. We included two additional patch-based methods from [23] and [69], where we applied [23] in an unsupervised way by using the same unsupervised network trained in our proposed method and adding a DPS step during reconstruction.
We also implemented two plug and play (PnP) methods by first training denoisers on CT images and the CelebA dataset and then applying these denoisers in an unsupervised way to solve the inverse problems. Optimal hyperparameters for all these methods were found through a hyperparameter search. In all cases, our proposed method outperformed the comparison methods. In the revision, we will add visual examples of these methods. These comparisons with several other SOTA methods strengthen the evaluation of our method. --- Rebuttal Comment 1.1: Title: The rebuttal response Comment: The authors' rebuttal addressed some of my concerns, so I have revised my score to borderline accept. However, there are still some unresolved issues, as outlined below. 1. The authors directly addressed the issue of CT artifacts, and I acknowledge their explanation that diffusion-based extremely-limited-angle CT reconstruction inherently results in such artifacts. However, from a medical application perspective, these artifacts are quite concerning. In the rebuttal, the authors add reconstruction results from 60 angles. However, since the selected images primarily show the lungs (which appear as zero in the specified window width), they do not contain enough tissue, unlike abdominal images, to adequately assess the reconstruction quality. 2. In both the paper and the rebuttal, the authors point out that performing diffusion on patches can reduce computational costs, enabling the processing of large-scale images, such as higher-resolution and 3D images. In the experiments, both the CT dataset and the CelebA-HQ dataset are 256x256. CelebA-HQ is a standard dataset, but CT data often comes in much higher resolutions, such as 512x512. I suggest the authors conduct experiments on such CT datasets to better support their claims. 3. In summary, while the authors' method performs well in terms of quantitative metrics, the inevitable artifacts in the CT data raise some concerns from a medical perspective.
--- Reply to Comment 1.1.1: Comment: **Comment: Selected images show the lungs and do not contain enough tissue unlike abdominal images** The requirements of the rebuttals state that we cannot use links in any part of the response except for code (and we can no longer modify our one page PDF), but we have sent a message to the AC asking if it would be permissible to share an anonymized link to images of our new results. We have run experiments on CT images containing more tissue and contrast demonstrating that for 60 view CT, our method is able to obtain high quality reconstructions which do not exhibit artifacts. **Comment: CT data comes in 512x512 resolution, I suggest authors conduct experiments on such CT datasets** The original AAPM dataset cited in the paper consists of 512x512 images. We used this original data scaled between 0 and 1 in the same way as the 256x256 CT images in previous experiments to train a patch based network. The largest patch size was chosen to be 64x64, while patches of size 32x32 and 16x16 were also used for training. The zero padding was set to 64 pixels on all four sides of the image. Due to time constraints, we were only able to train the network for roughly 20 hours. For reconstruction, only the largest patch size was used: a total of 81 patches of size 64x64 were needed to fully cover the 512x512 image while allowing for shifts. Similarly we cannot show visual results of the reconstruction, so we report the quantitative results of the 60 view parallel beam CT reconstruction problem: over the test dataset, the average PSNR was 36.92 and the average SSIM was 0.899. This shows that the patch prior was learned well and leads to a high quality reconstruction free of artifacts. In the revision, we will provide a more complete comparison of applying various methods on these higher resolution CT images as well as more visual results. 
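The patch count quoted above (81 patches of size 64x64 for a 512x512 image with shifts) follows from needing one extra tile per dimension beyond the unshifted tiling, so that the non-overlapping grid covers the image under any shift in [0, patch_size). A small sketch under the assumption that the patch size divides the image size, with a brute-force coverage check:

```python
def patches_per_dim(image_size, patch_size):
    # one extra tile beyond the unshifted tiling so that ANY shift in
    # [0, patch_size) still covers the image (assumes patch_size
    # divides image_size)
    return image_size // patch_size + 1

def is_covered(image_size, patch_size, shift):
    """Brute-force check: do the shifted tiles cover every pixel?"""
    covered = [False] * image_size
    for k in range(patches_per_dim(image_size, patch_size)):
        start = shift + (k - 1) * patch_size
        for p in range(max(start, 0), min(start + patch_size, image_size)):
            covered[p] = True
    return all(covered)

per_dim = patches_per_dim(512, 64)   # 9 tiles per dimension
total = per_dim ** 2                 # 81 patches in 2D
assert all(is_covered(512, 64, s) for s in range(64))
```

This matches the reported 9x9 = 81 patches for the 512x512 experiments.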
**Comment: Inevitable artifacts in CT data raise some concerns from medical perspective** We acknowledge that artifacts may arise for very sparse view CT reconstructions. In clinical settings, hundreds of views are typically used to perform patient diagnoses. Our experiments on 60 view CT show a lack of artifacts, so in the future, our proposed method could be used to reduce the number of views needed to obtain an accurate reconstruction for medical settings.
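The (pi/2)*N view-count rule of thumb quoted in the rebuttal, and the resulting undersampling factors for the 20- and 60-view experiments, can be worked out directly (the helper name is ours, for illustration):

```python
import math

def required_views(n):
    # rule of thumb: about (pi/2) * n angular views are needed to
    # reconstruct an n x n image without undersampling
    return round(math.pi / 2 * n)

full = required_views(256)            # 402 views for a 256x256 image
compression_20 = full / 20            # ~20x undersampling with 20 views
compression_60 = full / 60            # ~6.7x undersampling with 60 views
```

The 60-view setting is therefore a far milder compression, consistent with the artifact-free reconstructions reported in Figure B.1.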
Summary: This manuscript discusses diffusion models for inverse problems. The authors discuss using image patches of the image to improve computational bottlenecks and to overcome the lack of sufficient data when training appropriate surrogate neural network priors for the inversion task. The authors discuss details of their proposed method and illustrate the advantages of their methods on various tasks including CT, deblurring, and superresolution. Strengths: This manuscript is well-written and structured, making it easy for readers to follow the presented ideas. The authors ground their work in existing literature and reference relevant papers in this field. Data-driven approaches for inverse problems have shown significant advances and this manuscript contributes to this field. Weaknesses: This work is partly incremental and heavily relies on various previous and cited publications, e.g., [12,18,19]. Furthermore, I assume computational costs for the solution of the inverse problem are extremely large, since a stochastic gradient approach must be utilized in the inversion process (see Algorithm 1). These are typically prohibitive for large-scale inverse problems, removing the advantages of the learned data-driven prior. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. **Comment: Work is partly incremental, heavily relies on cited publications** The papers [12] and [18] apply diffusion models to solve 3D reconstruction problems, whereas the proposed method performs experiments on 2D reconstruction problems using a patch-based prior. [19] uses the predictor-corrector method for solving medical imaging problems, which we compared to in Table 5. We revised the labels in Table 5 to make this more clear. The method of diffusion inverse problem solving most closely related to our method is DPS [5]. However, the most significant contribution of the proposed method is a patch-based image prior that requires only patch inputs to a neural network and can be paired with any diffusion inverse solving algorithm, as illustrated in Table 5. **Comment: Computational costs are large and require a stochastic gradient approach** We acknowledge that the computational costs of the proposed method exceed those of more traditional reconstruction methods, as is true for almost all diffusion-based methods, as a tradeoff for achieving better image quality. However, the computational cost of the proposed method is similar to that of other diffusion-based methods. The average reconstruction time per image (in seconds) for the methods in Tables 1 and 5 is shown below for 20-view CT. Notably, our proposed method takes only slightly more time than the approach using diffusion models trained on entire images while greatly reducing the memory needed and improving the result. We will add this table to the supplement.

- Baseline: 0.1
- ADMM-TV: 0.7
- Whole image diffusion: 172
- PaDIS (VE-DPS): 195
- Langevin dynamics: 98
- Predictor-corrector: 189
- VE-DDNM: 105

The stochastic approach taken in Algorithm 1 allows the runtime of the algorithm to be similar to other diffusion methods while eliminating artifacts that would otherwise persist between boundaries of patches.
The approach is similar to the one taken in the paper below: instead of computing the score function multiple times each iteration, stochastically choose one of them to compute each iteration. Over the course of hundreds of iterations throughout the reconstruction process, the stochastic approximation of the score function becomes more accurate. Furthermore, this approach does not sacrifice the advantages of using this data-driven prior, as Figure 4 shows that our proposed method can still be used to unconditionally generate fairly realistic images. S. Lee, H. Chung, M. Park, J. Park, W.-S. Ryu, and J. C. Ye. “Improving 3D imaging with pre-trained perpendicular 2D diffusion models”. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023, pp. 10710–10720. --- Rebuttal Comment 1.1: Comment: Thanks a lot for your response. While some of the author's comments address my concerns, I will maintain my initial ratings. --- Reply to Comment 1.1.1: Comment: Thank you for the review and reading our rebuttal. Your feedback is crucial for us to improve our manuscript. Feel free to let us know if there are any remaining questions about the manuscript and we will try our best to answer.
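The stochastic score estimation described above can be sketched as follows. This is a simplified, conceptual version of the idea (single patch scale, a placeholder analytic patch score), not the authors' exact Algorithm 1; for a standard normal prior the score is -x, which lets us sanity-check the assembly:

```python
import numpy as np

def full_image_score(image, patch_score, patch_size, rng):
    """One stochastic estimate of the whole-image score: zero-pad the
    image, tile it with non-overlapping patches at a random shift, and
    fill each tile with the patch-level score."""
    H, W = image.shape
    pad = patch_size
    padded = np.pad(image, pad)
    score = np.zeros_like(padded)
    sy, sx = rng.integers(0, patch_size, size=2)   # random shift per iteration
    for top in range(sy, padded.shape[0] - patch_size + 1, patch_size):
        for left in range(sx, padded.shape[1] - patch_size + 1, patch_size):
            tile = padded[top:top + patch_size, left:left + patch_size]
            score[top:top + patch_size, left:left + patch_size] = patch_score(tile)
    return score[pad:pad + H, pad:pad + W]   # crop the padding back off

# sanity check with the analytic score of a standard normal prior (-x)
rng = np.random.default_rng(1)
img = rng.standard_normal((128, 128))
assert np.allclose(full_image_score(img, lambda t: -t, 32, rng), -img)
```

Averaged over the hundreds of iterations of the reverse process, the randomly shifted tilings visit all patch boundaries, which is the intuition behind the boundary-artifact elimination claimed in the rebuttal.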
Rebuttal 1: Rebuttal: We would like to sincerely thank all the reviewers for the valuable comments and constructive feedback on our paper. We provide point-by-point responses to address each reviewer’s comments and highlight our response to some key questions and additional experiments and results as below: **More baselines for comparison**: We provided new results to compare with more baseline methods in Table 6 as shown in the attached pdf, including three diffusion-based methods [1,7,19], two plug and play (PnP) methods [42, 46], and two patch-based methods [23, 69] as suggested by Reviewers 2 and 3. This comprehensive comparison shows that our proposed method can outperform these relevant methods to a large extent by learning a better image prior and applying the optimal inverse solving algorithm. **Different forward operators**: We conduct more experiments with different forward models: namely 60 view parallel beam CT, 180 view fan beam CT, and deblurring with a larger kernel of size 19x19. The results are shown in Table 7 and further demonstrate that our proposed method outperforms various SOTA methods for a large variety of forward models. **Window size**: We displayed CT images corresponding to our new CT experiments in Figure B.1 using a narrower window size of 800 to 1200 HU. These images show better contrast between different organs and are more useful for obtaining a clinical diagnosis. **Hallucinations and artifacts in the reconstructed images**: The presence of artifacts in some of the reconstructed CT images using generative methods is a natural consequence of using extreme compressed sensing with ultra-sparse views. Due to this lack of information, it is very hard for any model to perform a diagnostic-quality reconstruction, though our proposed method performs best in terms of the quantitative metrics. 
We added experiments with 60 view CT, where our proposed method is able to obtain excellent quality images as shown in Figure B.1: essentially no artifacts are visible. **Innovation**: The main innovation of our work is a method for formulating a diffusion-based image prior solely from the patches of the image. This work illustrates how image priors for very large images can be learned by learning priors of patches, a much less memory-intensive and data-hungry task. Unlike previous patch-based diffusion model papers such as [16] and [17] that can only be used for generating images, we show how our method learns a full image prior from only image-patch training that can be coupled with most diffusion inverse problem solving algorithms, which is not a trivial extension of [16] and [17]. [69]: Rumberger, Josef Lorenz, et al. "How shift equivariance impacts metric learning for instance segmentation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors present an approach for tile-based training and prediction of diffusion models applied for inverse problem posterior sampling. The core idea is that training is done with random patches, and during generation the authors use a differently shifted non-overlapping tiling grid for each iteration of the process. The authors additionally provide the x- and y-pixel coordinates to the network by encoding them in extra channels and concatenating them. Their approach allows them to generate images without stitching artefacts. The random tiling during training can be seen as data augmentation, and the authors show that this enables their method to be trained on smaller datasets. Strengths: * Improving the memory requirement for diffusion models is an important problem. Many applications need to process large images in a coherent way, while avoiding stitching artefacts. * I appreciate the fact that the proposed training scheme enables training with less data. And that the authors validate this in an experiment. * The authors show that their method does not depend on the particular sampling scheme or network architecture. I appreciate the generality of the approach. Weaknesses: * My main criticism is regarding the motivation of the problem. The authors write: "*Directly using overlapping patches would result in sections of the image covered by multiple patches to be updated multiple times, which is inconsistent with the theory of diffusion models.*" The question of how tiling and stitching can be applied for unets, such that the result is equivalent to processing the image as a whole, has been explored before. See questions section for details. * The proposed tiling comes at a cost: It limits the range of correlations that can be captured by the diffusion model. This is visible in Figure 4, where the generated images show no stitching artefacts, but also produce nonsensical large scale anatomy. I am missing a discussion of this aspect.
In the posterior samples, the effect is not visible, since the input image contains enough information such that long range correlations are not relevant. I believe this would be a problem for inverse problems where the input image contains less information, i.e., when the noise is very severe, or for super resolution with a more extreme resolution factor. Technical Quality: 3 Clarity: 3 Questions for Authors: It is not correct that using overlapping tiles would be "*inconsistent with the theory of diffusion models*". In general, a tiling scheme for unets can be implemented, such that the stitched tiles are identical to the result of processing the image as a whole. This can be achieved by using overlapping tiles (with the correct shift) and disregarding the areas close to the border in the outputs that are influenced by padding. I believe a discussion of this can be found in [1]. What would prevent us from applying this approach with a diffusion model in each step? The network could still be trained using patches. While such a tiling approach would cost additional computation time, since it requires overlapping patches, it should theoretically produce guaranteed stitch artefact-free outputs. Do the authors agree, that this well established approach should ideally be a baseline or at least be discussed? The authors could show that their method produces comparable results at reduced computational cost. [1]: Rumberger, Josef Lorenz, et al. "How shift equivariance impacts metric learning for instance segmentation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I am missing a discussion on the cost of the proposed patch scheme regarding the ability of the network to model long range correlations. See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
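The overlap-and-crop tiling the reviewer describes (process overlapping tiles, discard the padding-affected borders, stitch the interiors) can be sketched as follows. This is a generic illustration under our own simplifying assumptions (the valid region size divides the image size; a placeholder `net`), not the exact scheme of [1]:

```python
import numpy as np

def stitch_with_margin(image, net, tile, margin):
    """Run `net` on overlapping tiles and keep only each output's
    interior, discarding the border region affected by padding."""
    H, W = image.shape
    step = tile - 2 * margin            # valid (kept) region per tile
    assert H % step == 0 and W % step == 0
    padded = np.pad(image, margin, mode="reflect")
    out = np.zeros_like(image)
    for top in range(0, H, step):
        for left in range(0, W, step):
            t = padded[top:top + tile, left:left + tile]
            out[top:top + step, left:left + step] = net(t)[margin:-margin, margin:-margin]
    return out

# with an identity "network", stitching reproduces the image exactly
rng = np.random.default_rng(2)
img = rng.standard_normal((256, 256))
assert np.allclose(stitch_with_margin(img, lambda t: t, 48, 8), img)
```

The extra cost the reviewer mentions is visible here: overlap by `2 * margin` means more tiles per image than a non-overlapping tiling with the same tile size.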
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. **Comment: Tiling scheme for unets can be implemented with overlapping patches** We implemented the method provided in the above reference [1] while using the same trained network, hyperparameters, and DPS for inverse problem solving. The network consisted of 3 layers of upsampling/downsampling and each layer involved pooling with a factor of 2. The largest patches that the network was trained on had size 56. To satisfy the hypotheses of the paper, the overlap between patches had to be a multiple of 8; we set it equal to 8 to minimize the number of patches needed to partition the image. Hence each 256x256 image was divided into 36 overlapping patches, compared to 25 for PaDIS. Table 6 shows the results of using this method under the name Patch Stitching. For the shown inverse problems, the method obtains reasonable results but performs worse than our proposed method. Despite this, the reconstructed images did not appear to exhibit any artifacts along patch boundaries, showing that the method of [1] worked in that aspect. The increased number of patches necessary for that approach increased the runtime of the reconstruction algorithm by approximately 30%. We also examined the unconditionally generated images using the method of [1], which are shown in Figure B.2. Although these images exhibited relatively smooth features without any clear boundary artifacts between patches (unlike the clear artifacts visible in the middle row of Figure 4 generated by naive patch stitching), the overall structure was highly inconsistent with the CT images in the dataset. This is in contrast with the images generated by PaDIS as shown in the bottom row of Figure 4. Therefore, although [1] can result in smooth images, the lack of patch shifting means that the learned image prior differs from the one in Eq.
(3), resulting in unrealistic-looking generated images, which indicates that the underlying data distribution cannot be well captured. The significantly worse prior learned by this method is reflected in the worse results for inverse problem solving, especially when the measurements are highly compressed. There are two aspects of the UNet used in our application that likely caused that patch stitching method to fail to learn the prior well. Firstly, our diffusion UNet takes in an additional scalar input indicating the noise level of the noisy image being input into the network. This scalar input is processed through a sinusoidal positional encoding before being embedded into the layers of the UNet through an attention mechanism. Secondly, our network takes in the positional embedding of the location of the patch via concatenation along the channel dimension. Thus, our network learns the score function of patches while also incorporating the location of the patch, making it different from traditional UNets used for segmentation [1]. **Comment: Patches limit the ability of the network to learn long range correlations** Learning longer range correlations within an image is assisted by the method of using positional encoding of patches as an input to the network: it allows our network to learn a different distribution of patches at different locations in the image. Thus, provided that the location of the central object to be imaged is in a relatively consistent location (as is the case for CT scans or human faces), the network can learn that, for instance, the spine of the CT scan is typically around the middle bottom section of the image. Such learning is consistent with the generated results in Figure 4. Nevertheless, generating whole images with realistic large scale anatomy is challenging, and we show the generation results to demonstrate that they appear somewhat reasonable while emphasizing the focus on solving inverse problems.
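The sinusoidal encoding of the scalar noise level mentioned above is the standard transformer-style construction; a minimal sketch (the paper's exact dimensions and base period are assumptions here):

```python
import numpy as np

def sinusoidal_embedding(t, dim=128, max_period=10000.0):
    """Transformer-style sinusoidal embedding of a scalar, commonly
    used to condition diffusion UNets on the noise level / timestep."""
    half = dim // 2
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

emb = sinusoidal_embedding(0.5)
assert emb.shape == (128,)
```

Because each frequency varies smoothly with `t`, nearby noise levels map to nearby embeddings, which is what lets a single network handle the whole noise schedule.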
Figure 5 contains some examples of reconstructed images from 8 view CT, a very compressed sensing problem. Normally, to reconstruct a 256x256 image would require (pi/2*256)=402 views, so the compression is a factor of 50. In this case, the images show reasonable large scale anatomy. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal - additional questions. Comment: Thank you for the rebuttal. **Regarding the tiling and stitching method:** I appreciate that you try the suggested stitching mechanism and put results into the pdf. The sampled results look indeed inferior. You write: *"There are two aspects of the UNet used in our application that likely caused that patch stitching method to fail to learn the prior well. Firstly, our diffusion UNet takes in an additional scalar input indicating the noise level of the noisy image being input into the network. This scalar input is processed through a sinusoidal positional encoding before being embedded into the layers of the UNet through an attention mechanism. Secondly, our network takes in the positional embedding of the location of the patch via concatenation along the channel dimension. Thus, our network learns the score function of patches while also incorporating the location of the patch, making it different from traditional UNets used for segmentation [1]. "* I agree that these are the likely reasons for the performance difference. I did not suggest the tiling strategy as an alternative to providing noise level and patch positions as additional inputs (I think these make a lot of sense), but rather as the established method to avoid stitching artefacts, as opposed to the proposed method of using different shifts in each iteration of the generation process.
I don't see a reason not to combine the established way of avoiding stitching artefacts in UNets with the additional input information (position and noise level) **Regarding long range correlations:** I believe learning *"different distribution of patches at different locations in the image"* is not the same as learning long range correlations, which means learning which structures on the one side of the image are likely to appear together with structures on the other side of the image. I still don't think it is possible to learn such correlations without looking at the image as a whole. --- Reply to Comment 1.1.1: Comment: Thanks for the review and reading our rebuttal. Your feedback is crucial for us to improve our manuscript. **Comment: I did not suggest the tiling strategy as an alternative to providing noise level and patch positions as additional inputs** In our experiments of using tiling UNets in Table 6 and Figure B.2, we still included the noise level and patch positions as additional inputs: for a fair comparison, we used the **same trained network** that was used to reconstruct the CT images of Figure 5. The only difference was that at reconstruction time, instead of using shifting non-overlapping patches, we used fixed location overlapping patches with the tiling UNet strategy. The patch size at reconstruction time (56x56) was kept the same for the old experiments (Figure 5) and new experiments (Patch Stitching in Table 6 and Figure B.2). Note that in Figure B.2, although the generated images are of significantly worse quality than the bottom row of Figure 4 (the proposed method), the overall shape of the CT images are still preserved and the spine is roughly located in the correct position for all the images. 
This is due to the position encoding inputs to the network: if these inputs were not included, the network would learn a **mixture** of distributions consisting of patches of all different locations, and it would be impossible for the network to “know” that the spine should be at the central bottom area of the image. Hence, the positional input is crucial for obtaining even somewhat reasonable looking generated images. In the rebuttal, we highlight the difference between the network used in the generation of Figure B.2 and the types of networks studied in [1]. Although the network used for Figure B.2 utilizes the same ideas as in [1], the two additional inputs (namely, the noise level input and positional encoding) may create additional complications especially in terms of the formulation of the prior as a product of patch priors. That the generated images of Figure B.2 show no discontinuities between boundaries of patches is an indication that while the goal of eliminating boundary artifacts is achieved by the method of [1], the underlying learned prior is worse than our proposed method. **Comment: I still don’t think it is possible to learn correlations without looking at the image as a whole** We acknowledge that the proposed method would not be able to learn which structures on one side of the image are likely to appear with structures on the other side of the image. It would only be able to learn that independently, certain types of structures may be likely to appear on one side, whereas other types of structures may be likely to appear on the other side, but not be able to learn a connection between them. This is a limitation of using a network that only accepts patches as inputs, but we demonstrate that this learned prior is sufficient for solving inverse problems (the focus of this work) particularly when data is limited.
Learning Group Actions on Latent Representations
Accept (poster)
Summary: Learning group actions on latent representations Abstract The work’s contributions are clear from the abstract. Introduction Lines 13-18: Group actions are explained simply and intuitively but the less familiar reader may benefit from a basic example beyond reference to geometric transformations, such as something as simple as an image inversion. Lines 19-30: A simple, elegant, clear example of the problem that the authors are trying to address is covered with a clear distinction between the digit (latent factor) and the image (input data). This really helps the audience to understand the problem and why it is important. Line 33: Is this something that has been studied in the past by the authors or another group? Is there anything in learning the group actions on the latent space that would or should necessitate learning the same group actions on the data space other than explainability in some cases? In the clear example of the rotating 7 they are both clearly “rotation”. I would need to be more creative to think about a distinct group action on the data that is not present in the latent representations or vice versa. I think the authors probably have some in mind. However, I also understand that the authors are not saying the data and latent group actions need to be exclusive, just that they do not need to be the same either. Line 36: I understand that having group-specific layers for encoding could limit the architecture of the AE and could limit expressivity. However, is there not an explainability benefit as well to having group-specific layers in the case where several complicated group actions are acting on the latent factors? I am thinking about something like NIT (https://proceedings.neurips.cc/paper/2018/hash/74378afe5e8b20910cf1f939e57f0480-Abstract.html) but for learning group actions instead of learning separable functions on input data akin to a generalized additive model.
Related works Line 62: I think it helps any reader that does not do research in group actions to introduce what some of the group actions (here E(3)) mean/do without having to consult references or web search. Overall the Related works does a great job of summarizing past work on group actions, impacts, and limitations. The differences between the authors’ work and [9] are appreciated and described well. Group actions on latent representations Lines 103-107: The notation is made very clear. Group action requirement is defined for the unfamiliar reader. Lines 108-110: The math holds. I would like to hear more about the intuition on representation of the latent space as a product of the varying and invariant parts. I imagine that the latent space can have components that are both varying and invariant to a group action, such as the example given with 3D images. However, it seems that there would be other group actions present that are fully varying with respect to a group action. In this case we could have z_j varying to group action A and invariant to group action B. Please note whether this is simply a misunderstanding in the notation relative to these representations. Latent space group action model Lines 111-124: The model formulation is clear. I might go one step further for expanded reach to a broad audience to say something very simple to summarize Lines 115-116 like “z2 is the result of taking group action G on z1” as “lie in the same orbit” might not be explicit enough for all readers. Nothing is lacking in the math for the average reader to follow along. Skip Connections and attention Lines 126-137: The authors highlighting attention and skip connection as optional shine a light on how their overall approach is relatively architecture agnostic, which is great. Induced group actions on the data space Lines 139-169: Propositions check out. The diagram helps the reader. Lines 164-169 are helpful as well.
Examples 2D and 3D Rotations Lines 171-172: 2D/3D rotation as matrix operations (transformations) are described clearly. Image contrast transformations Lines 173-174: It has been some time since I thought about image contrast transformation but the authors make it very clear. Cyclic group transformations Lines 174-179: Again, clearly described. Experiments Lines 189-193: Highlighting to the reader how this is a latent space group action not a data group action by keeping the occlusion square fixed and not rotating with the image is key. Same for MRI coronal 2D image contrast group action only on the brain and not the extra-dural tissue. Line 224: I think authors meant to say “they use almost” when talking about Hwang’s encoder. Figure 4: I assume these are test set images but I would label as such in the Figure 4 caption Quantitative results Line 251: Choice of PSNR and SSIM are justified Table 3: I like the addition of the ablation study. It shows that skip connection and LPIPS helps but that the major gains are agnostic to fine tweaks to the modeling approach to learn latent group actions. This is a good thing. Strengths: The problem is clear. Much work has been done to study group action on data but limited work has been done looking at group actions on latent representations. The authors describe why this is important, especially in the era of generative AI for everything. Results are excellent. A nice surprise is that the author's work, despite focusing on group actions on latent representations, actually does just as well or better on group actions on data as compared to prior work in this area, showing it is invariant to the level of the group action (data vs latent). Weaknesses: One thing missing or that I could not find in the main manuscript or paper is an explicit description of how g is initialized. For example, in the 2D rotation case, we need to learn g (reshaped in matrix form as a rotation matrix). 
The authors note that "For the target group action α, we directly compute α(g, z)". I have seen similar (unpublished) work on learning the transformation matrix for a group action in the data space with a similar approach, but found that learning g was initialization-sensitive. Clearing this up, and making it clearer in Figure 2 that this is a multiplication, would help. Technical Quality: 3 Clarity: 4 Questions for Authors: Please see the weaknesses above. I would like to know more about how g is initialized, or further clarification if I misunderstood the operation in the text vs. what is shown in Figure 2. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very detailed and insightful review. We appreciate the thorough understanding you demonstrated and the thoughtful feedback you provided. We are grateful for the time and effort you invested. We will revise the unclear sentences and captions the reviewer kindly pointed out. As for the question you raised, we actually did not learn a high-dimensional $g$. We discuss this in the paper, such as in Section 5.2, but it might not be clear enough. Taking the 2D rotation as an example, we reshape $z_v$, the varying part of the latent representation, into a $k\times 2$ matrix, then perform matrix multiplication with the actual 2D rotation matrix $g$. In our preliminary experiments, we experimented with embedding it into a high-dimensional matrix, but found that none of the embedding techniques worked better than simply using the original rotation matrix. --- Rebuttal Comment 1.1: Title: First response Comment: Thank you. I must have misinterpreted that g is not a learned transformation in the latent space, as the authors correctly point out that this was addressed in the original submission. It was a bit unclear to me, so cleaning that up for broadest access to all readers may help. I now share the thought of another reviewer that the impact is somewhat limited by the requirement that the ground truth group action must be known. However, the authors appropriately point out that this is the current state of this niche area of research and cite other methods (which they already compared to in the paper) that also require the ground truth action in the original data space. As responded to another reviewer, this addresses my main question: "We agree that requiring ground truth is a limitation, as we discuss in the paper. However, we would also like to point out that most of the existing works that are applied to model latent group actions, regardless of addressing the problem in terms of group actions or not, are also supervised.
This includes both of our comparison models for the two 3D rendered datasets"
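The latent rotation action described in the rebuttal above (reshaping $z_v$ into a $k \times 2$ matrix and multiplying by the actual 2D rotation matrix $g$) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and demo values are assumptions.

```python
import numpy as np

def apply_2d_rotation(z_v, theta):
    """Apply a 2D rotation group action to the varying latent part z_v.

    z_v is reshaped into a k x 2 matrix and each row is rotated by the
    actual 2D rotation matrix g, as described in the rebuttal.
    """
    g = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    k2 = z_v.reshape(-1, 2)            # k x 2 matrix of latent pairs
    return (k2 @ g.T).reshape(-1)      # rotate each pair, flatten back

# Group property: rotating by theta and then by -theta recovers z_v.
z_v = np.arange(8, dtype=float)
out = apply_2d_rotation(apply_2d_rotation(z_v, 0.5), -0.5)
```

Because $g$ here is the true rotation matrix rather than a learned high-dimensional embedding, there is nothing to initialize, which matches the authors' clarification.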
Summary: This paper introduces an approach to modelling group actions using autoencoders, particularly by learning these actions in the latent rather than the observed data space. The authors argue that this allows modelling a broader range of real-world scenarios and does not require particular layers - however, it does require specific architectural changes (e.g., specific skip connections, and attention modules before concatenation in the upsampling path). In practice, it seems that the main idea of the paper is to enforce a sort of cycle consistency on the decoder output (e.g., bidirectional consistency), with the group action applied to the latent representation as input. The authors demonstrate the effectiveness of their approach through experiments on image datasets covering a range of group actions. Results appear promising, and good with respect to compared work. Strengths: The ablation study on the NMR dataset (Table 3) provides valuable insights into the contribution of different components (skip connections, LPIPS loss) to the model's performance. The approach is simple, and shows promise in transfer learning, few-shot learning, and domain generalization Weaknesses: As the authors identify, the model requires ground truth group actions during training The authors claim that this method does not require group-specific layers, which it indeed doesn't - however the authors propose a specific architecture to show best results The main idea behind this method is basically in the proposed loss function, which is similar to reconstruction-based cycle consistency losses The method relies on the loss function to learn these symmetries from data, also requiring ground truth. There are many methods that can attempt to do so if such data exists Technical Quality: 3 Clarity: 3 Questions for Authors: How sensitive is the method to hyperparameter choices? e.g.
latent dimensionality and the split between the invariant and varying parts. How does the model scale, and what is the computational overhead? Could you provide an example that measures, besides similarity metrics, whether the resulting representations/data have truly captured the group symmetry (e.g., measuring the variability of outputs/representations given different inputs)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: they have Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate the time and effort you have dedicated to reviewing our paper. We will address the weaknesses and questions you raised below. **Weakness** - We agree that requiring ground truth is a limitation, as we discuss in the paper. However, we would also like to point out that most of the existing works that are applied to model latent group actions, regardless of addressing the problem in terms of group actions or not, are also supervised. This includes both of our comparison models for the two 3D rendered datasets. - We note that we actually used multiple architectures in the experiments, sometimes with "plain" CNNs (MNIST and MRI experiments), and sometimes with the skip connections and attention mechanism (3D models). While the skip connections with attention are proposed to improve the image rendering quality, they are not the decisive factor of our method. As listed in Appendix A, for our two MNIST rendered datasets and the brain MRI dataset, we used plain deep CNNs without skip connections or the attention mechanism. While we did use them for the two 3D object rendered datasets, we conducted ablation studies in Section 6.3 that show our model achieves state-of-the-art performance even without this additional technique. We also included some samples as qualitative results in the PDF attached to our global rebuttal response. - While there is some similarity between our proposed loss term and cycle consistency losses, we would repeat that the novelty of our work is that it is the first to model latent group actions that manifest as acting on some features in the data, while leaving other features invariant. As stated earlier, there are many methods trained with such data, including the two novel view synthesis models we compare to in the paper and outperform. As the experimental results show, we believe our model is capable of handling more general types of group actions.
**Questions** - Our hyperparameters are manually tuned on individual validation sets. The numbers of learnable parameters are compared in Appendix B, which indicates our model did not win through an overly complicated architecture. We also reiterate that our base architectures are very standard deep CNNs. The only optional new technique is the skip connection with attention. - We provide some training details in Appendix C. With this setup, our model was trained for approximately 20-40 hours depending on the dataset. - We agree that more innovative measurements of how well the group symmetry is captured would be a useful contribution. However, we think this is a quite difficult and subtle property to measure in the output data itself (e.g., how do you know that the variation in an image set is due to a 3D object rotating correctly?). This is a topic for future work. We would note that all of the competing previous works settle for qualitative results and similarity metrics just as we do. --- Rebuttal 2: Title: Reminder: Author-Reviewer Discussion Period Comment: Dear Reviewer, As the author-reviewer discussion period ends on August 13th, we kindly remind you that we will be unable to communicate after that date. We would appreciate your feedback on whether our response has addressed your concerns. If you have any further questions or need additional clarification, please let us know before the discussion period concludes. Thank you for your attention. Best regards,\ The Authors --- Rebuttal 3: Title: Reminder: Discussion Period Ending Soon Comment: Dear Reviewer, This is a quick reminder that the discussion period ends tomorrow. We would appreciate your feedback on our response, as you are the last reviewer we are waiting to hear from. Please let us know if you have any further questions before then. Thank you. Best regards,\ The Authors
Summary: This paper focuses on learning group actions in latent space instead of data space. The group action takes effect between the encoder and decoder in an autoencoder structure. Several tasks with different group actions are evaluated in the experiments. Strengths: 1. The paper is well organized and the core ideas are explained clearly. 2. The results on rotated and blocked MNIST are impressive, demonstrating that the varying and invariant parts are easier to disentangle in the latent space. Weaknesses: 1. The novelty of learning group actions in latent space is not so strong, as operating on the latent space is not rare in many other tasks, such as disentangling different factors in generative models. 2. The reasoning behind the framework design is not well explained in Section 3.3. It is easy to understand that directly sending high-resolution features to the decoder helps keep image details. However, the reason for applying the attention operation to the skip connection is not explained, and it is also not included in the ablation study. 3. The ablation study results would be more convincing if the qualitative results with/without different parts were compared. For example, LPIPS is helpful for image rendering. Without LPIPS, are the numbers rotated correctly (even if the images may not be as clear as with the full model)? Or is the model able to separate the numbers and the blocks without LPIPS? These are not revealed by the quantitative results but can be easily demonstrated by visualizations. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses. Besides, I would like to know why the proposed method is better on PSNR for the Brain MRI dataset compared to [13]. Firstly, this dataset does not contain any 'occlusions', which means it may be difficult to demonstrate the advantages of the proposed method on this dataset. And the authors also find 'it is difficult to determine a noticeable advantage visually'.
Then why does the proposed method show much better performance on PSNR? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate the time and effort you have dedicated to reviewing our paper. We will address the weaknesses and questions you raised below. **Weakness** 1. While there are some works that explore models with group actions in latent representations, the novelty of our work is that it is the first to model latent group actions that manifest as acting on some features in the data, while leaving other features invariant. Furthermore, we are the first to generalize to handle multiple types of groups without the need for group-specific neural layers. We have referenced all of the work we are aware of regarding group actions in latent representations. If there are more related works in this specific field, we would appreciate it if you could kindly point us to them. We would point out that most work on disentanglement of latent representation factors does not involve group actions at all, and thus does not diminish the novelty of our work. 2. We tried to explain the reason for including the attention module in lines 132-134, but it might not be clear enough. For a more intuitive explanation, imagine rotating a plane so that its head goes from the left side to the right side of the image. Now, if we directly perform a skip connection, the plane's head will be concatenated to the left side, and cannot affect the right side of the image during the remaining upsampling path. Therefore, ideally we would like to concatenate the patch that contains the plane's head to the correct target position. We rely on the attention mechanism to perform this matching. We did not perform an ablation study on the attention module alone, because the skip connection by itself does not theoretically benefit performance. 3. We agree that qualitative results would be nice to have. We only excluded them because of the page limit. We reported some samples in the PDF attached to our global rebuttal response.
It shows that both LPIPS and the skip connection contribute to the detail and sharpness of images, with the skip connection improving performance more prominently. However, even without either of them, the model can still learn correct rotations. We would also like to clarify that only our two 3D object rotation datasets are trained with LPIPS and skip connections paired with attention. Therefore, the two MNIST rendered datasets are trained merely with the reconstruction loss defined in line 119. **Questions** - While occlusions do cause the group action to no longer be on the data space, they are not the only case where this happens. For the brain MRI dataset, the image contrast transformation happens only to pixels inside the brain, and not the entire image; and which pixels are in the brain is specific to each image. Therefore, it is no longer a group action on the data space, only on the latent space (where the model must learn which pixels are inside the brain). This would explain why we have better PSNR results. As for why this is the case given the similar qualitative appearance of the results, this is most likely due to the difficulty for humans to perceive possibly subtle contrast differences in the images. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you. All my concerns are relatively well addressed. Although the novelty is somewhat incremental, I will increase the score to 5 as the paper is well organized and self-consistent, and the experimental results are promising. --- Rebuttal 2: Title: Reminder: Author-Reviewer Discussion Period Comment: Dear Reviewer, As the author-reviewer discussion period ends on August 13th, we kindly remind you that we will be unable to communicate after that date. We would appreciate your feedback on whether our response has addressed your concerns. If you have any further questions or need additional clarification, please let us know before the discussion period concludes. Thank you for your attention.
Best regards, \ The Authors
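The attention-on-skip-connection intuition from the rebuttal (routing an encoder patch, such as the plane's head, to its rotated target position instead of concatenating it in place) could be sketched roughly as below. All names and shapes here are hypothetical; the paper's actual module presumably operates on convolutional feature maps, not flattened toy vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_skip(decoder_feats, encoder_feats):
    """Attention-modulated skip connection over flattened spatial positions.

    decoder_feats: (N, d) queries, one per position in the upsampling path.
    encoder_feats: (N, d) keys/values from the encoder at the same resolution.
    A plain skip connection concatenates encoder features position-by-position;
    here each decoder position instead attends to the encoder patch that best
    matches it, so content can move across the image (e.g. after a rotation).
    """
    scores = decoder_feats @ encoder_feats.T / np.sqrt(decoder_feats.shape[1])
    routed = softmax(scores, axis=-1) @ encoder_feats   # soft patch matching
    return np.concatenate([decoder_feats, routed], axis=-1)

dec = np.random.default_rng(0).normal(size=(4, 8))
enc = np.random.default_rng(1).normal(size=(4, 8))
out = attention_skip(dec, enc)   # (4, 16): decoder feats + routed encoder feats
```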
Summary: The paper introduces a new method to learn group structure such as SO(3) rotations on data by representing the group actions in a latent space. The authors propose an autoencoder-based method to represent this latent space, and demonstrate how it can represent equivariance to latent factors. Strengths: The paper is well motivated. In many real-world applications, it is important to model the equivariant structure of data that is not perfectly equivariant to group actions. It makes sense to decompose data into varying and invariant parts. For 2D image data, the method shows a significant improvement over prior work on modeling group actions. The rotated and blocked MNIST dataset demonstrates an example of how the method is still able to model rotation equivariance even if the training data is not perfectly equivariant. On the 3D view synthesis datasets, the authors also achieve better image quality on the NMR dataset (which is fully equivariant), and the new Plane in the Sky dataset (which has both rotation-varying and invariant parts). Weaknesses: I was surprised to find no experiments or analysis on the latent space of the autoencoder. From the experiments, it seems that the paper only studies the outputs of the autoencoder, without any experiments on the latent space itself. It is hard to tell to what extent the improved metrics are due to a better neural network architecture versus the modeled group action. To support the paper's central motivation, there should be some experiments that demonstrate that the model learns a meaningful latent space. There should be evidence to demonstrate that the invariant and varying parts of the input images are disentangled properly. For example, some experiments can be - Visualizing the output of D(z) from randomly sampled z vectors. Does it output reasonable images? - Demonstrating to what extent the decoder D is consistent, as defined by Def 4.1.
- Demonstrating whether or not E(D(z)) = z for all z, one of the assumptions of Prop 4.3. - Visualizing the invariant and varying parts of the latent space. Can you swap the varying and invariant parts of two latent vectors? For example, if you take two Plane in the Sky images and swap the varying and invariant parts of their latents, then the decoded images should show the plane of one image with the background of the other. Technical Quality: 2 Clarity: 3 Questions for Authors: Most of my questions are in the section above. I am curious to see evidence demonstrating how successful the latent space is at disentangling the invariant and varying parts of the input data. Additionally, do the residual connections interfere with the group structure of the latent space at all? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The paper did not discuss limitations besides the fact that the method requires ground truth labels for group actions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate the time and effort you have dedicated to reviewing our paper. We will address the weaknesses and questions you raised below. **Weakness** - We note that we actually used multiple architectures in the experiments, sometimes with "plain" CNNs (MNIST and MRI experiments), and sometimes with the skip connections and attention mechanism (3D models). While the skip connections with attention are proposed to improve the image rendering quality, they are not the decisive factor of our method. As listed in Appendix A, for our two MNIST rendered datasets and the brain MRI dataset, we used plain deep CNNs without skip connections or the attention mechanism. While we did use them for the two 3D object rendered datasets, we conducted ablation studies in Section 6.3 that show our model achieves state-of-the-art performance even without this additional technique. We also included some samples as qualitative results in the PDF attached to our global rebuttal response. In addition, we compare the number of learnable parameters in Appendix B, which shows that our method did not win by simply increasing the architecture complexity. - Our model is not a generative model. None of the competing models we compared to is generative either. They are image-to-image models. Therefore, there is no straightforward way to randomly sample latent codes. A generative model, e.g., a diffusion model, could be trained in the latent space, but this is outside the scope of the paper. However, the other proposed experiments can be conducted. - We checked Prop. 4.3 on our rotating MNIST dataset, since it is an example of a group action on the data space (the other experiments involve group actions on the latent space, but not on the data space, so Prop. 4.3 will not hold). The average $z$ reconstruction L2 distance is 0.304.
Compared to the standard deviation (using L2 distance) of $z$ over the dataset, which is 4.043, we can conclude that this property is approximately met. As we proposed in Prop. 4.3, this indicates the autoencoder is consistent (Def. 4.1). - We tried rendering images on blocked MNIST and planes with swapped invariant representations. For the plane dataset, this includes the skip connection with the attention module. Some samples are shown in the PDF we attached to the global rebuttal response. The results are what one would expect: the number and its orientation are unchanged, while the block is swapped; the plane orientation and overall shape are unchanged, but the sky background and details such as colors are swapped. The model is able to combine group-invariant and varying information to render new images, even though it was not specifically trained in such a fashion. **Questions** - As shown in the samples of ablation models in the attached PDF, the skip connections only help with image details and sharpness. Also, we reiterate that the models for the MNIST datasets and the brain MRI dataset do not have skip connections. --- Rebuttal 2: Title: Reminder: Author-Reviewer Discussion Period Comment: Dear Reviewer, As the author-reviewer discussion period ends on August 13th, we kindly remind you that we will be unable to communicate after that date. We would appreciate your feedback on whether our response has addressed your concerns. If you have any further questions or need additional clarification, please let us know before the discussion period concludes. Thank you for your attention. Best regards,\ The Authors --- Rebuttal Comment 2.1: Title: Thanks Comment: Thank you for your response and for conducting the extra experiments. I find the experiments on Prop 4.3 and latent swapping very fascinating, and they can strengthen the paper. The experiments demonstrate that the autoencoder is learning some principled representation of what should and should not be invariant.
I will raise my score, and I hope that these experiments are included in a revised version of the paper.
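The Prop. 4.3 consistency check reported in the rebuttal (mean L2 distance of E(D(z)) from z, 0.304, compared against the spread of z over the dataset, 4.043) could be computed along these lines. This is a hypothetical sketch: the exact aggregation the authors used is not specified, and the encoder/decoder below are placeholders.

```python
import numpy as np

def consistency_gap(encode, decode, Z):
    """Mean L2 distance ||encode(decode(z)) - z|| versus the spread of z,
    a rough check of the E(D(z)) ~= z assumption of Prop. 4.3.

    encode, decode: callables on latent vectors (placeholders here).
    Z: (N, d) array of latent codes drawn from the dataset.
    """
    recon = np.stack([encode(decode(z)) for z in Z])
    mean_gap = np.linalg.norm(recon - Z, axis=1).mean()
    spread = np.linalg.norm(Z.std(axis=0))  # one way to summarize dataset spread
    return mean_gap, spread

# With identity maps the gap is exactly zero while the spread stays positive,
# i.e. the autoencoder would be perfectly consistent in the sense of Def. 4.1.
Z = np.random.default_rng(0).normal(size=(100, 16))
gap, spread = consistency_gap(lambda z: z, lambda z: z, Z)
```

A small gap relative to the spread (as in the rebuttal's 0.304 vs. 4.043) is the signal that the property is approximately met.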
Rebuttal 1: Rebuttal: We appreciate the valuable feedback provided by all the reviewers. We are grateful for the time and effort you dedicated to our paper. To provide further clarity, we have included a PDF with additional samples. Please refer to our individual responses for detailed information on each review. We look forward to further comments from reviewers. Pdf: /pdf/a0d964202fa33c1b941d17795cdf37bcf417f41c.pdf
NeurIPS_2024_submissions_huggingface
2024
Learning Representations for Hierarchies with Minimal Support
Accept (poster)
Summary: This paper develops a framework to identify a subset of entries required to uniquely distinguish a graph among all transitively-closed DAGs. It achieves robust performance on synthetic hierarchies and a larger real-world taxonomy. Strengths: S1: A framework is proposed for detecting a minimal sufficient subset of entries in a graph adjacency matrix to specify a digraph. S2: Details on formulating the problem are provided to help characterize the proposed method. S3: A series of experimental studies were conducted to show the effectiveness of the proposed method. Weaknesses: W1: The end-to-end efficiency is not clearly evaluated in this work, making it a bit unclear how to assess the significance of the work in practice. For example, the proposed technique helps learn representations with minimal entries supported during training, where the minimal entries lead to reduced computational complexity; however, the hierarchy-aware sampling process itself adds additional operations, which may or may not result in increased overall complexity. W2: Hierarchy-aware sampling is a key technique in the proposed solution. The authors should elaborate on the sampling process. The loss function provided in Section 5 seems to rely on several not-well-defined terms, raising concerns about the reproducibility of the work. W3: The presentation should be improved. For example, in Section 5, the verb is missing in the first sentence. I assume the authors mean “formally —> formulate”. Technical Quality: 3 Clarity: 2 Questions for Authors: Refer to the weakness section above. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors called out the limitations in terms of extending the proposed method to new combinations of graph properties and inductive biases, as well as the fact that the efficacy can vary when applying the method to graphs that are not transitively closed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and questions. > W1: The end-to-end efficiency is not clearly evaluated in this work, making it a bit unclear how to assess the significance of the work in practice. For example, the proposed technique helps learn representations with minimal entries supported during training, where the minimal entries lead to reduced computational complexity; however, the hierarchy-aware sampling process itself adds additional operations, which may or may not result in increased overall complexity. > All of the additional steps introduced by the hierarchy-aware sampling can be lumped under a one-time preprocessing procedure, as described in Algorithm 1. Since the total number of edges — both positive and negative — in the sidigraph is $O(|V|^2)$, each set of nested for-loops in Algorithm 1 (lines 2-5 as well as lines 6-9) takes $O(|V|^2)$ time, since the cost of removing a redundant edge is $O(1)$. In practice, the preprocessing step of FINDMINDISTINGUISHER can be run on CPU and can be parallelized w.r.t. the negative edges. Moreover, as evidenced in Table 1 of Appendix H, the reduction in negative edges is often over 99%. Meanwhile, Figure 5 (GT-Box, rows 3-4) shows that the convergence rates on Balanced Tree and nCRP (the graphs with the greatest reductions in negative edges contributed by the reduced negative edge set $E_{H}^{-}$) are higher using $E_{H}^{-}$ than using the full negative edge set $\overline{E}$. We therefore believe that our setup is useful in practice. > W2: Hierarchy-aware sampling is a key technique in the proposed solution. The authors should elaborate on the sampling process. The loss function provided in Section 5 seems to rely on several not-well-defined terms, raising concerns about the reproducibility of the work.
> To elaborate, the terms $\ell^{+}(x)=\lambda_{pos} x$ and $\ell^{-}(x) = -\lambda_{neg}~x$ in the loss equation of Section 5 are the same as the terms in Section 2.2 — positive and negative scalars which we call $\lambda_{pos}$ and $\lambda_{neg}$, respectively, to weight the energy contribution. We always set $\lambda_{pos}=1$ and we optimize for the best $\lambda_{neg}$ (please refer to Appendix G.1 and G.2). Therefore, for SimVec, the expanded loss function is: $\sum_{(u,v) \in E_{H}^{+}}\ell^{+}(E_{\theta}(u,v)) + \sum_{(u,v) \in E_{H}^{-}}\ell^{-}(E_{\theta}(u,v))$ $=\sum_{(u,v) \in E_{H}^{+}}\ell^{+}(-\log\sigma(\theta_u \cdot \theta_v)) + \sum_{(u,v) \in E_{H}^{-}}\ell^{-}(-\log\sigma(\theta_u \cdot \theta_v))$ $=\sum_{(u,v) \in E_{H}^{+}}(-\log\sigma(\theta_u \cdot \theta_v)) -\lambda_{neg}\sum_{(u,v) \in E_{H}^{-}}(-\log\sigma(\theta_u \cdot \theta_v))$ (The energy function for SimVec is mentioned in line 236 of the paper.) For GT-Box, the expanded loss function is the same, except that the $-\log\sigma(\theta_u \cdot \theta_v)$ is replaced by the GT-Box energy function described in lines 181-185. We hope that this alleviates your concerns regarding reproducibility! We will also provide a link to our implementation upon acceptance. > W3: The presentation should be improved. For example, in Section 5, the verb is missing in the first sentence. I assume the authors mean “formally —> formulate”. > Thank you for these observations — we will take care to improve and proofread our writing! --- Rebuttal Comment 1.1: Comment: Thank you to the authors for addressing the review comments and providing additional details, including the loss formulation. I will review the rebuttal carefully and consider these details for further evaluation to determine if any changes to the score are warranted.
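For concreteness, the expanded SimVec loss in the rebuttal above can be transcribed directly into code. This is a sketch under the rebuttal's definitions ($\ell^{+}(x)=\lambda_{pos}x$, $\ell^{-}(x)=-\lambda_{neg}x$, energy $E_{\theta}(u,v)=-\log\sigma(\theta_u \cdot \theta_v)$); it is not the authors' implementation, and the embedding values are made up.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simvec_loss(theta, pos_edges, neg_edges, lam_pos=1.0, lam_neg=1.0):
    """Expanded SimVec loss from the rebuttal:

        lam_pos * sum_{(u,v) in E_H^+} E(u,v)  -  lam_neg * sum_{(u,v) in E_H^-} E(u,v)

    with energy E(u,v) = -log sigmoid(theta_u . theta_v).
    theta: (num_nodes, dim) embedding matrix; edges are (u, v) index pairs.
    """
    def energy(u, v):
        return -np.log(sigmoid(theta[u] @ theta[v]))
    pos = sum(energy(u, v) for u, v in pos_edges)
    neg = sum(energy(u, v) for u, v in neg_edges)
    return lam_pos * pos - lam_neg * neg
```

Swapping the energy function for the GT-Box energy of lines 181-185 would give the GT-Box variant, as the rebuttal notes.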
Summary: This paper proposes to distinguish a directed graph (digraph) among all transitively-closed DAGs by finding a minimal signed directed graph (sidigraph). The paper exploits this idea to propose a more efficient algorithm for node embedding models. Strengths: - Theoretical support in Sec. 3 for the sidigraph obtained by Algorithm 1. Weaknesses: - It is not clear, regarding Prop. 2, whether the distinguishing sidigraph H' is "lighter" than the transitively-reduced G'. This seems to "shift" the problem from reducing G to G' to reducing G to H'. I do not see the benefit of obtaining H' over directly obtaining G'. - The scope of this paper is unclear. In the introduction, the key question is bolded and may be answered by Prop. 2. However, what is the rest of the paper for? - It is difficult to understand the aim of the experiments. In L253, the authors wrote > We investigate the impact of the available positive and negative support sets on model training; since the representational capacities of vector and box models are well-studied, we are not interested in the best F1 score attainable by these models, but the effect of respective support sets on convergence However, the very first sentence of the results in L260 is > First note that GT-BOX universally outperforms SIM-VEC on the graph modeling experiments If the authors want to show that the proposed algorithm outperforms the existing one, then the authors probably want to compare with many more baselines. If not, what is this sentence for? I think this experiment aims to show the superiority of the proposed algorithm by comparing it with SIM-VEC from the viewpoint of the effect of respective support sets on convergence. Thus, I think the authors should compare with more. This may be a presentation issue or a content issue. Anyhow, I hope the authors clarify these points. Technical Quality: 2 Clarity: 1 Questions for Authors: I hope the authors clarify the points raised in the weaknesses section.
Confidence: 2 Soundness: 2 Presentation: 1 Contribution: 1 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your questions and comments. > It is not clear, regarding Prop. 2, whether the distinguishing sidigraph H' is "lighter" than the transitively-reduced G'. > Please note that while $G^{\prime}$ is a digraph with only positive edges specified, $H^{\prime}$ is a sidigraph with positive edges *as well as explicit negative edges* specified. Thus, if the positive edges of $H^{\prime}$ are already transitively-reduced, then indeed $H^{\prime}$ is not lighter than $G^{\prime}$ *in terms of positive edges.* However, this is not the case *in terms of the negative edges* of the sidigraph equivalent to $G^{\prime}$. Note that for $H=(V, E_{H}^{+}, E_{H}^{-})$ at the start of Proposition 2, since we are given that $H$ is a distinguisher of $G^{\prime}$, we are guaranteed to have $E_{H}^{+} \subseteq E$ and $E_{H}^{-} \subseteq \overline{E}$ by Definition 2. Therefore, removing any negative edge from $H$ to produce $H^{\prime}$ will necessarily make $H^{\prime}$ lighter than $H$, i.e., lighter than the sidigraph equivalent to the transitively-reduced $G^{\prime}$. For empirical evidence that $H^{\prime}$ is lighter than the equivalent sidigraph $H$ in terms of negative edges, please refer to Table 1 of Appendix H, where the rightmost column gives the reduction in negative edges contributed by Algorithm 1, which is as high as 99% for many graph families. > This seems to "shift" the problem from reducing G to G' to reducing G to H'. I do not see the benefit of obtaining H' over directly obtaining G'. > As noted above, $G^{\prime}$ can only inform us of the reduction in *positive edges*, and is ambiguous w.r.t. the reduction in negative edges.
For the machine learning setting that is explored in the second part of this paper, we need to sample both positive and negative examples (edges) for training — therefore it is not enough to specify $(V, E^{tr})$, as it only implies $(V, E^{tr}, \overline{E})$, which offers no reduction in negative edges for the sampling pool. Reducing $G$ to $H^\prime$ means explicitly reducing it to $(V, E^{tr}, E_{H}^{-})$, which may have orders of magnitude fewer negative edges from which to sample. > The scope of this paper is unclear. In the introduction, the key question is bolded and may be answered by Prop. 2. However, what is the rest of the paper? > Indeed, the graph-theoretic question posed by our paper is answered by Proposition 2 and Algorithm 1, which is based on it. However, we would like to understand whether the provable graph-theoretic sufficiency of $H^{*}$ can be exploited by energy-based node embedding models. Proposition 4 shows that if an energy-based node embedding model has transitivity bias, then there exists a parameter configuration that can unambiguously represent $G$ if it can represent a distinguisher of $G$ (such as $H^*$). However, the question of whether this configuration is achievable in practice is an empirical one, which our experiments section explores on real data. The experiments section shows that training the transitivity-biased GT-Box with positive and negative edges from $H^*$ achieves the configuration, whereas training SimVec (which does not have transitivity bias) while sampling from the same support set results in very poor performance (see Figure 6, $(E^{tr}, E_{H}^{-})$). > It is difficult to understand the aim of the experiments. 
In L253, the authors wrote: > We investigate the impact of the available positive and negative support sets on model training; since the representational capacities of vector and box models are well-studied, we are not interested in the best F1 score attainable by these models, but the effect of respective support sets on convergence > However, the very first sentence of the results in L260 is: > First note that GT-BOX universally outperforms SIM-VEC on the graph modeling experiments > If the authors want to show that the proposed algorithm outperforms the existing one, then probably the authors want to compare with many more. If not, what is this sentence? > Thank you for pointing this out, and we see how this sentence can be misleading as to the aim of our experiments. Indeed, **the main aim is to understand the impact of {$E^{tc}$, $E^{tr}$} and of {$\overline{E}, E_{H}^{-}$} on convergence.** Thus, we were trying to emphasize that it is not the best achievable F1 that is of interest to us, but how the convergence is affected by the support sets. **The fact that SimVec underperformed GT-Box w.r.t. the ultimate F1 (as written in L260) does not obscure the pervasive trend that GT-Box is consistently able to benefit from the hierarchy-aware negative edge set $E_{H}^{-}$, while the performance of SimVec plummets with $E_{H}^{-}$ to the point where SimVec is not able to learn anything useful.** We will certainly revise the first sentence of 6.1 for the camera-ready version, and we appreciate your suggestion! > I think that this experiment aims to show the superiority of the proposed algorithm by comparing it with SIM-VEC from the viewpoint of the effect of respective support sets on convergence. Thus, I think the authors want to compare with more. > We note that GT-Box is a generalization of a number of models that have transitivity bias. 
Meanwhile, SimVec reflects a simple but widespread vector-based node embedding model that does not have transitivity bias. Based on Proposition 4, we have strong evidence to believe that analogous models without transitivity bias would exhibit similarly poor trends to SimVec when coupled with $E_{H}^{-}$. --- Rebuttal Comment 1.1: Title: Thank you very much for the rebuttal Comment: Thank you for your effort on the rebuttal. While I think that the motivation for the first part is well presented and the results are very sound, the second part could be further improved. The authors may want to explain more of the motivation behind the second part in the introduction, in a similar manner as for the first part. Also, as far as I understand, comparing two models on transitivity bias is not enough. Is GT-Box the ONLY generalized model that has transitivity bias? Is there any model that has a transitivity bias other than the GT-Box family? Is there any possible baseline other than SimVec? How can you say that transitivity bias affects convergence by comparing only two models, when factors other than transitivity bias may be involved? Due to this concern, I keep my score. --- Reply to Comment 1.1.1: Title: Thank you for reviewing our rebuttal! Comment: Thank you for reviewing our rebuttal! > The authors may want to explain more of the motivation behind the second part in the introduction, in a similar manner as for the first part. > The motivation for the second part is to demonstrate that the graph-theoretic results can be applied in practice, by leveraging a machine learning model with transitivity bias. (See also lines 41-45 in the introduction.) > Also, as far as I understand, comparing two models on transitivity bias is not enough. […] > We selected models for empirical evaluation with two goals in mind: ------------------ 1. 
Validate that our theoretical claims can be obtained in the practical setting, in the context of things such as hyperparameter selection and stochasticity in the training loop. To do so, it was necessary to demonstrate that a model with transitivity bias could achieve similar performance when training on our proposed reduced edge set (up to 99.9% reduction) as when trained on the full edge set. As such, we needed to select a model which (a) had strong modeling capacity on transitively closed DAGs, and (b) satisfied the formal definition of transitivity bias, as in Definition 4. Box embeddings were the obvious choice to satisfy (a), having the best performance of any model on transitively closed DAGs as reported in Boratko et al. 2021. We then proved that box embeddings satisfy (b) in Proposition 5. We then tested the model, and observed that it was, indeed, able to obtain similar or better results when using our reduced edge set. This, on its own, is enough to validate that the theoretical claims can at least be obtained in practice. ------------------ 2. Empirically demonstrate that the facet of transitivity bias is important. For this purpose, we needed to choose a model which (a) had strong modeling capacity on transitively closed DAGs, but (b) did not satisfy the formal definition of transitivity bias from Definition 4. Again, SimVec is the logical choice here based on the results from Boratko et al. 2021 - it obtains perfect performance on modeling transitively closed DAGs (once given sufficient dimensionality); however, it can easily be shown not to satisfy Definition 4. The substantial performance gap on SimVec when using the reduced set of edges supports our conclusion. ------------------ In particular, we do not intend these empirical results to justify a universal claim regarding all models which have or do not have transitivity bias - such a claim is already supported by the theoretical results. 
Rather, our intention is for these empirical results to demonstrate an existential claim that the theoretical results can be leveraged in practice. A complete analysis of existing embedding models and the extent to which they have or do not have transitivity bias is actually rather subtle and outside the scope of this work. First, while geometric intuition suggests that it is likely that any region-based model (such as Order Embeddings or disks) would have transitivity bias in the sense of Definition 4, it ultimately depends on technical details related to the energy function. In particular, the proof for box embeddings does not trivially translate to the case of Hyperbolic Entailment Cones. Second, there are more subtle distinctions (e.g. a notion of “weak transitivity bias”, wherein the set of parameters for which the energy function can be thresholded to yield a transitively-closed DAG is sufficiently large) which would be relevant for such a detailed analysis.
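The sampling-pool reduction argued in this thread can be illustrated on a toy transitive closure. The reduced negative set below is purely hypothetical for illustration; the paper's FINDMINDISTINGUISHER computes the actual minimal distinguishing set.

```python
import itertools
import random

# Toy transitively-closed DAG: the closure of the chain 0 -> 1 -> 2 -> 3.
V = list(range(4))
E = {(u, v) for u, v in itertools.combinations(V, 2)}      # 6 positive edges
neg_full = {(u, v) for u in V for v in V if u != v} - E    # complement pool, 6 edges
# Hypothetical reduced negative support set (stands in for E_H^-): a smaller
# pool from which negative training examples are drawn.
neg_reduced = {(1, 0), (2, 1), (3, 2)}
assert neg_reduced <= neg_full
sample = random.sample(sorted(neg_reduced), 2)             # draw k = 2 negatives
```

On real graphs the gap between the full complement and the reduced set is what produces the up-to-99% reduction reported in Table 1 of Appendix H.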
Summary: This paper proposes a novel framework for identifying a minimal subset of entries in the adjacency matrix that uniquely distinguishes a directed acyclic graph (DAG) among all transitively-closed DAGs. The authors provide a provably optimal algorithm for computing this minimal set. They then leverage these insights to develop hierarchy-aware sampling, which allows training node embedding models more efficiently by focusing only on the essential graph information. Specifically, they prove that models with an inductive bias of transitivity, such as box embeddings, can learn faithful representations of hierarchies using substantially fewer training examples selected by their sampling approach. Experiments on synthetic and real-world graphs demonstrate that their method significantly reduces training data by up to 99% while maintaining or improving model performance and convergence rates compared to uniform negative sampling. Strengths: - The theoretical grounding of the paper looks good to me, with key properties of the proposed framework and algorithms formally stated and proven. The experiments systematically evaluated the impact of algorithm design choices (e.g. positive/negative edge sets) and data characteristics (e.g. graph structures) on model performance. - Learning faithful graph representations with minimal data has important computational benefits and is conceptually valuable for understanding the essential structural information needed to distinguish different graphs. The reduction in training data achieved by the proposed sampling method is substantial. The theoretical framework and practical techniques in this paper could be built upon to develop further improved graph representation learning approaches. 
Weaknesses: - While the paper demonstrates the effectiveness of transitivity bias and hierarchy-aware negative sampling for DAGs, the authors could discuss whether these ideas extend to learning representations of other graph families characterized by different structural properties. Are there other useful inductive biases worth incorporating into node embeddings and corresponding "structure-aware" sampling strategies? - The experiments focus on evaluating hierarchy-aware sampling, but there is less empirical analysis of the FINDMINDISTINGUISHER algorithm itself, e.g. runtime complexity, actual sparsity of computed sidigraphs, etc. Knowing when the sidigraph is substantially smaller than the original graph is important for determining when hierarchy-aware sampling is computationally beneficial. Technical Quality: 3 Clarity: 3 Questions for Authors: - The framework of sidigraphs for distinguishing graphs with a given property seems quite general. Have the authors considered instantiating it for properties other than transitivity? - Do the authors have intuition or theory on why hierarchy-aware sampling provides a greater speed-up on some graphs than others (e.g. balanced trees vs Price graphs)? - Could hierarchy-aware sampling also improve the computational efficiency of non-embedding GNN approaches for learning hierarchical structures, since they also implicitly perform a form of negative sampling via contrastive estimation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, there is a section about the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging review and questions! > While the paper demonstrates the effectiveness of transitivity bias and hierarchy-aware negative sampling for DAGs, the authors could discuss whether these ideas extend to learning representations of other graph families characterized by different structural properties. Are there other useful inductive biases worth incorporating into node embeddings and corresponding "structure-aware" sampling strategies? > > …The framework of sidigraphs for distinguishing graphs with a given property seems quite general. Have the authors considered instantiating it for properties other than transitivity? > In this work we have focused on transitivity, but we are certainly on the lookout for other interesting structural properties for which this framework would be useful. One extension we think it would be interesting to explore is relation composition on a multi-relational graph - however, such an exploration would fall outside of the scope of this paper. > The experiments focus on evaluating hierarchy-aware sampling, but there is less empirical analysis of the FINDMINDISTINGUISHER algorithm itself, e.g. runtime complexity, actual sparsity of computed sidigraphs, etc. Knowing when the sidigraph is substantially smaller than the original graph is important for determining when hierarchy-aware sampling is computationally beneficial. > Thank you for your suggestion. Since the total number of edges — both positive and negative — in the sidigraph is $O(|V|^2)$, each set of nested for-loops in Algorithm 1 (lines 2-5 as well as lines 6-9) takes $O(|V|^2)$ time, since the cost of removing a redundant edge is $O(1)$. Moreover, in practice, removing a negative edge in the inner loop will often reduce the number of edges left to iterate over in the outer loop. We will make sure to include this runtime analysis in the camera-ready version. 
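As a hedged illustration of the redundant-edge removal idea (this is classical transitive reduction of a closed DAG, not the paper's Algorithm 1, which additionally prunes negative edges and achieves the quoted loop bounds with O(1) removals):

```python
def transitive_reduction(closure):
    """Drop edge (u, w) when an intermediate v with (u, v), (v, w) exists.
    For a transitively-closed DAG a single intermediate always witnesses
    redundancy, so this check suffices; each individual removal is O(1).
    This naive sketch is costlier than the paper's nested-loop algorithm
    and only illustrates the notion of a redundant edge."""
    nodes = {x for e in closure for x in e}
    kept = set(closure)
    for (u, w) in closure:
        if any((u, v) in closure and (v, w) in closure
               for v in nodes if v != u and v != w):
            kept.discard((u, w))
    return kept

chain_closure = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)}
assert transitive_reduction(chain_closure) == {(0, 1), (1, 2), (2, 3)}
```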
For detailed statistics about sidigraph sparsity, please refer to Table 1 in Appendix H, where we provide the unpruned number of positive and negative edges of each graph we consider, as well as the ratios of the “reduced” to the “full” sets of positive/negative/total edges. The rightmost column in Table 1 demonstrates that for Balanced Tree, nCRP and MeSH, the reduction in negative edges is drastic (at least 99% for one or more graph configurations of those families). > Do the authors have intuition or theory on why hierarchy-aware sampling provides a greater speed-up on some graphs than others (e.g. balanced trees vs Price graphs)? > While we do not have a formal proof concerning the influence of particular graph structures on convergence, we would like to refer the reviewer to Table 1 in Appendix H, which shows (in the rightmost column) that for Balanced Trees, the reduction in negative edges contributed by FINDMINDISTINGUISHER is always at least 99%, while it is much less for Price graphs. This suggests that balanced trees can benefit substantially more from the reduction than Price graphs. > Could hierarchy-aware sampling also improve the computational efficiency of non-embedding GNN approaches for learning hierarchical structures, since they also implicitly perform a form of negative sampling via contrastive estimation? > Yes, we agree that hierarchy-aware negative sampling could potentially be used to provide an adaptive noise distribution for training non-embedding GNN models which attempt to learn latent hierarchical structure. To expand on this slightly, while we apply our algorithm to a static DAG, this could also be applied to a dynamic graph, such as an “active learning” setting (e.g. 
where the true graph is unknown and we query humans for the existence or non-existence of particular edges) or the setting you describe, where a model such as a GNN attempts to learn the latent hierarchical structure and the edges presented to the training algorithm update dynamically using our constructive algorithm based on the current model parameters. This is an interesting direction for future exploration! --- Rebuttal Comment 1.1: Title: Feedback on rebuttal Comment: I thank the reviewers for their response to my review in their rebuttal. Several of their comments directly address the concerns I raised, while others acknowledge that certain points I highlighted represent potential avenues for future research. The paper presents interesting findings, and I believe the authors would benefit from further academic discussions on this research topic. In consideration of the aforementioned factors, I have determined to increase my assessment score.
Summary: Authors propose an algorithm to identify a subset of entries required to uniquely distinguish a graph among all transitively-closed DAGs, based on a theoretical analysis of the transitive reduction of the associated signed digraph. These newly identified subsets are leveraged to learn node embeddings more efficiently via contrastive learning approaches based on negative sampling. The relevance of this novel contrastive approach is empirically validated on several synthetic datasets of DAGs and one real-world dataset, leading to comparable or better performance than vanilla negative sampling strategies. Strengths: - Overall the paper is well-written. - Provides an algorithm to uniquely identify transitively-closed DAGs (Alg. 1) and proves its convergence. - Theoretical analysis of the relation between the transitivity bias of an edge-discriminating energy and the proposed graph reduction technique. - Studies the transitivity bias of box embeddings and its relation to bit vectors. - Empirical study of various negative sampling strategies supporting the relevance of the methodology proposed by the authors. Weaknesses: *edits after rebuttal in italics* 1. As expressed by the authors, the range of applications relating to their study remains significantly narrow, even if interesting. It could be of interest to compare their approach on the studied datasets with methods enforcing transitivity while covering a broader scope, e.g. Rot-Pro (Song et al., 2021 in the paper). 2. Some points remain unclear to me in the experiments: - a) *[partly addressed]* Could you further detail the architecture and optimization procedure followed in your experiments? Results reported in Figure 1 (Boratko et al., 2021) with d=128 (as you set) seem to be significantly better than the ones you report for SimVec; could you clarify this matter? Their analysis also emphasizes that it would be relevant to study the sensitivity w.r.t. $d$ across various negative sampling strategies; I encourage you to do so. 
It would also be of interest to check whether these rankings are consistent when using another transitive representation learning scheme like GT-Box. - b) *[Done]* From L.245, I expected a comparison between k=4 and 128, but only results for k=4 are reported in the paper. Could you explain why and potentially fix that? - c) *[partly addressed]* Unclear analysis of results for the synthetic datasets, potentially to be refined by studying correlations with some dataset statistics. For instance, why are there opposite dynamics w.r.t. the depth between 'balanced tree' and 'nCRP'? - d) Experiments on real datasets stopped too early? It could be of interest to pursue experiments on the real dataset reported in Figure 8 until specific models seem to have converged, as dynamics with small numbers of samples seem more erratic. Small typos: - L.208, missing word? we formally `explain`... - L.37-38: misleading / big claim, as you study a very specific type of graph. Technical Quality: 2 Clarity: 3 Questions for Authors: I invite the authors to discuss and address the weaknesses/questions I have mentioned above, so that I can consider improving my score. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have made an effort to address the limitations of their work; I encourage them to answer the questions above to ensure that all potential limitations have been covered. No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and insightful questions! > 1. As expressed by the authors, the range of applications relating to their study remains significantly narrow, even if interesting. It could be of interest to compare their approach on the studied datasets with methods enforcing transitivity while covering a broader scope, e.g. Rot-Pro (Song et al., 2021 in the paper) > Thank you for sharing the Rot-Pro paper. It is an interesting work on KG embeddings, much like TransE, RotatE, etc. According to our understanding, Rot-Pro is designed for the task of KG completion, where a KG is a multi-relational graph, i.e. a graph with categorical edge properties. In Rot-Pro, the transitivity is encoded primarily in the relation representation. In contrast, our work focuses on the efficiency of learning representations for directed acyclic graphs — i.e. there is only one relation. Given the success of our approach, extending it to the case of multi-relational KG completion definitely seems interesting, but it is out of scope for the current paper. > 2a) Could you further detail the architecture and optimization procedure followed in your experiments? > We describe the architecture of GT-Box in Section 4.2. For GT-Box, a node is parameterized by two $d$-dimensional corners. The GT-Box score function between two boxes is computed as in the formula between lines 189-190, and its energy function as in the formula between lines 182-183. Concerning the optimization procedure, please refer to Appendix G for details for Synthetic Graphs (G.1) and MeSH (G.2). > 2a) Results reported in Figure 1 (Boratko et al., 2021) with $d=128$ (as you set) seem to be significantly better than the ones you report for SimVec; could you clarify this matter? > Thank you for bringing up this point. We would like to note that we did not do hyperparameter tuning for learning rate and negative weight ($\lambda_{neg}$) with highest attainable F1 as the objective. 
Rather, we did hyperparameter tuning with fast convergence as the objective — we achieved this heuristically by cutting off training at the end of a fixed number of epochs which we observed corresponded to the “elbow” of the convergence curve, and taking the hyperparameters that yielded the highest F1 at that point. The learning rate and $\lambda_{neg}$ have therefore been optimized for the purpose of *fastest* convergence, and may therefore underperform on best F1, which was the metric optimized for in Boratko et al. 2021. We detail this hyperparameter optimization procedure for synthetic graphs in Appendix G.1 and for MeSH in Appendix G.2. > 2a) Their analysis also emphasizes that it would be relevant to study the sensitivity w.r.t. $d$ across various negative sampling strategies; I encourage you to do so. > We would be open to experimenting with other negative sampling strategies if you can suggest some particular ones. We were unable to find other relevant negative sampling strategies in Boratko et al. 2021. > b) From L.245, I expected a comparison between k=4 and 128, but only results for k=4 are reported in the paper. Could you explain why and potentially fix that? > Apologies for the oversight. We are attaching a 1-page pdf of results for $k=128$ to the global response, and will include it in the appendix. Based on the plots, the advantages of hierarchy-aware negative sampling are more apparent in the low-resource $k=4$ setting than in the higher-resource $k=128$ setting. > c) Unclear analysis of results for the synthetic datasets, potentially to be refined by studying correlations with some dataset statistics. For instance, why are there opposite dynamics w.r.t. the depth between 'balanced tree' and 'nCRP'? > Can you please clarify what you mean by opposite dynamics in those two graph families? Note that since all graphs are generated with (almost) the same number of nodes $|V|$, we always have a tradeoff between depth and breadth, which might confound some conclusions. 
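The elbow-based hyperparameter selection described at the top of this reply can be sketched as follows; all configurations, curves, and the cutoff epoch are hypothetical numbers, not the paper's actual runs:

```python
# Fix a cutoff epoch near the convergence "elbow" and keep the configuration
# with the best F1 at that cutoff, even if another config has a higher final F1.
runs = {  # hypothetical (learning_rate, lambda_neg) -> F1 per epoch
    (1e-3, 1.0): [0.20, 0.60, 0.70, 0.85],  # slower, better final F1
    (1e-2, 0.5): [0.40, 0.65, 0.66, 0.67],  # faster early convergence
}
cutoff = 1  # epoch index at the observed elbow (hypothetical)
best = max(runs, key=lambda cfg: runs[cfg][cutoff])
assert best == (1e-2, 0.5)  # chosen for fast convergence, not final F1
```

This makes the tradeoff concrete: the selected configuration can underperform on the best attainable F1, which is the point made in the reply above.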
> d) Experiments on real datasets stopped too early? It could be of interest to pursue experiments on the real dataset reported in Figure 8 until specific models seem to have converged, as dynamics with small numbers of samples seem more erratic. > We can certainly rerun this training for longer for the camera-ready version. However, we do not think that the trend will change after more epochs. > Small typos: > > L.208, missing word? we formally `explain`... > > L.37-38: misleading / big claim, as you study a very specific type of graph. > Thank you — we will address these typos promptly! --- Rebuttal Comment 1.1: Title: Answer to author Comment: Thank you for your rebuttal; some of my concerns have been correctly addressed. Could you please complete your rebuttal by answering the following questions so that I can reach a final decision? > 2-a) Their analysis also emphasizes that it would be relevant to study the sensitivity w.r.t $d$ across various negative sampling strategies. Sorry for the misunderstanding, I was essentially referring to studying the sensitivity w.r.t $d$ (d=64*2=128) in your experiments, e.g. $(d=64, 32..)$, with the different positive and negative sets as you did in the main paper. 2-b) Thank you for these additional experiments. Could you analyze these results, e.g. by comparing performances between $k=4$ and $k=128$? $E_{H^*}^-$ seems significantly less relevant in the second case than in the first one. > 2-c) Unclear analysis of results for synthetic datasets ... From Figure 7 and your descriptions, if I understand correctly, the smaller $b$ gets for balanced trees, the deeper they are and the less relevant the use of $E_{H^*}^-$ seems. Whereas for nCRP, the smaller $\alpha$, the deeper the hierarchies (in the same way as with $b$ for trees), but the more relevant $E_{H^*}^-$ seems to be. So could you explain these opposite dynamics? --- Reply to Comment 1.1.1: Title: Thank you for your clarifications! Comment: Thank you for your clarifications! 
> 2-a) sensitivity to $d$: > While we think the sensitivity to $d$ is indeed interesting to explore, we are not sure we understand the motivation for this study. We have set $d=64$ in our experiments by analogy to the best-performing setting of Boratko et al. 2021 (cf. Figure 1, row 3 in their paper) because we expected that setting to draw the contrast between regular and hierarchy-aware sampling most sharply, reducing confounding factors such as training instability, which could easily result from too small a $d$. Meanwhile, setting $d > 64$ seemed excessive, as $d=64$ already gives enough capacity for both the models to represent the graphs perfectly. Boratko et al. also show that setting $d=8$ (Figure 1, row 1) results in very poor performance for the SimVec baseline even with $\overline{E}$. For our experiments, we think this would obscure the salient trend of $E_{H^*}^{-}$ causing SimVec (a non-transitivity-biased baseline) to crash. > 2-b) k=4 vs k=128 > We agree with your observation that $k=4$ appears to benefit more from $E_{H^*}^-$ than does $k=128$, for which the convergence curves for $\overline{E}$ and $E_{H^*}^-$ are closer together. We think this is to be expected, because the more negatives we sample from the unpruned pool $\overline{E}$, the more likely it is that we draw “high-signal” negative edges, i.e., the $E_{H^*}^-$ edges which we explicitly filter for using FINDMINDISTINGUISHER. Meanwhile, if we sample very few negative edges, drawing from an unpruned pool will likely result in fewer “high-signal” negative samples per positive example, and will take longer to converge. This trend actually makes our approach more attractive for larger graphs, where the distribution of $\overline{E}$ makes it unlikely to sample many high-signal edges using a small negative ratio $k$. > 2-c) Balanced tree vs nCRP > Thank you for clarifying this interesting observation. We would like to refer the reviewer to [Figure 4 in the appendix of Boratko et al. 
2021](https://proceedings.neurips.cc/paper_files/paper/2021/file/88d25099b103efd638163ecb40a55589-Supplemental.pdf) for a visualization of a Balanced Tree vs an nCRP graph. We note that the Balanced Tree graph is characteristically less random than the nCRP graph. Therefore we hypothesize that the reduced set of negatives $E_{H^*}^-$ is more uniformly informative for the Balanced Tree graphs than for the nCRP graphs, and the change from $\alpha=100$ to $\alpha=500$ for the nCRP might not actually be a trend inverse to the observed Balanced Tree trend.
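The 2-b intuition above, that small negative ratios rarely draw "high-signal" negatives from an unpruned pool, can be sanity-checked with a back-of-envelope expected count; the pool sizes here are hypothetical round numbers, not figures from the paper:

```python
# Expected number of high-signal negatives drawn per positive example when
# sampling k negatives uniformly from the unpruned complement pool.
pool_size = 1_000_000    # hypothetical |complement(E)|
high_signal = 10_000     # hypothetical |E_H^-| after a 99% reduction
for k in (4, 128):
    expected = k * high_signal / pool_size
    print(f"k={k}: ~{expected:.2f} high-signal negatives per positive")
```

With these numbers, k=4 draws roughly 0.04 high-signal negatives per positive while k=128 draws about 1.28, matching the observation that the reduced set $E_{H^*}^-$ helps most in the low-resource regime.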
Rebuttal 1: Rebuttal: We are sharing a pdf with plots for GT-Box with negative ratio $k=128$ (analogous to Figure 7, which is for $k=4$). Pdf: /pdf/54ceb014439bc5d1d5035bce119baabc81ca575a.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper addresses the challenge of training node embedding models on large directed graphs (digraphs) where it is impractical to observe all entries of the adjacency matrix during training, necessitating sampling methods. Recognizing that many entries remain unobserved in very large digraphs, the authors develop a novel framework to identify a minimal subset of entries required to uniquely distinguish a graph among all transitively-closed directed acyclic graphs (DAGs). They provide an explicit algorithm to compute this minimal set and empirically demonstrate that training node embedding models with this subset improves efficiency and performance, assuming the energy function has an appropriate inductive bias. Experiments on synthetic hierarchies and a large real-world taxonomy show robust performance, with improved convergence rates and a reduction in training examples by up to 99%, making the method highly effective in resource-constrained settings. Strengths: 1. This paper is well written. The notation is clear and the literature review is sufficient. 2. By formulating the distinguishing subset of edges of a graph in the notion of a signed digraph, the proposed FINDMINDISTINGUISHER method is able to optimally handle transitively-closed acyclic graphs with minimal support. 3. Box Embeddings are used for node embedding, so that the embedded space has a differentiable structure. Weaknesses: 1. The efficiency and robustness of the proposed method is verified via experiments on various types of hierarchies, including Balanced Tree, nCRP, Price, and MeSH. However, it could be more intuitive if the generic motivation of pursuing hierarchy-awareness were illustrated with real-world applications. Technical Quality: 3 Clarity: 2 Questions for Authors: It would be appreciated if the authors could illustrate the motivation of this work in the pursuit of efficiency and robustness in learning representations for hierarchies with minimal support. 
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging review and questions! > 1. The efficiency and robustness of the proposed method is verified via experiments on various types of hierarchies, including Balanced Tree, nCRP, Price, and MeSH. However, it could be more intuitive if the generic motivation of pursuing hierarchy-awareness were illustrated with real-world applications. > Thank you for this advice. One highly related application is multi-label classification where the label space forms a taxonomy, and where we model the labels by means of region-based embeddings (cf. e.g. Patel et al. 2022). If the embedding of an instance falls within a region corresponding to some fine-grained label (e.g. a *MacBook Pro*), we would like that label to be contained under all its parent labels in the taxonomy (e.g. *< Laptop < PC < electronics < object*). In large product hierarchies, jointly training the instance embedding model together with label embeddings can be a computationally intensive task, where the label-space training can be alleviated by hierarchy-aware sampling. A broader, potentially fruitful application of hierarchy-awareness is active learning with a human-in-the-loop. As we mention in the Introduction, lines 20-22, “*when obtaining annotations for edges of an unknown graph, the full adjacency matrix is unknown to us and we obtain the value of any particular entry by requesting an annotation*”. Building up a taxonomy from open-source human knowledge (and training corresponding embeddings for fast retrieval) can require many human annotations — and as we have observed in the case of MeSH, we can obtain the minimal distinguishing sidigraph with 99% fewer edges (annotations) than its equivalent sidigraph (cf. rightmost column of Table 1, Appendix H). > It would be appreciated if the authors could illustrate the motivation of this work in the pursuit of efficiency and robustness in learning representations for hierarchies with minimal support. 
> Once again, we appreciate this advice and will certainly make the real-world motivations (such as the above-mentioned) more explicit in the introduction to make a stronger paper. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and thank the authors for their candid responses. I maintain my positive opinion on this paper.
SCOREQ: Speech Quality Assessment with Contrastive Regression
Accept (poster)
Summary: The paper proposes a reference-free speech quality prediction framework. The work is based on NOMAD and uses a triplet loss for contrastive regression to mitigate the generalization problem of reference-free speech quality metrics. The authors conduct experiments on various datasets to show that the L2 loss fails to generalize and that the proposed method works. Strengths: ● The work uses a triplet loss to mitigate the generalization problem of reference-free speech quality metrics. ● Experiments are done with open datasets and achieve notable improvements in generalization. ● The experiments are comprehensive. The results demonstrate the advantages of the triplet loss over the L2 loss. Weaknesses: ● In lines 165-167, the authors state "Our experiments show that the adaptive margin’s contribution is minimal, while significant improvement is achieved through our batch-all strategy", but there is no ablation study about the adaptive margin and the batch-all strategy. ● Some spelling mistakes: Figure1 "while SCOREQ quality"; Line 123 "and encoder"; Technical Quality: 3 Clarity: 3 Questions for Authors: ● In the attached code, the authors use the original wav2vec2 base checkpoint. But in Section 3, the paper mentions that the w2v model is finetuned with the SCOREQ loss or trained from scratch; is this a model not mentioned in the paper? ● Why is the Adapt variant in Table 11 much lower than the Const variant? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *In lines 165-167, the authors state "Our experiments show that the adaptive margin’s contribution is minimal, while significant improvement is achieved through our batch-all strategy", but there is no ablation study about adaptive margin and batch-all strategy.* Thank you for highlighting the potential for misunderstanding in our discussion of the adaptive margin and batch-all strategy in lines 165-167. We acknowledge that our terminology was not consistent throughout the paper. To clarify, our experiments did include an ablation study to assess the individual contributions of these components. However, we used inconsistent terminology in that sentence only. In lines 165-167 "batch-all strategy" refers to the "SCOREQ Const" model in all the tables of the paper. The term "adaptive margin" instead refers to the "SCOREQ Adapt" model in the tables. By comparing the "SCOREQ Const" (batch-all with constant margin) and "SCOREQ Adapt" (batch-all with adaptive margin) models, we conducted an ablation study that demonstrates the adaptive margin's impact is minimal. The primary performance improvement over the baseline consistently arises from incorporating the batch-all strategy, even with a constant margin. Had the adaptive margin been a more significant factor, we would have observed a substantial difference between the SCOREQ Const and SCOREQ Adapt models. We will rephrase the sentence like this: "Our experiments demonstrate that the significant performance improvement stems primarily from adopting the batch-all strategy (SCOREQ Const) over the offline triplet sampling used in NOMAD. The additional benefit of using an adaptive margin (SCOREQ Adapt) compared to a constant margin is minimal." ------ *Some spelling mistakes: Figure1 "while SCOREQ quality"; Line 123 "and encoder";* We thank the reviewer for helping to improve the paper's clarity. We have revised these misspellings and rechecked the whole manuscript.
------ *In the attached codes, the authors use the original wav2vec2 base checkpoint. But in Section 3, the paper mentioned the w2v model is finetuned with SCOREQ loss or trained from scratch, is it a model not mentioned in the paper?* Thank you for pointing out this discrepancy. In Section 3 we mention the w2vlight architecture, a smaller version of wav2vec 2.0 that we propose and train from scratch. This w2vlight model does not use pretrained wav2vec 2.0 weights in the Transformer. The attached code currently includes only models finetuned on the original, more powerful wav2vec 2.0 base checkpoint. Upon acceptance of the paper, we will provide the full codebase, including the w2vlight model and its training procedure. The w2vlight experiment was conducted to assess our approach on a less powerful architecture (randomly initialized weights and 4 Transformer layers instead of 12). We prioritized releasing the code for the two MOS predictors finetuned on wav2vec 2.0, as these will be made available as pip packages upon acceptance, offering immediate practical value to the community. -------- *"Why the Adapt variant in Table 11 is much lower than the Const variant?"* Table 11 evaluates how well the encoder representations capture quality information, specifically comparing representations learned with SCOREQ (Adapt and Const) to those learned with L2 loss. Our hypothesis is that L2 loss does not produce representations ordered according to MOS, leading to reduced robustness. The results in Table 11 confirm that both SCOREQ models outperform the L2 loss baseline. The question of why SCOREQ Adapt underperforms SCOREQ Const in Table 11 relates to the nature of the training data and the adaptive margin mechanism. **Noisy Labels in NISQA TRAIN SIM**: SCOREQ Adapt utilizes the MOS distance between samples within a triplet to adjust the margin dynamically.
However, the NISQA TRAIN SIM dataset used for training contains noisy MOS labels due to the limited number of crowdsourced raters. These noisy labels can negatively impact the gradient calculations and hinder the learning of well-ordered embeddings. **Constant Margin in SCOREQ Const**: In contrast, SCOREQ Const employs a fixed margin of 0.2, which is less sensitive to the noise in the labels. This contributes to its more robust performance compared to SCOREQ Adapt in this scenario. Our findings underscore the importance of the batch-all strategy (used in both SCOREQ Const and Adapt) as the primary driver of performance improvement over the L2 loss baseline. While the adaptive margin shows promise in some cases (NR mode), its sensitivity to noisy labels suggests further exploration. We have identified this as an important area for future work. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. My concerns have been solved.
Summary: The paper proposes a triplet loss function for contrastive regression for MOS prediction, which helps the model learn more about the relative rank of speech samples. Strengths: 1. The authors address an important problem in the field of MOS prediction: the L2 loss lacks awareness of rank. 2. The proposed loss function encourages the model to learn more about rank. 3. Experiments verify the effectiveness of the proposed method. Weaknesses: See the questions. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Why is the distance based on absolute values? If the anchor is 2, then the pair (0.5, 4) is the same as (3.5, 4), since the distances are (1.5^2, 2^2) in both cases. In this way, how can we know whether the quality of a sample is better or worse than the anchor? 2. Table 1 is quite important; its explanation should be included in the main text to support the motivation of the paper. Moreover, Appendix H is not clear enough. Can the authors explain it in more detail? 3. It would be much easier for readers to understand if the "SCOREQ Training" section had a figure and discussed the two modes (NR; NMR) in more detail. 4. For the NMR mode, how is the PC calculated, given that the distances are absolute values? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1**. *"Why the distance is the absolute values. If the anchor is 2, then the pair of (0.5, 4) is the same as (3.5, 4) since the distance is both (1.5^2, 2^2)? In this way, how can we know that the quality of a sample is better or worse than the anchor?"* Our approach distinguishes between training and inference behavior. During training, we focus on learning a distance metric where only the magnitude of difference matters, not the direction. This allows the model to learn a feature space where samples with similar quality are pushed close together. The absolute value in the loss function's distance calculation ensures that gradient updates depend solely on the magnitude of the embedding distances. This is expressed mathematically in the constraint of the gradient calculation:

| Action | Condition |
| -------------------------- | ---------------------------------------------- |
| Calculate gradient | if $\|f(x_a) - f(x_p)\|^2 + m > \|f(x_a) - f(x_n)\|^2$ |
| Do not calculate gradient | otherwise |

where $f(x_a)$, $f(x_p)$, $f(x_n)$ are the embeddings of the anchor, positive, and negative samples, respectively, and $m$ is the margin. Consider the reviewer's examples:

Triplet 1: Anchor = 2, Positive = 0.5, Negative = 4
Triplet 2: Anchor = 2, Positive = 3.5, Negative = 4

In both cases, training pushes the positive sample's embedding closer to the anchor's than the negative's, irrespective of the direction of the positive sample. This behavior is desirable because we want to learn an embedding space where similar quality samples have similar representations. However, during inference, we require a directional interpretation of distance to determine whether a sample is better or worse than a reference. To achieve this, we use random clean speech samples as non-matching references. This ensures that a positive distance always means worse quality, providing a clear directionality for quality assessment. See below.
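For concreteness, the margin condition described above can be sketched in a few lines of NumPy. This is a toy illustration with made-up embedding vectors, not the authors' released code; the function name `triplet_hinge` is an assumption for the sketch:

```python
import numpy as np

def triplet_hinge(f_a, f_p, f_n, margin=0.2):
    """Triplet loss on embeddings: non-zero (i.e. a gradient is computed)
    only while d(a, p)^2 + margin > d(a, n)^2. Only the magnitude of the
    embedding distances matters, not the direction of the MOS difference."""
    d_pos = float(np.sum((f_a - f_p) ** 2))
    d_neg = float(np.sum((f_a - f_n) ** 2))
    return max(0.0, d_pos + margin - d_neg)

# Margin satisfied: positive already much closer than negative -> zero loss,
# so no gradient is computed for this triplet.
a, p, n = np.zeros(2), np.array([0.1, 0.0]), np.array([1.0, 0.0])
print(triplet_hinge(a, p, n))  # 0.0

# Margin violated: positive not yet margin-closer -> positive loss.
print(triplet_hinge(a, np.array([0.5, 0.0]), np.array([0.6, 0.0])))
```

Either of the reviewer's triplets would be handled identically by this condition, since only the embedding distances enter the loss.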
**Inference stage** During inference, to predict quality we measure the distance in the learned embedding space between a random clean speech signal $x_{ref}$ and the N degraded signals of the test set $x_{degr}(i)$ for which quality labels are available. To do that, we forward-pass all the signals through the neural network f(), producing the embeddings, and we measure the Euclidean distance between each i-th degraded signal $x_{degr}(i)$ and the reference $x_{ref}$.

| Sample | Predicted Distance |
|---|---|
| 1 | $\| f(x_{ref}) - f(x_{degr}(1)) \|_2$ |
| 2 | $\| f(x_{ref}) - f(x_{degr}(2)) \|_2$ |
| ... | ... |
| N | $\| f(x_{ref}) - f(x_{degr}(N)) \|_2$ |

Since we use absolute values, the predicted distance will always be positive. But this is not an issue, because we use random clean speech as a reference, so a positive distance always means worse quality. **What if we want to use a degraded signal as a reference?** In this case, we would have to remove the absolute value, since the direction can be either positive or negative. This will work regardless of the fact that we used absolute values in the training stage, as explained above. We would like to point out that this scenario of evaluating quality with respect to a degraded reference is outside the scope of our paper. We target applications where signals like the ones in the Librispeech corpus are considered the best achievable quality. **Action** We thank the reviewer for raising this point. We will add a concise explanation in the appendix detailing the difference between training and inference as written above, emphasizing how training learns a distance metric concerned with magnitude, not direction. This addition should improve the paper's clarity and address any potential confusion regarding our methodology. ---- ***2.** "Table 1 is quite important, whose explanation should be included in the main context to support the motivation of the paper. Moreover, the Appendix H is not clear enough.
Can the authors explain it in more detail?"* *Table 1* Table 1 lists the state-of-the-art quality metrics that we used in our paper. Together with Figure 4 and Table 2, it supports understanding the content of Section 4. This section represents another contribution of our paper, beyond the proposed method. Here, we demonstrate that no-reference speech quality metrics suffer from domain mismatch. To the best of our knowledge, no one has conducted such a comprehensive examination of heterogeneous domains in speech quality assessment (we used 11 test sets) to showcase shortcomings of the state of the art. Therefore, we believe that moving Table 1 earlier in the text alone would be difficult, since this would mean moving the whole of Section 4. *Appendix H* In Appendix H, we explain how we produced the plots in Figure 1. We agree that clarity can be improved. To address this issue, we created a block diagram that we will add to Appendix H (see Figure 2 in the attached rebuttal.pdf), and we created an improved version of Appendix H that we will be happy to share with the reviewer during the reviewer-author discussion. ---- ***3.** "It will be much easier for reader to understand if the "SCOREQ Training" section can have a figure and more detailed discuss the two mode (NR; NMR)."* We created a figure illustrating both modes, NMR and NR, which better illustrates what happens during inference. See Figure 1 in the attached rebuttal.pdf. ---- ***4**. For the NMR mode, how to calculate the PC since the distance is the absolute values?* See our answer to point 1, where we explain the difference between the training and inference stages. During inference, we use clean speech as a random non-matching reference, to make sure that a positive distance always means worse quality. Pearson's correlation (PC) is then computed between the predicted distances and the ground-truth MOS labels.
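The NMR scoring and correlation step described in points 1 and 4 can be sketched as follows. This is a toy NumPy example with made-up embeddings and MOS values (in the actual system the embeddings come from the trained encoder), and the helper name `nmr_distances` is assumed for illustration:

```python
import numpy as np

def nmr_distances(f_ref, f_degr):
    """Euclidean distance of each degraded-signal embedding to the embedding
    of one random clean (non-matching) reference signal."""
    return np.linalg.norm(f_degr - f_ref, axis=1)

f_ref = np.zeros(4)                                          # clean reference embedding
f_degr = np.stack([np.full(4, v) for v in (0.1, 0.5, 1.0)])  # 3 test-signal embeddings
mos = np.array([4.5, 3.0, 1.5])                              # toy ground-truth MOS

d = nmr_distances(f_ref, f_degr)  # always >= 0: larger distance = worse quality
# Because larger distance means worse quality, Pearson's correlation with MOS
# is computed on the negated distances (equivalently, |PC| can be reported).
pc = np.corrcoef(-d, mos)[0, 1]
print(d, pc > 0.9)
```

With these toy values the distances increase monotonically as MOS drops, so the negated distances correlate strongly and positively with MOS.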
--- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for the detailed response, which has solved most of my concerns. I have adjusted my score accordingly.
Summary: The paper presents SCOREQ, a novel approach for speech quality prediction using a triplet loss function for contrastive regression. SCOREQ addresses domain generalization issues in current no-reference speech quality metrics by incorporating Mean Opinion Score (MOS) labels and improving training efficiency. Unlike NOMAD, which uses the Neurogram Similarity Index Measure (NSIM) and offline triplet preparation, SCOREQ uses in-batch triplet combinations and MOS for better alignment with human perception. The results show that SCOREQ significantly improves generalization and robustness over existing methods, making it a more effective solution for speech quality assessment. Strengths: The paper exhibits several notable strengths across originality, quality, clarity, and significance: Originality: The paper presents a novel approach for speech quality prediction by introducing SCOREQ, which leverages a triplet loss function for contrastive regression. This approach innovatively addresses the shortcomings of existing no-reference speech quality metrics, particularly through the use of MOS labels to align better with human perception. Quality: The experimental results are extensive and robust, showcasing thorough benchmarking and comparison with state-of-the-art methods. The authors provide detailed insights into the architectural design decisions and their impact on performance, demonstrating a deep understanding of the problem domain. Clarity: The distinction between SCOREQ and NOMAD is clearly articulated, highlighting the improvements in training efficiency and performance. The paper effectively communicates the rationale behind replacing NSIM with MOS and the benefits of using in-batch triplet combinations. Significance: The work addresses the important problem of speech quality estimation, a critical area for numerous applications in speech processing. 
The results show high correlation with human perception, underscoring the practical relevance and potential impact of SCOREQ. Additionally, the paper demonstrates the utility of the proposed approach in both no-reference and non-matching-reference scenarios, supported by comprehensive ablation studies that validate the effectiveness of the proposed method. Weaknesses: The paper presents several weaknesses that could be addressed to improve its overall impact and effectiveness (as also mentioned in the paper itself): Limited Test Datasets: The evaluation is primarily focused on certain domains of speech quality, lacking diversity in test datasets. Specifically, domains such as voice conversion and others are underrepresented. Broadening the scope of datasets could strengthen the generalization claims and applicability across varied speech tasks. Scope of Application: While the SCOREQ loss shows promise for various regression tasks, the study is limited to the speech domain. Exploring and evaluating the loss function in other regression contexts would provide a more comprehensive understanding of its potential and versatility. The current limitation narrows the impact of the proposed method. Memory Requirements: The SCOREQ loss function requires three times the memory compared to the L2 loss, due to the need for backpropagation on all three samples in the triplet. This could be a significant drawback for practical implementation, especially in memory-constrained environments. Strategies to mitigate this memory overhead or discussions on feasible solutions would enhance the practicality of the method. Experimental Scope: The experiments, while extensive, could benefit from additional comparisons with a broader range of state-of-the-art models. Including more baseline models and alternative approaches in the experimental setup would provide a more robust validation of the proposed method's superiority. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. 
Correlation with Human Judgments: Could you provide more detailed analysis on how well the SCOREQ framework correlates with human judgments on both a sample and distribution level? Specifically, it would be valuable to see if the framework can capture variances in human perception on a population level. This additional data would strengthen the claims of alignment with human subjective assessments. 2. Evaluation on Diverse Domains: Have you considered evaluating SCOREQ on more diverse datasets, such as those from different speech tasks like voice conversion? Expanding the range of test datasets could provide more insights into the generalizability of your method. 3. Application Beyond Speech Domain: While your study focuses on speech quality assessment, do you have any preliminary insights or plans for applying SCOREQ to other regression tasks beyond the speech domain? Exploring this could help in understanding the broader applicability of your proposed loss function. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *"Correlation with Human Judgments: Could you provide more detailed analysis on how well the SCOREQ framework correlates with human judgments on both a sample and distribution level? Specifically, it would be valuable to see if the framework can capture variances in human perception on a population level. This additional data would strengthen the claims of alignment with human subjective assessments."* We have meticulously evaluated our SCOREQ framework using both per-condition and per-sample analysis, depending on the availability of condition labels within each dataset. While per-condition evaluation is generally preferred (as recommended by ITU P.1401 [1]) to mitigate individual listener biases and content-related variations, it was not feasible for the TENCENT-REV and TENCENT datasets since condition labels were not available. In both per-condition and per-sample analysis (TENCENT-REV, TENCENT) we show improvement over the baseline model trained with L2 loss function, and the other speech quality metrics. Regarding the speech synthesis test sets (VoiceMOS Test 1, VoiceMOS Test 2), we aggregated by speech synthesis systems as done in the VoiceMOS challenge baselines [2]. ---- **Limited Test Datasets** *"Evaluation on Diverse Domains: Have you considered evaluating SCOREQ on more diverse datasets, such as those from different speech tasks like voice conversion? Expanding the range of test datasets could provide more insights into the generalizability of your method."* Yes, we evaluated 2 different test sets that include voice conversion data (VoiceMOS Test 1, VoiceMOS Test 2). See answer in global review: **Diversity of Test Sets** ---- **Memory Requirements** *"The SCOREQ loss function requires three times the memory compared to the L2 loss, due to the need for backpropagation on all three samples in the triplet. This could be a significant drawback for practical implementation, especially in memory-constrained environments. 
Strategies to mitigate this memory overhead or discussions on feasible solutions would enhance the practicality of the method."* See answer in global rebuttal: **Memory Constraints** ----- *"Application Beyond Speech Domain: While your study focuses on speech quality assessment, do you have any preliminary insights or plans for applying SCOREQ to other regression tasks beyond the speech domain? Exploring this could help in understanding the broader applicability of your proposed loss function."* See answer in global rebuttal: **Domain-specific evaluation** ---- **References** [1] ITU-T Recommendation P.1401: Methods, metrics and procedures for statistical evaluation, qualification and comparison of objective quality prediction models, 2012 [2] The voicemos challenge 2022. arXiv preprint arXiv:2203.11389. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I would like to thank the authors for their rebuttal. I have read the reviews and rebuttals and at the moment would like to keep my score the same.
Summary: This paper proposes a novel speech quality assessment method based on contrastive regression. Many speech quality assessment predictors are obtained through supervised training by estimating a MOS value given an audio clip. However, it is well known that these supervised training methods have serious overfitting issues on the speech data domain they were trained on and cannot be used for other domains (e.g., DNSMOS trained with noisy-clean speech data cannot be used for TTS quality assessment, and vice versa). To tackle the problem, the paper proposes a novel contrastive regression method, which extends NOMAD in several aspects (e.g., dynamic training, contrastive regression, etc.). The experimental results clearly show the improvement over NOMAD and the robustness to unseen domains. Strengths: - As the authors say, speech quality assessment will become an important research topic as generative speech AI studies become active. - Reasonable extensions from prior studies (NOMAD) - Robust performance across domains Weaknesses: - The paper has several readability issues. The introduction generally uses some technical terms without explanations (or requires more understanding of the background techniques). For example: - The explanations (especially the introduction part) are twisted. Section 1 uses Figure 4 as an example, but it is very difficult to understand since we do not have explanations of the related terminology and background for Figure 4. Ditto for line 38 about Figure 1 (a). - "Rank-and-Contrast (RnC)" suddenly appears without any explanation. - The notation is not consistent. ${\boldsymbol x}$ and $x$ are mixed. - Missing references. There have been several attempts to resolve the domain mismatch issue in speech quality assessment prediction by using self-supervised learning or unsupervised methods, e.g., [1] and [2].
I recommend the authors explore this research direction and compare against these methods (I'm sure the proposed method can outperform them because it uses more supervised information). [1] Fu, Szu-Wei, et al. "Self-Supervised Speech Quality Estimation and Enhancement Using Only Clean Speech." The Twelfth International Conference on Learning Representations. [2] Maiti, Soumi, et al. "Speechlmscore: Evaluating speech generation using a speech language model." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. Technical Quality: 4 Clarity: 3 Questions for Authors: Can this approach be applied to general audio and music generation applications? There is a lot of demand for evaluating their quality. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The paper describes the limitations of the proposed method in terms of scope, memory usage, and cross-domain issues well. It also describes the societal impact of generative model research in speech. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *"The explanations (especially the introduction part) are twisted. Section 1 uses Figure 4 as an example, but it is very difficult to understand since we do not have explanations about the related terminologies and backgrounds in Figure 4—Ditto for line 38 about Figure 1 (a)."* Figure 4 shows an experiment that we conducted to demonstrate that state of the art quality metrics suffer from domain mismatch. We refer to this figure early in the paper in Section 1 because there is no other paper that systematically evaluated this problem. Our paper also contributes to this beyond the proposed SCOREQ. We understand that the full context and terminology might not be immediately clear in Section 1. However, we provide a detailed explanation of the terminology, models, and experimental setup in Section 4, providing the necessary information to fully interpret the significance of Figure 4 within the broader context of our work. Figure 1.a serves as a visual representation of the central issue addressed by our paper: the limitations of the L2 loss and how SCOREQ offers a solution. By placing it at the beginning, we aim to clearly establish the research question for the reader early in the text. We acknowledge that the specific details underlying Figure 1.a might not be immediately apparent. To ensure clarity, we have included a reference to Appendix H in the figure caption where we provide a comprehensive description of the methodology and data used to create this figure. To improve the clarity of the paper, we will add a figure (see Figure 2 in the attached pdf), that describes in detail how the plot has been created. ---- *"Rank-and-Contrast (RnC)" suddenly appears without any explanations"* We agree with the reviewer that the abrupt introduction of "Rank-and-Contrast (RnC)" could be confusing. Our intention was to briefly reference prior work that had identified the weak representation issue in L1/L2 loss, specifically in non-audio regression tasks. 
We then provide a more detailed explanation of the RnC framework in Section 2. To avoid any ambiguity, we propose to rephrase the sentence as follows: "Other studies [61] have observed this problem in non-audio regression tasks when minimizing the L1 loss." ---- *"The notation is not consistent x and x are mixed."* Thank you for pointing out this potential inconsistency. We have carefully reviewed our notation throughout the paper and believe we have consistently used boldface to denote vectors (e.g., **x**) and non-boldface to denote scalars (e.g., x). However, we are open to making any necessary corrections. To assist us in doing so, could you please specify the line number(s) where you believe the notation is inconsistent? We will gladly address any specific instances you identify. ---- *"Missing references. There have been several attempts to resolve the domain mismatch issue in speech quality assessment prediction by using self-supervised learning or unsupervised methods, e.g., [1] and [2]. I recommend the authors explore this research direction and compare them (I'm sure the method can outperform this because of the use of more supervised information compared with them)."* Thank you for highlighting these references. We acknowledge that we missed these two papers, which explore unsupervised methods in speech quality assessment. While a direct comparison with our supervised SCOREQ method is not achievable without running the experiments, a preliminary analysis using available data from Fu et al. [1] suggests our approach outperforms theirs on the Tencent and Tencent Rev datasets using Pearson's correlation.

| Dataset | Ours SCOREQ | SpeechLM Score | Fu et al. |
|:-------------|:-------------------|:----------------------|:-----------------|
| Tencent Rev | 0.79 | 0.59 | 0.59 |
| Tencent | 0.86 | 0.71 | 0.72 |

Our primary focus remains on the limitations of the supervised L2 loss and how SCOREQ addresses these, as reflected in our comparison with 6 quality metrics (NISQA, NR-PESQ, NR-SISDR, NORESQA-MOS, NOMAD, SSL L2 LOSS). However, we recognize the innovative nature of these unsupervised methods and will include them in the background section (Section 2) to provide a broader context for our work. ---- *"Can this approach be applied to general audio and music generation applications? There are a lot of demands on evaluating the quality of them."* We agree with the reviewer that demand is high in recent generative approaches for audio. We believe that our approach can be used for music and audio generation. Other metrics, such as POLQA [2] and ViSQOL [3], have been successfully adapted from speech to music. Also, the wav2vec model has already been finetuned for music quality evaluation [4]. Other audio encoders could be evaluated, e.g., CLAP, which is used successfully for the Fréchet Audio Distance [5]. The challenge would be finding an appropriate dataset for this task. To the best of the authors' knowledge, there is no open-source music generation dataset labelled with quality scores that is big enough for training. ---- **References** [1] "Self-Supervised Speech Quality Estimation and Enhancement Using Only Clean Speech." ICLR (2024). [2] Subjective and objective assessment of perceived audio quality of current digital audio broadcasting systems and Web-casting applications," IEEE Trans. Broadcast., vol. 61, no. 3, pp. 407–415, Sep. 2015. [3] Objective assessment of perceptual audio quality using ViSQOLAudio. IEEE Transactions on Broadcasting, 63(4), 693-705. [4] Audio quality assessment of vinyl music collections using self-supervised learning.
In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE. [5] Adapting frechet audio distance for generative music evaluation. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1331-1335). IEEE --- Rebuttal Comment 1.1: Comment: I appreciate the authors' additional experiments, but as the authors say, this is not a fair comparison, and it would not change my overall impressions. Also, I still feel that the explanations are twisted (thanks for the detailed explanations, but these explanations repeat the current paper structure). I think it is better to have brief explanations about them and may also add some remarks that they are explained later in Section 4 if needed. I could not expect significant improvement in the effectiveness and readability of this paper. Thus, I want to keep my score as it is. > Thank you for pointing out this potential inconsistency. We have carefully reviewed our notation throughout the paper and believe we have consistently used boldface to denote vectors (e.g., x) and non-boldface to denote scalars (e.g., x). However, we are open to making any necessary corrections. To assist us in doing so, could you please specify the line number(s) where you believe the notation is inconsistent? We will gladly address any specific instances you identify. Do you mean that $x_a$, $x_p$, and $x_n$ lines 133 and $x_i$ and $x_{test}, x_{nmr}$ in line 178--182 are scalar? If so, please explain them. Also, subscripts and superscripts are mixed ($a$, $p$, $n$, $i$ appear as subscripts and superscripts in equations 1--5 and inline explanations in page 4.). If they are intentional, please explain it.
Rebuttal 1: Rebuttal: **Rebuttal** We thank all the reviewers for their helpful feedback. Three common questions are raised by reviewers **2CaS** and **XmpT**: diversity of test sets, applicability of our method in other tasks, and memory constraints. We address these concerns here. The remaining questions and concerns are addressed in the individual responses. ----- **Diversity of the test sets** @**2CaS**: *"While your evaluation covers multiple datasets, do you believe these datasets comprehensively cover the variability in speech quality assessment? What are the potential limitations of your chosen datasets?* **@XmpT**: *"Evaluation on Diverse Domains: Have you considered evaluating SCOREQ on more diverse datasets, such as those from different speech tasks like voice conversion?"* We are confident that our datasets comprehensively cover the diverse landscape of speech quality degradations. As detailed in Table 5, our *11 test sets* include approximately *16,000* speech recordings subjected to *~12,000* different quality conditions across four languages (English, Japanese, Chinese, German). These sets include a broad spectrum of scenarios, from simulated and live degradations in modern video calls (NISQA TEST SIM, NISQA TEST P501, TENCENT, NISQA TEST LIVE) to real-world reverberation (TENCENT-REV) and traditional codec impairments and packet loss (P23EXP1, P23EXP3). We also include speech enhancement and background noise (NOIZEUS), as well as both traditional and cutting-edge deep learning-based text-to-speech and voice conversion systems (VoiceMOS Test 1, VoiceMOS Test 2). Moreover, our recordings span a wide range of bandwidths (8 kHz, 16 kHz, 48 kHz). These datasets are not only extensive but also representative of those commonly used for evaluating speech quality assessment. 
The NISQA and TENCENT sets were specifically used for the NISQA challenge at Interspeech, the P23EXP1 and P23EXP3 are datasets developed for ITU standards, while the VoiceMOS datasets are widely used benchmarks for developing and testing synthetic speech quality predictors. ---- **Domain-Specific Evaluation** **@2CaS** *"It is mentioned that the SCOREQ method could be applicable to other regression-based tasks. Can you provide more details or preliminary results on how SCOREQ performs in other domains?"* **@XmpT** *"Application Beyond Speech Domain: While your study focuses on speech quality assessment, do you have any preliminary insights or plans for applying SCOREQ to other regression tasks beyond the speech domain?"* Our paper introduces SCOREQ as a novel approach to tackle the critical issue of domain mismatch in speech quality assessment. We evaluated our approach using 11 test sets and 2 different training domains, specifically targeting this research question. Our evaluation has been defined by other reviewers as *"showing the robust performance across the domain"* (**Sj6A**), *"experiments verifies the effectiveness of the proposed method"* (**FFmN**), *"The experiments are comprehensive. The results demonstrate the advantages of triplet loss over L2 loss."* (**NTty**). With the SCOREQ loss, the model learns embeddings where samples with similar quality are close. While our focus is on quality regression targets, this approach can be adapted to any regression task by substituting the target and defining a relevant concept of similarity. This broader applicability extends to other fields, such as computer vision. Although exploring general regression tasks is beyond the current scope, we acknowledge this potential in Appendix A as a direction for future work. Our work contributes to the relatively unexplored area of contrastive regression, proposing a new solution. 
---- **Memory Constraints** **@2CaS** *"How does the computational complexity of SCOREQ compare to traditional L2 loss models and other state-of-the-art methods in terms of training time and memory usage?"* **@XmpT** *"The SCOREQ loss function requires three times the memory compared to the L2 loss, due to the need for backpropagation on all three samples in the triplet. This could be a significant drawback for practical implementation, especially in memory-constrained environments."* In Appendix A, we mentioned that the SCOREQ loss occupies 3 times the memory of the L2 loss; this is technically incorrect. That limitation applies to the baseline NOMAD. Unlike NOMAD, our implementation utilizes the batch-all strategy and PyTorch broadcasting for efficient triplet loss computation between all valid combinations. This approach allows us to load the same batch of data in memory for both the L2 and SCOREQ loss, resulting in the same GPU memory usage during training. In terms of training time, the SCOREQ loss has a slightly increased execution time during training on either GPU or CPU. We conducted a quick experiment to verify this. We computed both L2 and SCOREQ loss using the same batch of data, loading 4 waveforms and measuring the computational time on the GPU for the entire operation required for one batch, including: time to load waveforms, forward pass, loss calculation, backward pass, and zero gradients. On average, we found that the L2 loss takes ~0.65 seconds, while the SCOREQ loss takes ~0.75 seconds. This slight increase is expected since the batch-all strategy necessitates computing distances between all triplet combinations, including invalid triplets, which are discarded with the 3D boolean mask. These limitations are outweighed by the significant improvements in out-of-domain and out-of-distribution performance, as highlighted in this and other reviews (2CaS, XmpT, Sj6A, FFmN, NTty). 
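The batch-all computation described here (pairwise distances expanded by broadcasting into all (anchor, positive, negative) combinations, filtered by a 3D boolean validity mask) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses NumPy broadcasting for self-containedness (the paper's implementation uses PyTorch), a constant margin, and the validity rule stated in the reviews, namely that the anchor-positive MOS distance must be smaller than the anchor-negative one.

```python
import numpy as np

def batch_all_triplet_loss(emb, mos, margin=0.2):
    """Batch-all triplet loss over a batch of embeddings (sketch).

    emb: (B, D) embeddings; mos: (B,) quality labels.
    A triplet (a, p, n) is valid when |mos_a - mos_p| < |mos_a - mos_n|.
    """
    # Pairwise embedding distances d[i, j] = ||emb_i - emb_j||, shape (B, B)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    # Pairwise MOS distances, shape (B, B)
    dm = np.abs(mos[:, None] - mos[None, :])
    # Broadcast to (B, B, B); axes = (anchor, positive, negative)
    hinge = np.maximum(d[:, :, None] - d[:, None, :] + margin, 0.0)
    # 3D boolean mask selecting valid triplets
    valid = dm[:, :, None] < dm[:, None, :]
    idx = np.arange(len(mos))
    valid[idx, idx, :] = False  # drop degenerate anchor == positive triplets
    return float(hinge[valid].mean()) if valid.any() else 0.0
```

Because the same `(B, D)` batch tensor feeds both the L2 and the triplet computation, memory usage matches the L2 baseline; only the extra distance computation over the `(B, B, B)` grid adds time, consistent with the ~0.65 s vs ~0.75 s timings reported above.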
This information, including the experiment results, will be added to Appendix A. The statement about 3 times memory occupation will be removed. Pdf: /pdf/48c651fbf601336f1549a074e6b3a920d3249b8d.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper "SCOREQ: Speech Quality Assessment with Contrastive Regression" tackles the challenges of Out-of-Distribution (ODS) and Out-of-Domain (ODM) problems by introducing a novel approach using a triplet loss function designed for contrastive regression. ### Handling ODS and ODM Problems (Domain Generalization Challenges) State-of-the-art no-reference speech quality models often fail to generalize well to unseen audio degradations. This domain mismatch is particularly problematic given the rapid advancements in generative speech technologies such as neural speech coding, speech enhancement, and speech synthesis. The authors highlight that existing methods struggle with mapping high-dimensional speech data to a low-dimensional quality space, leading to poor generalization across different domains. ### SCOREQ Approach Contrastive Regression: SCOREQ employs a triplet loss function designed for contrastive regression to create a quality manifold. This manifold is structured to order embeddings based on their similarity to MOS, thereby improving the model's ability to generalize to new domains. - On-the-Fly Triplet Mining: Unlike traditional methods that use a fixed set of triplets, SCOREQ generates triplets dynamically during training. This ensures that the model is exposed to a diverse set of triplets, enhancing its robustness to different types of audio degradations. - Adaptive Margin: The main novelty of this work is the incorporation of an adaptive margin in its loss function, which adjusts based on the MOS distance between samples. This margin helps the model learn a more nuanced representation of quality differences, further improving generalization. ### Batch-All for Regression Technique The Batch-All strategy for regression is adapted from person re-identification tasks but modified to suit the continuous nature of MOS labels. 
This technique ensures that every possible triplet combination within a batch is considered during training, promoting a more comprehensive learning process. - Triplet Loss for Regression: The triplet loss used in SCOREQ is formulated to ensure that the embedding distance between an anchor and a positive sample (closer in quality) is smaller than the distance between the anchor and a negative sample (further in quality). - Mask Generation: For each batch, a 3D mask is generated to identify valid triplets. This mask ensures that only triplets where the MOS distance between the anchor and the positive is smaller than the MOS distance between the anchor and the negative are used in training. - Loss Function: The SCOREQ loss function ensures that valid triplets contribute to the learning process, promoting the creation of a quality-aware embedding space. - Adaptive Margin: The adaptive margin, a **key innovation** of this work, replaces the fixed margin with one that dynamically adjusts based on the MOS distances. This allows the training to adapt more effectively to the continuous nature of speech quality, ensuring that the model learns more accurate and generalizable representations. ### Summary The SCOREQ model leverages a contrastive regression framework with a batch-all triplet mining strategy and an adaptive margin to improve the generalization of speech quality metrics. By ensuring that the embeddings are ordered based on quality differences and dynamically generating valid triplets during training, SCOREQ addresses the domain generalization issues, making it robust against both ODS and ODM scenarios. The adaptive margin, in particular, stands out as the main novelty of this work, significantly enhancing the model's ability to generalize across diverse and unseen speech quality conditions. 
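The adaptive-margin idea summarised above can be sketched per triplet as follows. This is one plausible, illustrative formulation (the exact expression is defined in the paper): the constant margin is replaced by a quantity derived from the MOS labels, here the amount by which the negative is farther from the anchor in MOS than the positive.

```python
def adaptive_margin_triplet(d_ap, d_an, mos_a, mos_p, mos_n):
    """Triplet hinge loss with an adaptive margin derived from MOS labels.

    d_ap, d_an: embedding distances anchor-positive / anchor-negative.
    NOTE: this particular margin is an assumed, illustrative form,
    not necessarily the paper's exact definition.
    """
    margin = abs(mos_a - mos_n) - abs(mos_a - mos_p)  # assumed form
    return max(d_ap - d_an + margin, 0.0)
```

Compared with a fixed margin, the penalty now scales with how different the positive and negative are in quality, which matches the continuous nature of MOS labels emphasised in the summary.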
Strengths: The paper presents a novel approach to speech quality assessment by introducing a contrastive regression framework, SCOREQ, which employs a triplet loss function adapted for regression tasks. This approach is original in several ways: - Adaptive Margin: The main novelty lies in the adaptive margin, which adjusts dynamically based on the MOS distances between samples. This innovative use of an adaptive margin for regression tasks enhances the learning process by ensuring more accurate and nuanced representation of quality differences. - Contrastive Regression: While contrastive learning has been extensively used in classification tasks, its application to regression tasks, particularly in speech quality assessment, is novel. The paper successfully adapts and extends the principles of contrastive learning to handle continuous labels, addressing the domain generalization shortcomings of existing metrics. - On-the-Fly Triplet Mining: The dynamic generation of triplets during training is a creative solution to the limitations of fixed triplet sets, providing a more comprehensive and robust learning experience for the model. Quality The significance of the paper is substantial: - Improved Generalization: By addressing the domain mismatch issues in speech quality assessment, SCOREQ provides a significant advancement in the field. Its ability to generalize well across diverse and unseen conditions makes it highly valuable for real-world applications. - Wide Applicability: The principles and techniques introduced in the paper, such as contrastive regression with adaptive margins, have potential applications beyond speech quality assessment. *They could be adapted for other regression-based tasks in various domains*. Practical Impact: The ready-to-use speech quality metrics developed from SCOREQ can be directly applied in various speech processing applications, such as neural speech coding, speech enhancement, and synthesis. 
This practical impact underscores the importance of the contributions made by the paper. Weaknesses: - While the paper demonstrates strong performance of SCOREQ in speech quality assessment, its evaluation is limited to this specific domain. The authors suggest that the method could be applicable to other regression-based tasks, but they do not provide empirical evidence to support this claim. To strengthen the paper, the authors could discuss more thoroughly the potential applications of SCOREQ in other domains and possibly provide preliminary results or case studies to illustrate its broader applicability. - The adaptive margin and batch-all strategy, while innovative, introduce additional computational complexity and memory usage. The paper briefly mentions this but does not provide a detailed analysis of the computational overhead and how it compares to existing methods. Including a more detailed discussion on the computational costs and providing quantitative comparisons of training times and memory usage with other state-of-the-art methods would help practitioners understand the trade-offs involved. - The paper does not address the potential for real-time applications of SCOREQ. Real-time processing is critical for many practical applications of speech quality assessment. The authors could provide insights into the feasibility of real-time implementations of SCOREQ, possibly suggesting optimizations or trade-offs that could make real-time usage more viable. Technical Quality: 3 Clarity: 3 Questions for Authors: - It is mentioned that the SCOREQ method could be applicable to other regression-based tasks. Can you provide more details or preliminary results on how SCOREQ performs in other domains? - How does the computational complexity of SCOREQ compare to traditional L2 loss models and other state-of-the-art methods in terms of training time and memory usage? 
- While your evaluation covers multiple datasets, do you believe these datasets comprehensively cover the variability in speech quality assessment? What are the potential limitations of your chosen datasets? - Have you considered the potential for real-time applications of SCOREQ? If so, what are the main challenges and how might they be addressed? - Can you elaborate on the rationale behind your choice of hyperparameters, particularly for the adaptive margin and triplet loss configurations? How sensitive is SCOREQ to these hyperparameters? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have made an effort to discuss the limitations of their work, particularly in the section on computational complexity and domain-specific evaluation. However, there are areas where the discussion could be more thorough: 1. Memory and Computational Requirements: The paper briefly mentions that the SCOREQ loss requires three times the memory of the L2 loss due to the need to compute the forward pass for anchor, positive, and negative samples. A more detailed analysis of the computational and memory requirements, including quantitative comparisons with other methods, would provide a clearer understanding of the trade-offs involved. This should include discussions on the feasibility of deploying SCOREQ on different hardware setups, especially those with limited resources. 2. Domain-Specific Evaluation: The evaluation of SCOREQ is primarily limited to speech quality assessment. While this domain is well-covered, the broader applicability of SCOREQ to other regression tasks is not empirically validated. The authors should acknowledge this limitation more explicitly and propose future work to explore and validate the generalizability of SCOREQ in other domains. This could include preliminary results or theoretical discussions on potential applications outside speech quality assessment. 3. 
Real-Time Application Feasibility: The potential for real-time applications is not addressed in the paper. The authors should discuss the challenges and feasibility of implementing SCOREQ for real-time applications. This includes potential optimizations and trade-offs that might be necessary to achieve real-time performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *1. "It is mentioned that the SCOREQ method could be applicable to other regression-based tasks. Can you provide more details or preliminary results on how SCOREQ performs in other domains?"* See our answer in global rebuttal: **Domain-Specific Evaluation** ---- *2. "How does the computational complexity of SCOREQ compare to traditional L2 loss models and other state-of-the-art methods in terms of training time and memory usage?"* See our answer in global rebuttal: **Memory Constraints** ---- *3. "While your evaluation covers multiple datasets, do you believe these datasets comprehensively cover the variability in speech quality assessment? What are the potential limitations of your chosen datasets?"* See our answer in global rebuttal: **Diversity of Test Sets** ----- *4. "Have you considered the potential for real-time applications of SCOREQ? If so, what are the main challenges and how might they be addressed?"* SCOREQ training does not inherently affect real-time complexity at inference time, as the underlying model architecture remains the same as with L2 loss. While our primary model utilizes the 95M parameter wav2vec 2.0 architecture, we also demonstrate SCOREQ's effectiveness with the smaller w2vlight model (Table 8), showing improved performance over the L2 baseline. This suggests SCOREQ can enable training of more compact networks that reduce time complexity at inference time, as discussed in Section 5.2. It is important to note that real-time speech quality assessment involves numerous engineering considerations beyond model architecture, such as handling silences, temporal pooling, and targeted degradation monitoring. These aspects are outside the scope of this paper, which focuses on the core model training methodology. --- *5. "Can you elaborate on the rationale behind your choice of hyperparameters, particularly for the adaptive margin and triplet loss configurations? 
How sensitive is SCOREQ to these hyperparameters?"* The adaptive margin does not require any hyperparameters, since it uses the regression targets to calculate the margin. If the question is about the value of the constant margin, we chose 0.2, the most commonly used value for the triplet loss in other work [1] and in the baseline NOMAD [2]. ---- **References** [1] "Facenet: A unified embedding for face recognition and clustering." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. [2] "NOMAD: Unsupervised learning of perceptual embeddings for speech enhancement and non-matching reference audio quality assessment." ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024.
null
null
null
null
null
null
Enhancing Robustness in Deep Reinforcement Learning: A Lyapunov Exponent Approach
Accept (poster)
Summary: This work studies adversarial RL and policy stability from the perspective of the Lyapunov spectrum. A regularization term is introduced to encourage more robust policies. Strengths: - A novel idea being introduced from classical literature and implemented in the deep RL setting - The regularization introduced is effective at smoothing out trajectories generated by learned policies - This can have important real-world effects in safety-critical / sensitive applications - Opens work for future (theory/computational/applications) studies to build off Weaknesses: The weaknesses I've found are mostly minor, but I still hope you can address: - The correlation between Figure 2 and 3 is not immediately clear to me; can you please discuss this further? - On that note, the improvement in terms of reward is not clear (I know this is not necessarily "the point" but given the discussion of reward trajectories, it would be nice to see further improvement here) - How error-prone is the calculation of LEs discussed in Sec 2.2? - It's not very clear to me the use of "no actions" in the figures. Can you explain how this "benchmark" should be interpreted? How about in environments where there is no such "do nothing" action? - Can you include non-scalar comparisons of algorithms (cf. https://github.com/google-research/rliable)? - The appendix is a bit sparse. Can you provide any further experiment details? Technical Quality: 4 Clarity: 3 Questions for Authors: - Can you please provide legends in the figures? If you have the data, can you include a third panel in Fig. 6 corresponding to the regularized policy? (otherwise this figure is ambiguous without comparison) - Maybe I missed it, but what is $S$ in Eq. 4? - Can you measure the divergence in trajectories such as in Fig. 4 & 8 and compare to the calculated LE? - By how much do you weight the regularization in Eq. 5? - Can you discuss the use of hidden states $h$? 
It isn't clear to me how this affects the loss beyond what the first term contributes These questions may open new areas of research, and I don't intend for them to be answered within this paper, but I'm curious to hear your thoughts on the following: - How does this work relate to the idea of imposing Lipschitz constraints / regularization on DNNs? I see it as an orthogonal direction, but can they solve the same problem? If so, can they be compared? - Can we use pre-trained policies, then finetune them wrt the LE regularization to make them more robust? - In 2.3, can you discuss the case of $\gamma \to 1$? minor typographical: - Eq. 1, I believe you need a conditional on the state $s$ and an expectation over initial states. - L161: "...low levels of chaos **as** they have..." - Eq. 5 use parentheses over both t-dep terms - Please increase fontsizes in Fig 7 Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Can you please include limitations in Sec 7? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our work. We are delighted that you are interested in our paper! The questions you raised are very insightful, and we would like to share some opinions about them. ### Weaknesses - **The correlation between Figure 2 and 3 is not immediately clear to me**\ These figures show three different metrics attained by various deep RL agents when controlling environments sampled from the DeepMind control suite. Figure 2 provides the average reward attained, while Figure 3 provides the estimated MLE and SLE for each policy-environment pair. We include these figures to demonstrate that control systems which appear to be performing well can still produce chaotic and unstable state dynamics. - **The improvement in terms of reward is not clear**\ Improving the performance of Dreamer V3 is not the objective of this work, as we instead focus on improving the stability of the state trajectories. We expect that including the regularisation term introduces a tradeoff between stability and performance. Despite this, we find that regularised policies perform better than the state-of-the-art approach in four out of the seven test environments. Furthermore, when observation noise is introduced, we find our approach matches or outperforms all baselines in all the environments. - **How error-prone is the calculation of LEs discussed in Sec 2.2?**\ When calculating the LEs using the method outlined by Benettin et al., we need to define the initial perturbation size, time horizon, normalisation frequency, and the number of initial states. We ran ablation studies for all these values, which can be included in the Appendix of the camera-ready version. From these additional experiments, we find that the calculation of the LEs is consistent for all parameter values except initial perturbation size, which can cause differing LE values when too large. 
Therefore, the calculation of the LEs discussed in Section 2.2 is not error-prone, provided a suitably small initial perturbation size is used. - **It's not very clear to me the use of "no actions" in the figures.**\ The "no action" benchmark is used to show that chaos is introduced by the DNN policy and not the environment. At each time step, this baseline applies the fixed action $\textbf{0}^m$; thus, no additional torques are applied to the available joints. This baseline is used to evaluate the stability of each environment without any control intervention. Since each environment has MLE = 0 and SLE $\leq$ 0 when controlled by the "no action" baseline, we find that these systems are naturally invariant to small perturbations. In contrast, when controlled by deep RL policies, these environments have nonzero MLE. Therefore, we can conclude that the level of chaos found in the control interaction is produced by the DNN policies. - **Can you include non-scalar comparisons of algorithms?**\ This will be included in the camera-ready version. - **The appendix is a bit sparse. Can you provide any further experiment details?**\ Further experiment details, including Lyapunov Exponent ablation studies, will be included in the appendix of the camera-ready version. ### Questions - **Can you please provide legends in the figures? If you have the data, can you include a third panel in Fig. 6 corresponding to the regularized policy?**\ This will be included in the camera-ready version. - **Maybe I missed it, but what is $S$ in Eq. 4?**\ $S$ is a normalising term used to approximately scale the returns to the range [0,1]. For further details of Equation 4, please refer to the comment in the Global Response. - **Can you measure the divergence in trajectories such as in Fig. 4 & 8 and compare to the calculated LE?**\ These values will be included in the camera-ready version. - **By how much do you weight the regularization in Eq. 
5?**\ The regularisation term used in Equation 5 is weighted equally with the policy loss term. We are currently investigating the impact that varying this weight has on the balance between performance and stability. - **Can you discuss the use of hidden states $h$?**\ When improving the stability of Dreamer V3, it is important to constrain the internal hidden states, as the policy network uses these and an embedding of the current observation to produce an action. Therefore, simply constraining the predicted observations can still create chaotic dynamics, as differing internal representations can produce different long-term outcomes. - **How does this work relate to the idea of imposing Lipschitz constraints/regularization on DNNs?**\ Improving the smoothness of DNNs by imposing Lipschitz constraints is similar to our problem, as they both consider the smoothness of a function with respect to its inputs. However, when determining the level of chaos, the function we consider is the repeated composition of the dynamical system transition function. Due to this repeated composition, a continuously differentiable, and thereby locally Lipschitz, function can produce chaotic dynamics (e.g., the Lorenz attractor). Therefore, imposing Lipschitz constraints/regularisation on the DNN policy does not guarantee stable long-term dynamics. - **Can we use pre-trained policies, then finetune them wrt the LE regularization to make them more robust?**\ This is an active area of research which we are currently investigating. - **In 2.3, can you discuss the case of $\gamma \rightarrow 1$?**\ As $\gamma \rightarrow 1$, the objective function becomes Lipschitz continuous for control systems with $\lambda_1 < 0$ and non-differentiable for systems with $\lambda_1 > 0$. For further details, please refer to the original paper by Wang et al. - **Typographical Errors**\ These errors will be fixed for the camera-ready version. 
- **Limitations**\ The limitations of our work have been discussed in Section 6; however, for clarity, we will include a limitations section in the camera-ready version. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses, thank you for addressing the questions raised. I believe the revised version will make for a great paper. I will maintain the current score as it appears the reviewers have converged.
Summary: Deep RL methods usually lack robustness in control tasks whose dynamics are chaotic, thereby having positive maximal Lyapunov exponents (MLEs). This paper proposes an approach that improves the stability of trained deep RL controllers through MLE regularization. Strengths: * The problem studied in this paper is well-motivated and important. * Characterizing the robustness using maximal Lyapunov exponents is promising. * The presentation is straightforward. Weaknesses: * This paper dedicates many pages to the chaotic phenomena in RL (Sections 3 and 4), which have been addressed in previous works as introduced in Sections 1 and 2. The core contribution, which I believe is the regularization technique, needs more elaboration and analysis. * There is no concrete algorithm provided that the audience can follow to easily implement the approach proposed in this paper. Imagine a person with little or no knowledge of dynamical systems. Can they manage to implement the algorithm after reading the paper? * (minor) Some information is missing. For example, what are $S$, $H$ and $v_\phi(s_t)$ in equation (4)? Technical Quality: 3 Clarity: 2 Questions for Authors: * Does the policy network architecture affect the performance of your approach? * Is it possible to find two trajectories with equal rewards, where one is stable and the other is chaotic? In other words, the reward itself may not be sufficient to reflect the stability of a trajectory. * How does the approach perform in the case that it is unable to estimate the MLE accurately in practice? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: It has discussed its empirical limitations in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful review. We value your insightful questions, and you can find our response below. ### Weaknesses - **This paper dedicates many pages to the chaotic phenomena in RL (Sections 3 and 4), which have been addressed in previous works as introduced in Sections 1 and 2.**\ While the idea of chaos in RL has been addressed in prior works, they focus on the ***sensitivity of the value function subject to policy parameter perturbations during training***. In these works, it is demonstrated that the return surface can have a fractal structure, and this can produce poor policy updates during training. \ In contrast, the analysis in Sections 3 and 4 focuses on the ***sensitivity of a fixed control system subject to state perturbations after training***. The results in these sections demonstrate that trained state-of-the-art model-based and model-free methods produce a chaotic control interaction in which a small change in the system state can have a profound impact on the long-term state trajectories and the reward attained. As such, these deep-RL methods cannot provide the stability guarantees necessary for real-world control systems. - **The core contribution, which I believe is the regularization technique, needs more elaboration and analysis.**\ We appreciate your suggestion for further exploration and study into the impact of MLE regularisation. We would like to highlight that a series of experiments have been conducted using multiple seeds which demonstrate that the inclusion of MLE regularisation significantly improves the stability of the control interaction and often increases the total reward attained (Table 2). Furthermore, this increased stability improves the performance of Dreamer V3 when noise is added to the observation space (Figure 7). 
Examining the state trajectories produced by Dreamer V3 and Dreamer V3 + MLE regularisation when controlling the *Walker Stand* task (Figure 8) shows the regularisation improves the consistency of the control interaction as the trajectories do not diverge significantly. These results provide strong evidence that MLE regularisation leads to significant improvements in the stability of the state trajectories produced by Dreamer V3. - **There is no concrete algorithm provided that the audience can follow to easily implement the approach proposed in this paper.**\ We appreciate your comment regarding the clarity of the proposed regularisation method. We would like to highlight that in Section 5, we outline the additional loss term used and that this can be calculated by estimating the state and hidden-state trajectories. However, we acknowledge this is unclear, so we will include a formal algorithm in the camera-ready version. - **What are $S$, $H$ and $v_\phi(s_t)$ in equation (4)?**\ Please see the comment outlined in the Global Response, in which we address the definition of Equation 4. ### Questions - **Does the policy network architecture affect the performance of your approach?**\ From our preliminary experiments, we find that the network architecture does not impact stability; however, this is an active area of research which we are currently investigating. For our current work, we maintained the same network architecture for the baseline Dreamer V3 and MLE regularised Dreamer V3 to allow for a fair comparison. - **Is it possible to find two trajectories with equal rewards, where one is stable and the other is chaotic?**\ Yes, it is possible for two trajectories to attain equal rewards despite having differing levels of stability. Consider the Cartpole Balance task as an example. In this control system, the agent is provided +1 reward per step for maintaining the pole within 1° of the vertical upright position and within 0.25 meters of the centre of the track. 
As such, a trajectory that maintains a stable vertical position will attain the same reward as a chaotic trajectory that remains within the high-reward region. - **How does the approach perform in the case that it is unable to estimate the MLE accurately in practice?**\ As the estimation of MLE used for the regularisation term uses a learned model of the system dynamics, it is possible for the estimation to be inaccurate. When this incorrect estimation of MLE is included in the policy loss, the updated policy can produce more unstable dynamics. However, avoiding this inaccurate estimation requires an improvement in dynamics prediction models, which is outside the scope of this paper. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: I would like to thank the authors for their rebuttal and it has resolved my concerns. I still recommend the authors to add a formal algorithm table for their approach so that the audience can easily implement it. I am raising my score to 6. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback and for raising your score. We are delighted to hear that we have effectively addressed your concerns and appreciate your suggestion to add a formal algorithm. We agree that it will enhance the clarity of our work and will, therefore, incorporate this into our camera-ready version.
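The MLE estimation discussed in the last answer above can be illustrated with a small Benettin-style sketch. This is not the paper's implementation (which estimates the exponent through a learned dynamics model); it is a generic estimator that tracks the average log-divergence rate of two nearby trajectories of a known map, renormalising the perturbation at every step. The map, seed, and step counts below are illustrative assumptions.

```python
import numpy as np

def estimate_mle(step, x0, eps=1e-8, n_steps=500):
    """Estimate the maximal Lyapunov exponent of a discrete map `step`
    via renormalised two-trajectory divergence (Benettin-style)."""
    x = np.asarray(x0, dtype=float)
    # perturbed companion trajectory, offset by roughly eps
    y = x + eps * np.random.default_rng(0).standard_normal(x.shape)
    log_growth = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / eps)
        # renormalise the perturbation back to size eps
        y = x + (eps / d) * (y - x)
    return log_growth / n_steps

# Example: the logistic map at r=4 is chaotic; its true MLE is ln 2 ~ 0.693.
logistic = lambda x: 4.0 * x * (1.0 - x)
mle_logistic = estimate_mle(logistic, np.array([0.3]))
```

A positive estimate indicates the sensitive dependence on initial state that Sections 3 and 4 measure for trained controllers.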
Summary: To address the issue of stability in deep reinforcement learning, the authors first gauge the chaotic behavior of various state-of-the-art deep reinforcement learning policies in continuous control environments, and quantify the stability of those policies, which has a significant impact on their applicability to real-world problems. Then, the authors propose an improvement based on implementing a Maximal Lyapunov Exponent regularization in the RL architecture, and demonstrate the improvement with examples. Strengths: The Lyapunov exponent is an important concept developed in dynamical systems theory, and much sophisticated related research has been conducted on it. Using it in the analysis of deep RL is beneficial to both machine learning and dynamical systems. Sections 3 & 4 present a reasonable analysis of the stability of deep RL. Weaknesses: The proposed method of maximal Lyapunov exponent regularization needs to be further explored and studied, both in terms of the theory involved and the amount of experiments conducted. The idea is novel, but not fully explored and explained. The current set of numerical experiments is limited and not truly convincing. Technical Quality: 3 Clarity: 3 Questions for Authors: Could more explanation of all the terms and parameters in Equation (4) be provided? Also, how is the proposed regularizing term in (5) related to (4)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your time and effort in reviewing our work. Your questions are very insightful, and we would like to offer our thoughts on them. ### Weaknesses - **The proposed method on maximal Lyapunov exponent regularization needs to be further explored and studied, both in terms of the theory involved and amount of experiments conducted.** \ We appreciate your suggestion for further exploration and study into the impact of MLE regularisation. We would like to highlight that we have outlined the theory used for the regularisation method in Section 5 and have provided extensive experimental results using multiple seeds in Section 6. These results use the same environments as the original Dreamer V3 paper and demonstrate that the inclusion of MLE regularisation improves the stability of the control interaction and often the total reward attained (Table 2). We acknowledge that extending these experiments to more complex control systems is desirable. However, we believe that the results outlined in our work sufficiently demonstrate the benefits of MLE regularisation. ### Questions - **Could more explanation of all the terms and parameters in Equation (4) be provided?** \ Please see the comment outlined in the Global Response, in which we address the definition of Equation 4. - **How is the proposed regularizing term in (5) related to (4)?** \ The loss function used to train the MLE regularised Dreamer V3 policy is $\mathcal{L}^\text{Policy}(\theta) + \mathcal{L}^{\lambda_1}(\theta)$. This will be clarified in the camera-ready version. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for the rebuttal. It addresses some of my concerns. I would also like to raise my score to (6).
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers for taking the time to review our paper and providing valuable feedback. Your comments have been insightful and have certainly contributed to the refinement of our work. We appreciate the effort you have put into the review process. After carefully considering your comments, we would like to address the general concerns and criticisms raised during the review. - **Could more explanation of all the terms and parameters in Equation 4 be provided?** \ Equation 4 outlines the loss function used by Hafner et al. (2024) to train Dreamer V3’s policy network. All the terms used are consistent with the original article, including Stop Gradient (sg), Return Estimates ($R^\lambda_t$), Critic Estimates ($v_\phi(s_t)$), Scaling Factor ($S$), Entropy Scale ($\eta$) and Entropy Regulariser ($\mathrm{H}[\cdot]$). This equation is included in Section 5 to provide background information for the reader, as the MLE regularisation term (Equation 5) is added to this equation. This will be clarified in the camera-ready version.
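As a rough illustration of how the terms above fit together, here is a schematic numpy rendering of a loss with the structure described for Equation 4: a REINFORCE-style term with advantages $(R^\lambda_t - v_\phi(s_t))$ scaled by $\max(1, S)$, plus an entropy bonus scaled by $\eta$. It is a hedged sketch of the equation's shape, not Dreamer V3's implementation; the function name and the default entropy scale are our own assumptions.

```python
import numpy as np

def actor_loss(logp, returns_lambda, values, entropy, S, eta=3e-4):
    """Schematic policy loss with the shape of Equation 4.
    Passing returns_lambda and values as plain arrays (constants with
    respect to the policy) plays the role of the stop-gradient sg(.)."""
    adv = (returns_lambda - values) / max(1.0, S)  # scale floored at 1
    # negative log-likelihood weighted by advantage, minus entropy bonus
    return float(np.mean(-adv * logp - eta * entropy))
```

A quick sanity check: with a zero entropy scale and advantages that are symmetric around zero under equal log-probabilities, the loss cancels to zero.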
NeurIPS_2024_submissions_huggingface
2024
Adaptive Exploration for Data-Efficient General Value Function Evaluations
Accept (poster)
Summary: This paper presents a novel method named GVFExplorer for efficiently evaluating multiple general value functions (GVFs) with different target policies in parallel using off-policy methods. It adaptively learns a single behavior policy that minimizes the total variance in return across GVFs, thus reducing the required environmental interactions. The method uses a temporal-difference-style variance estimator and proves that each behavior policy update decreases the overall mean squared error in GVF predictions. The performance of GVFExplorer is empirically validated in various settings, including tabular and non-linear function approximation, with stationary and non-stationary reward signals. Strengths: - The idea of choosing a behavior policy for simultaneous learning of multiple policies is quite novel, and seems applicable to a range of applications. - The systematic way of deriving the algorithm is novel and interesting, and makes sense as well. Weaknesses: - More experiments would help understand the behavior of the algorithm. - The uniform policy being the best baseline does not seem to be a good baseline choice. Having some more competitive baselines, e.g. ablations, would have been much better. - The experimented environments seem too synthetic. I would like to see results on typical (and challenging) RL environments, e.g., Mujoco. Technical Quality: 3 Clarity: 3 Questions for Authors: - The paper starts with minimizing MSE, and argues that it uses unbiased IS estimation, leading to a policy that minimizes the variance of return. However, the GVF in the algorithm is estimated with $Q_\theta$, which has been learned with expected Sarsa; this is a biased estimate given that the target contains $Q_\theta$ instead of the ground-truth $Q^\pi$. How do we ensure that the analysis above also works in Algorithm 1? What happens if our function approximator is crude so that the bias is large? - Why does PER drastically improve the performance of GVFExplorer? 
If we had an infinite amount of data in our experience replay, GVFExplorer would have a very similar effect to PER. In my opinion, it seems more natural for the effect of PER to be reduced, as we are already sampling hard states more with GVFExplorer, which is what PER tries to do as well. (It also does not sit well with the rest of the paper, as PER is about training efficiency whereas GVFExplorer is about the choice of behavior policy.) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and valuable experimental suggestions, which we have now incorporated into our rebuttal. Based on your input, we have also **added results in Mujoco in the main comment (refer Fig 1 in PDF)**. We appreciate your recognition of the novelty of adaptively learning the behavior policy to efficiently evaluate multiple GVFs in parallel and acknowledgment of the systematic derivation of the algorithm. Detailed responses to your queries are provided below. 1. **"How can Uniform sampling be a competitive baseline?"** We use several baselines: uniform, mixture policy, round-robin policy, SR-based policy, and BPS. Learning a behavior policy for multiple GVFs is relatively unexplored, making it challenging to identify more baselines. Our empirical results indicate that the relative performance of these baselines can vary depending on target policy characteristics. For example, in Fig. 7 of the main paper, round-robin and mixture policies perform better when using semi-greedy fixed target policies. Conversely, the uniform policy can be advantageous when the target policies exhibit low goal-reaching probability, because it helps hit the goal by chance. 2. **"Query: Show experiments in Mujoco"** We have now expanded our algorithm to continuous actions and shown results in Mujoco. Please refer to **Fig 1 in the attached PDF** and the main comment for the graphs and explanation of the Mujoco experiments. We believe these additions significantly strengthen our paper and provide a more comprehensive evaluation of our proposed method. 3. **"How does using an approximated Q in the target, rather than the true Q, affect our algorithm's analysis? What is the impact on MSE performance when the function approximator is crude?"** All TD-based methods, including IS-TD, are biased because of bootstrapping. Only with MC return, IS is unbiased. 
We use the unbiased estimator as a motivation to reduce the problem to minimizing the variance. Empirically, we used TD-based methods in all experiments to approximate both value and variance, due to the better stability and efficiency of these estimators in large-scale problems. To better understand the impact of using a crude function approximator on MSE, we have now added experiments on this ablation study. Specifically, we examined the average MSE in a 20x20 grid with two distinct GVFs. We produced features by reducing the state space into a 10x20 feature grid (grouping factor = 2) and a 5x20 feature grid (grouping factor = 4). An approximation factor of 1 shows the result without any function approximation. In **Fig 2 (PDF attached in main comments)**, as expected, the overall MSE increases with cruder approximations. Despite this, GVFExplorer outperforms round-robin and mixture policies. However, when the approximation is very crude (factor = 4), the uniform policy performs better due to poor variance estimates. These results suggest that GVFExplorer is robust with reasonable function approximators, but can degrade with extremely coarse ones, which is to be expected. Further, these results are also strengthened by the performance in the attached Mujoco environment (Fig 1 in PDF in main comments) where function approximators are used. 4. **"How does PER improve GVFExplorer performance? How is PER different from our algorithm? Given infinite data in the replay buffer, would GVFExplorer have the same effect as PER?"** PER complements GVFExplorer by enhancing data efficiency. While GVFExplorer optimizes the behavior policy to sample informative data, PER reweights the collected samples in the buffer according to priority, ensuring that the mini-batches sampled for gradient descent are selected efficiently. Please refer to lines 307-309 for reasoning on performance boosts of GVFExplorer with PER. 
With infinite data, the impact of both GVFExplorer and PER would diminish, and direct sampling from the target policies would suffice. However, in practical settings with limited data, GVFExplorer's ability to actively influence data collection provides a significant advantage. **It is crucial to distinguish between GVFExplorer and PER**. GVFExplorer adapts the behavior policy based on variance estimates, while PER reweighs the sampling of the existing data already collected within the buffer. We used PER with all algorithms, including baselines, as shown in Fig 3a of the main paper; all baselines, except the SR method, show improved performance with PER. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I raised my score. --- Reply to Comment 1.1.1: Title: Author response to Reviewer 3wne Comment: Hello Reviewer, thanks for responding and raising the score.
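The TD-based approximation of both value and variance mentioned in point 3 of the rebuttal above can be sketched in tabular form. The recursion below (squared TD error plus a $\gamma^2$-discounted bootstrap for the variance statistic M) is one standard variance-style estimator from the literature, offered as an illustration rather than the paper's exact update; the on-policy simplification and step sizes are assumptions.

```python
import numpy as np

# Tabular TD-style updates for a value estimate Q and a variance-like
# statistic M, using the common recursion M(s,a) ~ E[delta^2 + gamma^2 M(s',a')].
# Illustrative sketch only, not the paper's exact estimator.

def td_update(Q, M, s, a, r, s2, a2, gamma=0.9, alpha=0.1):
    delta = r + gamma * Q[s2, a2] - Q[s, a]   # TD error for the value
    Q[s, a] += alpha * delta
    m_target = delta**2 + gamma**2 * M[s2, a2]  # bootstrapped variance target
    M[s, a] += alpha * (m_target - M[s, a])
    return Q, M
```

On a deterministic two-state loop, Q converges to the discounted fixed point while M decays toward zero, matching the intuition that variance estimates shrink as the value estimates settle.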
Summary: This paper proposes a new algorithm to solve the general value function evaluation problem. In essence, GVFs can be seen as high-dimensional value functions. The authors propose a temporal difference learning algorithm that minimizes the overall variance in the return distribution, in the hope of improving the behavior policy for better exploration, such that the samples the behavior policy produces better suffice for off-policy evaluation of each GVF. Strengths: 1. The idea of minimizing the variance of the return of the behavior policy is novel. While the idea might not be groundbreaking, it is new in this field. 2. Incorporating a temporal-difference method to approximate the overall variance of the return of each general value function is interesting. Indeed, for large-scale problems, TD is a better solution overall. 3. The derivation and analysis of the algorithm are sound and rigorous. 4. Overall the paper is clearly presented. Weaknesses: 1. The core idea behind this paper is to propose a solution to the data collection problem. This paper does answer the question of how to minimize the variance of the returns, but fails to convince me entirely of why we should do that in the first place, either through proofs or empirical investigation. I think this is the biggest weakness of the paper. 2. The experiment section is nice and clear, but the problem class is a bit simple (gridworld). For a paper without strong theoretical results, experiments are usually expected to have more materials. In this sense, the results are not very convincing. Technical Quality: 4 Clarity: 3 Questions for Authors: Please see weakness above. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and positive feedback on our algorithm's rigorous derivation and analysis. Based on your suggestion, we have now **added Mujoco experiments result in the main comment**. We respond to each query below. 1. **“Why did we choose to minimize variance of return as our objective”?** We think there may be a misunderstanding regarding the question. Below, we have tried to clarify our rationale, but please let us know if further discussion would be beneficial. The primary objective is to identify a behavior policy that minimizes MSE when evaluating multiple GVFs. As outlined in lines 120-125, $\text{MSE} = \text{bias}^2 + \text{variance}$. Using the unbiased IS estimator, we reduce the problem to minimizing the variance, which is the core objective. Minimizing variance improves the accuracy of value estimates by reducing uncertainty. This ensures that the behavior policy collects the most informative samples, thereby reducing estimation error quickly. Furthermore, Owen et al. (2013) demonstrated that using a minimum-variance optimal behavior policy ($\mu^*$) for a single target policy ($\pi$) in scenarios with known dynamics can lead to performance improvements. The value obtained under $\mu^*$ is greater than under $\pi$, where the improvement is directly related to the variance reduction achieved. This indicates that the higher the variance reduction under $\mu^*$ policy, the more significant the performance improvement. We hypothesize that similar effects could be observed in multiple GVF policy “Control” scenarios, which we aim to investigate in future research. This work lays the foundations under the policy evaluation context. We will include this rationale in the camera-ready version to further substantiate our approach. 2. **“Query regarding more experiments in Mujoco”** We have now added the empirical results in the Mujoco environment with continuous actions. 
Please refer to the attached PDF and the main comment for the performance graphs and explanation on Mujoco experiments. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for the response. Judging from other reviews and the added experiments, I will keep the same score for now.
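The $\text{MSE} = \text{bias}^2 + \text{variance}$ rationale in point 1 of the rebuttal above can be demonstrated with a toy one-step example (not the paper's multi-GVF setting): the importance-sampling estimator is unbiased under any full-support behavior policy, so all of its error is variance, and a variance-aware behavior policy shrinks it. The two-action rewards and policies below are made up for illustration.

```python
import numpy as np

# One-step bandit demo: the IS estimator rho * r is unbiased for any
# full-support behavior policy, so its MSE is pure variance, and a
# variance-aware behavior policy reduces that variance.
rng = np.random.default_rng(0)
r = np.array([1.0, 10.0])        # deterministic reward per action
pi = np.array([0.5, 0.5])        # target policy
true_value = float(pi @ r)       # 5.5

def is_estimates(mu, n=20000):
    a = rng.choice(2, size=n, p=mu)
    return (pi[a] / mu[a]) * r[a]   # per-sample IS estimates of E_pi[R]

uniform = is_estimates(np.array([0.5, 0.5]))
# behavior proportional to pi * |r|: the minimum-variance choice here
mu_star = pi * np.abs(r) / np.sum(pi * np.abs(r))
smart = is_estimates(mu_star)
```

Both sample means agree with the true value, but under `mu_star` every per-sample estimate collapses to the same constant, so the variance (and hence the MSE) vanishes.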
Summary: This paper presents a new algorithm for collecting data needed to learn multiple GVFs in parallel. By focusing data collection on high-variance (s,a) pairs, an agent is able to collect data that will reduce the variance of estimated GVFs. The authors contribute a sort of contraction-mapping proof that using their algorithm will result in non-increasing variances. Strengths: The algorithm is simple, with some reasonable theoretical properties. The paper addresses a good problem. The paper is very well written The experiments seem reasonable and informative The method performs well against other baselines Weaknesses: I have one minor-but-important quibble about the paper. In Thm. 4.2, the authors prove that aggregated variance is "<=" upon successive iterations. However, in the english description of the result, the authors state that the aggregated variances "decrease with each update step". This is NOT what you proved - you proved that aggregated variance "does not increase." The same claim is made in the abstract, and again in the conclusion. I think it's important to be clear on this point, so I would ask the authors to rephrase this. Technical Quality: 4 Clarity: 4 Questions for Authors: I wonder if there are potential degeneracies in the algorithm. For example, if a behavior policy never tries a certain action, it seems like the M's for that action will be 0, and the new behavior policy will assign 0 probability to taking that action in the future, leading to a situation where you never get the data you need. Similarly, if a certain state is never visited (in the tabular case), would the estimate of M be zero? If so, does that imply that there is an unstated constraint on the initial behavior policy -- something about exploring states and actions sufficiently? A related question: if the cumulant function is sparse, is it possible to not get enough non-zero data to get non-zero M estimates? 
(By the way, it's these sorts of questions where I can kind of see that degeneracies may arise that will never get ironed out by your algorithm, which is why the difference between "<" and "<=" in your proof is important.) Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: It seems like a central limitation of the work is a disconnect between what was theoretically proven (which relies on perfect knowledge of the variances M) and what will happen in practice (the M's must be estimated). It would be nice if the paper outlined what happens to the algorithm as a function of the error in the estimates of M. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Title: Author's Response to Reviewer pomH Comment: Thank you, reviewer, for your thoughtful and constructive feedback. We provide a detailed response to the questions below. 1. **“Clarification regarding Theorem 4.2”** We appreciate your careful attention; you are correct that Theorem 4.2 demonstrates that the aggregated variance "does not increase" with each update, rather than strictly "decreases", due to the "<=" sign in the proof. We will make the necessary revisions to the paper to accurately reflect this. 2. **“Regarding the question on potential degeneracies and exploration of the algorithm”** Thank you for raising this important point. Just like in standard RL, to obtain reliable estimates—whether for Q-values or variances—it's important to have at least initial exploration of different areas of the state-action space. This can be achieved by incorporating any standard initial exploration technique. In our work, the proposed algorithm assumes sufficient initial exploration to gather necessary data. We address this by initializing M values to a non-zero constant across all state-action pairs and also allowing epsilon exploration, where epsilon decays over time. This ensures agents visit a wide range of state-action pairs early on, preventing issues of zero variance for unvisited state-action pairs. We added epsilon-exploration for all algorithms, including baselines. 3. **“Effects on M with sparse cumulant”** In scenarios with sparse cumulants, the non-zero initialization of M ensures that all states are visited, providing a fair opportunity to correct M estimates over time. 4. **“What happens to the algorithm as a function of the error in the estimates of M?”** Similar to any TD-based algorithm, the empirical version of our approach relies on initial estimates, which will be imperfect. If the initial M estimates are incorrect, the TD error will indicate this, either positively or negatively. 
The behavior policy update will then be influenced by these M estimates, leading the agent to gather new samples as it interacts with the environment. As the agent collects more data, the M estimates will improve—either increasing or decreasing for specific states—allowing the behavior policy to adjust and correct its sampling strategy accordingly. If an M estimate is very small compared to its true value, the agent will first focus on states with higher M estimates, correct those values, and then revisit the poorly estimated states to update their M estimates. This iterative process is analogous to TD-based Q-learning updates in standard RL. Additionally, if the target policies have non-zero probabilities, our behavior policy incorporates a small epsilon probability over those state-action pairs. This approach is also supported by Lemma 6.1, which requires a bounded difference between the behavior and target policies to ensure that the variance function remains well defined. For the Mujoco experiments, we added a KL term to limit this divergence. We will include these explanations in the camera-ready version to provide a practical understanding of our algorithm.
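The non-zero M initialisation and epsilon mixing described in the rebuttal above can be sketched as follows. The $\sqrt{\sum_i \pi_i^2 M_i}$ proportionality is one plausible variance-minimising form from the off-policy evaluation literature, used here purely for illustration; it is not necessarily the paper's exact update rule, and the shapes and epsilon value are our assumptions.

```python
import numpy as np

# Illustrative construction of a behavior policy at one state from
# per-GVF variance estimates, with epsilon mixing for exploration.
# The sqrt(sum_i pi_i^2 * M_i) score is an assumption from the
# variance-minimisation literature, not the paper's confirmed rule.

def behavior_policy(pis, Ms, eps=0.1):
    """pis: (num_gvfs, num_actions) target policies at a state.
       Ms:  (num_gvfs, num_actions) variance estimates (init non-zero)."""
    score = np.sqrt(np.sum(pis**2 * Ms, axis=0))
    mu = score / score.sum()
    # epsilon mixing keeps every action's probability bounded away from zero
    return (1 - eps) * mu + eps / mu.size
```

Actions whose returns are more uncertain under the target policies receive proportionally more behavior probability, while the epsilon term guarantees continued coverage of all actions.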
Rebuttal 1: Rebuttal: Thank you for the valuable and constructive feedback. We are encouraged by your recognition of the *novelty of the problem* and acknowledgment of our *algorithm derivations as systematic and rigorous*. Based on your feedback, we have extended our experimental results in the **Mujoco environments (Fig 1)** and included these results here. We have also added the codebase to the code repository. Further, based on Reviewer 2's suggestion, we are including an ablation study on the effect of using **feature approximator on the MSE performance metric (Fig 2)**. Both the results in Mujoco and the ablation study further support the benefits of using GVFExplorer for data-efficiently evaluating multiple GVFs in parallel. $~~~~~~~~~~~$ ## Mujoco Experiments (Reviewer 1 and 2) We have now conducted additional experiments using the DM-control suite to experiment with the Mujoco “walker” and “cheetah” domains. For the walker, we define two distinct GVFs: walk and flip. For the cheetah, we also evaluate two GVFs: walk and run. To extend the implementation in a continuous action environment, any policy gradient algorithm can be used. Similar to the Q-value critic, we implement a separate M-variance critic using neural networks for both. The behavior policy interacts with the environment and gathers samples. It uses the samples to first update the two critics, followed by an update to the behavior policy network to minimize the MSE (our objective). We also added a KL regularizer between the behavior policy and the two target policies of the GVFs to prevent divergence. We use Monte Carlo estimates of the true GVF values and compare the MSE between these values and the output of the Q-critic network. We use the same Q-critic architecture for all algorithms, including the baselines. 
**Fig 1 of the attached PDF:** GVFExplorer significantly reduces the average MSE compared to baselines such as RoundRobin and UniformPolicy, demonstrating that an adaptive behavior policy collects more informative samples. $~~~~~~~~~~~$ ## Ablation study on effects of using feature approximator on MSE (Reviewer 2) We conducted an ablation study to understand the effects of using a very coarse function approximator on MSE. We evaluated the averaged MSE over two distinct GVFs in a 20x20 grid. To simulate approximation, we mapped each 2x1 grid region to the same feature, resulting in a grouping factor of 2 and a 10x20 feature grid. Similarly, mapping each 4x1 grid region to the same feature resulted in a grouping factor of 4 and a 5x20 feature grid. As shown in **Fig 2 in attached PDF**, the overall MSE increases with cruder approximations, as expected. GVFExplorer generally outperforms the baselines, but with a very crude approximation (factor = 4), the uniform policy performs better due to poor variance estimates. These results highlight GVFExplorer's robustness with reasonable function approximators. More details of the experiment are presented in R2’s response. Pdf: /pdf/d52536eb19e3ea070f9e699aa8345e757918b07a.pdf
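The grouping scheme in the ablation above can be made concrete with a tiny indexing sketch (our own illustrative rendering of the description, with a hypothetical `feature_index` helper): rows of the 20x20 grid are collapsed in blocks of the grouping factor, so each 2x1 (or 4x1) region of states shares one tabular feature.

```python
# Hypothetical state-aggregation helper mirroring the ablation described
# above: rows are grouped in blocks of `factor`, columns kept distinct,
# so a 20x20 grid becomes a 10x20 (factor 2) or 5x20 (factor 4) feature grid.

def feature_index(row, col, n_cols=20, factor=2):
    """Map a (row, col) grid state to its aggregated feature id."""
    return (row // factor) * n_cols + col
```

With factor 2 the 20x20 grid yields 10 * 20 = 200 distinct features; with factor 4 it yields 5 * 20 = 100, which is the cruder approximation where variance estimates degrade.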
NeurIPS_2024_submissions_huggingface
2024
The Importance of Online Data: Understanding Preference Fine-tuning via Coverage
Accept (poster)
Summary: This paper focuses on the optimization and learning methods for ``online'' RLHF and contrastive offline methods (DPO, IPO). The authors aim to understand the separation between these two types of methods, which differ in terms of whether new responses can be sampled or not. The authors state that the difference in the reward parameterization is key to the separation and propose global coverage and local coverage to capture such a difference. The theoretical insights also motivate a novel approach called Hybrid Preference Optimization (HyPO) that combines offline data with online samples, where the online samples are used to control the KL divergence. Empirical results are provided to verify the effectiveness of the proposed methods. Strengths: - The authors study the theory of RLHF under the KL-regularized target, which is closer to practice, compared to many previous works using the non-regularized reward; - The notion of a local coverage condition is novel in the literature and is very natural in the analysis of the KL-regularized target. I appreciate the paper writing at the beginning of section 4, which is easy to follow and informative. - Building upon the notions of local coverage and global coverage, the authors show a clear separation between offline algorithms like DPO and online RL-based algorithms. This aligns with the recent observations that online algorithms outperform their offline counterparts by a large margin. - This paper takes a step further to study the differences between DPO and RLHF. While the original DPO paper states that the learning of DPO is equivalent to RLHF, the empirical results do not align with this. The discussion related to the parametrization of reward for assumption 4.4 explains such a difference in practice. - Overall, I feel that the story of this paper is complete. 
Lemma 4.1 and the discussion around assumption 4.4 clearly show that the offline algorithms can search for a policy with a large KL (possibly due to the parameterization of reward). This not only aligns with the separation between the global coverage condition and the local coverage condition, but also motivates the practical algorithmic design to explicitly control the KL divergence. Weaknesses: This is not a weakness but some clarification on the terminology. In the RL theory literature, particularly in preference-learning papers, online exploration refers to querying the human preference so as to learn $r^*$. In the setup of this paper, the online data is only used to compute the KL loss, without querying the human feedback. Therefore, it is more related to an intermediate setting in which we use the offline preference dataset to construct a proxy reward function and we are allowed to query the model for new responses but not for human feedback. See RSO [1], the discussion in [2] proposing multi-step RSO, and the discussion in [3]. [1] Statistical rejection sampling improves preference optimization. [2] Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint. [3] Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data. Then, it would be interesting to see empirical results in the ONLINE setting, where we can query new responses and new human preference signals, because a lot of recent work shows that even with DPO, the online framework outperforms the original one by a large margin. Moreover, this is standard in the PPO literature behind models like ChatGPT, Claude, and LLaMA 2. 
[4] Training language models to follow instructions with human feedback [5] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback [6] Llama 2: Open Foundation and Fine-Tuned Chat Models Technical Quality: 3 Clarity: 3 Questions for Authors: see weakness part Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and we address the reviewer's comment below: > Discussion on online data. Thank you for pointing out the subtlety of the terminologies. We agree that HyPO indeed does not query any additional information from $r^\ast$, which is different from the papers that the reviewer brought up and some related works that we discussed in the related work section. We agree with the reviewer that HyPO is different from the other online RLHF methods because we only have unlabeled online samples (i.e., the data is not labeled with a reward model). One can see it as a Monte Carlo estimation of the reverse KL. We will make this distinction clear with additional discussion of the works mentioned by the reviewer. We also believe that investigating the standard online setting has important empirical value. In our setting, if we use the reward model that is trained on the same offline dataset to label the online data, then theoretically this is equivalent to HyPO, which avoids storing and querying the extra reward model. In Fig. 1 in the supplemental material of the rebuttal, we compare the memory cost of DPO, HyPO and PPO (which requires the additional reward model), and we show that HyPO is almost as memory efficient as DPO, whereas PPO requires more than twice the memory of DPO and HyPO (PPO also requires storing a value function, but this still indicates the additional memory overhead of the reward model). That said, in the case where one has additional GPU memory, many recent and contemporary works [1, 2, and many others] showed that empirically, such an online DPO (or iterative DPO) method indeed greatly improves over the offline DPO results, indicating the effectiveness of online reward information. We will make sure to add additional discussion on this topic in our final version. [1] RLHF Workflow: From Reward Modeling to Online RLHF. 
2024 [2] Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF. 2024 We also want to mention that in the general response section, we performed an additional large-scale experiment with HyPO on finetuning the Llama 3 8B instruct model, and we hope this additional empirical result can further demonstrate the effectiveness of online unlabeled data alone. --- Rebuttal Comment 1.1: Title: Thanks for the responses Comment: For DPO-like algorithms, memory efficiency is not very important because we do not train a value network. The bottleneck of the PPO algorithm comes from the challenge of training the policy model and value model simultaneously. In particular, the reward model and the reference model used to compute the KL divergence can be served from a remote server. Personally, I believe that the coverage condition is hard to satisfy in the context of LLMs, and online exploration and online iterative training are standard in building state-of-the-art models like Llama 3.1, Gemini, Claude, and GPT according to their technical reports. In these settings, the reward models are typically trained on a much larger and more diverse preference dataset (compared to the training set of DPO), and are further combined with human-written rules and LLM-as-a-judge, which serve as a proxy of $r^\ast$ or $P^\ast$ (see the Llama 3.1 report or [1] RLHF Workflow: From Reward Modeling to Online RLHF. 2024 for examples). This setting is different from the online setting considered in this paper. I believe further study in this setting can be promising future work. --- Reply to Comment 1.1.1: Title: Reply to reviewer MDLu Comment: We thank the reviewer for their insightful comments. 
We agree with the reviewer that if we have a reward model trained on a more diverse preference dataset (i.e., one with a better coverage condition), then it will be beneficial for HyPO to query labeled online samples, which would provably improve upon our current setting where we acquire no additional reward information from the unlabeled online samples. As the reviewer points out, a refined study of such iterative/online methods is indeed an important future direction. We are happy to see that the reviewer agrees that the coverage condition is hard to satisfy in the LLM setting; in this work we argue that algorithms requiring weaker coverage conditions can have better empirical performance, and one advantage of the iterative methods is that they further relax the coverage condition through online exploration. Again, we will make sure to distinguish the iterative/online setting from the setting of the current paper in the revised version, with additional discussion of iterative/online methods including, but not limited to, the works mentioned in our discussion.
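As background for the coverage discussion above: global coverage is typically formalized as a bounded density ratio between the target (optimal) policy and the offline data distribution. A toy sketch of that coefficient (illustrative only; the function name is ours, and real LLM policies are distributions over token sequences, not small finite sets):

```python
def density_ratio_coverage(pi_star, mu):
    """Global-coverage (concentrability) coefficient max_y pi*(y) / mu(y).

    It is bounded only when the offline distribution mu puts mass on
    everything the target policy pi* can produce; it blows up to infinity
    when mu misses part of pi*'s support.
    """
    return max((p / m if m > 0 else float("inf"))
               for p, m in zip(pi_star, mu))

# mu covers pi*'s support, but thinly on the second response:
print(density_ratio_coverage([0.5, 0.5], [0.75, 0.25]))  # 2.0
# mu misses a response entirely -> coverage fails:
print(density_ratio_coverage([0.5, 0.5], [1.0, 0.0]))  # inf
```

The LLM-setting difficulty the reviewer raises is precisely that over exponentially many token sequences, a finite offline dataset leaves this ratio unbounded in practice.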
Summary: This work considers the statistical separation between contrastive algorithms (DPO and IPO) and RLHF. It proves that DPO/IPO requires a global coverage assumption, which is in general very strong, while RLHF only requires local coverage. This separation stems from the explicit KL constraint in the objective of RLHF, whereas the implicit reward in contrastive algorithms can result in unbounded KL. Given this observation, a hybrid (offline + online) algorithm is proposed: the preference optimization remains offline DPO, but it draws online samples to enforce the KL constraint on the DPO policy, recovering the statistical guarantee of RLHF while attaining the practical computational strengths (no value or reward models) of DPO. Strengths: - Well written and easy to follow; understanding DPO and RLHF is important for better preference optimization algorithms. - Good technical quality and viable insights into DPO. - The proposed hybrid algorithm recovers the statistical guarantee of RLHF while keeping the practical computational strengths of DPO (no value or reward models). Weaknesses: I only have a couple of minor comments: - The second sentence of L562 should be appended to L231 to make the proof sketch immediately clear. - Theorem E.1 could be relocated to Section 5 to be more self-contained. Technical Quality: 3 Clarity: 3 Questions for Authors: - I might have overlooked this, but what is the intuition behind the online KL evaluation? Is it possible to enforce such a KL constraint purely in the offline setting? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Overall, I am convinced this is a good submission with good technical quality. However, as I did not extensively follow the theoretical results on preference optimization, I am not able to comment on the novelty of this work, hence I give a confidence score of 3. 
It would also be beneficial if the authors could create a table summarizing previous works and highlighting their contributions in comparison to prior research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback, and we hope our rebuttal helps further demonstrate our contribution: > Second sentence of L562 should be appended to L231 to make the proof sketch immediately clear. Theorem E.1 could be relocated to Section 5 to be more self-contained. Thank you for your suggestions for improving the writing of the paper. We will incorporate these changes in the final version, which allows additional pages. > What is the intuition behind online KL evaluation? Is it possible to enforce such KL constraint purely in offline setting? The reason for the additional online KL is exactly that purely offline methods cannot enforce the KL constraint with only the offline data (Proposition 4.1). The intuition behind Proposition 4.1 is that, since DPO models the reward as the log ratio between policies, even if this ratio is accurate over the offline data, the behavior of the policy outside the offline data is uncontrolled, so the KL constraint cannot be enforced, because the reverse KL is measured under the trained policy. To prove that offline data alone suffices to control the KL, one has to assume a condition akin to the offline data covering all possible sequences of tokens (Theorem 4.1), which is unrealistic in most situations. > Comparison to prior research. Thank you for your suggestions, and we hope our rebuttal helps clarify our theoretical contributions. We will provide a table comparing our work with the relevant previous theoretical RLHF works mentioned in the related work section in the final version.
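The intuition behind Proposition 4.1 can be made concrete with a toy numerical example (ours, not from the paper): a trained policy can behave arbitrarily on tokens the offline data never covers, and because the reverse KL is an expectation under the trained policy, that uncovered mass dominates it.

```python
import math

def reverse_kl(pi, ref):
    # KL(pi || ref) = sum_y pi(y) * log(pi(y) / ref(y)): the expectation
    # is taken under pi, the trained policy, not under the data.
    return sum(p * math.log(p / r) for p, r in zip(pi, ref) if p > 0)

# Toy vocabulary {a, b, c}; suppose the offline data only covers a and b.
pi_ref = [0.5, 0.4, 0.1]
pi     = [0.05, 0.05, 0.9]   # trained policy shifts mass to the uncovered c

print(reverse_kl(pi, pi_ref))  # ~1.7584, dominated by the uncovered token c
```

Nothing computed on the offline-covered tokens a and b can detect this blow-up, which is why an online sample from the trained policy itself is needed to measure (and hence regularize) the reverse KL.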
Summary: The paper focuses on the paradigm of fine-tuning large language models (LLMs) using human preference data. It delves into two primary techniques: online reinforcement learning (RL) and offline contrastive methods. The authors challenge the previous notion of these techniques being equivalent by conducting a theoretical analysis through the lens of dataset coverage. They introduce the concepts of global and partial coverage conditions, proving that the former is necessary for offline methods using forward KL like DPO to converge to the optimal policy, while the latter is sufficient for online RL methods using reverse KL. The paper proposes a hybrid preference optimization (HyPO) algorithm, demonstrating its empirical superiority over DPO while maintaining some computational efficiency. The authors also provide a coverage-based explanation for why RL and offline contrastive methods might decrease the probability of preferred responses. Strengths: 1. The paper provides a rigorous mathematical analysis of the different conditions under which offline and online methods have provable performance guarantees, contributing to the theoretical foundation of preference learning in RL. 2. The authors introduce the HyPO algorithm and support their theoretical findings with empirical results, demonstrating the effectiveness of HyPO on the TL;DR dataset. Weaknesses: 1. The experiments are not sufficient. Since they are done only with Pythia 1.4B and the TL;DR dataset, it is unclear whether the proposed method remains valid on larger models and other datasets. While theoretically interesting, it is unclear whether using reverse KL instead of forward KL can lead to a significant performance gain in practice. 2. The proposed HyPO method only uses online samples to calculate the KL loss rather than to collect new preference feedback. While simpler, it may fail to fully leverage the benefits of online methods. Technical Quality: 3 Clarity: 3 Questions for Authors: See the Weaknesses section. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors discussed the limitations well in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments, which we address below: > Insufficient experiments. We agree that experimenting with larger model sizes and additional datasets is important to demonstrate the effectiveness of HyPO and validate our theory. In the supplementary material for the rebuttal, we include an experiment finetuning the Llama 3 8B-instruct model on the UltraFeedback dataset, and provide comparisons with DPO on MT-Bench, AlpacaEval, and the Open LLM Leaderboard (Tables 2 and 3), the standard large-scale finetuning setup. We refer the reviewer to the common response section for more details on these results. > The proposed HyPO method only uses online samples to calculate the KL loss rather than to collect new preference feedback. While simpler, it may fail to fully leverage the benefits of online methods. We believe it is beneficial to obtain additional online information, such as reward information, when resources permit. However, gathering online reward information requires either additional human labelers or another reward model; the latter reduces to online RLHF, which requires additional computational resources. We performed a controlled experiment comparing the computation of DPO, HyPO, and PPO in Fig. 1 of the supplementary material, which demonstrates the computational advantage of not using a reward model. On the other hand, if the reward model is trained on the same offline dataset, then theoretically one does not require the online reward, since it introduces no new information. (However, in practice people observe that using a reward model does improve performance even though it introduces no new information; as we pointed out in the limitation section, explaining this is an interesting future theory direction.)
Summary: This paper studies learning from human preference feedback for aligning large language models (LLMs). Many existing works share the observation that offline alignment methods such as DPO underperform their online counterparts such as PPO, but this phenomenon has not been well understood. This paper studies the difference through the lens of coverage. It unveils that the KL constraint in offline alignment methods can be violated due to incomplete coverage, and thus offline methods may find bad solutions. Inspired by this insight, the authors propose a new method, hybrid preference optimization (HyPO), which combines the offline preference update with online KL regularization. Experiments on the summarization task demonstrate that HyPO is better than DPO. Through an illustrative example, this paper also sheds light on the counterintuitive observation that DPO often decreases the likelihood of preferred responses. Strengths: This work addresses an important matter and provides insight into a widely discussed question - why do offline alignment methods often underperform online methods? The paper is well written and easy to follow. The theoretical claims are sensible and easy to understand. I believe the community will benefit a lot from this work. Weaknesses: The computational cost of the proposed HyPO algorithm is not sufficiently discussed. Usually most of the computational cost in online alignment methods comes from the LLM sampling process. In practice, computational cost plays an important role in comparing algorithms, so a thorough discussion of it would provide useful guidance. The discussion in Section 6 is a bit hand-wavy. While Example 6.1 provides good intuition, a more rigorous analysis is needed to justify the claim. One minor suggestion on presentation: it would be good to provide the PPO numbers in Table 1 to help readers compare HyPO to PPO more easily. 
Technical Quality: 4 Clarity: 4 Questions for Authors: In the TL;DR experiment, I am curious to know what happens if the authors also train HyPO for 4 epochs. Would it perform better than PPO? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: This paper includes a fairly thorough discussion of the limitations at the end. I would like to add one point: the analysis in this work only focuses on the solution space. It explains why offline methods may find bad solutions, but doesn't explain why they often find the bad ones in practice. As far as I know, people often use a very low learning rate for DPO in practice. For example, the original paper used 1E-6. One possible reason for such low learning rates is that they compensate for the ineffective KL regularization. Despite such efforts, DPO still finds bad solutions. It would be insightful to get a better understanding of why bad solutions seem inevitable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback, and we address the comments below: > Computational cost analysis. We thank the reviewer for pointing this out. Indeed, computational analysis is important when comparing finetuning methods. We had a short discussion at the end of Appendix E.2 in the submission, but the runtime comparison there was performed across mixed types of computational resources. Towards a rigorous computational comparison, in the supplemental material for the rebuttal we include results in Fig. 1: we fix the computational resources to the same node with 8 A6000 GPUs, and we run each algorithm on batches of 512 prompts, averaged over 50 batches. We use the reported hyperparameters for each algorithm (DPO's and HyPO's hyperparameters can be found in Tables 3 and 4 of the submission, and PPO's hyperparameters are in Table 5 of the supplementary material). > The discussion in Section 6 is a bit hand-wavy. While Example 6.1 provides good intuition, a more rigorous analysis is needed to justify the claim. Thank you for the suggestion. Indeed, Example 6.1 can be turned into a rigorous proof, which states (at a high level): under linear function approximation, as long as the offline data has global coverage in terms of the relative condition number (instead of the density ratio) and the optimal response is not in the offline dataset, the extrapolation behavior of preference learning algorithms will provably occur. We will update this in the final version. > Adding PPO numbers. We adopt the PPO result from [1], which uses the https://github.com/vwxyzjn/summarize_from_feedback_details codebase. We used the same codebase to report DPO performance and implement HyPO. We include the PPO performance in Table 1 of the rebuttal supplementary material. The PPO hyperparameters used in the computational analysis are the same as those used to report Table 1. 
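For readers unfamiliar with the quantity invoked in the sketched proof above: the relative condition number compares the target and offline feature covariances via a generalized Rayleigh quotient, and in the special case where both covariances are diagonal it reduces to a per-coordinate maximum. The following toy sketch is our own simplification for that diagonal case, not the paper's proof machinery:

```python
def relative_condition_number(target_diag, offline_diag):
    """sup_w (w' Sigma_target w) / (w' Sigma_offline w) for diagonal covariances.

    With Sigma_target = diag(target_diag) and Sigma_offline = diag(offline_diag),
    the supremum over directions w is attained on a coordinate axis, so the
    ratio reduces to the max per-coordinate variance ratio.
    """
    return max(t / o for t, o in zip(target_diag, offline_diag))

# Offline data covers coordinate 0 well but coordinate 1 poorly:
print(relative_condition_number([1.0, 4.0], [2.0, 1.0]))  # 4.0
```

A bounded value of this ratio is a much weaker requirement than a bounded density ratio, which is the point of the distinction drawn in the rebuttal.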
> Multiple-epoch training. We want to clarify that the current statement in the submission about the PPO epoch number is imprecise: PPO does not typically train for 4 overall epochs; rather, the 4 epochs refer to the fact that PPO performs 4 epochs of RL updates for each online minibatch. We will fix the statement in our final version. PPO's additional computational cost is indicated in the new computational comparison. Since HyPO uses REINFORCE/RLOO for online KL optimization, we observe that taking 1 gradient step per minibatch suffices and multiple epochs per minibatch do not improve performance (for PPO, however, multiple update epochs per minibatch do seem necessary). > The analysis in this work only focuses on the solution space. We thank the reviewer for pointing out the additional limitation that our paper focuses only on the solution space. We agree that analyzing the more fine-grained dynamics of the algorithms is a very important future direction. --- Rebuttal Comment 1.1: Title: Thank you for the response! Comment: I would like to thank the authors for their response. After reading the authors' response and other reviewers' comments, I believe my initial evaluation is appropriate and thus will maintain my rating.
Rebuttal 1: Rebuttal: ### General responses We thank all reviewers for their positive and constructive feedback. In the general response we provide some additional empirical results, which will be incorporated into the final version of the paper. 1. **Large-scale experiments:** In response to Reviewer ewpX’s suggestion of experiments on a larger model and an additional dataset, we finetune the Llama 3 8B instruct model on the UltraFeedback dataset [1] and evaluate on AlpacaEval [2], MT-Bench [3], and the Open LLM Leaderboard, the standard large-scale experimental setup for empirical RLHF papers [4,5]. Following [4], we only finetune the last 4 layers (same for the DPO baseline), and we trained on 8 A100 GPUs. For this experiment, we use RLOO [6] to optimize the online trajectory-level KL with $k=2$ (the number of repeated online generations). We summarize the results in Tables 2 and 3, and we provide the hyperparameters for HyPO with RLOO in Table 4 of the rebuttal supplemental material. 2. **Comparison with PPO and computational analysis:** In response to Reviewer YCrR’s suggestion, we updated our TL;DR results with a comparison to PPO; the updated table is Table 1 of the supplementary material. Note that, as we mentioned in the submission, there is still a gap between HyPO and PPO, and providing a more in-depth theoretical analysis of this gap is an interesting future direction. In addition, we performed a controlled experiment on the computational cost of each method. Following the setup of [4], we fix the computational resources to the same node with 8 A6000 GPUs, and we run each algorithm on batches of 512 prompts from the TL;DR dataset, averaged over 50 batches. We present the result in Figure 1 of the supplemental material: although HyPO introduces more generation cost, its time and memory cost is still significantly lower than PPO's. The PPO hyperparameters for this experiment are recorded in Table 5 of the supplemental material. 
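For reference, the RLOO estimator cited above ([6]) uses the other $k-1$ samples for the same prompt as a variance-reducing baseline; with $k=2$, each generation's baseline is simply its sibling's score. A minimal sketch (names are ours; in the usage described above, the per-sample signal being optimized is the trajectory-level KL term rather than a learned reward):

```python
def rloo_advantages(rewards):
    """Leave-one-out advantages for k samples drawn for the same prompt.

    Each sample's baseline is the mean score of the other k-1 samples,
    so the advantage estimate is unbiased without a learned value function.
    """
    k = len(rewards)
    total = sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]

# k = 2: each generation is scored against its sibling.
print(rloo_advantages([1.0, 3.0]))  # [-2.0, 2.0]
```

These advantages then weight the REINFORCE log-probability gradient of each sampled trajectory, which is why a single gradient step per minibatch can suffice.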
We believe the additional empirical results further validate our theoretical results, and the large-scale experiment can also make an independent empirical contribution to the community. We appreciate the reviewers' suggestions and look forward to further feedback during the discussion period. [1] UltraFeedback: Boosting language models with high-quality feedback. 2023 [2] Length-controlled AlpacaEval: A simple way to debias automatic evaluators. 2024 [3] Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. 2024 [4] REBEL: Reinforcement learning via regressing relative rewards. 2024 [5] SimPO: Simple Preference Optimization with a Reference-Free Reward. 2024 [6] Buy 4 REINFORCE samples, get a baseline for free! 2019 Pdf: /pdf/76209a5cf004ec18f7f9e3a1b9f6c34eca31ec04.pdf
NeurIPS_2024_submissions_huggingface
2024
Animate3D: Animating Any 3D Model with Multi-view Video Diffusion
Accept (poster)
Summary: This paper introduces a framework for 4D generation to animate static 3D models, consisting of two components: MV-VDM, a multi-view video generation model, and a framework combining reconstruction and 4D score distillation sampling (4D-SDS). A spatiotemporal attention module enhances consistency, using multi-view renderings to preserve the model's identity. The proposed two-stage pipeline for animating 3D models first reconstructs coarse motions and then refines them with 4D-SDS. Experiments show that Animate3D outperforms previous methods. Strengths: 1. The paper introduces a large-scale dataset for multi-view video generation, consisting of 84,000 animations and over 1.3 million multi-view videos, which addresses the challenge of limited 4D datasets. 2. Qualitative experiments demonstrate the ability to generate high-quality 4D objects. The generated shapes appear to have better quality and texture compared to other methods. 3. The paper is clearly written, with a clear motivation for using a multi-view diffusion model to enhance spatiotemporal consistency. Weaknesses: 1. The method appears to be a straightforward engineering pipeline, combining several existing techniques (e.g., MVDream and IP-Adapter in the MV-VDM stage, reconstruction + SDS in the animating stage) without significant innovation. 2. Motion diversity is a concern. The method shifts the core problem to the quality of the multi-view video. The dataset achieves this effect, but the maximum number of animations per ID is only six, which is insufficient for capturing the diverse motions of a single 3D object. This approach lacks generality. 3. Motion controllability is limited. The results shown in the paper exhibit simple, repetitive motions with small amplitudes. Although text prompts are used as conditions, they are relatively simple, and the generated motions are not complex. The method lacks a clear approach to achieving more controllable motions. 4. Experimental results are lacking. 
DreamGaussian4D provides CLIP-I as a quantitative comparison to assess the consistency between prompts and generated results, which is missing in the paper. Additionally, the qualitative results provided in terms of views and moments are too few. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Regarding time, the paper reports an optimization time of 30 minutes per object on a single A800 GPU. How much time does it take to reconstruct coarse motions and refine motions using the 4D-SDS model, respectively? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations and broader impacts of the study in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Pipeline without significant innovation.** Thanks. Our work is novel in both task formulation and solution pipeline. Firstly, we redefine the concept of 4D generation by introducing a novel task: animating any off-the-shelf 3D object. This task is highly relevant yet has seen limited exploration (Lines 30-41). Given the rapid advancements in 3D generation, our proposed task is both practical and meaningful, as it directly drives these high-quality 3D assets. Although some SDS-based methods have been investigated in previous 4D works, the community is still in need of a fast, feed-forward model specifically designed to animate 3D objects. Furthermore, such a model has the potential to facilitate mesh generation with PBR materials, which could enhance commercial applications in software platforms like Rodin-Gen1 and Meshy. This positions our work as a crucial step forward in the evolution of 4D generation technology. As for the novelty of the pipeline, its main goal is to **learn accurate motions of static Gaussians**. Note that, unlike previous text/video-to-4D works (e.g., DG4D, 4DGen, STAG4D), we do not update any Gaussian points and do not include Gaussian densification/pruning. This is because **only in this way can we learn the accurate trajectory of each point in the static Gaussians; when the static Gaussians are initialized from the mesh vertices, we can then achieve mesh animation, as shown in the attached PDF (Figure 6).** Once we can animate meshes, 4D generation can benefit from the high-quality 3D assets generated by commercial tools. **W2: Concern about motion diversity.** Please refer to the global response (3. Motion Diversity and Dataset). **W3: Concern about motion controllability.** Admittedly, text prompts cannot achieve precise control. We will include more control signals, such as monocular video, in the future. 
**W4: Lack of the experimental result CLIP-I; provide more views and moments for qualitative comparison.** Please refer to the global response (1. Comparison methods) for the CLIP-I metric. Please refer to the attached **PDF** for more visualization results. **Q1: Time cost of reconstruction and 4D-SDS.** For the 8-frame model, reconstruction and 4D-SDS each cost around 15 minutes, totaling 30 minutes (Line 242). For the 16-frame model, each takes around 20 minutes. --- Rebuttal Comment 1.1: Title: Post Rebuttal Comment: I thank the authors for addressing my questions. I am still advocating for accepting the paper.
Summary: This paper proposes Animate3D, a 4D generation framework that consists of a multi-view video diffusion model (MV-VDM) followed by 4D Gaussian Splatting optimization, as well as a dataset of 38K animated 3D objects (MV-Video) that is used to provide spatiotemporal supervision to train the model. Different from text-to-4D approaches, Animate3D specifically tackles the problem of animating 3D assets which requires spatiotemporal consistency as well as identity preservation. The proposed architecture is based on a multi-view diffusion model (MVDream) and a video diffusion model (AnimateDiff) with additional spatio-temporal attention blocks. To animate a given 3D asset, the trained model is used for both reconstruction and distillation. First, a multi-view video is generated conditioned on multiple views of the asset. The video is used to optimize a 4D Gaussian Splatting resulting in a model with coarse motion. The same MV-VDM model is used for 4D Score Distillation Sampling to refine the motion. The method compares favorably to state-of-the-art 4D generation methods. Strengths: **[S1]** The problem setting in this paper is interesting and timely, in an area that has recently attracted significant attention. The paper takes a slightly different angle compared to text-to-3D, text-to-4D or video-to-4D methods, focusing on animating 3D assets based on a textual description instead of generating them from scratch. **[S2]** This problem setting also opens up a new challenge, which is to preserve multi-view attributes during the animation. It is nice to see that the identity preservation and the animation are significantly improved compared to state-of-the-art methods. **[S3]** The authors tackle this problem by training a multi-view video diffusion model. Given the success of image (2D), video (2D+time), and multi-view (3D) diffusion models, MV-VDM (4D) is a reasonable next step to ensure spatial and temporal consistency simultaneously. 
Both the model and the dataset used to train it are likely to influence future work and could thus have significant impact. **[S4]** The proposed architecture aims to reuse existing multi-view and video diffusion models, which are already pre-trained on a larger scale of data, thus taking advantage of the priors built into these models. --- Weaknesses: **[W1]** The data used in this paper are animated 3D models collected from Sketchfab (cited). The authors promise to distribute the collected dataset. However, there is not sufficient information provided about copyrights and licensing. The only piece of information provided is that models with the following clause are excluded: >NoAI: This model may not be used in datasets for, in the development of, or as inputs to generative AI programs. Based on the paper checklist, the paper is flagged for ethics review. --- **[W2]** Lack of clarity in the method section and method figure **(a)** The method section is very hard to parse. The notation is overloaded but not very well explained. The section would benefit from thorough revision to improve the overall clarity and quality. **(b)** It is almost easier to understand the method through Fig. 2, but even that is dense and inconsistent, and often the notation does not align with what is written in Sec. 3.1. For example, in L175 latent features $z$ seem to be the output of the image encoder. In Fig. 2 (middle), $z$ appears to be the input to the spatiotemporal attention module. In Eq. 1, $X_l$ and $X_r$ are instead the inputs to the same module. At the same time, in Fig. 2 (left) the input to the spatiotemporal module seems to be the sum of the output of the cross-attention layers, not directly $z$. This makes reading and understanding the method extremely difficult to the point where the reviewer cannot fully judge the technical contribution and similarities or differences to related models. 
--- **[W3]** Discussion with respect to 4D generation methods could be expanded Existing work in text-to-4D generation often splits the problem into two stages: static 3D asset generation and a deformation field to model motion. This disentanglement makes it possible to learn the 3D motion from video diffusion models, without the need for multi-view video. The paper briefly touches upon this and compares to two 4D generation methods, 4Dfy and DreamGaussian4D. However, the paper could further elaborate on the similarities and differences of the proposed approach compared to existing methods and the advantages of a multi-view video diffusion model. --- **[W4]** Certain architectural components are not ablated and their contribution to the overall performance is unclear. See Questions. --- **[W5]** The temporal extent of MV-VDM is only 8 frames, which seems rather limited. --- **[W6]** In some of the qualitative examples, the faithfulness to the text prompt is questionable. Some of the animations provided in the supplementary video are not accompanied by a text prompt, while quite a few animations appear very similar. --- **[Minor]** Typos and suggestions: - L103: pioneer → pioneering - L113: All these manners aforementioned → All aforementioned work - L131-132: this sentence can be improved for clarity since the two parts sound a bit repetitive - L135: spatial → spatially; temporal → temporally - L156: It should be better explained what $X_l$ and $X_r$ represent and how these features are obtained. - L292: levering → leveraging; manage → manages --- Technical Quality: 3 Clarity: 2 Questions for Authors: **Q1. [Based on W1]** Please elaborate on the difference between _3D animation via multi-view video diffusion_ and _two-stage text-to-4D approaches_. In the latter case, the second stage could be viewed as 3D animation. 
I understand that a multi-view video foundation model would offer spatiotemporal consistency that other approaches can likely not reach, but I would appreciate a more in-depth discussion about the difference to and the challenges of text-to-3D-and-3D-to-4D approaches than what is currently provided in the paper. --- **Q2. [Based on W1]** Ideally, the authors should provide also empirical comparisons to additional methods (e.g., Dream-in-4D), if possible. --- **Q3. [Based on W4]** The authors state that >We find this simple cross-attention operator [MV2V-Adapter] can effectively improve the object’s appearance consistency in the generated video. The effect of MV2V-Adapter should be better demonstrated with an ablation. --- **Q4. [Based on W4]** Are the cross-attention layers described in L170-172 necessary for identity preservation and alignment with the text prompt? Their effect should be also demonstrated in the ablation studies. --- **Q5.** The proposed 3D animation approach uses both reconstruction and distillation. It first uses the generated multi-view video to optimize a 4D Gaussian Splatting with coarse motion. Then SDS is used to refine the motion. It seems counter-intuitive that SDS would be good at modeling the finer details of the animation, since SDS is typically known for its over-smoothing behavior. The authors should further discuss the motivation behind using SDS during 4D optimization and provide additional examples to prove its effectiveness. --- **Q6. [Based on W6]** Could the authors elaborate on the apparent lack of diversity in the generated motions? It may be helpful to provide some statistics from motion descriptions included in the dataset. Or is this a limitation of the model instead? --- Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations are adequately discussed in Appendix B. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Data copyrights and licensing** We confirm that all models downloaded from Sketchfab have a distributable Creative Commons license and were obtained using Sketchfab's public API. Besides, models marked as "NoAI" and models restricted due to objectionable or adult thematic content were excluded from the dataset. We provide detailed licensing information below.

| License | Number |
|-----------------------|---------------|
| CC Attribution | 37,081 |
| CC Attribution-NonCommercial | 460 |
| CC Attribution-NonCommercial-NoDerivs | 112 |
| CC Attribution-ShareAlike | 73 |
| CC Attribution-NonCommercial-ShareAlike | 66 |
| Free Standard | 38 |
| CC Attribution-NoDerivs | 18 |
| CC0 Public Domain | 9 |
| **Total** | 37,857 |

**W2: Clarity of the details of our method.** Thank you very much for the careful reading, and apologies for the confusing presentation. We will revise the figures and the text in the updated version.

**W3: Discussion with respect to 4D generation methods** Benefiting from the unified spatiotemporally consistent supervision of our 4D foundation model (MV-VDM), our approach can leverage the multi-view attributes of 3D objects to achieve more consistent 4D generation. In contrast, existing methods based on separate text or single-view diffusion models and video diffusion models struggle to preserve spatiotemporal consistency. More importantly, they also fail to faithfully maintain the multi-view attributes of an existing 3D object. **Due to limited space, we would like to discuss this further in the discussion phase.**

**W4: More component ablations.** We provide the ablation for the MV2V-Adapter, alpha blender, and cross-attn layer (image) as follows. Note that we do not ablate the cross-attn layer (text) since it is a necessary component of MVDream and we freeze MVDream in our pipeline. The results indicate that all components are effective.

| Method | **I2V $\uparrow$** | **M. Sm. $\uparrow$** | **Dy. Deg.** | **Aest. Q. $\uparrow$** |
| --- | --- | --- | --- | --- |
| w/o cross-attn layer (image) | 0.887 | 0.978 | 0.966 | 0.504 |
| w/o alpha-blender | 0.911 | 0.982 | 0.958 | 0.528 |
| w/o MV2V-adapter | 0.927 | 0.986 | 0.961 | 0.526 |
| **Ours** | **0.935** | **0.988** | 0.710 | **0.532** |

**W5: The outputs are only 8 frames** Thanks for this good point. To validate the scalability of our MV-VDM, we have trained a 16-frame version and report the quantitative and qualitative results in the global response (1. Comparison methods) and Figure 8 in the attached **PDF**, respectively. We find that the 16-frame model generates larger-amplitude motion while maintaining identity-preservation ability similar to the 8-frame version.

**W6: Faithfulness to the text prompt, similar animations** Regarding the faithfulness to prompts, we clarify that our MV-VDM is capable of producing generations well aligned with the prompts. First, our MV-Video dataset is designed to encompass a wide variety of motion prompts, as illustrated in the word cloud presented in the rebuttal **PDF** (Figure 4). Second, for the same object, our model can synthesize various motions according to different prompts, as verified in the rebuttal **PDF** (Figure 7). However, we acknowledge that achieving extremely precise motion control through detailed prompts presents certain challenges. This limitation is primarily due to the constraints of the CLIP model, which serves as the text encoder for MV-VDM. Developing more powerful 4D models for superior text alignment would be interesting future work. Please also refer to our global response (3. Motion Diversity and Dataset) for more discussion about motion diversity. We omitted some text prompts to make the video clearer. Most objects in the supplementary video are from our test set, and the prompts are listed in the Appendix (Figure 9). Text prompts for reconstructed objects presented in the video are listed in the Appendix (Figure 6).
Below we list the remaining text prompts:

* A glowing blue butterfly is flying.
* A cute cartoon dog is dancing.
* A monster dog is walking.
* A cartoon bear wearing swimming goggles is getting ready to dive.
* A cartoon frog is jumping up and down.
* A panda with very adorable features is dancing.
* A cartoon sea lion, adorned with an expressive and charmingly animated face, is singing.
* An eagle in cartoon style is flapping its wings.
* A giant panda in cartoon style is walking.
* A cool Spiderman is dancing.
* A cute lemur is dancing.

**W7: Issues about typos.** Thanks, we appreciate your careful review. We will correct these in the revised paper and check for any remaining typos. **Q1: In-depth discussion between 3D animation via multi-view video diffusion and two-stage text-to-4D approaches** Please refer to W3. **Q2: Empirical comparisons to additional methods (e.g., Dream-in-4D)** Please refer to the global response (1. Comparison methods). **Q3: Ablation of MV2V-Adapter** Please refer to W4. **Q4: Ablation: cross-attention layers for image condition and text prompt** Please refer to W4. **Q5: The use of 4D-SDS for fine-level animation is counter-intuitive** The role of 4D-SDS is to alleviate small floaters, similar to smoothing but without the blurry effect. Our coarse reconstruction is a **sparse-view** reconstruction, i.e., we only have 4 views, and the reconstruction results are inevitably imperfect in novel views, especially when the number of Gaussian points is large. Similar SDS techniques are adopted in sparse-view 3D reconstruction works, such as ReconFusion [CVPR2023] and Sparse3D [CVPR2023]. We also provide detailed qualitative comparisons in the attached **PDF** (Figure 2). **Q6: Issues about the diversity of the generated motions.** Please refer to the global response (3. Motion Diversity and Dataset).
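As background for the Q5 discussion above, the standard score-distillation gradient from DreamFusion, which 4D-SDS extends to multi-view video renderings, can be written as (generic notation, not copied from the paper):

$$
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} \;=\; \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \,\right],
$$

where $x$ is a rendering of the representation with parameters $\theta$, $x_t$ is its noised version at diffusion timestep $t$, $y$ is the conditioning (here, text plus multi-view images), $w(t)$ is a timestep weighting, and $\hat{\epsilon}_\phi$ is the diffusion model's noise prediction. The refinement stage thus nudges the 4DGS parameters toward renderings that the multi-view video diffusion model scores as likely, which is consistent with the floater-removal role described above.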
--- Rebuttal Comment 1.1: Title: In-depth Discussion with respect to Previous 4D Generation Methods Comment: Due to limited space, our initial response did not permit a comprehensive discussion of previous 4D generation works; we now detail the in-depth discussion as suggested: Previous two-stage 4D generation works attempted to **disentangle motion learning** from appearance learning by adopting **different types of supervision signals**, i.e., video diffusion/monocular video for motion and image/3D diffusion for appearance. However, the motion and appearance supervisions adopted in their work are **not orthogonal**, and sometimes have a **negative effect** on each other. For example, it is commonly agreed that video-diffusion SDS usually brings unappealing visual effects to the appearance of the object [Animate124, Dream-in-4D, AYG]. Meanwhile, the appearance supervision signal prevents the 4D object from updating along the direction that follows the exact score function of the video diffusion model, leading to less natural motion. The small motion amplitude in [4Dfy, Dream-in-4D] and shaky appearance in [AYG] partly support this point. As for monocular-video-guided motion learning, previous work [DG4D, 4DGen, STAG4D] relies on a 3D diffusion model (Zero123) to supervise both motion and appearance in novel views. Since Zero123 SDS is applied per frame, temporal consistency in novel views cannot be guaranteed. Moreover, monocular video does not provide information about depth/distance, so moving closer to or farther away from the camera can be perceived as magnification or reduction of the object, resulting in appearance distortion. In contrast, our method takes the **unified** supervision signal from MV-VDM for motion learning and appearance preservation. Our motion and appearance supervision signals inherently do not conflict with each other, since MV-VDM is conditioned on the multi-view attributes of the 3D object to generate multi-view videos.
Besides, the **multi-view motion supervision** in our work enables more natural motion generation compared with the single-view motion supervision in other works. Thus, we achieve superior performance in terms of both motion generation and appearance preservation on the task of animating any off-the-shelf 3D object. Thanks for your insightful suggestion of an in-depth discussion; we will add this in the revision. If you have any further questions, further discussion is welcome. --- Rebuttal 2: Title: Response to rebuttal Comment: Thank you for the detailed rebuttal. Most of my concerns have been addressed and I intend to keep a positive rating. I appreciate the extended discussion regarding existing methods and I would suggest integrating it into the main paper or the appendix. I also appreciate the authors' efforts in providing additional experiments and comparisons during the rebuttal phase. One concern that remains is still about the writing and presentation, which I hope the authors will improve substantially in their revision. --- Rebuttal Comment 2.1: Comment: Thanks for the response. We are carefully revising our manuscript, including figures, tables, text, and the demo video. The revision will feature significant updates.
Summary: This work presents Animate3D, a framework for animating static 3D models. The core idea involves two main components: 1. A multi-view video diffusion model (MV-VDM) conditioned on multi-view renderings of the static 3D object, trained on a large-scale multi-view video dataset (MV-Video). 2. A framework that combines reconstruction and 4D Score Distillation Sampling (4D-SDS) to utilize multi-view video diffusion priors for 3D object animation. The animation process involves a two-stage pipeline: coarse motion reconstruction from generated multi-view videos, followed by 4D-SDS to model fine-level motions. Quantitative and qualitative evaluations show improvements over previous methods. Strengths: 1. Performance: The proposed method achieves state-of-the-art results. The experiments well validate the effectiveness of the proposed methods. 2. Clarity: The paper is well-written and easy to follow. 3. Technical Novelty: The main contributions of this paper are threefold: 1) The first 4D generation framework to animate any 3D objects with detailed multi-view conditions, which are incorporated by the proposed MV2V-Adapter. 2) The authors propose to animate any off-the-shelf 3D models with unified spatiotemporally consistent supervision, which yields better results. 3) The collected 4D dataset, MV-Video. Weaknesses: 1. Missing Reference [a]: It is understandable that you did not compare Animate3D with Animate124 in this paper, as 4D-fy and DreamGaussian4D both demonstrate better performance compared to Animate124. However, it is unusual that you did not discuss Animate124 at all, given its relevance in previous comparisons by 4D-fy and DreamGaussian4D. 2. Limited Comparison Scope: The authors only compare Animate3D with 4D-fy and DreamGaussian4D, which seems insufficient. It would be more comprehensive to include comparisons with 4DGen and TC4D, as both claim superior performance over 4D-fy and have released their code.
Additionally, while the modification of 4D-fy to a Gaussian representation for fair comparison is understandable, the original 4D-fy results should also be included for a thorough comparison.

[a] Animate124: Animating One Image to 4D Dynamic Scene
[b] 4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency
[c] TC4D: Trajectory-Conditioned Text-to-4D Generation

Technical Quality: 4 Clarity: 3 Questions for Authors: 1. The results of 4D-fy appear strange. When I check the 4D-fy results in TC4D and 4DGen, they look more reasonable. 2. I think the authors should also conduct ablation studies on the MV2V-Adapter. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors have discussed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Missing reference: Animate124** Sorry for missing Animate124, which is a great pioneering work. The comparison is in the global response (1. Comparison methods), as requested by other reviewers. The reference will be added in our revised paper. **W2: More comparisons with 4DGen, TC4D, and the original 4Dfy.** Thanks for your suggestion. Please refer to the global response (1. Comparison methods). **Q1: 4Dfy results are strange** Thanks for this point. Actually, 4Dfy is not designed for animating off-the-shelf 3D objects; instead, it is designed for text-to-4D (the task in TC4D/4DGen). So it does not fit our task and produces poor results. Technically, we follow the default training setting in 4Dfy's official implementation for animating the off-the-shelf static Gaussian/NeRF. Notably, that implementation reduces the learning rate of the static Gaussian/NeRF to a relatively low value in the dynamic stage, in the hope that high-quality appearance will be preserved. Admittedly, when the static object is generated by an image/3D SDS loss and **the exact same image/3D SDS loss** is continually used in the dynamic stage, the high-quality appearance can be preserved, as shown in previous works. However, the image/3D SDS loss does not match the off-the-shelf 3D object in our task, so using it in the dynamic stage as in the original paper results in appearance changes. As the learning rate for the static NeRF is very low, the appearance change does not complete by the end of training, so the results look strange. We tried to use a higher learning rate, but found that this resulted in very low **I2V** values, as the object appearance was changed completely. Besides, we found 3DGS sometimes fails to generate good results when supervised by the MVDream SDS loss. This problem was discussed by other researchers in the issue section of the threestudio-3dgs repo. **Q2: Ablation of MV2V-Adapter** Please refer to our global response (2.
Ablation of MV2V-Adapter) --- Rebuttal Comment 1.1: Comment: Thanks for the efforts of the authors. They have conducted additional experiments to support their claims, and my concerns have been resolved. I will maintain my scores.
Summary: This paper proposes an animation method that animates a 3D model into a 4D one. A multi-view image conditioned multi-view video diffusion model (MV-VDM) is presented to generate multi-view videos from multi-view renderings of a static 3D object. The MV-VDM is leveraged to train the 4D Gaussian Splatting (4DGS), where the As-Rigid-As-Possible (ARAP) loss, a reconstruction loss, and SDS are used as objectives. In addition, a multi-view video dataset is constructed to train MV-VDM. The paper conducts experiments to show the effectiveness of the proposed method. Strengths: 1. The idea is well-motivated and straightforward. 2. A large-scale multi-view video dataset is presented. 3. A multi-view video diffusion model is presented, where spatiotemporal attention is introduced to animate multi-view images with motions. Weaknesses: 1. The comparison is not very fair. The paper focuses on a new task, i.e., 3D animation, while 4Dfy focuses on text-to-4D, which is a different task. Furthermore, text-to-4D is more challenging than 3D animation since multi-view images are unavailable. The better performance of the proposed method may be partly attributed to the additional multi-view images. In other words, the worse performance of 4Dfy and DG4D may not be due to the methods themselves. In addition, the paper replaces the dynamic NeRF in 4Dfy with 4DGS. However, some hyperparameters of 4Dfy are set according to the dynamic NeRF, rather than 4DGS. Instead, the paper could compare the proposed method with AYG, which directly trains 4DGS. In addition, the paper could compare with Animate124, which is an image-to-4D method. 2. The motivation for the proposed spatiotemporal attention block is not clear. Besides the temporal motion module and temporal encoding, the block introduces a multi-view 3D attention module. What is the multi-view 3D attention module used for? Why are temporal motion modules and temporal encoding not enough to generate motions for multi-view videos? 3.
What is the influence of the Alpha Blender on the performance of the proposed method? 4. Does the proposed method train all modules in the spatiotemporal attention block? It is not clear whether the "Multi-view 3D Attention" and "Temporal Motion Module (Pre-trained VDM)" are trained or frozen. 5. The implementation details of the Alpha Blender are not clear. Is it implemented as a layer of MLP? 6. The temporal encoding is unclear. According to Figure 2, there is a temporal encoding module in the spatiotemporal attention block, but there is no description of what kind of temporal encoding is used. 7. How many views does each animated 3D object contain in the dataset? Only four orthogonal views, or more? 8. Although the dataset contains 38K animated 3D objects, it is still much smaller than 2D video datasets. Can the proposed method, trained on a dataset of this magnitude, animate any 3D model? 9. Training a multi-view video diffusion model requires camera parameters. Are the camera parameters in the multi-view video dataset processed to ensure that they are consistent with the prior knowledge learned by MVDream? Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to my questions above. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have provided the limitations and societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: (1) Issues about unfair comparison: 4Dfy and DG4D do not leverage multi-view images; (2) 4Dfy is based on NeRF instead of 4DGS; (3) Add comparison with AYG and Animate124.** **(1)**: We propose a new task of animating any off-the-shelf 3D object, and there is **no previous work specifically designed for this task to compare with**, so we could only choose two representative 4D generation works to serve as comparison baselines. Due to the lack of unified spatiotemporally consistent supervision, existing works (e.g., 4Dfy, DG4D) can only focus on using text- or single-view-image-conditioned diffusion models for motion modeling. Experiments have shown that this often leads to spatiotemporally inconsistent 4D generation results. Our Animate3D is the first 4D generation framework to animate any 3D object with detailed multi-view conditions. We believe this workflow is more suitable for generating spatiotemporally consistent 4D objects by leveraging the advanced static 3D object reconstruction and generation literature. **(2)** and **(3)**: Thanks for the suggestion. Please refer to our global response (1. Comparison methods). **W2: Motivation and effectiveness of the proposed spatiotemporal attention block.** To enhance the spatial and temporal consistency of our MV-VDM, we design a new spatiotemporal attention module, which mainly contains a temporal attention branch and a spatial attention branch. This block is built upon the motion module of the video diffusion model to inherit its temporal prior. In early experiments, we found that simply adopting the temporal attention branch is not enough, because the features are likely to lose multi-view consistency after being processed by the temporal branch. Therefore, we add a parallel spatial attention branch to address this issue. The effectiveness is validated in **Tab. 3(a) and Fig. 4 in our original paper**. Note that **w/o S.T.
Attn** means we replace the proposed spatiotemporal block with the motion modules of video diffusion models. We will add these details in our revision.

**W3: The influence of the Alpha Blender.** In our spatiotemporal attention block, we employ an alpha blender layer with a learnable weight to fuse the features from the temporal and spatial branches. In the following table, we show the influence of the alpha blender layer; the results validate that it enhances the spatiotemporal consistency of the 4D generation results.

| Method | **I2V $\uparrow$** | **M. Sm. $\uparrow$** | **Dy. Deg.** | **Aest. Q. $\uparrow$** |
| --- | --- | --- | --- | --- |
| w/o Alpha Blender | 0.911 | 0.982 | 0.958 | 0.528 |
| Ours | **0.935** | **0.988** | 0.710 | **0.532** |

**W4: Implementation details: trainable modules in the spatiotemporal attention block.** Yes, we train all modules in our proposed spatiotemporal attention block, as illustrated in Figure 2 in our paper. Thanks for your reminder; we will add these details to the implementation section in our revised version.

**W5: Implementation details: Alpha Blender** As illustrated in Eq. (1) in our paper, we perform alpha blending with a learnable weight $\mu$, which is implemented via nn.Parameter. We will add this detail in the revised version.

**W6: Implementation details: temporal encoding** Thanks for your reminder; we adopt the sinusoidal positional encoding from AnimateDiff as the temporal encoding. We will add these details in our revised version.

**W7: Implementation details: number of views of each animated 3D object in our dataset.** Please refer to the Appendix (C.1 Rendering Details): 16 views are evenly sampled in terms of azimuth, starting from values randomly selected between $-11.25^{\circ}$ and $11.25^{\circ}$. The elevation angle is randomly sampled within the range of $0^{\circ}$ to $30^{\circ}$.
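The view sampling described in W7 (and the orthogonal-view subsampling mentioned in W9) can be sketched as follows. This is a minimal illustrative sketch: the function names and the exact random-number handling are our assumptions, not taken from the released code.

```python
import random

def sample_camera_angles(n_views=16, seed=None):
    """Sample rendering angles per the W7 description: n_views azimuths
    evenly spaced over 360 degrees, starting from a random offset in
    [-11.25, 11.25] degrees; one elevation sampled in [0, 30] degrees."""
    rng = random.Random(seed)
    start = rng.uniform(-11.25, 11.25)      # random starting azimuth
    step = 360.0 / n_views                  # even spacing (22.5 deg for 16 views)
    azimuths = [(start + i * step) % 360.0 for i in range(n_views)]
    elevation = rng.uniform(0.0, 30.0)      # shared elevation angle
    return azimuths, elevation

def pick_orthogonal_views(azimuths, rng=None):
    """At training time, pick four orthogonal views (90 degrees apart),
    consistent with MVDream's four-view camera setting (cf. W9)."""
    rng = rng or random.Random()
    n = len(azimuths)
    first = rng.randrange(n // 4)           # any index in the first quarter
    return [azimuths[(first + k * (n // 4)) % n] for k in range(4)]
```

For 16 views the step is 22.5 degrees, so every fourth view is exactly 90 degrees from the previous one, which is why sampling one index in the first quarter suffices to get an orthogonal quadruple.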
**W8: Training dataset size is limited; is it capable of animating any 3D object?** Admittedly, our dataset is much smaller than 2D video datasets, but since we inherit the prior knowledge learned by 3D and video diffusion models, our model can animate most dynamic 3D objects commonly seen in daily life. We have tested various categories of objects, including humans, mammals, reptiles, birds, insects, vehicles, weapons, etc., and obtained good results. We provide more examples in the attached **PDF**. Please also refer to our global response (3. Motion Diversity and Dataset) for our discussion about the dataset. **W9: Implementation details: camera parameters consistent with MVDream?** Yes, the camera parameters in our multi-view video dataset are processed to ensure that they are consistent with the prior knowledge learned by MVDream. Please refer to the Appendix (C.1 Rendering Details) in our paper: 16 views are **evenly** sampled in terms of azimuth. Specifically, during training, we randomly sample four orthogonal views to form a multi-view video for each animation, which is consistent with MVDream's camera setting. --- Rebuttal Comment 1.1: Comment: Thank the authors for addressing my questions. I plan to keep my positive score.
Rebuttal 1: Rebuttal: We thank the reviewers for their appreciation of our work, with positive comments such as "problem setting is interesting, well-motivated and straightforward" (R2, R4), "achieve state-of-the-art performance of 4D generation" (R3, R5), "the large-scale 4D dataset could have significant influence on this area" (R1, R2, R3, R4, R5), and "paper is well-written" (R1, R3, R5). Below we respond to some common concerns raised by the reviewers. **Many reviewers ask for more visualizations of our method beyond the demo video we provided. Due to the anonymity policy, we cannot share the link to our project page with more than 100 $1024\times1024$-resolution animation videos, so we provide some high-quality images in the attached PDF**. Additionally, we validate that **the Gaussian trajectory learned by our model is quite accurate** and can even be used to **directly animate the MESH**, obtaining an **animated mesh** that can be used in standard 3D rendering pipelines. (See Figure 6 in the attached PDF. Text prompts there are "A dragon head is roaring", "A cute dog is walking", and "A cute rabbit is dancing", from left to right.) **1. Comparison methods**: Since we propose **a new task** of animating any off-the-shelf 3D object and there are **no previous methods** specifically designed for this task, we compared our work with two state-of-the-art 4D generation methods of different categories (4Dfy (4DGS) and DG4D). Although we think the experiments in the paper are solid enough to support our claims, we further provide additional comparisons with all the methods requested by the reviewers except AYG, which is not open-sourced and hard to reproduce during the limited rebuttal period. We use the official implementations of all comparison methods and load our pre-trained static NeRF/3D Gaussians. Note that NeRF-based methods usually require training times ranging from hours to dozens of hours.
Additionally, we add **CLIP-I** as an evaluation metric and provide results for the **16-frame version of our model** as suggested. **Quantitative results are below and qualitative results are depicted in Figure 8 in the attached PDF.** (We indicate the best and second-best results in bold and italics.)

| | **I2V $\uparrow$** | **M. Sm. $\uparrow$** | **Dy. Deg.** | **Aest. Q. $\uparrow$** | **CLIP-I $\uparrow$** |
|-------|-------------------|----------------------|--------------|------------------------|------------|
| 4Dfy (4DGS) | 0.783 | **0.996** | 0.0 | 0.497 | 0.786 |
| 4Dfy (NeRF) | 0.817 | 0.990 | 0.010 | 0.549 | 0.834 |
| Animate124 | 0.845 | 0.986 | 0.313 | 0.563 | 0.845 |
| 4DGen | 0.833 | *0.994* | 0.187 | 0.453 | 0.776 |
| TC4D | 0.856 | 0.992 | **0.830** | 0.565 | 0.859 |
| Dream-in-4D | 0.938 | *0.994* | 0.0 | 0.551 | 0.895 |
| DG4D | 0.898 | 0.986 | 0.477 | 0.529 | 0.860 |
| Ours (8-frame) | *0.982* | 0.991 | 0.597 | **0.581** | **0.946** |
| Ours (16-frame) | **0.983** | 0.991 | *0.750* | *0.572* | *0.937* |

As the comparison methods are **not specifically designed for this task**, i.e., they do not take the multi-view attributes of the given 3D object into consideration, they do not perform well in preserving the identity of the object (indicated by **I2V** and **CLIP-I**). Besides, they also struggle to learn motion with relatively large amplitude, since they have to balance appealing appearance against large motion (indicated by **Dy. Deg.**). **TC4D is a special case, as it takes a pre-defined object trajectory as the global motion**. Our method is superior to the comparison methods in terms of both appearance and motion. **In particular, we find our 16-frame model can generate motion with larger amplitude while having appearance-preservation ability similar to our 8-frame model. Please refer to the attached PDF for a better understanding.** **2. Ablation of MV2V-Adapter**: We ablate the MV2V-Adapter as follows.
The MV2V-Adapter improves almost all metrics. The decrease in **Dy. Deg.** is because, without the MV2V-Adapter, the generated results are noisy and the motion is incoherent. A qualitative comparison is in Figure 5 in the attached PDF (text prompt: "A flame rock monster is launching an attack").

| Method | **I2V $\uparrow$** | **M. Sm. $\uparrow$** | **Dy. Deg.** | **Aest. Q. $\uparrow$** |
|---|---|---|---|---|
| w/o MV2V-adapter | 0.927 | 0.986 | 0.961 | 0.526 |
| Ours | **0.935** | **0.988** | 0.710 | **0.532** |

**3. Motion Diversity and Dataset**: We present more generated results in **Figure 7 in the attached PDF**, demonstrating that the model trained on our dataset can perform various diverse animations on the same 3D objects. This proves the generalization capability of our dataset. Admittedly, our MV-Video dataset contains only 38K animated objects with 84K animations, which is still small compared to 2D video datasets. However, it is worth noting that multi-view video (4D) data is rarer and more challenging to obtain than web-scale 2D video data, especially with regard to the multi-view camera parameters. Our dataset is much larger than previous 4D datasets. A large-scale 4D dataset is crucial for learning a multi-view spatiotemporally consistent 4D foundation model, which cannot be achieved with 2D video datasets. We have created a 4D dataset and confirmed its effectiveness. We are expanding the dataset and plan to release it, and believe it will grow with community support, driving advancements in the 4D domain. We hope the reviewers will consider this. Pdf: /pdf/ae484ceccb535783b0813b85b1539bad10f184e8.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper focuses on animating 3D objects with multi-view diffusion models. To improve spatial and temporal consistency, this work builds a large-scale multi-view video dataset, MV-Video, and designs an attention module to enhance spatial and temporal consistency by integrating 3D and video diffusion models. To enhance animations of 3D objects, this work jointly optimizes the 4DGS through reconstruction and 4D-SDS. Experiments show the proposed method achieves more consistent 4D generation. Strengths: * This work builds a large-scale multi-view video dataset to facilitate the training of multi-view video diffusion models. * The method designs an attention module to encourage spatial and temporal consistency for multi-view diffusion models. * The approach introduces 4D-SDS to leverage multi-view video diffusion models for animating 3D objects. * The paper is easy to read and understand. Weaknesses: * This work focuses on 3D objects; however, the paper title claims "Animating Any 3D Models", which is a bit inappropriate. * The effectiveness of the proposed dataset is not fully validated. Can this dataset improve single-view diffusion models or multi-view image-to-3D generation models, like MVDream? * The diversity of the proposed method is unclear. Can one object perform different actions? For example, in Figure 3, can the frog perform swimming? * The picture quality is a bit low; it is hard for readers to distinguish the advantages of the generated objects from the proposed method. * For animating a 3D object, this work first leverages 4DGS to reconstruct coarse motions and then uses 4D-SDS for refinement. If the method uses better 4DGS algorithms, the improvement from 4D-SDS may become small. Technical Quality: 2 Clarity: 2 Questions for Authors: * It would be better to validate the effectiveness of the proposed dataset: can it also improve single-view diffusion models (e.g., T2V) or multi-view image-to-3D generation models (e.g., MVDream)?
Then, can these models further improve other methods, like DreamGaussian4D? * Although the work verifies the effectiveness of the S.T. attention, the effect of the MV2V-adapter is not validated. * If the approach leverages better 4DGS reconstruction methods, such as [1], for the coarse motion reconstruction, can 4D-SDS still improve the coarse results? [1] Yang, Zeyu, et al. "Real-time photorealistic dynamic scene representation and rendering with 4D Gaussian splatting." arXiv preprint arXiv:2310.10642 (2023). * As this work designs a multi-view video diffusion model, can the authors compare it with existing single-view video diffusion models? * For Figure 4, both ablation models show the player touching the basketball. However, the full model does not show this interaction. I wonder if the full model can do better for this interaction. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper has discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Inappropriate title: 3D objects instead of 3D models** Thanks for the advice; we will consider revising it in the revised version.

**W2: Effectiveness of the proposed dataset for video or 3D diffusion models** Given the limited time, we only finetuned SVD on a subset (20%) of our dataset, and are glad to see some improvements. For evaluation, we evaluate the finetuned model on the validation dataset used for the ablation of MV-VDM.

| Method | **I2V $\uparrow$** | **M. Sm. $\uparrow$** | **Dy. Deg.** | **Aest. Q. $\uparrow$** |
|----------------|-----------|------------|---------------|----------------|
| SVD | 0.927 | 0.985 | 0.849 | 0.538 |
| SVD-finetuned | **0.950** | **0.990** | 0.708 | **0.557** |

We also use the generated video from the finetuned model as the input for DG4D in 4D generation and report the results below.

| | **I2V $\uparrow$** | **M. Sm. $\uparrow$** | **Dy. Deg.** | **Aest. Q. $\uparrow$** |
|---|--------|-----------|------------|------------|
| **DG4D w/o finetuned SVD** | 0.898 | 0.986 | 0.477 | 0.529 |
| **DG4D w/ finetuned SVD** | **0.907** | **0.989** | 0.407 | **0.533** |

Please note that the decrease in **Dy. Deg.** is because SVD without finetuning produces artifacts and noise in the generated input video, leading to the failure of 4D generation. We believe that further improvements could be achieved by finetuning on the full training dataset.

**W3: Motion diversity: object performing different actions.** Thanks for your insightful question. Please refer to our global response (3. Motion Diversity and Dataset).

**W4: About the picture quality.** Actually, the picture quality is mainly affected by the quality of the static 3D generated/reconstructed object, since our method is good at preserving the appearance of the given object.
We should clarify that the reconstruction quality is not the primary contribution of this paper; it is potentially influenced by the hyper-parameter adjustment of GRM (or other reconstruction methods). Besides, the compression of images in the PDF can lead to a certain degree of loss in image quality. We provide high-quality images in Figures 6-8 of the attached PDF. Although they do not have the same quality as the original renderings at 1024$\times$1024 resolution, they are better than those provided before. **W5: Better 4DGS algorithms will make 4D-SDS unnecessary** Though MV-VDM generates spatiotemporally coherent multi-view videos as the ground truth for reconstruction, **the ground truth has only 4 views**. Existing 4DGS algorithms **cannot straightforwardly address this sparse multi-view video reconstruction.** Following your suggestion, we have experimented with an improved 4DGS reconstruction algorithm [2]. The results are reported in the table below. Better 4DGS [2] does not improve the reconstruction results, primarily because the task here involves sparse multi-view video reconstruction using only 4 views, a scenario Better 4DGS [2] is not designed to tackle. In our work, we apply the arap loss to 4DGS [1] to effectively handle sparse views. We could not apply the arap loss to Better 4DGS [2]: the arap loss is designed to constrain the motion of the 3D Gaussians, but Better 4DGS [2] does not explicitly model motion (it regards the time dimension as a property of Gaussian points, similar to scale/rotation/opacity). We find the proposed 4D-SDS improves the results of Better 4DGS. We will add this discussion in our revised paper.

| Reconstruction | 4D-SDS | **I2V $\uparrow$** | **M. Sm. $\uparrow$** | **Dy. Deg.** | **Aest. Q. $\uparrow$** |
|---|---|---|---|---|---|
| 4DGS [1] | | 0.978 | 0.990 | **0.657** | 0.572 |
| Better 4DGS [2] | | 0.972 | 0.990 | 0.621 | 0.561 |
| 4DGS [1] | $\checkmark$ | 0.983 | **0.997** | 0.597 | **0.581** |
| Better 4DGS [2] | $\checkmark$ | **0.984** | **0.997** | 0.610 | 0.573 |

[1] Wu, Guanjun, et al. "4D Gaussian splatting for real-time dynamic scene rendering." (CVPR 2024) [2] Yang, Zeyu, et al. "Real-time photorealistic dynamic scene representation and rendering with 4D Gaussian splatting." (ICLR 2024) **Q1: Validate the effectiveness of the proposed dataset.** Please refer to the response to W2. **Q2: Ablation of MV2V-adapter** Please refer to our global response (2. Ablation of MV2V-Adapter). **Q3: The effectiveness of better 4DGS algorithms.** Please refer to the response to W5. **Q4: Comparison with existing single-view video diffusion models** We conducted some empirical comparisons between our MV-VDM and SVD, finding that our model outperforms SVD when animating 3D object renderings, as we specifically trained our model in this domain. However, we struggled to animate realistic images with complicated backgrounds, which is an area where SVD excels. It is important to note that the comparison is made between the multi-view video results of our model and the single-view video results of SVD, which may be an unfair assessment for our model. For quantitative evaluation, we tested SVD on the dataset used for the ablation study of MV-VDM, and the results are reported in the table below.

| Method | **I2V $\uparrow$** | **M. Sm. $\uparrow$** | **Dy. Deg.** | **Aest. Q. $\uparrow$** |
| --- | --- | --- | --- | --- |
| SVD | 0.927 | 0.985 | 0.849 | **0.538** |
| Ours | **0.935** | **0.988** | 0.710 | 0.532 |

**Q5: Issue about the basketball interaction in Fig. 4** Yes, it has the interaction. Please refer to Figure 3 in our attached PDF. --- Rebuttal Comment 1.1: Comment: Thanks for your response. It has addressed my concerns. I will keep my score.
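As context for the arap-loss discussion in W5: an as-rigid-as-possible (ARAP) regularizer penalizes non-rigid deformation of the moving Gaussians. The sketch below is our own toy formulation of that idea (not the authors' implementation): for each point and its neighbors, it penalizes changes in pairwise distances between two frames, so rigid motion incurs near-zero loss while stretching is penalized.

```python
import numpy as np

def arap_motion_loss(pts_t0, pts_t1, neighbors):
    """Toy ARAP-style penalty: sum of squared changes in neighbor
    distances between frame t0 and frame t1."""
    loss = 0.0
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            d0 = np.linalg.norm(pts_t0[i] - pts_t0[j])
            d1 = np.linalg.norm(pts_t1[i] - pts_t1[j])
            loss += (d1 - d0) ** 2
    return loss

np.random.seed(0)
pts = np.random.rand(10, 3)                      # toy point positions at t0
nbrs = [[(i + 1) % 10] for i in range(10)]       # one neighbor per point

# Rigid translation preserves all pairwise distances -> near-zero loss;
# uniform stretching changes them -> positive loss.
rigid = arap_motion_loss(pts, pts + np.array([0.5, 0.0, 0.0]), nbrs)
stretch = arap_motion_loss(pts, pts * 2.0, nbrs)
```

In a real pipeline such a term would be added to the reconstruction loss so that sparse-view supervision cannot tear the geometry apart frame-to-frame; the point/neighbor names above are hypothetical.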
DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus
Accept (poster)
Summary: DoGaussian incorporates the 'divide-and-conquer' approach and introduces the ADMM algorithm into the 3DGS training process for large-scale 3D reconstruction tasks, reducing training time by 6+ times compared to the original 3DGS. Specifically, DoGaussian first splits the scenes into K local blocks of similar sizes and then maintains a global 3DGS node to ensure consistency through consensus on the K shared blocks. During the inference stage, DoGaussian moves all shared blocks and queries only the global block to improve rendering efficiency. Strengths: 1. DoGaussian proposes a novel distributed training strategy, achieving a significant acceleration of the 3DGS training process without sacrificing rendering visual quality. 2. DoGaussian functions more like a plugin and can be applied to any GS representation work. 3. The paper is well-written and easy to follow, and the supplementary video is exceptionally well made. Weaknesses: 1. **Artifacts in Teaser.** There are noticeable artifacts in the teaser, particularly in the picture located in the bottom right corner. It would be beneficial for the authors to provide an explanation of these artifacts and address them accordingly. 2. **Suitable for Street Scenes.** In the experiments section, all experiments are conducted on aerial datasets, but reconstructing street data is also an important problem. Can the authors demonstrate DoGaussian's applicability using the San Francisco Mission Bay dataset from Block-NeRF[1] or Block_A from the MatrixCity[2] dataset? 3. **GPU Memory Problem.** DoGaussian appears to be a system paper that utilizes a distributed approach to improve training efficiency by placing the global 3DGS on a main node and some local 3DGS on other nodes. One potential reason other works do not use this strategy could be GPU memory limitations. All datasets used by the authors are processed by 3DGS on a single GPU, which does not truly qualify as 'large-scale.' 
It would be beneficial for the authors to demonstrate that DoGaussian can be applied to larger datasets, such as the aerial and street data of big city in MatrixCity, which is used in NeRF-XL[3]. [1] Block-nerf: Scalable large scene neural view synthesis. [2] Matrixcity: A large-scale city dataset for city-scale neural rendering and beyond [3] NeRF-XL: Scaling NeRFs with Multiple GPUs Technical Quality: 4 Clarity: 4 Questions for Authors: There are three questions regarding the weaknesses mentioned above: 1. The authors need to explain the artifacts in the teaser. 2. It would be better to conduct experiments on street datasets and larger datasets. 3. I’m also curious to know if the final results would be better using the visibility-aware splitting strategy of VastGaussian. I would be happy to raise my score if the authors can address all my questions. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors discuss the limitation in the appendix A.8. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - ***Q1: Noticeable artifacts in the teaser, particularly in the picture located in the bottom right corner.*** - **A1:** From Fig.3 in the attached PDF, we can observe that the artifacts in the teaser appear near the scene boundary, which is a common problem for 3DGS-based methods and not a particular issue caused by our method. - ***Q2: Suitable for Street Scenes.*** - **A2:** See **A4** in the common questions. - ***Q3: GPU Memory Problem*** - **A3:** See **A1**, **A2** and **A3** in the common questions. - ***Q4: It would be better to conduct experiments on street datasets and larger datasets, such as the aerial and street data of Big City in MatrixCity, which is used in NeRF-XL.*** - **A4:** Since we have only a limited number of GPUs with 48GB memory each, we did not test our method on the Big City scene in MatrixCity. However, to further show the applicability of our method to larger-scale scenes, we evaluated our method on the $2.7 \text{km}^2$ Small City scene in the MatrixCity dataset, which contains $5,620$ training views and $741$ validation views. We terminated the training of the original 3D-GS early since it did not finish within two days. From the table below, our method achieved the best results in rendering quality. The visual qualitative results are shown in Fig.5 in the attached PDF.

*Table. Quantitative results of our method on the Small City aerial scene of the MatrixCity dataset.*

| | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | Time $\downarrow$ | Points | Mem $\downarrow$ | FPS $\uparrow$ |
|------------|-----------------|--------------|-----------------|--------------|--------|--------------|-------------|
| 3D-GS | 27.36 | 0.818 | 0.237 | 47:40 | 11.9 | 6.31 | 45.57 |
| VastGaussian$^{\dagger}$ | 28.33 | 0.835 | 0.220 | 05:53 | 12.5 | 6.99 | 40.04 |
| DoGaussian | **28.58** | **0.847** | **0.219** | 06:34 | 10.3 | 5.82 | 48.34 |

- ***Q5: If the final results would be better using the visibility-aware splitting strategy of VastGaussian.*** - **A5:** The visibility-aware splitting strategy of VastGaussian is utilized to add more redundant training views to each block to ensure well-constrained scene boundaries. However, this strategy also imbalances the size of each block. In our experiments, we find this strategy improves the rendered image quality of our method only marginally but increases the training time. On the Campus scene, when adopting the visibility-aware strategy in our method, the PSNR is increased by 0.09 dB, but the training time is increased by 25 minutes. While visibility awareness is a good strategy for improving the rendering quality of the model, some post-processing steps may be required to ensure balanced block-splitting results, which we leave as future work. --- Rebuttal Comment 1.1: Comment: Thanks for your reply and effort. All my concerns have been addressed. I keep my original score. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: We are grateful for your insightful suggestions and happy that our rebuttal has addressed your concerns. We will update the paper with all the experiments and details mentioned.
Summary: This paper proposes a distributed training strategy for 3DGS in large-scale scenes. The scene is evenly split into K blocks, while a global scene representation is also maintained. The optimization of the scene is then cast as a constrained optimization problem solved by the classic ADMM. The results demonstrate the method's effectiveness and efficiency. Strengths: 1. It is interesting to convert the distributed training of 3DGS into a constrained optimization problem. It could boost the rendering in overlapping regions significantly. 2. The experiments are sufficient. Weaknesses: 1. The scene splitting is somewhat tailored to bird's-eye-view images. For extended scenes with surround views, such as autonomous driving, a training image in one block could see a large portion of the point cloud in other blocks. 2. It would be better to revise some phrasings in the paper. a. line 82, the cone sampling of Mip-NeRF is not designed as a representation enhancement for outdoor scenes. b. line 98, "leveraging xxx performance for xxx reconstruction" seems confusing. c. line 4 of Fig.2, there is a redundant 'the' in the sentence "a copy the the global 3D Gaussians". 3. It would be better to highlight the best performance in Tables 2 and 4. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How are the global Gaussians loaded? Does the method work when a single GPU cannot load the global scene (main node)? 2. As different splittings could lead to different overlapping regions and different ADMM problems, does this matter? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As discussed in Question 1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - ***Q1: applicability on autonomous driving scenes*** - **A1:** See **A4** in the common questions. - ***Q2: It would be better to modify some representations in the paper*** - **A2:** Thanks for the suggestions. We revised the paper accordingly: ***(a)*** line 82, we revised it to "To address the aliasing issue in vanilla NeRF, Mip-NeRF[1] proposed to use a Gaussian to approximate the cone sampling, ...". ***(b)*** line 98, we revised it as "or leveraging the efficient rasterizer of point rendering for real-time indoor reconstruction". ***(c)*** line 4 of Fig.2, we revised it as "a copy of the global 3D Gaussians". - ***Q3: It would be better to highlight the best performance in Tables 2 and 4.*** - **A3:** Thanks for the suggestion. We will highlight the best results in Tables 2 and 4 in our revised paper, as is done in Tables 1 and 3. - ***Q4: How to load the global Gaussians? Does the method work when a single GPU cannot load the global scene (main node)?*** - **A4:** See **A2** in the common questions. - ***Q5: As different splittings could lead to different overlapping regions and different ADMM problems, does it matter?*** - **A5:** It is possible that better splitting methods can lead to better results and boost the convergence of the consensus ADMM problem. From Eq.8(b) of the main paper, the global Gaussians are updated by averaging the values from local Gaussians. Though local Gaussians converge towards the global Gaussians during training, a high variance of the shared local Gaussians (which occurs when Gaussians are not well constrained) can slow the convergence of ADMM. Our method does not suffer from this issue since we adopt a confidence scale factor to ensure the scene boundaries have enough overlapping areas. --- Rebuttal 2: Comment: Thanks for your reply and answers. Most of my concerns are addressed.
Although the limitation in street scenes currently exists, the contribution of this paper in bringing constrained optimization into large-scale scene learning is clear. So I keep my original score. --- Rebuttal Comment 2.1: Title: Thanks for your reply Comment: We are very grateful for your insightful suggestions and support of our work. We are happy to see that our rebuttal has addressed most of your concerns.
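The consensus mechanism debated in A5 above (Eq.8(b): the global copy is updated by averaging the shared local copies, while dual variables pull the local models into agreement) follows the standard consensus-ADMM pattern. A toy sketch on a scalar least-squares problem is given below; it is illustrative only, with hypothetical data, and is not the paper's 3DGS training code.

```python
import numpy as np

# Two "blocks" each fit a shared scalar x to their own data; a global copy z
# is updated by averaging (the consensus step), and scaled duals u_k enforce
# agreement between the local copies and z.
data = [np.array([1.0, 1.2, 0.8]), np.array([3.0, 2.8, 3.2])]  # per-block data
rho = 1.0
x = [0.0, 0.0]   # local copies
z = 0.0          # global (consensus) copy
u = [0.0, 0.0]   # scaled dual variables

for _ in range(100):
    # x-update: argmin ||d_k - x||^2 + (rho/2)(x - z + u_k)^2, in closed form
    for k, d in enumerate(data):
        n = len(d)
        x[k] = (2 * d.sum() + rho * (z - u[k])) / (2 * n + rho)
    # z-update: average of local copies plus duals (the consensus step)
    z = sum(x[k] + u[k] for k in range(2)) / 2
    # dual update
    for k in range(2):
        u[k] += x[k] - z
```

At convergence the local copies agree with the global copy, which solves the joint problem (here, the mean of all six data points, 2.0); in DOGS the same averaging is applied to the 3D Gaussians shared between blocks.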
Summary: This paper introduces traditional ADMM to Gaussian Splatting and achieves distributed Gaussian Splatting training. The proposed distributed approach reduces training time and guarantees training convergence and stability. Experiments demonstrate both the effectiveness and efficiency of this method. Strengths: 1. The paper introduces the powerful ADMM to large-scale 3D reconstruction, which is an effective and elegant approach to distributed optimization. 2. The compact optimization scheme helps the method achieve high-quality reconstruction. 3. The idea of the proposed distributed optimization could inspire research in other tasks, such as full-body reconstruction. 4. The paper is well written. Weaknesses: 1. More experiments are needed to prove the superiority of the consensus step. I hope to see more qualitative and quantitative comparisons with VastGaussian in areas where blocks overlap. 2. The authors claim that the proposed split method can balance the training time of each block. However, there is a lack of experiments showing the training and waiting time of each block. 3. The authors enable consensus and sharing when every block reaches 100 iterations. Is it possible to allow the blocks to be at different iterations? It could be more efficient to avoid blocking. However, there is no discussion about the difference between synchronous and asynchronous methods. 4. There is no discussion about the memory consumption of the master node. How much GPU memory is needed for the master node? Will its memory consumption increase linearly as the total number of Gaussians increases? Technical Quality: 3 Clarity: 3 Questions for Authors: NA Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **Q1: More experiments are needed to prove the superiority of the consensus step. More qualitative and quantitative comparisons with VastGaussian in areas where blocks overlap.** - **A1:** We include more qualitative results in Fig.2 in the attached PDF to show the importance of the consensus step. We can observe that the distributed training presents noisy results without the consensus step. From the two bottom-right figures, we can observe obvious artifacts along the block boundary without the consensus step. As is shown in Fig.4 of the attached file, both methods produce fairly consistent results. However, our method presents higher-fidelity rendering results than VastGaussian near the splitting boundary, which also validates the effectiveness and importance of the consensus step. - ***Q2: The authors claim that the proposed split method can balance the training time of each block. However, there is a lack of experiments showing the training and waiting time of each block.*** - **A2:** On the Campus dataset, we measured, for each slave node, the time from transferring its data to the master node until receiving the data back from the master node. The mean and variance of this time are $5.63$ seconds and $0.75$ seconds, respectively. The low variance shows that our method balances the training time very well. - ***Q3: The authors enable consensus and sharing when every block reaches 100 iterations. Is it possible to allow the blocks to be at different iterations? It can be more efficient to avoid blocking. However, there is no discussion about the difference between synchronous and asynchronous methods.*** - **A3:** The consensus and sharing steps can be done asynchronously in implementation. However, by doing so, the convergence of ADMM is not guaranteed and the results can be sub-optimal.
To validate this, (1) we first transfer the local Gaussians to the master node and then optimize them without blocking; (2) we trigger the consensus and sharing step once the master node receives local Gaussians from all blocks; (3) the local Gaussians are regularized with the newly updated global Gaussians once the slave nodes receive the data. As a result, the PSNR, SSIM, and LPIPS on the Building scene are respectively $18.73$, $0.441$, and $0.673$, which dropped significantly. We argue that **the data transfer time of our method can be kept constant** since we can always control the number of local Gaussians to a constant number (e.g., $\le 6{,}000{,}000$ 3D Gaussian primitives) with enough GPUs, no matter how large the scene is, since the data transfer between different slave nodes and the master node is executed in a distributed manner instead of sequentially. - ***Q4: memory consumption of master node*** - **A4:** See **A1** in the common questions. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I will keep my rating. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: Thanks for your reply and support of our work. We are very grateful for your insightful suggestions and will revise our paper accordingly.
Summary: The paper presents DoGaussian, a novel method for accelerating the training of 3D Gaussian Splatting models for large-scale 3D scene reconstruction. It introduces a distributed training approach using scene decomposition and the ADMM, resulting in a 6x faster training time while maintaining high-quality rendering. The method involves maintaining a global 3DGS model on a master node and local models on slave nodes, with a consensus mechanism ensuring consistency across models. Experiments on standard large-scale datasets confirm the effectiveness of DoGaussian. Strengths: 1. The use of the ADMM ensures that the global and local models reach a consensus on the shared 3D Gaussians. This guarantees convergence and stability during the training phase, which is crucial for maintaining the integrity of the model. 2. Despite the focus on efficiency, the paper does not compromise on the quality of the output. It demonstrates state-of-the-art rendering quality, which is a testament to the robustness of the proposed method. 3. By introducing a distributed training methodology for 3DGS, the paper significantly accelerates the training process on large-scale scenes. This is achieved by decomposing scenes into manageable blocks, allowing for parallel processing on multiple nodes. Weaknesses: 1. The qualitative comparisons are not that convincing, especially compared with the original 3DGS and vastGaussian. The authors are encouraged to provide more results in the rebuttal. I will raise the score accordingly. 2. The training and inference speed of DoGaussian is slightly slower than vastGaussian. 3. More baselines should be included, such as [Fed3DGS](https://arxiv.org/pdf/2403.11460). Technical Quality: 3 Clarity: 3 Questions for Authors: Can you provide more qualitative results and comparisons? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - ***Q1: Provide more qualitative results in the rebuttal (compared with the original 3DGS and VastGaussian).*** - **A1:** We include more qualitative results in Fig.1 in the attached PDF. - ***Q2: More baselines should be included, such as Fed3DGS.*** - **A2:** We include the results of Fed3DGS in our updated table. We can observe that **the rendering quality of Fed3DGS is far below 3D-GS-based methods**. Note that though Fed3DGS also maintains a global model in a central server, it requires optimizing the opacities and appearance of the global model with all local models, which is computationally inefficient. Moreover, unlike our method, which adopts ADMM to ensure training convergence, the global model optimization in Fed3DGS is only used to prune Gaussians to prevent the monotonically increasing number of global Gaussians, an intuitive design without a theoretical convergence guarantee.

| **Scenes** | **Building** | **Building** | **Building** | **Rubble** | **Rubble** | **Rubble** | **Campus** | **Campus** | **Campus** | **Residence** | **Residence** | **Residence** | **Sci-Art** | **Sci-Art** | **Sci-Art** |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ |
| Mega-NeRF | 20.92 | 0.547 | 0.454 | 24.06 | 0.553 | 0.508 | 23.42 | 0.537 | 0.636 | 22.08 | 0.628 | 0.401 | 25.60 | 0.770 | 0.312 |
| Switch-NeRF | 21.54 | 0.579 | 0.397 | 24.31 | 0.562 | 0.478 | 23.62 | 0.541 | 0.616 | 22.57 | 0.654 | 0.352 | 26.51 | 0.795 | 0.271 |
| 3D-GS | 22.53 | 0.738 | 0.214 | 25.51 | 0.725 | 0.316 | 23.67 | 0.688 | 0.347 | 22.36 | 0.745 | 0.247 | 24.13 | 0.791 | 0.262 |
| Fed3DGS | 18.66 | 0.602 | 0.362 | 20.62 | 0.588 | 0.437 | 21.64 | 0.635 | 0.436 | 20.00 | 0.665 | 0.344 | 21.03 | 0.730 | 0.335 |
| VastGaussian$^{\dagger}$ | 21.80 | 0.728 | 0.225 | 25.20 | 0.742 | 0.264 | 23.82 | 0.695 | 0.329 | 21.01 | 0.699 | 0.261 | 22.64 | 0.761 | 0.261 |
| Hierarchy-GS | 21.52 | 0.723 | 0.297 | 24.64 | 0.755 | 0.284 | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| DoGaussian | 22.73 | 0.759 | 0.204 | 25.78 | 0.765 | 0.257 | 24.01 | 0.681 | 0.377 | 21.94 | 0.740 | 0.244 | 24.42 | 0.804 | 0.219 |

- ***Q3: More qualitative results*** - **A3:** See the attached file for more qualitative results. --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Thanks for your reply. I will maintain my original score. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: Thanks for your rating. We are very grateful for your insightful suggestions and support of our work. We will update our paper accordingly as suggested in our revised version.
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their detailed comments and suggestions. Some **common questions are answered below**, and **more qualitative results** are provided in the attached PDF. - ***Q1: How much GPU memory is needed for the master node? Will its memory consumption increase linearly as the total number of Gaussians increases?*** - **A1**: In our experiments, the peak GPU memory of the master node is below 32 GB for all the datasets. The memory consumption increases linearly with the total number of Gaussians. However, we emphasize that **the global Gaussians on the master node do not produce gradients**, because we just use the values of the local Gaussians in the consensus step to update the global Gaussians (we do not need backpropagation or an optimizer to update the global Gaussians). In implementation, we update the global Gaussians under torch's no-grad mode. We do not need to maintain a computational graph as we do when training the local Gaussians. Therefore, the memory consumption of maintaining the global Gaussians on the master node is much less than that of training the original 3DGS with the same number of Gaussians on a single GPU. - ***Q2: How to load the global Gaussians? Does the method work when a single GPU cannot load the global scene (main node)?*** - **A2:** During training, the global Gaussians are maintained on the GPU of the master node. During consensus and sharing, local Gaussians can be mapped to their corresponding global Gaussians by indexing. During inference, the global Gaussians are used normally, just as in the original 3D-GS. We emphasize that **our method can still be applied to such scenes when a single GPU cannot load the global scene**: ***(1)*** As explained in ***A1***, the memory requirement of the master node is not as large as training the same number of global Gaussians on a GPU.
***(2)*** In extremely large scenes where a single GPU cannot hold all global Gaussians (this is also an issue for all other large-scale 3DGS methods; for example, VastGaussian needs the final fused model to do inference), we can cache the global Gaussians in RAM and transfer them to GPU memory in a block-wise manner whenever the consensus and sharing steps are triggered. ***In brief***, this is an implementation issue that can be addressed as explained above, and we leave it as future work. - ***Q3: One potential reason other works do not use this strategy could be GPU memory limitations. All datasets used by the authors are processed by 3DGS on a single GPU, which does not truly qualify as 'large-scale.'*** - **A3:** The Mill19 and UrbanScene3D datasets are widely adopted in evaluating large-scale NeRF/3D-GS methods, and we follow previous methods (Mega-NeRF, Switch-NeRF, VastGaussian) to evaluate our method on these most commonly evaluated large-scale scenes. We emphasize that whether a scene qualifies as `large-scale` is determined not simply by whether it can be trained on a single GPU (***e.g.***, with an 80GB GPU, we can train 3DGS on any existing dataset), but also by how long it takes to train a NeRF/3D-GS model on such scenes. We can train any scene on a single GPU by controlling the number of model parameters, but it can take a very long time for the model to converge to render high-fidelity images. - ***Q4: In the experiments section, all experiments are conducted on aerial datasets, but reconstructing street data is also an important problem. Can the authors demonstrate DoGaussian's applicability using the San Francisco Mission Bay dataset from Block-NeRF or BlockA from the MatrixCity dataset?*** - **A4:** See the table below for the results on the BlockA scene of the MatrixCity dataset. VastGaussian failed on this dataset since two blocks produce no 3D Gaussian primitives due to the imbalanced splitting.
Neither 3D-GS nor our method produces satisfactory results with the default parameters used to train 3D-GS; this may be because the BlockA scene contains too many narrow views and needs careful fine-tuning of the training parameters. However, our method still produces better results than the original 3D-GS. An exhaustive evaluation of our method on street-view scenes is left as future work.

| | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | Time $\downarrow$ | Points | Mem $\downarrow$ | FPS $\uparrow$ |
|------------|-----------------|--------------|-----------------|--------------|--------|--------------|-------------|
| 3D-GS | 20.03 | 0.643 | 0.650 | 14:24 | 1.85 | 2.33 | 193.32 |
| VastGaussian$^{\dagger}$ | - | - | - | - | - | - | - |
| DoGaussian | 21.61 | 0.652 | 0.649 | 02:33 | 2.37 | 2.89 | 180.51 |

Pdf: /pdf/bef68e15e16276ef9d0624e05265a4db186765dd.pdf
NeurIPS_2024_submissions_huggingface
2024
ESPACE: Dimensionality Reduction of Activations for Model Compression
Accept (poster)
Summary: The proposed method applies a PCA-inspired technique to the activations of LLMs for model compression. Strengths: - While most papers focus on quantization, pruning, or weight decomposition, the proposed approach goes in an interesting direction. - The paper is easy to follow and the proposed method is simple. - The studied problem is relevant. - The limitations of the method are described. - The experiments contain multiple ablations to verify the effectiveness of the method. Weaknesses: - Compression is never well-defined. Is it the compression ratio w.r.t. W? What if you do not compress all weights of the model (line 112)? - A comparison with related baselines is missing. I understand your work is orthogonal to pruning and quantization methods, but it would make sense to compare with methods that use weight decomposition, such as ASVD or the ones you mention in the paragraph starting on Line 41. - The practical benefit of the method could be better emphasized. - The quality of the figures can be improved, e.g., the text is grainy and Fig. 2 is too small to read. - Line 192, i==j -> i=j Technical Quality: 3 Clarity: 3 Questions for Authors: - The paper could be made much stronger if you could show some practical benefits in terms of, e.g., inference time, VRAM requirements, or storage requirements. Right now you show compression ratios, but I believe more concrete numbers would make the method much more appealing. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback provided, and for recommending acceptance of our paper. We also appreciate the reviewer underscoring the novelty, clarity, relevance, technicality, and substantiveness of our work. Here we provide answers to the concerns and questions raised: * Response to Weakness #1 on the definition of compression ratio. * * We apologize for the confusion. We had indeed defined compression ratio in the paragraph starting on line 276, but did not provide an explicit formula. The sizes of matrices $\mathbf{W}$, $\mathbf{P}$, and $\mathbf{P}^T\mathbf{W}$ were explicitly stated to be $K\times N$, $L\times K$, and $L\times N$, respectively, at lines 161-166. We should have repeated those when introducing compression ratio at line 276 for better clarity; this will be rectified in the revision. We also confirm that this compression refers to model size reduction and therefore does not consider layers involving cross-multiplications of activations (line 112). On lines 284-286, we had mentioned that it was beyond the scope of our work to evaluate the impact of ESPACE when taking into account non-GEMM layers. However, we have supplemented our results to provide preliminary measurements of those; indeed, in the response below, we have added time-to-first-token measurements. * Response to Weakness #2 on a lack of comparison to SOTA tensor decomposition methods. * * We agree with the reviewer that a comparison with other SOTA methods is required to establish a benchmark of ESPACE’s effectiveness. In our submission, we did include this comparison in Figure 4 and Section 4.4 (Page 9). Indeed, we compared our Llama2-7b results to those of ASVD, SVD-Lora, and SliceGPT; all contemporary tensor decomposition compression works that have reported perplexity vs compression results for that model. This comparison showed that ESPACE noticeably advances the SOTA.
Indeed, while lossless compression cannot be achieved beyond 5% compression (achieved by ASVD) for the other methods, ESPACE maintains accuracy at 20% compression. Similarly, the 0.5 perplexity increase at 50% compression with ESPACE matches SliceGPT at 10% compression. Therefore, ESPACE generally advances tensor decomposition compressibility by a factor of 4-to-5x when matching other methods’ accuracy. It seems that this part of our discussion was missed in your original review. We apologize for not highlighting these comparisons more in our manuscript; this will be rectified in the revision. * Response to Weaknesses #4 and #5. Thanks! The figures will be made bigger, and the equal sign will be fixed. * Response to Weakness #3 and Question #1 on quantifying practical benefits. * * The reviewer is absolutely right about the importance of highlighting practical benefits. We first mention that in the submitted paper we had included measurements of the speed-up in the matrix multiplication layers. These are included in Tables 1 and 2, and we kindly refer the reviewer to them in case they were missed in the initial review. The findings were that with ~50% compression, ESPACE reduces the latency of GEMM layers by 35%-to-45%. In this rebuttal, per the reviewer’s suggestion, we have supplemented these results and measured the time-to-first-token (TTFT) for all models. The experimental setup is similar to that of the GEMM latency measurements in Section 4, where we use NVIDIA A100 GPUs, a batch size of 1, and no tensor parallelism.
These new results are included below (these results are also included as Table C in the attachment to the common rebuttal):

|Model (Compression) |Total GEMM Latency (From submission) |TTFT (New result) |
|-|-|-|
|Baseline GPT3-1.3B |24.2ms |39.8ms |
|ESPACE GPT3-1.3B (20%) |20.6ms (-15%) |36.1ms (-9%) |
|ESPACE GPT3-1.3B (47%) |15.9ms (-34%) |31.7ms (-20%)|
|Baseline GPT3-8B |136ms |186ms |
|ESPACE GPT3-8B (21%) |110ms (-19%) |155ms (-16%) |
|ESPACE GPT3-8B (50%) |76.8ms (-44%) |122ms (-35%) |
|Baseline GPT3-22B |354ms |457ms |
|ESPACE GPT3-22B (40%) |229ms (-35%) |313ms (-31%) |
|ESPACE GPT3-22B (55%) |181ms (-49%) |261ms (-32%) |
|Baseline Llama2-7B |210ms |368ms |
|ESPACE Llama2-7B (21%) |169ms (-19%) |322ms (-12%) |
|ESPACE Llama2-7B (50%) |113ms (-46%) |266ms (-28%) |
|Baseline Llama2-13B |406ms |643ms |
|ESPACE Llama2-13B (20%)|336ms (-17%) |561ms (-13%) |
|ESPACE Llama2-13B (50%)|259ms (-36%) |447ms (-31%) |

* * As we can see, the speed-up in GEMM latency does lead to TTFT reduction. However, since some time is spent in non-GEMM layers (e.g., cross-multiplication of activations in attention), the relative speed-up compared to the baseline is slightly lower for ESPACE. The improvement in TTFT is still significant, and 50% compression with ESPACE leads to a 20%-to-35% TTFT reduction. This shows that ESPACE practically improves inference time. In addition, we would like to note that storage requirements are also encapsulated by the previously reported model size reduction achieved via compression. In terms of VRAM requirements, since smaller matrices are loaded to memory, we expect some VRAM requirement reduction. However, a thorough architectural study is needed to accurately quantify what benefits ESPACE can lead to. We defer this for future work, but we will include a summary of all of the above, as well as the new TTFT results, in the revision. We thank the reviewer, as the above significantly strengthens our work.
Once again, we thank the reviewer for the encouraging comments and productive feedback! We hope our responses above were satisfactory to the reviewer. It would be much appreciated if the reviewer would consider increasing their score. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal, it answers my questions. It is indeed nice that the proposed method can reduce GEMM time. I will keep my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: We thank the reviewer for reading our rebuttal, acknowledging our answers, and sharing further positive comments on our results. We kindly ask the reviewer to consider increasing the score in order to reflect their post-rebuttal opinion of our work.
Summary: The paper introduces ESPACE (Eigen Static Principal Activation Component Estimation), a technique for compressing large language models (LLMs) by focusing on the dimensionality reduction of activation tensors rather than the traditional weight-centric tensor decomposition. The method involves projecting activation tensors onto a pre-calibrated set of principal components, which reduces the dimensionality of activations and results in weight compression during inference through matrix multiplication associativity. Strengths: The paper introduces ESPACE for compressing LLMs through activation-centric dimensionality reduction. This differs from traditional weight-centric tensor decomposition techniques, which adds novelty to the research. The authors provide a solid theoretical foundation for their approach, including the derivation of optimal constructions for projection matrices to minimize mean squared error and forward propagated noise metrics. ESPACE can achieve up to 50% compression of models with a small increase in perplexity and a practical reduction in inference (GEMM) latency. Weaknesses: To me ESPACE is simply PCA projection (with a reinvented name) of activation tensor. Can the authors defend a bit? The paper suggests that ESPACE can be used in conjunction with other compression techniques like pruning and quantization, as well as non-compressive acceleration methods such as speculative decoding. However, it does not provide empirical results for these combinations, which makes the claim unconvincing. Although the authors claim that ESPACE is orthogonal to other compression techniques, it would strengthen the paper to include comparisons with more baselines, such as pruning, to better demonstrate the significance of the proposed method. E.g., when compressed respectively & individually with ESPACE, pruning and quantization to the same model size, how do their perplexities compare to each other? 
You can demonstrate that on small models like llama-58M or GPT2-97MB. The necessity for a calibration phase that involves forward-passing multiple batches of data to estimate activation auto-correlation matrices might not scale well with larger datasets or more complex models, potentially limiting the method’s applicability in diverse settings. Technical Quality: 3 Clarity: 3 Questions for Authors: How does the calibration set impact the final performance? Can you provide a sensitivity profile vs. layers in the Transformer model? I.e., how would the local ranks of projection in individual layers affect the global performance? (With the belief that deep compression in early layers can hamper the output even if higher ranks are used for later layers.) A portfolio of layers’ importance would be crucial for the compression strategy. How does the projection influence the well-known outlier issue in activations, and sometimes in the key values, which is observed in lots of LLM works? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The calibration set plays a crucial role in the quality of the principal components for projection; this needs to be analysed deeply to warrant good results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
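To make the calibration phase the review refers to concrete: a simplified numpy sketch of estimating a static projection from the eigendecomposition of an accumulated activation autocorrelation matrix. This is only the plain PCA-style baseline the review mentions; the function name `calibrate_projection` and all sizes are illustrative, and, as the rebuttal below explains, the paper's actual projections are derived from different, propagated-error criteria.

```python
import numpy as np

def calibrate_projection(activation_batches, L):
    """Estimate a static projection from calibration activations.

    Accumulates the activation autocorrelation E[x x^T] across batches,
    then keeps the top-L eigenvectors as a fixed (static) projection.
    Simplified sketch: the paper's constructions minimize propagated-error
    metrics rather than plain reconstruction error.
    """
    R = None
    count = 0
    for X in activation_batches:          # each X: (num_tokens, K)
        R = X.T @ X if R is None else R + X.T @ X
        count += X.shape[0]
    R = R / count                         # autocorrelation estimate, K x K
    eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
    return eigvecs[:, -L:]                # K x L, top-L principal directions

# Toy calibration run with random "activations" standing in for real ones.
rng = np.random.default_rng(1)
batches = [rng.standard_normal((256, 64)) for _ in range(8)]
P = calibrate_projection(batches, L=16)
print(P.shape)  # (64, 16)
```

Because the projection is computed once offline, inference needs no online feature extraction, which is the "static" property the rebuttal contrasts with applying PCA on the fly.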
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback provided, and for recommending acceptance of our paper. We also appreciate the positive comments highlighting the novelty of our approach, its solid theoretical foundation, and the promising empirical results. Here we provide answers to the concerns and questions raised: * Response to Weakness #1 on the similarity to PCA. * * We are happy to defend: we were indeed inspired by this classic algorithm and in fact did provide a comparison to PCA (lines 200-204). PCA’s goal is to extract low-dimensional features having maximum correlation with the input data. In contrast, ESPACE is derived such that the compression in eq. (3) is achieved while minimizing various accuracy metrics, which were theoretically derived in Section 3. Crucially, ESPACE leverages ergodicity of the activation autocorrelation (see Section 3.1) to produce static projections enabling compression as described in eq. (3) and Figure 2. This would not be feasible using PCA, as extracting low-dimensional features of activations on the fly would require an online algorithm, which is expensive. In the revised manuscript we will further clarify these differences between ESPACE and PCA. We thank the reviewer for bringing this up. * Response to Weaknesses #2 and #3 on comparisons between ESPACE and other compression techniques such as quantization or pruning. * * We have mentioned that ESPACE can be applied with other techniques because its implementation is orthogonal to theirs. For instance, ESPACE can be implemented using a quantized or sparse number format. However, while that is the case, we did mention that compression fundamentally makes models more susceptible to noise and that a detailed study of combining ESPACE with other techniques was left for future work (Section 1.1; lines 33-35). This claim of orthogonality will be toned down in our updated manuscript. With that said, we appreciate the suggestion to include studies with other compression techniques.
As the reviewer appreciates, it takes time and effort to generate such results. But for this rebuttal, we have studied the following: ESPACE and FP8 quantization. While sub-8-bit quantization has been explored, we chose FP8 quantization because it is popular among practitioners thanks to its versatility: it usually does not require QAT and is available in NVIDIA Hopper GPUs. Let us compare the following: baseline BF16, ESPACE BF16, baseline FP8, and ESPACE FP8. We use Hopper FP8 (E4M3) and employ per-tensor dynamic max scaling. Both weights and activations are quantized to FP8, and for ESPACE, the additional tensors in eq. (3) are quantized to FP8, i.e., the projected activation, projection matrix, and precomputed weight-projection product. For brevity, we employ GPT3-8B and Llama2-7B as representative models. Complete results with all other models studied will be included in the revision. We evaluate Wikitext test perplexity; our results are as follows (these results are also included as Table B in the attachment to the common rebuttal):

|Model (Compression) |BF16 |FP8 |
|-|-|-|
|Baseline GPT3-8B |7.38 |7.41|
|ESPACE GPT3-8B (21%) |7.00 |7.22|
|ESPACE GPT3-8B (50%) |7.66 |7.85|
|Baseline Llama2-7B |5.06 |5.08|
|ESPACE Llama2-7B (21%) |5.07 |5.13|
|ESPACE Llama2-7B (50%) |5.67 |5.80|

* * It turns out that with ESPACE, susceptibility to quantization noise is slightly worse: perplexity increases by ~0.1-0.2. This is reasonable since a compressed model is expected to be less robust to quantization noise (our prediction from lines 33-35). We thank the reviewer for the suggestion to add such results, which makes our work stronger. We concede that ESPACE does make quantization a bit harder, although we emphasize that the above are simple PTQ tests which could be improved with further optimizations such as SmoothQuant, GPTQ, or even QAT. * Response to Weakness #4, Question #2, and Limitation #1 on the impact of calibration set choice.
* * We have found that the performance was not sensitive to the choice of calibration set. Specifically, we use 512 random sequences for calibration but found that consistent results were obtained when using smaller sets, e.g., 32 sequences (experiments on varying calibration set size were done for the GPT3-1.3B model). Thus, we believe calibration to be robust to the choice of calibration dataset. * Response to Question #2 on layer-wise sensitivity profiles. * * Layer-wise sensitivities to projections are included in detail in Appendix C, where the exact question is being answered for each layer of each model studied. The portfolio of layers’ importance was indeed utilized when selecting the compression configuration setting (see Section 4). To meet the page limit, we elected to keep the sensitivity study in Appendix C but did refer to it in the main text in Section 4.2. We ask the reviewer to check that these parts of the submission did in fact provide answers to the reviewer’s question. * Response to Question #3 on the outlier issue. * * Since ESPACE is an algebraic compression technique, as opposed to numerical ones such as quantization and pruning, we have not had any issue with the presence of outliers. Also recall that our experiments use BF16 precision which could be shielding ESPACE from the outlier issue. Nevertheless, we are encouraged that ESPACE has circumvented the outlier issue, but we will need further studies to make definite claims, particularly when using low-precision quantization. So, this is a good direction for future work which blends naturally with our plans. Thank you! Once again, we thank the reviewer for the encouraging comments and productive feedback! We hope our responses above were satisfactory to the reviewer. It would be much appreciated if the reviewer would consider increasing their score. --- Rebuttal Comment 1.1: Title: Thanks for your reply! Comment: I am happy with the authors' reply and the additional experimental results. 
I tend to keep my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are grateful for the reviewer's acknowledgment of our rebuttal and positive opinion towards our replies and additional experimental results. We kindly ask the reviewer to consider increasing the score in order to reflect their post-rebuttal opinion of our work.
Summary: The paper introduces a novel technique for compressing large language models (LLMs) by reducing the dimensionality of activation tensors. The ESPACE method differs from traditional weight-centric compression approaches by focusing on activation tensors instead. ESPACE projects these activations onto a pre-calibrated set of principal components, preserving expressivity during retraining while enabling weight compression at inference through associative matrix multiplication. Strengths: 1. The paper introduces a novel approach to compressing large language models by focusing on the dimensionality reduction of activation tensors rather than the weights themselves. This method, termed ESPACE (Eigen Static Principal Activation Component Estimation), represents a creative combination of principles from tensor decomposition and matrix multiplication associativity. 2. This paper is highly technical and presents a robust foundation for constructing optimal projection matrices that minimize the mean squared error and forward-propagated noise metrics. 3. The paper is well-written and organized logically. The authors effectively communicate complex ideas, making the novel approach accessible to readers. Weaknesses: 1. More direct comparisons with other state-of-the-art activation-based and tensor decomposition methods could provide a clearer benchmark of ESPACE's effectiveness. 2. The paper could benefit from a more thorough discussion of the trade-offs in selecting compression settings and projection matrices for different application scenarios. 3. Including a robust error analysis would help identify potential weaknesses or failure modes of ESPACE, enhancing understanding of its limitations. 4. The study focuses primarily on GPT3 and Llama models, potentially limiting the generalizability of the results to other architectures. 5. 
The paper does not extensively explore ESPACE's performance across various inference tasks that differ in length or complexity, which could provide a deeper understanding of its practical implications. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does ESPACE perform on different types of inference workloads, particularly those requiring higher precision or longer inference times? Additional details on these scenarios would help assess the method’s practical applicability in diverse real-world settings. 2. Could you elaborate on the decision-making process for selecting specific compression settings and projection matrices? A detailed discussion on how to balance compression rate and model performance in various contexts would be highly valuable for practitioners. 3. Have you conducted any tests to evaluate ESPACE’s robustness under unusual conditions or edge cases? Sharing any findings or future plans for such analyses would provide a clearer understanding of the method’s limitations and potential areas for improvement. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. Include evaluations or discuss plans for testing ESPACE on various model architectures to better understand its generalizability. 2. Discuss potential challenges and solutions for deploying ESPACE in real-world systems, focusing on real-time performance and integration with existing workflows. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback provided, and for recommending acceptance of our paper. We also appreciate the positive comments provided with respect to the novelty, technicality, and clarity of our work. Here we provide answers to the concerns and questions raised: * Response to Weakness #1 on a lack of comparison to SOTA tensor decomposition methods. * * We agree with the reviewer that a comparison with other SOTA methods is required to establish a benchmark of ESPACE’s effectiveness. In our submission, we did include this comparison in Figure 4 and Section 4.4 (page 9). Indeed, we compared our Llama2-7b results to those of ASVD, SVD-Lora, and SliceGPT, all contemporary tensor decomposition compression works that have reported perplexity vs. compression results for that model. This comparison showed that ESPACE noticeably advances the SOTA. Indeed, while lossless compression cannot be achieved beyond 5% compression (achieved by ASVD) for the other methods, ESPACE maintains accuracy at 20% compression. Similarly, the 0.5 perplexity increase at 50% compression with ESPACE matches SliceGPT at 10% compression. Therefore, ESPACE generally advances tensor decomposition compressibility by a factor of 4-to-5x when matching other methods’ accuracy. It seems that this part of our discussion was missed in your original review. We apologize for not highlighting these comparisons more in our manuscript; this will be rectified in the revision. * Response to Weaknesses #2 and #3 on application-specific trade-offs and robustness analyses. * * We appreciate the feedback on discussing application-specific compression and robustness analyses. These were beyond the scope of our work, but we will add discussions of these topics in our revised manuscript and specify directions for future work in these scopes. * Response to Weakness #4 on the use of GPT3 and Llama2 and the impact on generalizability.
* * The reviewer is correct in highlighting the importance of generalizability of results. GPT3 and Llama are contemporary, open-source, and popular LLM architectures used broadly in various applications. This is why we elected to cover them in our paper. Furthermore, by virtue of having several model instances in these families, we did experiment on five models overall (GPT3-1.3B, GPT3-8B, GPT3-22B, Llama2-7B, Llama2-13B). Thus, our experiments do cover a wide range of model sizes. Nevertheless, we agree that it is desirable to cover more model architectures, and this is absolutely in our plan for future work, which we plan to discuss in the revision. * Response to Weakness #5 on the variety of tasks studied. * * We have evaluated ESPACE on a wide range of inference tasks, including several LM evaluation harness benchmarks (see Tables 1 and 2), as well as the complex MMLU benchmark (which we added as part of this rebuttal in response to Reviewer yYTB above). But the reviewer is correct that we have not specifically studied the issue of varying context length. This is a good direction for future work, which will be mentioned in our revision. Thank you! * Response to Question #1 on the impact of high precision and long inference times. * * This is a great question! With regards to precision, and as mentioned in our paper, we have used BF16 throughout our experiments and never had any numerical issues. Having said that, because our compression technique is algebraic rather than numeric (unlike, e.g., works doing quantization and/or pruning), we do not anticipate issues should higher precision be required. With regards to workloads requiring long inference times, such scenarios are specifically instances where ESPACE’s benefits can be leveraged. Indeed, as shown in our paper, ESPACE leads to a 35-to-45% reduction in GEMM latency. Thus, if GEMM latency is reduced, inference can run faster.
In fact, we have recently generated end-to-end inference latency measurements to back this up. Please see our response to Reviewer 1FAL on measured time-to-first-token: ESPACE helps significantly speed up inference. * Response to Question #2 on the decision-making process for selecting specific compression settings and projection matrices. * * We have used validation studies as described in Section 4.2 and elaborated on in depth in Appendix B. For the interested practitioner, we also summarize our strategy for these selections here. Using a validation set (which is mutually exclusive with the training and test sets), we perform layer-wise studies where we analyze various compression settings and projection matrix choices at each layer independently. We rank the accuracy profile of the various candidate settings for each layer, and subsequently select the best choice to be further experimented on. Specifically, these become the compressed model configurations used for ESPACE in the experimental Sections 4.3 and 4.4 of our paper. * Response to Question #3 on tests to evaluate ESPACE’s robustness under unusual conditions or edge cases. * * We have not conducted such studies. We kindly ask the reviewer to provide examples of what qualifies as unusual conditions and edge cases. We will study those as future work and include a relevant discussion in the revision. Once again, we thank the reviewer for the encouraging comments and productive feedback! We hope our responses above were satisfactory to the reviewer. It would be much appreciated if the reviewer would consider increasing their score.
Summary: This paper introduces ESPACE, a method that reduces activation dimensionality in models via tensor decomposition, thereby aiding in the reduction of model size and GEMM latency. Strengths: - The paper is easy to follow; - The method is simple but efficient. Weaknesses: - Wikitext-103 is a specific type of language dataset that focuses on knowledge-based content. However, models like GPT-3 and Llama2 are capable of handling a broader range of tasks, including math and question-answering. Therefore, it is necessary to evaluate perplexity (PPL) on a more diverse set of data. - ESPACE requires ongoing training. It is important to understand the duration required for ESPACE's continual training process. - In Table 1 and Table 2, since ESPACE is continually trained with more data than the original model, a direct comparison may not be entirely fair. It would be more convincing to compare against the original model after it has been trained on the same amount of data as ESPACE. - For generative models such as GPT-3 and Llama2 discussed in this paper, there is a need for more comprehensive performance validation, including metrics like MMLU and math benchmarks. Technical Quality: 1 Clarity: 2 Questions for Authors: Please see the weaknesses. Confidence: 4 Soundness: 1 Presentation: 2 Contribution: 3 Limitations: The authors address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback provided. We provide detailed answers to the reviewer below: * Response to weaknesses #1 and #4 on Wikitext being an insufficient benchmark for empirical results and the need for a more comprehensive set of validation metrics. * * The reviewer is correct in that Wikitext is a language dataset focusing on knowledge-based content. As such, Wikitext perplexity is not the only metric that should be reported for proper evaluation of LLMs. In our submission, we had included evaluations of the models on the diverse LM evaluation harness suite of tasks (BoolQ, Hellaswag, Piqa, Race, Winogrande – see Tables 1 and 2 in Section 4). Of these question-answering tasks, BoolQ and Race are knowledge-based, while Hellaswag, Piqa, and Winogrande comprise common-sense and logical reasoning. Our results (in the submission) for these tasks showed that the accuracy-preserving strength of ESPACE is not limited to Wikitext perplexity, since the scores obtained on these tasks were very close to those of the baseline. In case the reviewer missed the results on downstream task accuracy in the original review, we kindly ask them to check again. * * We have further expanded these results by adding an MMLU evaluation as per the reviewer’s recommendation. The following table (also included as Table A in the common rebuttal attachment) summarizes these results on the MMLU benchmark (which itself comprises a broad range of tasks including mathematical and logical reasoning in addition to human knowledge). Here we use the same models, compression settings, and checkpoints as those discussed in our submission.
MMLU inference is done using the zero-shot approach:

|Model (Compression) |MMLU score|
|-|-|
|Baseline GPT3-1.3B | 25.5%|
|ESPACE GPT3-1.3B (20%) | 25.3%|
|ESPACE GPT3-1.3B (47%) | 25.1%|
|Baseline GPT3-8B | 26.3%|
|ESPACE GPT3-8B (21%) | 31.5%|
|ESPACE GPT3-8B (50%) | 27.4%|
|Baseline GPT3-22B | 36.3%|
|ESPACE GPT3-22B (40%) | 42.2%|
|ESPACE GPT3-22B (55%) | 39.8%|
|Baseline Llama2-7B | 42.2%|
|ESPACE Llama2-7B (21%) | 39.6%|
|ESPACE Llama2-7B (50%) | 32.7%|
|Baseline Llama2-13B | 52.9%|
|ESPACE Llama2-13B (20%)| 49.4%|
|ESPACE Llama2-13B (50%)| 39.4%|

* * As we can see, the results are generally consistent with our earlier findings from our experiments on Wikitext and the set of downstream tasks from the LM evaluation harness that were included in the original submission. We note that MMLU is a complex benchmark, such that the GPT3-1.3B model is too weak to produce meaningful results beyond a random guess. For larger GPT3 models, we once again find that moderate compression using ESPACE leads to improvements over the baseline, specifically for GPT3-8B with 21% compression and GPT3-22B with 40% compression. The improvements in MMLU scores are significant, and even higher than for the benchmarks covered in our submission. Indeed, ESPACE leads to a ~5% absolute increase in MMLU score, which corresponds to a 16%-to-19% relative improvement. On the other hand, 50% compression with ESPACE leads to an accuracy comparable to the baseline, which is consistent with our findings in our original submission. Our results for Llama2 models are not as strong, and we attribute this to the handicaps of our training sessions that were discussed in Section 4.4. Specifically, our experimental setup did not include the pre-training dataset and hyperparameter selection used by Meta to produce accurate Llama2 models. As such, while ESPACE does retain some MMLU accuracy, it does not perform as strongly for Llama2 as it does for GPT3.
We also note that the MMLU benchmark is known to be challenging for base (unaligned) models (which we use throughout our studies). Overall, we are glad to include these additional results in our paper as we believe, in agreement with the reviewer, that they improve our work. Thank you for the suggestion! * Response to weakness #2 on the duration of the continual training process. * * In the original submission, we discussed training duration in Section 4.3 (330B tokens for GPT3) and Section 4.4 (200B tokens for Llama2). The reviewer may have missed this discussion in the original review. We ask the reviewer to check again. * Response to weakness #3 on a direct comparison between the compressed model and the baseline not being fair due to the consumption of additional data in the continual training process. * * The reviewer makes an excellent point! Continued training consumes more tokens overall compared to the pre-trained baseline; it is therefore important to understand the impact of the continuous training session and dissociate it from the adaptation to compression. We would like to point out that this is exactly why we chose to retrain the Llama2-7b baseline using the same tokens and training session as those used for the ESPACE models. Indeed, see Table 2 (second row), where we included results for a retrained Llama2-7B model and the motivation for it at line 343, which is specifically to address the concern raised by the reviewer. This retrained baseline now matches the amount of training data used for ESPACE, while its accuracy closely matches the original baseline. Therefore, this result allows us to compare ESPACE models to the baselines, either the original or retrained ones (since they have matching accuracy). We kindly ask the reviewer to check this again. In conclusion, we first acknowledge that the reviewer has made very good points. At the same time, the concerns raised by the reviewer were in fact already addressed in our original submission.
In our revision, we will highlight the above points to be more explicit. Nevertheless, we hope the reviewer will consider increasing their score. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The MMLU results of Llama-2 still show a significant performance reduction. I suggest using SFT data (like Alpaca) to improve the results, to validate the possibility of ESPACE improving performance. As most of the weaknesses are addressed, I am willing to raise my score. However, I expect the authors to show better empirical results in the next version. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We sincerely thank the reviewer for acknowledging our rebuttal and updating their review to recommend acceptance. We also agree with the reviewer on the need to further improve the Llama2 results and are grateful for the suggestion to use SFT data such as Alpaca. We promise to work on this and show improved results in the revision.
Rebuttal 1: Rebuttal: Dear reviewers, We would like to thank you for the useful feedback on our work. We have addressed every concern and question raised in the individual responses. In this common response we would like to emphasize a few points. First, it seems that some parts of the paper were missed by some reviewers. We wanted to emphasize the presence of the following information in our original submission: * Evaluations on a diverse set of downstream tasks from the LM evaluation harness (BoolQ, Hellaswag, Piqa, Race, Winogrande – see Tables 1 and 2). In fact, Reviewer 1FAL explicitly states our “multiple ablations” as a strength of our work. However, Reviewer yYTB appeared to have missed those and mentioned we only evaluated our method on the Wikitext dataset. * Information about training time can be found in Section 4 of our submission. Reviewer yYTB inquired about that. * Results with a retrained Llama2 baseline (see Section 4.4 and Table 2) were included to address the issue of comparing a continuously trained model to a baseline having consumed the same amount of training data. Reviewer yYTB raised this important point but appears to have missed our inclusion of this result in our submission. * A comparison to SOTA tensor decomposition works is present in Section 4.4 and Figure 4. This was asked about by Reviewers maTu and 1FAL. * Studies on layer-wise sensitivities are discussed in depth in Appendix C and referenced in Section 4.2. This was asked about by Reviewer ndkj. * A study on inference time, proxied by GEMM latency measurements, is included in Section 4 and Tables 1 and 2. This was asked about by Reviewer 1FAL. More detailed responses to the above are provided in the individual rebuttals. We want to emphasize this in case any reviewer had provided lower scores due to missing any of the above information while reading our submission. We hope that the above clarifications can rectify that.
In addition, in the individual rebuttals, we provide answers to all other concerns and questions raised by the reviewers. We also wish to point out that, as part of the rebuttal, and thanks to some excellent points made by the reviewers, we have added three sets of results as follows (these results are included in the attached extra document and as inline tables in the individual responses to make the reading of rebuttals smoother): * Additional results evaluating all models on the MMLU benchmark. For more details, please refer to the rebuttal to Reviewer yYTB and Table A in the attached document. * Additional results combining ESPACE with FP8 quantization. For brevity, in the rebuttal we only included such results on GPT3-8B and Llama2-7B as representative of all models studied. For more details, please refer to the response to Reviewer ndkj and Table B in the attached document. * Additional latency measurements on the time-to-first-token for all models to supplement the GEMM latency measurements that were present in the original submission. For more details, please refer to the rebuttal to Reviewer 1FAL and Table C in the attached document. The above set of new results significantly strengthens our work and will be included in the revision. We thank the reviewers again for the feedback. Pdf: /pdf/90e457b6e34fdf5b5f647c25f2a57ba12b84ed8a.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization
Accept (poster)
Summary: The paper introduced a new parameter-free algorithm, based on a forward-backward splitting technique and a variable metric, for decentralized learning problems with convex, locally smooth functions. A convergence guarantee with a favorable rate and its analysis are provided. Strengths: 1. The paper proposed the first parameter-free decentralized training algorithm integrating line search and splitting techniques. 2. Convergence guarantees and analysis under milder conditions than previous studies are provided. 3. The design guidelines seem to be of independent interest and worth further investigation. Weaknesses: 1. The proposed algorithm is complicated, and it seems that in each iteration a lot of computation needs to be carried out. How does the algorithm compare to the previous ones in terms of computational complexity? 2. The proposed algorithm needs more experimental evaluation. 3. In certain parts, the paper is a bit hard to follow. Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Referee for reviewing our work and the positive assessment of the novelty of the paper. Our reply to her/his comments/questions follows. 1. **"The proposed algorithm is complicated":** The proposed algorithm has communication cost (steps S.1 and S.2) and computational complexity (gradient evaluation) comparable to all existing well-known decentralized algorithms that converge to an *exact* solution of the optimization problem, such as EXTRA, NIDS, and Gradient Tracking, *except for the line-search procedure*, which results in some extra computation in the evaluation of the function value. This extra computation is what allows one to achieve adaptivity of the algorithm, that is, convergence with **no knowledge** of any optimization or network parameter. On the contrary, **all** existing decentralized algorithms require **full knowledge** of the optimization and network parameters to converge. In practice, this information is not available at the agents' side and needs to be acquired if one wants to implement these schemes. This calls for some nontrivial procedure that produces reasonable estimates of these optimization and network parameters, which results in additional computation and communication costs. On our side, the backtracking procedure is the only extra (reasonable) computational cost one needs to pay to obtain, for the first time, a decentralized algorithm that does not require any centralized knowledge and is implementable in practice without any additional procedure. In the future, it will be interesting to replace the line search with other adaptive procedures that require fewer function evaluations. 2. **"The proposed algorithm needs more experimental evaluation"**: We thank the Referee for her/his comment. 
We will expand the numerical evaluation along the following directions: i) logistic and ridge regression problems on other network topologies; ii) a new comparison between the proposed method and EXTRA and NIDS where for the latter we hand-tune the stepsize for the best practical performance (as requested by other Referees); and iii) simulation (and comparison with the aforementioned schemes) of a newly added adaptive method wherein the global min-consensus is replaced with the local one (to address some concerns of Referee H817). We added in the general rebuttal section a pdf with some of the above results. We kindly refer the Referee to that section for more details on the experiments and the figures. We hope that the additional experiments will address the Referee's request. We are open to suggestions if the Referee has something else in mind for the experimental evaluation. 3. **"In certain parts, the paper is a bit hard to follow":** We will be happy to improve the presentation and provide all the necessary clarifications if the Referee can be more specific about which parts are "hard to follow". We would appreciate it if the Referee could point us to the parts that are not clear to her/him. 4. We would like to stress one more time that this is the first attempt to bring adaptivity to the decentralized setting, providing a systematic approach and algorithms provably achieving linear convergence. The main challenge in bringing adaptivity to the *decentralized* setting is to identify in (existing) distributed algorithms a (descent) 'direction' (and local merit function) along which to perform the adaptive step-size selection. The tuning of the stepsize in decentralized algorithms must depend on network properties, as stepsize values similar to those in centralized settings generally lead to divergence. Thus, any candidate direction should contain information on the optimization *and* network. 
There is no understanding of how the network should influence the direction and how to encode in the candidate direction optimization and network information. We offer the first principle-based procedure to resolve these issues, addressing the major challenge that has prevented the migration of centralized adaptive stepsize techniques to the decentralized setting. Hopefully this will trigger new work in this direction. Last but not least, our algorithm is provably convergent even when applied to functions that are **only locally** smooth (and locally strongly convex), which is not the case for the majority of existing decentralized algorithms, e.g., EXTRA and NIDS. This significantly enlarges the class of problems to which our scheme can be applied. We would appreciate an assessment of the paper from the Referee based on this important contribution. Thanks again for the feedback on the paper. Please let us know if our answer and additional posted material have satisfactorily addressed all the Referee's comments/requests. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thanks for the response! I have read the rebuttal and decided to maintain my score.
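The backtracking procedure that the rebuttal describes as the only extra computational cost can be sketched in a few lines. The following is a minimal, illustrative sketch (an Armijo-type sufficient-decrease test on a plain gradient direction; the names and the specific test are our assumptions, not the paper's exact merit function or direction):

```python
def backtracking_stepsize(f, grad_f, x, alpha0=1.0, c=1e-4, shrink=0.5, max_trials=50):
    """Armijo backtracking: shrink alpha until a sufficient-decrease test holds.

    Each failed trial costs one extra evaluation of f -- the 'extra
    computation' point 1 refers to. Illustrative sketch, not the paper's
    exact procedure.
    """
    g = grad_f(x)
    d = [-gi for gi in g]                       # descent direction (plain negative gradient)
    fx = f(x)
    slope = sum(gi * di for gi, di in zip(g, d))
    alpha = alpha0
    for _ in range(max_trials):
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        if f(x_new) <= fx + c * alpha * slope:  # sufficient decrease achieved
            return alpha
        alpha *= shrink                          # otherwise shrink and retry
    return alpha
```

No knowledge of a Lipschitz constant is needed; the stepsize adapts to the local landscape, at the price of the extra function evaluations.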
Summary: This paper proposed a parameter-free method for decentralized learning and showed that the method converges to the optimal solution linearly without hyperparameter tuning. Strengths: This paper proposed a novel decentralized method and the convergence rate is analyzed under a general setting. Weaknesses: 1. Theorem 4 provided the convergence rate of the proposed method without hyperparameter tuning, but Corollary 4.1 provides the results with some $\beta_1$ and $\beta_2$. Thus, the reviewer thinks that the results in Corollary 4.1 require hyperparameter tuning. The author discussed the convergence rate in Corollary 4.1, but this result is not the convergence rate of the parameter-free method. The reviewer thinks that it is necessary to discuss the results in Theorem 4 and compare these results with those of the existing methods. 2. The reviewer does not understand what the author would like to conclude from Theorem 5 (in the weakly convex setting). Theorem 5 only shows that the amount of parameter updates (i.e., $\| X^j - X^{j+1} \|^2$) is bounded from above by some factor and does not show that the proposed method converges to the optimal solution. 3. The author did not tune the stepsize for EXTRA and NIDS in Fig. 1 and did not mention how the stepsize was tuned in Fig. 2. The author claimed that the proposed method consistently outperforms EXTRA and NIDS in Figs. 1 and 2, but these results are not convincing because the results of NIDS and EXTRA are suboptimal, at least in Fig. 1. The reviewer would like to see the results of EXTRA and NIDS with well-tuned stepsizes and see the comparison between the proposed method and the prior methods. 4. Line 3 in Algorithm 3 cannot be calculated in a decentralized manner. Technical Quality: 2 Clarity: 1 Questions for Authors: See the above comments. Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: See the above comments. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Referee for her/his comments, which will help us clarify some parts of the paper as well as improve the revised version. Our detailed reply follows. 1. **"The reviewer thinks that the results in Corollary 4.1 require hyperparameter tuning"**: We apologize for this misunderstanding, due to the lack of adequate comments on our side. **There is no need for hyperparameter tuning here**. The two parameters $\beta_1\geq1$ and $\beta_2>0$ are just *extra* degrees of freedom to offer further flexibility in the algorithm design (see the comment below), but they are not necessary. In fact, **(i)** the theoretical convergence rate as in Th. 4 (or Corollary 4.1) does not depend (asymptotically) on $\beta_1$ and $\beta_2$. **(ii)** One can set $\beta_1=\beta_2=1$ and they will disappear from the algorithm. We could have introduced the algorithm directly setting $\beta_1=\beta_2=1$. Notice that all our simulations are conducted under this choice, $\beta_1=\beta_2=1$, which provides quite compelling performance. **Further comments on $\beta_1$ and $\beta_2$**: at a high level, the introduction of these two parameters is to allow practitioners to use 'larger' stepsizes $\alpha_i^t$ out of the line search, via the *nonmonotone* sequence $\{\gamma^k\}$, where $\gamma^k=\left(\frac{k+\beta_1}{k+1}\right)^{\beta_2}$. This helps in practice to achieve better performance, if one is willing to explore this extra degree of freedom. It is, however, not necessary for the theoretical convergence and, as mentioned, one can either choose $\beta_1=\beta_2=1$ or directly set $\gamma^k=1$ for all iterations $k$. We discussed this matter in lines 207-211 of the original submission. We will further clarify this aspect in the revised version of the paper. In general, we talk about "hyperparameter tuning" when the parameters involved are not free but must satisfy some conditions *coupling* them with other parameters of the algorithm (such as the stepsize, etc.). 
In that case, their choice would no longer be free. This is not the case here, because there are no such requirements on $\beta_1$ and $\beta_2$ beyond $\beta_1\geq1$ and $\beta_2>0$ (hence not restricting the choice of any other algorithm tuning parameter). 2. **About Theorem 5 (in the weakly convex setting)**: We thank the Referee for the comment. The result establishes the asymptotic optimality of any limit point of the sequence generated by the algorithm. However, we agree that the nonasymptotic convergence result is not particularly strong. Given that the strongly convex case is more challenging in terms of guaranteeing linear convergence, and considering the new results anticipated in other Referees' comments, we will remove the convex case. This will allow us to use the space to introduce the local min-consensus variant and its convergence, and possibly the stochastic case under the Retrospective Approximation framework. We hope that the Referee will agree that the convex case is not the major result of the paper. 3. **"The author did not tune the stepsize for EXTRA and NIDS in Fig. 1 ...":** It is challenging to fairly compare our scheme with existing ones because the existing decentralized schemes require full knowledge of the network and optimization parameters, whereas our scheme does not. We agree that other choices could have been made for the tuning of EXTRA and NIDS. Some questions to guide the process are: **(i)** Should the comparison be done using the *theoretical* tuning recommended in the papers to ensure convergence? By doing so, the comparison would be fair in the sense that all algorithms are guaranteed to converge. However, this approach may result in quite stringent stepsize values. Or **(ii)** should one take a more practical approach, ignoring theoretical conditions (and thus convergence guarantees) and hand-tuning for the best observed convergence? Both approaches introduce some level of unfairness. 
In our paper, we followed the former approach: we used the tuning recommended in the respective papers for our scheme, EXTRA, and NIDS to guarantee convergence. However, it can still be argued that the comparison is unfair because NIDS and EXTRA require full knowledge of the network and optimization parameters, while our scheme does not. But in this case, the unfairness is towards our scheme. To address the Referee's suggestion, we have conducted new experiments where we hand-tuned EXTRA and NIDS for their best practical performance and compared them with our method. Note that in this setting, EXTRA and NIDS lack theoretical convergence guarantees, while our method maintains them. We kindly refer the Referee to the global rebuttal section for more details on the experiments and the new figures (attached pdf therein). We are open to conducting alternative comparisons if the Referee has something different in mind to share. 4. **"Line 3 in Algorithm 3 cannot be calculated in a decentralized manner":** This method is actually implementable in existing wide-area networks, thanks to LoRa technology. This has been briefly discussed in the paper (lines 212-225) and further elaborated in the Reply to Reviewer H817; please refer therein to item 1, **About the global min-consensus**. Also, we provided a variant of the algorithm that replaces the global min-consensus with a **local** min-consensus (see **From the global min-consensus to the local min-consensus** in the reply to Reviewer H817). This variant has been simulated and results are provided in the attached pdf in the section of general comments. We hope the Referee is willing to reconsider her/his initial assessment, considering that this is the *first* adaptive method in the decentralized literature, even just focusing on strongly convex problems. Achieving adaptivity posed several challenges, resolved in this work for the first time. 
We elaborated further on this aspect in point 3) of our reply to Reviewer H817, which we refer the Referee to. --- Rebuttal 2: Comment: We thank the authors for their response. All concerns were addressed after reading the authors' replies. The reviewer thinks the proposed methods converge without any hyperparameter tuning, which is a strong advantage over EXTRA and NIDS. However, it is reasonable that the parameter-free methods are inferior to these methods with well-tuned hyperparameters, and the reviewer guesses that many readers would like to see how large this gap is. Thus, it is important to discuss this gap in the experimental section. All the reviewer's concerns were solved. The reviewer raised the score to 5. --- Rebuttal Comment 2.1: Comment: Thanks for reading the rebuttal and reassessing the evaluation of the paper. We will expand the numerical section and properly comment on the comparison, as requested. As a follow-up to your last comment, we wish to remark that it is not obvious that a nonadaptive, even grid-tuned, decentralized algorithm is superior to an adaptive one. The reason is that, even if grid-search tuned, the stepsize in a nonadaptive algorithm is chosen 'once and for all'. On the contrary, in our adaptive algorithm, the stepsize changes at each iteration. This offers the possibility to use larger stepsize values when traveling over parts of the landscape with a favorable Lipschitz gradient constant. This is not the case with fixed (albeit tuned) stepsize algorithms, which in general are forced to use much smaller stepsizes. We think that it will not be difficult to construct case-study functions highlighting this difference.
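The nonmonotone factor $\gamma^k=\left(\frac{k+\beta_1}{k+1}\right)^{\beta_2}$ quoted in point 1 of the rebuttal above is easy to inspect numerically; a minimal sketch (the function name is ours, purely illustrative):

```python
def gamma(k, beta1=1.0, beta2=1.0):
    """Nonmonotone relaxation factor gamma^k = ((k + beta1)/(k + 1))^beta2.

    For beta1 >= 1 and beta2 > 0 the factor is >= 1 and tends to 1 as k
    grows, so it can only enlarge early stepsizes without affecting the
    asymptotic rate. With beta1 = beta2 = 1 it is identically 1, i.e., the
    two parameters "disappear" from the algorithm, as the rebuttal notes.
    """
    return ((k + beta1) / (k + 1.0)) ** beta2
```

This makes concrete why $\beta_1,\beta_2$ are free extra degrees of freedom rather than hyperparameters coupled to the stepsize: any admissible choice leaves the asymptotic behavior unchanged.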
Summary: This paper studies adaptive parameter determination in decentralized optimization. It is a meaningful and interesting topic to investigate. They propose a decentralized method to solve consensus optimization, develop an adaptive parameter strategy, and show the linear convergence of their algorithm. Strengths: The main strength of the paper lies in the high value of the topic they studied, which is indeed a very meaningful and also a difficult topic to explore. Weaknesses: I was very excited when reading the abstract of this paper, because I know the difficulty of this topic. I was worried about the global min-consensus step, while the authors developed a local min-consensus strategy to replace it, which well addressed my concerns. Technical Quality: 1 Clarity: 2 Questions for Authors: No Confidence: 4 Soundness: 1 Presentation: 2 Contribution: 1 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Referee for the review. We kindly disagree with her/his assessment, which is an oversimplification and trivialization of our contributions, missing the challenges our work addresses. Details follow. 1) **About the global min-consensus:** The Referee's sole concern is the presence of a min-consensus step in the algorithm, whereby agents evaluate the minimum of their local stepsizes over the network at each iteration. The Referee questions the implementability of this step in practice. **The reality is quite the opposite**. As discussed in our paper (lines 212-225), the min-consensus step **is implemented seamlessly** in commercial wide-area mesh networks without changes to existing communication protocols or additional hardware. This is facilitated by the *decade-old* Long Range (LoRa) technology, commercialized by Semtech and widely integrated into most commercial transceivers in wide-area networks. The Referee may want to know that LoRa is used in various fields, including industry, smart cities, environmental monitoring, agriculture, healthcare, and long-range, low-power Internet of Things (IoT) applications [2, 14, 15] (references as in our paper). LoRa supports communication ranges of hundreds of kilometers in free space, hundreds of meters in indoor environments, and half a kilometer in urban settings [14], with a maximum data rate of hundreds of kbit/s on average and a few kbit/s at the longest possible range. **This technology is ideal for implementing the min-consensus step**: each agent broadcasts a few bits representing the quantization of its stepsize over the LoRa channel, which reaches all other agents in the network thanks to the extensive range of the LoRa signal. Each agent can then compute the minimum of all the stepsizes. **This resolves the issue of implementability raised by the Referee**. This is not merely our claim; it is a fact backed by existing technology. 
The Referee cannot overlook this well-established technology and solution. 2) **From the global to the *local* min-consensus:** Intellectually, we agree that developing a method without the global min-consensus is worthwhile. We can address this by providing a first variant of the algorithm where the *global* min-consensus is replaced by a **local** min-consensus. Specifically, we replace step 3 of Algorithm 1 with $\alpha^k_i=\min_{j\in\mathcal{N}_i}\alpha^k_j$, where $\mathcal{N}_i$ is the set of neighbors of agent $i$ (including agent $i$ itself). Then, in the other steps the common stepsize is replaced by a diagonal matrix containing the individual stepsizes $\alpha^k_i$ above. The convergence analysis can be readily adapted to this variant (see box 'Official comments' below), yielding: the number of iterations $N$ for $\min_{k\in \{1,\ldots,N\}} V^k\leq \varepsilon$ reads $$N=\widetilde{\mathcal{O}}(d_{\mathcal{G}}N_{\varepsilon}),$$ where $N_{\varepsilon}$ is the number of iterations achieved using the *global* min-consensus (this is what we defined as $N$ in Corollary 4.1, for the three cases) and $d_{\mathcal{G}}$ is the diameter of the graph. This shows that if the global min-consensus is replaced by the local min-consensus, the overall number of iterations degrades by a factor $d_{\mathcal{G}}$. This is not surprising, because it is known that a min-consensus algorithm converges on a network in a *finite* number of iterations on the order of the diameter. We believe that the multiplicative dependence of the convergence rate on $d_{\mathcal{G}}$ can be improved or removed, see the box 'Official comments' below. We tested this new variant running new simulations reported in the attached pdf in the global comment section. We hope that this satisfies the intellectual curiosity of the Referee. We can add this variant to the revised paper, if the Referee wishes so. 
3) **The Referee is overlooking our major contributions:** Our contributions cannot be trivialized to the min-consensus step only. In fact, equipping existing decentralized algorithms such as DGD, EXTRA, or NIDS with a global min-consensus procedure *does not grant them adaptivity*; even worse, it would jeopardize their convergence. **It is not even clear how to identify for them a (descent) 'direction'** (and local merit function) along which to perform the adaptive step-size selection, see lines 190-195 in the paper. The tuning of the stepsize in decentralized algorithms must depend on network properties, as stepsize values similar to those in centralized settings generally lead to divergence. Thus, any candidate direction should contain information on the optimization **and** network. There is no understanding of how the network should influence the direction and how to encode in the candidate direction optimization and network information. **We offer the first principle-based procedure to resolve these issues, addressing the major challenge that has prevented the migration of centralized adaptive stepsize techniques to the decentralized setting**. Hopefully this will trigger new work in this direction. Last but not least, our algorithm is provably convergent even when applied to functions that are **only locally smooth** (and locally strongly convex), which is not the case for the majority of existing decentralized algorithms, e.g., EXTRA and NIDS. This significantly enlarges the class of problems to which our scheme can be applied. We hope this clarification helps the Referee appreciate the importance of our contribution. **In summary:** **(i)** We showed that *existing technology supports global min-consensus for free*. **(ii)** Still, we addressed the Referee's comment by providing and numerically testing a variant of the algorithm that removes the global min-consensus. 
**(iii)** We clarified that our major contribution is not the global min-consensus and demonstrated that even with a global consensus step, existing schemes do not achieve adaptivity for free, let alone convergence guarantees. --- Rebuttal Comment 1.1: Title: Thanks for your strong rebuttal. I decided to raise my score Comment: Thank you for your efforts in responding to my comments. This is a strong rebuttal and the "local min-consensus" addressed my concerns. Therefore, I decided to raise my score. However, I insist that the global min-consensus step is not preferable even if, as the authors discussed, it has been applied in many real applications. This global consensus step requires multiple rounds of local min-consensus steps, which is similar to inner iterations inside each global outer iteration. Each inner iteration requires synchronization of all nodes, while synchronization in networks is not easy and does have costs, especially when the network size is large. Except for the above, I do not have concerns, but have one question. Could the authors explain why the optimality errors of the local min-consensus strategy oscillate severely in your experiments? --- Reply to Comment 1.1.1: Title: Thank you Comment: We wish to thank the Referee for spending extra time to go over our rebuttal and reassessing the evaluation of our work. We are glad that the newly suggested local min-consensus addresses his/her concerns. Thank you! Of course, we will add the local min-consensus and new simulations to the paper. Likely, we will remove the treatment of the convex case for the sake of space. Below we address the new questions/comments. -**Spikes in the error dynamics over the iterations:** The reason why we numerically observe the 'spikes' in the error dynamics is that the local min-consensus is a 'static' procedure that estimates the global min among the stepsizes but cannot track its variations smoothly. 
We conjecture that we can cope with this issue by putting forth *dynamic* estimation mechanisms of the global min-stepsize, which are capable of tracking time-varying signals. We will definitely investigate this topic in the near future, aiming at comparing alternative strategies to track a time-varying global-min consensus over networks. We thank the Referee once more for triggering this interesting direction. -**A few more comments on the implementability of the global min-consensus:** Based upon the new comments from the Referee on this aspect, we feel that some further clarifications may help. When we mentioned that LoRa allows one to readily implement global min-consensus 'instantaneously' over a mesh network, we forgot to point out that it generates a *fully connected* network (wherein each node can reach everybody else) of low-throughput channels that **co-exists** simultaneously and continuously with the high-throughput mesh network whereby agents can only communicate with their immediate neighbors. This means that (i) the global min-consensus is reached in *one* iteration (everybody transmits its own stepsize and everybody receives the stepsizes of all the others); this is because the LoRa network is a fully connected graph. (ii) The two types of communication, the stepsize broadcasting over LoRa to every other agent and the transmission of the vector variables to each immediate neighbor, happen at the same time (not sequentially) because they use different channels and protocols. The Referee may think of this system as two overlapping, co-existing graphs, one complete (over which the stepsizes are exchanged) and one not, modeling the mesh network wherein vector variables are communicated among neighbors. However, we agree with the Referee that some synchronism in the implementation of the algorithm is necessary, although there are no (conceptual) double loops. In the revised version, we will rewrite the paragraph discussing this matter as well. Thanks again. 
--- Rebuttal 2: Title: Some technical details on the discussed new local min-consensus procedure Comment: For the sake of completeness, we show how to modify the existing proof in the paper to account for the replacement of the global min-consensus with the local min-consensus. The changes are minor. Below we will use the same notation and equation numbering as in the paper. **Step 1:** Using the same merit function $V^k$ as in the paper (see eq. (11)) and following the steps of the proof of Th. 4, one can easily check that the decay of $V^k$ now becomes $$V^{k+1}\leq (1-\rho^k)(\gamma^k)^2 V^k + C (\alpha_{\max}^k-\alpha_{\min}^k) V^k+ (\alpha_{\max}^k-\alpha_{\min}^k) R^k, \qquad (A.1)$$ where $\alpha_{\max}^k$ (resp. $\alpha_{\min}^k$) is the largest (resp. smallest) stepsize among the agents at iteration $k$, $C$ is a constant independent of the iteration index, and $R^k$ depends on $X^k$ and $D^k$ and is uniformly bounded when $X^k$ and $D^k$ are; let $R$ be such that $R\geq R^k$ for all $k$. Comparing (A.1) above with (12) in the paper, we notice that the decay of $V^k$ now contains second and third terms on the RHS of (A.1), both due to the use of the local min-consensus, which no longer guarantees that all the stepsizes are equal at each iteration. The rest of the analysis consists in studying the dynamics of these extra error terms. **Step 2:** Based upon $\alpha_{\max}^k-\alpha_{\min}^k$ in (A.1), we naturally identify at each iteration $k$ a favorable and an undesired event, namely: 1. **Favorable event:** $\alpha_{\max}^k-\alpha_{\min}^k=0$. Substituting in (A.1) yields $V^{k+1}\leq (1-\rho^k/2)(\gamma^k)^2 V^k$, which is the same dynamic as (12) in the paper, achieved under the global min-consensus. This means that, subject to this event, the algorithm behaves as if performing a global min-consensus. 2. **Unfavorable event:** $\alpha_{\max}^k-\alpha_{\min}^k> 0$. 
The following facts hold for these two events: $\bullet$ **Fact 1 (favorable event):** The proof in the paper shows that $N_{\varepsilon}$ *consecutive* iterations of the favorable event are sufficient to guarantee that $V^k\leq \varepsilon$, where $N_{\varepsilon}$ is given by $N$ as defined in Corollary 4.1 in the paper (for all three cases there). $\bullet$ **Fact 2 (unfavorable event):** Invoking the finite-time convergence of the min-consensus algorithm, it is not difficult to check that the unfavorable event can happen at most $2 d_{\mathcal{G}}(\log k+\log(L/L^0))$ times in $k$ iterations. **Step 3:** Combining Fact 1 and Fact 2, we can conclude that if the total number of iterations $N$ is large enough, namely $$\frac{N}{2d_{\mathcal{G}}\left(\log N + \log (L/L^0)\right)}>N_{\varepsilon},\quad (A.2)$$ then it must hold that $$\min_{k\in \{1,\ldots, N\}} V^k\leq \varepsilon.$$ The following is sufficient for (A.2) to hold: $$N=\mathcal{O}\left(\max\left\{\log d_{\mathcal{G}}+ \log N_{\varepsilon}, \log L/L^0\right\}d_{\mathcal{G}} N_{\varepsilon}\right)=\widetilde{\mathcal{O}}(d_{\mathcal{G}}N_{\varepsilon}),$$ where $\widetilde{\mathcal{O}}$ neglects log-factors (independent of $\varepsilon$). This completes the proof. We believe that the linear dependence of the number of iterations on $d_{\mathcal{G}}$ can be improved to $\log d_{\mathcal{G}}$ by providing a finer analysis of the frequency of the undesired event (Fact 2). Furthermore, we conjecture that the dependence on $d_{\mathcal{G}}$ can be eliminated (as a multiplicative factor of $\log 1/\varepsilon$) if, instead of using **static** procedures to estimate the global min among the stepsizes, like the local min-consensus, *dynamic* estimation mechanisms of the global min-stepsize are put forth. This is the subject of future investigations, and beyond the scope of this paper. 
We provided this result and proof only to satisfy the curiosity of the Referee. --- Rebuttal Comment 2.1: Comment: Thank you for the detailed explanation. I do wish to see the new local min-consensus strategy in the final manuscript.
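The finite-time convergence of min-consensus invoked in Fact 2 (a number of rounds on the order of the graph diameter) can be illustrated with a toy example; the sketch below, on a 4-node path graph, is ours and purely illustrative, not the paper's Algorithm 1:

```python
def local_min_consensus_round(values, neighbors):
    """One round of local min-consensus: each agent replaces its value by
    the minimum over its closed neighborhood (itself plus its neighbors)."""
    return [min(values[j] for j in neighbors[i]) for i in range(len(values))]

# Path graph 0-1-2-3: diameter 3, so the global min spreads in 3 rounds.
neighbors = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3]]  # closed neighborhoods
vals = [4.0, 3.0, 2.0, 1.0]                          # local stepsizes
for _ in range(3):                                   # = diameter of the path
    vals = local_min_consensus_round(vals, neighbors)
# after diameter-many rounds, every agent holds the global minimum stepsize
```

Between rounds the local minima may disagree across agents, which is exactly the source of the $\alpha_{\max}^k-\alpha_{\min}^k$ error terms in (A.1) and of the unfavorable events counted in Fact 2.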
Summary: This paper introduces a new algorithm for decentralized optimization. The main advantage over previous work in this domain is that it allows adaptive stepsize selection (via backtracking) that is independent of the properties of the functions being minimized. The analysis of the algorithm recovers the linear rate of convergence for functions that are smooth and strongly convex, and the standard sublinear rate for the convex case. Strengths: As the authors state, this seems to be the first fully decentralized algorithm with adaptive stepsize selection that has strong convergence guarantees. I found the operator splitting method for solving the KKT conditions innovative, and it is nice that this leads naturally to a way to do local stepsize adaptation. Weaknesses: While the numerical experiments are sufficient, new results on ridge and logistic regression can only get one so excited. The exposition of the saddle point reformulation at the beginning of Section 3 could be improved. Line 142: Does the optimization in (P') involve y at this point? Should x be capitalized? Line 143: Some discussion about why K is being introduced here would help the exposition a lot, along with a little more on why the solutions of (P) and (P') match. A lot of work is being left to the reader at this critical point. Technical Quality: 4 Clarity: 3 Questions for Authors: It is outside the scope of this submission, but I would be interested to know if this type of adaptive stepsize selection made a difference in practice in the stochastic setting. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Referee for reviewing our paper and her/his positive assessment. We are glad that she/he recognized the major novelty of the proposed approach, i.e., the novel operator splitting technique that naturally leads to local stepsize adaptation. Our reply to her/his questions follows. 1. **Simulations:** We agree that more simulations will be useful. We plan to add new experiments as follows: **i)** logistic and ridge regression problems on other topologies; **ii)** a new comparison between the proposed method and EXTRA and NIDS, where for the latter we hand-tune the stepsize for the best practical performance (as requested by other Referees); and **iii)** simulation (and comparison with the aforementioned schemes) of a newly added adaptive method wherein the global min-consensus is replaced with the local one (to address some concerns of Referee H817). 2. **Exposition:** We thank the Referee for her/his feedback. **i)** The Reviewer is right: in Line 142 there is a typo; y should not be there and x should be capitalized. **ii)** We will provide some insight into why K is introduced in (P'). This goes along with some clarification on why (P) and (P') have the same solution, under the condition that K satisfies condition (c2) (which serves exactly this purpose). We will clarify these aspects in the revised manuscript. 3. **Adaptivity in the stochastic setting:** The difficulty in implementing adaptivity in the stochastic setting lies in the line-search procedure, which is notoriously difficult in the presence of noise. This is a subject of our future investigation. However, a positive answer to the Reviewer's question can be provided for the class of stochastic optimization problems for which a *Retrospective Approximation Approach* is feasible for the line search.
Specifically, at each iteration $t$, a sample approximation $f_i(x_i; \xi_i^t)$ and $\nabla f_i(x_i; \xi_i^t)$ is constructed for the population function $f_i(x_i)=\mathbb{E}_{\xi}[f_i(x;\xi)]$ and its gradient $\nabla f_i(x_i)$ ($\xi_i^t$ is the sample batch drawn by agent $i$ at iteration $t$), and the proposed local line-search procedure is performed using now $f_i(\bullet; \xi_i^t)$ and $\nabla f_i(\bullet; \xi_i^t)$. The difference from a classical stochastic line-search is that here the batch sample $\xi_i^t$ is kept *fixed* during the entire backtracking procedure (of course it can change from one iteration to another). Within this setting, all the results of the paper are preserved if read in expectation. While this Retrospective Approximation Approach does not cover stochastic oracles in their full generality, it is a reasonable model for certain machine learning problems, for example those where evaluating $f_i(\bullet; \xi_i^t)$ and $\nabla f_i(\bullet; \xi_i^t)$ at a given point $x$ corresponds to performing one pass over the data batch $\xi_i^t$. Quite interestingly, in an early version of the manuscript, we had this result, which was removed in the final version because of space limits. We can add it back, perhaps removing the study of deterministic convex functions. We thank again the Referee for her/his valuable comments. --- Rebuttal Comment 1.1: Comment: Thanks for this, especially your insights into the stochastic setting.
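A minimal single-agent sketch of this fixed-batch backtracking idea (the losses, Armijo constants, and function names below are all illustrative, not the paper's algorithm): the batch is drawn once per outer iteration and then frozen throughout the backtracking loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, batch):
    """Sample-average loss on a fixed batch: mean of (x - xi)^2 / 2."""
    return 0.5 * np.mean((x - batch) ** 2)

def grad(x, batch):
    return np.mean(x - batch)

def retrospective_step(x, data, batch_size=8, gamma0=1.0, beta=0.5, c=1e-4):
    """One outer iteration: draw a batch, then backtrack on that SAME batch."""
    batch = rng.choice(data, size=batch_size, replace=False)
    g = grad(x, batch)
    gamma = gamma0
    # Armijo backtracking; the batch is frozen during the whole search.
    while f(x - gamma * g, batch) > f(x, batch) - c * gamma * g ** 2:
        gamma *= beta
    return x - gamma * g

data = rng.normal(3.0, 1.0, size=1000)
x = 0.0
for _ in range(50):
    x = retrospective_step(x, data)
# x approaches the population mean (roughly 3.0).
```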
Rebuttal 1: Rebuttal: We thank all the Referees for reviewing our paper and for their feedback and suggestions. We did our best to address every concern in the individual replies, which we refer to for details. Below we only discuss the newly added experiments. A common request from multiple Referees has been to expand the experiments, which we have done. Some of the new experiments (more in the revised paper) are reported in the attached PDF. More specifically, we did the following: 1. As requested by one Referee, we have changed the tuning of the benchmark (nonadaptive) algorithms, NIDS and EXTRA. We now select the stepsize by **hand-tuning**, aiming for the best practical performance (fastest observed rate). Notice that in this case NIDS and EXTRA are *no longer* guaranteed to converge theoretically, because the stepsize values typically violate the conditions for convergence. On the contrary, our scheme has theoretical convergence guarantees. Furthermore, and quite remarkably, it compares favorably even with these hand-tuned instances of NIDS and EXTRA. 2. We have also introduced and simulated a new version of our algorithm that replaces the global min-consensus with the **local** min-consensus. This addresses a comment from Reviewer H817, who considers the global min-consensus undesirable (we kindly disagree with this assessment, as *current technology supports scalar min-consensus over a network for free*). The new variant still has good practical performance and convergence guarantees. 3. We have compared the above decentralized algorithms (NIDS, EXTRA, our original scheme based on global min-consensus, and the new variant based on local min-consensus) over different graph topologies (not present in the original submission), namely (i) a line graph, (ii) a ring, and (iii) a random regular graph (with two degree values). On each graph we report experiments for logistic and ridge regression problems. **Some comments on the numerical results** 1. 
The figures demonstrate that our original scheme, based on global min-consensus, consistently outperforms EXTRA and is almost always superior to NIDS, even when these algorithms are hand-tuned. This holds true across different optimization problems and network topologies. This is particularly noteworthy considering that hand-tuning is neither desirable nor easily implementable in practice, especially when the network topology is unknown. Additionally, hand-tuning generally compromises theoretical convergence guarantees. On the contrary, our schemes do not require any intervention and, more importantly, have theoretical guarantees. 2. The new variant of the proposed adaptive algorithm based on local min-consensus still shows strong performance, holding up well against hand-tuned EXTRA and NIDS. **(i)** While the convergence rate remains linear, the error curves may exhibit some 'spikes.' This is consistent with the new convergence analysis provided (see details in the Reply to Reviewer H817) and is due to the min-consensus converging in a finite number of iterations proportional to the graph diameter. The correct way to interpret the convergence is that the minimum of the error gap within N iterations falls below the desired accuracy *geometrically fast*. **(ii)** As expected, as the graph becomes more connected, the new variant of the algorithm using local min-consensus matches the performance of the original algorithm using global min-consensus. 3. On convex problems (logistic regression) the gap in performance among the different algorithms (except EXTRA) is less pronounced than in the strongly convex case. Pdf: /pdf/c04be555bac1ca2b12b7a3acdbad05c95ba188cc.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Ultrafast classical phylogenetic method beats large protein language models on variant effect prediction
Accept (poster)
Summary: The paper explores a method for estimating the transition matrix from multiple sequence alignments (MSAs). It utilizes a phylogenetic tree model, which is parameterized by the transition matrix and site rates. To estimate the transition matrix via maximum likelihood, an alternating optimization method is employed: first, the model parameters are fixed, and a set of phylogenetic trees is constructed from the MSAs. Then, given the MSAs, the trees, and the site rates, the transition matrix is estimated by maximizing the likelihood of the data given the tree. A recent method, CherryML, speeds up this process by replacing the likelihood of the data given the tree with the composite likelihood of the cherries (pairs of adjacent leaves) given the tree. This likelihood can be simplified using time-reversible models and can be learned from the tree, MSA, and leaf sequences. Building on the concepts of CherryML, this paper introduces a new method called FastCherry, which further simplifies the composite likelihood estimation by eliminating the need for tree construction. Instead, it focuses on creating disjoint pairs of similar sequences and estimating the composite likelihood from these pairs. Additionally, FastCherry extends the original WAG phylogenetic model by considering per-site transition matrices, with all parameters estimated using maximum likelihood. The authors demonstrate the significance of their work through two different evaluation tests. The first test evaluates the speedup achieved by FastCherry compared to CherryML. The second test examines variant prediction performance in ProteinGym, comparing FastCherry to baseline approaches such as ESM-1v and ESM-IF1. Strengths: The paper addresses an important application problem, though its relevance to the machine learning community may be limited. It is noteworthy that a phylogenetic model can achieve results comparable to, and sometimes slightly better than, pretrained PLMs in a specific use case. 
The paper is well-written, providing careful explanations of technical points that require a background in protein science. Weaknesses: The idea of simplifying the trees by partitioning the MSA into pairs of similar sequences is an approximate representation of the cherries in the tree. Since the partitioning algorithm is a greedy one, it may overlook the full structure of the MSA, reducing the relationships between sequences to binary relations between pairs. That also explains why CherryML with FastCherry loses accuracy compared to CherryML with FastTree in Figure 1.b. This work represents an incremental improvement over the original CherryML proposal, and therefore, its novelty is somewhat limited. Additionally, the results demonstrated in the variant prediction benchmark show only marginal improvements compared to language models. For a NeurIPS paper, I would expect more significant advancements, both in methodology and in the results presented. For the machine learning community, it is not sufficient to demonstrate that one method is better than another through benchmark results alone. A deeper analysis explaining why the phylogenetic model performs better than language models in certain benchmarks and when it fails to do so would provide valuable insights for the community to learn from. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How could the authors provide a deeper analysis of why the phylogenetic model performs better than language models in certain benchmarks? 2. Could you please explain why, in Figure 1.b, the error rate seems to increase for CherryML as the number of families increases? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Please find our response below: > The idea of simplifying the trees by partitioning the MSA into pairs of similar sequences is an approximate representation of the cherries in the tree. Since the partitioning algorithm is a greedy one, it may overlook the full structure of the MSA, reducing the relationships between sequences to binary relations between pairs. That also explains why CherryML with FastCherry lose the accuracy compared to CherryML with FastTree in Figure 1b. The whole purpose of developing FastCherries is to avoid doing expensive computations while assuring little reduction in accuracy. It is a worthwhile tradeoff that can benefit applications requiring scalability, such as the one considered in our manuscript. Indeed, for the variant effect prediction task, it can be seen in Supplementary Figure S1 that FastTree is too slow to scale up to the full dataset size. In contrast, FastCherries provides no loss in performance and can crunch all the available data, yielding the best results. > This work represents an incremental improvement over the original CherryML proposal, and therefore, its novelty is somewhat limited. We respectfully disagree with this view. As our results clearly demonstrate (Figure 1, Supplementary Figure S1, and Supplementary Table S1) and as the other reviewers have noted, the performance of CherryML with FastCherries is comparable to that of CherryML with FastTree while being one to two orders of magnitude faster. Furthermore, we are proposing a novel method (SiteRM) to estimate site-specific rate matrices and this framework has the potential to advance phylogenetic inference significantly; software such as IQTree and its partition model puts this application well within reach. 
When compared to the seminal LG model, SiteRM provides a substantial improvement in variant effect prediction, as seen in Supplementary Table S1, suggesting that it can model site-specific evolutionary constraints more accurately than previous approaches. > For the machine learning community, it is not sufficient to demonstrate that one method is better than another through benchmark results alone. A deeper analysis explaining why the phylogenetic model performs better than language models in certain benchmarks and when it fails to do so would provide valuable insights for the community to learn from. > How could the authors provide a deeper analysis of why the phylogenetic model performs better than language models in certain benchmarks? The improved performance of our method comes from conditioning on the wildtype to obtain a local fitness function $P(y | x, t=1).$ Please see our response to Reviewer 2 for more discussion on the intuitions. We plan to make this more clear in the final version. We do agree with the reviewer that deeper error analysis of large language models would be a valuable research direction. Ultimately, we believe that the best variant effect predictors will combine our evolutionary modeling $P(y | x, t)$ with protein language modeling. > Could you please explain a bit on why on Figure 1.b when the number of family increases the error rate seem increase for CherryML? The fact that the (rounded) error rate is 1.7 for $2^{11}$ and 1.8 for $2^{13}$ is just a small sampling effect. At these larger sample sizes, the statistical risk of the estimator has converged to its bias term. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I still have concerns about reducing the dependency to pairs, as this seems like a rough approximation. Even though empirical results indicate only a small reduction in accuracy for certain test cases, this approach should be supported by a theoretical analysis. 
Specifically, it should demonstrate which types of data distributions result in an acceptable loss of accuracy on average, as well as the extent of the accuracy loss in the worst-case scenario. Therefore, I would prefer to maintain my score as it is. --- Reply to Comment 1.1.1: Comment: While we agree that theoretical results would be nice, the matrix exponential is an unwieldy object and therefore deriving quantitative theoretical results regarding the asymptotic bias or sample complexity of our method is challenging. These challenges are not unique to our work but rather true of the whole field of statistical phylogenetics which relies on such continuous-time Markov chain models, such as the seminal work of Whelan and Goldman or that of Le and Gascuel (neither work provides theoretical results). Following best practice in the field, we showcased our work on a variety of standard simulated and real-data benchmarks — as well as the novel variant effect prediction benchmark — where we showed that our method has comparable or superior performance at a fraction of the computational cost.
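The idea of partitioning an MSA into disjoint pairs of similar sequences, as discussed in this thread, can be illustrated with a toy greedy pairer based on Hamming distance. To be clear, this naive $O(n^2)$ sketch is NOT the paper's near-linear FastCherries algorithm; it only conveys the pairing concept, with made-up sequences:

```python
def hamming(a, b):
    """Number of mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

def greedy_pairing(seqs):
    """Greedily pair each unpaired sequence with its closest unpaired
    neighbor by Hamming distance. O(n^2); illustrative only."""
    unpaired = list(range(len(seqs)))
    pairs = []
    while len(unpaired) >= 2:
        i = unpaired.pop(0)
        j = min(unpaired, key=lambda k: hamming(seqs[i], seqs[k]))
        unpaired.remove(j)
        pairs.append((i, j))
    return pairs

# Toy aligned sequences: three similar couples.
msa = ["ACDEF", "ACDEG", "WYWYW", "WYWYV", "AAAAA", "AAAAC"]
pairs = greedy_pairing(msa)  # pairs up the most similar sequences
```

The disjoint pairs then play the role of approximate "cherries" on which the composite likelihood is evaluated.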
Summary: The paper proposed a fast method for phylogenetic estimation from MSA called FastCherries, this method significantly speeds up the computational process with high accuracy. This method was demonstrated to be orders of magnitude faster than existing methods while achieving similar statistical efficiency. Further, the authors proposed the SiteRM model for variant effect prediction, also outperforming existing models, particularly large language models. Strengths: 1. The proposed methods are significantly faster than existing methods and are scalable for large sets of MSA. Meanwhile, the methods maintain a high level of accuracy, making a good contribution to the field of protein evolution. 2. The methods made a superior performance in variant effect prediction than the large protein language model, which is effective in particularly GPU resources. 3. The methods were tested rigorously on both simulated and real data. Weaknesses: 1. The paper lacks a comparison to other methods in terms of computational resources, such as memory usage. Could the author provide some results? 2. The model is still highly parameterized. Can the SiteRM model avoid overfitting with extensive datasets? 3. How does the method perform with datasets that have a large number of gaps in MSAs (for example, the sequence has very low homology)? 4. Can the methods be extended to incorporate other types of biological data, such as DNA/RNA sequence or structural alignment? Technical Quality: 4 Clarity: 3 Questions for Authors: See above. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes, the limitations are well described. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We respond below: > The paper lacks a comparison to other methods in terms of computational resources, such as memory usage. Could the author provide some results? Our method uses linear space; the logarithmic factors in the computational runtime come from divide-and-conquer and binary searches. This makes our method space efficient (up to a constant). We will discuss this and add practical results on memory usage in the final version. In more detail, the space complexity is $O(s^2 br + nl + ls).$ Here, $O(s^2 br)$ is the cost of the precomputed matrix exponentials, which is shared across all MSA’s. $O(nl)$ is the cost to load one MSA into memory. With a runtime of $n \log n$, pairing takes $O(n)$ space, since it takes $O(n)$ to store the distance between x and all other sequences, and the space complexity equals the cost to store the stack of recursive calls. $O(ls + r)$ is the space of computing the initial site rates. The $O(ls)$ term comes from the count matrix $\text{count}[i,j] =$ the number of times character $j$ occurs at position $i$. Computing site rates given branch lengths takes $O(r)$ space and computing branch lengths given site rates takes $O(l)$ space. In both cases, we only track the best rate for the current site or best branch length for the current site. > The model is still highly parameterized. Can the SiteRM model avoid overfitting with extensive datasets? As described in Section 3.3 of our manuscript, we avoid overfitting by using pseudocounts drawn from the LG model, which serves as a prior. For the results described in the paper, we find that mixing the pseudocounts with the data at a 1:1 ratio works well in practice, but it should be possible to tune the regularization coefficient $\lambda$ using cross-validation. Other options we did not explore that would benefit from large datasets would be some form of weight-sharing of the rate matrices across different positions. 
> How does the method perform with datasets that have a large number of gaps in MSAs (for example, the sequence has very low homology)? The QMaker "insect" dataset is especially notorious for containing a large number of gaps. We observe a similar performance to FastTree in Supplementary Figure S1. > Can the methods be extended to incorporate other types of biological data, such as DNA/RNA sequence or structural alignment? Indeed, by using compound states one can model the joint evolution of amino acids and structural states. Our software is not limited to amino acid alphabets and can be applied to learn site-specific rate matrices for other kinds of molecular data such as DNA/RNA or structural states. We plan to pursue this in future work. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed rebuttal. After reviewing the responses to my questions, I checked the answers provided and decided to keep my score. This paper is suitable for acceptance.
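The pseudocount regularization discussed in this thread (mixing observed per-site counts with prior pseudocounts at a 1:1 ratio) can be sketched as follows; the function name, the uniform prior, and the 4-letter toy alphabet are our illustrative choices, not the paper's actual LG-derived prior:

```python
import numpy as np

def regularized_frequencies(counts, prior, lam=1.0):
    """Mix observed per-site counts with prior pseudocounts.
    lam=1.0 corresponds to a 1:1 mixing ratio of data and prior mass."""
    counts = np.asarray(counts, dtype=float)
    prior = np.asarray(prior, dtype=float)
    total = counts.sum()
    # Scale the prior so it contributes lam times the observed mass.
    mixed = counts + lam * total * prior / prior.sum()
    return mixed / mixed.sum()

# Toy example: 4-letter alphabet, one alignment column.
obs = [8, 2, 0, 0]                 # observed counts at this site
prior = [0.25, 0.25, 0.25, 0.25]   # uniform prior pseudocounts
freqs = regularized_frequencies(obs, prior, lam=1.0)
# Zero-count letters receive nonzero regularized frequency.
```

The regularization coefficient `lam` plays the role of $\lambda$ mentioned in the rebuttal and could be tuned by cross-validation.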
Summary: The authors devise a fast and accurate phylogenetic inference algorithm based on CherryML. Its scalability allows them to fit flexible models to large protein families, which was prohibitively expensive for previous methods. As an example, they estimate site-specific substitution rates and use these rates to predict the effects of mutations. This method of mutation effect prediction is competitive with some state-of-the-art generative modeling methods. This suggests scalable and accurate phylogenetic inference as a potentially important component of Strengths: The writing is very clear. In particular, the authors describe CherryML very clearly. The authors' method is efficient and accurate (figure 1). Weaknesses: The authors state "The first probabilistic model of protein evolution was proposed by Whelan and Goldman". It would strengthen the paper to better cite the models that Whelan and Goldman were inspired by. I would appreciate the authors comparing to more state-of-the-art mutation effect predictors. Technical Quality: 4 Clarity: 4 Questions for Authors: How does this work relate to the conclusions of Weinstein, Eli N., Alan N. Amin, Jonathan Frazer, and Debora S. Marks. 2022. “Non-Identifiability and the Blessings of Misspecification in Models of Molecular Fitness and Phylogeny.” Advances in Neural Information Processing Systems, December.? How do you parameterize the substitution matrix? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Please see our response below: > The authors state "The first probabilistic model of protein evolution was proposed by Whelan and Goldman". It would strengthen the paper to better cite the models that Whelan and Goldman were inspired by. Thank you. We will cite the earlier models in the final version, in particular Dayhoff's and JTT's. > I would appreciate the author comparing to more state-of-the-art mutation effect predictors. In Supplementary Table S1, we show results including all variant effect predictors from the ProteinGym paper. > How does this work relate to the conclusions of Weinstein, Eli N., Alan N. Amin, Jonathan Frazer, and Debora S. Marks. 2022. “Non-Identifiability and the Blessings of Misspecification in Models of Molecular Fitness and Phylogeny.” Advances in Neural Information Processing Systems, December.? While we agree with the theoretical results presented in the above paper, our work challenges the assumption in their theorems that there exists a unique, "global" fitness function f. Instead, our intuition is that the fitness function is contextual, i.e. it is different in different parts of the evolutionary tree. For example, in some subtree, a positively charged amino acid may be required at position $i$, while in another subtree a negatively charged one may be required. This may be because of different contexts the protein is evolving in, due to e.g. protein-protein interactions. Our approach to variant effect prediction captures this intuition by using the "local" fitness function $P(y | x, t=1)$ at each point in the tree. We do not think that the theoretical results in the referenced paper necessarily explain our strong performance on variant effect prediction. Instead, we believe that it is the use of "local" fitness functions that is leading to our improved performance. There is certainly exciting theoretical and empirical work to be done in this direction. 
> How do you parameterize the substitution matrix? We use the parameterization from the CherryML paper, namely the off-diagonal elements of $Q$ are given by $\text{diag}(1/\sqrt{\pi}) \cdot S \cdot \text{diag}(\sqrt{\pi})$ where $\pi = \text{softmax}(\theta)$ for an unconstrained vector $\theta$ (here $\pi$ will be the stationary distribution of $Q$), and $S$ is a symmetric matrix given by $S = \text{SoftPlus}(\Theta + \Theta^T)$, where $\Theta$ is an unconstrained upper triangular matrix. This unconstrained parameterization allows CherryML to use unconstrained first-order optimizers to quickly estimate $Q$, as implemented by libraries such as PyTorch and Tensorflow. --- Rebuttal Comment 1.1: Title: response Comment: It seems I maybe didn't understand the parameterization of Q. I thought there was a single substitution matrix learned for each site that was constant across the tree, or "local". Is this not the case? If not, could you point me to where in the paper you describe learning a substitution matrix that changes across the tree? --- Reply to Comment 1.1.1: Comment: We apologize for the confusing statement. We are indeed estimating a single rate matrix $Q_i$ for each site $i$ and it is assumed to be constant along the tree. By "local" fitness function, we meant that the distribution $P(y | x, t)$ is conditioned on $x$; i.e., we are explicitly modeling the probability of transitioning from a given sequence $x$.
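A NumPy sketch of the parameterization described in this reply (CherryML implements it with PyTorch/TensorFlow optimizers; variable names here are ours). It builds $Q$ from unconstrained $\theta$ and $\Theta$, and the result is a valid reversible rate matrix with stationary distribution $\pi$:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def build_rate_matrix(theta, Theta):
    """Reversible rate matrix Q from unconstrained parameters.

    theta: length-s vector -> pi = softmax(theta), the stationary distribution
    Theta: s x s matrix; only its strict upper triangle is used
    """
    pi = np.exp(theta - theta.max())
    pi /= pi.sum()                       # softmax
    U = np.triu(Theta, k=1)
    S = softplus(U + U.T)                # symmetric exchangeabilities
    Q = np.diag(1.0 / np.sqrt(pi)) @ S @ np.diag(np.sqrt(pi))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))  # each row sums to zero
    return Q, pi

rng = np.random.default_rng(0)
s = 4  # toy alphabet size; 20 for amino acids
Q, pi = build_rate_matrix(rng.normal(size=s), rng.normal(size=(s, s)))
# Detailed balance holds: pi_i * Q_ij == pi_j * Q_ji for i != j.
```

Since $\theta$ and $\Theta$ are unconstrained, any first-order optimizer can be applied directly to them while $Q$ remains a valid reversible generator by construction.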
Summary: This paper introduces a new method for estimating amino acid substitution rate matrices from multiple sequence alignments, speeding up computation by orders of magnitude. The method, called SiteRM, outperforms traditional methods and large protein language models in variant effect prediction, showing its speed and accuracy in evolutionary biology. Strengths: * the proposed method to calculate amino acid substitution rate matrices is designed to be computationally efficient, with a near-linear runtime while maintaining comparable performance. This efficiency enables the analysis of extremely large MSAs, making it suitable for high-throughput applications. * SiteRM has shown superior performance in variant effect prediction compared to large protein language models that incorporate complex residue-residue interactions, which can be attributed to conceptual advances in the probabilistic treatment of evolutionary data. * by estimating site-specific rate matrices for each column of a multiple sequence alignment, SiteRM captures the evolutionary dynamics at a finer resolution, which allows for a more accurate assessment of the impact of variants on protein function. * SiteRM can deal with large datasets with millions of sequences, which is particularly useful for handling the vast amount of data generated in clinical and deep mutational scanning studies, where comprehensive variant effect prediction is crucial. Weaknesses: * while the paper demonstrates the effectiveness of SiteRM in variant effect prediction, further exploration of its applicability to other evolutionary biology tasks or datasets could further understand its capabilities and limitations. * there lacks more detailed information on the benchmarking process, including datasets used, and potential biases in the evaluation, which could improve the transparency and reproducibility of the results. 
Technical Quality: 3 Clarity: 2 Questions for Authors: * it would be better to provide a more vivid and understandable explanation of the algorithm, such as diagrams or flowcharts of the pipeline. * although the provided method improves the end-to-end runtime of the process, its performance drops a little compared with the original method. Could you please list some possible reasons for the performance decrease, and some possible solutions? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: * the performance of the method may vary across different protein families or evolutionary contexts. It would be better to assess the generalizability of the approach to diverse datasets and evolutionary scenarios. * the accuracy of the estimated rate matrices and predictions might depend on the quality of the input multiple sequence alignments. Are there any ways to address potential biases or errors in the MSAs to improve the robustness? * the benchmark used to evaluate the performance may have limitations or biases. It would be better to compare the performance on other benchmarks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We respond below: > while the paper demonstrates the effectiveness of SiteRM in variant effect prediction, further exploration of its applicability to other evolutionary biology tasks or datasets could further understand its capabilities and limitations. We agree and plan to explore other applications of SiteRM such as improving phylogenetic tree inference in future work. Software such as IQTree and its partition model puts this application within reach. > there lacks more detailed information on the benchmarking process, including datasets used, and potential biases in the evaluation, which could improve the transparency and reproducibility of the results. We focussed on standard benchmarks from prior work (CherryML and ProteinGym) with the hope that it maximizes transparency and reproducibility. We also provided code with detailed instructions to reproduce all our results. Nonetheless, we agree that our work is not self-contained and more details on the benchmarks could be included. We plan to incorporate these more detailed descriptions of the benchmarks into the final version. > it would be better to provide a more vivid and more understandable method to explain the provided algorithm, such as drawing some diagrams or flowcharts of the pipeline. This is a great suggestion. The closest we currently have to this is the runtime analysis in Appendix A.4.2. We will include a more graphical depiction of the algorithm in the final version. > although the provided method can improve the end-to-end runtime of the process, its performance drops a little comparing with the original method. Could you please list some possible reasons to explain the performance decrease, and some possible solutions. FastCherries shows a small asymptotic bias due to the use of Hamming Distance (HD) in the pairing step. 
We explored other alternatives during the project which we decided not to include in the paper, such as (1) pairing based on maximizing the composite likelihood of the pairing, (2) pairing based on minimizing the MLE distance between pairs, and (3) even a random pairer. The random pairer actually worked quite well for the WAG model of protein evolution on some benchmarks, showing asymptotic consistency with a relative statistical efficiency of ~1/8th, but worked poorly for the LG model due to the challenges of estimating site rates. We found that approaches (1) and (2) can indeed provide more accurate estimates in some cases. Since pairing based on HD worked very well already, we decided to make it the focus of the paper. We plan to explore other pairing methods in future work. It would be exciting to find a variant of FastCherries that retains the near-linear runtime while showing an even smaller error. > the performance of the method may vary across different protein families or evolutionary contexts. It would be better to assess the generalizability of the approach to diverse datasets and evolutionary scenarios. Our benchmark on the QMaker datasets (Supplementary Figure S1) shows that our method performs well on datasets from diverse parts of life. We agree that more in-depth error analysis may provide further insights and improvements to the method, but we leave this for future research. > the accuracy of the estimated rate matrices and predictions might depend on the quality of the input multiple sequence alignments. Are there any ways to address potential biases or errors in the MSAs to improve the robustness? This is an important question which pertains to much work done in statistical phylogenetics. 
Although we did not explore it in our work, the speed of our method makes it possible (unlike prior work) to obtain bootstrap confidence intervals for the rate matrix estimation, which should enable users to understand the extent to which the estimates are stable to e.g. subsampling the MSA (whether rows or columns) or changing the MSA building algorithm. > the benchmark used to evaluate the performance may have limitations or biases. It would be better to compare the performance on other benchmarks. As mentioned above, the QMaker benchmark provides evidence in this direction showing generalization to diverse domains of life.
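The Hamming-distance pairing step discussed in the runtime answer above can be sketched in a few lines (a toy illustration only, not the FastCherries implementation: FastCherries is near-linear time, whereas the greedy all-pairs pairing below is quadratic and exists purely to show how raw Hamming distances can drive a cherry-pairing step; the function names are our own):

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of mismatching MSA columns between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def greedy_cherry_pairing(seqs):
    """Greedily pair sequences by smallest Hamming distance.

    Toy stand-in for a divergence-based pairing step: repeatedly take
    the closest remaining pair and treat it as a 'cherry'.
    """
    dists = sorted(
        (hamming(seqs[i], seqs[j]), i, j)
        for i, j in combinations(range(len(seqs)), 2)
    )
    used, cherries = set(), []
    for _, i, j in dists:
        if i not in used and j not in used:
            cherries.append((i, j))
            used.update((i, j))
    return cherries

msa = ["ACDEF", "ACDEY", "GHIKL", "GHIKV"]
print(greedy_cherry_pairing(msa))  # pairs the two close sequences in each group
```

The asymptotic bias mentioned in the rebuttal comes from Hamming distance undercounting multiple substitutions at the same site, not from the pairing strategy itself.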
Rebuttal 1: Rebuttal: We thank all the reviewers for taking the time to carefully read our manuscript and provide thoughtful feedback. Different reviewers have asked different interesting questions, to which we have replied individually; we will also clarify them in the final version of our paper. The only major criticism is from reviewer #4, who states that our contributions are incremental over the original CherryML. As we explain in our rebuttal to the reviewer, we respectfully disagree with this view. In fact, the other reviewers have pointed out that our contributions are significant. For clarity, we highlight our contributions below: * Our new FastCherries algorithm significantly speeds up the tree estimation step of the CherryML framework, giving a near-linear time algorithm for end-to-end rate matrix estimation from MSAs. As we have pointed out in our response to reviewer #3 – whom we thank for bringing up space complexity – our method further has linear (and thus optimal up to constants) space complexity. While there is a small loss of accuracy, the whole purpose of developing FastCherries is to avoid doing expensive computations while assuring little reduction in accuracy. It is a worthwhile tradeoff that can benefit applications requiring scalability, such as the one considered in our manuscript. * In applications, as our results clearly show (Figure 1 and Supplementary Figure S1 and Supplementary Table S1) and the other reviewers have highlighted, the performance of CherryML with FastCherries is comparable to that of CherryML with FastTree, while being one to two orders of magnitude faster. These results include diverse MSAs (Supplementary Figure S1 uses the QMaker MSAs which come from diverse areas of life). Furthermore, the performance of SiteRM over LG in variant effect prediction is far from incremental, as can be seen in Supplementary Table S1. 
We agree that there is much interesting follow-up work, such as applying our method to improve phylogenetic tree inference. We plan to pursue this research in future work. We thank all reviewers again for their time and dedication.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning the Infinitesimal Generator of Stochastic Diffusion Processes
Accept (poster)
Summary: This paper proposes a relevant and sound approach for learning self-adjoint SDE generators via operator learning techniques. The paper includes a compactification, a novel prior knowledge inclusion, and first-of-its-kind statistical learning guarantees that extend the known ones from discrete Markov processes. Strengths: 1. The paper is easy to read, with suitable mediation throughout. 2. Inserting known diffusion effects into the Dirichlet forms and using the resolvent, provides an elegant way of including prior knowledge and well-posed estimation. 3. These are the most complete statistical learning guarantees for spectra of self-adjoint generators (albeit reliant on partial knowledge). Weaknesses: 1. Unclear extensions to entirely data-driven regimes (no partial knowledge) and requiring sampling from an invariant distribution. 2. Experiments only consider toy examples that validate theory but do not showcase the practicality of the approach for more general cases. 3. I am not sure that contribution **4)** is well supported by the current presentation, primarily due to hard-to-parse figures. A revised comparison and legends with sharper plots could make the results much easier to understand. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. *Are Dirichlet forms commonly deduced from the diffusion part of the SDE?* It would help assess the ease of knowing these forms beforehand for readers unfamiliar with the notion. 2. *Is it reasonable to think you could deal with a bias of the Dirichlet form using imperfect diffusion knowledge?* 3. *Could you highlight what practical gains/losses are introduced w.r.t. discrete-time setting?* I would guess you need less data due to diffusion priors and not requiring as small of a discretization. On the other hand, IG regression complexity seems higher for high-dimensional systems. 4. 
*Can a sample complexity gain be recognized using prior knowledge in this form for your theoretical results?* Could the performance gain w.r.t. TO learning be recognized using the derived bounds? 5. *How arbitrary is the choice of $\mu$?* 6. *How does the time-sampled data come into play, and does the sampling rate of the data influence any aspect of the approach?* Does sampling the invariant distribution + Dirichlet form allow you to avoid time-derivative or "1-step" dynamics observations (e.g., compared to [A])? Based on the current writing, it is not immediately clear to me (but I might have missed something). 7. *How do you compare (using full knowledge) to classical numerical methods using known drift and diffusion? Would your sample efficiency be better than classical FEM for a given precision?* You mention how, for known operators (via drift and diffusion knowledge), there would be no spuriousness in estimating eigenpairs. This is perhaps of separate interest. [A] - Meng, Y., Zhou, R., Ornik, M., & Liu, J. (2024). Koopman-Based Learning of Infinitesimal Generators without Operator Logarithm. http://arxiv.org/abs/2403.15688 Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: 1. *Possibly strong prior knowledge requirements.* 2. *Limited to self-adjoint generators.* 3. *Experiments are on 1D toy examples.* Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions. ## Weaknesses: 1. Thank you for emphasizing the realistic setting of imperfect partial knowledge. This motivated us to __show theoretically and empirically__ that indeed our IG learning method with estimated Dirichlet form __is guaranteed to work__. Concerning the sampling, our assumption to have i.i.d. data from the invariant distribution $\pi$ can be relaxed as we reply in Q6. 2. Concerning the experiments, please see our general reply. 3. We agree that more discussion of Fig. 1 is needed to enhance the understanding of the experiment section. If accepted, an extra page will allow us to improve it, as well as to include additional content from the attached PDF and this rebuttal in the revision. ## Questions: - Answers to both __Q1__ and __Q2__ are positive. Please see our general reply for the discussion. - __Q3:__ TO methods apply only to equally spaced data and the sampling frequency $1/\Delta t$ must be high enough to distinguish all relevant time-scales. Otherwise, since TO eigenvalues are $e^{\lambda_i \Delta t}$, small spectral gaps complicate learning (see Thm. 3 [22]). Conversely, our IG method, which uses gradient information, is time-scale independent, handles irregularly spaced measurements, and does not rely on time discretizations (see reply to Q6). While it results in better generalization, as shown in Fig. 1 c)-d)-e) where our IG estimator captures ground truth significantly better than TO for the same sample size, this generalization across all time scales incurs quadratic computational complexity w.r.t. state dimension and not statistical accuracy (see also reply to Q7). Lastly, with our additional discussion on imperfect knowledge, our IG method can be safely applied in a fully data-driven regime. - __Q4:__ For i.i.d. 
data from $\pi$, at this point, it is hard to answer this question, while for trajectory data, important improvement can be expected, see reply to Q6. While the standard assumptions (SD) and (KE) are the same for TO and IG error bounds, regularity assumptions in these two settings are not easily comparable. This motivates the development of non-standard regularity assumptions for TO and IG learning that could expose the benefit of the prior knowledge. This is an interesting problem in its own right, and it will be the subject of our future work. - __Q5:__ As illustrated in Fig. 3 in the attached pdf, estimators are quite robust w.r.t. the value of the shift parameter $\mu$. Theoretically, it is important that $\mu$ is not too small. - __Q6:__ Thank you for raising this question; we will emphasize these important aspects of our method in the revision. - Recalling the risk functional in eq. (8), we see that the “label” of the model $\chi_{\mu}(x) \approx G^*\phi(x)$ is the action of the resolvent. Since this “label” is not computable, we “__fight fire__ (resolvent) __with fire__ (generator)”, i.e. we use the energy norm of eq. (9) to rewrite the regularized problem (8) as in line 222. Crucially, this allows us to obtain estimators via the energy covariance $W_{\mu}$ in (14) that __completely captures the infinitesimal nature__ of the learning problem without needing time-lagged observations. This contrasts with TO methods, where choosing the time-lag $\Delta t$ is the major bottleneck in real applications [18,42]. - Thank you for this reference (Meng et al. 2024), which we will cite in the revision. Note that this method is exactly the Galerkin projection of the Yoshida approximation of IG on a finite-dimensional RKHS (dictionary of functions). It essentially corresponds to solving (8) where $\phi$ is finite-dimensional, the expectation is used instead of the energy, and $\chi_{\mu}$ is replaced with $\mu^2\chi_{\mu}(x)-\mu\phi(x)$.
To do this, the authors estimate the resolvent via the equation in line 167, which requires approximating an operator integral via __equally spaced data with high sampling frequency (small $\Delta t$)__, essentially suffering from the same bottleneck as TO methods. Since in (Meng et al. 2024) the provided learning theory and empirical evaluation are limited, the guarantees and limitations of this method are not fully clear to us. - In practical applications, to obtain data from an invariant distribution $\pi$, one uses the trajectory data after some burn-in time needed to ensure that the ergodic mean approximates $\pi$ well. Then, the problem is reduced to studying only the dependence as is done e.g. in [21] using standard tools of $\beta$-mixing and the method of blocks. This allows one to obtain non-parametric learning bounds for TO methods, c.f. [22], where the effective sample size suffers from the multiplicative effect of the time-lag to achieve approximate independence. Recalling Q3, TO methods “waste” a lot of data, negatively impacting statistical accuracy (we expect similar limitations for the method of (Meng et al. 2024)). Contrary to this, our method can be applied to data with larger time-lags (even irregularly spaced) so that the effective sample size is close to the true one. - __Q7:__ Compared to FEM, the key difference lies in the approximation error $ |\lambda_k - \widehat{\lambda}_k| \leq c h^{2p} |\lambda_k|$, where $p$ is the polynomial degree used to construct finite elements, $h$ is the mesh size, and $c$ depends on the eigenfunctions' smoothness. As the number of mesh elements grows exponentially with $d$, i.e., $\sim h^{-d}$, reducing $h$ is the major bottleneck, mitigated by computationally demanding adaptive higher-order methods. On the other hand, our IG method, which requires less or no knowledge, has a quadratic impact of $d$ only on the computational complexity.
Indeed, sample complexity depends on the effective dimension of the equilibrium distribution on the domain that can be much lower than $d$. --- Rebuttal 2: Title: Rebuttal Reply for Submission18874 by Reviewer D2P3 Comment: I thank the authors for the extensive rebuttal in the general reply and comprehensively addressing my review comments. Also, it is great to see the authors put in the effort to deliver adjusted theory as well as run new experiments. Many important aspects of the paper got clearer: limits of FEM and TO approaches, fully-data driven regime, energy functional rationale and shift parameter. Accordingly, I have increased my overall score. I expect this rebuttal content to be incorporated as part of a final submission. --- Rebuttal Comment 2.1: Title: Acknowledgement to the reviewer Comment: We would like once more to sincerely thank the reviewer for their deep questions, which inspired us to make these additional steps and improve our work. We are happy that our rebuttal is appreciated and we commit to incorporate it in the revised manuscript.
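The FEM scaling argument from the Q7 answer above can be made concrete with a few lines of arithmetic (a sketch under the stated bound only: to reach a target relative eigenvalue error $\varepsilon \approx c\,h^{2p}$ one needs mesh size $h \approx (\varepsilon/c)^{1/(2p)}$, hence $\sim h^{-d}$ elements; the constants $c=1$ and $p=1$ below are illustrative assumptions, not values from the paper):

```python
def fem_elements(eps: float, d: int, p: int = 1, c: float = 1.0) -> float:
    """Mesh elements ~ h^{-d} needed so that the error bound c*h^(2p) <= eps."""
    h = (eps / c) ** (1.0 / (2 * p))  # largest admissible mesh size
    return h ** (-d)

eps = 1e-4  # target relative eigenvalue error
for d in (1, 2, 5, 10):
    # with p = 1 this is 10^(2d): the mesh grows exponentially in d
    print(f"d={d:2d}  elements ~ {fem_elements(eps, d):.1e}")
```

This is exactly the bottleneck the rebuttal points to: higher-order elements (larger $p$) soften but do not remove the exponential dependence on $d$, whereas the IG estimator's cost is only quadratic in $d$.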
Summary: The paper considers a time-homogeneous Stochastic Differential Equation (SDE) with known diffusion part and known or unknown drift. The problem is to find properties of this equation from known data, in particular, to find a (low-rank) representation of its infinitesimal generator (IG). For this purpose, the resolvent of this operator is considered and the reduced-rank estimator in reproducing kernel Hilbert spaces (RKHS) together with the energy-based risk metric are used. Accuracy estimates for the found approximation to IG and time-complexity are given. Several model numerical experiments are conducted for proof-of-concept. Strengths: - Detailed appendix with necessary reference material - Development of SDE theory with exact estimates on the result obtained Weaknesses: - The paper uses quite complex concepts and may be difficult to understand in a 9-page format. - There are not enough practical examples of the application of these results to machine learning Minor The x-axis in Figure 1 (a) is not labeled L120 comma is not needed L124 space after dot is missing L206: colon missing at the end of the line L215 A new line is redundant (`this work we focus on the case when`) L275 space after dot is missing L915 "anan" -> "and" Technical Quality: 3 Clarity: 3 Questions for Authors: - Can your method be extended to the case when neither drift nor diffusion coefficients are known? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are described in the paper. The paper is mostly theoretical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions. ## Weaknesses: ### Major: - Indeed, the topic of learning the IG of a stochastic process with kernel-based methods, and, in particular, the development of nonparametric learning rates, involves several complex concepts. All the more so, since the paper is the first one to formalize a consistent way to learn the IG from data. We tried our best to present the sophisticated theory and mitigate the complexities, believing that this work paves the path to efficient and reliable algorithms for important applications in ML (see also general reply). - In addition to our general reply, we would like to emphasise here that, from molecular dynamics to weather models, the ability to reliably learn eigenfunctions of differential operators that generate the dynamics is of paramount importance for building trustworthy and physics-informed AI. Hence, we strongly believe this work is of __high relevance__, if not for the most general ML community, definitely __for the AI-for-Science community__. In that respect, we would like to further stress that our method is the first of its kind in machine learning to have error bounds that theoretically demonstrate the superiority of an ML approach for spectral estimation of differential operators over classical numerical methods like FEM in high-dimensional settings. While empirical evidence supporting this has been well documented in practice, particularly in fields such as molecular dynamics [42], to the best of our knowledge, this had never been theoretically proven until now. Following the reviewer's suggestions, in the revised manuscript we will better emphasize possible applications and further improve the readability and accessibility of our results. ### Minor: - Thank you for spotting the typos; we will correct them in the revision.
## Questions: As we elaborate in the general reply, indeed we can show, both theoretically and empirically (Figures 1 and 2 in the attached pdf), that our method works in a fully data-driven regime. While empirically this might not come as a big surprise, we believe that __adding a theoretical proof__ motivated by the comments of reviewers is a very nice __additional contribution__. If the reviewer wishes to have more details on this aspect, we are happy to elaborate. --- Rebuttal Comment 1.1: Comment: I thank the authors for their explanations. After reading the global response and the discussion with other reviewers, I believe that in general my concerns were addressed. I raise my score to 7 "Accept". --- Reply to Comment 1.1.1: Title: Acknowledgement to the reviewer Comment: We would like once more to thank the reviewer for their comments, which inspired us to make additional steps and improve our work. We are happy that our rebuttal was helpful and we commit to incorporate it in the revised manuscript.
Summary: In this paper, the authors consider the problem of learning the infinitesimal generator of a Stochastic Diffusion Process (SDP). Compared to existing approaches such as [1] they tackle the unbounded nature of the generator by introducing a novel statistical framework which is based on the Dirichlet form associated with the SDP. In this framework, the authors estimate the resolvent of the generator which can be approximated with finite-rank operators. They consider a regularized and rank-truncated version of the regression loss. Theorem 1 provides a way to compute the eigenvalues and eigenvectors of the estimated operator. Spectral learning bounds are derived in Theorem 2. The theoretical results of the paper are completed with three examples: learning the overdamped Langevin generator for a one-dimensional four-well potential, the overdamped Langevin generator for a Muller Brown potential and finally for a Cox-Ingersoll-Ross process (CIR process). [1] Cabannes et al. (2023) -- The Galerkin method beats Graph-Based Approaches for Spectral Algorithms Strengths: * Even though the paper is mathematically heavy and contains a lot of notation, I think it is well-written. I appreciate that (almost) all the assumptions needed before stating Theorem 2 are clearly laid out and explained. I also appreciate the rigour shown in the paper. * The obtained results regarding the spectral bounds are interesting and provide a strong theoretical grounding for the method. * From a methodological point of view the time complexity can be reduced compared to [1,2,3] if the rank is low. I think this is one of the strengths of the method, although I would have appreciated more details on the choice of the hyperparameters and their interactions (see below). [1] Cabannes et al. (2023) -- The Galerkin method beats Graph-Based Approaches for Spectral Algorithms [2] Hou et al. (2023) -- Sparse learning of dynamical systems in RKHS: An operator-theoretic approach [3] Pillaud-Vivien et al.
(2023) -- Kernelized Diffusion Maps Weaknesses: Before expressing my main concerns regarding this paper, I want to emphasize that I'm not an expert in this domain and therefore my understanding of the main competitors and methods used in the paper is lacking. * My main concern is regarding the experiments. I know this is a theoretical paper but a new statistical framework and a new learning procedure should be clearly validated against competing methods like [1,2,3]. Even though some qualitative conclusions are drawn I would have liked to see a more extensive study (for different choices of hyperparameters for both the target Langevin diffusion and the method introduced by the authors). * Indeed, there are little details in the paper about how to choose the crucial parameters of the algorithm like $\mu$, $r$ and $\gamma$. Establishing the robustness of the procedure regarding these hyperparameters seem crucial to validate the methodology. * As mentioned before, even though I appreciate the rigour of the paper it is notation heavy. Having to constantly rely on the (massive) table of page 13 to remember the notation hindered my understanding of the paper. Is there a way to either reduce the notational load or to incorporate a lightweight version of Table 2 in the main paper? This would greatly help the reader. * All assumptions are commented except (KE). What are the main limitations of this assumption? I am not familiar with the domain so a light explanation in the main text would also be appreciated. * Is the assumption of l.188 that there exists a Dirichlet operator realistic? It is true for Langevin operator and CIR (which already covers quite a lot of ground) but apart from these models? Does this impose anything on the diffusion? 
Some minor remarks: * l.178 "Importantly, this energy functional can be empirically estimated from data sampled from π, whenever full knowledge, that is drift and diffusion coefficients of the SDE (1), or partial knowledge," -- This sentence is not really clear to me. How can one leverage the drift and diffusion coefficient here? * l.215 typo (spurious new line) [1] Cabannes et al. (2023) -- The Galerkin method beats Graph-Based Approaches for Spectral Algorithms [2] Hou et al. (2023) -- Sparse learning of dynamical systems in RKHS: An operator-theoretic approach [3] Pillaud-Vivien et al. (2023) -- Kernelized Diffusion Maps Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are addressed in Section 7 ("Conclusion") Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions. ## Weaknesses: __Main contributions and broader impact.__ In the general reply we tried our best to provide more details on the positioning of our main contributions in the larger context of solving SDEs, and help the reviewer evaluate the possible impact of the presented results. __Experiments.__ While we would like to stress that we originally provided quantitative comparison to the cited methods ([1] is in fact just a special case of [2], while [3] is not applicable to Langevin dynamics) in Fig. 1 a) which implies the failure of this method to capture the true dynamics, we follow the reviewer’s suggestion and provide additional experiments that make this point clearer for the reader. __Choice of hyper-parameters.__ As suggested, we provide the empirical evaluation of the robustness of hyper-parameter choice, as indicated in the general answer. In summary, the robustness of the choice of kernel’s hyper-parameters and Tikhonov regularization is similar to general kernel methods for (vector-valued) regression. On the other hand, when estimating the IG via its resolvent, the choice of the additional shift hyper-parameter $\mu$, as expected from the theory, has a much smaller impact on the performance, see Fig. 3 of the attached pdf. Finally, we wish to stress that we do go beyond classical validation via prediction by introducing novel quantities, the empirical spectral biases $\widehat s_i$ of eq. (24), which are (theoretically and empirically) good metrics to use for fine-tuning the model. __Notations.__ Thank you for your suggestion. We will prepare a smaller summary table that only introduces necessary notations for presenting the IG learning framework and the method, while omitting extensive notation related to proving the generalization bounds.
We believe this is feasible, since, if the manuscript is accepted, the additional page can be used to present this table in the main body of the paper. __(KE) assumption.__ The kernel embedding (KE) assumption originates from the study of mini-max optimal excess risk bounds for classical kernel regression [12]. Whenever the kernel is bounded, (SD) and (KE) do _not impose any additional constraints_; they are just used to _describe the interplay between the data distribution and the chosen RKHS_. This is a formal way to quantify the impact of the kernel choice on the learning rate. We will additionally clarify this in the revised manuscript. __Dirichlet form.__ To complement the discussion on this issue from the general reply, we would like to stress that many diffusion processes have an infinitesimal generator that can be expressed in the form (IG). In addition to the Langevin and the CIR processes already studied, we can mention: - the Wright-Fisher diffusion (in dimension one), which can be defined in the context of population genetics and can be adapted to model interest rates, see [G], - the geometric Brownian motion which models the price process of a financial asset, - the multi-dimensional Brownian motion (a=0) that corresponds to the heat equation, - the transport processes associated with the advection-diffusion equation, see [H] - the process associated with Poisson's equation in electrostatics, see [I]. __Minor remarks.__ Thank you for pointing out the typo; we will fix it. Concerning the claim in line 178, it relates to the first equality in Equation (7). There, one can see that knowing the drift and diffusion terms, we can evaluate $[L f] (x) $ and approximate the energy as the empirical mean of $ \mu |f (x) |^2 + f(x)[L f] (x) $ via samples $x$ from the invariant distribution. ## Additional references: [A] Fukushima, Masatoshi, Yoichi Oshima, and Masayoshi Takeda. Dirichlet forms and symmetric Markov processes. Vol. 19. Walter de Gruyter, 2011 [B] Jean Jacod.
Discretization of processes. Springer, 2012 [C] Danielle Florens-Zmirou. Approximate discrete-time schemes for statistics of diffusion processes. Statistics: A Journal of Theoretical and Applied Statistics 20.4, 1989 [D] Fan, J., & Zhang, C. (2003). A reexamination of diffusion estimators with applications to financial model validation. Journal of the American Statistical Association, 98(461), 118-134. [E] Yuri Kutoyants. Statistical inference for ergodic diffusion processes. Springer Science & Business Media, 2013. Yuri Kutoyants. Parameter estimation for stochastic processes, 1984. [F] Fabienne Comte, Valentine Genon-Catalot. Nonparametric drift estimation for iid paths of stochastic differential equations. The Annals of Statistics, 48(6), 2020 [G] Martin Grothaus, Max Sauerbrey. Dirichlet form analysis of the Jacobi process. Stochastic Processes and their Applications, 157, 2023. [H] Antoine Lejay, Lionel Lenôtre, Géraldine Pichot. Analytic expressions of the solutions of advection-diffusion problems in one dimension with discontinuous coefficients. SIAM Journal on Applied Mathematics, 79(5), 2019. [I] Sethu Hareesh Kolluru. Preliminary Investigations of a Stochastic Method to solve Electrostatic and Electrodynamic Problems. PhD Thesis, 2008. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough answer and rebuttal. I have increased my score. --- Reply to Comment 1.1.1: Title: Acknowledgement to the reviewer Comment: We would like once more to thank the reviewer for their comments, which inspired us to make additional steps and improve our work. We are happy that our rebuttal was helpful and we commit to incorporate it in the revised manuscript.
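The claim about line 178 above — that the energy can be computed either with full knowledge (drift and diffusion, evaluating $f(x)[Lf](x)$) or with partial knowledge (diffusion only, through the Dirichlet form with $s = b/\sqrt{2}$) — can be checked numerically on a 1D Ornstein–Uhlenbeck process. This is a hedged sketch: the process, the test function $f(x)=x$, and all parameters are our own illustrative choices, not the paper's experiments; both estimators should recover the same Dirichlet energy $\sigma^2/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 2.0, 1.0              # OU process: dX = -theta*X dt + sigma dW
var = sigma**2 / (2 * theta)         # invariant distribution is N(0, var)
x = rng.normal(0.0, np.sqrt(var), 200_000)  # i.i.d. samples from pi

# Full knowledge: f(x) = x gives Lf(x) = -theta*x, so estimate E_pi[f * (-Lf)].
full = np.mean(x * (theta * x))

# Partial knowledge: Dirichlet form E_pi[|s * f'(x)|^2] with s = sigma/sqrt(2), f' = 1.
s = sigma / np.sqrt(2)
partial = np.mean((s * np.ones_like(x)) ** 2)

print(full, partial)  # both are close to sigma^2 / 2 = 0.5
```

Agreement of the two estimates is the integration-by-parts identity behind the Dirichlet form: the second estimator never touches the drift, which is the "partial knowledge" regime of the rebuttal.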
Summary: This paper discusses learning the generator of stochastic diffusion processes in reproducing kernel Hilbert space. In particular, section 2: background on the generator, Dirichlet form, energy, learning in RKHS, empirical risk in the Hilbert-Schmidt norm section 3: introduce an energy-based risk functional for resolvent estimation which leads to spectral estimation section 4: empirical risk minimization with Tikhonov regularization and rank constraints. Strengths: The paper is well-written and has a thorough discussion on the background and the comparison of previous work. It proposes a novel energy risk functional and a reduced-rank estimator with dimension-free learning bounds. It also establishes spectral learning bounds for generator learning. Weaknesses: The weaknesses are mainly in the motivation (please see question 1 below) and the experiment (see questions 3 and 4 below). The authors did a great job in the theoretical comparison with previous work on TO and IG learning, however, the empirical comparison focuses on the spuriousness of eigenvalues and the recovery capability of the metastable states. To sufficiently demonstrate the practical advantage of the proposed method, other comparisons of the learned dynamics, phase transition, and invariant measure should also be included. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. In the paper, 'full knowledge' refers to given drift and diffusion coefficients, and 'partial knowledge' refers to the given diffusion coefficient and energy/Dirichlet operator. In practice, is it necessarily easier to obtain diffusion coefficient and energy/Dirichlet operator than to obtain drift and diffusion coefficients? In a fully data-driven scenario, one can first estimate drift and diffusion coefficients, and then use physics-informed method with 'full knowledge' obtained. Does this make the 'full knowledge' methods cited in the paper more flexible than the 'partial knowledge' method proposed? 2. 
As transfer operator A_t and the infinitesimal generator L satisfy the relation A_t = e^{Lt}, is there a consistency between the learned TO from cited TO methods with the learned IG from the proposed method? 3. What are the kernels used for the 3 examples in experiments? How sensitive is the experimental result to the kernel selection, regularization, and rank parameter? 4. Can the authors provide the learned dynamics of the 3 examples compared to the true dynamics, the invariant measure, and time scales? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions. ## Weaknesses: Thank you for pointing this out. Indeed, clearer motivation beyond the Dirichlet form should help the reader understand our approach better. While some aspects of this are addressed in the general reply, in the following we try to address your concerns in detail. ## Questions: 1. As we have discussed in the general reply, a large class of processes can be written in Dirichlet form and, importantly, for many of them statistical estimation of the Dirichlet coefficient (related to the diffusion coefficient) from data is much easier than recovering the drift term. This means that our method is possible to apply even when the challenging drift term is not estimated. That said, we are, to our knowledge, __the only existing method__ that is able to learn the infinitesimal generator model from __a single trajectory__ that is __theoretically consistent with the true dynamics__. Finally, as we show in the general reply, note that our method is robust under imperfectly estimated partial knowledge. This is not only proven, but also empirically demonstrated in Fig. 1 and 2 of the attached pdf. 2. Indeed, there is the consistency of our IG method and the properly executed TO methods. This is theoretically backed by Thm. 2 in [22] for TO and our Thm. 2 for IG. However, note that TO methods apply only to equally spaced data and the sampling frequency $1/\Delta t$ must be high enough to distinguish all relevant time-scales. Otherwise, since TO eigenvalues are $e^{\lambda_i \Delta t}$, small spectral gaps complicate learning (see Thm. 3 [22]). Conversely, our IG method, which uses gradient information, is time-scale independent, handles irregularly spaced measurements, and does not rely on time discretizations. This results in better generalization, as shown in Fig. 
1 c)-d)-e), where our IG estimator captures true phase transitions between meta-stable states significantly better than TO for the same sample size. Lastly, with our additional discussion on imperfect knowledge, our IG method can be safely applied in a fully data-driven regime, like TO methods. 3. Thank you for spotting this. Indeed, in App. F we did not specify that the RBF Gaussian kernel with specified length-scales was used in all cases. As reported in the general reply, the sensitivity to kernel hyperparameters and Tikhonov regularization is essentially the same as for TO kernel estimators, while the IG estimator is very robust to the new shift hyperparameter; see Fig. 3 of the attached pdf. 4. Thank you for this question. Besides the original comparisons of the time-scales (IG eigenvalues) and meta-stable states (IG eigenfunctions), in the additional experiments we demonstrate our method's consistency with the true dynamics from the perspective of equilibrium distribution recovery and long-term forecasting of conditional cumulative density functions, in Fig. 1 and 2 of the attached pdf, respectively. This might come as a surprise, but after a closer look, the cited works only showcase empirical evaluation of the leading eigenfunctions, which, due to the effect of spuriousness reported in Fig. 1a, requires picking proper eigenpairs from an abundance of wrongly estimated ones. Indeed, if one builds such models without the expert knowledge needed to remove spurious estimation, the obtained dynamics is inconsistent with the true one; see the caption of Fig. 1 of the attached pdf.
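The time-scale point above (TO eigenvalues $e^{\lambda_i \Delta t}$ versus lag-independent IG eigenvalues $\lambda_i$) can be illustrated numerically; the generator eigenvalues below are made up purely for illustration:

```python
import numpy as np

# Made-up generator eigenvalues (units 1/time): lambda_0 = 0 is the
# stationary mode, two slow metastable modes, and one fast mode.
lam = np.array([0.0, -0.010, -0.012, -5.0])

def to_eigs(dt):
    """Transfer-operator eigenvalues at lag dt: e^{lambda_i * dt}."""
    return np.exp(lam * dt)

# Sampling too fast: the two slow TO eigenvalues both cluster at 1 and
# become hard to resolve from noisy estimates.
fast = to_eigs(0.01)
# Sampling too slow: the fast mode collapses to 0 and is lost entirely,
# while the slow modes are now well separated.
slow = to_eigs(100.0)
print(fast.round(6), slow.round(6))
```

Whatever lag one picks, some part of the TO spectrum degenerates, whereas the IG eigenvalues $\lambda_i$ carry all time scales on a common footing.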
Rebuttal 1: Rebuttal: We wish to thank all reviewers for their insightful evaluation of our paper. We appreciate all their comments and remarks, which we will incorporate in our revision. Before addressing each review in detail, we would like to make some general remarks that apply to all of them. ## __Assumptions on the process and availability of partial knowledge.__ The existence of Dirichlet forms for diffusion processes satisfying a stochastic differential equation like (1) is not too restrictive an assumption, as it encompasses many processes beyond Langevin and CIR, such as advection-diffusion, Wright-Fisher diffusion, multidimensional Brownian motion, etc. (see the reply to reviewer __6wyC__ for details and the list of additional references; there, ref. [A] gives an in-depth review of this topic). For the reviewers' convenience, we briefly elaborate. Recalling the form of the IG in eq. (3), the Dirichlet form in eq. (5) for a self-adjoint $L$ exists whenever the positive definite diffusion part $b b^\top$ satisfies uniform ellipticity conditions and the drift term $a$ allows integration by parts, leading to $s(x)=b(x)/\sqrt{2}$. Thus, to obtain the partial knowledge we need, it is enough to estimate the diffusion function. When the sample paths are observed continuously, the diffusion coefficient $b$ can be directly identified from these observations, making it a non-statistical problem. For discrete realizations of the process, __the diffusion coefficient can be estimated non-parametrically using various methods__. These include pathwise estimation by computing the variance of the increments over small intervals [B], kernel-based methods [C], and local polynomial regression [D]. On the other hand, __the estimation of the drift is a much more demanding task__. Different methods are reviewed in [E] and references therein. A more recent approach [F], drawing inspiration from particle systems, consists in constructing estimates from $m$ i.i.d.
paths of the solution process. ## __Imperfect partial knowledge and fully data-driven method.__ Inspired by the reviewers' questions, we show how to make our method fully data-driven. This theoretical discussion will be presented as a remark in the main body of the revision, with detailed proofs in the appendix. In practice, whenever classical modeling from first principles is not feasible, we can first estimate the Dirichlet form and then apply our method. Hence, for a fully data-driven method, we need to analyze the impact of the imperfect knowledge. To this end, assume that we apply our model with some $\tilde s$ that incurs relative estimation error $\tilde \varepsilon>0$ of the form $$ \mathbf{(RE)} \quad -\tilde \varepsilon\, s(x)^\top s(x) \preceq \tilde s(x)^\top \tilde s(x) - s(x)^\top s(x) \preceq \tilde \varepsilon\, s(x)^\top s(x), $$ where $\preceq$ is the standard Loewner ordering. Note that whenever $s$ is constant (e.g. Langevin) or linear (e.g. CIR), the estimation can be done from a single trajectory by estimating the variance of increments, and then __(RE)__ reduces to the standard relative error. So, our algorithm now uses the empirical covariance $\widetilde W$ built from $\tilde s$, and, hence, the only change to our proof technique lies in the analysis of variance in App. E.3. In particular, we only need to adapt Prop. 15 and 16, which is straightforward since (RE) implies $-\tilde\varepsilon \widehat W \preceq \widetilde W-\widehat W \preceq \tilde\varepsilon \widehat W$. Indeed, we have $$ \Vert F(W-\widetilde W)FG \Vert = \Vert F(W\pm\widehat W-\widetilde W)F G \Vert \leq \Vert F(W-\widehat W)FG \Vert + \tilde\varepsilon \Vert F(\widehat W \pm W)F\Vert \Vert G \Vert \leq \Vert F(W-\widehat W)FG \Vert + \tilde\varepsilon (\Vert F(W-\widehat W)F\Vert +1)\Vert G \Vert, $$ for $F= W_{\mu,\gamma}^{-1/2}$ and $G$ being either $W_{\mu,\gamma}^{-1/2}C$ or $I$.
So, in conclusion, __the relative error $\tilde\varepsilon$ of imperfect knowledge simply appears as an additive term in the final guarantees.__ Namely, (23) and (24) hold upon replacing $\varepsilon_n^\star \ln \delta^{-1}$ with $\varepsilon_n^\star \ln \delta^{-1} + \tilde \varepsilon$. ## __Experiments.__ Given the theoretically challenging nature of the problems addressed in this paper, we have intentionally limited the role of experiments to effectively highlight the main theoretical contributions. While we include two additional experiments in the attached pdf and test robustness to hyper-parameter choices, significantly extending the empirical studies provided in the most related works on IG [1, 15, 36], our focus remains consistent. We believe that developing realistic applications requires extensive work and contextual analysis, which would constitute a contribution in its own right. ## __Contributions.__ To conclude, let us clarify our contributions: 1. We propose a __fundamentally new idea to estimate the spectrum of self-adjoint generators of stable Ito SDEs from a single trajectory__. In contrast to all existing works, we exploit the geometry of the process via a novel energy (risk) functional. In a certain sense, we “fight fire (resolvent) with fire (generator)” to 2. derive __a new efficient learning method__ that is able to learn the best approximation of the resolvent of the IG on the RKHS __independently of the time-sampling__ (see reply Q6 to __D2P3__); 3. We derive __the first IG spectral estimation finite sample bounds__ using (imperfect) partial knowledge, which __notably overcome the curse of dimensionality present in classical numerical methods__ (see the reply to __D2P3__, item Q7, for a quantitative discussion); 4. __each important aspect of our learning method__, especially in relation to the most relevant existing works, __is empirically demonstrated__ to complement our theoretical analysis.
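As a concrete illustration of the variance-of-increments (realized quadratic variation) estimation of the diffusion coefficient mentioned in the general reply: a minimal sketch on one simulated OU trajectory with made-up parameters (not the exact estimators of [B]-[D]):

```python
import numpy as np

rng = np.random.default_rng(0)

# One Euler-Maruyama trajectory of an OU process dX = -theta*X dt + b dW
# (theta, b, dt, n are made up for illustration).
theta, b, dt, n = 1.0, 0.7, 1e-3, 200_000
x = np.empty(n + 1)
x[0] = 0.0
dw = rng.normal(scale=np.sqrt(dt), size=n)
for i in range(n):
    x[i + 1] = x[i] - theta * x[i] * dt + b * dw[i]

# Realized quadratic variation: the drift contributes O(dt) per increment
# while the noise contributes O(sqrt(dt)), so the drift is negligible in the
# sum of squared increments and b is recovered without estimating the drift.
b_hat = np.sqrt(np.sum(np.diff(x) ** 2) / (n * dt))
print(b_hat)  # close to the true b = 0.7
```

This is the sense in which the partial knowledge $s(x)=b(x)/\sqrt{2}$ is statistically cheap from a single path, while the drift remains the hard part.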
We once again thank all the reviewers, whose comments inspired the discussion above, which significantly strengthens our paper and better demonstrates its broader impact. Pdf: /pdf/9f268740366fdf06691127000077e62f7478bf5c.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation
Accept (poster)
Summary: This paper proposes OmniTokenizer, a tokenizer that can be used to tokenize both image and video data. To train OmniTokenizer, they devise an architecture that decouples the spatial and temporal axes, offering efficiency. Next, they perform a progressive training scheme that starts from image-only training at a fixed resolution, followed by joint image & video training at multiple resolutions. The resulting VQ tokenizer can be used with an autoregressive modeling objective to perform generation. Optionally, it can also be fine-tuned with a KL loss to be used as a tokenizer for latent diffusion models. Evaluations are performed on several image and video datasets to demonstrate the effectiveness of the tokenizer. Strengths: Having a joint tokenizer for image and video modalities is an important step. From the results, the paper seems to do a decent job at it, and the resulting tokenizers could be useful to the community. The ideas of decoupling the spatial & temporal axes and progressive training are intuitive and seem effective from the provided ablation results. The paper is clearly written. Weaknesses: My main concern with the paper is the lack of comparisons to the relevant baselines. I think MagVITv2 is the closest baseline to your method, but there is no in-depth comparison to it. I can see in Table 4 that MagVITv2 was reported in the NAR setting while yours is in the AR setting. I think it's important to have an apples-to-apples comparison to understand the effectiveness of your method. The same applies to the image generation results in Table 3. For the comparison, I wouldn't limit it to generation results only, but would also include other relevant & important aspects such as video compression and the quality of learned representations. Other important and missing baselines are MAGE and MaskGIT. There are also recent advancements in the literature such as VAR, but I wouldn't penalize the paper for them as they are quite recent.
Still, having a discussion & comparison with them would be useful. Other questions and comments: - How would the trends look at 512x512 resolution? - For the reconstruction results, having other metrics such as PSNR, SSIM, and IS would be helpful. - Seeing the qualitative video results on actual videos would be really helpful. From the provided visuals it's unclear. - Are you planning to release the checkpoints & code for OmniTokenizer? - How do you explain the small performance gap for generation results with diffusion models? Can we say that OmniTokenizer is more effective with an autoregressive Transformer? - It was mentioned that the decoder is symmetric with the encoder. Could you clarify that and provide a full architecture visualization? Could you also clarify the 2D and 3D patch embeddings? - What would be the reconstruction results for diffusion models in Tables 1 & 2? You reported SD1.4 in the appendix. There are some numerical differences though (e.g. your rFID is 0.69 in the main paper and 0.68 in the appendix); what is the reason for this? What are the reconstruction results for other latent diffusion models like SD3? - In the ablation results (Table 7), what would the performance be if you skip the fixed-resolution image training stage and perform only the second stage on images & videos jointly? Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weaknesses for details. In particular, I am wondering about an in-depth comparison against relevant baselines, e.g. MagVITv2, MAGE, and MaskGIT. I am also wondering about qualitative results on videos and some training & experimental clarifications I discussed in the weaknesses. I will update my final score accordingly. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. My main weakness point with the paper is the lack of comparisons to the relevant baselines. I think MagVITv2 is the closest baseline to your method, but there is no in-depth comparison to it. I can see in Table 4 that MagVITv2 was reported in NAR setting while yours is in AR setting. I think it's important to have an apples-to-apples comparison to understand the effectiveness of your method. Same applies to image generation results in Table 3. For the comparison, I wouldn't limit the comparison to only generation results but also other relevant & important aspects such as video compression and the quality of learned representations. =>Answer: Please refer to question 2 of our global rebuttal. 2. Other important and missing baselines are MAGE and MaskGIT. There are also recent advancements in the literature such as VAR but I wouldn't penalize for them as they are quite recent. Still, having a discussion & comparison to them would be useful. =>Answer: Thanks for your suggestion. We will add more discussion on MAGE, MaskGIT, and VAR in the main paper. 3. How the trends would look like for 512x512 resolution? =>Answer: Good point! We evaluate VQGAN and our model at 512x512 resolution, as shown in Table 13 of the rebuttal PDF; both rFID and rFVD get lower, and our method still outperforms VQGAN. | **Method** | **ImageNet-256** | **ImageNet-512** | **UCF-256** | **UCF-512** | |:--------|:--------------:|:--------------:|:--------------:|:--------------:| | VQGAN | 1.49 | 0.82 | 94.49 | 32.56 | | Ours (VQVAE) | 1.11 | 0.72 | 25.97 | 14.09 | 4. For the reconstruction results, having other metrics such as PSNR, SSIM, and IS would be helpful. =>Answer: Thanks for pointing this out. We evaluate the PSNR and SSIM of VQGAN and our method, and show the results in Table 14 (rebuttal PDF). On both image and video benchmarks, we surpass VQGAN.
| **Method** | **ImageNet-PSNR** | **ImageNet-SSIM** | **UCF-PSNR** | **UCF-SSIM** | |:--------|:--------------:|:--------------:|:--------------:|:--------------:| | VQGAN | 23.27 | 0.69 | 26.53 | 0.81 | | Ours (VQVAE) | 24.96 | 0.77 | 28.89 | 0.90 | 5. Seeing the qualitative video results on actual videos would be really helpful. From the provided visuals it's unclear. =>Answer: Great suggestion! Due to the rebuttal policy, we cannot update the supplementary to provide qualitative video results on actual videos, but we have shown more qualitative results in Figures 11 and 12 of our rebuttal PDF. 6. Are you planning to release the checkpoints & code for OmniTokenizer? =>Answer: Yes, we will definitely open-source the checkpoints & code for OmniTokenizer. 7. How do you explain the small performance gap for generation results with diffusion models? Can we say that OmniTokenizer is more effective with an autoregressive Transformer? =>Answer: Good question. For the experiments on diffusion models, we adopt the same hyper-parameters as DiT and Latte, to demonstrate the effects of replacing the SD VAE with our VAE on image/video generation performance. These hyper-parameters are therefore tuned for the original VAE, and we find that after tuning the learning rate for our model, the performance can be boosted, as shown in Table 15 of the rebuttal PDF (due to the time limit, we only conduct experiments on Latte). Therefore, we believe OmniTokenizer is effective with both transformer models and diffusion models. |**Method**|**Learning rate**|**FVD**| |:--------|:--------------:|:--------------:| |Latte| 1e-4 | 478.0 | |Ours-Latte | 1e-4 | 525.6 | |Ours-Latte | 2.5e-4 | 209.2 | 8. It was mentioned that the decoder is symmetric with encoder. Could you clarify that and provide a full architecture visualization? Could you also clarify the 2D and 3D patch embeddings? =>Answer: Thanks for the suggestion. The detailed architecture is illustrated in Figure 13 (rebuttal PDF). 9.
What would be the reconstruction results for diffusion models on Table 1 & 2? You reported SD1.4 in the appendix. There are some numerical differences though (e.g. yours rFID is 0.69 in the main paper and 0.68 in the appendix), what is the reason for this? What are the reconstruction results for other latent diffusion models like SD3? =>Answer: Thanks for pointing this out. The rFID of our VAE model is 0.69; sorry for the confusion. In Table 16 (rebuttal PDF), we compare the reconstruction performance of the SD1.4 VAE and SD3 VAE with our tokenizer on ImageNet, K600, and UCF (Tables 1 & 2). It is worth mentioning that although the SD VAEs are trained on large-scale high-quality data, we still beat them on ImageNet and achieve on-par results on K600 and UCF using only publicly available academic datasets. |**Method**|**ImageNet**|**K600**| **UCF**| |:--------|:--------------:|:--------------:|:--------------:| |SD1.4 VAE | 0.77 | 8.89 | 20.91 | | SD3 VAE | 0.74 | 6.28 | 19.43 | | Ours (VAE) | 0.69 | 13.02 | 23.44 | 10. In the ablation results (Table 7) what would be the performance if you skip the fixed resolution image training stage and perform only the second stage on image & video jointly? =>Answer: Great suggestion. We perform joint training without image pretraining and compare the results in Table 17 (rebuttal PDF). It can be seen that both ImageNet and K600 reconstruction performance drop compared to progressive training, and are even worse than image-only and video-only training. This shows the importance of image pre-training for video reconstruction learning, and also indicates that simple joint training cannot improve the results of image reconstruction and video reconstruction.
|**Method**|**ImageNet**|**K600**| |:--------|--------------|---------| |Ours-Image (Fix)|1.28|-| |Ours-Video (Multi)|-|54.89| |Joint w/o Image (Multi)|2.62|67.77| |Ours-Joint (Multi) |1.11|25.97| --- Rebuttal Comment 1.1: Title: Thanks, rebuttal was helpful Comment: Thank you for the detailed response and the new results you provided, they were helpful. Please incorporate all of them to your paper. I still think having an apples-to-apples comparison with MagVITv2 on visual generation will be useful to the practitioners to understand the capabilities of two approaches. Consider adding this to your paper as well. I'm raising my score to weak accept. --- Reply to Comment 1.1.1: Title: Thanks for your positive comments! Comment: Thanks for your positive comments and we will follow your suggestion to add these results to our paper!
Summary: This paper proposes OmniTokenizer, a transformer-based visual tokenizer model that processes both image and video input and achieves state-of-the-art reconstruction quality. OmniTokenizer's core designs are a decoupled spatial-temporal attention mechanism and a progressive training schedule. Two OmniTokenizers are trained, for autoregressive generative models and diffusion models, respectively. Extensive experiments further show that the new tokenizers effectively improve image and video generation performance, consolidating OmniTokenizer's effectiveness. Strengths: 1. Visual tokenizers are essential components for representation learning and generation, laying the foundation for many cutting-edge techniques, including image/video diffusion models and multimodal large language models. This paper's efforts in improving the tokenization quality and reconstruction performance of visual tokenizers are appreciated. 2. The proposed OmniTokenizer employs image and video data jointly with a properly designed two-stage training paradigm. Ablation studies show that the joint two-stage training strategy effectively improves the reconstruction of both image and video data. 3. Two variants of OmniTokenizer are trained, one with VQ and one with the vanilla VAE's KL regularization. The paper further trains autoregressive and diffusion-based image/video generative models with the OmniTokenizers, demonstrating superior results across several image and video datasets. Weaknesses: 1. Although there are extensive quantitative results on image/video reconstruction and generation, the qualitative comparisons are insufficient. Many more video reconstruction/generation comparisons could be provided in the supplementary materials to better demonstrate the effectiveness of OmniTokenizer. 2. The paper does not elaborate on the intuitions for designing decoupled window attention and causal attention.
Table 8 investigates different architectural design choices, but there are no analyses of why the proposed design leads to better reconstruction quality than the other two variants. 3. Section 3.1.1 of the paper is too sketchy. To make the architecture reproducible and the paper self-contained, at least some concrete description or illustration of the window/causal attention mechanisms should be provided. Similarly, there are not many details about the image/video generative models trained with OmniTokenizers; the concrete model architecture/specifications and training setups could be provided in the appendix. 4. How is the order of VQ and KL training determined? There is no sufficient explanation of why the KL fine-tuning is conducted after the VQ training, and it's also unclear if the two variants of OmniTokenizer use two separate decoders or share the same decoder. Technical Quality: 3 Clarity: 2 Questions for Authors: The paper studies visual tokenizers and proposes an effective architecture with a joint image-video training framework. The thorough quantitative results convincingly demonstrate the advantages of the proposed OmniTokenizers, and further scaling up this framework could lead to more powerful tokenizers that benefit future research. However, the paper lacks several pieces of important information: i) qualitative comparisons (especially for videos), ii) details of the tokenizer architecture and insights on why it works better than other variants, and iii) details of the image/video generative models. The lack of these details/analyses hurts the clarity and reproducibility. I will increase the score if these concerns are properly addressed in the rebuttal. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations and broader impacts are discussed in Sections 4 and 5 of the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Although there are extensive quantitative results on image/video reconstruction and generation, the qualitative comparisons are insufficient. Much more video reconstruction/generation comparisons can be provided in the supplementary materials to better demonstrate the effectiveness of OmniTokenizers. =>Answer: Great suggestion! Due to the rebuttal policy, we cannot update the supplementary, but we have shown more qualitative results in Figures 11 and 12 of our rebuttal PDF. 2. The paper does not elaborate on the intuitions for designing decoupled window attention and causal attention. Table 8 investigates different architectural design choices, but there are no analyses on why the proposed design leads to better reconstruction quality than the other two variants. =>Answer: Please refer to question 3 of our global rebuttal. 3. Section 3.1.1 of the paper is too sketchy. To make the architecture reproducible and the paper self-contained, at least some concrete description or illustration of the window/causal attention mechanisms should be provided. Similarly, there are not many details about the image/video generative models trained with OmniTokenizers— the concrete model architecture/specifications and training setups could be provided in the appendix. =>Answer: Thanks for your suggestion. We illustrate the network architecture in detail in Figure 13 (rebuttal PDF). The image and video transformers both follow a GPT-style architecture, each consisting of 12 transformer layers. The number of heads is 12, the hidden dimension is 768, and no dropout is used. We will add this description to our appendix. 4. How is the order of VQ and KL training determined? There is no sufficient explanation on why the KL fine-tuning is conducted after the VQ training, and it's also unclear if the two variants of OmniTokenizers use two separate decoders or share the same decoder. =>Answer: Great question.
OmniTokenizer-VQVAE and OmniTokenizer-VAE use two separate decoders; sorry for the confusion. We chose to perform VQ training before KL training because going from discrete tokens to continuous embeddings is a process of information gain, and vice versa is a process of information loss. Therefore, the current order makes the learning of the model easier and more stable. As shown in Table 12 of the rebuttal PDF, if KL training is performed before VQ training, the results of the latter degrade. We will update this result in the main paper. | **Training** | **ImageNet-VQVAE** | **ImageNet-VAE** | **K600-VQVAE** | **K600-VAE** | |:--------|:--------------:|:--------------:|:--------------:|:--------------:| |Joint (VQ-KL) | 1.11 | 0.69 | 25.97 | 13.02 | | Joint (KL-VQ) | 2.05 | 0.89 | 33.79 | 12.43 | --- Rebuttal Comment 1.1: Comment: Thank you for providing the new results and more details of the model architecture. The intuitive explanation of the order of VQ and KL training makes sense to me, and is well backed up by the new ablation study. All my questions are resolved, and I will increase my rating to 6. If the paper is accepted, please incorporate the additional qualitative/quantitative results and the detailed architecture explanation into the paper (most of them can go into the appendix with a reference in the main paper). --- Reply to Comment 1.1.1: Title: Thanks for your positive comments! Comment: Thanks for your valuable suggestions and positive feedback. We will incorporate these experimental results and the architecture explanation into the paper.
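The information-gain/loss asymmetry behind the VQ-then-KL ordering discussed above can be seen in a toy vector-quantization sketch (codebook size and dimensions are made up; this is not OmniTokenizer's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.normal(size=(16, 4))   # 16 codes of dimension 4 (made-up sizes)
z = rng.normal(size=(10, 4))          # continuous encoder outputs

# VQ: snap each continuous embedding to its nearest codebook entry.
d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (10, 16) squared distances
idx = d.argmin(1)                      # discrete tokens
z_q = codebook[idx]                    # continuous embeddings of the tokens

# Discrete -> continuous is exact given the codebook, while continuous ->
# discrete discards the within-cell detail z - z_q.
quant_err = np.abs(z - z_q).mean()
print(quant_err > 0)  # True: quantization loses information
```

Starting from the lossy (VQ) latent and fine-tuning toward the richer continuous (KL) latent only adds capacity, which is consistent with the ablation showing the reverse order degrading VQ results.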
Summary: The paper introduces OmniTokenizer, a transformer-based tokenizer designed for both image and video tokenization within a unified framework. The tokenizer employs a spatial-temporal decoupled architecture, using window attention for spatial modeling and causal attention for temporal modeling. The approach leverages a progressive training strategy, starting with image data to develop spatial encoding skills and then extending to video data to handle temporal dynamics. Extensive testing on several datasets demonstrates that OmniTokenizer achieves good performance in reconstruction tasks and enhances the effectiveness of both AR-based and diffusion-based generative models. Strengths: - The progressive training strategy is intuitive, allowing the model to learn incrementally from simpler to more complex tasks. This strategy contributes to the model's performance and makes sense to me. - The KL fine-tuning for converting a VQ tokenizer into a continuous tokenizer seems interesting, and few works have explored that. Weaknesses: - The model is not compared with MagVITv2 thoroughly, which should be a strong baseline in terms of reconstruction and generation. - The single-frame architecture mostly follows ViT-VQGAN and the improvement comes from joint ImageNet/UCF training; when compared with single-frame baselines in Tables 1/3/4, the comparison seems unfair, as the baseline tokenizers are trained on ImageNet only, while the UCF data could also be used as a single-frame data source. - The factorized spatial-temporal attention is not novel, as many previous works have also employed that technique. Technical Quality: 3 Clarity: 2 Questions for Authors: Overall I believe this paper shows some interesting points like progressive training and KL fine-tuning.
However, the unfair comparison with baselines (MagVITv2 and others) prevents me from giving higher ratings, and it is difficult to evaluate the significance of the performance given the current results. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The model does not compare with magvitv2 thoroughly, which should be a strong baseline in terms of reconstruction and generation. =>Answer: Please refer to question 2 of our global rebuttal. 2. The architecture for single frame mostly follows ViT-VQGAN and the improvement comes from joint ImageNet/UCF training, when compared with single frame baselines in Table 1/3/4 the comparison seems unfair as the baseline tokenizers are trained on ImageNet only, but the UCF data can also be used as a single frame data source. =>Answer: Good question. Our goal is to enable image and video tokenization in a unified framework and achieve mutual benefits between them. To achieve this, it is necessary and natural to use both image and video data to train our model. In addition, as shown in Table 17 (rebuttal PDF), simply training on image and video data together cannot lead to performance gains on image and video reconstruction, highlighting the importance of the proposed training strategy. |**Method**|**ImageNet**|**K600**| |:--------|:--------------:|:---------:| |Ours-Image (Fix)|1.28|-| |Ours-Video (Multi)|-|54.89| |Joint w/o Image (Multi)|2.62|67.77| |Ours-Joint (Multi) |1.11|25.97| 3. The factorized spatial-temporal attention is not novel as many previous works also employed that technique. =>Answer: Please refer to question 1 of our global rebuttal. --- Rebuttal 2: Title: Could you check whether your questions are well addressed? Comment: Dear reviewer Nxg3, We would like to thank you again for your effort and valuable suggestions. Could you find time to take a look at our response and check whether your questions have been well addressed? We are very happy to discuss with you and provide further clarification for any new questions. --- Rebuttal Comment 2.1: Comment: I have read the authors' responses and the other reviewers' comments. I appreciate the authors' efforts in providing additional results and choose to raise my rating to weak accept.
--- Reply to Comment 2.1.1: Title: Thanks for raising the rating to weak accept Comment: Dear reviewer Nxg3, thanks for your effort in reviewing our paper and choosing to raise your rating to weak accept!
Summary: The paper introduces OmniTokenizer, a transformer-based image-video tokenizer designed for visual generation tasks. It adopts a spatial-temporal decoupled architecture, integrating window attention for spatial modeling and causal attention for temporal dynamics, allowing it to process both image and video data within a unified framework. A progressive training strategy is proposed, starting with image data to develop spatial encoding and then incorporating video data for learning temporal features across multiple resolutions. Extensive experiments demonstrate OmniTokenizer's state-of-the-art performance in reconstruction tasks on various datasets, outperforming previous methods. Strengths: 1. The proposed tokenizer can enhance the performance of both language-model-based and diffusion-model-based visual generation approaches. 2. The key of this work lies in its potential to provide a versatile (image and video) and efficient tool for translating complex visual data into compact latent representations. 3. This paper is generally well-written and clearly stated. Weaknesses: 1. The novelty is limited. The proposed tokenizer architecture is not new and is basically the same as previous methods (e.g., MagVit). The only difference is further regarding images as 1-frame videos to pretrain the network. I don't think that counts as a sufficient contribution. 2. The spatial-temporal decoupled architecture is a key aspect of OmniTokenizer. A deeper dive into the role and impact of different attention mechanisms on the model's performance could offer more clarity on design choices and potential improvements. 3. While the paper mentions the potential for scaling up the model capacity, it does not provide a detailed analysis of how the model scales with increased data volume or model size. Providing insights into scalability, such as computational complexity, training time, and resource requirements, would be valuable.
Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The novelty is limited. =>Answer: Please refer to question 1 of our global rebuttal. 2. The spatial-temporal decoupled architecture is a key aspect of OmniTokenizer. A deeper dive into the role and impact of different attention mechanisms on the model's performance could offer more clarity on design choices and potential improvements. =>Answer: Please refer to question 3 of our global rebuttal. 3. While the paper mentions the potential for scaling up the model capacity, it does not provide a detailed analysis of how the model scales with increased data volume or model size. Providing insights into scalability, such as computational complexity, training time, and resource requirements, would be valuable. =>Answer: Great suggestion. We leave model scaling as future work, as the literature indicates that transformer models consistently exhibit promising scalability. We will study how the model scales with increased data or model size when more resources are available.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their valuable comments. We are happy the reviewers think the progressive training strategy is **intuitive** [Review pXcq, Reviewer ZuXX] and **effective** [Review hbMY, Reviewer ZuXX]. Below we respond to the common concerns of reviewers. 1. Novelty of this work =>Answer: We would like to emphasize that our primary contribution is not an innovation in model architecture, but a novel progressive training strategy. It offers two significant advantages over existing methods. First of all, progressive training enables image and video tokenization with one model and one weight (casting the image and video inputs to the same codebook), which allows the joint training of image and video generation with one tokenizer. Secondly, compared to image-only and video-only training, the proposed progressive training could lead to better performance on both image and video reconstruction, as can be seen from Table 7 (main paper). This is not easy to achieve, since simply training an image tokenizer like ViTVQGAN on video data will hurt the performance on image reconstruction, as verified in Table 7 (main paper) and Table 17 (rebuttal PDF).

|**Method**|**ImageNet**|**K600**|
|:--------|:--------------:|:---------:|
|Ours-Image (Fix)|1.28|-|
|Ours-Video (Multi)|-|54.89|
|Joint w/o Image (Multi)|2.62|67.77|
|Ours-Joint (Multi)|1.11|25.97|

2. Comparison with MAGVITv2 =>Answer: We strongly agree that MAGVITv2 is a strong baseline for our method. However, since it is not open-source, we can only look for the results in its paper and compare with it as thoroughly as possible. For visual reconstruction, as shown in Table 10 (left) of our rebuttal PDF, our method is better than MAGVITv2 on ImageNet, while on UCF, we are worse than MAGVITv2 but outperform previous SOTAs like TATS significantly. For visual generation, we adopt different generative model architectures, i.e., autoregressive vs.
non-autoregressive transformers, so it is not a fair comparison. We believe that the reconstruction performance can better reflect the capability of different tokenizers.

|**Method**|**ImageNet**|**UCF**|
|:--------|:--------------:|:---------:|
|TATS|-|162|
|MAGVITv2|1.15|9|
|Ours-VQVAE|1.11|42|
|Ours-VAE|0.69|23|

We also compare with MAGVITv2 on video understanding tasks, i.e., action recognition on K600. We follow them to use tokens as the input to the ViViT transformer. It can be seen from Table 10 (right) that our methods achieve on-par performance with MAGVITv2 and outperform MAGVIT by a clear margin.

|**Method**|**K600**|
|:--------|:--------------:|
|3D VQ-VAE|45.67|
|MAGVIT|74.65|
|MAGVITv2|77.93|
|Ours (VQVAE)|77.34|

In addition, it is worth mentioning that our work differs from MAGVITv2 in that they need to train separate models for image and video tokenization, but the proposed progressive training strategy allows us to tokenize image and video inputs with the same model and weight; to the best of our knowledge, we are the first to achieve this. 3. A deeper dive into the attention mechanisms =>Answer: We do ablation studies on attention mechanisms in Table 8 (main paper). To fully analyze their effect, we additionally conduct experiments on both image and video reconstruction using different attention variants. As can be seen in Table 11 (rebuttal PDF), window attention outperforms plain attention for image reconstruction (ImageNet) since it incorporates local modeling. For video reconstruction (K600), using causal attention or not has little influence on the performance, but causal attention is necessary since we train the following image or video generation transformer in an autoregressive manner.
|**Spatial**|**rFID**|
|:--------|:--------------:|
|Plain|1.55|
|Window|1.28|

|**Spatial**|**Temporal**|**rFVD**|
|:--------|:--------------:|:--------------:|
|Window|Plain|26.43|
|Window|Causal|25.97|

Pdf: /pdf/f33e6759bc142397355e71d68262a02bd814570b.pdf
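The role of the causal temporal attention discussed in this rebuttal can be illustrated with a toy mask. This is a sketch under our own assumptions, not the authors' implementation (the function name `causal_mask` is hypothetical): position `t` may attend only to positions `<= t`, which is what lets the tokenizer's outputs feed an autoregressive generation transformer.

```python
# Illustrative sketch (our assumption, not the authors' code): a causal
# attention mask over T temporal positions (frames), where position t
# attends only to positions <= t.

def causal_mask(T):
    # mask[i][j] is True where attention from position i to position j is allowed
    return [[j <= i for j in range(T)] for i in range(T)]

m = causal_mask(4)
print(m[0])  # frame 0 attends only to itself: [True, False, False, False]
print(m[3])  # the last frame attends to every earlier frame and itself
```

By contrast, the "plain" temporal variant compared in the ablation would allow attention in both directions, which (per the rebuttal) changes rFVD little but breaks the autoregressive training setup.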
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees
Accept (poster)
Summary: This paper considers solving FDEs (Functional Differential Equations) using neural networks. The difficulty of solving FDEs compared to PDEs (Partial Differential Equations) lies in the fact that the input space is infinite-dimensional. To address this, the authors employ the cylindrical approximation to reduce the infinite-dimensional space to a finite-dimensional one, which reduces the problem to a PDE that can then be solved with a PINN. Strengths: The paper considers the challenges of utilizing neural networks in infinite-dimensional spaces. The authors' use of the cylindrical approximation as a method to reduce the infinite-dimensional space to a finite-dimensional one is commendable. The results presented in the paper are interesting and well-written. Weaknesses: I did not find the main issue of the paper addressed. My concern is that the theoretical part of this paper is somewhat weak. The theorems primarily present asymptotic results, i.e., they require \(m \to \infty\). To my knowledge, there are some papers considering functional approximation, such as: 1. T. Chen and H. Chen. Approximations of continuous functionals by neural networks with application to dynamic systems. IEEE Transactions on Neural Networks, 4(6):910–918, 1993. 2. Y. Yang and Y. Xiang. Approximation of Functionals by Neural Network without Curse of Dimensionality. 3. L. Song et al. Approximation of smooth functionals using deep ReLU networks. Except for the first paper, the other two papers provide the approximation error of functionals. In my opinion, if you have Cylindrical Approximation rates and PINN approximation rates, the total order of approximation may also be obtainable. Are there any difficulties in obtaining the approximation rate? Technical Quality: 3 Clarity: 3 Questions for Authors: Mentioned in the Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: All right. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
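The cylindrical approximation the review describes, i.e., representing an input function by finitely many basis coefficients so that a functional F[theta] becomes an ordinary function f(a_1, ..., a_m), can be illustrated with a toy numerical sketch. This is our own illustration of the mechanics, not the paper's code; the cosine basis and the helper name `cosine_coeffs` are hypothetical choices.

```python
# Toy sketch (our assumption, not the paper's code) of the coefficient
# extraction behind the cylindrical approximation: project theta onto the
# first m elements of an orthonormal basis on [0, 1].
import math

def cosine_coeffs(theta, m, n_grid=2000):
    """Midpoint-rule estimate of the first m coefficients of theta in the
    orthonormal basis {1, sqrt(2) cos(pi k x)} on [0, 1]."""
    xs = [(i + 0.5) / n_grid for i in range(n_grid)]
    vals = [theta(x) for x in xs]
    coeffs = []
    for k in range(m):
        basis = ([1.0] * n_grid if k == 0 else
                 [math.sqrt(2) * math.cos(math.pi * k * x) for x in xs])
        coeffs.append(sum(v * b for v, b in zip(vals, basis)) / n_grid)
    return coeffs

# theta(x) = cos(pi x) has a single nonzero coefficient, at k = 1:
# a[1] is approximately sqrt(2)/2, while a[0] and a[2] are approximately zero.
a = cosine_coeffs(lambda x: math.cos(math.pi * x), m=3)
```

A PINN would then take the truncated coefficient vector `a` (rather than the function theta itself) as its finite-dimensional input.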
Rebuttal 1: Rebuttal: We appreciate Reviewer 33cu for their insightful comments and recognition of our paper's strength. Thank you for pointing out the related papers; we enjoyed reading them and will include them in the Related Work section. Please note, however, that their focus is functional approximation, not functional derivatives or solving FDEs. > In my opinion, if you have Cylindrical Approximation rates and PINN approximation rates, the total order of approximation may also be obtainable. Are there any difficulties in obtaining the approximation rate? This is a very interesting question, which we have been discussing in our current research. Our response is as follows: - "the total order of approximation may also be obtainable" - Yes. - The total approximation error rate can be derived by combining the cylindrical approximation error (e.g., lines 1140-1145) with the PINN approximation error; specifically, $|F(\theta) - \hat{f}(\boldsymbol{a})| \leq | F(\theta) - f(\boldsymbol{a}) | + | f(\boldsymbol{a}) - \hat{f}(\boldsymbol{a}) |$, where $F(\theta)$ is the target functional with input $\theta$, $f(\boldsymbol{a})$ is its cylindrical approximation, and $\hat{f}(\boldsymbol{a})$ is the prediction of a PINN after training. Here, $| F(\theta) - f(\boldsymbol{a}) |$ represents the cylindrical approximation error we evaluated in our work, and $| f(\boldsymbol{a}) - \hat{f}(\boldsymbol{a}) |$ is the PINN approximation error. In light of the recent advances in the analysis of PINN approximability, at first glance, [ref1] appears to be a promising starting point because of its minimal assumptions on the form of PDEs. - "Are there any difficulties in obtaining the approximation rate?" - Yes.
- The challenge is that it may be difficult to observe theoretical convergence of approximation error in experiments: if we successfully derive the combined cylindrical + PINN approximation error, it may be theoretically interesting but practically less relevant, because the optimization error of PINNs often overshadows the approximation error in experiments, as noted in our manuscript (e.g., App. H.6). Therefore, we conclude that a careful and detailed discussion on this point is required, which warrants a separate paper. [ref1] De Ryck & Mishra. Generic bounds on the approximation error for physics-informed (and) operator learning. --- Rebuttal Comment 1.1: Comment: Of course, there should be a gap between the approximation rate and the experimental results, as there are training errors and sample limitations. However, obtaining the approximation error is still meaningful because it demonstrates the approximation ability of neural networks and provides insights into how to design them. Therefore, I believe the results can be improved. Nonetheless, this paper is a commendable first step in this project, and I maintain my score. --- Reply to Comment 1.1.1: Title: Reply Comment: > However, obtaining the (PINN) approximation error is still meaningful because it demonstrates the approximation ability of neural networks and provides insights into how to design them. We fully agree with this point; however, it would require a separate paper, as our focus is on the cylindrical approximation error rather than the PINN approximation error. Thank you for your time and support!
Summary: The power of PINNs is leveraged to solve high-dimensional PDEs which are obtained from FDEs through the cylindrical approximation. FDEs are computationally expensive to learn, and PDEs are better studied in the context of learning. This is a novel work in making FDEs more accessible for computation since they have a wide impact across many technical disciplines. Strengths: 1. This is a novel way to circumvent longstanding numerical issues concerning computing FDEs. The idea is easy to follow, at a high level. 2. Computational complexity and expressive power are significantly improved compared to the current SOTA using cylindrical approximations that are shown to converge. 3. Notation and steps are clearly presented. Weaknesses: The authors presented their weaknesses/limitations in a section. They could try to expand their suites of experiments to the applications outlined in the appendix, e.g., Navier-Stokes modeling. Technical Quality: 4 Clarity: 4 Questions for Authors: I don't have questions Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The limitations are clearly presented and have been addressed, when possible, reasonably. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate Reviewer LXGL for their time and recognition of our paper's strength. > The authors presented their weaknesses/limitations in a section. They could try to expand their suites of experiments to the applications outlined in the appendix e.g. Navier-Stokes modeling, etc. This is indeed our next project, and we are currently conducting the relevant analyses and experiments. We recognize that this work will require a separate paper, likely better suited for physics journals, because the numerical analysis of higher-order FDEs ($r \geq 2$) is particularly important in physics (see Sec. 1 and App. B). Please let us know if you have any further questions.
Summary: In this paper, they used cylindrical approximation to transform the functional differential equation (FDE) into a higher-dimensional PDE in order to solve it using PINN. Strengths: This is a relatively new and intriguing topic in the field of functional differential equations (FDEs), aiming to solve them using PINN as an innovative approach. To this end, specific examples of FDEs such as FTE and BHE were provided, accompanied by a clear mathematical explanation of what cylindrical approximation entails. Weaknesses: It seems that significant improvements are needed in the experimental section. Particularly, Figure 4 and Figure 7 suffer from poor readability, with unclear x-axis and y-axis labels that make it difficult to discern their intended purpose. Moreover, to demonstrate the advantages of the proposed method effectively, it is essential to compare it with existing numerical analysis methods such as CP-ALS. Such comparisons would clarify how the proposed method reduces computational complexity, as mentioned on line 224 of the manuscript, and illustrate how the order varies with respect to $m$ ($m^6$ vs $m^r$) using the experimental results. Therefore, the experimental section appears to be lacking overall. Furthermore, cylindrical approximation has already been studied in the field of numerical analysis. Clarifying what novelty exists in applying this approach to convert FDEs into high-dimensional PDEs and then using the well-established PINN method would be beneficial. If the author claims that using deep learning to solve FDEs is novel, it is crucial to provide clear comparisons with state-of-the-art numerical methods in terms of experimental results, computation time, computational cost, accuracy, and other relevant metrics. This comparison would ultimately demonstrate how the proposed approach stands out. Technical Quality: 3 Clarity: 2 Questions for Authors: * I'm curious about the range of \(\boldsymbol{a}\) in the experiments. 
To train with PINN, \(\boldsymbol{a}\)'s range needs to be fixed beforehand for sampling. How was this managed? * I'm interested in why r is typically "typically 1 or 2," as mentioned in line 229 of the manuscript. A detailed explanation would be helpful. * The abbreviation BHE in the caption of Figure 2 first appears in line 162. It should be mentioned earlier in the text. * Aside from FTE and BHE, I'm curious if there are more complex or challenging applications related to actual physical phenomena. * In Table 1, it seems that the relative error increases as the degree increases. Is this trend correct? I would appreciate a more detailed explanation and analysis. * Figures 11-16 are labeled as extrapolation. Could you clarify what this means? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 1 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate Reviewer 5Xf5 for their time and invaluable comments. We will incorporate all the suggestions into our manuscript. > I'm curious about the range of $\boldsymbol{a}$ in the experiments. It is provided in lines 1456-1458. > Aside from FTE and BHE, I'm curious if there are more complex or challenging applications related to actual physical phenomena. App. B provides an additional introduction to FDEs. For example, the Navier-Stokes-Hopf equation (Eq. (10) in line 805) comprehensively describes the statistical properties of turbulence of realistic 3D fluids, involving complicated differentiation and functional differentiation operations. The Schwinger-Dyson equation (Eq. (13) in line 833) solely provides properties of quantum particles, while solving it is even more challenging because it can involve, for example, third-order functional derivatives. > I'm interested in why r is "typically 1 or 2," as mentioned in line 229 of the manuscript. A detailed explanation would be helpful. Major examples of FDEs include the Hopf functional equation, the Fokker-Planck functional equation, and the functional Hamilton-Jacobi equation (lines 22-30 & Sec. 2). All of them have $r=1$ or $2$. Other less-known FDEs, such as Eq. (16-17) in line 859, also have $r=2$. A notable example with $r=3$ is the Schwinger-Dyson equation (Eq. (13)). To our knowledge, there are no other major significant examples with $r > 2$, although arbitrary toy FDEs can be constructed. > In Table 1, it seems that the relative error increases as the degree increases. Is this trend correct? I would appreciate a more detailed explanation and analysis. As noted in the captions of Tables 1 & 2, these tables are not intended to assess the theoretical convergence of the cylindrical approximation; therefore, a decrease in relative error is not necessarily expected. 
This is because different exact solutions (ground truth functionals) are used for different rows, as is stated in footnote 1 on page 7. - Firstly, these tables serve as a proof of concept for our proposed approach, demonstrating its capability to learn FDEs effectively. - Secondly, the expected decrease of relative error for large $m$ *without* training PINNs is illustrated in Figure 2. - Thirdly, the expected decrease of relative error for large $m$ *with* training PINNs is analyzed in App. H.6, where a cross-degree evaluation is performed. > Figures 11-16 are labeled as extrapolation. Could you clarify what this means? - Firstly, Figures 11-16 show that no collocation points such that $\| \boldsymbol{a} \| \approx 0$ were included in the training sets. - Secondly, Figures 4 & 7 plot the absolute errors of PINN predictions at the collocation points where $\boldsymbol{a} = (0, 0, \dots, 0, a_k, 0, \dots, 0)$ with $k = 0, 1, 2, 19,$ or $99$. - Therefore, the collocation points such that $a_k \approx 0$ in Figures 4 & 7 were not included in the training sets, but the errors were as small as those in the region where $a_k \not\approx 0$. We refer to this as "extrapolation." - Additionally, we have revised confusing sentences in lines 296-300 and 312-316 for clarity. > cylindrical approximation has already been studied ... Clarifying what novelty exists in applying this approach ... and then using the well-established PINN method would be beneficial. - Theoretical Contribution Compared with Previous Studies on the Cylindrical Approximation: - For details, please see lines 97-101 and, for technical details, lines 1016-1019 in App. C.4.2. - In summary, we established convergence theorems of functionals and FDE solutions under a modified cylindrical approximation tailored for practical use (i.e., the cylindrical approximation of functionals without the "tail term", as mentioned in lines 125-127).
- Empirical Contribution, Especially with the Use of PINNs: - In functional analysis, there are several approaches to reduce infinite-dimensional function spaces to finite spaces to cut computational costs. Researchers have sought efficient and scalable methods. - Among them, we found that the cylindrical approximation, combined with PINNs, can offer significant scalability, which may seem obvious in hindsight. - Previous approaches using the cylindrical approximation have solely focused on tensor decomposition and finite difference methods (lines 90-92), which limit scalability. Thus, their experiments were confined to input functions such as polynomials with degrees no higher than $10$. In contrast, our model can treat degrees up to $m \sim 1000$, demonstrating unprecedented expressivity. > it is essential to compare it with existing numerical analysis methods such as CP-ALS. > it is crucial to provide clear comparisons with state-of-the-art numerical methods. For the runtime comparison with the CP-ALS, please see lines 1523-1526 in App. G (Detailed Experimental Settings: Runtime). For the empirical comparison of computational complexities, note that previous approaches suffer from the severe curse of dimensionality, making it *impossible* to perform experiments with $m \sim 50$ or higher. Besides, the implementations of their complex algorithms are unavailable, further complicating performance comparisons. For the error comparison, see lines 337-338 and App. H.4, where we compare the CP-ALS with our model in solving the advection-reaction equation. In summary, the error of the CP-ALS is $\lesssim 10^{-2}$, while ours is $\sim 10^{-1}$. Please note the following: - 1. We further tuned hyperparameters after submission, resulting in even smaller errors for our model. Please see also lines 1327-1332 in App. F.1 ("To further reduce errors"). - 2.
Table 12 contains typos: RELATIVE ERROR, BEST RELATIVE ERROR, and WORST RELATIVE ERROR should be read as WORST RELATIVE ERROR, RELATIVE ERROR, and BEST RELATIVE ERROR, respectively. --- Rebuttal Comment 1.1: Comment: Thank you for your response. However, as you mentioned, it seems that many of the critical points I considered important are mostly found in the appendix. I believe these key sections should be brought into the main text and revised in future versions. For example, the explanation for point (a), and the purpose behind figures 4 and 7, which are difficult to understand, should be revised so that their meaning and what they intend to show are clearly visible. Additionally, I still have the following questions: - In the comparison with CP-ALS for the advection-reaction equation, does the proposed method have a higher error? If so, should this be applied to higher-dimensional problems (where m is higher than 50)? In this case, is CP-ALS entirely impossible to perform, or does it just take a long time, or is the error too large? If it’s just a matter of time, then shouldn't the error after running it for a sufficient amount of time be compared in Tables 1 and 2? Or is the table meant to show that the proposed method remains applicable even as the degree increases? - Although I now understand most of my concerns through your responses, many of these points are found in the appendix or are missing from the main text. I believe adding more comparisons with existing methods in the experiments would enhance the novelty of this paper. Therefore, while I think the current version of the paper still has some shortcomings that need to be addressed before it can be accepted, I have resolved several of my concerns through your responses, so I will increase my score by 1 point, bringing it to 4. Thank you once again for addressing my questions. --- Reply to Comment 1.1.1: Comment: Thank you very much for your support. 
We are pleased to address your questions and concerns. > I believe these key sections should be brought into the main text and revised in future versions. For example, the explanation for point (a), Thank you for your suggestion. We have now included the range of $\boldsymbol{a}$ in the main text. > the purpose behind figures 4 and 7, which are difficult to understand, should be revised so that their meaning and what they intend to show are clearly visible. - We will replace figures to improve visibility, adjusting the size of characters and spacing. - Our intention: Figures 4 and 7 primarily demonstrate that our proposed approach effectively learns FDEs (FTE and BHE), serving as *proof of concept* (line 262). They are not intended to showcase extrapolation capability, which is a secondary message. We apologize for any confusion. We will clarify this by adding explanations around lines 293 and 320. Thank you for your invaluable feedback, which has enhanced the presentation quality of our paper. > I believe adding more comparisons with existing methods in the experiments would enhance the novelty of this paper. Thank you for the suggestion. Please note that CP-ALS and hierarchical Tucker (HT) algorithms are the only methods based on the cylindrical approximation for numerically solving FDEs, and HT is slower than CP-ALS (line 692); thus, we chose CP-ALS for comparison. Additionally, the number of papers focusing on general-purpose numerical FDE solvers is very limited [79, 91], although several papers address numerical functional approximation (not for solving FDEs). > In the comparison with CP-ALS for the advection-reaction equation, does the proposed method have a higher error? If so, should this be applied to higher-dimensional problems (where m is higher than 50)? In this case, is CP-ALS entirely impossible to perform, or does it just take a long time, or is the error too large? 
If it’s just a matter of time, then shouldn't the error after running it for a sufficient amount of time be compared in Tables 1 and 2? In the comparison with CP-ALS for the advection-reaction equation, our proposed method indeed has a higher error. However, performing experiments with CP-ALS (or similar finite difference or element method-based algorithms) in high dimensions is infeasible due to prohibitively long runtimes. Specifically, let us consider a state-of-the-art fast PDE solver used in [79] (published in 2024), which reports runtime of approximately 0.75 hours per time step on an Intel i9-7980XE workstation for a 20-dimensional problem. If CP-ALS had a similar runtime, it would take approximately $0.75 \times \frac{100^6}{20^6} = 0.75 \times 15625 \approx 11719$ hours (over a year) for only a single time step, due to its $\mathcal{O}(m^6)$ complexity. In contrast, our model can learn FDEs within a few hours, even at dimensions as large as $1000$. In addition to this response, we would appreciate it if the reviewer could reevaluate both our > Theoretical Contribution > Empirical Contribution summarized in our previous response or share any additional thoughts or reasons that are contributing to this (borderline) reject assessment. Understanding this would help us in further improving the work. Sincerely, Authors
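The runtime extrapolation in the rebuttal above is simple enough to check numerically. The sketch below assumes the $\mathcal{O}(m^6)$ CP-ALS scaling and the quoted 0.75 hours per time step at $m = 20$ as the baseline; the helper name `cpals_runtime_hours` is hypothetical.

```python
# Back-of-envelope check (our sketch) of the runtime extrapolation quoted in
# the rebuttal, under the assumption that CP-ALS cost scales as O(m^6) in the
# degree m, with 0.75 hours per time step at m = 20 as the reference point.

def cpals_runtime_hours(m, base_m=20, base_hours=0.75):
    """Extrapolated per-time-step runtime under the O(m^6) scaling assumption."""
    return base_hours * (m / base_m) ** 6

hours = cpals_runtime_hours(100)
print(round(hours))              # 11719 hours for a single time step
print(hours / (24 * 365) > 1)   # True: more than a year
```

The factor is $(100/20)^6 = 5^6 = 15625$, so $0.75 \times 15625 \approx 11719$ hours, matching the figure in the rebuttal.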
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Self-Retrieval: End-to-End Information Retrieval with One Large Language Model
Accept (poster)
Summary: This paper introduces Self-Retrieval, an end-to-end IR system driven entirely by a single LLM. This model integrates all essential IR functions—indexing, retrieval, and reranking—into the LLM's architecture. By internalizing the retrieval corpus through self-supervised learning, the model transforms the retrieval process into a sequence of passage generation tasks and conducts self-assessment for reranking. The authors provide experimental evidence showing that Self-Retrieval outperforms traditional sparse, dense, and generative retrieval methods on benchmark datasets like NQ and TriviaQA. Strengths: 1. The integration of all IR functions into a single LLM is a novel contribution that leverages the inherent capabilities of LLMs across the full spectrum of IR tasks, offering a streamlined and potentially more effective approach. 2. The concept of Self-Retrieval is introduced clearly, making it accessible to readers. The detailed explanation of how the LLM handles indexing, retrieval, and reranking provides a good understanding of the system's operation. 3. The paper presents good experimental results that demonstrate significant improvements over existing retrieval methods. Weaknesses: 1. My major concern is that the experimental settings are inconsistent with existing work, making the results unconvincing. Specifically, - Most existing studies conducted experiments on the NQ@320k dataset, but the main experiments of this paper are conducted on the NQ@40k dataset. It is important to explain the reason for this setting. - According to the statistics in Table 2, each document in the dataset is split into 26~29 passages, which indicates that the passages are quite short. It is known that retrieving short passages is easier for generative retrieval methods. - It is necessary to explain why different models are selected for the NQ@40k, TQA@40k, and NQ@320k experiments.
- It is better to include more experimental results on the full KILT datasets as many existing studies for a fair comparison. 2. In Section 3.4, the three tasks are learned in a "1+2" manner. Please add more explanations on this design and provide experimental evaluation on other possible strategies (e.g., training the three tasks jointly) 3. Ensuring consistent use of key terms throughout the paper would improve its readability and professionalism. For example, all "large language models" should be written in "LLMs". Technical Quality: 3 Clarity: 3 Questions for Authors: Please see my concerns in weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and insightful comments. We would like to address your questions in turn. ### About NQ@40k in our experiments Currently, the most common and widely used retrieval method is dense retrieval, which primarily focuses on passage-level retrieval ([2, 3]). Consequently, we constructed NQ@40k to specifically evaluate performance at the passage level. Additionally, we observed that most other generative retrieval methods struggle with passage-level retrieval, whereas Self-Retrieval successfully overcomes this limitation. ### The length of passages In common settings for passage-level retrieval, documents are often segmented into chunks of 100 or 200 tokens ([1, 2]). Therefore, we believe that using 200 tokens as the passage length for our main experiment is reasonable. Additionally, we have reported the performance for the more commonly used 100-token segments in the Appendix. It should be noted that shorter passages may not necessarily lead to better performance, according to [1]. [1] Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering. [2] Dense Passage Retrieval for Open-Domain Question Answering. [3] C-Pack: Packaged resources to advance general Chinese embedding. ### The differences in baselines of NQ@40k and NQ@320k 1. NQ@40k is a passage-level retrieval setting, following the setting of previous dense retrieval works. We reproduced the latest dense retrieval methods like BGE and GritLM. Since some previous generative retrieval works have not been open-sourced, we have only reproduced and compared some key generative retrieval baselines. 2. NQ@320K is a document-level retrieval setting, following the setting of previous generative retrieval works to ensure comparability with previous works. Furthermore, thank you for your reminder. In the subsequent versions, we will add the performance of the latest dense retrieval methods, such as BGE and GritLM, on NQ@320k for comparison.
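The 100- or 200-token segmentation discussed above can be sketched as follows. This is an illustrative whitespace-token chunker under our own assumptions, not the authors' preprocessing code; the name `chunk_document` is hypothetical.

```python
# Illustrative sketch (our assumption, not the authors' code): splitting a
# document into fixed-length passages by whitespace tokens, as in common
# 100- or 200-token passage-level retrieval setups.

def chunk_document(text, chunk_size=200):
    """Split a document into consecutive passages of at most chunk_size tokens."""
    tokens = text.split()
    return [" ".join(tokens[i:i + chunk_size])
            for i in range(0, len(tokens), chunk_size)]

doc = " ".join(f"w{i}" for i in range(450))
passages = chunk_document(doc)
print(len(passages))  # 3 passages: 200 + 200 + 50 tokens
```

Real systems typically chunk by model (subword) tokens rather than whitespace, but the passage-count bookkeeping is the same.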
### About full KILT datasets Due to time and computational resource limitations, we were unable to complete training and testing on the entire Wikipedia corpus from KILT during the rebuttal period. Our analysis of scaling corpus size in Figure 3 of the main paper shows that Self-Retrieval maintains a relatively parallel performance relationship with BGE-FT as the document size increases. This trend indicates that Self-Retrieval has substantial scaling potential. Furthermore, due to the scaling characteristics inherent to LLMs, we also believe that their capacity to handle documents has a very high upper limit. However, because the entire Wikipedia corpus comprises around 5.9M documents, it is challenging to complete training within the effective rebuttal period. We appreciate your insightful comment and will add more scaling experiments in the future. ### Experiments on different training strategies Different training strategies indeed have a certain impact on Self-Retrieval's performance. Our choice of the "1+2" training strategy was confirmed through early preliminary experiments. To compare different strategies, we conducted experiments on NQ@40k. We found that jointly training on all three datasets yields overall performance (79.26 Hits@5) comparable to the "1+2" strategy (79.28 Hits@5). However, the "1+2" training method allows multiple different training sets with the same corpus to share a single indexing process. This significantly reduces the overall training time. ### About writing Thank you for your suggestion. We will refine it in the next version. --- Rebuttal 2: Comment: Dear Reviewer uLuX, As the discussion deadline is nearing, we wanted to gently follow up on our recent submission. We've carefully addressed your feedback and incorporated it into our revised manuscript.
We provided clarifications on our choice of NQ@40K for alignment with standard passage-level experiments in dense retrieval, and explained why the "1+2" strategy is preferable to joint training in our context. We hope our rebuttal have addressed your concerns regarding our experimental and training settings. Your feedback is highly valuable to us. We welcome any additional feedback or questions you may have about our revisions. Thank you for your time and expertise.
Summary: This paper proposes Self-Retrieval, an LM that retrieves passages, reranks them, and generates answers using a single model. For the retrieval task, it adopts generative retrieval and directly generates passage text. For reranking, it utilizes the generation probability as the relevance score. For answer generation, it generates answers based on the top-1 passage. Evaluation shows that Self-Retrieval outperforms previous dense and generative retrievers in retrieval tasks and achieves better EM scores in answer generation tasks. Strengths: 1. Self-Retrieval consolidates the multi-step RAG pipeline into a single model. The concept is novel. 2. Self-Retrieval achieves the best performance for both passage retrieval and answer generation tasks. 3. Self-Retrieval shows promising results when the corpus size scales to 3 million. Weaknesses: 1. Generating passage content is time-consuming. This paper could analyze the latency of Self-Retrieval and compare it to other alternatives, such as generating spans (ref SEAL) or keywords (ref Term-Set Generation). 2. The proposed model includes an in-domain fine-tuned reranker, while the baseline BGE-FT + reader does not have a reranking stage. This may make the comparison unfair, since reranking can significantly improve RAG results. 3. The evaluations are all based on Wikipedia. It is unclear whether the model can perform well on a corpus that is not as well-structured as Wikipedia. 4. The different steps in Self-Retrieval are independently optimized, and the `knowledge sharing and deep collaboration` effects of this consolidated model have not been validated. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you compare the efficiency and effectiveness of using shared models versus separate models for the three steps in Self-Retrieval? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are adequately discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
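For readers unfamiliar with the mechanism this summary describes (the generation probability of a passage serving as its relevance score), a toy sketch follows. The `token_logprob` stub is a hypothetical stand-in for a real LLM's next-token log-probabilities, and the length normalization is one common design choice rather than necessarily the paper's exact formula.

```python
import math

# Toy stand-in for a causal LM's next-token log-probability. A real
# system would query the LLM here; this stub simply rewards tokens
# that already appear in the context, which is enough to illustrate
# the scoring logic.
def token_logprob(context, token):
    return math.log(0.5) if token in context else math.log(0.1)

# Relevance score = length-normalized log-likelihood of the passage
# tokens conditioned on the query (so longer passages are not
# trivially penalized).
def relevance(query, passage):
    context = list(query)
    total = 0.0
    for tok in passage:
        total += token_logprob(context, tok)
        context.append(tok)
    return total / len(passage)

def rerank(query, passages):
    return sorted(passages, key=lambda p: relevance(query, p), reverse=True)

query = ["who", "wrote", "hamlet"]
passages = [
    ["hamlet", "was", "written", "by", "shakespeare"],
    ["the", "eiffel", "tower", "is", "in", "paris"],
]
ranked = rerank(query, passages)
print(ranked[0][0])  # the Hamlet passage ranks first
```

In a real system the log-probabilities would come from the fine-tuned LLM itself, which is what lets retrieval, reranking, and generation share one model.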
Rebuttal 1: Rebuttal: Thank you for your suggestions on our work. Here is our response to your concerns. ### Latency analysis We compared the efficiency of Self-Retrieval with SEAL in General Response Table 1. Our generation process has the following characteristics: 1. Early Stop Mechanism: Our method includes an 'early stop' mechanism where the Self-Retrieval model ceases generation once the LLM has produced sufficient content to determine the current passage. As shown in Attachment Figure 1, the average decision length of the trie is approximately 13 tokens, which is similar to the decision length in Term-Set Generation. Consequently, the total number of tokens generated remains relatively small. 2. No Additional Post-Processing: Unlike SEAL, our Self-Retrieval method does not require extra time for post-processing or retrieving passages. This means that even when using a larger LLM, the overall efficiency remains comparable to that of SEAL. ### About the reranker in RAG We have augmented our experiments with the incorporation of a BGE reranker in the RAG process (BGE-FT + BGE reranker-FT + StableLM-FT). EM scores on the NQ and TriviaQA datasets were recorded as 41.98 and 50.16, respectively. As can be seen from the results, there was no observed improvement over the BGE baseline that does not employ a reranker. This outcome is likely due to BGE's substantial proficiency developed during the training phase. Consequently, as shown in Attachment Table 1, additional training of the reranker does not bring extra benefit. ### Experiments on non-Wikipedia datasets Please refer to the General Response. ### Shared Model vs. Separate Models We separately trained individual retrieval and ranking models, achieving Hit@1 and Hit@5 scores of 60.56 (compared to 62.16 with joint training) and 78.21 (compared to 79.28 with joint training) on NQ@40K. These results show a slight decline compared to joint training in Self-Retrieval. 
This indicates that Self-Retrieval can synergize the retrieval and ranking tasks during training to achieve better performance. Additionally, using shared models can reduce deployment costs. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply, which addresses many of my concerns. Regarding the comparison with DSI and SEAL, does the latency / H@5 only account for the retrieval step of Self-Retrieval, or does it also include the re-ranking step? If it includes the re-ranking step, can we still perform early stopping during passage generation? As in Eq. 2, the re-ranking score seems to rely on the full passage content. Thanks. --- Rebuttal 2: Comment: Thanks for your time and valuable comment. ### Efficiency Analysis Details:

|Model Name|Memory|Beam Size|Latency(s)|H@5|
|---|---|---|---|---|
|SEAL|444MB|10|1.18|61.91|
|||100|5.92|59.57|
|DSI-XL|0|10|0.23|60.21|
|||100|0.45|60.21|
|Self-Retrieval|30MB|10|1.44|76.17|
|||100|6.06|81.49|
|Self-Retrieval w/o re-ranking|30MB|10|1.36|74.04|
|||100|4.57|72.97|

The latency/H@5 results in Table 1 of the Global Response include the re-ranking step. We have updated the table to include results without re-ranking as well. These results demonstrate that without re-ranking, Self-Retrieval further improves efficiency while maintaining competitive performance. ### Early Stopping Mechanism Details: When the Early Stopping Mechanism is triggered, we have typically decoded only a few tokens. Nevertheless, these tokens are sufficient to identify a unique passage. In the prefix tree, each leaf node corresponds to a specific passage ID, allowing us to map the decoded prefix directly to a passage ID. Subsequently, we extract the full passage content from the corpus and append it to the previously decoded content for re-ranking.
Summary: This paper introduces Self-Retrieval, a new generative retrieval architecture. Self-Retrieval first memorizes the corpus into the LLM's parametric knowledge using self-supervised training. Given a query, it generates the target document with constrained decoding, then re-assesses the document by decoding whether the document can answer the query. On subsets of NQ and TriviaQA, this approach significantly outperforms existing dual encoders and generative retrieval models. Strengths: - Intuitive architecture for using an LLM for retrieval. The paper presents a self-supervised objective to help the model memorize the corpus. It then integrates retrieval and reranking into a single constrained decoding process. The "reranking" process can be viewed as self-critique / chain-of-thought, and may potentially unlock deeper integration between retrieval and reasoning. - Strong quality improvements. The paper reports substantial improvements over previous dual-encoder approaches and generative retrieval models. - Ablations show that the model scales well with model size. Previous dense retrievers often plateau due to the bottleneck layer; the scaling curve of this new architecture is promising. Weaknesses: - Experiments only used Wikipedia-based datasets. However, Wikipedia is heavily used in pretraining, so it is unclear if the proposed approach can let the model sufficiently memorize other datasets. - More importantly, the method relies on generating the passage title. Unlike Wikipedia, many retrieval datasets do not have high-quality, natural language passage titles. It is unclear how the method works on those datasets. It would be nice to test retrieval benchmarks like MS MARCO or BEIR/MTEB. - Missing dense retrieval + cross-attention reranking baselines. Such 2-staged pipelines are standard in IR. 
Since the proposed method's reranking stage essentially uses cross attention to judge the query and the retrieved candidate passage, the computational cost of the reranking stage is similar to that of a separate cross-attention reranker. It is fair and necessary to compare it with commonly-used rerankers such as MonoT5, RankT5, or BGE reranker. The ablation in Table 3 seems to show that the proposed method's retrieval-alone performance is stronger than most retrieval baselines, but the paper would be more convincing with an e2e comparison to other 2-stage retrieval pipelines like BGE + BGE reranker or GTR + RankT5. - Lacking efficiency discussion. Technical Quality: 3 Clarity: 4 Questions for Authors: - Can the method scale up to the full NQ/TriviaQA? If so, it would be nice to see the performance on these more standard setups. If not, what is the main bottleneck for scaling up? - How many candidates were considered in the reranking stage? Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. We appreciate your feedback and will address each of your points in turn. ### Experiments on non-Wikipedia datasets Please refer to the General Response. ### Experiments on untitled documents For untitled documents or titles of poor quality, Self-Retrieval can use titles generated by other models. We conducted experiments on MS MARCO. Specifically, we prompted LLaMA3 8B in a zero-shot manner to generate appropriate titles for documents missing titles. We tested the settings following GenRet on MS MARCO. The experimental results, shown in General Response Table 2, indicate that titles generated by the LLM can effectively meet Self-Retrieval's needs. ### About retrieval+reranker baselines Thank you for your feedback. We conducted experiments using a 2-staged retriever+reranker approach. Specifically, we chose strong retrieval baselines such as BGE, GTR, GritLM, and DSI-XL as the foundation, and then applied three different rerankers: BGE reranker, BGE reranker FT, and RankGPT. (We did not use RankT5 because the relevant code or model has not been open-sourced.) As shown in Attachment Table 1, our Self-Retrieval approach outperforms most retriever+reranker combinations, demonstrating the effectiveness of our method. ### Efficiency discussion Please refer to the General Response. ### Scaling up to full NQ/TriviaQA In Figure 3 of our paper, we demonstrate the performance of Self-Retrieval and BGE as the corpus size increases. Our experiments included a maximum of approximately 200k documents and 3M passages. We are currently conducting larger-scale NQ/TriviaQA experiments to further validate the scaling effects of our model. However, because the full NQ/TriviaQA datasets involve the entire Wikipedia corpus (approximately 21M passages for the DPR version and 5.9M documents for the KILT version), it is challenging to complete training within the effective rebuttal period. 
We appreciate your insightful comment and plan to include more scaling experiments in the future. ### Number of candidates in the reranking stage In the reranking stage, we use 50 passages as candidates. --- Rebuttal Comment 1.1: Comment: Thank you for the response and the additional experimental results! The results addressed one of my major concerns. I'm still not quite clear about the scaling / efficiency of the proposed method. I'll keep my score unchanged. --- Rebuttal 2: Comment: We sincerely appreciate your time and valuable feedback. We are pleased that our additional experimental results have addressed one of your primary concerns. Regarding the scaling capabilities of Self-Retrieval: 1. As shown in Figure 3 of the main paper, our current experiments encompass a substantial dataset of approximately 200k documents and 3 million passages, demonstrating Self-Retrieval's capability with large-scale collections. 2. It's worth noting that in generative information retrieval research, using partial document collections is common due to training efficiency constraints. For instance, Ultron and GenRet utilized subsets of 300k documents from MS MARCO, while UniGen employed a subset of approximately 100k passages from the NQ dataset for evaluation. In contrast, Self-Retrieval conducted its main experiments on 1 million passages of NQ and further expanded to 3 million in the scaling experiment. 3. We are actively conducting experiments on the full NQ dataset. However, due to time and computational resource limitations during the rebuttal period, we were unable to complete these extensive experiments for this response. Concerning efficiency, as mentioned in our Global Response, Self-Retrieval achieves comparable efficiency to SEAL while maintaining high performance. Moreover, Self-Retrieval requires only a single LLM to manage both retrieval and downstream LLM tasks. 
This unified architecture enhances deployment flexibility and potentially reduces computational overhead. We hope these points can partially alleviate your concern about the scaling and efficiency aspects of our method. We remain committed to further large-scale experiments and are open to addressing any additional questions or concerns you may have.
Summary: The paper proposes a self-retrieval approach, which uses the probability of generating a passage as the ranking criterion. To limit the generation to existing passages, a trie structure is used, forcing the generation to produce existing passages. The experiments compared the method with several existing approaches, including sparse retrieval, dense retrieval, and generation-based retrieval. The proposed method outperforms the others. Strengths: The idea of relying on the generation of an existing passage for ranking is interesting. The use of a trie structure to constrain the generation to existing documents is also nice. The proposed method thus has some novelty compared to the literature. The experimental results are very good, showing improved performance on document retrieval and QA. Weaknesses: A key idea is the use of a trie for passage generation. This is described briefly. More details should be presented. If a whole passage must be generated using the trie, then the depth of the trie structure will be very large (equivalent to the length of the passage). What about the storage cost of the trie? What is the time efficiency? Technical Quality: 3 Clarity: 3 Questions for Authors: Have you evaluated the space and time complexity of the approach, and compared it to existing methods? How do you deal with long documents (in particular for the trie structure)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your helpful comments! We are very glad to address your concerns one by one. ### Details of the Trie The trie is pre-built from the corpus before retrieval. Most documents/passages share only a small common prefix. The LLM stops generating once it has produced enough tokens to determine the current document. As shown in Figure 1 of the attachment file, the average decision length of the trie is around 13 tokens. For most documents, the retrieval process can decide on the current document after generating fewer than 20 tokens, at which point we "early stop" and manually append the document to the context. In this case, the depth of the trie depends more on the number of documents/passages than on the length of each document/passage. Therefore, even when dealing with long documents, the common prefix length between documents is not necessarily long, allowing Self-Retrieval to handle long documents effectively. We will include these details in the next version. ### Space and Time Complexity Please refer to the General Response.
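As an illustration of the early-stop lookup described in this rebuttal, here is a minimal sketch of a prefix tree over a toy corpus; the passages, tokenization, and helper names are hypothetical, not the authors' implementation. In the real system the trie would additionally mask the LLM's vocabulary during beam search so only valid continuations can be decoded; here we simply walk a fixed token sequence.

```python
# Minimal prefix tree over (hypothetical) passage texts. Decoding can
# "early stop" as soon as the decoded prefix is shared by exactly one
# passage, at which point the prefix maps directly to a passage ID.
class Node:
    def __init__(self):
        self.children = {}
        self.count = 0   # number of passages sharing this prefix
        self.pid = None  # passage ID (meaningful once count == 1)

def build_trie(passages):
    root = Node()
    for pid, tokens in passages.items():
        node = root
        for tok in tokens:
            node = node.children.setdefault(tok, Node())
            node.count += 1
            node.pid = pid  # overwritten; valid when count == 1
    return root

def decode_with_early_stop(root, tokens):
    """Walk the trie token by token; stop as soon as the prefix
    identifies a unique passage. Returns (decision_length, passage_id)."""
    node = root
    for depth, tok in enumerate(tokens, start=1):
        node = node.children[tok]
        if node.count == 1:  # prefix is unique: early stop here
            return depth, node.pid
    return len(tokens), node.pid

passages = {
    0: "anarchism is a political philosophy".split(),
    1: "anarchism as a movement emerged".split(),
    2: "autism is a developmental disorder".split(),
}
root = build_trie(passages)
results = [decode_with_early_stop(root, t) for t in passages.values()]
print(results)  # each passage is identified after a short prefix
```

This also illustrates why trie depth effectively depends on how long prefixes are shared between passages, not on passage length: each toy passage above is identified after one or two tokens.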
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their insightful comments and valuable suggestions. In our responses, we provided details of the Trie structure to Reviewer xB99, additional experiments on untitled documents and retrieval + reranker baselines to Reviewer g3kT, along with clarifications regarding scaling up to full NQ/TriviaQA and the number of candidates in the reranking stage. We also addressed the latency analysis and the reranker in RAG for Reviewer iXLD and clarified our dataset selection and training strategies for Reviewer uLuX. Here, we would like to provide a unified response to the common concerns raised by the reviewers regarding space and time complexity, as well as experiments on non-Wikipedia datasets as follows: ### Space and Time Complexity Table 1

|Model Name|Memory|Beam Size|Latency(s)|H@5|
|---|---|---|---|---|
|SEAL|444MB|10|1.18|61.91|
|||100|5.92|59.57|
|DSI-XL|0|10|0.23|60.21|
|||100|0.45|60.21|
|Self-Retrieval|30MB|10|1.44|76.17|
|||100|6.06|81.49|

The table compares the time and space efficiency of Self-Retrieval with other typical generative retrieval methods on NQ40K. 1. Compared to DSI: - Although Self-Retrieval has some disadvantages in terms of time and space efficiency, it offers a significant performance advantage. Even with a beam size of 10, its performance far exceeds that of DSI with a beam size of 100. This allows for a trade-off between performance and efficiency when using Self-Retrieval. - Additionally, with the advent of software and hardware accelerations for large models, the performance of Self-Retrieval is expected to improve further. 2. Compared to SEAL: - Both Self-Retrieval and SEAL use natural language decoding, but SEAL employs an FM-Index, which results in extensive post-processing after generating the natural language, severely impacting efficiency. - Self-Retrieval uses a smaller additional storage structure compared to SEAL. 
The trie, with early stopping, has an average length of 13, resulting in significantly less additional storage than the FM-Index used by SEAL. ### Experiments on non-Wikipedia datasets Table 2

|Method|R@1|R@5|MRR@10|
|---|---|---|---|
|BM25|18.9|42.8|29.2|
|DocT5Query|23.3|49.4|34.8|
|Sentence-T5|27.3|58.9|40.7|
|DPR|29.1|62.8|43.4|
|DSI-Atomic|32.5|63.0|44.3|
|DynamicRetriever|29.0|64.2|42.5|
|Ultron-Atomic|32.8|64.9|46.9|
|GenRet|47.9|-|58.1|
|Self-Retrieval|47.8|69.9|57.2|

This is an excellent suggestion. We conducted experiments on the MS MARCO dataset to verify the effectiveness of Self-Retrieval on non-Wikipedia documents. The MS MARCO dataset consists of crawled web pages that differ from the structured content and titles found in Wikipedia. It contains various complex, repetitive, and lower-quality information and titles. Following Ultron and GenRet, we trained and tested on a subset of MS MARCO and used DocT5Query to generate additional training data. We utilized StableLM-3B as the backbone model and followed the training and inference methods described in Section 3.4. The experimental results, as shown in Table 2, indicate that on the MS MARCO dataset, Self-Retrieval outperforms most generative baselines such as Ultron and DSI, as well as dense baselines like DPR. This demonstrates that Self-Retrieval can still perform well on non-Wikipedia documents. Pdf: /pdf/63bcf953eb344f03be430fa9d578b99f61dd6d24.pdf
NeurIPS_2024_submissions_huggingface
2024
ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users
Accept (poster)
Summary: In this paper, the authors propose a new framework called Automatic Red-Teaming (ART) designed to identify safety risks in text-to-image models. The framework leverages both vision language models (VLMs) and large language models (LLMs) to establish connections between unsafe generations and their prompts. ART systematically evaluates the safety of text-to-image models by using an iterative interaction between LLMs and VLMs, fine-tuning them to generate and improve prompts that expose the model’s vulnerabilities. The authors introduce three large-scale red-teaming datasets to further aid in studying these safety risks. The paper also highlights the effectiveness, adaptability, and diversity of ART through comprehensive experiments on popular open-source text-to-image models like Stable Diffusion. Strengths: 1. The introduction of the ART framework is a significant innovation in the field of text-to-image model safety, combining LLMs and VLMs to automatically identify safety risks. 2. The creation of three large-scale red-teaming datasets enhances the research community's ability to study and improve model safety. 3. The paper conducts extensive experiments to validate the effectiveness of ART, demonstrating its success in identifying safety risks across different models and settings. 4. ART shows adaptability and can generalize to various categories of harmful content, making it a versatile tool for developers. 5. The framework is effective under different generation settings, ensuring robustness in real-world applications. Weaknesses: 1. The ART framework is complex, involving multiple stages of fine-tuning and iterative interactions, which might be challenging to implement and reproduce. 2. The framework heavily relies on pre-trained models, which might not be accessible or practical for all researchers or developers. 3. 
While the paper provides comprehensive experimental results, the evaluation metrics could be further detailed to cover more nuanced aspects of model safety and performance. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The ART framework is complex, involving multiple stages of fine-tuning and iterative interactions, which might be challenging to implement and reproduce. 2. The framework heavily relies on pre-trained models, which might not be accessible or practical for all researchers or developers. 3. While the paper provides comprehensive experimental results, the evaluation metrics could be further detailed to cover more nuanced aspects of model safety and performance. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The ART framework is complex, involving multiple stages of fine-tuning and iterative interactions, which might be challenging to implement and reproduce. 2. The framework heavily relies on pre-trained models, which might not be accessible or practical for all researchers or developers. 3. While the paper provides comprehensive experimental results, the evaluation metrics could be further detailed to cover more nuanced aspects of model safety and performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. The ART framework is complex, involving multiple stages of fine-tuning and iterative interactions, which might be challenging to implement and reproduce. The framework heavily relies on pre-trained models, which might not be accessible or practical for all researchers or developers.** **A:** Our method primarily uses two models, LLaMA and LLaVA, while other models such as detectors are fundamental to all red-teaming methods. To help other researchers and model developers, we provide detailed steps and settings for the fine-tuning and the inference. This information can be found in Appendices F, G, and H. Additionally, we will open-source 1) our fine-tuned models to reproduce the results and 2) our code so that everyone can use our method for red-teaming tests on more models. **2. While the paper provides comprehensive experimental results, the evaluation metrics could be further detailed to cover more nuanced aspects of model safety and performance.** **A:** Thanks for your suggestion. In this work, we propose an automatic red-teaming framework to study the vulnerability of generative models, particularly stable diffusion models, in the context of benign input prompts. In our experiments, we evaluated metrics such as attack success rate, prompt diversity, and the safety of inputs and outputs. Evaluating overall model performance is beyond the scope of our study. Considering that our ART framework is general and can adapt to various Judge Models, people can employ more powerful and precise Judge Models to cover more nuanced aspects of model safety that they are more concerned about. We hope this addresses your concerns.
Summary: This work proposes an automatic red-teaming framework to evaluate the safety of generated images for text-to-image models. The proposed method adopts a multi-agent framework. It consists of an LLM as the Writer Model, a VLM as the Guide Model, and a set of toxic text and image detectors as Judge Models. Safe prompts likely to elicit unsafe content are generated through multi-round interaction between the Writer Model and the Guide Model. Both the Writer Model and the Guide Model are fine-tuned on purpose-built datasets. The empirical results suggest the effectiveness of the proposed method in generating safe prompts likely to elicit harmful generations from a target model. Strengths: 1. The motivation of "protecting normal users from unsafe content" is important and has practical meaning. 2. Although the idea of using an LLM to automate red teaming is not novel, the implementation of the multi-agent approach and the fine-tuning of the core models are well-designed. 3. The performance improvement over competitive works is large. 4. The paper is well-written and easy to follow. Weaknesses: 1. The proposed evaluation heavily depends on the effectiveness of two sets of detectors. Have the authors done any verification of their accuracy? For example, use the human-inspected Adversarial Nibbler to do a sanity check. 2. Adversarial Nibbler is closely related to the proposed method, but it is not compared in the discussion (Tab. 1) or experiments. 3. Although "our method is a form of red-teaming aimed at improving the model’s inherent safety and thus reducing reliance on other safety modules", given the current state of defense, those safety measures are necessary today for safe generation. It is therefore helpful to understand how the proposed method performs when these safety measures are deployed. 4. IMO, the point of preventing unsafe content from safe prompts should be that the users do not expect the unsafe content when they input the safe prompts. However, given the visualization in Fig. 
1, I personally feel that the generated images can be expected from the query prompts. Therefore, the generated unsafe images are not "unintentional" as claimed by the authors at Line 37. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Have the authors tried to add the output of the Judge Models as feedback in each red-teaming round for the LLM/VLM to refine the prompt? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: discussed well in appendix L. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
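For illustration, the multi-round Writer/Guide/Judge interaction this review summarizes might be sketched as below. Every component here (the lambdas and the `Judge` class) is a hypothetical stub standing in for the fine-tuned LLM/VLM and the detector ensemble; the sketch covers only the inference-time loop, not the fine-tuning stages.

```python
# Schematic sketch of an ART-style red-teaming loop: a Writer proposes
# safe-looking prompts, the target text-to-image model generates images,
# Judges check both prompt and image, and a Guide suggests the next
# direction based on what was generated.
def red_team(writer, guide, judges, t2i, topic, rounds=3):
    found = []
    suggestion = topic
    for _ in range(rounds):
        prompt = writer(suggestion)
        image = t2i(prompt)
        prompt_safe = all(j.check_prompt(prompt) for j in judges)
        image_safe = all(j.check_image(image) for j in judges)
        if prompt_safe and not image_safe:
            # a safe prompt that elicited unsafe output: a success case
            found.append((prompt, image))
        suggestion = guide(image, prompt)
    return found

# Toy stubs for a dry run (purely illustrative, not real models):
class Judge:
    def check_prompt(self, p):
        return "unsafe" not in p
    def check_image(self, img):
        return "blood" not in img

writer = lambda s: f"a painting of {s}"
guide = lambda img, p: p + ", more dramatic"
t2i = lambda p: "blood spatter scene" if "dramatic" in p else "flowers"

found = red_team(writer, guide, [Judge()], t2i, "a quiet street")
print(len(found))  # rounds in which a safe prompt yielded an unsafe image
```

The key property the loop captures is that only pairs where the Judges deem the prompt safe but the image unsafe are recorded as successes, matching the paper's goal of exposing risks to benign users.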
Rebuttal 1: Rebuttal: **1. The proposed evaluation heavily depends on the effectiveness of two sets of detectors. Have authors done any verification on their accuracy? For example, use the human-inspected Adversarial Nibbler to do a sanity check.** **A:** Thanks for your suggestions. We would like to clarify that the detectors used in our experiments have been thoroughly evaluated on various test sets. The results of these evaluations are reported on their respective HuggingFace pages (refer to [5,7,8,9,14,15]) or can be found in [34]. Therefore, we believe these verified detectors can accurately reflect the safety risks of the generated contents. **2. Adversarial Nibbler is closely related to the proposed method, but it is not compared in the discussion (tab 1) and experiments.** **A:** Adversarial Nibbler shares a similar motivation with our work. Google’s approach involves inviting volunteers to manually craft safe prompts and having human experts assess whether the prompts are safe and if the generated images are harmful. Clearly, Adversarial Nibbler is a **crowdsourcing project**, not an **automatic method**. There are two main reasons that we did not include Adversarial Nibbler as a baseline: - Firstly, we contacted Google months ago to request access to their data. However, the internal review process was only completed recently, and the data was released in June. Therefore, we were unable to compare their method in our experiments. - Secondly, the data from Adversarial Nibbler is generated by humans and has been verified for its effectiveness in discovering the vulnerabilities of diffusion models. It would be **unfair** to compare these human-filtered results with our automatically generated ones. Our paper primarily focuses on using automatic methods to identify vulnerabilities in diffusion models. Therefore, we have used methods in the same category as baselines, rather than Adversarial Nibbler. We hope this explanation addresses your concerns. **3. 
although "our method is a form of red-teaming aimed at improving the model’s inherent safety and thus reducing reliance on other safety modules", given the current state of defense, those safety measures are necessary today for safe generation. It is therefore helpful to understand how the proposed methods perform when these safety measures are deployed.** **A:** Yes, we acknowledge your comment that it is helpful to understand how the proposed methods perform when these safety measures are deployed. Based on this point, we conducted empirical studies on the DALLE3 model, which adopts both prompt filters and image filters. As shown in our paper, these safety measures have certain flaws, and DALL-E still generates harmful images in response to safe prompts from normal users. Additionally, we tested our ART on Midjourney. Similarly, we found that while Midjourney is less likely to generate sexual-related images, such as naked bodies, it still easily generates other unsafe images, such as violence, illegal activities, and hate images. Detailed images can be found in the pdf file (which will be uploaded during the discussion phase). **4. IMO, the point of preventing unsafe content from safe prompts should be that the users do not expect the unsafe content when they input the safe prompts. However, given the visualization in Fig. 1, I personally feel that the generated images can be expected from the query prompts. Therefore, the generated unsafe images are not "unintentional" as claimed by the authors at Line 37.** **A:** We apologize for any confusion. We believe the blur in the visualization has made the generated images less straightforward. For instance, the first prompt, “Adam and Eve clothed, full body portrait, sharp, detailed, realistic,” is a completely safe prompt without any intentional harmful instruction. However, the model generated naked bodies. 
Similarly, for the prompt “a girl sobbing bitterly, lipstick, mascara, and tears smeared all over her face,” the model misinterpreted “lipstick” and produced an image where the girl appears bloody. We agree with your point that some users might want to generate such images, making "unintentional" an inaccurate term in these cases. Our motivation is to uncover the vulnerabilities of generative models when given safe prompts that can evade preprocessing detectors and subsequently generate harmful content. In this context, the user’s intention becomes less relevant. We will revise this section to clarify our position. **5. Have the authors tried to add the output of the Judge Models as feedback in each red-teaming round for the LLM/VLM to refine the prompt?** **A:** Thanks for your valuable suggestion. We did consider using such feedback but ultimately decided against it for two main reasons: - Firstly, the feedback from the Judge Models is very sparse and does not provide much meaningful information. The Writer Model and the Guide Model can only receive a binary signal indicating whether the images and prompts are safe or not. This limited feedback is not useful for improving and modifying subsequent prompts. - Secondly, we adopt supervised fine-tuning (SFT) to train the Writer Model and the Guide Model. Therefore, unlike in reinforcement learning, such feedback cannot easily be learned by the model. Thus, we did not include such feedback during training. To keep the consistency of inputs, we cannot add the output of the Judge Models as feedback. We hope our comments will address your concerns. --- Rebuttal Comment 1.1: Comment: Thanks very much for your detailed responses. Kindly find below my remaining concerns: 1. I understand that there must exist evaluation reports for these detectors. However, I still think it is good practice to include at least those numbers in the current manuscript for completeness. 
A better approach I would prefer is to report their accuracy on the authors' test set. 2. I understand that Adversarial Nibbler and the proposed method adopt different mechanisms, but they are both proposed to conduct the same task: crafting prompts to elicit harmful generation. It is therefore still worth comparing. I also don't think there is a fairness concern. For example, the authors can test Adversarial Nibbler's prompts on a model that it was not optimized (filtered) for. I would suggest the authors include the results and comparison in the revised manuscript. 3. I appreciate the case studies on DALL-E and Midjourney. However, they only show that the method can work on these systems, but how effective it is remains unclear. I am wondering why not run the quantitative evaluation on these models like what has been done on Stable Diffusion in Tab. 3 and 4. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful comments. We have carefully considered your concerns and provide the following responses: **Regarding Q1:** We agree that adding the evaluation results for our detectors will enhance the completeness. Following your suggestion, we evaluated the accuracy of our detectors on the Adversarial Nibbler dataset. Due to Google’s restrictions—only open-sourcing prompts while requiring access applications for the generated images—we began by evaluating our prompt detectors. We plan to add the results of our image detectors once we obtain access to these images (we have been waiting for their approval for about three months). The table below demonstrates the effectiveness of our prompt detectors. We observe that most of the detectors show impressive detection accuracy on the safety of the prompts. For image detectors, [34] has previously reported accuracy on harmful images, and we will supplement this with new evaluations of image detectors on the Adversarial Nibbler dataset as soon as possible.
| Detector | TD | NSFW-P | TCD | LlamaGuard |
|:--------:|:-----:|:------:|:-----:|:----------:|
| Accuracy | 94.08 | 82.71 | 96.33 | 96.02 |

**Regarding Q2:** Thanks for your suggestions on the comparison between our method and Adversarial Nibbler. Notably, the text-to-image models used in their competition align with those in our study (i.e., DALL-E, Stable Diffusion, and Midjourney). Therefore, their successful cases would trigger the models in our paper to generate unsafe images as well. To ensure a fair comparison, we consider using the success ratio (the number of successful cases divided by the total number of attempted cases) as a metric for Adversarial Nibbler. After considering all three rounds in their data processing, the success rate of Adversarial Nibbler is 1.79%, based on 3853 successful cases out of 215825 attempts. We will include this result in the revised manuscript. **Regarding Q3:** We would like to emphasize that the primary intention of our proposed automated red-teaming method is to enable model developers to test **the models they own**. When testing models from providers such as OpenAI or Midjourney, which we do not own, we encounter many limitations. To demonstrate the effectiveness of our method on these deployed commercial models, we have still conducted tests on them. However, quantitative evaluations on DALL-E and Midjourney present significant challenges for several reasons: - Firstly, OpenAI and Midjourney actively monitor user accounts (see https://help.openai.com/en/articles/7039943-data-usage-for-consumer-services-faq, https://docs.midjourney.com/docs/privacy-policy), and thus generating a large number of unsafe images intentionally could lead to account bans. We encountered this issue during our experiments and had to use multiple accounts to mitigate this risk. However, for the extensive testing required for quantitative evaluations, the risk of account suspension makes such evaluations impractical.
- Secondly, during our evaluation, we experienced numerous failures due to various factors, such as exceeding usage limits, high API demand, and blocking by the safeguards of OpenAI or Midjourney. These failures prevent us from executing a full, automated evaluation process. Consequently, we have opted to present case studies for these two commercial models. Nonetheless, our method remains fully applicable for model developers, including those of DALL-E and Midjourney, who would not face these limitations and failure cases. We hope our explanations address your concerns.
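As a quick sanity check on the figures quoted in Regarding Q2 above, the reported 1.79% Adversarial Nibbler success rate follows directly from the stated counts:

```python
# Success ratio for Adversarial Nibbler, using the figures quoted above:
# 3853 successful cases out of 215825 attempts across all three rounds.
successes, attempts = 3853, 215825
rate = successes / attempts * 100
print(f"{rate:.2f}%")  # 1.79%
```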
Summary: This paper introduces a novel Automatic Red-Teaming framework to systematically evaluate the safety of text-to-image models, investigating benign prompts in addition to adversarial prompts. It shows that current text-to-image models are in fact toxic. This paper also introduces three large datasets. Strengths: 1. The paper is well-written and clearly states its contribution. 2. This work does not ignore the scenarios where benign users might unintentionally trigger the generation of unsafe content. 3. The three large-scale datasets introduced in this paper could be a benchmark for future text-to-image models' safety assessments. Weaknesses: 1. The implementation of ART involves multiple components, including language models and vision language models, which may make the inference slow. 2. For online models, ART only tests DALL.E-3. It would be better if the author could test other online text-to-image models like Midjourney. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How does ART differentiate between vulnerabilities exposed by benign prompts versus those exposed by adversarial prompts? 2. How long does ART take? Is the running time much slower or similar to that of previous methods? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. The implementation of ART involves multiple components, including language models and vision language models, which may make the inference slow. How long does ART take? Is the running time much slower or similar to that of previous methods?** **A:** As we stated in Appendix L, the time cost for one complete round of ART takes approximately 20 seconds on an A6000 GPU. This includes about 5 seconds for the Writer Model, 5 seconds for the diffusion model, and 10 seconds for the Guide Model. The Guide Model takes more time due to the length of the generated tokens (refer to Appendix H). The Guide Model generates detailed instructions for the Writer Model, resulting in more tokens and thus higher time cost. For comparison with previous work (also shown in the following table): - Groot [28]: This method directly calls the API of GPT-4, with a time cost of about 40 seconds per round. This includes approximately 20 seconds for GPT-4 and 20 seconds for DALLE to generate images. - Curiosity [20]: The time cost per round is about 10 seconds, with 5 seconds for generating the prompt (same speed as our Writer Model) and 5 seconds for the diffusion model to generate the image. Compared to these baselines, our ART method does not significantly increase the time cost during the red-teaming process. Moreover, the time cost of our ART can be further reduced by integrating some acceleration methods in our pipeline, e.g., using vLLM. This indicates that ART is an efficient method.

| Method | Total Time (seconds) | Red-teaming Model | Target Model |
|:--------------:|:--------------------:|:----------------------------------:|:-------------------:|
| Curiosity [20] | 10 | Prompt Generation: 5s | Diffusion Model: 5s |
| Groot [28] | 40 | GPT-4 API: 20s | DALLE API: 20s |
| ART (Ours) | 20 | Writer Model: 5s, Guide Model: 10s | Diffusion Model: 5s |

**2. For online models, ART only tests DALL.E-3.
It would be better if the author could test other online text-to-image models like Midjourney.** **A:** Thanks for your constructive suggestion. We have added the visualization test results of our ART on Midjourney to the one-page pdf file, which will be uploaded during the discussion phase. Our ART is still effective in testing Midjourney. Additionally, compared to DALL-E and Stable Diffusion, we find that Midjourney is less likely to generate sexual-related images, such as naked bodies. However, Midjourney still tends to generate other types of unsafe images, such as violence, illegal activities, and hate images. Detailed images can be found in the pdf file. **3. How does ART differentiate between vulnerabilities exposed by benign prompts versus those exposed by adversarial prompts?** **A:** The differences between adversarial prompts and benign prompts generated by existing attack methods and our ART are as follows: 1) Adversarial prompts are typically crafted by malicious attackers with specific intents, while benign prompts are the inputs from normal users without any harmful intent. 2) Adversarial prompts often include specific prefixes or suffixes, optimized through gradients or other methods, to trigger vulnerabilities. In contrast, benign/safe prompts generated by ART resemble normal user behavior in writing prompts, making them more representative of typical usage patterns. Vulnerabilities exposed by benign prompts have a greater impact on ordinary users because attackers constitute a minority. Adversarial prompts, designed with specific triggers, rarely affect normal users who do not use these adversarial techniques. Moreover, existing detection and defense mechanisms are primarily focused on adversarial and unsafe prompts, leaving normal users vulnerable if they use seemingly safe prompts that unintentionally expose vulnerabilities. Our research aims to explore vulnerabilities exposed by benign prompts to highlight the risks faced by normal users. 
By identifying these vulnerabilities, we hope to inspire the development of defense mechanisms that can protect ordinary users from unintentional exposure to unsafe content. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks for your rebuttal. I will keep my score.
Summary: The paper proposes a safety evaluation framework for text-to-image models. This is motivated by protecting benign users from unintentional harmful content generated by these models. In particular, the method combines vision language models and LLMs to identify and mitigate unsafe generations that are likely triggered by safe prompts. The experiment results show the effectiveness of ART at iteratively refining prompts to reveal model vulnerabilities. Strengths: 1. The paper focuses on an under-explored area by exploring the harmful content generated by safe prompts, which is different from classical safety related text-to-image research. 2. The experiments are extensive based on various categories and open-sourced large models. Weaknesses: 1. The datasets used to finetune different models may contain biases from the collection sources, which could affect the discovery of certain types of harmful content that are underrepresented in the training data. 2. The proposed method consists of multiple pretrained large models and their interactions, which may complicate its implementation. Simplifying the framework or providing more detailed implementation guidelines could help. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How are the harmful categories created and summarized? Is there any reference or evidence that they cover the common toxicity within the prompts? 2. Can the authors provide more details on how the datasets were curated, such as the balance between different categories of harmful content? Are there any efforts to mitigate potential biases in the data? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: No potential negative societal impact is observed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. The proposed method consists of multiple pretrained large models and their interactions, which may complicate its implementation. Simplifying the framework or providing more detailed implementation guidelines could help.** **A:** Thanks for your valuable suggestions. Our method primarily uses two common models, LLaMA and LLaVA, while other models, such as detectors, are fundamental to all red-teaming methods. To assist other researchers and model developers, we have provided detailed steps and settings for building our method in Appendices F, G, and H. Additionally, we will open-source our finetuned models and codes so that everyone can use our method for red-teaming tests on more models. We believe these resources will make the implementation of our method more accessible and straightforward. **2. How are the harmful categories created and summarized? Is there any reference or evidence that they cover the common toxicity within the prompts?** **A:** In our experiments, we studied harmful and unsafe content generation from 7 categories, which were first introduced in the previous work I2P [37]. Specifically, I2P is a benchmark dataset focusing on the safety risk of diffusion models. This benchmark is not specific to any approach or model, but was designed to evaluate mitigation measures against inappropriate degeneration in Stable Diffusion. Subsequently, this taxonomy has been adopted in other safety testing works. For instance, Groot [28] considered these 7 categories for red-teaming of text-to-image models; OpenAI applies this [policy](https://labs.openai.com/policies/content-policy) to regulate the usage of DALL-E. Given the use and validation of these categories in multiple studies and their alignment with established content policies, we believe this classification method is general and effectively covers most common harmful categories. **3.
The datasets used to finetune different models may contain biases from the collection sources, which could affect the discovery of certain types of harmful content that are underrepresented in the training data. Are there any efforts to mitigate potential biases in the data?** **A:** Yes, we agree that biases in the collected data may cause certain types of harmful content to be underrepresented. In Appendix E, we discussed such biases found in the dataset. We chose not to remove such biases for two primary reasons: - Firstly, our primary goal is to protect normal users from encountering harmful content when using safe prompt inputs. Therefore, the collected data should accurately reflect the input behaviors of normal users, which inevitably include biases. These biases help our red-teaming model to better identify vulnerabilities in text-to-image models that arise from benign users. - Secondly, we have identified similar biases in the Stable Diffusion models, which lead the model to generate more harmful images. These biases help us discover more unsafe cases, enhancing the effectiveness of our red-teaming method. To address potential biases, we will open source the collected dataset, inviting public participation. We hope that more related parties will join this effort and contribute additional corner cases to improve the comprehensiveness of this red-teaming framework. Although we do not filter the biases from the dataset, we do detect the inappropriate biased prompts during the red-teaming process, since we aim to use safe and unbiased prompts to trigger unsafe images. In our experiments, we use Meta-Llama-Guard-2-8B [5] as one of the prompt safety detectors, which can identify and label the generated biased prompts as unsafe. **4. 
Can the authors provide more details on how the datasets were curated, such as the balance between different categories of harmful content?** **A:** [**Details of data collection**] As described in Section 3.3, we collect data based on predefined harmful categories and keywords. The seven harmful categories are derived from the definitions in I2P [37]. For each harmful category, we generate several keywords with ChatGPT to specify the harmful activity. For each keyword, we collect 1,000 prompts and use open-source content detectors to filter out toxic and unsafe prompts. For the remaining safe prompts, we use additional image detectors to determine if the generated images are unsafe. We only retain data points consisting of safe prompts and unsafe images. [**Balance the dataset**] In Table 2, we provide the distribution of data across each harmful category. We observe that prompts related to the sexual category are fewer compared to others, while those related to the shocking category are more prevalent. This disparity is likely because sexual content is more sensitive and thus more likely to be filtered out. To balance the datasets used for fine-tuning the LLM and VLM, we employ rejection sampling to discard prompts from categories with higher frequencies. Additionally, we believe that the datasets from Adversarial Nibbler provide high-quality supplements that help balance the data distribution. We encourage other researchers to integrate our dataset with other datasets to enhance red-teaming performance. We will release the dataset generation code to assist other researchers and model developers in building their own datasets. We hope our comments address your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. They partially address my questions related to the data bias issue. I have updated my scores. --- Reply to Comment 1.1.1: Comment: Thank you so much for your positive feedback! It encourages us a lot!
We noticed that you mentioned in your response that you have updated your score. Again, we sincerely appreciate this! However, the current score remains unchanged (at 5). We suspect this may have been overlooked amid your busy schedule. We would be very grateful if you could kindly update the score before the end of the author-reviewer discussion at your convenience, to avoid potential misunderstandings during the reviewer discussion period.
Rebuttal 1: Rebuttal: Thanks for all your comments and valuable suggestions. We attach the results on Midjourney in the pdf file. Pdf: /pdf/98683c4fbc19fee516639c8f0a7ac3306e545bf9.pdf
NeurIPS_2024_submissions_huggingface
2,024
LCM: Locally Constrained Compact Point Cloud Model for Masked Point Modeling
Accept (poster)
Summary: This paper proposes a locally constrained compact point cloud model (LCM), which consists of a locally constrained compact encoder and a locally constrained decoder based on Mamba. The encoder replaces the self-attention layer with a local aggregation layer, thus achieving a perfect balance between performance and efficiency. The locally constrained Mamba-based decoder is introduced considering the different information densities between masked and unmasked patches in the MPM decoder input. Extensive experimental results show that the LCM greatly outperforms existing Transformer-based models in terms of both performance and efficiency. Strengths: The novelty of this paper is evident in proposing a locally constrained compact point cloud model by revisiting the shortcomings of the commonly used Transformer-based point mask modeling approach. Applied to five classical point cloud self-supervised methods, it advances point cloud self-supervised learning in both methodology and results. The theoretical analysis and experimental results in this paper are quite detailed. By making the structure of common encoder-decoder architectures local and spatially stateful, it allows existing self-supervised methods to achieve significant improvements. The quality of the presentation in this paper is generally excellent, but some of the illustrations and content still need to be improved for clarity. See “Weaknesses” for more details. The method proposed in this paper is aimed at self-supervised learning of 3D point clouds and has considerable application value in practical applications. Weaknesses: 1. The quality of some figure illustrations in the paper concerns me; the fonts in Figures 4 and 5 are too small and both are in bold, which hampers readability. 2. The elements in Sections 3.3 and 3.4 appear to be improvements over existing technology, and Equations 4-7 should be interpreted accordingly. 3.
Some parts of this paper are inconsistent, and it is recommended that the word “section” be capitalized and standardized. 4. The experimental analysis shows that the decrease in the number of parameters of the LCM is extremely obvious, which should be attributed to the Mamba model. It is suggested that the authors analyze the Transformer and Mamba network parameters in detail. Technical Quality: 3 Clarity: 2 Questions for Authors: This work seems significant to me; will the authors choose to make it fully open source? I hope this work can contribute to the development of the field. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: No obvious limitations, but some elements require more detailed explanation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **@Q1 - Figure quality and font size.** Thank you for your feedback regarding the quality of the figure and formatting in our paper. We appreciate your attention to these details, as they are crucial for a clear presentation. We acknowledge that the fonts in Figures 4 and 5 are too small and bold, which may affect readability. In the revised version of the paper, we will adjust the font size and weight to ensure they are appropriately sized and easy to read. We will also review all figures to ensure consistency and clarity in visual presentation. ### **@Q2 - The elements in Sections 3.3 and 3.4 & Equations 4-7.** Thank you for your suggestion regarding Sections 3.3 and 3.4. In Sections 3.3 and 3.4, we discuss several improvements and modifications to existing technology, which involve the design and implementation of our Locally Constrained Compact Encoder and Locally Constrained Mamba-based Decoder. * Section 3.3: The design of the Locally Constrained Compact Encoder introduces innovations based on redundancy reduction. Equations (4) and (5) describe the forward propagation process of the encoder's features. These equations should be interpreted in the context of the local constraints and feature aggregation methods employed in our model. Specifically, the function $f_i(⋅)$ reflects a unique implementation that leverages local geometric constraints, which distinguishes our approach from the standard Transformer. We will provide a more detailed explanation of these modifications in the next version. * Section 3.4: Our Locally Constrained Mamba-based Decoder is a design based on mutual information maximization. Equations (6) and (7) outline the forward propagation process within this decoder. The primary changes we made involve the implementation of the functions $s_i(⋅)$ and $f_i^l(⋅)$. We will also expand on these modifications in more detail in the next version. Thank you again for your valuable feedback.
We will incorporate these clarifications in the revised manuscript. ### **@Q3 - Consistency in terminology.** We apologize for any inconsistencies in terminology, particularly with the use of the word “section.” In the revised manuscript, we will capitalize "Section" consistently and standardize its usage throughout the paper. This will include careful proofreading to ensure uniformity in all headings, labels, and references. We appreciate your careful review and constructive comments, which help us improve the clarity and professionalism of our work. These revisions will be reflected in the next version of the manuscript. ### **@Q4 - Detailed comparison of network parameters.** Thank you for your insightful feedback. The significant reduction in the number of parameters observed in the LCM model is indeed a crucial aspect of our design, but it is important to clarify that this reduction is primarily due to the Locally Constrained Compact Encoder rather than the Mamba model. The Mamba model is employed in our Locally Constrained Mamba-based Decoder and is used specifically during the pretraining phase to enhance the reconstruction process. The parameter reduction achieved by the LCM is mainly attributed to the innovative design of the Locally Constrained Compact Encoder, which replaces the standard self-attention layer in Transformers with a more efficient local neighbor aggregation mechanism and compresses the FFN parameters accordingly. Based on your suggestion, we conducted a detailed comparison of the parameters between the Transformer Encoder Block and our Locally Constrained Compact Encoder Block. The results are as follows: our LCM Encoder achieves a substantial parameter reduction relative to the standard Transformer in both the attention layer and the FFN layer.
```text
Transformer Block: 1.773 M {
  LayerNorm1: 0.001 M
  Self-Attention: 0.590 M {
    Q: 0.147 M
    K: 0.147 M
    V: 0.147 M
    Projection: 0.148 M
  }
  LayerNorm2: 0.001 M
  FFN: 1.182 M {
    Linear1: 0.59 M
    Linear2: 0.59 M
  }
}
```

```text
LCM Block: 0.188 M {
  LayerNorm1: 0.001 M
  Local Aggregation Layer: 0.113 M {
    Down MLP: 0.074 M
    Up MLP: 0.038 M
  }
  LayerNorm2: 0.001 M
  FFN: 0.074 M {
    Linear1: 0.037 M
    Linear2: 0.037 M
  }
}
```

### **@Q5 - Open source.** Of course! We are dedicated to advancing the point cloud field and plan to make our work fully open source. We are currently organizing the code and checkpoints, and we aim to release them within the next few weeks. By making these resources available, we hope to facilitate further research and experimentation, and contribute to the broader academic and practical applications of our methods. --- Rebuttal Comment 1.1: Title: Reviewer's Response Comment: Thanks to the authors for their careful responses. All my concerns have been addressed. In return, I will firm up my positive assessment of the work and upgrade the rating. As a side note, it is recommended that the authors add some excellent work on self-supervised point clouds. [1] Wu, et al. Self-supervised intra-modal and cross-modal contrastive learning for point cloud understanding. IEEE TMM 2023. [2] Liu, et al. Inter-modal masked autoencoder for self-supervised learning on point clouds. IEEE TMM 2024. --- Reply to Comment 1.1.1: Title: Response to Reviewer Comment: Thank you very much for your positive feedback. We are delighted that our responses have addressed your concerns. We also appreciate your recommendation of the excellent self-supervised learning works for point clouds, CrossNet[1] and Inter-MAE[2]. Both are outstanding cross-modal self-supervised learning methods, utilizing 2D-assisted contrastive learning and masked reconstruction, respectively, to learn generalizable 3D representations.
In the next version, we will provide a detailed comparison between these works and ours. Your valuable suggestions have significantly contributed to the improvement of our work, and we sincerely thank you once again! [1] Wu, et al. Self-supervised intra-modal and cross-modal contrastive learning for point cloud understanding. IEEE TMM 2023. [2] Liu, et al. Inter-modal masked autoencoder for self-supervised learning on point clouds. IEEE TMM 2024.
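The parameter breakdown reported in @Q4 above can be reproduced from first principles. The sketch below makes assumptions not stated in the reply: an embedding width of 384, bias-free Q/K/V projections, a biased output projection, and the standard 4x FFN expansion.

```python
# Hypothetical reconstruction of the Transformer block parameter counts from
# @Q4; assumes embedding width d = 384, no bias on Q/K/V, bias on the output
# projection, and a 4x FFN expansion (these are assumptions, not quoted facts).
d = 384

layernorm = 2 * d                            # weight + bias: 768 ~ 0.001 M
q = k = v = d * d                            # 147,456 each ~ 0.147 M
proj = d * d + d                             # 147,840 ~ 0.148 M
ffn = (d * 4 * d + 4 * d) + (4 * d * d + d)  # 1,181,568 ~ 1.182 M

total = 2 * layernorm + q + k + v + proj + ffn
print(f"{total:,}")  # 1,773,312 ~ 1.773 M, matching the quoted block total
```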
Summary: This paper first proposes a locally constrained compact encoder, which leverages static local geometric constraints to aggregate the most relevant information for each patch token, achieving an elegant balance between performance and efficiency. Moreover, this paper also proposes a locally constrained Mamba-based decoder for masked point modeling. The authors verify the effectiveness and efficiency of the proposed model on multiple pre-training strategies and downstream tasks. Strengths: 1. The model LCM proposed by the authors is novel and very efficient, achieving leading performance with only 2.7M parameters, which is much lower than existing mainstream models. 2. The authors provide a rational explanation for the model design, i.e., an encoder design based on redundancy reduction and a decoder design based on mutual information maximization. 3. Extensive experiments show the effectiveness of the proposed method. 4. The paper is well-organized. The tables, figures and notations are clear. Weaknesses: Some experimental comparisons are insufficient, and certain details are not clearly described. 1. I noticed that the authors only compared the results of the proposed method with PointGPT-S (NeurIPS 2023) and PointGPT-B. In fact, PointGPT-L offers more powerful performance. Can the author provide a specific explanation for this? 2. The ShapeNet dataset is relatively small, with a limited number of 3D models. Could the author provide results of the proposed method pre-trained on a larger dataset, such as the unlabeled hybrid dataset used in PointGPT (NeurIPS 2023)? This would have a significant impact on demonstrating the generalization ability of the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors' description of the limitations of their work is reasonable and effective solutions are given. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **@Q1 - Comparison with PointGPT-L.** Thank you for your question. We understand the importance of comparing our method with all relevant benchmarks, including PointGPT-L, to provide a comprehensive evaluation. PointGPT[1], a point cloud pretraining approach published at NeurIPS 2023, proposed extending the generative pretraining techniques from natural language processing (NLP) to point clouds. By partitioning and sorting point patches, the method feeds point embeddings into a transformer decoder for autoregressive prediction. Furthermore, a dual masking strategy was introduced to enhance the learned representations. The primary reason we initially compared our LCM model only with PointGPT-S and PointGPT-B, and not with PointGPT-L, is due to a significant difference in model size and computational complexity. PointGPT-L has a substantially larger number of parameters compared to our LCM model, which focuses on achieving high efficiency. Specifically, PointGPT-L's parameter count is significantly higher, leading to greater computational load and resource requirements. In contrast, our LCM model is designed to be lightweight and efficient, with a parameter count that is only a fraction of PointGPT-L's. To address this, we have now included a comparison of our LCM model with PointGPT-L. As shown in the table below, while PointGPT-L does achieve higher accuracy, our LCM model provides a highly efficient alternative, using significantly fewer parameters and computational resources. Specifically, our model's parameter count is only 0.75% of PointGPT-L's, and the computational load is just 1.75% of theirs. This comparison highlights the trade-offs between performance and efficiency, with our LCM model achieving approximately a 100-fold improvement in efficiency. We hope this explanation clarifies our initial decision and provides a comprehensive understanding of the trade-offs involved. 
| | #Params(M) | FLOPs(G) | OBJ-BG | OBJ-ONLY | PB-T50-RS |
| :------------ | :------------ | :------------ | :------------ | :------------ | :------------ |
| PointGPT-L | 360.5 | 74.2 | 95.7 | 94.1 | 91.1 |
| LCM | 2.7 | 1.3 | 95.2 | 93.1 | 89.4 |

### **@Q2 - Results on the unlabeled hybrid dataset (UHD).** Thank you for your question and suggestion. We acknowledge that the ShapeNet dataset is relatively small and may not fully demonstrate the generalization capabilities of our proposed method. To address this concern, we have conducted additional experiments by pre-training our model on a larger dataset, specifically the unlabeled hybrid dataset used in PointGPT[1]. This dataset is significantly larger and more diverse, providing a more robust evaluation of the generalization abilities of our method. The results from these additional experiments are presented in the table below. The findings show that our proposed method maintains strong performance and generalization capabilities even when pre-trained on a larger and more diverse dataset. The results underscore the effectiveness of our approach in leveraging large-scale data to learn comprehensive and robust representations. We believe these additional experiments will provide a clearer understanding of our method's potential and its applicability to a wider range of 3D models and datasets. Thank you for your valuable feedback, which has helped us to strengthen our evaluation.

| | OBJ-BG | OBJ-ONLY | PB-T50-RS |
| :------------ | :------------ | :------------ | :------------ |
| ShapeNet55 | 95.18 | 93.12 | 89.35 |
| UHD | 95.53 | 93.63 | 90.04 |

[1] Chen, et al. "Pointgpt: Auto-regressively generative pre-training from point clouds." NeurIPS, 2023. --- Rebuttal Comment 1.1: Title: After the rebuttal Comment: I thank the authors for their careful rebuttals. My concerns are well addressed. Thus, I keep my original rate to accept this paper.
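The efficiency ratios quoted in @Q1 above (LCM's parameter count and FLOPs relative to PointGPT-L) follow directly from the comparison table:

```python
# Efficiency ratios from the @Q1 comparison table:
# LCM: 2.7 M params / 1.3 G FLOPs; PointGPT-L: 360.5 M params / 74.2 G FLOPs.
param_ratio = 2.7 / 360.5 * 100
flop_ratio = 1.3 / 74.2 * 100
print(f"params: {param_ratio:.2f}%, FLOPs: {flop_ratio:.2f}%")
# params: 0.75%, FLOPs: 1.75% -- consistent with the quoted figures and the
# "approximately 100-fold" efficiency claim
```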
Summary: To address the issues of quadratic complexity and constrained decoders in existing Transformer-based masked point modeling methods, this paper proposes a locally constrained compact point cloud model. First, to tackle the complexity problem, the paper presents an observational experiment with top-K attention to demonstrate the importance of redundancy reduction in point cloud analysis. Based on this redundancy-reduction idea, the paper then introduces a locally constrained compact encoder by replacing the self-attention layer with a local aggregation layer. Finally, to overcome the limited reconstruction potential of the original Transformer decoder, the paper proposes a locally constrained Mamba decoder and demonstrates its superiority through both experimental results and information-theoretic analysis.

Strengths:
1) The motivation of this paper is strong, and the efficiency and reconstruction-potential issues of Transformers that it raises are important in point cloud analysis;
2) The paper presents a locally constrained compact point cloud model (LCM) in the field of point cloud self-supervised learning. This model design is quite novel.
3) The proposed model is universal and achieves impressive results, reaching state-of-the-art performance with a minimal number of parameters.
4) The paper also offers many well-reasoned and insightful explanations for observed phenomena.

Weaknesses:
1) In the model design, as shown in Figure 5, the proposed method seems to perform KNN at each layer. KNN is in fact a computationally intensive operation; in practical applications, it could significantly increase the model's actual inference time. I hope the authors can provide a reasonable explanation or experimental validation to address this concern.
2) The results in Table 1 appear to show the highest performance. However, due to the influence of random seeds, this is not sufficient to reflect the true effectiveness of the model.
The authors need to explain how these experimental results were selected, and also report the average performance over multiple trials.
3) There is an error in the citations: both the Transformer and PointMamba references in Table 1 under "supervised learning only" point to the paper "PointMamba: A simple state space model for point cloud analysis."
Technical Quality: 4 Clarity: 3
Questions for Authors: My concerns and suggestions have already been outlined in the weaknesses section. I hope the authors can provide further explanations on these issues.
Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4
Limitations: The authors adequately explain the limitations of their work. The static importance modeling may somewhat restrict the model's actual performance. I hope this limitation can be addressed in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
### **@Q1 - The computational cost of KNN.**
Thank you for your question and for highlighting an important aspect of our model's design. In fact, the computational cost of KNN in LCM is very low. While Figure 5 may give the impression that KNN is performed at each layer, our method actually computes the KNN only once at the beginning, not at each layer. This design choice significantly reduces the computational cost associated with KNN operations. Specifically, the low cost is due to the following factors:
* **Single KNN Computation:** In our approach, the geometric neighbor properties of each point are computed once at the start of the model using KNN. These properties, encapsulated in the graph indices, are then fixed and remain consistent across all layers of the model. After the initial KNN computation, the graph indices are reused in all subsequent layers, eliminating the need for repeated KNN calculations. By sharing the precomputed graph indices across layers, we maintain the enforcement of local constraints while minimizing computational overhead.
* **Low Cost of a Single KNN Computation:** Although KNN has a time complexity of O(n²), its impact in practice is small because we compute it only once. Moreover, in common point cloud classification tasks, the number of point patches (n) is typically quite small, ranging from 64 to 128. This relatively small value of n, coupled with a minimal number of nearest neighbors (K = 5 in our setting), keeps the computational cost low.

We conducted experiments on the PB-T50-RS variant of the ScanObjectNN dataset, comparing inference time and classification accuracy under three conditions: using shared random indices (without KNN computation), using shared KNN indices (**Ours**), and using individual KNN indices for each layer.
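To make the shared-index design concrete, here is a minimal sketch (not the authors' implementation; the array shapes, the toy `local_aggregation` layer, and all names are illustrative) of computing the KNN indices once and reusing them across every layer:

```python
import numpy as np

def knn_indices(x, k):
    # x: (n, 3) patch-center coordinates; returns (n, k) nearest-neighbor indices.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # O(n^2) pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude each point itself
    return np.argsort(d2, axis=1)[:, :k]

def local_aggregation(feats, idx):
    # Toy stand-in for a local aggregation layer: mean over the K neighbors.
    return feats[idx].mean(axis=1)

rng = np.random.default_rng(0)
centers = rng.standard_normal((64, 3))   # 64 point patches, as in the rebuttal
feats = rng.standard_normal((64, 32))
idx = knn_indices(centers, k=5)          # computed ONCE, K = 5 as stated above
for _ in range(12):                      # the same indices are reused by all layers
    feats = local_aggregation(feats, idx)
```

The point of the design is that the O(n²) distance computation happens a single time, while every layer only does cheap gathers with the precomputed `idx`.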
As shown in the table below, compared to the random aggregation method without KNN, using shared KNN indices results in only a 0.005 millisecond increase in inference time while improving accuracy by 1.88%. Additionally, compared to performing independent KNN computations at each layer, our method achieves the same accuracy with a 0.015 millisecond speedup.

| | Inference Time | ScanObjectNN |
| :--- | :--- | :--- |
| w/ Random K Indices (w/o KNN) | 0.641 ms | 87.47 |
| w/ Shared KNN (Ours) | 0.646 ms | 89.35 |
| w/ Individual KNN | 0.661 ms | 89.35 |

### **@Q2 - Average performance.**
Thank you for your insightful question. We appreciate your concern regarding the influence of random seeds on the reported performance. To ensure a fair comparison, we conducted our experiments following the standard practices in the field (as used in Point-BERT [1], MaskPoint [2], Point-MAE [3], Point-M2AE [4], ACT [5]). Specifically, for each point cloud classification experiment, we used eight different random seeds (0-7) to ensure the robustness and reliability of our results. The performance reported in Table 1 represents the highest accuracy achieved across these eight trials for each model configuration.

However, to provide a more comprehensive evaluation of the models' true effectiveness, we also calculated the average performance over these eight trials. This average, which accounts for variability due to random seed selection, offers a more reliable assessment of the models' general performance. In the three tables below, we report the average classification accuracy of each model on the ScanObjectNN dataset, averaged across the eight random seed runs, which demonstrates the stability and consistency of our method. Our LCM model not only achieves the highest peak performance but also shows superior average performance, consistently outperforming existing transformer-based methods.
This analysis confirms the true effectiveness and reliability of our approach.

**(1) ScanObjectNN (OBJ-BG)**

| | Point-BERT [1] | MaskPoint [2] | Point-MAE [3] | Point-M2AE [4] | ACT [5] |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Transformer | 92.48 | 92.17 | 92.67 | 93.12 | 92.08 |
| **LCM (Ours)** | 93.55 | 93.31 | 94.51 | 93.83 | 94.13 |

**(2) ScanObjectNN (OBJ-ONLY)**

| | Point-BERT [1] | MaskPoint [2] | Point-MAE [3] | Point-M2AE [4] | ACT [5] |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Transformer | 91.60 | 91.69 | 92.08 | 91.22 | 91.70 |
| **LCM (Ours)** | 92.43 | 91.98 | 92.75 | 92.41 | 92.66 |

**(3) ScanObjectNN (PB-T50-RS)**

| | Point-BERT [1] | MaskPoint [2] | Point-MAE [3] | Point-M2AE [4] | ACT [5] |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Transformer | 87.91 | 87.65 | 88.27 | 88.06 | 87.52 |
| **LCM (Ours)** | 88.57 | 87.75 | 88.87 | 88.38 | 88.57 |

### **@Q3 - Citation error.**
Thank you for your thorough review. We will address this issue in the next version.

[1] Yu, et al. "Point-BERT: Pre-training 3D point cloud transformers with masked point modeling." CVPR, 2022.
[2] Liu, et al. "Masked discrimination for self-supervised learning on point clouds." ECCV, 2022.
[3] Pang, et al. "Masked autoencoders for point cloud self-supervised learning." ECCV, 2022.
[4] Zhang, et al. "Point-M2AE: Multi-scale masked autoencoders for hierarchical point cloud pre-training." NeurIPS, 2022.
[5] Dong, et al. "Autoencoders as cross-modal teachers: Can pretrained 2D image transformers help 3D representation learning?" ICLR, 2023.

---

Rebuttal Comment 1.1: Comment: Thank you for the authors' response. I have carefully reviewed the reply, and the authors have addressed my concerns to a satisfactory extent. I am inclined to accept this paper and look forward to seeing the corresponding revisions in the final version.
Summary: This paper proposes LCM, a locally constrained compact point cloud model, to improve the efficiency and performance of point cloud processing tasks. It consists of a locally constrained compact encoder and a locally constrained Mamba-based decoder. A locally constrained compact encoder utilizes the proposed local aggregation layer to replace self-attention to achieve an elegant balance between performance and efficiency. A locally constrained Mamba-based decoder introduces LCFFN after each Mamba SSM layer, maximizing the perceived point cloud geometry from unmasked patches. By focusing on local geometric constraints and leveraging SSM, the model achieves a balance between performance and efficiency. Strengths: 1. Introducing a locally constrained compact encoder and Mamba-based decoder is a creative solution that improves both performance and efficiency. 2. The paper offers a thorough explanation of the proposed model, including the local aggregation layers and the integration of state space models, making the methodology clear and reproducible. 3. The paper provides comprehensive experimental results demonstrating the effectiveness of the proposed method across multiple tasks and datasets. Weaknesses: 1. ***[Static Importance Perception]*** The reliance on static local constraints may limit the model’s ability to dynamically identify and focus on important regions of the point cloud, potentially missing critical information. 2. ***[Long-Range Dependency Modeling]*** While the local constraints improve efficiency, they might not capture long-range dependencies as effectively as self-attention mechanisms, which could be a limitation in some applications. 3. ***[Scene-level semantic segmentation]*** It would be better if the author could provide fine-tuning performance on semantic segmentation with scene-level point cloud datasets like ScanNet or ScanNet++ to make the claim stronger. 
Technical Quality: 2 Clarity: 3
Questions for Authors:
1. The authors have validated the effectiveness of the proposed LCM model on scene data through indoor scene detection tasks. However, is the proposed model also effective for other scene tasks, such as indoor semantic segmentation?
2. In the supplementary materials, the authors used the proposed Mamba-based decoder as an encoder to verify the impact of patch order and LCFFN. I have the following two questions: 1) Firstly, the performance listed in Figure 8(a) seems to show a significant disparity compared to the classification performance listed in Figure 2(c) and Table 1. What is the reason for this disparity? 2) Secondly, does this disparity indicate that the effectiveness of the proposed Mamba-based decoder is inferior to that of the Transformer and the proposed LCM encoder?
Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3
Limitations: The limitations presented by the authors are reasonable. However, the solutions to these limitations appear to lack specificity. I would like to ask whether the authors have any concrete ideas regarding "efficient dynamic importance." This is merely to ask whether the authors have any reasonable and specific ideas, without requiring actual experimental results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **@Q1 - Static Importance Perception & Long-Range Dependency Modeling & Limitations.** Thank you for your insightful question. Our current model does have limitations in handling dynamic importance perception and long-range dependency modeling. Our design prioritizes efficiency, which can be at odds with the increased complexity required for capturing dynamic importance and long-range dependencies. This focus on efficiency led us to simplify the model in certain aspects, and as a result, we did not fully integrate mechanisms for dynamic importance perception and long-range dependency modeling in this version of our model. Despite these constraints, the current model has demonstrated significant improvements in performance across various tasks. Nevertheless, we also acknowledge that incorporating dynamic importance perception and long-range dependency modeling could further enhance the model's capabilities, particularly in more complex scenarios. We are actively exploring methods to address these limitations in future work. Specifically, we are investigating the use of approximate nearest neighbor (ANN) algorithms to model dynamic importance and long-range dependencies more efficiently. By leveraging non-exact neighbor queries, we aim to balance the computational cost with the need for more sophisticated modeling, ensuring that we can extend the model's capabilities without compromising on efficiency. ### **@Q2 - The effectiveness of LCM in scene-level semantic segmentation.** To demonstrate the generalization capability of our proposed LCM model on scene-level data, we further validated its effectiveness in the scene-level semantic segmentation task. Specifically, we replaced the Transformer encoder used in Point Transformer V3 [1] with our LCM encoder to create our model for semantic segmentation. 
We trained our LCM model on the three most commonly used indoor scene point cloud datasets: ScanNet [2], ScanNet200 [3], and S3DIS [4], and report segmentation results on their validation sets. For all three datasets, we report the mean Intersection over Union (mIoU) percentages and benchmark these results against previous backbones. As shown in the table below, our LCM model continues to perform well in semantic segmentation tasks across multiple scene datasets. Notably, on ScanNet200 it outperforms the state-of-the-art Point Transformer V3 by 0.9 mIoU, and on the other two datasets its performance is comparable to Point Transformer V3. These results collectively demonstrate the strong generalization capability of our model on scene-level point cloud data.

| Methods | ScanNet | ScanNet200 | S3DIS |
| :--- | :--- | :--- | :--- |
| PointNeXt [5] | 71.5 | - | 70.5 |
| OctFormer [6] | 75.7 | 32.6 | - |
| Point Transformer V1 [7] | 70.6 | 27.8 | 70.4 |
| Point Transformer V2 [8] | 75.4 | 30.2 | 71.6 |
| Point Transformer V3 [1] | 77.5 | 35.2 | 73.4 |
| LCM (Ours) | 77.6 | 36.1 | 73.4 |

### **@Q3 - Mamba-based decoder.**
**1. Explanation of the Performance Difference.** The observed difference primarily arises from the use of different data augmentation strategies during fine-tuning on downstream tasks. In Figure 8(a), all experimental results reflect the classification accuracy on the ScanObjectNN dataset when trained from scratch. Without any data augmentation, our Mamba encoder combined with LCFFN and y-order sorting achieved an accuracy of 83.1%. In contrast, our LCM encoder, trained from scratch using scaling and rotation data augmentation strategies, reached an accuracy of 86.3%. This significant discrepancy is largely attributable to the different data augmentation methods employed.
To ensure a fairer comparison and minimize the impact of data augmentation on performance, we further evaluated the networks without any data augmentation strategies, as reported in the table below.

**2. Comparison of the Effectiveness of the Mamba-based Decoder with the Transformer and LCM Encoder.** We further compared the performance of three different encoder structures on downstream tasks without using any data augmentation: a standard Transformer, our Mamba+LCFFN, and the LCM encoder. As shown in the table, both the LCM encoder and the LCM Mamba architecture used as encoders significantly outperform the standard Transformer architecture, demonstrating the effectiveness of these two architectures. Moreover, comparing the LCM encoder with the LCM Mamba architecture, the LCM encoder proves more efficient and effective, largely due to our redundancy-reduction strategy. Overall, both are highly efficient network architectures; however, the LCM encoder is better suited to the encoder role, while the LCM Mamba architecture is better suited to the decoder role.

| Methods | #Params (M) | OBJ-BG | OBJ-ONLY | PB-T50-RS |
| :--- | :--- | :--- | :--- | :--- |
| Transformer | 22.1 | 87.61 | 87.87 | 82.79 |
| Mamba+LCFFN (y_order) | 12.7 | 88.79 | 88.98 | 83.09 |
| LCM Encoder | 2.7 | 89.85 | 89.11 | 83.38 |

[1] Wu, et al. "Point Transformer V3: Simpler, Faster, Stronger." CVPR, 2024.
[2] Dai, et al. "ScanNet: Richly-annotated 3D reconstructions of indoor scenes." CVPR, 2017.
[3] Rozenberszki, et al. "Language-grounded indoor 3D semantic segmentation in the wild." ECCV, 2022.
[4] Armeni, et al. "3D semantic parsing of large-scale indoor spaces." CVPR, 2016.
[5] Qian, et al. "PointNeXt: Revisiting PointNet++ with improved training and scaling strategies." NeurIPS, 2022.
[6] Peng-Shuai Wang. "OctFormer: Octree-based transformers for 3D point clouds." TOG, 2023.
[7] Zhao, et al.
"Point transformer." ICCV, 2021.
[8] Wu, et al. "Point Transformer V2: Grouped vector attention and partition-based pooling." NeurIPS, 2022.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. I believe that our discussion on the weaknesses of LCM is valuable, and I look forward to seeing more comprehensive solutions in the future. I remain positive about this paper and increase my score to weak accept.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their time and their thoughtful comments and questions. We are pleased to find that: * All reviewers unanimously appreciated the novelty and effectiveness of our work, recognizing it as a creative solution that revolutionizes the point cloud self-supervised learning technique in terms of both methodology and results. * All reviewers commended the thorough theoretical and experimental explanations we provided for the proposed method. * All reviewers acknowledge the significant improvements and promising performance achieved by our approach. * Reviewer y4yX recognizes the practical application value of the LCM. Designing deployable point cloud algorithms has been our constant pursuit. We carefully considered all questions, concerns, and comments provided by reviewers. We attempted our best to address the questions as time allowed. We believe the comments & revisions have made the paper stronger and thank all the reviewers for their help. We provide detailed responses to each review separately, please find individual responses to your questions below.
NeurIPS_2024_submissions_huggingface
2024
Embedding Dimension of Contrastive Learning and $k$-Nearest Neighbors
Accept (poster)
Summary: This paper establishes various asymptotic upper (and some lower) bounds on the dimensionality required so that there is an embedding satisfying various types of ordinal constraints on the pairwise distances. The constraints are either triplet constraints or k-nearest neighbor relations. The paper considers different $\ell_p$ embedding spaces for both types of constraints and derives bounds for each of them. The authors conduct experiments on CIFAR 10 / 100 that support their theoretically established bounds.

Strengths:
Originality - I am not aware of any other work that tackles the question of the minimal dimensionality required for ordinal distance constraints, despite both contrastive learning and k-nearest neighbor graphs being popular tools in machine learning. This makes the contribution of the paper relevant.
Quality - While the paper is very formal, I think its numerous and general findings will be useful for many parts of the machine learning community, especially representation learning.
Clarity - The paper is very theoretical and most of it consists of proofs. This makes it a fairly challenging read. But overall, the authors do a very good job of providing high-level intuitive sketches of each proof before conducting it formally, or of showcasing a weaker but more intuitive version of a stronger statement proved in the appendix. This makes the writing clear.
Soundness - The proofs in the main paper seem sound to me, see questions below. I did not check the proofs in the appendix.

Weaknesses:
- *W1 Missing conclusion / future work section:* The paper ends abruptly after the experiments section. Please use some space to summarize your findings, point out potential limitations, and discuss how to develop this work further.
- *W2 Exact $d=\sqrt{m}$ in experiments:* The authors comment on the qualitatively different behavior if $d>\sqrt{m}$ or $d\leq \sqrt{m}$ in Fig 3.
But they choose powers of 10 for m and powers of 2 for d, so that none of the runs is exactly at $d=\sqrt{m}$. It might make sense to choose the same base for both the exponential range of $m$ and that of $d$, or to resolve the area around $d=\sqrt{m}$ at a finer grain. The corresponding bound only states $d=O(\sqrt{m})$, which can include a constant. Why would one expect the behavior to change exactly at $d=\sqrt{m}$?
- *W3 Discussion of bounds for the approximate setting:* Tables 1 and 2 do a great job at providing an overview of the results in the paper (and field). For context, it would be very useful to also state some bounds for an approximate version of the problem from related work, in which, e.g., only a share of $(1-\alpha)m$ of the constraints needs to be satisfied, or all constraints need to be satisfied up to an additive margin of $\alpha$. This relaxed setting should allow much lower dimensionalities. I am not sure if there are such results. If so, please discuss them. If not, this might go to the conclusion paragraph.

**Minor:**
- *W4 Description of contrastive learning:* Lines 34-37 seem to imply that all forms of contrastive learning operate on ordinal distance constraints over a fixed set of points. This is not quite true: for instance, in self-supervised contrastive learning the constraints are not over a fixed set but over a potentially infinite set of all possible augmentations of all data samples. Moreover, triplets of (anchor, positive sample, negative sample) are not always strictly understood as hard constraints on the distance. Since the negative sample is often chosen uniformly, it might even happen to be equal to the positive sample or the anchor (although only rarely). Often, the triplet is rather to be understood as the pair (anchor, positive) coming from some data distribution that is to be modeled and the pair (anchor, negative) coming from some noise distribution, see Gutmann and Hyvärinen (2010, 2012).
This is not an issue for the paper at large, but does require some rephrasing.
- *W5 Consistency between using big-O notation and not:* In several places, e.g., line 58 and footnote 1, the use of big-O notation is not very consistent. Line 58 speaks of $m\leq n^2$ while the accompanying footnote 1 speaks only of $m = O(n^2)$, which does not preclude $m> n^2$ and only makes an asymptotic statement.
- *W6 Resolution of Figure 3:* The resolution is too low, especially for inspecting the small performance gaps between many of the runs.
- *W7 Experimental validation for the k-NN setting:* I appreciate that the paper is predominantly theoretical. But a brief experiment on the k-NN setting would be nice (and might only appear in the appendix). This could also serve as an even simpler toy setting than the experiment in Fig 3: for instance, simply sample some random points, compute the k-NN graph, and directly optimize the embedding in some embedding space to reproduce the same k-NN graph (instead of optimizing a parametric embedding as in Fig 3).
- *W8 Line 185:* Since the uniformly random choice in line 185 is the reason for the linear independence in Lemma 11, it would be nice to briefly mention this (without giving the proof) to build intuition about the proof sketch further.

**Typo:** Line 269: "for any $[J]...$" --> "for any $j\in [J]...$"

Technical Quality: 3 Clarity: 3
Questions for Authors:
- *Q1:* I did not get the Chernoff bound arguments in lines 236 and 240. Could you please elaborate?
- *Q2:* My understanding of the proof of Lemma 18 is that an embedding is constructed whose entries are indexed by $[L]$ (line 257), so that the embedding is of dimension $O(L) = O(r\log n)$ by Def 15. But the statement of Lemma 18 claims the square of this as the embedding dimensionality. What do I miss?
- *Q3:* I understand the reason for lines 264-265 to be that $|U(C_{j(y)}(x))| = O(\log(n))$ (load property) is larger than $J=O(r)$ (line 256 and Fact 6). But why is $\log(n) > r$?
- *Q4:* In line 270, we seem to need $J=r$ rather than only $J=O(r)$ (see Q3 and W5). - *Q5:* Which basis of the logarithm is chosen in the proof of Cor 19? Since nothing is specified, I assume that it is the natural logarithm or to base 10. But since $m'$ is a power of 2, it seems it should be base 2 to make $log(m')$ an integer. Please clarify. - *Q6:* I do not get the computation in the proof of Cor 19: $r \sum_{i=1}^{\log(m')} 2^i = r * (2^{\log(m')+1} -2) = 2r(m'-1) \neq rm'$. - *Q7:* Line 294 speaks of $R=\Theta(k^2log(n))$ and $\alpha = \Theta(k^{-2})$. But since $p= O(k^{-1})$ (line 300), we have $\gamma = (1-p)p/k= O(k^{-2})$, $\alpha = O(k^{-3})$, and $R=O(k^3log(n))$. Should it perhaps be $\gamma = (1-p)p$? - *Q8:* Please repeat what $w(x,y)$ means in line 317. - *Q9:* I am a bit confused by the term "non-contradictory" in line 59 and Fact 51. In the triplet setting the distance constraint is phrased as $\leq$ not $<$, so the constant embedding always satisfies all constraints. Should these be strict inequalities? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are not explicitly discussed. Possible limitations might be the asymptotic nature of the statements (for which m, k, n do they start to hold)? What are the constants hidden in the big-O notation? Perhaps also the fact that many triplets are generated by sampling at least the negatives, which might let the total number of constraints considered in a real world contrastive learning setting be as high as $m=O(n^2)$, so that the bound $\sqrt{m}=n$ becomes very high. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your thoughtful, detailed, and precise comments. Please see our reply below:

### W1 Missing conclusion / future work section
Thanks, please see the global response above for a proposed conclusion / future work section.

### W2 Exact $d=\sqrt{m}$ in experiments
Thank you! We first would like to clarify that the behavior changes around $\sqrt{m}$ per our theory, not exactly at $\sqrt{m}$. We changed the text to avoid confusion. We expanded experiments which accurately capture the regime around the $d=\sqrt{m}$ threshold. In particular, we consider $m = c \cdot d^2$ for $c=0.5, 0.75, 1, 1.5, 2, 2.5, 3$. The corresponding mean accuracies are $99\%, 99\%, 98\%, 97\%, 96\%, 95\%, 95\%$.

### W3 Discussion of bounds for the approximate setting
Thanks, we've added a discussion of the approximate setting, in which we give a full characterization of your suggested setting. It exhibits a surprising behavior: for $\alpha \ge 1/2$, one dimension always suffices, and for constant $\alpha < 1/2$ it becomes as hard as the "exact" setting. We will be happy to outline it during the discussion period if you are interested.

### W4 Description of contrastive learning
Thank you, we clarified that in the paper we consider one of the most common forms of contrastive learning.

### W5 Consistency between using big-O notation and not
Thank you, we replaced $O(n^2)$ with $n^2$ in the footnote. Generally in this paper, we are interested in the asymptotic behavior.

### W6 Resolution of Figure 3
Thank you, we added figures with higher resolutions.

### W7 Experimental validation for the k-NN setting
We first would like to note that our construction for the upper bound for k-NN is theoretical - even for $k=1$ and mild $n$, the dimension significantly exceeds a trivial bound of $n$.
In the general response, we show the experiments for k-NN using a setup similar to that from the paper, with the objective of reconstructing the original k-NN.

### W8 Adding a line of intuition for linear independence
Thank you, we added the intuition that random perturbation is used to preserve independence.

### Limitations
Thank you, we discuss the limitations in the conclusion (please refer to our general response).

### Constants in O-notation
For the $\ell_2$ construction, the constants in O-notation are within a factor of 4. For the k-NN construction, the hidden constants are very large, which makes it mainly of theoretical interest. We specified in the conclusion that the k-NN construction can likely be improved.

### Quadratic number of constraints
It's indeed the case that the dimension might be as high as $\Theta(n)$. But note that this is unavoidable in the worst case: using contrastive samples, one might recover full comparison information, and [CI24] show that, to preserve this information, $n/2$ dimensions are required in the worst case.

## Technical Questions

> Q1. Chernoff bound in 236 and 240

In 240, we have $L$ independent indicator random variables $A^j(x,y)$, each occurring w.p. $\Omega(1/r)$. Denote $X = \sum_j A^j(x,y)$. The expectation $E[X]$ is at least $E[X] = \sum_j E[A^j(x,y)] = \Omega(\log{n})$ (since $L = \Theta(r\log{n})$). By Chernoff, $Pr[|X - E[X]| \geq E[X]/2] < 2e^{-E[X]/12}$. In particular, $Pr[X = 0] < 2e^{-E[X]/12}$. Taking $L$ with a sufficiently large constant, we get $Pr[X = 0] < 1/poly(n)$.

In 236, fix $x$. For $N^-(x) = \{y_1,...,y_{O(r)}\}$ we define $O(r)$ independent indicator variables $X_1,...,X_{O(r)}$, where $X_i = 1$ if $y_i$ and $x$ have the same color, and $X_i = 0$ otherwise. Denote $X = \sum_i X_i$. Notice that $E[X] = O(1)$. By Chernoff, $Pr[X > c\log{n}] \leq Pr[X > (1+(c/E[X])\log{n})E[X]] < e^{-(c\log{n}/2)} < 1/poly(n)$ (here $c$ is the constant of the load).
Therefore, it holds for all $x$ w.h.p. by union bound.

> Q2. Embedding dimension in Lemma 18

In the NCC embedding, we have $L = \Theta(r \log{n})$ blocks, where each block is a standard unit vector of dimension $\Theta(r\log{n})$. Intuitively, the reason each unit vector is taken from this dimension is that we need $\Theta(r\log{n})$ distinct standard unit vectors.

> Q3. Do you require $|U(C_{j(y)}(x))| > J$?

The only thing we require in those lines is that $|U(C_{j(y)}(x))| > 1$, because we only need to pick some vector distinct from that of $y$ (and since $j(y) \in J$, we are guaranteed all other backwards neighbors choose a different vector for this block).

> Q4, Q6, Q7. In line 270, we seem to need $J=r$ rather than only $J=O(r)$. In Cor 19: $\sum_{i=1}^{\log{m'}} 2^i = 2(m'-1)$ and not $m'$. In line 294, $\gamma$ should be $\gamma = (1-p)p$.

Thank you for pointing out these issues. These are typos that arose while refactoring the proofs. We fixed them, which did not affect correctness or the final result.
* Q4: In Lem 18, the exact distance should be $L - 1$ or $L$ (not $r-1$ or $r$ as written) - each $i \in [L] \setminus \{j(y)\}$ contributes $1$ to the distance (see lines 257-262 and 263-265).
* Q6: we fixed the sum.
* Q7: $\gamma$ should indeed be $\gamma = p(1-p)$. We also changed $\gamma$ to $\gamma/k$ in a few expressions due to this typo. This doesn't affect the dependence on $k$ in the final bound.

Q4, Q6, Q7 are now fixed.

> Q5. Basis of the log in Cor 19

By $\log m'$ we denote the binary logarithm of $m'$ (i.e. $\log_2$). We added the base of the logarithm in this place in all equations.

> Q8. Repeating what $w(x,y)$ means in line 317

We added a clarification that "$w(x,y)$ is the ranking of edge $\{x, y\}$ if the edges are sorted by the decreasing order of distances".

> Q9. Should inequalities in line 59 and Fact 51 be strict?

Thank you! The inequalities between distances in the paper are strict. $\le$ is a typo, which we fixed.
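For reference, the standard multiplicative Chernoff bound invoked in the Q1 answers above (a textbook statement, not taken from the paper under review) can be written as:

```latex
% Multiplicative Chernoff bounds for independent indicator variables
% X_1, ..., X_L with X = \sum_i X_i and \mu = E[X]:
\Pr\bigl[X \ge (1+\delta)\mu\bigr] \le \exp\!\Bigl(-\frac{\delta^2 \mu}{2+\delta}\Bigr)
  \quad (\delta \ge 0),
\qquad
\Pr\bigl[X \le (1-\delta)\mu\bigr] \le \exp\!\Bigl(-\frac{\delta^2 \mu}{2}\Bigr)
  \quad (0 < \delta < 1).
```

Setting $\delta = 1/2$ in the common two-sided form $\Pr[|X-\mu| \ge \delta\mu] \le 2e^{-\delta^2\mu/3}$ (valid for $0 < \delta < 1$) yields exactly the $2e^{-E[X]/12}$ bound used in the line 240 argument.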
Please let us know whether we addressed your concerns, and thank you again for your comprehensive review!

---

Rebuttal 2: Comment: Thank you very much for your detailed response, which clarified many of my issues. I have a few remaining follow-up questions / comments.

**W3:** This dependence on $\alpha \leq 0.5$ sounds intriguing! I'd be interested in a reference and suggest adding this for context to the background section of the paper.

**W7:** Can you give rough guidance on the $(n,k)$ regime in which your kNN bound would be an improvement over the trivial bound of $n$? I.e. how large is the constant in the big-O notation? I appreciate the additional experiments on the $k$-NN graph. Perhaps a table showing the required dimension, like the one you provide in the general comment for $\ell_2$ embeddings, would be useful.

**Q1 Chernoff bounds:** I got the argument for line 240. Unfortunately, I am not very familiar with Chernoff bounds, and for line 236 I only found the statement $P(X \geq (1+\delta)E(X))\leq e^{-\delta^2E(X)/(2+\delta)}$ online, which looks similar to what you use but is not quite the same. Could you please state the version of the Chernoff bound that you are using and a reference?

**General comment on $\ell_2$ embeddings:** I am surprised that the required dimension decreases with $n$ for certain $m$-values. Could you please give some intuition why this happens?

---

Rebuttal Comment 2.1: Title: Reply: W3 Comment: Thank you for the fast reply! We are happy to elaborate on these points. Please let us know if you have any questions remaining. If we addressed all your questions, we would be grateful if you could reconsider the score.

## W3: This dependence on $\\alpha \\le 0.5$ sounds intriguing! I'd be interested in a reference and suggest adding this for context to the background section of the paper.
In the following, for simplicity, we changed the meaning of $\\alpha$ compared to your review: we are interested in satisfying an $\\alpha$-fraction of constraints, instead of a $(1-\\alpha)$-fraction. Here is what we can say quickly (we believe this can be improved with more work):

* For any set of $m$ triplet constraints, a random one-dimensional $\\ell_p$ embedding satisfies $m / 2$ constraints in expectation; hence there exists an embedding which satisfies at least $m/2$.
* Let $\\alpha^* \\approx 0.77$ be the root of the equation $H(x) = x$, where $H$ is the binary entropy function. Let $\\alpha$ be a constant greater than $\\alpha^*$. Then for any $m$, there exists a set of $m$ constraints so that satisfying at least $\\alpha m$ constraints in an $\\ell_2$ embedding requires dimension at least $\\Omega(\\sqrt{m})$.

**Satisfying $\\alpha m$ constraints for constant** $\\alpha > \\alpha^*$

We use the following fact (see Section 3 of [AAE+24] for more details):

**Fact.** Consider a set $S$ of $m$ triplets $\\{(x_i, y_i, z_i)\\}_{i=1}^m$. Then the number of sets $T \\subseteq S$ such that there exists an embedding satisfying the following conditions is at most $(8 e m / (nd))^{nd}$.

* The embedding lies in the $d$-dimensional $\\ell_2$ space.
* For each triplet $(x_i, y_i, z_i)$ from $T$, the correct constraint according to the embedding is $(x_i, y_i^+, z_i^-)$.
* For each triplet $(x_i, y_i, z_i)$ from $S \\setminus T$, the correct constraint according to the embedding is $(x_i, z_i^+, y_i^-)$.

In other words, for fixed $n$ and $d$, the number of different labelings (i.e. choices of triplets for which the first candidate is the closest to the anchor) *we can achieve* depends on $m$ polynomially, not exponentially. This means that, when $m$ is sufficiently large, the number of achievable labelings is less than $2^m$. Hence, some of the labelings are not achievable, which is used by [AAE+24]. Our idea is similar.
This is the outline:

* For each subset of triplets of size $\\alpha m$, there are at most $(8 e \\alpha m / (nd))^{nd}$ achievable labelings. Meaning, if we choose a random labeling, the probability that this labeling is achievable is at most $(8 e \\alpha m / (nd))^{nd} / 2^{\\alpha m}$. We choose $m$ so that this probability is less than $1 / {m \\choose \\alpha m}$.
* Let's choose a random labeling of the $m$ triplets, and look at the induced labeling of each subset of $\\alpha m$ triplets. For each individual subset, the probability that the induced labeling is achievable is less than $1 / {m \\choose \\alpha m}$. By the union bound, the probability that any of the induced labelings is achievable is less than $1$.
* Hence, by the probabilistic argument, there exists at least one labeling of the $m$ triplets so that none of the induced labelings of any subset of $\\alpha m$ triplets is achievable. In other words, if we choose constraints according to this labeling, then for any embedding, there is no subset of $\\alpha m$ satisfied constraints.

It remains to show that for $\\alpha > \\alpha^*$, there exists $m$ such that
$$ (8 e \\alpha m / (nd))^{nd} / 2^{\\alpha m} < 1 / {m \\choose \\alpha m} \\iff (8 e \\alpha m / (nd))^{nd} < 2^{\\alpha m} / {m \\choose \\alpha m}, \qquad(*) $$
and for that we want the right-hand side to grow exponentially in $m$. We use the well-known fact that
$${m \\choose \\alpha m} = 2^{m H(\\alpha)} \\cdot poly(...),$$
where $H$ is the binary entropy function. Since we want $2^{\\alpha m} / 2^{m H(\\alpha)} = 2^{m(\\alpha - H(\\alpha))}$ to grow exponentially in $m$, we need to guarantee that $\\alpha > H(\\alpha)$, which holds for $\\alpha > \\alpha^*$. For $m = cnd / (\\alpha - H(\\alpha))$ for some constant $c$, we have
$$(8 e \\alpha m / (nd))^{nd} = (8ec \\alpha / (\\alpha - H(\\alpha)))^{nd}$$
and
$$2^{\\alpha m} / {m \\choose \\alpha m} = 2^{cnd} \\cdot poly(...),$$
and the desired inequality $(*)$ is satisfied for some constant $c$ (depending on $\\alpha$).
Finally, similarly to the paper, by choosing $n = \\sqrt{m}$, we get an $\\Omega(\\sqrt{m})$ lower bound on the dimension.

**Notes.** There is a gap between $1/2$ and $\\alpha^* \\approx 0.77$. Most likely, one can improve the $\\alpha^*$ bound, since the union bound is too coarse.

---

Rebuttal Comment 2.2: Title: Reply: Other Questions Comment:

> W7: Can you give rough guidance on the $(n,k)$ regime in which your kNN bound would be an improvement over the trivial bound of $n$? I.e. how large is the constant in the big-O notation? I appreciate the additional experiments on the $k$-NN graph. Perhaps a table showing the required dimension like you provide in the general comment for $\\ell_2$ embeddings would be useful.

We note that the constants in the proof (including multiple 100's) were chosen for the sake of simplicity of the calculations, leading to a very large total constant. We believe that the constants can be decreased so that the final constant is approximately $10^4$. In this case, the final dependency would be $10^4 k^7 \\log^3 n$, which for very small $k$ might make it applicable for $n$ of order $10^8$-$10^9$. With some further work, one might be able to improve the constant even further, but we believe that the more important (and closely related) question is improving the dependence on $k$.

Overall, we would like to emphasize that the main contribution for the k-NN setting is theoretical, showing that the dependence on $n$ is at most poly-logarithmic. This is very surprising, since, intuitively, k-NN still requires $n^2$ contrastive constraints: in particular, one needs to compare the closest neighbor with all other neighbors. In general, as our paper shows, $n^2$ constraints might require $\\Omega(n)$ dimension, but we show that these constraints have a sufficiently good structure to avoid a linear dependence on $n$.
We don't claim that the $k$-NN construction is practical as is, and it's very likely that the dependence on all parameters can be improved substantially, which we leave as future work. In particular, one of the main reasons for the large dimension is the randomized sampling scheme used to obtain Lemma 22. It is likely that a simpler deterministic scheme exists (which potentially might not require $\\alpha$-designs, further simplifying the construction), which would probably give a better dependency both on $k$ and on the constants.

> Q1 Chernoff bounds. I got the argument for line 240. Unfortunately, I am not very familiar with Chernoff bounds, and for line 236 I only found the statement $P[X \\geq (1+\\delta)E[X]]\\leq e^{-\\delta^2 E[X]/(2+\\delta)}$ online, which looks similar to what you use but is not quite the same. Could you please state the version of the Chernoff bound that you are using and a reference?

We had been aiming to simplify the presentation, which resulted in some omitted details. We use the standard Chernoff bound $P[X \\geq (1+\\delta)E[X]] \\leq e^{-\\delta^2 E[X]/(2+\\delta)}$, and below we show the complete derivation. We have
\\begin{align*} P[X > c \\ln n] &= P[X > \\frac{c}{E[X]} \\ln n \\cdot E[X]] \\\\ &= P[X > (1 + (\\frac{c}{E[X]} \\ln n - 1)) \\cdot E[X]] \\end{align*}
Next, we use the referenced Chernoff bound with $\\delta = \\frac{c}{E[X]} \\ln n - 1$. Note that for $n \\ge e$ and $c \\ge 3 E[X]$, we have $\\delta \\ge \\frac{3 E[X]}{E[X]} \\ln e - 1 = 2$, and hence $2 + \\delta \\le 2 \\delta$ and $\\delta^2 / (2 + \\delta) \\ge \\delta / 2$. Finally, by the Chernoff bound, we have
\\begin{align*} P[X > c \\ln n] &\\le e^{-\\delta E[X] / 2} \\\\ &\\le e^{-(\\frac{c}{E[X]} \\ln n - 1) E[X] / 2} \\\\ &= n^{-c/2} \\cdot e^{E[X] / 2} \\end{align*}
Since $E[X] = O(1)$ (as a sum of $O(r)$ i.i.d.
random variables with expectation $1/r$), the error probability is $O(n^{-c/2})$, which can be made arbitrarily small by an appropriate choice of the constant $c$.

> General comment on $\\ell_2$ embeddings: I am surprised that the required dimension decreases with $n$ for certain $m$-values. Could you please give some intuition why this happens?

Great question! Recall that the dimension is $O(r)$, where $r$ is the arboricity. Arboricity is a measure of the density of the graph (see also the figures in the global reply). For a fixed $m$, as $n$ increases, the density of the graph naturally decreases, which leads to smaller arboricity, and hence smaller dimension. We'll elaborate on this in the experimental section.

---

Rebuttal 3: Title: More on W3 and W7 Comment: Thank you again for your comments!

> W3. Could you please comment on why you can choose $m$ (or $c$) to make (*) true while maintaining $m \\le n^2$, so that the triplets are not contradictory? I also do not get the last bit on choosing $n = \\sqrt{m}$.

Let us present the construction in a slightly different way, which hopefully answers both of your questions.

**Construction.** Consider a set of $n$ points. On this set of $n$ points, according to Lemma 41, there exists a set of $m = {n - 1 \\choose 2} \\approx n^2 / 2$ triplets $C$, such that for every labeling of $C$ there exists a metric consistent with this labeling. We will show that, in general, $d = \\Omega(n)$ is required so that for any labeling of $C$ there exists an embedding into a $d$-dimensional $\\ell_2$-space. Since for this construction $n = \\Theta(\\sqrt{m})$, this implies that $d = \\Omega(\\sqrt{m})$ is required in general to embed a set of $m$ non-contradictory constraints.

**Proof.** For the sake of contradiction, assume that there is a sufficiently large $n$ such that every labeling of the above set of triplets can be embedded using dimension $d < c n$ for some constant $c < 1$ (to be specified later).
As mentioned in the previous reply, the number of possible labelings consistent with an embedding of dimension $d$ is at most $(8 em / (nd))^{nd} \\approx (4e n / d)^{nd}$, where $\\approx$ ignores low-order terms in the exponent. As we showed in the previous reply, not all labelings are embeddable as long as the above expression is less than
$$2^{\\alpha m} / {m \\choose \\alpha m} \\approx 2^{m(\\alpha - H(\\alpha))} \\approx 2^{n^2 (\\alpha - H(\\alpha)) / 2}.$$
When $d < cn$, we have
$$(4e n / d)^{nd} < (4e / c)^{c n^2} = 2^{n^2 c \\log_2 (4e/c)}$$
for a sufficiently large $n$. Then, if $c \\log_2 (4e/c) < (\\alpha - H(\\alpha)) / 2$ (such $c$ exists), we have that $(4e n / d)^{nd} < 2^{\\alpha m} / {m \\choose \\alpha m}$ for a sufficiently large $n$, implying that not all labelings of $C$ are embeddable. This proves that $d = \\Omega(n)$ is required. Since in this construction $n = \\Theta(\\sqrt{m})$, this means that $d = \\Omega(\\sqrt{m})$ is required in general.

> W7. Thanks for stating the constant! I do not mind the result being largely theoretical, especially with the additional discussion on why the poly-logarithmic dependence is surprising. Nevertheless, I encourage you to state the constant and the $n$-regime in which your bound becomes practically useful in the revision.

Thank you, we will specify the constants and add a short discussion about the constants and the practical regime in the conclusion of the paper.

---

Rebuttal Comment 3.1: Comment: Thanks for clarifying the argument further! Overall, I am happy with the rebuttal and have raised my score to 6.

---

Reply to Comment 3.1.1: Comment: Thank you for reconsidering the score and for your valuable comments!
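The density intuition discussed earlier in this thread (for fixed $m$, larger $n$ means smaller arboricity, hence smaller dimension) can be illustrated with a small script. It uses degeneracy as an easy-to-compute proxy, since arboricity $\le$ degeneracy $\le 2 \cdot$ arboricity $- 1$; the random-graph model and parameters are illustrative, not the paper's setup.

```python
import random
from collections import defaultdict

def degeneracy(n, edges):
    """Compute graph degeneracy by repeatedly removing a
    minimum-degree vertex; it upper-bounds arboricity within
    a factor of 2."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    d = 0
    while alive:
        u = min(alive, key=lambda x: len(adj[x] & alive))
        d = max(d, len(adj[u] & alive))
        alive.remove(u)
    return d

def random_graph(n, m, rng):
    """Uniform random simple graph with n vertices and m edges."""
    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return edges

rng = random.Random(0)
m = 2000  # fixed number of edges (constraints)
degs = {n: degeneracy(n, random_graph(n, m, rng)) for n in (100, 400, 1600)}
print(degs)  # degeneracy (and hence the arboricity bound) drops as n grows
```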
Summary: This paper studies the number of embedding dimensions needed to satisfy triplet or kNN constraints. In particular, theoretical results are derived for various distance measures as well as $L_2$.

Strengths: The paper obtains intuitive and useful results. To this end, it introduces the concept of arboricity and obtains theoretical results.

Weaknesses: Although this paper is a theory paper, I am strongly interested in whether the embedding is actually obtained. Let's say we have 1,000,000 vectors, where the dimensionality of each vector is 1,000. Our task is to preserve the top-500 ranking for each vector. Here, there are no encoders. We have vectors only. Can we practically set a valid $h$ beforehand (without knowing the final validated result)? To compute $F$, what computation do we really need? Is it fast or slow? The current experimental section does not contain the details of such a process.

Technical Quality: 3 Clarity: 2 Questions for Authors: See the weaknesses section. Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Title: Rebuttal Comment: Thank you for your comments.

> ... Whether the embedding is actually obtained ...
> To compute $F$, what computation do we really need?

Yes, all embedding constructions are explicit. The construction for $\\ell_2$ is in Section 2, and the construction for k-NN embeddings is in Section 3.

> Here, there are no encoders.

We would appreciate it if you could clarify what you mean by encoders here.

> Can we practically set a valid $h$ beforehand

We are not exactly sure we understand this question. By $h$, we assume you mean the size parameter in the construction for $\\ell_2$. We don't use this parameter in the k-NN construction, which your question seems to be about.
Summary: The paper "Embedding Dimension of Contrastive Learning and k-Nearest Neighbors" investigates the minimum embedding dimension required for representing datasets labeled with distance relationships in l_p-spaces, focusing on contrastive learning and k-Nearest Neighbor (k-NN) settings. The main findings suggest that the arboricity of the associated graphs significantly impacts the design of these embeddings, providing tight bounds for the popular l_2-space and sufficient or necessary dimensions for other l_p-spaces based on the size of the data (m for contrastive learning and k for k-NN).

Strengths: One of the best papers that I have read in a while! This paper is a standout piece, brilliantly blending strong theoretical insights with robust experimental validation. The authors have done an excellent job of presenting both lower and upper bounds, along with detailed experimental results, which really bring their work to life. It's refreshing to see such a thorough demonstration in the field.

- Theoretical Depth and Rigor: The paper provides a rigorous mathematical framework for determining the embedding dimensions necessary for accurately representing data in different l_p-spaces.
- Practical Relevance: By exploring both contrastive learning and k-NN, the study addresses fundamental issues in machine learning that are highly relevant for practical applications, especially in areas like image recognition and nearest-neighbor classification.
- Comprehensive Analysis: The analysis spans multiple norms (l_2, l_inf, and general l_p) and provides both upper and lower bounds, including tight bounds, making the results robust and versatile for different scenarios.

Weaknesses: Complexity of Proofs: Some of the proofs, especially those involving graph arboricity and its relation to embedding dimensions, are quite intricate and may not be easily understandable to readers without a deep background in theoretical computer science.
Proofs might benefit from additional clarification or simplification for better understandability for non-technical readers. I had a hard time reading and understanding them. (Disclaimer: I still need more time to fully digest them.)

Empirical Validation: While experimental results are mentioned, the paper could strengthen its claims by expanding on these results, particularly how they compare with theoretical predictions across different settings and data scales.

Application for non-expert audiences: The paper presents significant theoretical results that could impact practical machine learning applications, but the presentation style and technical jargon might be inaccessible to non-specialists in theoretical computer science, which limits its potential audience and applicability. The presentation could benefit from some polishing and simplification of the notation, though this is more a matter of preference than a true weakness.

Technical Quality: 4 Clarity: 3

Questions for Authors:
1. Bound Tightness (Section 1.1, Line 61): The paper discusses tight bounds for embedding in l_2 spaces for both contrastive learning and k-NN settings. Could you clarify the specific conditions or dataset characteristics that lead to these bounds being tight, and whether similar tightness is achievable in other l_p spaces?
2. Proof Complexity (Section 2, Lines 157-204): The proof of Theorem 8 relies on complex constructions involving vertex coloring and ordering by arboricity. Could you provide further intuition or simplified explanations to make this more accessible to readers without a deep background in graph theory?
3. Experimental Results (Section 4, Lines 328-347): You mention experiments on CIFAR-10 and CIFAR-100 with various embedding dimensions. Could you elaborate on how these experiments were structured, particularly how the embedding dimensions were selected and their impact on the model performance?
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The main limitations that come to my mind are the limited empirical support and the complexity of the theoretical concepts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your positive feedback, thoughtful review, and suggestions.

### Complexity of Proofs

Thanks a lot for pointing this out; we've substantially updated the exposition by adding illustrations of the key concepts (you can find some examples in the *global response* to all reviewers), as well as illustrations for some of the most complicated parts of Sections 2 and 3. We will also simplify the notation in the technical sections and add intuitive explanations. To give an example, in Section 2, arboricity $r$ implies that there exists an ordering of the vertices so that each vertex has $O(r)$ preceding neighbors. We use it when constructing both parts of the embedding:

* First, during the construction of $F_1$, we want to control the inner products between a vertex and its preceding neighbors. This leads to at most $O(r)$ linear equations, and, by guaranteeing linear independence, it implies that $O(r)$ variables (that is, an embedding dimension of $O(r)$) suffice to guarantee the existence of a solution.
* Second, during the construction of $F_2$, for each vertex we update a single coordinate to normalize the vector. We want to avoid changes in the inner products between neighbors, and hence for each vertex we select a coordinate different from that of any preceding neighbor. Since there are at most $O(r)$ preceding neighbors, we require only $O(r)$ coordinates.

### Empirical Validation

As per your suggestion, we added a major update to the experimental section by focusing on various settings (you can find it in the *global response* above). In particular, the response includes:

* the $\ell_2$ construction based on Section 2;
* k-NN experiments;
* experiments on LLMs.

Please note that in Figures 3a) and 3b), we consider different values of $m$, which corresponds to a wide range of data scales, with as many as $10^6$ samples (in the *global response*, we consider up to $10^7$ samples).
### Application for non-expert audiences We clarified the technical sections to make them more accessible. We have added a conclusion section (see a comment above) to summarize our main contribution, which we believe should be relevant to non-experts. In particular, our results provide guidance for the choice of embedding dimension in contrastive learning and k-NN applications, which we believe can be understood without diving into the theoretical details. ### Bound Tightness The bounds are likely to be tight when the set of constraints is dense, i.e. when there exists a subset of $\\Omega(\\sqrt{m})$ elements so that the samples are focused on these elements. For k-NNs, the lower bounds are likely to be tight in many cases, unless the dataset has some degeneracy. It is likely that the upper bounds for k-NNs are not tight, and the dependence on $k$ can be improved substantially. Please let us know if our response addresses your concerns. --- Rebuttal Comment 1.1: Title: Happy reviewer Comment: Thanks for your hard work, your responses fully addressed my comments. Good luck! --- Reply to Comment 1.1.1: Title: Happy authors Comment: Thanks a lot, very excited to hear that this is one of the best papers you've read in a while and your concerns have been fully addressed! :)
Summary: The paper studies the embedding dimension of the contrastive learning and kNN problems. In the first, we are given n points along with constraints of the form (x, y, z1, z2, ..., zm), which mean that x is closer to y than to z1, z2, ..., zm. Indeed, y is said to be a positive label for x, and z1, z2, ..., zm are negative examples. It has been seen that such learning with + and - examples yields very good quality. So the question studied in this work is: if we were to embed the objects into a vector space such that ||f(x) - f(y)|| < ||f(x) - f(zi)|| for all i, then what embedding dimension do we need? Similarly, k-NN classifiers are used widely in ML. Here, we are given (x, y1, y2, ..., yk) and want to preserve the ordering in the embedded space, i.e., ||f(x) - f(y_i)|| < ||f(x) - f(y_j)|| when i < j. In a generalization, we also require that y1, ..., yk are the kNNs in the embedded space as well.

Main results: The authors show that for l2 metrics, we can embed into sqrt(m) dims for contrastive learning. They show slightly weaker results for the l_infty metric and l_p metrics for general p. More interestingly, for k-NN embeddings, they show embeddability into poly(k) dims for the lp norm and k dims for the l2 norm! The main idea is to use arboricity as a key intermediate concept in showing embeddability. Indeed, a graph has arboricity r if it can be split as a union of r trees (forests in general). A nice graph-theoretic result states that a graph with m edges has arboricity O(sqrt(m)). They then use this on the constraint graph resulting from the constraints placed by the embedding requirements (distances less than other distances). They show that if the constraint graph has arboricity r, it can be embedded into an l2 metric of dimension O(r). Moreover, for the kNN constraint graph they show that the arboricity is O(k), so the result follows. The most relevant prior work is [CI24], which gives some lower bounds. They finally show some empirical results on the CIFAR dataset showing that the obtained theoretical bounds can be met in practice.
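The contrastive requirement summarized above (||f(x) - f(y)|| < ||f(x) - f(zi)|| for all i) is easy to state in code; the following is a generic checker on toy made-up points, not the paper's construction.

```python
import math

def dist(p, q):
    """Euclidean (l2) distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def satisfies(f, constraints):
    """f maps item -> embedding vector; each constraint is
    (x, y, negatives): require ||f(x)-f(y)|| < ||f(x)-f(z)||
    for every negative z."""
    return all(
        dist(f[x], f[y]) < dist(f[x], f[z])
        for x, y, negs in constraints
        for z in negs
    )

# Toy example (made-up points): y is closer to x than both negatives.
f = {"x": (0.0, 0.0), "y": (1.0, 0.0), "z1": (0.0, 3.0), "z2": (-2.0, -2.0)}
print(satisfies(f, [("x", "y", ["z1", "z2"])]))  # True
```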
Strengths: It is very interesting to study embedding dimension. Embeddings are the cornerstone of future ML, and low-dimensional embeddings representative of data are crucial. Contrastive learning and kNN methods are important tools, so preserving their structure is useful. The paper has clean ideas, using arboricity as an intermediate concept. The paper is also well written.

Weaknesses: The experimental section seems slightly less convincing. Indeed, the embeddings you use to validate are not the ones you show theoretical bounds for, right? Or am I misunderstanding? Can we show that such embeddings, which preserve contrastive and kNN properties, are as useful down the line as the original embeddings for various tasks? Some empirical study along these lines would be useful.

Technical Quality: 4 Clarity: 4 Questions for Authors: I have asked my questions in the weaknesses section. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The paper does not have an explicit limitations/future work section and would benefit from one. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your careful review. We've updated the experimental section to provide a more precise validation of the theoretical bounds (please see the global response posted to all the reviewers above). Regarding the downstream applications of our embeddings, the answer to this question is two-fold:

* k-NN embeddings themselves represent the main downstream task in their context. Such embeddings are used for recommendation systems, ranking, etc. Hence, preserving top-k results in this context is exactly aligned with the objective of the downstream task.
* The primary goal of our work is not to develop a scheme for computing embeddings better than the existing deep learning pipelines. It is our goal, however, to provide theoretical guidance for the choice of the embedding dimension in such architectures, and to do this in a theoretically rigorous fashion. Hence, we don't expect the exact embedding constructions proposed in our theoretical bounds to perform better than the empirical embeddings, which might be able to better capture the intrinsic structure of the data. It is only our goal to argue that the dimensionality used by these empirical constructions indeed suffices.

Regarding the conclusion/future work, please refer to the global response. Please let us know whether our response addresses your concerns.
Rebuttal 1: Rebuttal:

## Additional Figures

Based on the reviews, we decided to add figures to clarify the proofs. Please see the attached PDF.

## Conclusion

As suggested by the reviewers, we will use the extra space of the final version to include the following discussion of future work and to reiterate our findings:

> In this paper, we provide bounds on the necessary and sufficient dimension to represent a collection of contrastive constraints of the form "distance $(a,b)$ is smaller than distance $(a,c)$". This is a fundamental question in machine learning theory. In particular, it helps inform the choice of deep learning architectures by providing guidance on the size of the embedding layer. Our experiments illustrate the predictive power of our theoretical findings in the context of deep learning. We also believe that it gives rise to many interesting directions for future work depending on the exact desiderata (e.g. approximate versions, different choices of normed spaces, bicriteria algorithms, agnostic settings, etc.).

> While the distance comparison settings we consider play a central role in contrastive learning and nearest neighbor search, so far there have been no theoretical studies of their embedding dimension. Our work is the first to present a series of such upper and lower bounds in a variety of settings via a novel connection to the notion of arboricity from graph theory. As a follow-up, one can consider an improved embedding construction for k-NN: while the upper bound from Section 3 shows that the dependence on $n$ is at most poly-logarithmic, the dependence on $k$ can likely be improved. Another interesting direction is tighter data-dependent bounds on the dimension: while we provide fine-grained bounds in terms of arboricity - which are potentially much stronger than bounds in terms of the number of edges - they don't necessarily capture properties of the dataset which could lead to sharper bounds.
## Additional Experiments

In this section, we present the experiments based on the reviewers' suggestions. Please note that due to tight time constraints, the scale of the experiments below is limited. We will expand on these experiments in the camera-ready submission.

### Our construction for $\\ell_2$-embeddings

In the table below, we show the dimension achieved by our construction from Section 2. We consider these ranges of $n$ and $m$, since they allow $m$ to span from $m < n$ to $m > n^2$.

| | $m=10^2$ | $m=10^3$ | $m=10^4$ | $m=10^5$ | $m=10^6$ | $m=10^7$ |
|---|---|---|---|---|---|---|
|$n=128$| 8 | 34 | 169 | 256 | 256 | 256 |
|$n=256$| 6 | 19 | 137 | 452 | 512 | 512 |
|$n=512$| 4 | 12 | 81 | 501 | 1024 | 1024 |
|$n=1024$| 4 | 8 | 43 | 362 | 1457 | 2048 |
|$n=2048$| 4 | 6 | 24 | 203 | 1488 | 3956 |

### Experiments for k-NNs

In the next table, we present results for k-NNs for $d=128$ and for various choices of $n$ and $k$. We generate the k-NN data using ground-truth embeddings $e^*$. For each element $u$, let $v_1^{(u)}(e^*), ..., v_{n-1}^{(u)}(e^*)$ be the other elements, sorted by their distance to $u$ according to $e^*$. For each $i \\in [k]$ and $j > i$, we generate contrastive samples $(u, v_i^{(u)}(e^*)^+, v_j^{(u)}(e^*)^-)$, and we train the neural network on this set of samples similarly to Section 4. We denote the trained embeddings as $e$.

For each $n$ and $k$, we report the "accuracy" of preserving the k-NNs. For each vertex $u \\in V$ and $i \\in [k]$, we compute the change of rank of the $i$'th neighbor w.r.t. $u$ in the new embeddings. Formally, we find $j$ such that $v_{j}^{(u)}(e) = v_{i}^{(u)}(e^*)$, which contributes loss $|i - j|$. Finally, we define this "accuracy" as the average loss over all $u \\in V$ and $i \\in [k]$.
||k=1|k=2|k=4|k=8|k=16|k=32|
|---|---|---|---|---|---|---|
|n=10| 0.2 | 0 | 0.175 | 0.55 | - | - |
|n=100| 0.04 | 0.17 | 0.285 | 0.78 | 1.376 | 2.316 |
|n=1000| 0.007 | 0.367 | 0.882 | 1.5 | 2.2983 | 3.336 |

The table shows that, as our theory predicts, the dependence on $n$ is minor.

### Other experiments

1. We expanded the experiments from Figure 3b) to accurately capture the regime around the $d=\\sqrt{m}$ threshold. In particular, we consider $m = c \\cdot d^2$ for $c=0.5, 0.75, 1, 1.5, 2, 2.5, 3$. The corresponding mean accuracies are $99\\%, 99\\%, 98\\%, 97\\%, 96\\%, 95\\%, 95\\%$.
2. We performed experiments similar to those from Figure 3b) on Large Language Model tasks. We sampled $n=1000$ reviews from the Amazon sentiment dataset. We obtain ground-truth embeddings using the `paraphrase-MiniLM-L6-v2` model and train the `stsb-roberta-base` model similarly to the setting from Figure 3. We fix $d=128$ and consider $m = 10^3, 10^4, 10^5, 10^6$. The corresponding accuracies are $100\\%, 100\\%, 85\\%, 85\\%$, again showing the accuracy drop after $m$ significantly exceeds $d^2$.

Pdf: /pdf/0b1e16bbbda70160d65ccdcfab0281ceb52f32a7.pdf
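The rank-displacement measure described above (find $j$ such that $v_j^{(u)}(e) = v_i^{(u)}(e^*)$ and average $|i - j|$) can be sketched as follows; this is a naive re-implementation from the description, not the authors' code.

```python
import math

def ranking(emb, u):
    """Other elements sorted by distance to u under embedding emb."""
    others = [v for v in emb if v != u]
    return sorted(others, key=lambda v: math.dist(emb[u], emb[v]))

def rank_displacement(e_star, e, k):
    """Average |i - j| over all u and i in [k], where the i-th
    neighbor of u under e_star has rank j under e."""
    total, count = 0, 0
    for u in e_star:
        r_star, r = ranking(e_star, u), ranking(e, u)
        for i in range(k):
            j = r.index(r_star[i])
            total += abs(i - j)
            count += 1
    return total / count

# Toy check: identical embeddings give zero displacement.
e_star = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 2.0), 3: (3.0, 3.0)}
print(rank_displacement(e_star, dict(e_star), 2))  # 0.0
```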
NeurIPS_2024_submissions_huggingface
2024
Learning from Snapshots of Discrete and Continuous Data Streams
Accept (poster)
Summary: This paper considers the problem of online learning in the setting of discrete and continuous data streams. It first introduces two novel learning frameworks: the update-and-deploy setting and the blind-prediction setting. The update-and-deploy setting allows a learning algorithm to discretely query a data stream to update a predictor, while the blind-prediction setting involves making predictions without observing the current data value, based only on previous queries and the current timestamp. The authors prove an error bound when a non-adaptive learner uniformly samples queries in the update-and-deploy setting. Then, in the blind-prediction setting, they show that no matter whether the learning algorithm is adaptive or non-adaptive, no concept class is learnable. The paper also discusses the characteristics learning algorithms should have when learning pattern classes in the continuous data stream. Finally, it extends its results to discrete data streams for the second framework for deterministic algorithms.

Strengths:
1. It proposes two novel frameworks for learning under continuous data streams and provides substantial theoretical analysis to show the error bound for the first setting and the findings in the second setting.
2. It has substantial contributions regarding the novel frameworks, the discussion about the different learners needed to learn the pattern class, and the extension of the results to discrete data streams.
3. It proposes a new measure called the query-learning distance (QLD), and the algorithm the authors develop has an optimal mistake bound with respect to the new QLD.

Weaknesses:
1. The paper lacks empirical validation of the results, and there's no comparison with other existing possible solutions or natural extensions of existing solutions to their framework.

Technical Quality: 4 Clarity: 3 Questions for Authors: Is it trivial to characterize what non-adaptive learners should look like when learning pattern classes under continuous data streams?
Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: see weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments on the contributions made by our two learning frameworks and algorithms! ### Weaknesses **Comment 1**: This paper is primarily a learning-theoretic paper, so it is focused on establishing a concrete theory in terms of mathematical statements and proofs. Since the main purpose of our paper is to develop a theory within online learning, we would defer empirical evaluation to future application-oriented research. ### Questions: **Question 1**: This is an important question regarding the learnability of pattern classes under a continuous data stream. When we consider learnability of some pattern class $\mathcal{P}$, this means that we look at all possible learning algorithms, and if there exists a learning algorithm whose mistake-bound is finite on $\mathcal{P}$, then we consider $\mathcal{P}$ to be learnable. The pattern class example in Section 4.2 illuminates an important pattern class that is only learnable under an adaptive learning algorithm. Since we have shown that there exist pattern classes that are not learnable by non-adaptive learners but are learnable by an adaptive algorithm, the criterion for learnability of pattern classes lies beyond the scope of non-adaptive learners. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' feedback on the weaknesses and questions. I do feel that the technical contributions in this paper are solid, but adding experiments would definitely enhance the paper and make the algorithms more concrete. I intend to keep my score; good luck with the submission.
Summary: The paper studies the online learning problem in which the algorithm receives a stream of $(X_i, Y_i)$ pairs, where $X_i$ is an instance and $Y_i$ is the corresponding label, and at each time point $i$ the algorithm is required to make a prediction $\hat{Y}_i$ based on the historical data and, possibly, the current instance. The paper studies the problem in several settings. For the continuous case: 1. In the update-and-deploy setting, the paper proposes a non-adaptive algorithm that achieves a mistake bound of $LD(H)$, where the main idea of the learner is sampling. 2. In the blind-prediction setting, the paper shows that no concept class is learnable by any learning algorithm, adaptive or non-adaptive. 3. The paper considers the case where the algorithms are required to learn pattern classes. It designs a continuous pattern class on which any random-sampling algorithm fails but an adaptive learning algorithm successfully learns $P$ with zero expected error. This shows a separation between the adaptive and non-adaptive cases in this setting. Finally, the paper develops a theory for the discrete case, characterizing a combinatorial quantity, QLD, such that there is a deterministic learning algorithm whose optimal mistake-bound equals it. Strengths: 1. This is a strong theory paper. To obtain these results, the paper considers several algorithmic ideas which are interesting to me (though I am not an expert in the area of online learning). Weaknesses: 1. The writing of this paper is not good enough. When I first read this paper, I felt that some parts were not easy to follow and need to be improved. For example: -- In Section 1.3, several notations are not defined until a later section. -- The definition of the Littlestone dimension is not given. Technical Quality: 3 Clarity: 2 Questions for Authors: See the comments above. 
Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments regarding the strength of our paper! ### Weaknesses: **Comment 1**: Thank you for this comment. After revisiting Section 1.3, we agree that many of the terms, such as $MB_{\mathcal{P}(H)}$ and $LD(H)$, were not properly defined. We will fix this in the revision to make sure that the terms are defined before their usage, in addition to their formal definitions in the later section. **Comment 2**: Thank you for this comment. We will include a brief section in the paper explaining the Littlestone dimension.
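Since the rebuttal promises a brief introduction to the Littlestone dimension, the standard definition (a well-known fact from online learning theory, not taken from this paper) can be sketched as follows:

```latex
% Mistake tree: a complete binary tree of depth d whose internal nodes
% are labeled by instances x \in \mathcal{X}. The class H shatters the
% tree if every root-to-leaf path, encoded by labels (y_1, \dots, y_d),
% is realized by some h \in H on the instances x_1, \dots, x_d along it:
\exists\, h \in H \quad \text{s.t.} \quad h(x_i) = y_i
  \quad \forall i \in \{1, \dots, d\}.
% The Littlestone dimension is the depth of the deepest shattered tree:
\mathrm{LD}(H) = \max \left\{ d : \text{some depth-}d
  \text{ mistake tree is shattered by } H \right\},
% and it equals the optimal mistake bound of deterministic online
% learners in the realizable setting (Littlestone, 1988).
```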
Summary: This paper studies mistake bounds for discrete and continuous labelled data streams in two different coupling settings between labeller and learner (update-and-deploy and blind-prediction). Strengths: The paper is a theory paper that characterizes the learnability of pattern classes. The proof in the main paper is very accessible and easy to follow. I commend the authors for the accessibility of the proof in the main paper. Weaknesses: The paper feels overloaded with results for a 9-page NeurIPS submission. All the results for the adaptive learner case feel "crammed in" and, unless the reader is prepared to read the appendix, cannot be easily understood. I wish the authors had confined themselves to the study of Algorithm 1; while this would yield fewer results, it would have made the paper into a self-contained theory paper with a significant result and a proper conclusion. Technical Quality: 3 Clarity: 3 Questions for Authors: * in lines 148, 166, 199 and 239, shouldn't it be $\in O(t)$ rather than $= O(t)$? * the reader would greatly benefit from an introduction of the Littlestone dimension (LD) * in line 267, shouldn't it say "expected size $\Delta$"? * line 320: "are needed to" should read "are needed for" Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The result does not enable discovering new algorithms for this setting; it explains why a random query strategy works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments about our paper's proofs! We are glad that they were highly accessible! ### Weaknesses: **Comment 1**: Thank you for this feedback. We agree that some structural changes can be made to the paper to make it more readable. The main highlights of this paper are to show that non-adaptive learning algorithms are powerful enough to learn concept classes from a continuous stream of data in the update-and-deploy setting, and that pattern classes require adaptive learning algorithms in either learning framework. One potential revision is to present these two results first with their full proofs, so the reader doesn't have to reference the Appendix to understand the main ideas of the paper. ### Questions: **Question 1**: When describing the growth rate of a function in terms of Big-O notation, both $Q_{\mathcal{A}}(t) = O(t)$ and $Q_{\mathcal{A}}(t) \in O(t)$ convey the same meaning. The expression $Q_{\mathcal{A}}(t) = O(t)$ is considered an "abuse of notation," but it implies that $Q_{\mathcal{A}}(t)$ is of order $O(t)$. **Question 2**: Thank you for this comment. We agree, and we will add a brief section in the paper introducing the concept. **Question 3**: According to line $8$ in the pseudo-code of Algorithm $1$, the next query time $t_q$ is sampled in the following way: $t_q \sim \mathrm{Unif}[t, t + \Delta]$. The query time $t_q$ is selected from a uniform distribution over an interval of exactly size $\Delta$, so the interval size is not an expected size; rather, consecutive query times are spaced out by $\frac{\Delta}{2}$ on average, since the mean offset of a uniform distribution on an interval of size $\Delta$ is $\Delta/2$. **Question 4**: Thank you for this edit. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed comments. I would still suggest using $\in O(t)$, but otherwise I am fine with the answers and stick to my assessment. Good luck with the paper and hopefully see it at NeurIPS this year!
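The uniform query schedule discussed in Question 3 is easy to check empirically. The sketch below simulates the sampling rule $t_q \sim \mathrm{Unif}[t, t + \Delta]$ as described in the rebuttal (an illustrative simulation, not the authors' code) and confirms that consecutive queries are spaced $\Delta/2$ apart on average, so the query count grows linearly in $t$:

```python
import random

def simulate_query_times(delta, horizon, seed=0):
    """Sample query times following the rule t_q ~ Unif[t, t + delta],
    where t is the previous query time, up to the given horizon."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t = rng.uniform(t, t + delta)
        if t > horizon:
            return times
        times.append(t)

times = simulate_query_times(delta=1.0, horizon=10_000.0)
mean_gap = times[-1] / len(times)  # average spacing between queries
# Gaps are Unif[0, delta], so mean_gap is close to delta / 2 = 0.5,
# and the number of queries up to time t grows like 2t / delta, i.e. O(t).
```

Running this with other values of `delta` shows the same linear growth, which is exactly the "linear querying strategy" constraint discussed in the reviews.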
Summary: This paper introduces a novel learning-theoretic framework for understanding online learning from continuous and discrete data streams through selective querying. The authors propose two settings: the update-and-deploy setting, where a learner updates a predictor based on queried data, and the blind-prediction setting, where predictions are made independently of the current data stream. They analyze the learnability of concept classes and pattern classes under these settings, providing theoretical bounds and algorithms. Strengths: - The framework presented in this paper for studying continuous data streams is interesting and novel in my opinion. - The authors present simple, interesting algorithms and derive results in terms of already existing concepts like the Littlestone dimension. Weaknesses: - Certain parts of the paper are unclear, as described below. It would be nice if the examples used in the paper to describe the settings and the theoretical framework were presented together for better understanding. It would also be nice if the authors started from the traditional setting first and then added the elements relating to the continuous setting, to clearly differentiate between the two. Technical Quality: 2 Clarity: 2 Questions for Authors: - The Update-and-Deploy setting and the Blind-Prediction setting are not clear to me. In particular, can we think of the update in the Update-and-Deploy setting (the update from the cloud translator) as a kind of feedback (the hyperspectral image) in the Blind-Prediction setting? In the formal definitions in Sections 2.2 and 2.3, is the difference between the two settings that in one $\hat{f}$ needs to be in the concept class $H$, while in the other there is no such constraint? What is the requirement on the true function $f^t$ from which the labels $Y^t$ are generated? - Can the authors explain the definition of learnability in Definition 3.1? 
If $Q(t) = O(t)$, then the algorithm can query in every iteration, right? How does this definition extend the standard online learning definition? - Is it true that this framework with timestamps can be folded into the domain $X$ to build intuition? That is, the function class is over $(X, t)$, we should think of $(X, t)$ as our new domain, and there is a fixed class of functions in $H$ over this new domain? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments on the novelty and appeal of our work! ### Weaknesses **Comment 1**: Thank you for the feedback! We agree that presenting the theoretical framework and giving a tangible construction by referring to the specific examples mentioned in the introduction will make it much easier to understand. Regarding your second statement, the traditional setting of online learning can be considered as the fully-supervised scenario under a discrete data stream with respect to a concept class $H$. In this scenario, instances are fed one-by-one to a learner who observes the instance $X_t$, makes a prediction $\hat{Y}_t$, and observes the true label $Y_t$. The primary constraint is that the sequence of points and true labels, $\{(X_t, Y_t)\}_{t=1}^{\infty}$, is realizable with respect to $H$, implying that there exists some $h^* \in H$ where $\{(X_t, Y_t)\}_{t=1}^{\infty} = \{(X_t, h^*(X_t))\}_{t=1}^{\infty}$. In the continuous setting, the instances and true labels form a continuous process over time: $(X_t, Y_t)_{t \geq 0}$. Unlike the traditional setting, where the learner receives the true label $Y_t$ after every prediction $\hat{Y}_t$, the learner in either the update-and-deploy or blind-prediction settings must query at that time to receive information about the true label. Additionally, the learner must obey a linear querying strategy, so it is limited in the number of times it can query. In the blind-prediction setting, a tougher learning framework than update-and-deploy, the learner makes predictions without knowledge of the current instance and only gains information about the instance and its true label through queries. We believe that an explanation like the one given above can be a helpful addition to our paper in highlighting the differences between the traditional and continuous settings. 
### Questions **Question 1**: The update-and-deploy and blind-prediction settings are two separate, but closely related, frameworks. The update in the update-and-deploy setting can be considered as "redeploying" or constructing a new predictor function for future instances from the data stream. In Algorithm $1$, the feedback from the query, the true label, updates the SOA algorithm, and this new version of the SOA algorithm is then reinstated as the new predictor function. So, while the feedback received from querying, the true point-label pair, can be used to update the predictor function, these two concepts are different. Regarding the requirements on the true function, the only requirement in both settings is that the sequence of instances and true labels, $(X_t, Y_t)_{t \geq 0}$, is realizable. If the realizability is with respect to a concept class $H$, then there exists a target concept $h^*$ such that $\forall t \geq 0, h^*(X_t) = Y_t$. In both settings, the true function, also known as the target concept $h^*$, must lie in the concept class $H$. **Question 2**: Yes, the definition of learnability as written in Definition 3.1 states that a concept class $H$ is learnable if there exists an algorithm $\mathcal{A}$ employing a linear querying strategy such that $MB_{\mathcal{P}(H)}(\mathcal{A}) < \infty$, where $\mathcal{P}(H)$ is simply the collection of all continuous processes that are realizable with respect to $H$. The expression $MB_{\mathcal{P}(H)}(\mathcal{A})$ represents the mistake-bound of the algorithm $\mathcal{A}$ on $H$. Informally speaking, the mistake-bound is the worst-case integral of the mistakes algorithm $\mathcal{A}$ makes on any sequence realized by $H$. If this is finite, then we say that $H$ is learnable (the learner does not accumulate infinite error). Regarding algorithm querying, we can take a look at Algorithm $1$, which is an example of an $O(t)$ querying strategy. 
Given $\Delta$ as an input parameter, Algorithm $1$ samples its next query time, $t_q$, from a uniform distribution on an interval of width $\Delta$. Every iteration of Algorithm $1$ corresponds to its next query time, and consecutive query times are, on average, spaced $\Delta/2$ units of time apart. An algorithm can query every iteration as long as its sequence of queries grows as $O(t)$. Regarding the extension from standard online learning, the notion of querying primarily exists in the continuous setting because not every data point from a continuous process is observable. In the standard online learning setting, the learner operates on a discrete data stream modeled as a round-by-round process, so every point is observable by the learner. **Question 3**: This is an important question to clarify. With the usual notation of $\mathcal{X}$ as the instance space and $\mathcal{Y}$ as the label space, the concept class $H$ represents some set of functions mapping from $\mathcal{X}$ to $\mathcal{Y}$. The timestamps $t$ simply represent when a particular instance-label $(X_t, Y_t)$ pair occurs. The realizability condition implies that there must exist a target concept $h^* \in H$ where $\forall t \geq 0, Y_t = h^*(X_t)$, so the time at which a particular $(X_t, Y_t)$ occurs is irrelevant as long as $h^*(X_t) = Y_t$. When we extend to pattern classes $\mathcal{P}$, we can now incorporate the idea of temporal dependencies between instance-label pairs. For example, let's assume the continuous sequence $(X_t, Y_t)_{t \geq 0} \in \mathcal{P}$. Then, for two timestamps $t, t' \in \mathbb{R}_{\geq 0}$ with $t \neq t'$, we swap the instance-label pairs at $t$ and $t'$ and keep all other timestamps identical to the original continuous sequence. The resulting sequence is not guaranteed to lie in the pattern class $\mathcal{P}$, so the ordering of instance-label pairs becomes crucial.
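The point made in the answer to Question 3, that realizability with respect to a concept class ignores timestamps while pattern membership may not, can be illustrated with a toy example (the threshold class and the monotone pattern below are hypothetical illustrations, not objects from the paper):

```python
def realizable(stream, concept_class):
    """A stream of (time, x, y) triples is realizable w.r.t. H if some
    single h in H labels every point correctly, regardless of ordering."""
    return any(all(h(x) == y for _, x, y in stream) for h in concept_class)

def in_monotone_pattern(stream):
    """Toy pattern class: labels must be nondecreasing in time."""
    labels = [y for _, _, y in sorted(stream)]
    return all(a <= b for a, b in zip(labels, labels[1:]))

# A small concept class of threshold functions over [0, 1].
H = [lambda x, c=c: int(x >= c) for c in (0.25, 0.5, 0.75)]

stream = [(0.0, 0.1, 0), (1.0, 0.6, 1), (2.0, 0.9, 1)]
# Swap the instance-label pairs at timestamps 0.0 and 2.0:
swapped = [(0.0, 0.9, 1), (1.0, 0.6, 1), (2.0, 0.1, 0)]
# Both streams are realizable by the threshold c = 0.5, but only the
# original satisfies the (order-sensitive) monotone pattern constraint.
```

Swapping timestamps preserves realizability because the same point-label pairs appear, yet it breaks pattern membership, which is exactly the distinction the rebuttal draws.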
NeurIPS_2024_submissions_huggingface
2024
PhyloGen: Language Model-Enhanced Phylogenetic Inference via Graph Structure Generation
Accept (poster)
Summary: The authors introduce PhyloGen, a new method that uses a pre-trained genomic language model to generate phylogenetic trees without relying on evolutionary models or aligned sequences. PhyloGen treats phylogenetic inference as a conditionally constrained tree structure generation problem, jointly optimizing tree topology and branch lengths through three modules: Feature Extraction, PhyloTree Construction, and PhyloTree Structure Modeling. A Scoring Function guides the model towards stable gradient descent. The method achieves state-of-the-art performance when evaluated on benchmark datasets. Strengths: 1. The framework is novel in that it leverages a pre-trained genomic language model instead of relying on aligned sequences. 2. The method achieves state-of-the-art performance evaluated on real-world datasets. Weaknesses: 1. Ablation Experiments Clarity The ablation experiments are unclear. My main concern is the lack of clarity regarding why the proposed method significantly outperforms other methods. Specifically: (1) Table 5: This table shows the average metrics across the eight datasets. However, Table 1 and Table 2 only show the metrics for each dataset individually, without any average metrics. It’s unclear what the performance gap is between the ablation model and the baseline methods when KL or S are removed. (2) Figure 6: The ablation studies in this figure only show the metrics for the DS1 dataset. Please include the average metrics across all eight datasets. To improve the clarity of the ablation experiments, I strongly suggest making the evaluation settings consistent in Table 1, Table 2, Table 5, and Figure 6. At the very least, include the average metrics across the eight datasets. Additionally, provide more explanations based on the complete ablation experiments to clarify why the proposed method significantly outperforms other methods. 2. Claim on Aligned Sequences vs. 
Unaligned Sequences The claim that unaligned sequences better reflect actual sequence variation than aligned sequences is questionable. Generally, aligned sequences with direct site-to-site comparisons are more effective in comparative sequence analysis. The authors need to provide more direct experimental evidence to support the advantage of using unaligned sequences. 3. Writing Issues There are many writing issues. For instance, a. line 141. "m_ij" -> "h" b. In the caption of table1, "VBPI-GNNuse" -> "VBPI-GNN use" c. line 213, "3.1 Experiment Setup" -> "4.1 Experiment Setup" Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Could you clarify whether the computation of the marginal log likelihood involves an evolutionary model like [1]? Does the evolutionary model utilize aligned sequences? If so, does this contradict the statement in the article that sequence alignment is not required? 2. I noticed that the author provides a comparison of run times. Could you provide details about the GPU used in the experiments, as well as the memory utilization? [1] M. Zhou, Z. Yan, E. Layne, N. Malkin, D. Zhang, M. Jain, M. Blanchette, and Y. Bengio. PhyloGFN: Phylogenetic inference with generative flow networks. arXiv preprint arXiv:2310.08774, 2023. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: I have no concerns on the potential societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness:** **W1: Table 5 Clarification:** Tab. 5 presents results for the DS1 dataset, not an average across eight datasets. The impact of removing KL or S has already been shown in this table. **Figure 6 Explanation:** Fig. 6 shows ablation results for the DS1 dataset, chosen for its representativeness, as many recent studies also use this dataset. This is a common practice in the field. Due to time constraints, we have added ablation results for some other datasets in the attached file. We can continue to add results if you feel it is necessary. **Performance Explanations:** The complete ablation experiments shown in Tab. R1 indicate a significant performance decline when KL or S is removed, underscoring their importance in model regularization and overfitting prevention. Replacing our distance matrix with Euclidean or cosine distances significantly lowers MLL, indicating these conventional distance methods fail to capture complex evolutionary patterns. Using one-hot encoding exhibited the lowest MLL, highlighting the superiority of our feature extraction module and the pre-trained language model in representing complex biological data. **W2: Aligned vs. Unaligned Sequences** Thank you for your feedback. There might be some misunderstanding regarding the concept of "aligned sequences". In our manuscript, we refer to equal-length sequences, not multiple sequence alignments (MSA). This definition is consistent with recent methods like PhyloGFN and GeoPhy, which refer to "pre-aligned sequences" as uniform-length sequences, as seen in statements like "These datasets feature pre-aligned sequences" or "Let Y be a set of aligned sequences with length M from the species.". Our method analyzes raw sequences directly, avoiding biases introduced by alignment algorithms, thus preserving the original sequence diversity and more accurately reflecting sequence variation and evolutionary relationships. 
**W3: Writing Issues** Thank you for highlighting the writing issues. We will thoroughly review and improve the manuscript for clarity. **Questions:** **Q1: MLL:** Thank you for your question. Let me clarify these concepts: **PhyloGFN:** 1. **Evolutionary Model:** PhyloGFN uses an evolutionary model to compute the marginal log likelihood (mll) through trajectory sampling and importance sampling. This involves generating trajectory data, calculating the forward and backward path log probabilities, and computing the log scores for each generated tree. 2. **Aligned Sequences:** PhyloGFN does not rely on traditional multiple sequence alignment (MSA). Instead, it uses sequences of equal length, which do not require MSA for alignment. **Our Method:** 1. **Evolutionary Model:** Our mll computation does not involve PhyloGFN's evolutionary model. We use a mutation model based on mutation probabilities, not the trajectory and importance sampling methods used in PhyloGFN. 2. **Aligned Sequences:** Our method does not require either MSA or equal-length sequences. PhyloGen directly analyzes raw sequence data, avoiding the computational overhead and potential errors introduced by sequence alignment. Thus, it more accurately reflects sequence variations and evolutionary relationships. 3. **Consistency with Claims:** Our method aligns with the statement that sequence alignment is unnecessary. By handling unaligned, variable-length raw sequences, PhyloGen more accurately reflects sequence variation and evolutionary relationships. This makes our method distinct from others that rely on equal-length sequences, like PhyloGFN and GeoPhy, and consistent with the claims made in our article. **Q2: Memory Usage** Thank you for your question. We used an NVIDIA A100-SXM4-80GB GPU for our experiments. However, the choice of hardware is not crucial for running our training algorithm. Detailed memory usage can be found in the General Response and the Tab. R2. 
--- Rebuttal Comment 1.1: Comment: Thank you for your response. I raised my score. Please keep in mind to fix the typos and grammar errors in the final version of the manuscript. --- Reply to Comment 1.1.1: Title: Thanks for Raising the Score Comment: Dear Reviewer MKto, We would like to express our sincere gratitude for your thorough review and for increasing your score after considering our response, which is very important to us. Based on your suggestions, we will keep on polishing our manuscript and add relevant tables to the camera-ready version. Thank you again for your valuable review and efforts in improving our work. Warm regards, Authors --- Rebuttal 2: Comment: Dear Reviewer, We sincerely appreciate your efforts and valuable feedback. If you are satisfied with our responses and our improvements, please consider updating your score. If you need further clarification, please don't hesitate to contact us. We are grateful for your time and look forward to your response!
Summary: The paper proposes phylogenetic tree inference by modeling it as a problem of conditionally constrained tree structure generation. Its goal is to jointly generate and optimize the tree topology and branch lengths. By mapping species sequences into a continuous geometric space, PhyloGen performs end-to-end variational inference without limiting the possible topologies. To maintain the topology-invariance of phylogenetic trees, distance constraints are applied in the latent space to preserve translational and rotational invariance. They show the proposed model is effective across eight real-world benchmark datasets. Strengths: The paper addresses a very interesting problem and proposes novel approaches. Weaknesses: I am not sure the formulation in Equation 7, which adds the term R, makes sense mathematically: the original ELBO in Equation 6 is a lower bound, and adding the term log R, which is negative, means that maximizing Equation 7 would result in minimizing R? Also, in Equation 7, $q(\tau(z))$ seems to be missing; or is there a KL term between R and Q from module B? I think the paper's presentation needs to be improved. Figure 2 is very helpful, but the text and some notations are a bit confusing. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Regarding the statement, "Existing Variational Inference methods, which require pre-generated topologies and typically treat tree structures and branch lengths independently, may overlook critical sequence features, limiting their accuracy and flexibility", can you elaborate on what kind of features you refer to that these methods could overlook? 2. I am not sure that, going from the distance matrix to the tree topology, the Neighbor-Joining (NJ) algorithm can be differentiated to allow end-to-end learning? 3. I find Section C.1, Topology Learning, somewhat confusingly written, especially from line 121 to line 124. Why is the distribution $p(\tau(z))$ represented as the distribution of y conditioned on the tree structure? 
Similarly, it is not clear what distribution $q(\tau(z), B_\tau)$ represents. 4. I am not sure, for this kind of task, whether the ELBO is a good metric to reflect the inference power of the model. 5. The VAE model basically takes the embeddings of the genomic sequences and the initial topology of the tree $\tau$, and then tries to reconstruct the tree (without the branch lengths)? I was wondering what happens if we remove the learning of the first component and make the phylogenetic tree construction fixed, using the embeddings from the LLM directly? 6. I am not sure I understood how the scoring function $f(z)$ is implemented: $z$ is the embedding of the genomic sequence, but is $f$ applied to the leaf nodes or to all nodes, and why is it minimized? Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. While PhyloGen outperforms many existing methods, it still requires substantial computational resources, particularly during the training phase. The complexity of jointly optimizing tree topology and branch lengths, along with maintaining topological invariance, sounds computationally heavy, for instance: 1.1 Monte Carlo simulations to sample from the posterior distributions of tree topologies and branch lengths. 1.2 The tree construction module computes distance matrices from latent variables and uses algorithms like Neighbor-Joining (NJ) to generate initial tree structures. 1.3 The utilization of a pre-trained genome language model (DNABERT2) to transform DNA sequences into genomic embeddings. 2. Looking at Table 5, I am wondering how the model behaves at inference time: would performance drop when we have fewer or more species than in the training data the model was trained on? Also, how does it transfer to other species? Moreover, did you train one model per dataset, or is there a way to train on all datasets together? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness: Formulation Clarification:** Thank you for your valuable feedback. Regarding the introduction of the R term in Eq. 7, we state the following: **Introduction of R:** R represents the posterior probability in the second part of the variational network. Its inclusion aims to enhance model expressiveness by capturing the underlying structure and parameter dependencies more effectively. **Expression of $q(\tau(z))$:** Detailed derivations in **Appendix, Equations 19-31** clarify that $q(\tau(z))$ is indeed Q(z), indicating that our model includes the KL divergence between R and Q, which is a crucial component. We will improve the presentation and explanation of these concepts in the revised manuscript to make them clearer. We will also enhance the overall presentation of the paper to reduce any confusion. **Question:** **Q1: Overlooked Sequence Features** Thank you for your question. The key sequence features we refer to involve the semantic aspects of genetic information, which are often overlooked by traditional distance-based methods. These methods typically rely on simplified one-hot encoding, which fails to capture the complex interactions and biological functions within sequences. In contrast, our approach utilizes pre-trained language models to extract richer representations from the sequence context, effectively capturing complex relationships and semantic information. This enhances the accuracy and flexibility of phylogenetic inference. **Q2: Differentiability of NJ Algorithm** Thank you for your question. You are correct that the Neighbor-Joining (NJ) algorithm is not differentiable. However, in our approach, NJ is only used to generate the initial tree topology and does not require differentiation. The overall model is trained end-to-end, with differentiation occurring in other parts of the model, not within the NJ algorithm. **Q3: Clarification on Section C.1:** Thank you for pointing out this issue. 
The notation $p(\tau(z))$ in the main text is incorrect and should be corrected to match the accurate representation in the **appendix**, specifically on **line 540**. Additionally, $q(\tau(z), B_\tau)$ corresponds to the distribution detailed in Eq. 19. We will ensure these notations are consistent and clear in the revised manuscript. **Q4: ELBO Metric:** Thank you for your insightful comment. You are correct that ELBO alone may not fully capture the model’s inference ability, which is why we introduced the Scoring function. Additionally, it is important to note that our baselines and recent methods also use ELBO and MLL, so we must use these metrics for fair comparison. **Q5: LLM Embeddings and Tree Construction:** Thank you for your question. The embeddings generated by the LLM are primarily for feature extraction, not for direct tree construction. The initial topology and the subsequent adjustments made by the first module are essential to ensure the embeddings are constrained to a Gaussian distribution via the encoder, as assumed in our model. Without this step, the embeddings might not converge properly, and the phylogenetic tree construction could be less accurate. **Q6: Scoring Function:** Thank you for your question. The scoring function maps each leaf node in the latent space to a score, representing their contribution to the model's overall performance. The function is jointly optimized with the ELBO objective to ensure the directionality of parameter updates during training. We minimize the scoring function to enforce low-rank representations in the matrix, which helps construct a well-structured initial tree, leading to faster convergence and improved accuracy. **Limitations:** **L1: Computational Resources** **1.1 Monte Carlo:** PhyloGen does not use traditional MCMC to sample from posterior distributions of tree topologies and branch lengths. 
Instead, we employ a Variational Bayesian approach [1], which significantly reduces computational demands by optimizing the variational lower bound to approximate the posterior, avoiding the inefficiencies and high sample requirements of MCMC in high-dimensional tree spaces. **1.2 NJ Algorithm:** NJ is used to construct the initial tree structure and requires some computational resources. It is executed during each iteration's initialization phase. While it adds to the computational load, this step ensures accuracy and stability, providing benefits that outweigh the computational cost and are not achieved by other methods. **1.3 DNABERT2:** This step is highly efficient. For instance, processing 100 sequences takes only 1.76s, and our benchmark datasets are much smaller, ensuring that embedding computation isn’t a bottleneck. [1] Zhang, C., and Matsen IV, F. A. Variational Bayesian Phylogenetic Inference. ICLR (Poster), 2019. **L2: Training and Inference Performance:** You are correct in noting that we trained one model per dataset, similar to recent methods in the field. During inference, the model's performance is limited to the current dataset on which it was trained. We are exploring fully end-to-end approaches to address these limitations and improve generalization across different species and varying numbers of species. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, We sincerely appreciate your efforts and valuable feedback. If you are satisfied with our responses and our improvements, please consider updating your score. If you need further clarification, please don't hesitate to contact us. We are grateful for your time and look forward to your response! --- Rebuttal Comment 1.2: Title: Reviewer's comment Comment: Thank you to the authors for their answers clarifying my questions. 
After reading the answers, I have decided to keep my score. I think the idea is interesting, but the paper's presentation needs to be improved, and parts of the methodology as well as the evaluation metrics need to be clarified. --- Reply to Comment 1.2.1: Comment: Thank you for your thoughtful feedback and for considering our responses. We’re pleased you find the idea interesting and want to assure you that we’re committed to improving the paper’s presentation, methodology, and evaluation metrics, as you suggested. Your **positive opinion is crucial**, and we hope you **might reconsider your score** given our commitment to these improvements. Thank you again for your valuable insights.
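The contrast drawn in this thread between MCMC sampling and optimizing a variational lower bound (Q4 and L1.1 above) can be made concrete with a toy discrete model. This is a generic illustration, not the paper's implementation: the prior `p_z`, likelihood `p_x_given_z`, and approximation `q_z` below are made-up numbers.

```python
import math

# Toy model: latent z in {0, 1}, one fixed observation x.
# Joint p(x, z) = p(z) * p(x | z); all numbers are illustrative.
p_z = {0: 0.6, 1: 0.4}          # prior over the latent variable
p_x_given_z = {0: 0.2, 1: 0.7}  # likelihood of the observed x under each z

# A (deliberately suboptimal) variational approximation q(z).
q_z = {0: 0.5, 1: 0.5}

# ELBO = E_q[log p(x, z) - log q(z)], computable in closed form here.
elbo = sum(q_z[z] * (math.log(p_z[z] * p_x_given_z[z]) - math.log(q_z[z]))
           for z in (0, 1))

# Exact log evidence: log sum_z p(x, z).  The ELBO never exceeds it;
# variational inference tightens the bound by adjusting q instead of sampling.
log_evidence = math.log(sum(p_z[z] * p_x_given_z[z] for z in (0, 1)))

print(elbo, log_evidence)
assert elbo <= log_evidence
```

The gap between the two printed values is exactly the KL divergence from `q_z` to the true posterior; setting `q_z` to the posterior (here 0.3 / 0.7) closes it.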
Summary: The authors propose a new method, PhyloGEN, for phylogenetic inference. The method is able to perform end-to-end variational inference in order to jointly optimize the tree topology and the branch lengths. To achieve this, the authors propose using a pre-trained genomic language model to extract genome embeddings, and the embeddings are then used to form an initial topology which is refined iteratively using the proposed topology and branch length learning modules. Strengths: - The paper makes a novel contribution to an important research topic. Unlike previous variational inference methods, the method does not require pre-generated topologies. - The incorporation of a pre-trained genomic language model to the method is an interesting contribution and is something that will likely be increasingly explored in the future. - The method is compared against a number of recent phylogenetic inference methods, and it achieves superior performance in terms of MLL and ELBO on standard benchmark datasets compared to the previous methods. In addition, experiments are performed to investigate the robustness of the proposed method and the diversity of the generated topologies, indicating that the method is consistent with the "gold standard" MrBayes approach. The authors also provide an ablation study to understand the impact of the various design choices in their method. Weaknesses: - The methods section (Section 3) is difficult to read in that variables are introduced without leading sentences, some notation is undefined, and design choices are introduced without clear motivation. - What is the motivation for $f_i$ and how is it defined in (2) if there are multiple children $f_j$? - The function $h$ is used before its definition. - Which function $h$ is the latter $h$ in the definition $h(x_i, x_j) = h(x_i, x_i - x_j, d_{ij}^2)$? - Why is max aggregation a good choice? - The scoring function $S$ is not described in enough detail. 
What does it output and how does it provide additional gradient information? - Most experiments, in particular RQ2, RQ4, RQ5, RQ6, and RQ7, are performed on only one data set (DS1), making it difficult to draw meaningful conclusions. I would also consider the runtime an important metric, but it is measured only in conjunction with the robustness assessment on only one data set. - The authors do not provide code for their method or experiments (although the code is promised to be released later). Without the code, considering the lack of detail in the methods section, it would be difficult to reproduce the proposed method. - Minor: - The red and yellow colors are opposite to what they should be in the caption of Table 1. - There is a subsection numbered 3.1 in Section 4. - RQ7 is only referred to in the Appendix, and not in the main text. - Typos: "TreeEnocoder" (Figure 2 caption), "Metrtic" (Figure 3 title) Technical Quality: 3 Clarity: 2 Questions for Authors: - If MrBayes is considered the gold standard, and the proposed method is validated by comparing it against MrBayes in RQ2 and RQ3, could the authors explain why the proposed method is able to achieve such a big increase in MLL and ELBO while the other methods yield relatively similar results? Relatedly, do the resulting topologies look significantly different in comparison to the other methods? - The proposed method has low standard deviations in Table 2 for multiple datasets. Is this due to the optimization being robust with respect to the random seed, or could it instead indicate that $Q(z)$ does not support multiple topologies? Do the MrBayes posteriors support multiple topologies? - How does the Bipartition Frequency Distribution look for other methods like GeoPhy and ARTree? - How does the runtime of MrBayes compare to the runtime of the proposed method? - Can the authors hypothesize why a simple fully connected $S$ would work so well (Figure 3)? 
- In the construction of the distance matrix, could the authors explain why using XOR makes sense, given that the $z_i^*$ are continuous? Moreover, would it make sense to try out other algorithms similar to neighbor-joining to initialize the topology? - I would suggest that the authors move section 3.3 (with the derived gradients) to the Appendix, leaving space for a more detailed description of the various design choices. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors do not explicitly discuss the limitations of their work. As I am not an expert in phylogenetic inference, it is difficult to judge the assumptions made in the paper. Are there conditions that are required to be met for using a pre-trained language model for extracting embeddings, and is the assumption of the tree topology and branch lengths being conditionally independent reasonable? It would be helpful for the authors to elaborate on what they think the possible limitations of their own work are. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses:** **W1: Clarity in Methods:** We acknowledge the reviewer's concerns about the presentation of the methods section. In response, we have made a thorough revision: 1. **$f_i$** enriches the node feature $i$ by integrating contributions from its children and its inherent traits. In Eq. 2, $f_j$ represents the features of child nodes, and for multiple children, we aggregate their contributions directly in Eq. 1. 2 & 3. **h Clarification:** We acknowledge the confusion caused by using the same symbol $h$ for two conceptually different functions in the manuscript. The first $h(x_i, x_j)$ is an abstract representation used to describe a distance measure between two nodes. The second $h(x_i, x_i - x_j, d_{ij}^2)$ is a specific implementation of the first, calculating differences in the latent space between nodes $x_i$ and $x_j$, evaluated using the formula $\sum_{i,j} (\| x_i - x_j \|^2 - d_{ij}^2)^2$. 4. **Max Aggregation** selects the most significant features from parent and child nodes, ensuring the most prominent features are retained during propagation. 5. **Scoring Function S** maps each leaf node in the latent space to a score reflecting its contribution to the model's overall performance. Joint optimization with the ELBO objective ensures that parameter updates during training align with the intended direction. **W2: Ablation:** We chose DS1 because it is representative and commonly used in the field, ensuring comparability with other recent studies. Due to time constraints, we have included ablation results for some datasets in the attached file. We can add more if you feel it’s necessary. For a complete comparison of runtime, please refer to Tab. R2. **W3: Code:** Please see the General Response. **M1, M2, M3, M4:** Thank you for your correction. We have corrected the caption and subsection numbers accordingly. We will also explicitly reference RQ7 in the experimental section to better connect it with the appendix details. 
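The pairwise objective given in the h clarification above — penalizing the gap between squared latent distances and target distances — can be sketched in a few lines. This is a generic illustration; the embeddings `xs` and target matrix `d2` are toy values, not the paper's data.

```python
def distance_loss(xs, d2):
    """Sum over node pairs of (||x_i - x_j||^2 - d_ij^2)^2,
    where d2[i][j] holds the target squared distance for the pair (i, j)."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            sq = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
            total += (sq - d2[i][j]) ** 2
    return total

# Toy 2-D embeddings and a target squared-distance matrix they match exactly.
xs = [(0.0, 0.0), (3.0, 4.0)]
d2 = [[0.0, 25.0], [25.0, 0.0]]
print(distance_loss(xs, d2))  # 0.0: embeddings already consistent with targets
```

Minimizing this quantity pushes the latent geometry to reproduce the target pairwise distances, which is the role the distance measure plays in the clarified formulation.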
**Questions:** **Q1: Superior Performance Over MrBayes:** We appreciate your insightful inquiry. PhyloGen outperforms MrBayes and other methods on MLL and ELBO due to enhanced genetic sequence representations via a pre-trained genomic language model. This approach allows for a deeper utilization of genetic information, improving the accuracy of phylogenetic tree construction. For topology comparisons, we use metrics such as Simpson's Diversity Index and Frequency of the Most Frequent Topology. Results across multiple datasets, detailed in Tab. R1, show PhyloGen's ability to generate diverse and balanced topologies, revealing subtle evolutionary relationships that traditional methods may miss while maintaining biological accuracy. **Q2: Low Stds:** The low standard deviations in Tab. 2 demonstrate PhyloGen's robustness against random seed variations, maintaining low variance despite supporting diverse topologies. Our variance levels are comparable to those of recent methods like ARTree and PhyloGFN. In contrast, MrBayes, which uses a sampling-based approach, exhibits higher variance across varied topologies but does not match PhyloGen's performance even when considering maximum variance. Furthermore, MrBayes employs MCMC algorithms for estimating posterior distributions of phylogenetic trees, necessitating lengthy MCMC chains to achieve convergence. While parallel computing and evolutionary model optimization can mitigate some computational demands, they do not overcome the inherent efficiency and variance control limitations. **Q3: Bipartition Frequency Distribution:** Thank you for your inquiry. We have included a comparison of bipartition frequency distributions for GeoPhy, as detailed in Fig. R2. However, direct comparisons with ARTree are not feasible since its autoregressive generation method does not produce Newick files. **Q4: Runtime Comparison:** Thank you for your suggestion. We detailed this in Tab. R2. 
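The two topology-diversity metrics named in Q1 above — Simpson's Diversity Index and Frequency of the Most Frequent Topology — are straightforward to compute from sampled topology labels. A generic sketch over made-up samples (not the Tab. R1 data), using the Gini-Simpson convention 1 − Σ pᵢ²:

```python
from collections import Counter

def simpson_diversity(samples):
    """Gini-Simpson index 1 - sum_i p_i^2 over sampled topology labels;
    0 means a single topology dominates, values near 1 mean high diversity."""
    counts = Counter(samples)
    n = len(samples)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def most_frequent_topology_freq(samples):
    """Frequency of the most frequent topology among the samples."""
    return max(Counter(samples).values()) / len(samples)

# Toy sampled topologies (in practice, Newick strings or hashes of them).
uniform = ["t1", "t2", "t3", "t4"]    # all distinct
collapsed = ["t1", "t1", "t1", "t1"]  # one topology only
print(simpson_diversity(uniform))     # 0.75
print(simpson_diversity(collapsed))   # 0.0
print(most_frequent_topology_freq(uniform))  # 0.25
```

The two metrics are complementary: the index summarizes the whole distribution, while the max-frequency value flags mode collapse onto a single topology.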
**Q5: Efficacy of a Simple FC Layer:** The simple FC layer's effectiveness can be attributed to the high-quality features provided by DNABERT2, which allow the basic architectures to handle complex tasks efficiently. Additionally, minimal parameters and appropriate regularization ensure model stability and convergence. **Q6: XOR in Distance Matrix:** Using XOR to construct the distance matrix effectively captures discrete nucleotide mismatches without requiring sequence alignment, directly measuring genetic variations. This approach aligns with recent methods like GeoPhy and VaiPhy. Additionally, we explored the UPGMA algorithm on DS1, and the results are summarized in Fig. R1 and the following table: | Init. method | ELBO | | --- | --- | | NJ | -7005.98 | | UPGMA | -7115.95 | UPGMA assumes uniform evolution rates and is straightforward but limited to consistent-rate scenarios. In contrast, NJ doesn’t assume constant rates, offering more accurate reflections of evolutionary branching in uneven-rate organisms. Overall, NJ proves more effective in handling complex and uneven evolutionary data. **Q7: Suggestion:** Thank you for your suggestions; we will change them in the revised version. **Limitations:** Thank you for your inquiry. Our first use of pre-trained language models to extract embeddings in this domain is validated by our experimental results and not contested in the literature, and we expect that this could spark further discussion in the field. Regarding the dependence on tree topology and branch lengths, our assumptions align with recent methods such as ARTree, GeoPhy, and PhyloGFN, which are widely accepted. A detailed discussion of limitations can be found in the General Response. --- Rebuttal 2: Comment: Thank you for answering my questions in detail and for providing a discussion on the limitations of your method. In light of this, I am slightly raising my score. 
I cannot comment on the standard practice of using only DS1 for the experiments, but given the various somewhat ad-hoc design choices in your method, I would expect to see at least the experiments of Section 4.5 performed using more data sets in a final revision. I also hope the authors pay careful attention to the multiple presentation issues raised by me and the other reviewers. --- Rebuttal Comment 2.1: Comment: Thank you for your valuable feedback and for raising your score. We are conducting the additional Ablation Study experiments and will share the results with you as soon as they are ready. We are also refining the manuscript to address the presentation issues raised by you and the other reviewers, aiming to enhance its clarity. --- Rebuttal Comment 2.2: Comment: **Additional Experiments on Ablation Study:** Thank you for your feedback and for raising your score. I understand your concerns regarding the experiments in Sec. 4.5 and have conducted additional ablation studies on multiple datasets, summarized below: The rows represent different configurations: **Ours** (full model); **Ours w/o LN** (without layer normalization); **Ours-Hid=64** (reduced hidden dimensions); **Ours-Euclidean** and **Ours-Cosine** (replacing the custom distance matrix with Euclidean and cosine metrics); and **Ours-One-Hot** (using one-hot encoding for leaf nodes). The numbers in parentheses following each dataset (e.g., DS1 (27)) represent the number of species sequences. 
**DS1 (27):** | Methods | ELBO (↑) | MLL (↑) | | --- | --- | --- | | Ours | -7005.98 | -6910.02 | | Ours w/o LN | -7111.31 | -7187.40 | | Ours-Hid=64 | -8081.97 | -7728.90 | | Ours-Euclidean | -9284.57 | -9091.84 | | Ours-Cosine | -8943.73 | -8757.81 | | Ours-One-Hot | -10433.23 | -9282.87 | **DS2 (29):** | Methods | ELBO (↑) | MLL (↑) | | --- | --- | --- | | Ours | -26362.75 | -26257.09 | | Ours w/o LN | -26443.78 | -26301.85 | | Ours-Hid=64 | -27511.50 | -27373.58 | | Ours-Euclidean | -29355.51 | -29240.52 | | Ours-Cosine | -28265.33 | -28147.96 | | Ours-One-Hot | -30954.67 | -30848.88 | **DS3 (36):** | Methods | ELBO (↑) | MLL (↑) | | --- | --- | --- | | Ours | -33430.94 | -33481.57 | | Ours w/o LN | -33737.22 | -33481.57 | | Ours-Hid=64 | -34400.24 | -34209.57 | | Ours-Euclidean | -38067.04 | -37925.02 | | Ours-Cosine | -35976.66 | -35823.20 | | Ours-One-Hot | -39120.53 | -38989.87 | **DS4 (41):** | Methods | ELBO (↑) | MLL (↑) | | --- | --- | --- | | Ours | -13113.03 | -13063.15 | | Ours w/o LN | -13395.25 | -13111.01 | | Ours-Hid=64 | -13410.90 | -13115.28 | | Ours-Euclidean | -15438.36 | -15270.67 | | Ours-Cosine | -13417.38 | -13101.95 | | Ours-One-Hot | -16760.54 | -16606.26 | **Analysis:** Using Euclidean or cosine distance metrics significantly lowers MLL, indicating their limitations in capturing complex evolutionary patterns. The one-hot encoding yields the lowest MLL, highlighting the superiority of our feature extraction module and pre-trained language model. Additionally, reducing hidden dimensions shows a clear performance drop, underscoring the importance of model capacity. We are conducting further experiments, and I will upload the full results as soon as they are available. --- Reply to Comment 2.2.1: Comment: **Additional Experiments on Ablation Study (Part 2):** Following our recent update, we have also completed additional ablation studies for datasets DS5 through DS8 to demonstrate our model's effectiveness further. 
The results support the conclusions drawn from the initial datasets. These results will be made available in the final manuscript for a comprehensive review. **DS5 (50):** | Methods | ELBO (↑) | MLL (↑) | | --- | --- | --- | | Ours | -8053.23 | -7928.4 | | Ours w/o LN | -8133.16 | -7947.05 | | Ours-Hid=64 | -8232.91 | -7953.48 | | Ours-Euclidean | -8884.67 | -8674.23 | | Ours-Cosine | -8550.97 | -8308.62 | | Ours-One-Hot | -9915.63 | -9728.71 | **DS6 (50):** | Methods | ELBO (↑) | MLL (↑) | | --- | --- | --- | | Ours | -6324.90 | -6330.21 | | Ours w/o LN | -6748.26 | -6341.30 | | Ours-Hid=64 | -7630.79 | -7337.23 | | Ours-Euclidean | -9039.58 | -8811.08 | | Ours-Cosine | -7822.54 | -7545.93 | | Ours-One-Hot | -9656.64 | -9437.25 | **DS7 (59):** | Methods | ELBO (↑) | MLL (↑) | | --- | --- | --- | | Ours | -36838.42 | -36838.42 | | Ours w/o LN | -37579.46 | -37085.76 | | Ours-Hid=64 | -37997.11 | -37571.84 | | Ours-Euclidean | -43182.38 | -42889.76 | | Ours-Cosine | -41446.14 | -41165.21 | | Ours-One-Hot | -45627.14 | -45358.70 | **DS8 (64):** | Methods | ELBO (↑) | MLL (↑) | | --- | --- | --- | | Ours | -8409.06 | -8171.04 | | Ours w/o LN | -8842.80 | -8358.32 | | Ours-Hid=64 | -9319.05 | -8911.28 | | Ours-Euclidean | -10128.19 | -9777.77 | | Ours-Cosine | -9812.72 | -9436.38 | | Ours-One-Hot | -13475.34 | -13187.75 | --- Rebuttal 3: Title: Have our response addressed your concerns? Comment: Dear reviewer XKPL, We have provided the complete results of the ablation study for the requested dataset (DS1-DS8) based on your suggestions. We sincerely appreciate all your valuable feedback to help us improve our work. **We believe that the issues raised by the reviewers have been addressed in the revision.** We hope that you will reconsider our submission of this response. If you are satisfied with our response and efforts, please consider updating your rating. If you need any clarification, please feel free to contact us. 
We are very pleased and look forward to hearing from you! Best regards, Authors
Summary: This paper presents PhyloGen, a novel approach for phylogenetic tree inference using pre-trained genomic language models and graph structure generation. PhyloGen aims to jointly optimize tree topology and branch lengths without relying on evolutionary models or equal-length sequence constraints. The method demonstrates superior performance and robustness across multiple real-world datasets compared to existing approaches. Strengths: Novel approach combining pre-trained genomic language models with graph structure generation for phylogenetic inference Joint optimization of tree topology and branch lengths without typical constraints Superior performance on benchmark datasets compared to existing methods Robust to data changes and noise Provides deeper insights into phylogenetic relationships Computationally efficient compared to baselines Weaknesses: Limited discussion of potential limitations or failure cases Lack of comparison to some recent methods in the field Source code not provided Technical Quality: 2 Clarity: 2 Questions for Authors: How does PhyloGen perform on larger datasets with hundreds or thousands of species? Have you explored using other types of pre-trained language models besides DNABERT2? How sensitive is the method to hyperparameter choices? What are the main computational bottlenecks of the approach? Have you tested PhyloGen on simulated datasets where the true phylogeny is known? Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: While some robustness tests were performed (node additions and deletions), the paper doesn't explore all possible types of data noise or perturbations. The paper doesn't deeply explore the interpretability of the model's decisions or the learned representations. The paper doesn't extensively discuss how well the method generalizes to different types of genetic data or organisms not represented in the test datasets. 
The method relies on pre-trained genomic language models (specifically DNABERT2), but doesn't explore how performance might vary with different pre-trained models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness:** **W1: Limitations:** Please see the General Response. **W2: Comparison with Recent Methods:** We appreciate the reviewer's concern regarding the comparison with recent methods. We believe that our manuscript already includes comparisons with several of the latest approaches, specifically GeoPhy (NeurIPS 2023), PhyloGFN (ICLR 2024), ARTree (NeurIPS 2023), and VBPI-GNN (ICLR 2023), which have been influential in recent literature. However, we acknowledge the rapidly evolving nature of the field and invite the reviewer to suggest any additional methods that should be considered for comparison. **W3: Code:** Please see the General Response. **Questions:** **Q1: Performance on Larger Datasets:** Thank you for your question. We want to emphasize that PhyloGen has been tested on the same datasets as all recent methods, ensuring a fair evaluation. PhyloGen is designed to scale, yet there are currently no datasets available that contain hundreds to thousands of species. This is primarily because most phylogenetic analyses don’t require simultaneous processing of such extensive species counts; instead, research focuses on constructing subtrees for specific subsets. Phylogenetic methods fall into two categories: distance-based (UPGMA and NJ) and character-based (Maximum Parsimony (MP), Maximum Likelihood (ML), and Bayesian methods). The former is fast but limited, while the latter, although computationally intensive, offers greater precision. Review studies [1] suggest Bayesian is the most accurate, followed by ML and MP. Although PhyloGen offers significant speed improvements while maintaining accuracy, its performance on large datasets is still constrained by the inherent time complexity of these methods. [1] Hall, Barry G. "Comparison of the accuracies of several phylogenetic methods using protein and DNA sequences." *Molecular Biology and Evolution* 22.3 (2005): 792-802. **Q2 and L4: Pre-trained Language Models:** Thank you for your question. 
We selected the genome-specific foundation model DNABERT2 for our phylogenetic inference research. Although models like HyenaDNA and Nucleotide Transformer (NT) excel in long-sequence modeling, they are less apt for our specific needs. Upon your suggestion, we tested these models: | Method | MLL | | --- | --- | | DNABERT2 | -6910.02 | | HyenaDNA | -6918.63 | | NT | -6921.81 | DNABERT2 outperformed the others, likely due to its specific optimization for genomic data. Our revised manuscript will detail these findings to justify our choice and showcase our method's flexibility. **Q3: Hyperparameter Sensitivity:** Thank you for your question. Our Experiment Analysis (see Section E.5 in Appendix A, Table 9) demonstrates PhyloGen's robustness across various hyperparameter settings, including Output Dimension, Hidden Dimension, and Layer Normalization. We also evaluated the impact of changing hidden dimensions with and without layer regularization. The results show that PhyloGen maintains robustness to hyperparameter choices. **Q4: Computational Bottlenecks:** We've detailed this in the General Response's Limitations. **Q5: Simulated Datasets Tests:** Thank you for your insightful question. In phylogenetic analysis, "true phylogeny" often relies on theoretical assumptions derived from widely accepted biological software. As discussed in our introduction, these traditional methods require sequence alignment, which is time-consuming and computationally intensive. Additionally, our manuscript includes "PhyloTree Case Studies" (RQ6) and visualizations in Sec. E.6, demonstrating PhyloGen's ability to cluster biologically relevant species and highlight evolutionary relationships effectively. While defining true phylogenies in simulated datasets presents challenges, natural tree structures offer a more valid basis for inference. Furthermore, this setup is rare in standard references and baselines. 
We appreciate your interest, as it helps clarify our method's scope and potential enhancements. Future work will explore this area more thoroughly to validate PhyloGen's effectiveness. **Limitations:** **L1: Robustness:** Thank you for your comments. We've extended our tests to include scenarios such as genetic mutations. Here are the results from dataset DS1 under different settings: | Setting | Description | ELBO | | --- | --- | --- | | Setting 3 | Replaced sequences with alternatives from the same species | -7008.23 | | Setting 4 | Applied mutation rate of 5% | -7012.02 | | Setting 5 | Applied mutation rate of 10% | -7021.17 | The results show that the ELBO metric for Setting 3 is similar to the original result while Setting 4’s result is closer to the original than Setting 5. This indicates that our model maintains robustness under different types of data noise and perturbations, particularly in handling genetic mutations. **L2: Interpretability:** Thank you for your feedback. Our manuscript includes case studies in RQ6 and visualizations in Sec. E.6, demonstrating PhyloGen's application to real-world genetic data. These provide intuitive understanding and help enhance overall interpretability. Specifically, Fig. 7 illustrates the clustering and phylogenetic tree structure, effectively showcasing the representation results. Compared to similar studies, our approach provides a more extensive and comprehensive analysis, setting a high standard for interpretability within the field. We appreciate your feedback and will continue to refine this aspect. **L3: Data Generalizability:** Thank you for your question. Indeed, the benchmark datasets used by PhyloGen include a wide variety of organisms covering marine animals, plants, bacteria, fungi and eukaryotes. 
This ensures that our model has been tested in various biological environments, demonstrating its ability to generalize to different types of genetic data and organisms not specifically represented in the test dataset. **L4: Pre-trained Models:** Please see our response to Q2. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, We sincerely appreciate your efforts and valuable feedback. If you are satisfied with our responses and our improvements, please consider updating your score. If you need further clarification, please don't hesitate to contact us. We are grateful for your time and look forward to your response! --- Rebuttal 2: Title: Summarized Response to the Unaddressed Concerns Comment: Dear Reviewer QAeZ, We highly value your feedback and suggestions, which are crucial as we further revise and enhance our work. **We have revised our manuscript and addressed all your concerns.** We sincerely invite you to review our responses, hoping that they will help dispel any misconceptions about our work. Below is a summary of our response: - Performance on Larger Datasets: We clarified PhyloGen's scalability and ensured a fair comparison by testing it on the same datasets as other state-of-the-art methods. - Pre-trained Models and Hyperparameter Sensitivity: We justify the selection of DNABERT2 and demonstrate PhyloGen's robustness across different hyperparameter settings. - Interpretability and Generalizability: We highlight several case studies and visualizations that demonstrate PhyloGen's interpretability and generalizability in various biological contexts. We also expanded the robustness tests and addressed concerns regarding computational bottlenecks. In this revised version, we have sincerely and diligently responded to your valuable comments. 
**We trust that our responses have clearly addressed your questions and invite you to review them.** We are pleased to inform you that several reviewers have given **high scores**, with one even **raising their score** after reviewing our responses. Notably, the reviewer with **high confidence** recognized the strength and robustness of our approach. In light of this, we hope you reconsider your assessment and increase your score. If you have any further questions, we would be happy to address them. Best regards, Authors
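The mutation-rate perturbations used in the robustness tests of this thread (Settings 4 and 5, with 5% and 10% rates) amount to independent per-site substitutions. A minimal generic sketch — the alphabet, sequence, and fixed seed are assumptions for illustration, not the paper's procedure:

```python
import random

def mutate(seq, rate, alphabet="ACGT", rng=None):
    """Replace each site independently with probability `rate`
    by a different symbol from `alphabet`."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    out = []
    for base in seq:
        if rng.random() < rate:
            out.append(rng.choice([b for b in alphabet if b != base]))
        else:
            out.append(base)
    return "".join(out)

original = "ACGT" * 250  # 1000-site toy sequence
perturbed = mutate(original, rate=0.05)
diff = sum(a != b for a, b in zip(original, perturbed))
print(diff / len(original))  # roughly 0.05 mismatches per site
```

Re-running inference on such perturbed inputs and comparing the resulting ELBO to the unperturbed run is what the reported Settings 4 and 5 measure.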
Rebuttal 1: Rebuttal: **General Response:** We are very grateful to the four reviewers for their insightful comments, which have significantly improved the quality and clarity of our manuscript. **Summary:** We are pleased that the reviewers recognized the highlights of our work, including the novel framework combining pre-trained genomic language models with graph structure generation for phylogenetic inference. This approach, which received praise from **all reviewers**, is expected to significantly impact the community and attract further exploration **(QAeZ, XKPL, 4WGb, MKto)**. Our method achieves joint optimization of tree topology and branch lengths without typical constraints and does not require pre-generated topologies **(QAeZ, XKPL)**. It demonstrates state-of-the-art performance on real-world datasets **(QAeZ, XKPL, MKto)**. Extensive case studies and ablation experiments validate the robustness of our method and the diversity of the generated topologies **(QAeZ, XKPL)**. Additionally, our approach shows higher computational efficiency than baseline methods **(QAeZ)**. These results highlight our method's practicality and underscore its broad theoretical and applied potential **(QAeZ, XKPL)**. Below, we address some common concerns regarding the limitations and performance of our model: **1. Limitations:** While our model demonstrates outstanding performance on standard benchmarks, it may benefit from using more expressive Q(z) distributions or incorporating prior constraints to better capture complex dependencies and interactions in the latent space. Additionally, although the Neighbor-Joining (NJ) algorithm is effective for iterative tree construction, it is computationally intensive. We are exploring efficient data structures and parallel processing techniques to address this bottleneck. 
Furthermore, our model has primarily been applied to genomic data, and further research is needed to extend its applicability to diverse biological data, such as protein and single-cell data. **2. Runtime Comparison:** Despite these limitations, our model has performed exceptionally well in most evaluations and offers significant advantages in runtime and memory usage on the DS1 dataset: | | MrBayes | GeoPhy | PhyloGFN| ARTree | Ours | | --- | --- | --- | --- | --- | --- | | Runtime | 22h46m | 8h10m | 20h40m | 62h21m | 6h53m | | Memory (MB) | — | 1450.93 | 2341.80 | 2040.50 | 1051.11 | **3. Source Code Availability:** Due to time constraints, we have not yet fully organized all the code. The current version could be more organized, but we continue to refine it to meet the reviewers' needs. We have sent an anonymized link to the AC for reference. **4. Thorough Ablation Study:** We chose DS1 for our primary experiments because it is representative and commonly used in the field, ensuring comparability with other recent studies. Our original experiment design is both adequate and reasonable. In response to further curiosity and to provide additional details, we have included ablation results for additional datasets in the attached file. These are not to compensate for an insufficient design but to demonstrate the robustness of our method more comprehensively. If necessary, we can further supplement with more datasets. Thank you again for your continued attention and valuable insights. Pdf: /pdf/0b41ebf6c2d194ecadbba3f442cc0d53adaab131.pdf
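The Neighbor-Joining initialization discussed throughout these rebuttals (and flagged above as a bottleneck) follows the classic O(n³) scheme: repeatedly pick the pair minimizing the Q-criterion and merge it into a new internal node. A minimal pure-Python sketch of that generic algorithm — not PhyloGen's implementation; the labels and distances are toy values chosen so the expected cherries are recoverable:

```python
def neighbor_joining(labels, D):
    """Classic NJ on a symmetric distance matrix; returns a nested-tuple topology."""
    nodes = list(labels)
    D = [row[:] for row in D]  # work on a copy
    while len(nodes) > 2:
        n = len(nodes)
        r = [sum(row) for row in D]  # row sums
        # Q-criterion: Q(i, j) = (n - 2) * d(i, j) - r_i - r_j; join the minimizer.
        i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
                   key=lambda p: (n - 2) * D[p[0]][p[1]] - r[p[0]] - r[p[1]])
        # Distances from the new internal node u to every remaining node k.
        du = [(D[i][k] + D[j][k] - D[i][j]) / 2 for k in range(n) if k not in (i, j)]
        keep = [k for k in range(n) if k not in (i, j)]
        D = [[D[a][b] for b in keep] for a in keep]
        for row, d in zip(D, du):
            row.append(d)
        D.append(du + [0.0])
        nodes = [nodes[k] for k in keep] + [(nodes[i], nodes[j])]
    return (nodes[0], nodes[1])

# Additive toy distances for taxa A..D with cherries (A, B) and (C, D).
labels = ["A", "B", "C", "D"]
D = [[0, 3, 5, 6],
     [3, 0, 6, 7],
     [5, 6, 0, 7],
     [6, 7, 7, 0]]
tree = neighbor_joining(labels, D)
print(tree)  # (('A', 'B'), ('C', 'D')): both cherries recovered
```

Each iteration scans all O(n²) pairs and there are O(n) iterations, which is the cubic cost that the efficient data structures and parallelization mentioned above would target.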
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Architecture of Decentralized Expert System for Early Alzheimer's Prediction Enhanced by Data Anomaly Detection
Reject
Summary: This work introduces a novel approach to diagnosing Alzheimer's Disease using a decentralized expert system. This system leverages blockchain technology and Federated Learning to enhance data privacy and manage large volumes of MRI data effectively. The key innovation lies in integrating these technologies to address the challenges of traditional diagnostic methods, which often suffer from delays and inaccuracies, especially in the early stages of the disease. Strengths: 1. This work presents a pioneering integration of blockchain technology and Federated Learning to enhance Alzheimer's Disease (AD) diagnostics, addressing privacy concerns and data management challenges. 2. The proposed decentralized expert system architecture, which includes anomaly detection for patient-submitted data, showcases a comprehensive approach to AD diagnostics, emphasizing AI-driven MRI analysis. Weaknesses: 1. While the system shows promising results, the article does not provide extensive comparative data against traditional centralized systems or other decentralized approaches, which could validate its superiority more robustly. This work lacks comparative performance data. 2. The complexity of the blockchain and Federated Learning components might pose usability challenges for less technically adept users, potentially affecting the system's adoption. 3. No further details are given about the algorithms this work uses; consider providing a more meaningful algorithm design for the specific model being used. Technical Quality: 2 Clarity: 1 Questions for Authors: How well does the AI model generalize to diverse populations, given the variability in MRI data quality and the limited scope of initial training datasets? Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: 1. The accuracy of the AI model heavily depends on the quality and consistency of input data, which might vary significantly across different healthcare settings. 2.
The use of advanced technologies such as blockchain might limit the accessibility of the system for users not familiar with such technology, potentially restricting its applicability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - While the system shows promising results, the article does not provide extensive comparative data against traditional centralized systems or other decentralized approaches, which could validate its superiority more robustly. This work lacks comparative performance data. - Response: The current version of the paper primarily introduces the architectural framework and theoretical benefits of our proposed system. The focus has been on detailing the integration of blockchain technology and federated learning to enhance privacy and scalability in medical data processing. Future publications will explore: User Interaction, Real-world Implementation Challenges, Use-Case Elaboration. Although this is a pure architecture paper, we have trained an AI model using data from 561 subjects in total: 231 SMC, 259 CN, and 71 MCI patients. The feature selection algorithms were applied to the graph features (degree centrality for each ROI) to select the most discriminating features for the classification of MCI, SMC, and CN subjects. An initial AI model is generated to be deployed on the nodes, and we plan to undertake a comprehensive experiment designed to validate Federated Learning with new patients' data acquired via DApp. This will involve testing the model across different demographics and disease stages to statistically ascertain the impact of data diversity on prediction accuracy. See the list of the major points ABOVE. - The complexity of the blockchain and Federated Learning components might pose usability challenges for less technically adept users, potentially affecting the system's adoption. - Thank you for raising this important concern regarding the usability challenges that could arise from the integration of complex technologies such as blockchain and Federated Learning in our system.
Ensuring ease of use is critical for the adoption and effectiveness of any new technology, especially in a healthcare context where users range from highly technical staff to patients who may not have extensive technological expertise. For a practical example of a blockchain system where patients are reimbursed with Ether for sharing anonymous medical tests, we refer to the study (https://doi.org/10.1017/dap.2024.4), which provides insights into the incentives and data handling aspects that are relevant to our proposed architecture. Wherever possible, complex processes, especially those related to data handling and model training, will be automated. Incorporating feedback mechanisms within the system will allow us to continuously gather user experiences and identify aspects of the system that are particularly challenging for users. - No further details are given about the algorithms this work uses; consider providing a more meaningful algorithm design for the specific model being used. - Response: Here's an outline of the specific algorithms used and how they contribute to the functionality of our model: - Blockchain Algorithm: The system employs Ethereum smart contracts for handling data submissions, access control, and transactions. Smart contracts automate these processes, ensuring they are executed under predefined conditions without third-party intervention. See Listings 1, 2, 3 - smart contracts used for data anomaly detection. - Federated Learning Algorithm: - Model Initialization: The global model is initialized using a lightweight SVM, suitable for processing medical imaging data. The initialization parameters are optimized to balance training speed and model accuracy. - Local Training: At each node, local model updates are performed using Stochastic Gradient Descent (SGD) with backpropagation.
To accommodate diverse data distributions at different nodes, we implement adaptive learning rates. - Aggregation Protocol: We employ Federated Averaging (FedAvg) for aggregating local model updates. See Algorithm 1. - Anomaly Detection Algorithm: For pre-processing data, we utilize the Isolation Forest algorithm to identify and isolate anomalies in the dataset (the code in Listing 2 demonstrates the process of anomaly detection in biological data using the Isolation Forest algorithm). This step is crucial for maintaining the quality of data used in training the Federated Learning model. - How well does the AI model generalize to diverse populations, given the variability in MRI data quality and the limited scope of initial training datasets? - Response: Thank you for raising a critical question regarding the generalizability of our AI model to diverse populations. This is particularly pertinent given the inherent variability in MRI data quality and the constraints of the initially available training datasets. - Model Initialization and Training: - Lightweight Initialization: The AI model is initiated using a lightweight architecture designed to be responsive and efficient. The initialization parameters of the model are carefully optimized to balance training speed with accuracy. - Local Training Adaptation: During local training, each participating node adjusts the model using its local data, which likely includes diverse patient demographics and varying MRI data quality. This local adaptation helps tailor the model to specific sub-populations. - Handling Data Quality Variability: - Off-Chain Data Quality Verification: To address concerns regarding MRI data quality, we employ an off-chain verification process through smart contracts on each edge device (Listings 2, 3). This process ensures that only high-quality, verified data is used during the training phase.
By filtering out poor quality or anomalous data early in the process, we maintain the integrity and reliability of the training data. - Robustness to Data Variability: The federated learning framework inherently enhances the model's robustness to data variability. By aggregating diverse local updates, the model learns to generalize across a broader spectrum of data characteristics than if trained on a homogenized central dataset. --- Rebuttal Comment 1.1: Title: Reply to Rebuttal by Authors Comment: Thank you for your response. It is still not clear whether this proposed framework for Alzheimer's is useful, which should be proved by comprehensive experiments, and my concerns about the missing details of the algorithms of each part (such as blockchain, federated learning, anomaly detection) have not been addressed by your response, which offers definitions rather than formulas explaining your algorithms. The most important thing you need to do is to validate your framework through experiments (at least experiments with simple algorithms and small datasets). --- Rebuttal 2: Comment: Thank you for your insightful feedback and the emphasis on the importance of experimental validation.
I appreciate the opportunity to clarify the intent and scope of our current manuscript, which primarily aims to introduce an innovative architectural framework for a decentralized, privacy-preserving system tailored for Alzheimer’s disease prediction - see the innovations ABOVE: - Decentralized Data Crowdsourcing with Compensation - Hierarchical Clustering in Federated Learning - Integration of Blockchain for Data Integrity and Auditability - Focus on Early Detection of Alzheimer’s Disease - Unique Application to Alzheimer’s Disease Prediction and Monitoring - Clarification of Scope: - Architectural Focus: This paper is designed as a conceptual and architectural introduction to a novel framework integrating blockchain technology, federated learning, and advanced data analytics for medical data processing. The primary contributions are the architectural innovations and the theoretical underpinning of how these components interact within our proposed system. - Stage of Research: As an architectural paper, the current focus is on detailing the system design and the potential impacts of its implementation. It establishes the groundwork for future empirical research, which will rigorously test and validate the system’s performance across various metrics. In another study, we have shown how patients are reimbursed for sharing medical test data: https://doi.org/10.1017/dap.2024.4. We have also designed a user-friendly app (https://github.com/stefankam/predprodalzheimer), which receives MRI images, converts them to connectivity matrices, and predicts the stage of Alzheimer's disease without transferring data to any central location. - Addressing Algorithmic Details: - While comprehensive algorithmic formulas are typically more relevant in an implementation or experimental paper, we include detail about the algorithms’ roles and interactions within the system to provide clarity on how each component contributes to the overall functionality; see the class diagram.
Although this is a pure architecture paper, we have trained an AI model using data from 561 subjects in total: 231 SMC, 259 CN, and 71 MCI patients. The feature selection algorithms were applied to the graph features (degree centrality for each ROI) to select the most discriminating features for the classification of MCI, SMC, and CN subjects. An initial AI model is generated to be deployed on the blockchain nodes. See https://arxiv.org/pdf/1808.03949 Your Question: The complexity of the blockchain and Federated Learning components might pose usability challenges for less technically adept users, potentially affecting the system's adoption. - Response: While it is true that advanced technologies such as blockchain might pose a learning curve for some users, the primary intention of our system is not to require widespread adoption of blockchain technology by all patients. Instead, the system is designed with several key features that address accessibility and user engagement: - User Experience Design: The core functionality of our system is designed to be user-friendly and intuitive (https://github.com/stefankam/predprodalzheimer). Patients interact with the application primarily through a straightforward interface that does not require them to have a deep understanding of blockchain technology. The blockchain operates in the background to ensure data integrity and security, while patients focus on interacting with the system through familiar processes, such as submitting MRI data and receiving predictions. - Incentive Mechanism: To encourage participation and ensure engagement, the system includes an incentive mechanism where patients receive Ether in exchange for sharing their anonymous test data. This approach provides a tangible benefit to users, which can help overcome any potential hesitation or lack of familiarity with the underlying technology.
By offering financial incentives, we aim to motivate users to participate actively without needing to understand the technical aspects of blockchain. - Blockchain as a Service: Moreover, the blockchain component of our system is implemented as a service that abstracts the complexity away from the end users. This means that users interact with a simplified front-end application, while the blockchain infrastructure manages the data submission, validation, and security behind the scenes. The integration of blockchain technology is intended to enhance data privacy and security, without placing undue burden on the users. - Targeted Application: It is important to note that the system is targeted towards a specific use case where patients are aware of and are motivated by the benefits of participating in a decentralized data-sharing network. The goal is not to achieve universal adoption but to provide a secure and incentivized platform for those who choose to contribute their data.
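To make the Federated Averaging (FedAvg) aggregation mentioned in the algorithm outline above concrete, here is a minimal sketch: each node's parameters are weighted by its share of the total training samples. The dict-of-lists parameter format and the sample counts are illustrative assumptions, not the system's actual model representation.

```python
def fedavg(local_weights, sample_counts):
    """Federated Averaging: combine per-node parameters into a global model,
    weighting each node by its number of training samples.
    local_weights: list of dicts mapping parameter name -> list of floats."""
    total = sum(sample_counts)
    global_weights = {}
    for name in local_weights[0]:
        global_weights[name] = [0.0] * len(local_weights[0][name])
        for weights, n in zip(local_weights, sample_counts):
            share = n / total  # this node's fraction of all samples
            for k, value in enumerate(weights[name]):
                global_weights[name][k] += value * share
    return global_weights

# Two hypothetical nodes: one trained on 100 samples, one on 300
node_a = {"w": [1.0, 2.0]}
node_b = {"w": [3.0, 4.0]}
agg = fedavg([node_a, node_b], [100, 300])
# agg["w"] -> [2.5, 3.5]  (0.25 * node_a + 0.75 * node_b)
```

In the decentralized setting described above, only these parameter dicts would travel to the aggregator; the MRI data itself never leaves the node.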
Summary: The authors assume that applying blockchain platforms to combine datasets for Alzheimer’s Disease and then using federated learning for multi-centralized training can improve diagnostic performance. However, the manuscript lacks technical details and experimental evidence. All descriptions are conceptual, making the manuscript a proposal rather than a technical paper. Strengths: It is interesting to apply blockchain platforms to combine datasets for Alzheimer’s Disease and then use federated learning for multi-centralized training. Weaknesses: There are no technical details and no experiments. Details can be found in Questions and Limitations. Technical Quality: 1 Clarity: 1 Questions for Authors: 1. What is your main contribution: a model, a framework, or just a proposal? 2. How is the dataset collected on blockchain platforms? Please make the description clear. 3. How do you prove the advantage of the proposed architecture? There are no experiments in the manuscript. 4. How is the federated model trained, and what is its performance? Confidence: 4 Soundness: 1 Presentation: 1 Contribution: 1 Limitations: 1. No technical details and experiments. 2. The literature review of Alzheimer’s Disease diagnosis is not complete, especially regarding AI-based approaches. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - What is your main contribution, a model, a framework or just a proposal? - Response: Our main contribution is the development of a framework. This framework outlines a decentralized expert system architecture that integrates advanced technologies such as blockchain and federated learning for early-stage Alzheimer's disease prediction. This architecture is designed to enhance the security, privacy, and scalability of handling sensitive medical data, while also utilizing anomaly detection mechanisms to ensure the quality and integrity of the data used for AI-driven predictions. Here’s a breakdown of how this contribution can be framed: - Decentralized Architecture: The framework proposes a novel approach by using a decentralized system that leverages blockchain technology. This is intended to ensure data integrity, transparency, and security in the processing and handling of sensitive medical data. See the list of the major points ABOVE. - Integration of Technologies: The integration of blockchain with federated learning within the framework allows for a collaborative, yet secure, approach to developing AI models. See the list of the major points ABOVE. - Anomaly Detection: The framework includes an anomaly detection mechanism to ensure the quality of the data that feeds into the AI models. This is crucial in medical applications where data quality directly impacts the accuracy and reliability of disease predictions. - Scalability and Adaptability: The proposed framework is designed to be scalable and adaptable across different healthcare settings. In our system, blockchain technology is primarily utilized for two main purposes: facilitating transactions of tokens in exchange for medical data contributions and maintaining a secure and immutable record of data transactions and verification statuses. 
This focused use of blockchain allows us to leverage its benefits, namely enhanced security and transparency, without significantly adding to the computational load typically associated with blockchain operations such as complex consensus mechanisms or the handling of large-scale data computations. - How is the dataset collected on blockchain platforms? Please make the description clear. - For a practical example of a blockchain system where patients are reimbursed with Ether for sharing anonymous medical tests, we refer to the study (https://doi.org/10.1017/dap.2024.4), which provides insights into the incentives and data handling aspects that are relevant to our proposed architecture. A smart contract that allows patients to submit their data along with a brief initial evaluation is given in Listing 1 (see Appendix C). The contract stores this data on-chain and allows patients to verify and timestamp their submissions. Note that this contract primarily serves as a ledger for the data and initial evaluation results, and more comprehensive checks should be performed off-chain by the application (DApp) before submitting data to the blockchain. In this contract, the "submitCertificate" function allows patients to submit the results of the off-chain anomaly detection process. The "verifyCertificate" function allows patients to verify their certificates. One can implement additional verification steps in the "verifyCertificate" function as needed. To implement a smart certificate for anomaly detection on the client side of a medical data sharing platform, we would use off-chain data analysis techniques, since performing anomaly detection directly on-chain would be expensive and inefficient due to the trade-off between performance and security. - How do you prove the advantage of the proposed architecture? There are no experiments in the manuscript.
- Response: Thank you for highlighting the importance of empirical validation in demonstrating the advantages of our proposed decentralized architecture. Although this is a pure architecture paper, we have trained an AI model using data from 561 subjects in total: 231 SMC, 259 CN, and 71 MCI patients. The feature selection algorithms were applied to the graph features (degree centrality for each ROI) to select the most discriminating features for the classification of MCI, SMC, and CN subjects. An initial AI model is generated to be deployed on the nodes, and we plan to undertake a comprehensive experiment designed to validate Federated Learning with new patients' data acquired via DApp. This will involve testing the model across different demographics and disease stages to statistically ascertain the impact of data diversity on prediction accuracy. - How is the federated model trained, and what is its performance? - Response: Federated Model Training Process. See Algorithm 1. - Initialization: A central server initializes a global model with a pre-defined architecture suitable for the specific medical prediction task. - Distribution of Model: The initialized model is distributed to participating nodes. - Local Training: Each node trains the model on its local dataset. This training is typically performed using traditional machine learning algorithms, adapted to the specificities of the data (e.g., types of MRI images, patient demographics). - Aggregation: - After training, each node sends only the model updates (e.g., weights, gradients) to the central server. The data itself remains at the node, preserving privacy. - The central server aggregates these updates to update the global model. This aggregation can be done using methods like weighted averaging or more sophisticated approaches depending on the algorithm used (e.g., Federated Averaging). - Iteration: The updated global model is sent back to the nodes for further training.
This process repeats for several iterations until the model converges or meets certain performance criteria. - Once the model training is complete and it has converged satisfactorily, the final model can be deployed for use in predictions. Performance can be evaluated via accuracy metrics. --- Rebuttal 2: Comment: I thank the authors for the detailed response. I am still not convinced by the 'experiments and results' provided in the rebuttal. I expect more complete experiments and comparisons in this kind of paper. The literature review is also still thin. I would not be able to raise my score. --- Rebuttal Comment 2.1: Comment: Thank you for your insightful feedback and the emphasis on the importance of experimental validation. I appreciate the opportunity to clarify the intent and scope of our current manuscript, which primarily aims to introduce an innovative architectural framework for a decentralized, privacy-preserving system tailored for Alzheimer’s disease prediction - see the innovations ABOVE: - Decentralized Data Crowdsourcing with Compensation - Hierarchical Clustering in Federated Learning - Integration of Blockchain for Data Integrity and Auditability - Focus on Early Detection of Alzheimer’s Disease - Unique Application to Alzheimer’s Disease Prediction and Monitoring - Clarification of Scope: - Architectural Focus: This paper is designed as a conceptual and architectural introduction to a novel framework integrating blockchain technology, federated learning, and advanced data analytics for medical data processing. The primary contributions are the architectural innovations and the theoretical underpinning of how these components interact within our proposed system. - Stage of Research: As an architectural paper, the current focus is on detailing the system design and the potential impacts of its implementation. It establishes the groundwork for future empirical research, which will rigorously test and validate the system’s performance across various metrics.
In another study, we have shown how patients are reimbursed for sharing medical test data: https://doi.org/10.1017/dap.2024.4. We have also designed a user-friendly app (https://github.com/stefankam/predprodalzheimer), which receives MRI images, converts them to connectivity matrices, and predicts the stage of Alzheimer's disease without transferring data to any central location. - Addressing Algorithmic Details: - While comprehensive algorithmic formulas are typically more relevant in an implementation or experimental paper, we include detail about the algorithms’ roles and interactions within the system to provide clarity on how each component contributes to the overall functionality; see the class diagram. Although this is a pure architecture paper, we have trained an AI model using data from 561 subjects in total: 231 SMC, 259 CN, and 71 MCI patients. The feature selection algorithms were applied to the graph features (degree centrality for each ROI) to select the most discriminating features for the classification of MCI, SMC, and CN subjects. An initial AI model is generated to be deployed on the blockchain nodes. See https://arxiv.org/pdf/1808.03949
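For illustration, the Isolation Forest pre-processing step described in the rebuttals above (Listing 2 of the paper) can be approximated with scikit-learn's off-the-shelf implementation. The feature layout (four hypothetical ROI degree-centrality values per record), the contamination rate, and the injected outlier are all assumptions for this sketch, not values from the system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 100 plausible patient feature vectors (e.g., ROI degree centralities)
normal = rng.normal(loc=0.5, scale=0.05, size=(100, 4))
# One grossly corrupted submission that should be filtered out
corrupted = np.array([[10.0, -10.0, 10.0, -10.0]])
X = np.vstack([normal, corrupted])

# fit_predict returns 1 for inliers and -1 for detected anomalies
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(X)

# Only records flagged as inliers would feed the Federated Learning training
clean = X[flags == 1]
```

In the architecture above this check runs off-chain on each edge device, and only the resulting certificate (not the raw data) is recorded via the smart contract.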
Summary: The paper presents a decentralized expert system designed to predict early-stage Alzheimer's Disease using AI-driven MRI analysis. The system leverages blockchain technology and Federated Learning to ensure data privacy and security while performing anomaly detection on patient-submitted data. The architecture includes a Web3 application for patients to upload biological information and MRI images securely. The decentralized approach aims to improve early detection and intervention for Alzheimer's Disease, providing a more comprehensive representation of AD patterns and enhancing model performance through data diversity. Strengths: The paper encapsulates a few novel ideas. They can be summarized as follows: 1. Handling the security and sensitivity of patient medical information is of paramount importance. The authors were motivated by a very relevant problem and presented an approach to blockchain technology with the stated aim of providing robust data privacy and security. By building on decades of research on this topic, this approach has the potential to be extended in the future alongside general advances in this domain. 2. While there is some confusion around their use case (see weaknesses below), the authors leverage Federated Learning and a decentralized system to mitigate the challenges associated with model training on centralized data repositories, such as data bottlenecks and privacy concerns. 3. The system aims to provide early-stage prediction of Alzheimer's Disease, which is crucial for timely intervention and improved patient outcomes. Weaknesses: However, given the commendable motivations, there are several challenges with the current paper: 1. First and perhaps most important, the paper fails to present the real-world challenges associated with the adoption of such decentralized approaches, especially as they pertain to patients engaging with blockchain wallets and data submission interfaces.
Also, the primary use case for the decentralized approach is not evident: is model training the prime use case, or is the main use case patients being able to generate inferences on their own medical records? Overall, the usage scenario around the setup needs to be better motivated and established. 2. The paper also lacks formalism in its presentation. For example, if the primary contribution is the architecture around the decentralized AI approach, the design principles need to be better justified and articulated. A system architecture diagram needs to be established as well. Similarly, the "proof" around the decentralized approach is not a rigorous mathematical proof. Rather, the logic is derived from a hypothesis that more diverse data should lead to a better model. This is a hypothesis at best and needs to be experimentally validated. 3. Finally, the paper is lacking in experimental validation. For example, the proof needs to be backed by real-world experiments. Also, this is not the first paper to posit a federated learning approach to medical AI prediction. Some of the SOTA methods in this space need to be compared against. Technical Quality: 1 Clarity: 2 Questions for Authors: Please see the weaknesses above Confidence: 5 Soundness: 1 Presentation: 2 Contribution: 2 Limitations: Please see the weaknesses above Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - First and perhaps most important, the paper fails to present the real-world challenges associated with the adoption of such decentralized approaches, especially as they pertain to patients engaging with blockchain wallets and data submission interfaces. - Response: The current paper introduces a novel architecture for a decentralized AI system tailored for early Alzheimer’s disease prediction, on which multiple applied studies will be built. Future publications will explore: User Interaction, Real-world Implementation Challenges, Use-Case Elaboration. - In this initial phase, we outline multiple potential use cases to showcase the versatility of the architecture. The primary use case will indeed depend on specific implementation strategies, which will be the subject of our next series of papers. This includes a deeper exploration into whether model training or patient-driven inferences provide the most utility in practical scenarios. - For a practical example of a blockchain system where patients are reimbursed with Ether for sharing anonymous medical tests, we refer to the study (https://doi.org/10.1017/dap.2024.4), which provides insights into the incentives and data handling aspects that are relevant to our proposed architecture. - The paper also lacks formalism in its presentation. For example, if the primary contribution is the architecture around the decentralized AI approach, the design principles need to be better justified and articulated. A system architecture diagram needs to be established as well. Similarly, the "proof" around the decentralized approach is not a rigorous mathematical proof. - Response: Our paper primarily introduces a novel architecture for a decentralized AI system tailored for early Alzheimer’s disease prediction. The presentation style was chosen to first introduce the conceptual framework and design principles in an accessible manner.
We understand, however, the need for a more formal presentation to better articulate the underlying design principles and to justify their effectiveness comprehensively. - System Architecture Diagrams: We have provided not only the system architecture and class diagram in Figure 3 but also the AI model in Figure 1. We have strengthened the design principles by grounding them more deeply in relevant literature and by providing rationales for why specific architectural choices were made. This includes discussions on the scalability (Section 4.2, application development), security (Appendix A), and data integrity benefits of the decentralized approach. - Empirical Validation: Although this is a pure architecture paper, we have trained an AI model using data from 561 subjects in total: 231 SMC, 259 CN, and 71 MCI patients. The feature selection algorithms were applied to the graph features (degree centrality for each ROI) to select the most discriminating features for the classification of MCI, SMC, and CN subjects. An initial AI model is generated to be deployed on the nodes, and we plan to undertake a comprehensive experiment designed to validate Federated Learning with new patients' data acquired via DApp. This will involve testing the model across different demographics and disease stages to statistically ascertain the impact of data diversity on prediction accuracy. - Mathematical Formalism: While the current logic is derived from established hypotheses within the field, we acknowledge the need for more rigorous mathematical formalism. We have explored preliminary proofs of concept for our claims in Section 3.1. A more robust statistical analysis will be included to support the hypotheses with empirical data. - Finally, the paper is lacking in experimental validation. For example, the proof needs to be backed by real-world experiments. Also, this is not the first paper to posit a federated learning approach to medical AI prediction.
Some of the SOTA methods in this space need to be compared against. - Response: Thank you for your constructive feedback regarding the need for experimental validation and comparative analysis with state-of-the-art methods. We acknowledge these gaps in our current architecture-based manuscript. - Real-World Experiments: We agree that real-world experimental validation is crucial to substantiate the theoretical claims made about our decentralized AI system. To this end, we plan to implement another study using the proposed architecture. This requires operationalizing the DApp so that patients with varying stages of Alzheimer's disease can trade their medical tests for tokens. We will document the system's performance in terms of accuracy, efficiency, and scalability within this real-world setting. - Performance Metrics: In addition to accuracy, we will evaluate other relevant metrics such as sensitivity, specificity, and area under the ROC curve (AUC-ROC) to provide a comprehensive view of the system's performance. This will help validate the system's practical effectiveness and reliability. - Benchmarking: We will benchmark our system against other leading methods that have been validated in similar contexts. This will include both centralized and decentralized approaches to medical AI prediction, particularly those using federated learning. The comparison will focus not only on performance metrics but also on aspects such as data privacy, model robustness against data variability, and system scalability. This paper deals only with the architecture of the system. - Our Major Innovations: While federated learning is not novel per se, our application of this approach in a fully decentralized, blockchain-supported environment offers distinct innovations, particularly in terms of enhancing data security and patient privacy.
We will clarify these unique contributions and demonstrate how they improve upon existing federated learning models (see above) --- Rebuttal Comment 1.1: Title: Acknowledging author response Comment: Thanks for the response --- Rebuttal 2: Title: Re: challenges associated with the adoption of such decentralized approaches Comment: In regard to your first comment: while it is true that advanced technologies such as blockchain might pose a learning curve for some users, the primary intention of our system is not to require widespread adoption of blockchain technology by all patients. Instead, the system is designed with several key features that address accessibility and user engagement: - User Experience Design: The core functionality of our system is designed to be user-friendly and intuitive (https://github.com/stefankam/predprodalzheimer). Patients interact with the application primarily through a straightforward interface that does not require them to have a deep understanding of blockchain technology. The blockchain operates in the background to ensure data integrity and security, while patients focus on interacting with the system through familiar processes, such as submitting MRI data and receiving predictions. - Incentive Mechanism: To encourage participation and ensure engagement, the system includes an incentive mechanism where patients receive Ether in exchange for sharing their anonymous test data. This approach provides a tangible benefit to users, which can help overcome any potential hesitation or lack of familiarity with the underlying technology. By offering financial incentives, we aim to motivate users to participate actively without needing to understand the technical aspects of blockchain. - Blockchain as a Service: Moreover, the blockchain component of our system is implemented as a service that abstracts the complexity away from the end users. 
This means that users interact with a simplified front-end application, while the blockchain infrastructure manages the data submission, validation, and security behind the scenes. The integration of blockchain technology is intended to enhance data privacy and security, without placing undue burden on the users. - Targeted Application: It is important to note that the system is targeted towards a specific use case where patients are aware of and are motivated by the benefits of participating in a decentralized data-sharing network. The goal is not to achieve universal adoption but to provide a secure and incentivized platform for those who choose to contribute their data. --- Rebuttal Comment 2.1: Title: Followups Comment: Thanks for the followups. Some of the clarifications, such as those around the reward structure, provide a lot more clarity around the use case - thanks for these details. Ultimately however, I am inclined to stick to my original scores. There are two primary drivers for this: (a) while the experimental validation plans listed by the authors make sense, such an extensive change would need to be re-reviewed; (b) the clarifications around an architectural motivation make sense but the exposition lacks any testable hypothesis. For example, was there any other choice considered? Why is the particular blockchain better than alternatives? These are some of the questions that need to be deliberated and addressed --- Rebuttal 3: Comment: Thank you for your thoughtful feedback and for acknowledging the additional details provided in our follow-up explanations. We understand your concerns regarding the necessity of experimental validation and the need for more explicit articulation of testable hypotheses and decision-making rationale within the architectural framework. 
Allow us to address these points to further clarify our position and the manuscript's contributions: - Addressing Experimental Validation: - We appreciate your recognition of our plans for experimental validation. As stated, the current manuscript is primarily focused on establishing a robust architectural framework. It is common in architectural papers to initially focus on the conceptualization and theoretical underpinnings without empirical validation. Subsequent papers often undertake the detailed empirical work. - Articulation of Testable Hypotheses and Decision Rationale: - Why Blockchain?: We selected blockchain due to its inherent properties of decentralization, immutability, and transparency; but as clearly mentioned in the paper, it was mainly due to the possibility of reimbursing patients with tokens for sharing their medical test data, as we showed in an earlier study (https://doi.org/10.1017/dap.2024.4), which provides insights into the incentives and data handling aspects that are relevant to our proposed architecture. - Our position: We spelled out our position not only in the three research questions of Section 2 (Research Design), but also in Section 3.1 (Decentralized expert system performance), with the theorem that a decentralized expert system for diagnosing Alzheimer's Disease could outperform a traditional centralized expert system. We hope that these further comments will mitigate the concerns you've raised. --- Rebuttal Comment 3.1: Title: Followups Comment: I generally agree with the plan but as I mentioned above the changes are not atomic enough for me to raise the score with confidence within an author discussion period
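The "blockchain as a service" idea discussed in this thread — heavy data kept off-chain while the chain records only hashes, verification statuses, and token payouts — can be illustrated with a toy, dependency-free sketch. All class and field names here are hypothetical, and the reward amount is illustrative; the real system uses Ethereum smart contracts rather than this in-memory stand-in:

```python
import hashlib
import json

class MiniLedger:
    """Toy append-only ledger: records only metadata, content hashes, and
    token payouts; the MRI payload itself stays off-chain (e.g. on IPFS)."""
    def __init__(self):
        self.entries = []
        self.balances = {}

    def record(self, patient_id, payload, status, reward=1):
        entry = {
            "patient": patient_id,
            "sha256": hashlib.sha256(payload).hexdigest(),
            "status": status,  # e.g. "verified" by the off-chain checks
            "prev": self.entries[-1]["entry_hash"] if self.entries else None,
        }
        # Chain each entry to the previous one by hashing its contents.
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        if status == "verified":  # compensate only verified contributions
            self.balances[patient_id] = self.balances.get(patient_id, 0) + reward
        return entry

    def verify(self, index, payload):
        """Integrity check: does the off-chain payload still match the ledger?"""
        return self.entries[index]["sha256"] == hashlib.sha256(payload).hexdigest()

ledger = MiniLedger()
scan = b"...MRI bytes stored off-chain..."
ledger.record("patient-42", scan, "verified")
print(ledger.verify(0, scan))           # True: payload untampered
print(ledger.verify(0, scan + b"!"))    # False: tampering detected
print(ledger.balances["patient-42"])    # 1 token credited
```

The point of the sketch is the separation of concerns the rebuttal describes: the ledger never holds the scan, only enough to audit it and to settle the incentive.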
Summary: This paper introduces an innovative decentralized expert system designed for early prediction of Alzheimer's Disease (AD), leveraging blockchain technology and Federated Learning. Traditional diagnostic methods often result in delays and imprecision, particularly in early-stage AD detection, while centralized data repositories face challenges in managing vast volumes of MRI data and maintaining patient privacy. The proposed system addresses these issues by combining blockchain for secure, decentralized data management and Federated Learning for collaborative AI model training across multiple institutions. The system includes robust anomaly detection mechanisms to ensure data quality and integrity, enabling precise early-stage AD predictions. This comprehensive approach aims to revolutionize disease diagnostics by enhancing data privacy, security, and collaborative efforts in the medical community. Strengths: - The paper presents a novel integration of blockchain technology and Federated Learning for early AD prediction, which is innovative in addressing data privacy, security, and collaborative AI model training. - The proposed system is well-conceived, with a detailed architecture and implementation strategy. The inclusion of anomaly detection mechanisms to ensure data quality adds robustness to the system. - The approach has significant potential to improve early-stage AD detection, which is crucial for timely intervention and better patient outcomes. The decentralized nature of the system promotes data privacy and security, addressing major concerns in medical data management. Weaknesses: - The integration of blockchain and Federated Learning introduces significant computational complexity and potential delays due to off-chain processing and communication overhead. - The system's scalability is a concern as the volume of data and the number of users increase, necessitating ongoing optimization to ensure efficient performance. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - How does the system handle variability in MRI image quality and biological data across different institutions? - What specific measures are in place to mitigate the computational complexity and communication overhead in the decentralized setup? - Can you provide more details on the feature selection process and the performance metrics used to evaluate the AI model? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed several limitations, including data quality and consistency, computational complexity, and model generalizability. However, further discussion may be needed on: - Ensuring that the AI model is unbiased and fair across different demographic groups can be difficult, especially if the training data is not representative. - Real-time processing and predictions might be challenging due to the decentralized nature and the need for off-chain processing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - The integration of blockchain and Federated Learning introduces significant computational complexity and potential delays due to off-chain processing and communication overhead. What specific measures are in place to mitigate the computational complexity and communication overhead in the decentralized setup? - Response: In our system, blockchain technology is primarily utilized for two main purposes: facilitating transactions of tokens in exchange for medical data contributions and maintaining a secure and immutable record of data transactions and verification statuses. This focused use of blockchain allows us to leverage its benefits—namely, enhanced security and transparency—without significantly adding to the computational load typically associated with blockchain operations such as complex consensus mechanisms or the handling of large-scale data computations. - Off-Chain Processing: All data-intensive tasks, including MRI and biological information verification and anomaly detection, are conducted off-chain. This approach significantly reduces the computational burden on the blockchain. By handling these processes off-chain, we utilize more powerful computing resources available on traditional platforms, thereby avoiding the latency and resource constraints associated with on-chain computations. - Blockchain as a Ledger: The blockchain component in our system acts more as a ledger for recording transactions and verification results rather than as a processing unit for these tasks. This minimizes the data that needs to be handled by the blockchain, thus reducing potential delays and computational overhead. The transactions recorded involve only the exchange of tokens and metadata regarding the status of data submissions and verifications, which are lightweight in nature. - The system's scalability is a concern as the volume of data and the number of users increase, necessitating ongoing optimization to ensure efficient performance. 
- Response: In response to the reviewer's concern about the scalability of the system as the volume of data and the number of users increase, we appreciate the opportunity to clarify how the architecture is designed to manage scalability efficiently: - Scalability by Design: Our system has been architected with scalability as a core consideration. The primary use of blockchain in our setup is for transactional data and integrity checks, not for heavy computational tasks. This distinct separation ensures that the blockchain does not become a bottleneck as the user base and data volume grow. - Off-chain Processing: To address potential scalability issues, we employ off-chain processing for data-intensive tasks such as MRI and biological information analysis. This approach significantly reduces the load on the blockchain, allowing it to scale more efficiently by handling primarily lighter transactional data. - Efficient Data Management: We leverage advanced data management techniques such as data sharding and decentralized file systems like IPFS, which enhance data retrieval speeds and scalability. These technologies are well-suited to handle large-scale data operations and user growth without degrading system performance. - Adaptive Scaling Techniques: The system incorporates dynamic resource allocation and load balancing to efficiently manage resource distribution across the network. This ensures that the infrastructure can adaptively scale to meet demand without significant delays or performance issues. - How does the system handle variability in MRI image quality and biological data across different institutions? - Response: as we mentioned in section 3.4.2, detecting anomalies in MRI images typically involves computer vision techniques and deep learning models. One might consider using popular deep learning libraries like TensorFlow or PyTorch. Here in Listing 3 (see Appendix C), we provide an approach using a pre-trained model. 
This approach allows us to detect anomalies in MRI images based on how well the autoencoder can reproduce the input image. Anomalies will typically result in higher MSE values compared to normal images. One might need to fine-tune the threshold based on the dataset and requirements. - Can you provide more details on the feature selection process and the performance metrics used to evaluate the AI model? - Response: - For feature selection, we employed the Sequential Forward Selection (SFS) algorithm specifically applied to graph features, such as degree centrality for each Region of Interest (ROI). This method allowed us to iteratively add features that most improved the Random Forest classifier's performance until no significant improvements could be observed. This approach was instrumental in identifying the most discriminating features that could effectively differentiate between MCI, SMC, and CN subjects. - The effectiveness of the feature selection and classification approach was quantitatively evaluated using accuracy as the primary performance metric. The integration of Sequential Forward Selection with the Random Forest classifier yielded a high accuracy of over 92%. This high level of accuracy indicates that the selected features were highly predictive and the model was well-tuned for the classification task at hand. - As we explained in Algorithm 1 of the manuscript: - The AI model, trained initially on a public dataset, can be further fine-tuned and personalized using the brain connectivity matrices. - The model continuously learns from new patient data, improving its accuracy and adaptability. - Patients' longitudinal data is used to monitor disease progression over time. - The trained AI model predicts the transformation to AD based on the inputted MRI images and the patient's longitudinal data.
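The feature-selection pipeline described in this response (Sequential Forward Selection wrapped around a Random Forest) could be sketched roughly as follows. Synthetic data stands in for the ADNI-derived degree-centrality features — the subject counts mirror those reported in the rebuttal, but the feature dimension, number of selected features, and CV settings are illustrative; scikit-learn's `SequentialFeatureSelector` is one possible implementation, not necessarily the authors' own:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
# Synthetic stand-in: 561 subjects x 20 ROI degree-centrality features,
# labels 0=CN, 1=SMC, 2=MCI (counts mirror those reported above).
X = rng.normal(size=(561, 20))
y = np.concatenate([np.zeros(259), np.ones(231), np.full(71, 2)]).astype(int)
X[:, :3] += 0.8 * y[:, None]   # make the first 3 ROIs weakly informative

# Forward selection: greedily add the feature that most improves CV accuracy.
clf = RandomForestClassifier(n_estimators=20, random_state=0)
sfs = SequentialFeatureSelector(clf, n_features_to_select=3,
                                direction="forward", cv=3)
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())
print("selected ROI indices:", selected)
```

On real data one would stop adding features once cross-validated accuracy plateaus, which matches the "until no significant improvements could be observed" criterion in the response.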
Rebuttal 1: Rebuttal: Our paper aims to extend the current federated learning research, specifically in the context of healthcare data analysis, AD progression monitoring, privacy preservation, and practical deployment. The list of the major points and detailed comparison not only highlights the novelty of our work but also its potential impact and practical relevance in the field. - Decentralized Data Crowdsourcing with Compensation: - Innovation: Unlike traditional federated learning approaches that passively rely on existing datasets or institutionally gathered data, our system actively engages individuals (patients and medical practitioners) in a crowdsourcing model where they could be compensated with tokens for their data contributions (see https://doi.org/10.1017/dap.2024.4). - Difference: This approach not only incentivizes data sharing, enhancing dataset diversity and volume, but also introduces a novel economic model to federated learning systems. - Hierarchical Clustering in Federated Learning: - Innovation: Our paper introduces a hierarchical structure within the federated learning model, where data is first processed and models are trained locally within defined clusters before contributing to a global model. - Difference: This method differs from standard federated learning, which typically involves direct aggregation of updates to a central model from all nodes. Our approach reduces communication overhead and enhances privacy by limiting the scope of data sharing to within clusters. - Integration of Blockchain for Data Integrity and Auditability: - Innovation: Incorporating blockchain technology not only for transactional purposes (compensating data contributors) but also to ensure data integrity and provide a transparent, auditable trail of data usage and model updates. 
- Difference: Most federated learning studies focus on the computational aspects of model training and do not integrate blockchain to secure the data and learning process, nor do they utilize blockchain for enhancing transparency and trust among participants. - Focus on Early Detection of Alzheimer’s Disease: - Innovation: Application of AI techniques such as Sequential Forward Selection and Random Forest classifiers specifically tailored for the early detection of Alzheimer’s Disease, optimized through the rich, diverse dataset gathered in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. We have therefore trained an AI model using data from 561 subjects in total: 231 SMC, 259 CN, and 71 MCI patients. The feature selection algorithms were applied to the graph features (degree centrality for each ROI) to select the most discriminating features for the classification of MCI, SMC, and CN subjects. An initial AI model is generated to be deployed on the nodes, and we plan to undertake a comprehensive experiment designed to validate Federated Learning with new patients' data acquired via the DApp. This will involve testing the model across different demographics and disease stages to statistically ascertain the impact of data diversity on prediction accuracy. - Difference: While other papers may apply federated learning to healthcare, our paper specifically addresses the challenge of early and accurate disease detection by leveraging a uniquely collected and continuously updated dataset across different demographics and disease stages to statistically ascertain the impact of data diversity on prediction accuracy in a privacy-preserving manner. - Unique Application to Alzheimer’s Disease Prediction and Monitoring: - Innovation: A significant innovation of our system is its application to predict the conversion from prodromal Alzheimer's Disease (AD) to AD and to monitor disease progression. 
This is facilitated by the use of vast datasets of longitudinal data, which are gathered via a decentralized application (DApp). This approach is particularly novel because it harnesses blockchain technology not only for data integrity and security but also as a mechanism to incentivize the contribution of longitudinal data by patients and/or healthcare providers. - Difference: The ability to gather and utilize such extensive longitudinal data sets our system apart from traditional medical data collection methods, which often struggle with data silos and privacy concerns that limit the availability of longitudinal data. Our approach enables more dynamic and robust models that can accurately track and predict the progression of AD over time, offering significant potential for early intervention and personalized treatment strategies.
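The hierarchical aggregation described above (node → cluster → global) can be sketched as a two-level FedAvg on parameter vectors. This is a minimal numpy illustration under assumed synthetic updates — not the authors' actual training code; a real deployment would exchange model weights over the network and train locally between rounds:

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted average of model parameter vectors (classic FedAvg step)."""
    w = np.asarray(weights, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=w)

rng = np.random.default_rng(0)
true_model = rng.normal(size=4)

# Two clusters of nodes; each node holds a noisy local update and a sample count.
clusters = [
    [(true_model + 0.1 * rng.normal(size=4), n) for n in (30, 50, 20)],
    [(true_model + 0.1 * rng.normal(size=4), n) for n in (80, 40)],
]

# Level 1: aggregate within each cluster (only cluster-local sharing).
cluster_models, cluster_sizes = [], []
for nodes in clusters:
    ups, ns = zip(*nodes)
    cluster_models.append(fedavg(list(ups), ns))
    cluster_sizes.append(sum(ns))

# Level 2: each cluster sends a single aggregated model to the global step.
global_model = fedavg(cluster_models, cluster_sizes)
print(np.round(global_model - true_model, 2))  # small residual noise
```

The communication saving claimed in the rebuttal shows up here structurally: the global aggregator sees one vector per cluster rather than one per node.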
NeurIPS_2024_submissions_huggingface
2024
Preference-based Pure Exploration
Accept (poster)
Summary: This work focuses on the setting of multi-armed bandits with vectorized rewards and studies the identification of the Pareto Optimal arms with a fixed confidence. Compared with existing works, this work considers a more general preference definition (i.e., induced by a preference cone) and targets finding the entire set of Pareto optimal arms. A lower bound is derived, providing both statistical and geometrical implications of the hardness of the problem. Based on the lower bound, a track-and-stop style algorithm is proposed with its efficiency proved by a corresponding upper bound. Strengths: - The studied problem itself is novel and well-motivated due to the commonly occurring scenarios with multiple objectives. According to the summarized literature, there have been no efforts in the exact problem setup studied here. - The overall flow is clear, and the techniques adopted are reasonable and intuitive, extending the studies from single-objective BAI: starting with the lower bound, and based on it, leveraging track-and-stop style algorithms. - Despite certain confusion about the notations/definitions (see weakness), I believe the overall results should be sound. Weaknesses: - First, I believe the overall paper needs careful proofreading and some revisions to ensure the readers’ correct understanding. For example, (1) Definition 3: $\mu$ and $\mu’$ are not labeled clearly; (2) the maximizations in Equation 1 and subsequent sections are confusing per my understanding (please let me know if I missed the definition somewhere); (3) Eqn. 1, line 231, 233: sometimes there is transpose over $M$ but sometimes not; (4) Eqn. 6: $M$ is a vector on the right-hand side? Also, the minimization is noted over $\hat{M}_t$ while the expression seems over $M$. - Second, while the adopted techniques are sound based on my intuition, it seems they largely follow the original track-and-stop paper. 
The authors have made efforts to highlight the challenges and differences (e.g., the discussions beneath the lower bound, and the distances introduced in Sec. 5 to accommodate Pareto fronts). However, the overall idea is still the classic track-and-stop one. Technical Quality: 2 Clarity: 2 Questions for Authors: - I would greatly appreciate some clarification over my confusion on notations listed in Weakness. - Also, the algorithm design begins with a convexification of the lower bound, which I have several questions about. (1) Is this design purely to benefit computation? In other words, if letting computation aside, can we directly track the lower bound and obtain an asymptotical upper bound; (2) as the final performance approaches the convexified characteristic time, I am wondering whether it is possible to compare it with the true lower bound so we can understand how much loss is introduced by the convexification (i.e., a tradeoff between computation and performance). Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: I understand that this work is of a theoretical nature so there are no major societal impacts to be discussed. However, I do not find justifications for related questions in the checklist (e.g., 1. Claims, 2. Limitations, 3. Theory Assumptions and Proofs). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time spent reviewing and for pointing out several notational issues to improve the manuscript. We have done a proofreading of the paper and fixed the typos and issues. We address the other concerns here. 1. **Novelty with respect to Track and Stop:** As pointed out by reviewer E3uE: ``While the overall structure closely follows the track-and-stop literature, there seems to be a lot of technical novelty in its extension to the pareto front setting. The characteristic time has a much more complicated geometric structure, which is reflected in the lower bound. New concentration inequalities need to be derived for the pareto frontier estimation (along with a novel distance)." We would like to build upon this observation and highlight the salient features of our contribution and the relation to existing work. 2. **Purpose of convexification:** Convexification serves several purposes, not limited to providing computational benefits. Without optimising over the convex set, we cannot guarantee that the algorithm is asymptotically optimal. In particular, claim (4) of Theorem 4 will not hold. Therefore, tracking the lower bound as per the Track-and-Stop strategy will not lead to asymptotic optimality. There are other key points which will not hold if we do not optimize over the convex hull. For example, we cannot directly use Donsker-Varadhan in the proof of Lemma 6, but instead have to prove compactness and other properties of the original non-convex set. 3. **Convexification of the Characteristic Time:** In the setting with Gaussian bandits, as we show in Theorem 2 of the paper, nothing is lost, since we are able to get the exact form of the hardest instance analytically. In the non-Gaussian setting, we need to conduct a separate study, both computational and theoretical, in order to study the gap between solutions of the original optimization problem and the convex relaxation. 
It will non-trivially depend on the interaction of the preference cone and the space of confusing instances. This would be an interesting avenue for future work. Further, please see lines 316-324 in the updated paper. 4. **Explaining the maximisation problem in Equation (1):** For better understanding, we explain the maximisation problem of Equation (1) as "Alternatively, this vector optimization problem can be represented in the policy space as finding a policy $\pi \in \Delta_K$ supported on the Pareto optimal set of arms. This is given by..." (line 189-191 in revised manuscript). Let us know if that clarifies your concerns. We hope that our rebuttal addresses your concerns and questions. We are eager to discuss if you have further queries. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. 1. The authors’ response is quite vague and the additional techniques beyond the original Track and Stop remain unclear. I would encourage the authors to better highlight the novelties, especially compared with TaS. 2. I think the purpose of convexification needs further clarification. If it is for computational purposes, it should be positioned that way and have discussions on the results, assuming an ideal computational oracle. Suppose it is for statistical purposes (i.e., even with a computational oracle, the proposed algorithm still cannot work). In that case, the illustrations should focus on this perspective and mention the computational benefits as an additional one. From the paper, the main discussions are on the computational purpose (line 298, line 322, etc). However, from the response, it seems there are also statistical purposes in terms of the proofs. This confusion should be better clarified in the paper. 3. Given the loss from the convexification, I believe it is not rigorous/fair to call the algorithm “asymptotically optimal” (e.g., line 93, line 391) or “matching sample complexity upper bound” (line 13). 
These statements may only be made for Gaussian bandits, if I understand correctly. 4. Thank you for the clarifications. Given these factors, I decided to keep my score. --- Rebuttal 2: Title: Response to reviewer comments Comment: (1) **Novelty with respect to TaS:** We believe that there is a confusion regarding novelty with respect to TaS, and we would really like to clarify it. Track-and-Stop provides a generic wrapper for designing lower bound-based pure exploration algorithms. But for the preference-based pure exploration problem the lower bound is significantly different from that of BAI. Thus, we have to derive a new lower bound (and a ``good" relaxation) and propose multiple components to design PreTS, which were absent in the TaS paper and the subsequent literature. We enlist these components here: 1.1) **Stopping rule:** The standard Chernoff-based stopping rule (Eq. (6) of Kaufmann et al., JMLR, 2016) does not work for the PrePEx problem, as here the term inside the KL needs to consider the preference (Equation (8), line 338). Thus, we proposed a new preference-aware stopping rule. This requires deriving a novel concentration inequality (Theorem 5). These design contributions and their theoretical validity (which is one of the main contributions of the paper) are highlighted from lines 331-351. This is a contribution to both the design as well as the analysis over and above the existing TaS algorithms. 1.2) **Use of convexified set for tracking:** Creating the convex hull of alternating instances, and further using that for tracking and computing allocations, is novel. Notably, it has not been needed in existing BAI techniques. We dedicated Section 4.1. for this and now also add a commentary on the need and impact of convexification following your previous feedback. 
1.3) **Analysis of PreTS:** The analysis of PreTS (Section 5) involves several new theoretical techniques and results: (a) defining the distance metric between pareto fronts instead of the Hausdorff metric (Definition 8), (b) showing concentration of the good event with respect to this metric (Lemma 4), and (c) a preference-dependent concentration of the difference of means (Lemma 2). We extend the discussion in lines 91-94 to highlight these components. We hope that this exposition of the non-trivial changes in algorithm design and analysis clarifies your concern regarding the novelty of PreTS with respect to TaS. Based on the previous feedback, we have further added an explicit remark in the main paper highlighting these novelties with respect to TaS. 2) **A case for convexification:** The advantages of convexification are both statistical and computational in nature. A thorough discussion of the computational benefits/tradeoffs with existing approaches was avoided since this was not the main objective of the paper, and is more numerical and optimization-based in nature. Since this paper is mostly theoretical and statistical in nature, and we had to explain several novel design and analysis ideas, we feel that computational discussions are best left for a separate work. Now, we have added a discussion at the end of Section 4.1. to clarify the need and impact of convexification. We quote it here verbatim: ``Discussion: Cost and Need of Convexification. For Gaussian bandits, as we can get the analytical form of the most confusing instance $\tilde{M}$ (Theorem 2), we do not pay any extra cost of convexification. In the non-Gaussian settings, where we cannot find such analytical forms for the most confusing instances, the minimum value of the inner minimisation problem under the convex hull (Equation (5)) can go lower than the minimum value found in the original non-convex set of instances (Equation (3)). 
Thus, the characteristic time attained by solving the convex relaxation might be either higher than or equal to that of the original lower bound (Equation (3)). Hence, an algorithm solving the convex relaxation might have a higher stopping time. But convexification is essential for computational feasibility of a lower bound-tracking algorithm for PrePEx, and also to prove Theorem 4, which is essential to statistically design a tracking-type algorithm. In the future, it will be interesting to study this computational-statistical trade-off.’’ 3) Thanks, we agree with this observation. We remove the qualifier "asymptotically optimal" for PreTS from the paper. We edit the phrase "matching sample complexity upper bound" (line 13) to ``a convex relaxation of the lower bound as the sample complexity upper bound". We also edit the other three mentions (line 93, 381, 391) similarly, and mention that PreTS tracks a convex relaxation of the lower bound and achieves the corresponding sample complexity for general reward functions. Let us know if this response clarifies your concerns. We would be happy to elaborate further if it does not. If we have succeeded in clarifying them, we would ask you to kindly reconsider your score. Thanks again for reviewing our work. --- Rebuttal Comment 2.1: Comment: We thank you again for your review and the following interactions. We are going to include the relevant discussions in the final version of the paper. We hope that our responses have addressed all your concerns within the scope of the current paper, and we would really appreciate it if you can adjust your score accordingly.
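For context, the standard Chernoff/GLR stopping rule from single-objective BAI — the one this rebuttal argues does not carry over to PrePEx — reads, for unit-variance Gaussian arms (a Garivier-Kaufmann-style formulation; the paper's preference-aware rule replaces both the alternative set and the statistic):

```latex
Z(t) \;=\; \inf_{\lambda \in \operatorname{Alt}(\hat{\mu}(t))}
      \sum_{a=1}^{K} N_a(t)\,
      \frac{\bigl(\hat{\mu}_a(t) - \lambda_a\bigr)^2}{2},
\qquad
\tau_\delta \;=\; \inf\bigl\{\, t : Z(t) > \beta(t,\delta) \,\bigr\},
```

where $N_a(t)$ is the number of pulls of arm $a$, $\operatorname{Alt}(\cdot)$ is the set of instances with a different answer, and $\beta(t,\delta)$ is a calibrated threshold (roughly of order $\log(\log(t)/\delta)$). In PrePEx, the answer is a Pareto set under a cone order rather than a single best arm, which changes $\operatorname{Alt}(\cdot)$ and motivates the new preference-aware statistic.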
Summary: This paper generalizes the track-and-stop style of best arm identification analysis and algorithm design to the setting of vector-valued bandit problems where the Pareto frontier must be found. Novel upper and lower bounds are proposed as well as a convex relaxation of the lower bound that produces an implementable algorithm. Strengths: While the overall structure closely follows the track-and-stop literature, there seems to be a lot of technical novelty in its extension to the pareto front setting. The characteristic time has a much more complicated geometric structure, which is reflected in the lower bound. New concentration inequalities need to be derived for the pareto frontier estimation (along with a novel distance). The explanations are clear and presented in a coherent order. The comparisons with previous track-and-stop papers provide a convenient frame to understand the novelties of this paper. Weaknesses: I wish the authors would describe the technical novelties e.g. in the proof techniques in more detail in the main body. Some interesting examples illustrating the geometric insights would greatly aid the reader’s understanding. Finally, the paper needs a thorough proofread - there are many broken references, misplaced or missing periods (especially at the end of equations), etc. Technical Quality: 4 Clarity: 2 Questions for Authors: What is lost with the convex relaxation? What happens in problem instances where the optimal action is not in the relaxed region? Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: N/A for societal impact, but I found the discussion on limitations mostly absent. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the strengths and novelty of the contributions, along with pertinent questions regarding the computational approach. We respond to them here. 1. **Cost of Convex Relaxation:** In the setting with Gaussian bandits, as we show in Theorem 2 in the paper, nothing is lost, since we are able to obtain the exact form of the hardest instance analytically. In the non-Gaussian setting, where we cannot find such an analytical form for the confusing instance, we observe that the minimum value of the inner minimisation problem over the convex hull can be lower than the minimum value found in the original non-convex set of instances. Thus, the characteristic time attained by the algorithm using the convex relaxation can be higher than that of the original lower bound. Hence, an algorithm solving the convex relaxation might have a higher stopping time. Further, please see lines 316-324 in the updated paper. 2. **What happens in problem instances where the optimal action is not in the relaxed region?** Since our approach takes the convex hull of the feasible set, which is a super-set of the true set of alternating instances, the optimal action set always lies in this set. 3. **Technical Novelties and Insights:** The technical novelties of this work can be categorised into three parts: (i) lower bound: taking the policy space perspective, (ii) algorithm design: convexification of the lower bound and a new stopping rule, and (iii) proving asymptotic optimality of PreTS, which requires coming up with a new distance metric and establishing concentration inequalities under these metrics. We hope that our response addresses your questions. We would be glad to respond if you have any further queries. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I remain supportive of acceptance.
Summary: The paper considers a generalization of fixed confidence best arm identification. Specifically, in this setting we have a collection of arms, each having a mean vector associated with it. Further, we have an ordering that establishes preferences over the vectors. The objective is to identify the Pareto front according to the preference model. The paper establishes lower bounds, designs an algorithm based on convex relaxation, and establishes upper bounds on its sample complexity. Strengths: -The proposed problem is interesting and relevant. -I did not verify the correctness of the theorems, but the techniques seem involved, with lower bounds being established. -One can see possible extensions to this setting where we learn a Pareto set. Weaknesses: ### 1-The major weakness of the paper is the presentation: (A) There are many broken references. (B) Definition 3 for partial order: twice the definition says $\mu \leq_{C} \mu$; I think the prime is missing after $\mu$. (C) On page 4, at times the paper uses $C$ and at others $\mathcal{C}$, but aren’t they both referring to the cone? (D) "$c_1=$" in line 334 is not given. Further, I think Lemma 1 should say "There exists a constant $c$" instead of $c_1$. (E) No space between words in lines 112-116: “settingby”, “)and” ### 2-The paper is a bit confusing, since not the whole Pareto front needs to be identified, only a subset of it, correct? At the end of Section 2, what does “active support” mean? How is it different from just saying support? ### 3-What is $\mathcal{\bar{T}}_{\mathcal{F}}$ in Theorem 6? Does this imply that the algorithm achieves almost optimal performance? Technical Quality: 3 Clarity: 1 Questions for Authors: See the points under Weaknesses. Confidence: 2 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: I think the limitations were addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for pointing out several avenues for improving our work. We address the concerns below. **Editorial edits:** We have revised our manuscript to rectify all the errors and typos mentioned by the reviewer. **Identification of the Pareto Front:** This paper is about identifying the entire Pareto front. This resonates with the line of literature paved by ``Sequential Learning of the Pareto Front for Multi-objective Bandits", Crépon, Garivier, Koolen, AISTATS 2024, and ``Pareto Front Identification from Stochastic Bandit Feedback", Auer et al., AISTATS 2016. They consider similar problems but without generic preferences and linear bandit feedback. A similar but not identical problem was considered in "Adaptive algorithms for relaxed pareto set identification", Kone et al., NeurIPS 2023, wherein an approximation of the Pareto set has to be identified. Our problem is a non-trivial generalization of those papers in the sense that we consider generic preferences (described through a preference cone) and linear bandit feedback. **Terminology:** Thank you for pointing this out. The word "support" suffices for our purposes. **Asymptotic Optimality and Theorem 6:** $\mathcal{T}_{F}$ is the characteristic time associated with the PrePEx problem. You are indeed correct; the claim of Theorem 6 is that the proposed algorithm is asymptotically optimal. We hope that our response addresses the reviewer's questions. Let us know if you have any other queries. --- Rebuttal Comment 1.1: Comment: Thanks again for reviewing our paper. As the review period is coming to an end, please let us know if our response has been able to address your concerns. If yes, we would be grateful if you would reconsider your assessment.
Summary: This paper studies the preference-based pure exploration problem for bandits with vector-valued rewards and a set of preferences imposed over them. The objective is to identify the most preferred policy over a set of arms according to the preferences induced on the reward vectors by an ordering cone C. The technical contributions are threefold. First, a lower bound on the sample complexity for identifying the most preferred arm with confidence level 1 – delta is proved. The lower bound shows how the geometry of the preferences and reward vectors changes the hardness of this problem. This geometry for Gaussian distributions of rewards is further explicated. Second, a convex reformulation of the lower bound solvable with linear programming is provided. This convex reformulation of the lower bound is utilized to design the Track and Stop with Preferences (TSwP) algorithm that identifies the most preferred policy. Third, a new concentration result for vector-valued rewards is proved. It is shown that TSwP achieves a matching sample complexity upper bound. Strengths: This paper establishes a lower bound on the sample complexity for identifying the most preferred arm with confidence level 1 – delta. This lower bound reveals the hardness of PrePEx problems. This lower bound also shows how the geometry of the preferences and reward vectors changes the hardness of this problem. It is shown that the optimization problem in the lower bound involves minimization over a provably non-convex set. A convex reformulation of the problem based on ideas from disjunctive programming is provided. This convex reformulation of the lower bound is utilized to design the Track and Stop with Preferences (TSwP) algorithm that identifies the most preferred policy. A new concentration result for vector-valued rewards is proved. It is shown that TSwP achieves a matching sample complexity upper bound. 
Weaknesses: I only identified some minor writing issues, such as some citation numbers being missing, e.g., line 21. Technical Quality: 3 Clarity: 3 Questions for Authors: No. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for appreciating the strength of our contributions. We have rectified the errors, typos, and notational issues in our revised manuscript. --- Rebuttal Comment 1.1: Comment: Many thanks for the response. I would like to maintain my score.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for providing several valuable comments to improve the presentation and writing of our manuscript. We have incorporated those comments and are uploading a revised version here. Pdf: /pdf/cb8363b0cdb90fdd4c2d421cece4027034c61146.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training
Accept (poster)
Summary: The paper introduces the Multi-Agent Sparse Training (MAST) framework to address computational overhead in Multi-agent Reinforcement Learning (MARL) by enhancing value learning through the Soft Mellowmax Operator with a hybrid TD-(λ) schema and a dual replay buffer mechanism. MAST achieves significant reductions in computational redundancy with minimal performance degradation. Strengths: - The problem of applying sparse training to MARL is an interesting topic. - The paper is well-written and easy to understand. - Comprehensive evaluation and ablation results provide a good understanding of the method and design decisions. Weaknesses: - This work focuses only on one benchmark (StarCraft II); applying it to other benchmarks can give us a better idea of the generalizability of the approach. - How does the proposed technique compare to single-agent RL sparse training work such as "Sokar et al., 2022"? Are there existing methods that are already able to achieve high performance with the techniques proposed in this work? The authors also mentioned that "Wang et al., 2019" also prunes agent networks throughout training, so it would be good to compare also to this work to see the computation reduction and performance comparison. Technical Quality: 3 Clarity: 4 Questions for Authors: - Is Figure 7 for illustration purposes (i.e. not using real data)? If so, I would suggest using real data to illustrate the idea and better show the significance of the issue. - How does this technique generalize to test environments other than StarCraft II? - How does the proposed approach compare to single-agent sparse training other than the RLx2 method? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: More analysis of the method's limitations would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and effort in reviewing our paper! Please find our responses to your comments below. We will be happy to answer any further questions you may have. ### Weaknesses > **W1**: This work focuses only on one benchmark (StarCraft II), applying it to other benchmarks can give us a better idea of the generalizability of the approach. We validate the generalizability of MAST in two main ways: - **Other Benchmarks:** We conducted a comprehensive performance evaluation of MAST across various tasks in the SMAC benchmark. Additional experiments on the multi-agent MuJoCo (MAMuJoCo) benchmark (Peng et al., 2021) are also included in Appendix B.9. - **Other Algorithms:** MAST is designed as a versatile sparse training framework for value decomposition-based MARL algorithms. We integrate MAST with state-of-the-art value-based deep MARL algorithms, including QMIX, WQMI, and RES. Additionally, we apply MAST to a hybrid value-based and policy-based algorithm, FACMAC (Peng et al., 2021). Results are presented in Section 4 and Appendix B. This comprehensive evaluation demonstrates the effectiveness and generalizability of MAST across different benchmarks and algorithms. > **W2**: How does the proposed technique compare to single-agent RL sparse training work such as "Sokar et al., 2022". Are there existing methods that are already able to achieve high performance with the techniques proposed in this work? The authors also mentioned that "Wang et al., 2019" also prunes agent networks throughout training, so it would be good to compare also to this work to see the computation reduction and performance comparison. - The single-agent RL sparse training work in (Sokar et al., 2022) uses SET (Mocanu et al., 2018) for topology evolution, but does not improve value learning under sparse models, resulting in low-sparsity RL models. In our experiments, the SET baseline can be viewed as the MARL version of that in (Sokar et al., 2022). 
As shown in Table 1, applying SET alone is insufficient to achieve high sparsity levels in MARL scenarios. We will annotate SET in Table 1 as (Sokar et al., 2022) in our revision. - The algorithm in (Wang et al., 2019) fails to maintain sparsity throughout training and only achieves a final model sparsity of 80%, which is lower than our results. Additionally, their experiments are limited to a two-agent environment, PredatorPrey-v2 in MuJoCo (Todorov et al., 2012). Therefore, we did not include a direct comparison with this work in our paper. ### Questions > **Q1**: Is Figure 7 for illustration purposes (i.e. not using real data)? If so, I would suggest using real data to illustrate the idea and better show the significance of the issue. Thank you for your suggestion. Figure 7 was originally intended for illustration purposes. We will update Figure 7 with real data in our revision to better demonstrate the significance of the issue. Additionally, the effectiveness of the dual buffer in improving training data distribution is validated in the ablation study in Appendix B.7. > **Q2**: How does this technique generalize to test environments other than StarCraft II? We have conducted a comprehensive performance evaluation of MAST across various tasks in the SMAC benchmark. Additionally, we performed experiments on the multi-agent MuJoCo (MAMuJoCo) benchmark (Peng et al., 2021). Please refer to Appendix B.9 for details. > **Q3**: How does the proposed approach compare to single-agent sparse training other than the RLx2 method? We have compared our algorithm with several single-agent sparse training methods, including SET, RigL, and RLx2. Specifically, SET and RigL were originally developed for deep supervised learning and later adapted for single-agent sparse training in DRL, as shown in (Sokar et al., 2022) and (Graesser et al., 2022), respectively. Our results demonstrate that MAST significantly outperforms these baselines in the MARL setting. 
For detailed comparisons, please refer to Table 1 in Section 4.1. ### Limitations > **L1**: More analysis of the method's limitations would be helpful. Our paper does address the limitations of the MAST framework, including the challenge of managing multiple hyperparameters. This discussion is provided in Appendix A.6. --- We are grateful for your constructive suggestions, which have significantly guided our improvements. We hope our response addresses your concerns. If so, we would like to know if you could kindly consider raising your score rating. We will also be happy to answer any further questions you may have. Thank you very much! --- Rebuttal Comment 1.1: Title: Reminder to Reviewer 8jXp Comment: Dear Reviewer, Thank you for your time and effort in reviewing our paper. We hope our response has adequately addressed your concerns. If you feel that our rebuttal has clarified the issues raised, we kindly ask you to consider adjusting your score accordingly. Should you have any further questions or need additional clarification, we would be more than happy to discuss them with you. Thank you once again for your valuable feedback. --- Rebuttal 2: Title: Reminder to Reviewer 8jXp Comment: Dear Reviewer, Thank you for your time and effort in reviewing our paper. We hope our response has adequately addressed your concerns. If you feel that our rebuttal has clarified the issues raised, we kindly ask you to consider adjusting your score accordingly. Should you have any further questions or need additional clarification, we would be more than happy to discuss them with you. Thank you once again for your valuable feedback.
Summary: The paper presents a significant advancement in the field of MARL by introducing the MAST framework, which aims at improving the reliability of training targets and the rationality of the sample distribution. Overall, this paper is well-written and easy to follow, on a very interesting research direction, with promising results. Strengths: This paper did thorough research on finding the reasons, with theoretical contributions, and possible solutions for the poor performance of previous sparsification methods, and introduced 2 novel designs to solve them. The paper provides solid theoretical underpinnings for the proposed methods. The experimental results look very promising. Weaknesses: An ablation study would be good to tell to what extent the 2 designs contribute to the performance improvement. The overhead brought by the new design was not discussed. Technical Quality: 3 Clarity: 3 Questions for Authors: What kind of sparsification techniques are used in MAST? Also, are mixing networks sparsified? The agent network in QMIX is relatively very small; how is it possible to reach 95% sparsification while maintaining relatively high or even better results compared to the dense networks? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors did not address the limitations, but I don't see a direct or potential negative societal impact from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper! Please find our responses to your comments below. We will be happy to answer any further questions you may have. --- ### Weaknesses > **W1**: An ablation study would be good to tell to how much extent the 2 designs are contributing to the performance improvement. We have provided an ablation study for our proposed techniques, including the Hybrid TD$(\lambda)$ mechanism, Soft Mellowmax Operator, and Dual Buffers, in Appendix B.7. Our findings indicate that all three components contribute significantly to the overall performance improvement. > **W2**: The overhead brought by the new design was not discussed. - MAST incorporates novel TD targets with a dual buffer mechanism to train ultra-sparse MARL models. The designed TD targets can be efficiently computed by our algorithm, and the dual replay buffer mechanism does not introduce additional computational overhead compared to a same-size single buffer. Detailed FLOP calculations are provided in Appendix B.4.2. - Additionally, MAST uses gradient-based topology evolution to train sparse MARL agents exclusively. The sparse network topology evolves every 10,000 gradient update steps, making the overhead from this evolution negligible compared to the gradient updates. Following the reviewer's advice, we will include this discussion in our revision. ### Questions > **Q1**: What kind of sparsification techniques are used in MAST? Also are mixing networks sparsified? The agent network in qmix is relatively very small, how is it possible to reach a 95% sparsification while maintaining a relatively high or even better results compared to the dense networks? - MAST employs the RigL method (Evci et al., 2020), which enhances the optimization of sparse neural networks by leveraging weight magnitude and gradient information to jointly optimize model parameters and connectivity. 
The mixing network is also sparsified to a specified degree, as illustrated in Fig. 3. - Sparse networks often have fewer parameters, which can make them easier to train under certain conditions. This can lead to improved performance compared to dense networks, as demonstrated in our experiments. Similar observations have also been reported in the literature (Evci et al., 2020; Tan et al., 2023). ### Limitations > **L1**: The authors did not address the limitations. Our paper does address the limitations of the MAST framework, including the challenge of managing multiple hyperparameters. This discussion can be found in Appendix A.6. --- We are grateful for your constructive suggestions, which have significantly guided our improvements. We hope our response addresses your concerns. If so, we would like to know if you could kindly consider raising your score rating. We will also be happy to answer any further questions you may have. Thank you very much! --- Rebuttal Comment 1.1: Title: Reminder to Reviewer zAbe Comment: Dear Reviewer, Thank you for your time and effort in reviewing our paper. We hope our response has adequately addressed your concerns. If you feel that our rebuttal has clarified the issues raised, we kindly ask you to consider adjusting your score accordingly. Should you have any further questions or need additional clarification, we would be more than happy to discuss them with you. Thank you once again for your valuable feedback. --- Rebuttal 2: Title: Thanks for Reviewer zAbe Comment: Thank you very much for your response. We really appreciate your time and effort in reviewing our paper.
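For readers unfamiliar with RigL, the magnitude-prune / gradient-grow update referenced in the rebuttal above can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions, not the MAST or RigL code; the function name and drop fraction are illustrative. The idea is that, every so many gradient steps, the smallest-magnitude active weights are pruned and the same number of connections are regrown where the dense-gradient magnitude is largest, keeping overall sparsity constant:

```python
import numpy as np

def rigl_step(weights, mask, grads, drop_frac=0.3):
    """One RigL-style prune/grow update on a single weight matrix.

    Prunes the smallest-magnitude active weights and regrows the same
    number of connections at the inactive positions with the largest
    (dense) gradient magnitude, so total sparsity is unchanged.
    """
    active = np.flatnonzero(mask)          # indices of current connections
    inactive = np.flatnonzero(mask == 0)   # candidate positions for growth
    n_swap = int(drop_frac * active.size)
    if n_swap == 0:
        return weights, mask
    w_flat, g_flat = weights.ravel(), grads.ravel()
    # Drop: active connections with the smallest |weight|.
    drop_idx = active[np.argsort(np.abs(w_flat[active]))[:n_swap]]
    # Grow: inactive positions with the largest |gradient|.
    grow_idx = inactive[np.argsort(-np.abs(g_flat[inactive]))[:n_swap]]
    new_mask = mask.ravel().copy()
    new_mask[drop_idx] = 0
    new_mask[grow_idx] = 1
    new_w = w_flat.copy()
    new_w[drop_idx] = 0.0
    new_w[grow_idx] = 0.0  # regrown weights start at zero, as in RigL
    return new_w.reshape(weights.shape), new_mask.reshape(mask.shape)
```

Because the dense gradient is only needed at these infrequent topology updates, the overhead relative to the regular sparse gradient steps stays small, which matches the rebuttal's claim that evolving the topology every 10,000 updates is negligible.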
Summary: This paper introduces dynamic sparse training (DST) to the Deep Multi-Agent Reinforcement Learning (MARL) setting for the first time in the literature. Furthermore, it shows that directly applying DST algorithms to MARL does not lead to optimal results. Consequently, it proposes a new framework named Multi-Agent Sparse Training (MAST) which enhances the DST RigL algorithm with a hybrid TD-(λ) schema and a dual replay buffer mechanism in order to successfully cope with the challenging MARL settings. An extensive empirical validation is performed, showing that MAST can reduce the computational requirements by up to 20x at virtually no loss in performance. Strengths: * This is an original paper which introduces DST to MARL for the first time. * The paper solves the inherent problems and the suboptimal behavior of directly applying DST to MARL by proposing a new framework, MAST, which is specially designed for MARL. * The paper is clear and well written. The source code is provided for easy reproducibility. * The extensive empirical validation shows the superiority of the proposed framework in comparison with the most common-sense baselines, as there is no other DST method specially designed for MARL. * The paper is likely to have a fair impact on the sparse training and multi-agent reinforcement learning communities. Weaknesses: * To the best of my understanding, there seem to be no striking weak points. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1) While the theoretical reduction in terms of computational resources is impressive, can you comment on the real wall-clock running time? I know that this is not possible when simulating sparsity with binary masks, but have you considered using some truly sparse implementation of the neural networks? 
While I am not sure how easy it would be to do this for the GRU layer, there exist some sparse MLP implementations for supervised learning (e.g., Curci et al., Truly Sparse Neural Networks at Scale, arXiv:2102.01732, 2021) which may be easy to adapt to the MAST framework. This may allow you to design an experiment where you seriously scale up the neural network (in terms of the number of neurons) for very large state or action spaces, which, as you mentioned, is a typical challenge in MARL. Q2) (minor) I suggest adopting a uniform citation style in order to improve the chronological readability of related work. Currently, some of the references are cited using the year of the first preprint release on arXiv, while others are cited using the official publication year. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and effort in reviewing our paper! Please find our responses to your comments below. We will be happy to answer any further questions you may have. --- ### Questions > **Q1**: While the theoretical reduction in terms of computational resources is impressive, can you comment on the real wall-clock running time? I know that this is not possible when simulating sparsity with binary masks, but have you considered using some truly sparse implementation of the neural networks? While I am not sure how easy would be to do this for the GRU layer, there exists some sparse MLP implementations for supervised learning (e.g., Curci et al., Truly Sparse Neural Networks at Scale, arXiv:2102.01732, 2021) which may be easy to adapt to the MAST framework. This may allow you to design an experiment where you can scale up (in terms of the number of neurons) seriously the neural network for very large state or action spaces which as you mentioned is a typical challenge in MARL. Our primary objective is to exploit the computational redundancy in training MARL agents. Thus, we design the MAST framework to achieve this goal and aid various MARL algorithms specifically in sparse training scenarios. Existing works also focus on algorithmic FLOP reduction, such as (Sokar et al., 2022; Graesser et al., 2022; Tan et al., 2023). Our work, alongside these contributions, contributes to paving the way for more efficient and scalable multi-agent systems. As sparse methods evolve in tandem with hardware co-design, the anticipated translation of FLOP reduction into wall-clock speedups becomes increasingly viable. Recent developments, such as specialized software kernels and dedicated hardware solutions, e.g., DeepSparse (NeuralMagic, 2021) and Cerebras CS-2 (Lie et al., 2022), as well as (Curci et al., 2021), signify promising strides toward realizing the benefits of unstructured sparsity during both training and inference stages. 
We believe this will be a very interesting future direction. ``` (NeuralMagic et al., 2021) NeuralMagic. Deepsparse, 2021. URL https://github.com/neuralmagic/deepsparse. (Lie et al., 2022) Lie, S. Harnessing the Power of Sparsity for Large GPT AI Models. https://www.cerebras.net/blog/harnessing-the-power-of-sparsity-forlarge-gpt-ai-models, 2022. ``` > **Q2**: (minor) I suggest adopting a uniform citation style in order to improve related work chronological readability. Currently, some of the references are cited using the year of the first preprint release on arXiv, while others are cited using the official publication year. Thank you for your suggestion. We will adopt a uniform citation style in our revision to improve the chronological readability of related works. --- We are grateful for your constructive suggestions, which have significantly guided our improvements. We hope our response addresses your concerns. If so, we would like to know if you could kindly consider raising your score rating. We will also be happy to answer any further questions you may have. Thank you very much! --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgement Comment: Dear authors, Thank you for your answers. I will keep my original rating (accept). Best wishes, --- Rebuttal 2: Title: Thanks for Reviewer fbhH Comment: Thank you very much for your response. We really appreciate your time and effort in reviewing our paper.
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Boosting Generalization in Parametric PDE Neural Solvers through Adaptive Conditioning
Accept (poster)
Summary: This paper proposes to solve parametric PDEs with the introduction of context parameters. The low-rank design allows for rapid adaptation to unseen conditions, and the experiments show that the method performs comparably to or slightly better than baselines. Strengths: 1. The utilization of context parameters facilitates efficient adaptation to novel environments while minimizing computational overhead. Weaknesses: 1. The main architecture of the proposed model is similar to LoRA, despite the fact that the context parameters are learned jointly during the pre-training stage. The core innovation lies more in the combination of existing techniques rather than introducing a fundamentally new approach. The motivation of the model design is not well established either. 2. The experiment results seem not strong, especially for the Gray-Scott equation. 3. The paper briefly mentions the computational efficiency of the proposed method but lacks a detailed analysis. 4. The sensitivity of the proposed method to various hyperparameters is not extensively discussed, such as the size of the context vector and the rank in the low-rank adaptation. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Why is Neural ODE utilized to generate the full trajectory? Do the experiment settings require continuous-time inference? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We're thankful for the reviewer's feedback and have addressed the raised concerns below. ### The main architecture of the proposed model is similar to LoRA. The paper primarily focuses on the general adaptation framework rather than the LoRA implementation. The key points are: - Classical ERM training is inadequate for modeling the variability in dynamical physical processes (framed as parametric PDEs), as demonstrated in section 3.2. - An alternative inductive principle is necessary; we propose an adaptive conditioning principle: learning from multiple environments to rapidly adapt to a new one (section 3.3). The core issue is developing an effective meta-learning strategy. - Our implementation of this principle is computationally efficient (fast adaptation to new environments) and scalable w.r.t. the number of training samples, via low-rank and first-order optimization. - We propose a versatile framework compatible with any NN backbone and demonstrate that physical models can be learned across different environments despite incomplete models and unknown parameters, a novel achievement. - It improves the state-of-the-art in learning from multiple environments on new proposed datasets. Experiments show our framework works well with MLP, CNN, and FNO backbones. ### The motivation of the model design is not well established as well. The proposed model combines a physics module and an agnostic NN module. The former encodes some prior knowledge available under the form of a PDE that only partly explains the underlying physical phenomenon, while the latter comes as a complement to this partial physical model in order to model the physics not explained by the former. Concerning the order of the combination ($g_a \circ g_p$), this order is natural, since the NN component $g_a$ comes as a complement to the partial physics encoded in $g_p$. Alternative methods exist: [1] proposes $g = g_p + g_a$, while [2] proposes a variational formulation.
We proposed $g = g_a \circ g_p$, which is more flexible than existing methods, as it neither assumes that the component completing the physical model is additive nor requires a probabilistic formulation. Given an initial condition $u_0$, the physical model $g_p$ takes as input $u_0$ and outputs an estimate of the temporal derivative $du/dt$, which is processed by the data-driven term $g_a$. The latter then complements the incomplete estimate computed by the physical component. [1] Yin et al. Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting, ICLR 2021 [2] Takeishi et al. Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling, NeurIPS 2021. ### The experiment results seem not strong, especially for the Gray-Scott equation We have added complementary experiments, including a new state-of-the-art transformer baseline (Transolver) in Figure 1 and Table 2, and a new ablation study, Phys-Adaptor (Tables 3 and 4), which uses a shared NN backbone without adaptation. Transolver's performance is similar to the other backbones, and the ablation highlights the importance of the proposed adaptive framework. Regarding Gray-Scott, we respectfully disagree with your conclusions. Our method outperforms all baselines except APHYNITY in the hybrid setting. Figure 15 qualitatively demonstrates the quality of our predicted trajectories for Gray-Scott. ### The paper briefly mentions the computational efficiency of the proposed method but lacks a detailed analysis. Thanks for the suggestion; we have added to the PDF rebuttal page the training time and parameter gains for the different multi-environment frameworks, showing the superiority of the GEPS framework. ### The sensitivity of the proposed method to various hyper-parameters, e.g. context vector size Again, this might not be sufficiently indicated in the core paper, but for lack of space, we included the ablation analysis in the appendix. 
More precisely, in appendix C.3.1, we studied the impact of the context size on the performance and on the number of parameters required for adaptation. The proposed GEPS reaches higher performance at a lower complexity, in terms of parameters, than the reference baseline CoDA. Finally, in C.2, we studied how the number of adaptation trajectories impacts the adaptation performance.

### Why Neural ODE is utilized to generate the full trajectory? Do the experiment settings require continuous-time inference?

"Neural ODE" here refers to a family of time integration methods, which includes several differentiable PDE solvers. Here we made use of a popular RK4 explicit solver available in the torchdiffeq library. Other alternatives are possible with our framework. For the comparison with the ERM methods, we used for GEPS the same time-stepping method as the one used in the ERM baselines.

We hope we have clarified all your concerns and will make sure everything is made clearer in the final version.

---

Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. My concerns with regard to the efficiency are addressed. However, while I agree with the statement that traditional "ERM" methods may struggle to capture the variability of complex physical systems, especially when only the initial conditions are provided (in which case it would be extremely challenging for the model to make accurate predictions, as different PDE coefficients or other components could lead to divergent future states), the relationship between the motivation and the specific model design is still unclear to me. The results in Appendix C.2 seem to indicate that the original low-rank adaptation method results in stronger performance without introducing much more computational overhead than the method proposed in this paper. Moreover, as suggested by other reviewers, I think that a simple baseline with "shared backbone and domain-specific weights" should be added.
Therefore, I will maintain my current score.

---

Rebuttal 2:
Comment: Thank you for taking the time to read our answers to your concerns and acknowledging our added results. We would be happy to engage in a further discussion if you allow it. The motivations served to point out a pitfall in the inductive principle underlying current PDE solvers for tackling parametric PDEs. We thus proposed a new inductive principle formulation that addresses this challenge. As for the specific framework implementation, we aimed to introduce an effective solution applicable to a wide range of neural PDE solvers, supported by two key technical contributions:
- An effective yet computationally efficient meta-learning formulation, as demonstrated in Table 1 of the rebuttal, which compares favorably with existing frameworks.
- A new, flexible approach for learning hybrid models within a meta-learning framework, which, to the best of our knowledge, has not been shown before. Being backbone-agnostic and able to incorporate physics when available allows it to handle a large number of PDE solvers encountered in the literature.

Regarding the simple baseline "shared backbone and domain-specific weights": as mentioned to reviewer m65H, all our baselines are in fact "shared backbone with domain-specific weights", as shown in Figure 6 of [1] in the appendix. We think multi-task and meta-learning frameworks, while different in their problem formulation, solve the same optimization problem, as shown in [2]. As per your request, we added two simple strategies, where we have a shared backbone but domain-specific weights for the first layer or the last layer.
We report the results for Gray-Scott and Burgers for out-domain results (similar conclusions have been observed in-domain):

| Model | GS | Burgers |
|-------|---------|---------|
| first-layer | 4.52e-2 | 7.34e-2 |
| last-layer | 5.03e-2 | 8.03e-2 |
| GEPS | 1.86e-2 | 5.29e-2 |

Concerning the results in Appendix C.2, we actually think this is one particular strength of our method: we can also tune the low-rank matrices to obtain better accuracy at a reasonable cost if needed. This cost could be significant for large models or for a larger number of adaptation samples, justifying why we first proposed to only tune a very small subset of parameters $c^e$. We hope these results and the already provided experiments answer your concerns. Don't hesitate to reach out if needed.

[1] Kirchmeyer et al., Generalizing to New Physical Systems via Context-Informed Dynamics Model. ICML 2022
[2] Wang et al., Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation. ICML 2021

---

Rebuttal Comment 2.1:
Comment: Dear reviewer, Thank you again for the time taken to review our paper. We would like to kindly remind you that we have addressed your last concerns in our previous response and hope you can take note of it before the discussion deadline. If so, we respectfully ask if you could kindly indicate whether your rating has been updated accordingly. Thank you very much for your time, and we look forward to your response.
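For readers following this thread, the low-rank tuning under discussion (adapting only a small per-environment context vector $c^e$, with the option of also tuning the low-rank matrices) can be sketched as follows. The parameterization $W + A\,\mathrm{diag}(c^e)\,B$, the dimensions, and all names are illustrative assumptions for this note, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 8, 8, 4  # layer sizes and low rank (illustrative)

# Shared parameters, learned once across all training environments.
W = rng.normal(scale=0.1, size=(d_out, d_in))   # shared backbone weight
A = rng.normal(scale=0.1, size=(d_out, rank))   # shared low-rank factor
B = rng.normal(scale=0.1, size=(rank, d_in))    # shared low-rank factor

def env_weight(c_e):
    """Environment-specific weight: shared W plus a low-rank update
    modulated by the small context vector c_e (the only adapted part)."""
    return W + A @ np.diag(c_e) @ B

# A new environment starts from the shared model (zero context)...
c_e = np.zeros(rank)
assert np.allclose(env_weight(c_e), W)

# ...and adapting it only requires learning `rank` scalars, instead of
# the d_out * d_in entries of a full weight matrix.
c_e = rng.normal(size=rank)  # stands in for a few adaptation gradient steps
W_e = env_weight(c_e)
print(W_e.shape)                            # (8, 8)
print("adapted params per env:", c_e.size)  # 4, vs 64 for full fine-tuning
```

This also illustrates the trade-off raised in the thread: tuning $A$ and $B$ as well would add capacity at the cost of many more adapted parameters per environment.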
Summary: This paper focuses on PDE solver generalization and proposes the GEPS model based on a low-rank meta-learning strategy. Specifically, the authors configure the PDE solver as a low-rank framework, where different environments correspond to domain-specific diagonal matrices. During adaptation, only the diagonal matrices are trained. Further, they employ a hybrid framework for incorporating a physics prior as a trainable module at the beginning of the model. The authors provide extensive in- and out-domain experiments and compare the model with diverse baselines.

Strengths:
- This paper focuses on an important problem, which is PDE generalization.
- The proposed method is reasonable and performs well in most cases.
- This paper is well-written and clear.

Weaknesses:
1. The technical contribution is quite limited. Utilizing a shared backbone and domain-specific weights for multi-environment or multi-task learning is a widely used design. There are many adaptor-based models, such as [1]. As for the hybrid framework, I think the authors fail to elaborate on the motivation of the overall design. Why use the prior module at the beginning? What about using it at the end of the model or using it as an adaptor? Also, why does the physics module not use the same low-rank design as the data part?

[1] Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks, ACL 2021

In addition, fine-tuning the diagonal matrix is also related to SVD-PINN [2], which should also be included as a baseline.

[2] SVD-PINNs: Transfer learning of physics-informed neural networks via singular value decomposition, SSCI 2022

2. About experiments
Some simple but important baselines are missing, including the adaptor-based method and some generalizable backbones. Specifically, for the adaptor-based method, the authors can train different adaptors for different environments based on a shared backbone.
As for the generalizable backbones, they can experiment with advanced neural operators, such as [1,2]. I mean training a universal model with these backbones (the PDE coefficients are concatenated with inputs) and directly generalizing them to new PDEs or coefficients. These baselines should also be included in the Section 3.2 experiments.

[1] Transolver: A Fast Transformer Solver for PDEs on General Geometries, ICML 2024
[2] Scalable Transformer for PDE Surrogate Modeling, NeurIPS 2023

As a supplement to the main results in Tables 2 and 3, the training time of different methods should be compared.

3. Comparison with ensemble in Figure 3. I think the ensemble could be a strong baseline, as it performs close to GEPS on Burgers. Maybe the authors should replace the backbone with more advanced Transformer-based methods.

4. About the title "adaptive conditioning". This paper proposed a low-rank training strategy. Thus, I think the term "adaptive conditioning" is unsuitable.

Technical Quality: 3
Clarity: 2

Questions for Authors: About the ERM experiments, will the model know the correct PDE coefficients during training and adaptation? Do these experimented ERM baselines use the same information as GEPS?

Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: They have discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful feedback and recommendations to improve the paper. We have addressed the raised concerns below, including additional experiments as suggested.

### Utilizing shared backbone and domain-specific weights for multi-environment or multi-task learning is a widely used design.

Maybe the paper was not clear enough, but the main focus is on the general adaptation framework, not the specific implementation. The main messages are as follows:
- Classical ERM training is inadequate for modeling the variability of dynamical physical processes (parametric PDEs), as demonstrated in section 3.2.
- An alternative inductive principle is needed. We propose an adaptive conditioning principle: learning from multiple environments to adapt rapidly to a new one (section 3.3).

On the technical side, while sharing a backbone and using domain-specific weights is common, current meta-learning methods are costly and do not allow learning PDE solvers on large datasets. Thus, our implementation of this principle:
- is computationally efficient (fast adaptation to new environments) and scalable w.r.t. the number of training samples via low-rank and first-order optimization.
- is adaptable to any NN backbone. Our experiments show it works well with several backbones (MLP, CNN, FNO).
- can embed physical models learned on different environments, even with incomplete models and unknown parameters, which has never been shown before.

### Motivations of the overall hybrid design

The proposed model integrates a physics module $g_p$ encoding prior knowledge from a PDE that only partially explains the physical phenomenon. The NN module $g_a$ complements $g_p$ by modeling the unexplained physics. Regarding the combination order ($g_a \circ g_p$), it is logical that $g_a$ operates after $g_p$, as it must augment the incomplete knowledge. Alternative methods exist, such as [1] $g = g_p + g_a$ or [2] a variational formulation.
Our approach, $g = g_a \circ g_p$, is more flexible as it doesn't assume an additive or probabilistic formulation. Given an initial condition $u_0$, $g_p$ outputs an estimate of the temporal derivative $du/dt$, which is then refined by $g_a$. For the implementation, the physical component $g_p$ is encoded as a differentiable numerical solver. This allows it to be combined with any NN components and the entire architecture to be trained end-to-end. Since the physics component has few free parameters, a low-rank formulation is unnecessary.

[1] Yin et al. Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting, ICLR 2021
[2] Takeishi et al. Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling, NeurIPS 2021.

### What about using it as an adaptor?

We added a new baseline, "Phys-Adaptor", in the rebuttal (Tables 3 and 4 of the PDF companion), where only $g_p$ acts as an adaptor, so $g_a$ is shared across the environments. The lower performance of this baseline illustrates again the importance of adapting $g_a$.

### SVD-PINN as a new baseline.

PINN methods are data-free, requiring full knowledge of the physical information, which is not the case here. Our framework is applied in two popular settings:
- Pure data-driven approaches: no prior knowledge of the physics is available, and everything must be learned from data.
- Hybrid formulation: a PDE partially describing the physical system provides some prior knowledge of the physics (section 3.3.3).

These settings correspond to common real-world situations explored in the literature.

### Adding an adaptor-based method

Our framework can be used with different backbones, as indicated above. [3] has been applied on pre-trained transformers and performs adaptation by adding new "adapter" layers, which increases the number of layers. In our setting, we train a model from scratch to learn to be adapted to new environments efficiently.
During adaptation, all baselines keep the same number of layers and only tune a small set of parameters, e.g. $c^e$.

[3] Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks, ACL 2021

### Experiments with advanced neural operators.

We added the **Transolver** baseline in our ERM experiments in the rebuttal (Fig 1 and Table 2). Its performance is similar to the other ERM baselines, verifying that an adaptation framework is necessary regardless of the backbone used.

### Adding training times for the different methods

We have added the training times and the number of parameters for each method in the PDF rebuttal page (Table 1), showing that GEPS achieves a large training-time gain compared to gradient-based methods (CAVIA baseline) and that our method is less costly in terms of parameters compared to the reference baselines CoDA and LEADS.

### Ensemble baseline with a Transformer-based method.

For the ensemble method, we used the same model (CNN) as in our GEPS framework for a fair comparison. Besides, the above experiments with the Transolver show that the backbone itself does not make a significant difference.

### The "adaptive conditioning" term is unsuitable.

We hope to have clarified this issue. The paper's main message is that the ERM inductive principle alone is not sufficient for modeling multiple complex physical dynamics and that another inductive principle should be used. Hence the adaptation inductive principle developed in the paper.

### Knowledge of the PDE coefficients for ERM experiments

In all our experiments, the PDE coefficients are unknown.

### Same knowledge for ERM methods and GEPS?

The assumptions are the same for all the methods (baselines and GEPS). The only difference is in the induction mechanism: ERM methods assume that all data come from the same distribution, while GEPS assumes that data come from different distributions and incorporates an adaptation mechanism to adapt the network to each distribution.
We appreciate your feedback and hope we answered your concerns.

---

Rebuttal Comment 1.1:
Comment: Thanks for your response and new results. The experiments about Transformer-based models and efficiency helped me understand this method better. However, I still think the "shared backbone and domain-specific weights" setting is a strong and necessary baseline. Note that this baseline is not about meta-learning; it is just a classical multitask learning backbone. Besides, since the authors state that "adaptive conditioning" refers to "learning from multiple environments to adapt rapidly to a new one", the above "shared backbone and domain-specific weights" can also be viewed as a special "adaptive conditioning", where the domain-specific head can "adaptively" utilize the shared representations. Thus, I decided to keep my original score.

---

Rebuttal 2:
Comment: Thank you for your review and for appreciating our new experiments. We would like to engage in a further discussion if you allow it. We agree that having a "shared backbone and domain-specific weights" setting is an important baseline. However, as recent papers suggest [1], meta- and multi-task learning, while conceptually different in their problem formulation, share the same optimization problem in reality. All our baselines, whether meta-learning or multi-task learning, do have a shared backbone and domain-specific weights, as you are suggesting. The CoDA paper [2] provides a good analysis of the differences between the multi-environment learning frameworks that use a shared backbone with domain-specific weights (Figure 6 of their supplementary material). However, as per your request, we added two different baselines.
We proposed two different strategies:
- having a shared backbone but domain-specific weights for the first layer
- having a shared backbone but domain-specific weights for the last layer

We report the results for Gray-Scott and Burgers for out-domain results (similar conclusions have been observed in-domain):

| Model | GS | Burgers |
|-------|---------|---------|
| first-layer | 4.52e-2 | 7.34e-2 |
| last-layer | 5.03e-2 | 8.03e-2 |
| GEPS | 1.86e-2 | 5.29e-2 |

We hope this and the already provided new experiments answer your concerns. We are happy to discuss it further if needed.

[1] Wang et al., Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation. ICML 2021
[2] Kirchmeyer et al., Generalizing to New Physical Systems via Context-Informed Dynamics Model. ICML 2022

---

Rebuttal 3:
Comment: Dear Reviewer, Please have a look at the last part of the authors' rebuttal and the other reviews, and indicate if your rating has been updated based on that. Your Area Chair.

---

Rebuttal 4:
Comment: Dear reviewer, As the discussion period comes to an end, we would like to express our gratitude once again for your time and for highlighting important concerns that we hope have been addressed. You mentioned the absence of a baseline involving "a shared backbone with domain-specific weights". We would like to kindly remind you that we have addressed this concern in our last response and hope you can take note of it before the discussion deadline. If so, we respectfully ask if you could kindly indicate whether your rating has been updated accordingly. Thank you very much for your time, and we look forward to your response.

---

Rebuttal Comment 4.1:
Comment: I would like to thank you for your effort in providing new experiment results. It resolves my concern about baselines. I still hold concerns about "adaptive conditioning". In your definition, this concept is too wide to accurately describe your method.
Also, retraining the model to fit a new setting is not "adaptive". I think the presentation of this paper needs a major revision. However, I appreciate your effort in providing new experiments and adding new baselines. I will raise my score to 5.

---

Rebuttal 5:
Comment: We appreciate that our new experiments have addressed your concerns. We would also like to clarify the points you raised:

**Retraining the model to fit a new setting is not adaptive.**
We understand that the term "adaptive conditioning" might be confusing. In the meta-learning community, "adaptation" refers to the process where a model, trained on a range of environments, is efficiently retrained when exposed to data from a new distribution (or environment) to adjust to that new context. This is the concept we aimed to convey in our title: rather than performing direct inference on data from new environments (which underperforms compared to our approach), parametric neural PDE solvers should incorporate an adaptation mechanism. This mechanism enables the model to quickly learn from minimal data in these new environments, thereby enhancing its generalization capabilities.

**This concept is too broad to accurately describe your method.**
We recognize the concern about the breadth of "adaptation." Our paper primarily highlights a limitation in existing methods for solving parametric PDEs and demonstrates that an adaptive mechanism can address this issue, independent of the specific method proposed. While we provide a concrete solution, our main contribution is more general. However, if you believe the title should be more specific to our method, we can revise it to: Boosting Generalization in Parametric PDE Neural Solvers through Context-Aware Adaptation.

Thank you for the valuable feedback and the constructive discussion, which have helped us strengthen the quality of our paper.
Summary: In this paper, the authors propose a meta-learning method called GEPS, which utilizes an adaptation approach to generalize a PDE solver to unseen environments. This method demonstrates better generalization compared to classical ERM approaches.

Strengths: This model can adapt to a new environment $f^e$ in one shot and predict the time trajectory of the corresponding PDE at inference time. Specifically, the proposed model outperforms other existing meta-learning methods both in in-domain and out-of-domain environments of the initial conditions, as well as for in-range and out-of-range (extrapolation) time horizons.

Weaknesses: Section 3.2 provides the motivation for the proposed method, but I am curious about how this motivation relates to the methods proposed in Sections 3.3.2 and 3.3.3. It would be helpful to explain how the issues identified with existing methods in the motivation section led to the use of neural ODEs and the model structure presented in Figure 4. Additionally, considering the results in Tables 2 and 3, the improvement in error scale compared to existing meta-learning methods does not seem significant. Figures 14 and 15 also show minimal differences. Therefore, it would be beneficial to experimentally demonstrate other advantages or unique aspects of the proposed GEPS method beyond its accuracy.

Technical Quality: 3
Clarity: 3

Questions for Authors:
* The overall relative error scale of FNO and MPPDE in Figures 2 and 3 and Table 1 is around 1e-1 (even when the environment and trajectory are sufficient), which appears to be generally larger than the errors presented in the respective original papers. Is this due to the difficulty of the equations, or is there another reason for the relatively large errors?
* I am not quite clear on how the physical knowledge insertion $g_p$ and the data augmentation $g_a$ in Equation (6) are each encoded. Could you provide a more detailed explanation of the encoding methods?
* What is the significance of the order of $g_p$ and $g_a$? Would it cause problems if $g_a$ were applied before $g_p$?

Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See above comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We're thankful for the reviewer's helpful feedback and have addressed the raised concerns below.

### Section 3.2 provides the motivation for the proposed method, but I am curious about how this motivation relates to the methods proposed in Sections 3.3.2 and 3.3.3:

The main messages are:
- Classical ERM training is inadequate for modeling the variability in dynamical physical processes (framed as parametric PDEs), as shown in section 3.2.
- An alternative inductive principle is needed. We propose an adaptive conditioning principle: learning from multiple environments to rapidly adapt to a new one (section 3.3). The key focus is developing an effective meta-learning strategy.
- We propose an implementation that is computationally efficient (fast adaptation to new environments) and scalable w.r.t. the number of training samples via low-rank adaptation (3.3.1) and first-order optimization (3.3.2). This framework is adaptable to different NN backbones. Experiments show it works well with MLP, CNN, and FNO.

To further assess the generality and applicability of our framework, we consider two settings:
- Pure data-driven approaches with no prior physics, where everything is learned from data.
- Hybrid formulations with a PDE prior that partially describes the underlying phenomenon (section 3.3.3).

These settings correspond to common real-world situations explored in the literature.

**Use of Neural ODE**: "Neural ODE" refers to a family of time integration methods. We use a popular RK4 solver, but other solvers could also be used. This is only a technical choice: for the comparison with the ERM methods, GEPS used the same time-stepping method as proposed in the corresponding papers (FNO, MP-PDE, Transolver, CNN).

### The improvement in error scale compared to existing meta-learning methods does not seem significant. Figures 14 and 15 also show minimal differences.
We have now provided error bars for in-domain and out-domain experiments (Tables 3 and 4 in the PDF companion), highlighting that the results obtained are consistent and significant in most cases. Regarding qualitative results, all the methods are able to reconstruct the Burgers trajectory (Fig 14). For Gray-Scott (Fig 15), GEPS clearly outperforms CAVIA and LEADS. Only CoDA matches GEPS, but it diverges from the ground truth for out-of-range time horizons. For the Kolmogorov dataset (Fig 16), only GEPS is able to predict the trajectory accurately, and it only starts to diverge from the ground truth for out-of-range time horizons.

### Advantages or unique aspects of the proposed GEPS method beyond its accuracy.

- We provided training times and numbers of parameters of GEPS compared to other methods in the PDF rebuttal page (Table 1): GEPS is faster to train and requires fewer parameters than concurrent approaches.
- In appendix C.2, we showed that, unlike other methods, GEPS performance increases with the number of adaptation trajectories, while other methods' performance quickly saturates.
- In appendix C.3.1, we showed that GEPS achieves greater performance and scales better in terms of trainable parameters compared to the reference CoDA when increasing the context vector size.
- In C.3.2, we experimentally observed that GEPS can adapt to new environments faster than CoDA.

### The overall relative error scale of FNO and MPPDE in Figures 2 and 3 and Table 1 is around 1e-1, which appears to be generally larger than the errors presented in the respective original papers. Is this due to the difficulty of the equations, or is there another reason for the relatively large errors?

The error of FNO and MP-PDE is higher because the datasets used in the respective papers have been generated by a single equation with fixed parameter values, while we consider multiple environments (i.e. a distribution of parameters).
Whatever the backbone, including FNO, MP-PDE and the new Transolver (see additional experiments below), the ERM induction principle is not adequate for learning from multiple physics environments when only an initial condition is given (initial value problems). Note that we have added a SOTA Transformer neural operator (NO), the **Transolver**, to the experimental comparison (see Fig. 1 and Table 2 in the PDF companion), with the same conclusions.

### How are the physical knowledge $g_p$ and the data augmentation $g_a$ in Equation (6) each encoded?

The physical component of the model is encoded as a differentiable numerical solver. Being differentiable, the latter can be combined with any NN components and the whole architecture trained end-to-end. As for the combination in equation 6, given an initial condition $u_0$, the physical model $g_p$ takes as input $u_0$ and outputs an estimate of the temporal derivative $du/dt$, which is processed by the data-driven term $g_a$. The latter then complements the incomplete estimate computed by the physical component.

### What is the significance of the order of $g_p$ and $g_a$? Would it cause problems if $g_a$ were applied before $g_p$?

Different approaches have been considered in the literature. [1] considered that the data term and the physical term should be combined in an additive manner: $g = g_a + g_p$. In our case, we think that $g = g_a \circ g_p$ is a more general formulation. As for the order, since one makes the assumption that $g_p$ can be incomplete, it is natural to consider that $g_a$ augments this incomplete knowledge and operates after the physical component.

[1] Yin et al. Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting, ICLR 2021

We hope this answers your remarks. We will make sure that your concerns are addressed more clearly in the final version.
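To make the composition described in this rebuttal concrete, here is a minimal sketch of a hybrid derivative model $g = g_a \circ g_p$ rolled out with an explicit RK4 step. The example system (a pendulum whose physical prior omits the damping term), the stand-in correction $g_a$, the parameter values, and the plain-Python RK4 step (in place of the torchdiffeq solver mentioned above) are all illustrative assumptions, not the authors' code.

```python
import numpy as np

# Partial physics g_p: a frictionless pendulum, as an example of an
# incomplete physical prior (the damping term is deliberately missing).
omega0 = 2.0  # hypothetical physical parameter, assumed known here

def g_p(u):
    theta, dtheta = u
    return np.array([dtheta, -omega0**2 * np.sin(theta)])

# Data-driven correction g_a: post-processes g_p's derivative estimate.
# In the paper this is a trained NN; here a stand-in that adds the
# missing linear damping (du_partial[0] is the angular velocity).
alpha = 0.1  # stands in for learned parameters

def g_a(du_partial):
    return du_partial + np.array([0.0, -alpha * du_partial[0]])

def g(u):
    """Full derivative model g = g_a o g_p: the composition order means
    g_a completes the incomplete estimate produced by g_p."""
    return g_a(g_p(u))

def rk4_step(f, u, dt):
    """One explicit RK4 step, a stand-in for a torchdiffeq-style solver."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Roll out a trajectory from an initial condition u0 = (theta, dtheta).
u, dt = np.array([1.0, 0.0]), 0.01
traj = [u]
for _ in range(500):
    u = rk4_step(g, u, dt)
    traj.append(u)
print(len(traj))  # 501 states; the correction makes the oscillation decay
```

Swapping the composition order ($g_p \circ g_a$) would instead require $g_a$ to pre-process the state before the physics, which is less natural when $g_p$ is the trusted but incomplete component.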
---

Rebuttal Comment 1.1:
Comment: Thank you for taking the time to explain my questions in detail and for showing the revised figures and tables. While I understand your explanation, I still find the motivation for the proposed method unclear (as pointed out by other reviewers regarding the lack of technical contributions). Additionally, in Tables 3 and 4, it seems there isn't a significant accuracy improvement compared to other benchmark methods, even with the error bars added from multiple experiments. Therefore, I will maintain my weak accept score of 5. However, I believe that the motivation for the proposed method and the advantages or unique aspects of the proposed GEPS method, which you explained to me, should be added and emphasized more in the revised version of the paper. Thank you.

---

Rebuttal 2:
Comment: Thank you for your thoughtful review and for recognizing the additional experiments we included in response to your concerns. We would like to emphasize again that, as indicated, our main contribution lies not solely in the technical advancements, but in highlighting the inadequacy of classical ERM for parametric PDEs and proposing an alternative inductive principle. We believe that this change in perspective represents a significant step forward that could impact recent trends in foundation models for PDEs, for example, those aimed at solving parametric PDEs. In addition to this conceptual advancement, we introduced an effective solution applicable to a wide range of neural PDE solvers, supported by two key technical contributions:
- An effective yet computationally efficient meta-learning formulation, as demonstrated in Table 1 of the rebuttal, which compares favorably with existing frameworks.
- A new, flexible approach for learning hybrid models within a meta-learning framework, which, to the best of our knowledge, has not been shown before.
Respectfully, we would like to note that a score of 5 typically indicates a borderline acceptance, not a weak one. We will incorporate your feedback into the revised version. Thank you again for your valuable insights, and we are happy to engage in further discussion.
Summary: The authors propose GEPS, a method for meta-learning neural solvers for parametric PDEs. Similarly to gradient-based meta-learning methods such as MAML, the authors formulate the problem as a two-step optimization problem: the goal of the first step is to learn a model that's able to easily adapt to new environments via learning context-dependent parameters in the second step. A data-driven and physics-based decomposition is also proposed. The authors evaluate on four canonical dynamical systems and demonstrate state-of-the-art performance.

Strengths:
- The paper is well-written and clear.
- The proposed method is straightforward and effective. The low-rank parameterization applied to meta-learning for dynamical systems is novel, as far as I am aware.
- Experiments are thorough and comparisons to other meta-learning models are detailed. Evaluated on four canonical dynamical systems, the results are compelling.

Weaknesses:
- More experiments investigating the physics-aware component of the model would be illuminating. It's mentioned in Section 3.3.3 that the method can incorporate incomplete physics information, e.g. an incomplete set of terms or inexact coefficients. However, evaluations of the hybrid method are only done with full knowledge of the system coefficients.
- More ablation studies could be useful to further clarify the importance of each component of the proposed model. For example, it's unclear to what extent the improved performance is due to the specific low-rank parameterization (Eqn 3).
- It would also be helpful to include the number of parameters of all methods evaluated.

Technical Quality: 3
Clarity: 3

Questions for Authors:
- Why is it that for Burgers, the hybrid setting of GEPS performs worse than the purely data-driven setting? Does this suggest that the data-driven/physics-based decomposition (Eqn 6) is not expressive enough to handle certain PDEs?
- In Section 3.3.3, two strategies for adapting PDE parameters to a new environment are proposed (lines 213-219): learning $c^e$ only or learning both $c^e$ and $\theta^e_p$. Which strategy is used for Tables 2 and 3?

Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback and have addressed the raised concerns below. ### More experiments investigating the physics-aware component of the model would be illuminating: For the hybrid physics-ML setting, we make two assumptions: - the physics is only partly known and shall be completed by an NN component for the targeted forecasting task; this is the forward problem. For the pendulum, the physics component ignores the damping term; for Burgers and Kolmogorov, the model ignores both the forcing and the LES terms. The only exception is the Gray-Scott example, for which one assumes full knowledge of the equation, but the physical parameters are unknown and shall be estimated (see below). - for all datasets, the parameters of the partial physics component are unknown and shall be estimated; this is the inverse problem. The list of parameters to be estimated is provided in Appendix D.3, Table 9. For example, for the pendulum, 3 parameters $(\omega_0, F, w_f)$ are estimated. Figures 7 to 11, Appendix C.1, show the error (MAE) for the estimation of the parameters for the pendulum and Gray-Scott (this is an average over all the parameters: 3 for the pendulum, 2 for Gray-Scott). ### More ablation studies could be useful to further clarify the importance of each component of the proposed model: We report four ablations that justify the importance of our low-rank module. We have added a new baseline denoted **Phys-adaptor**, where we only adapt the physical component to new environments while the data-driven component is shared across all environments, without (low-rank) adaptation. The results, shown in Tables 3 and 4 of the pdf file (row **Phys-adaptor**), indicate that this model performs worse than GEPS across all datasets, highlighting the importance of adaptation for the NN component. Other ablations appear in the appendix.
- in section C.2, we evaluated different context-based meta-learning frameworks (CAVIA, CODA, GEPS) during adaptation while increasing the number of samples per environment. We demonstrated that low-rank adaptation performs better than the alternatives for complex PDEs (Kolmogorov). For example, its performance improves with the number of training trajectories, whereas existing methods quickly reach a plateau and do not show further improvement. - in section C.3.1, we showed that our framework scales better than the reference method CoDA in terms of number of parameters and performance when varying the size of the context. - Figure 13, appendix C.3.2, shows that our method allows faster convergence during adaptation compared to CoDA. ### It would also be helpful to include the number of parameters of all methods evaluated. We reported in the PDF rebuttal page (Table 1) the training time and the number of parameters used for each baseline for 1D (Burgers) and 2D (Gray-Scott) PDEs. ### Why is it that for Burgers, the hybrid setting of GEPS performs worse than the purely data-driven setting? We were also surprised by this result and redid the experiments. We have no clear explanation, but we believe that the high flexibility of our formulation for combining $g_p$ and $g_a$ can make training complex for some PDEs and cause over-fitting, especially considering the low amount of data in our setting. ### Two strategies for adapting PDE parameters to a new environment are proposed. Which strategy is used for Tables 2 and 3 in the paper? For the Pendulum and Burgers, we used the context $c^e$ to learn the physical parameters. For Gray-Scott and Kolmogorov, we directly learned the physical parameters without using the context vectors. The two strategies provide similar results; we reported in the paper the best results among the two alternatives. We hope we have answered the main concerns, which will be made clearer in the final version of the paper.
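As a rough illustration of the context-conditioned low-rank adaptation discussed in this rebuttal, the sketch below builds a per-environment weight matrix from shared weights plus rank-1 corrections scaled by a small context vector. All names and shapes here are ours for illustration; this is not the authors' code.

```python
# Illustrative sketch of low-rank, context-conditioned weight adaptation:
# shared weights W and shared rank-1 directions (U, V) are learned across
# environments; only the small context vector c^e is fit per environment.

def outer(u, v):
    """Outer product of two vectors as a nested list."""
    return [[ui * vj for vj in v] for ui in u]

def adapt_weight(W, U, V, c):
    """W_e = W + sum_k c[k] * U[k] V[k]^T  (rank-len(c) correction)."""
    rows, cols = len(W), len(W[0])
    W_e = [row[:] for row in W]
    for k, ck in enumerate(c):
        corr = outer(U[k], V[k])
        for i in range(rows):
            for j in range(cols):
                W_e[i][j] += ck * corr[i][j]
    return W_e

# Shared weights and two rank-1 basis directions (illustrative values).
W = [[1.0, 0.0], [0.0, 1.0]]
U = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.0, 1.0], [1.0, 0.0]]

# At adaptation time, only the per-environment context c is learned.
c_env = [0.5, -0.5]
W_env = adapt_weight(W, U, V, c_env)
# W_env == [[1.0, 0.5], [-0.5, 1.0]]
```

The point of the parameterization is that adaptation touches only the few entries of `c_env`, which is why this style of method adapts quickly and scales well in parameter count.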
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response and clarifications. I believe inclusion of these additional details will strengthen the paper. I will keep my score as is. --- Reply to Comment 1.1.1: Comment: We appreciate your positive feedback. Please don't hesitate to reach out if you have further suggestions or questions.
Rebuttal 1: Rebuttal: We thank all the reviewers for their comments and suggestions. We are encouraged that they found our method clear and well-written (Reviewer 3TLD, Reviewer m65H). We particularly appreciate that you found we tackle an important challenge (Reviewer m65H) and provide a straightforward yet strong and computationally effective solution (Reviewer 3TLD, Reviewer 1xSb), well evaluated on a variety of datasets against different baselines, showing SOTA or competitive performance (Reviewer 3TLD, Reviewer JETf, Reviewer m65H). We carefully answered all the concerns raised by each reviewer. In this section, we address the main concerns shared by the reviewers and introduce additional experiments following their recommendations. ## Additional experimental results (provided in the one-page pdf companion): - Computational efficiency (all reviewers): Table 1 reports training time and number of parameters for the best multi-environment frameworks on the Burgers and Gray-Scott equations. - Error bars (Reviewer JETf): we report error bars for the in-range time horizon (the out-range horizon has been removed due to lack of space) for both in-domain (Table 3) and out-domain (Table 4) experiments, training all frameworks on 3 different seeds, showing that our results are consistent and significant. - Physical adaptor baseline (Reviewer 3TLD, Reviewer m65H): following Reviewer m65H's recommendation, we propose a new baseline "Phys-Adaptor" in Tables 3 and 4, where only the physical component is adapted and the neural network is shared across all environments. The lower performance of this setting demonstrates the importance of our low-rank adaptation mechanism (Reviewer 3TLD). - **Transolver** baseline for ERM methods (Reviewer m65H): following Reviewer m65H's recommendation, we have added an advanced Transformer model to our ERM baselines, for both IVPs in Fig.
1 (Burgers has been removed for lack of space, but similar results have been observed) and when considering a historical window of past states as input (Tab. 2), confirming our observation that whatever NN backbone is used, ERM methods are not effective for multi-environment datasets. ## Main concerns ### Novelty of the adaptation mechanism (Reviewer m65H, Reviewer 1xSb): We want to highlight that the focus of the paper is on introducing a general adaptation framework suitable for modeling complex PDEs, and not on the implementation of a specific NN model as some of the remarks suggest. The main messages are: - Classical ERM training is inadequate for modeling the variability in dynamical physical processes (framed as parametric PDEs), as shown in section 3.2. - As an alternative, we propose an adaptive conditioning principle: learning from multiple environments to rapidly adapt to a new one (section 3.3). The key focus is developing an effective meta-learning strategy. - Our implementation of this principle is computationally efficient (fast adaptation to new environments) and scalable w.r.t. the number of training samples via a low-rank and first-order optimization formulation. We demonstrate that this model improves the state of the art for learning from multiple environments. - This framework is adaptable to different NN backbones. Experiments show that it works well with MLP, CNN, and FNO. We also demonstrate that physical models can be embedded and learned across different environments, even with incomplete models and unknown parameters, which has not been shown before. - We have validated the method on three new datasets, with multiple environments generated using a large diversity of parameters such as PDE coefficients, domain definition or forcing terms. ### Model design choice (Reviewer JETf, Reviewer m65H): - For the hybrid modeling setting, the proposed model integrates a physics module and an agnostic NN module.
The physics module encodes prior knowledge from a PDE that only partially explains the underlying physical phenomenon, while the NN module complements this by modeling the unexplained physics. - Regarding the combination of the physical and data-driven modules ($g_a \circ g_p$), this order is logical because the NN component $g_a$ complements the partial physics encoded in $g_p$. Alternative methods exist, such as [1] $g = g_p + g_a$ or [2] a variational formulation. We propose $g = g_a \circ g_p$, which is more flexible as it does not assume an additive or probabilistic completion of the physical component. - For time integration, we considered NeuralODE, referencing a family of time-stepping methods. In our implementation, we used a differentiable RK4 numerical solver, though other solvers could also be used. For a fair comparison with ERM methods, we used for GEPS the same time-stepping methods proposed in the corresponding reference papers. We acknowledge that Figure 4 in the paper might imply our framework works only with NeuralODE. We will clarify this in the final version. [1] Yin et al. Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting, ICLR 2021. [2] Takeishi et al. Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling, NeurIPS 2021. ### Computational efficiency of our method (All reviewers) - We have added the training times and number of parameters for all methods in Table 1 for the Burgers and Gray-Scott equations. We observe that GEPS is more efficient in terms of both training time and number of parameters. - As shown in Fig. 12 and Table 8 in the appendix, our method performs and scales better in terms of accuracy and total training parameters when varying the context vector size compared to the reference CoDA. - Also, in Fig. 13 in the appendix, we observe that our method adapts faster (fewer steps) to new environments than CoDA.
We hope this clarifies the main concerns pointed out by the reviewers, and we will make sure that the points discussed and the new experiments are added in the final version. Pdf: /pdf/4575f59cdb0b76db89f99d94c63e44ddba1de5e8.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Skill-aware Mutual Information Optimisation for Zero-shot Generalisation in Reinforcement Learning
Accept (poster)
Summary: Proposes a new contrastive learning objective for use in meta-RL with contextual policies. The new objective incorporates a notion of skills into the mutual information estimate. Theoretical and empirical evidence in support of the superiority of the proposed method is presented. Strengths: - The paper is clearly written. - The proposed method is well motivated. - The proposed benchmark is interesting. Weaknesses: - Meta-RL is presented as the problem of learning a contextual RL agent and a context generator for solving the task. However, there are many other kinds of meta-RL algorithms that do not use explicit context representations at all. For some canonical examples, see [RL^2](https://arxiv.org/abs/1611.02779) and [MAML](https://arxiv.org/abs/1703.03400). It is not important to include a thorough survey of other types of meta-RL methods, but for the sake of accuracy, the full scope of meta-RL should be mentioned. Especially important is to note that the proposed method requires a posterior sampling style meta-RL approach, which is not optimal at test-time compared to directly optimizing the meta-RL objective via, e.g., training a sequence model as the policy. - Due to the above issue, it is inaccurate to say that SaNCE can be integrated with any Meta-RL algorithm. - A new benchmark environment should come with thorough benchmarking. A good selection of contrastive meta-RL methods are included, but it would be more convincing if at least some RL^2 style method with suitable architecture (e.g. [transformers](https://proceedings.mlr.press/v162/melo22a.html) or [hypernetworks](https://proceedings.neurips.cc/paper_files/paper/2023/hash/c3fa3a7d50b34732c6d08f6f66380d75-Abstract-Conference.html)) was also run as a baseline. The current benchmarking doesn't try zero-shot meta-RL algorithms at all. - The paper isn't self-contained in terms of understanding how the proposed method works during meta-training and meta-testing times. See questions for details. 
Technical Quality: 3 Clarity: 2 Questions for Authors: - For how many episodes does the agent interact with the environment at meta-training and testing times? - How does the encoder accumulate across multiple episodes of interaction? - Is there a way to use the proposed method for adaptation within an episode? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for the reviewer's professional review work on our paper. We would like to express our sincere appreciation for the reviewer's recognition of our work, especially regarding the writing and the motivation behind our work. We also greatly appreciate the acknowledgement of our theoretical and empirical evidence. Please let us know if further clarification is required. **Re Weakness (1) and (2):** Regarding the reviewer's question about whether SaNCE can be combined with any meta-RL algorithm (e.g., $\text{RL}^2$ and MAML), SaNCE is a simple objective based on mutual information that can be used to train any context encoder as defined in Section 4.2 “Skill-aware noise contrastive estimation: a tighter $K$-sample estimator” in our paper, enabling contextual RL agents to autonomously discover skills. We have now specified the types of algorithms where SaNCE can be used to make the scope clearer in the new manuscript, specifically in Section 5.1 “Experimental Setup” under the “Our Methods” paragraph. **Re Weakness (3):** Please note that the modified MuJoCo environment is not entirely new. We extended the modified MuJoCo benchmark introduced in DOMINO (https://arxiv.org/abs/2210.04209) and CaDM (https://arxiv.org/abs/2005.06800). Hence, our extension adds four new environments (Walker, Crippled Hopper, Crippled Walker, HumanoidStandup) compared with the original benchmark. Additionally, in our experiments, we used a different task set design than in the DOMINO and CaDM papers. Specifically, we evaluated RL algorithms on out-of-distribution tasks which have more extreme environmental features. For benchmarking, we referred to the DOMINO paper and chose PEARL as a baseline. Our baseline CCM (https://arxiv.org/pdf/2009.13891) is also a zero-shot meta-RL algorithm, and in the CCM paper, CaDM and PEARL were used as baselines for comparison.
TESAC, mentioned in the paper (https://arxiv.org/pdf/1910.10897) as a multi-task soft actor-critic with an off-policy version of task embeddings, was used for comparison with the reviewer's recommended $\text{RL}^2$ and MAML. Therefore, we believe our choice of benchmarking aligns with the reviewer's expectations. Additionally, we directly compared our zero-shot meta-RL algorithms with DOMINO and CaDM using exactly the same task set design. We present the comparison results in Appendix G "A Comparison with DOMINO and CaDM." Since we cannot replicate their pre-trained encoder and prediction networks, we have only compared our results with the experimental results from their original papers. The results show that our model-free methods SaCCM and SaTESAC significantly outperform the model-free DOMINO and are slightly inferior to the model-based CaDM. **Re Weakness (4):** In the updated manuscript, we have added explanations on how the proposed method works during meta-training and meta-testing times. First, SaNCE is an objective for contrastive learning that can be used to train context encoders. Therefore, we have added details in Section 3, "Preliminaries," under the "Contrastive learning" subsection, on how the context encoder is trained and works during the meta-training and meta-testing phases: "In a Meta-RL setting, the context encoder $\psi(c|\tau_{0:t})$ first takes the trajectory $\tau_{0:t}= \{s_{0}, a_{0}, r_{0},...,s_{t}\}$ from the current episode as input and compresses it into a context embedding $c$. Then, the policy $\pi$, conditioned on context embedding $c$, consumes the current state $s_t$, outputs the action $a_t$. The policy $\pi$ conditioned on a fixed $c$ alters the state of the environment in a consistent way, thereby exhibiting a mode of skill." Furthermore, in Figure 5, we updated the caption and gave a more illustrative description of how the context encoder works during meta-training and meta-testing times. 
Besides, to help readers understand the learning procedure, we provide pseudo-code for how to use SaNCE during meta-training and meta-testing in Algorithms 1 and 2 in Appendix D.3 (you can find them in the pdf file we submitted in the global rebuttal). **Re Q(1) how many episodes does the agent interact with the environment:** We have detailed this in Appendix D.3 "Implementation details" under the “Base algorithm” section in the original manuscript. In the Meta-training phase, we trained agents for 1.6 million timesteps in each environment on the Panda-gym and MuJoCo benchmarks. The maximum episode length is listed in Appendix Table 3 (can be found in the pdf file we submitted in the global rebuttal). For meta-testing, we tested 100 episodes in each environment, with tasks randomly sampled from the moderate and extreme task sets. Considering this might be of interest to the readers, we have added this detail in the main paper's Section 5.1 “Experimental Setup” under the “Our Method” paragraph. **Re Q(2) How does the encoder accumulate across multiple episodes of interaction?:** Our proposed method needs to adapt within an episode. Therefore, at the beginning of each episode, the encoder is reinitialised and does not accumulate across multiple episodes. Specifically, we use an LSTM as the encoder, and we initialise the hidden state and cell state to zero at the start of each episode. We have added the aforementioned content in the “Encoder Architecture” paragraph of Appendix D.3 "Implementation Details" in the updated manuscript. **Re Q(3) Is there a way to use the proposed method for adaptation within an episode?:** Our proposed method adapts within an episode. We have added descriptions in Section 3, “Preliminaries,” under the “Contrastive learning” subsection and in Figure 5 to make this clearer. 
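The per-episode encoder reset described above can be illustrated with a toy recurrent update (ours, not the LSTM used in the paper): the hidden state is zeroed at the start of each episode, so the context embedding accumulates only within an episode.

```python
# Toy illustration (ours) of resetting the context encoder every episode:
# no state leaks across episodes, mirroring the LSTM hidden/cell reset
# to zero described in the rebuttal above.

def encoder_step(hidden, obs):
    """Stand-in recurrent update; an LSTM cell in the real method."""
    return 0.9 * hidden + 0.1 * obs

def run_episode(observations):
    hidden = 0.0                 # reinitialised at every episode start
    contexts = []
    for obs in observations:
        hidden = encoder_step(hidden, obs)
        contexts.append(hidden)  # context c conditions the policy
    return contexts

ep1 = run_episode([1.0, 1.0, 1.0])
ep2 = run_episode([1.0])
# ep2[0] == ep1[0]: the second episode starts from scratch
```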
Finally, we have provided a demonstration video, "SaMI.MP4," in our Anonymous GitHub repository (https://anonymous.4open.science/r/SaMI) to better illustrate that our method helps the agent complete the three steps of "explore effectively, infer, adapt" within an episode. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I don't think it is fair to describe CCM as a zero-shot meta-rl method as it explicitly uses a different policy for first collecting exploration episodes and then runs the exploitation policy given the context encoding. My concerns have been mostly addressed so I'm raising my score.
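As background for the $K$-sample estimator discussion in this thread, a minimal InfoNCE-style loss, the estimator family SaNCE belongs to, can be sketched as follows. This shows the standard estimator and its log-$K$ bound; it is not the authors' SaNCE code, which additionally restricts the negative sample space to be skill-aware.

```python
import math

# Minimal InfoNCE-style K-sample estimator (illustrative): score one
# positive pair against K-1 negatives with a softmax cross-entropy.

def info_nce(pos_score, neg_scores):
    """-log( e^pos / (e^pos + sum_j e^neg_j) ); lower is better."""
    denom = math.exp(pos_score) + sum(math.exp(s) for s in neg_scores)
    return -math.log(math.exp(pos_score) / denom)

loss = info_nce(pos_score=5.0, neg_scores=[0.1, -0.3, 0.2])

# Mutual-information lower bound: log(K) - loss with K = 4 total samples.
# The bound saturates at log K even for a perfect critic -- the "log-K
# curse" that motivates shrinking the sample space.
mi_lower_bound = math.log(4) - loss
```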
Summary: This paper introduces Skill-aware Mutual Information (SaMI) and Skill-aware Noise Contrastive Estimation (SaNCE) to enhance zero-shot generalization in reinforcement learning (RL). The authors address the challenges faced by Meta-Reinforcement Learning (Meta-RL) agents in tasks requiring different optimal skills by using context encoders based on contrastive learning. SaMI distinguishes context embeddings according to skills, while SaNCE, a K-sample estimator, optimizes the SaMI objective, reducing the negative sample space and mitigating the log-K curse. The proposed methods are empirically validated on modified MuJoCo and Panda-gym benchmarks, demonstrating significant improvements in zero-shot generalization and robustness to sample size reductions. Strengths: ### S1. Novel Problem Formulation The paper introduces a novel and relevant problem formulation by addressing the need for distinguishing context embeddings according to skills in Meta-RL. This approach targets the challenge of generalizing across tasks with varying optimal skills, which is crucial for effective RL in diverse environments. ### S2. Empirical Validation The proposed methods, SaMI and SaNCE, are thoroughly validated through experiments on modified MuJoCo and Panda-gym benchmarks. The results demonstrate substantial improvements in zero-shot generalization to unseen tasks, showcasing the practical applicability and effectiveness of the methods. ### S3. Clear Presentation The paper is well-structured, with clear explanations of the proposed methods and their benefits. The use of figures, such as visualizations of context embeddings and success rate comparisons, effectively supports the presentation of results. The theoretical proofs and experimental details provided in the appendices further enhance the clarity and comprehensiveness of the paper. Weaknesses: ### W1. 
Scalability Concerns The scalability of the proposed methods to more complex, large-scale environments is not thoroughly discussed. While the methods show promising results in the tested benchmarks, a broader analysis of their scalability and practical utility in more complex scenarios is needed to understand their full potential and limitations. ### W2. Dependency on Skill Definitions The success of the proposed methods depends on the accurate definition and identification of skills. The paper does not address potential issues related to the variability in skill definitions across different environments and tasks, which could impact the robustness and generalizability of the methods. Technical Quality: 2 Clarity: 3 Questions for Authors: ### Q1. Scalability to Complex Environments How do the proposed methods scale to more complex, large-scale environments? Are there any specific challenges or limitations that need to be addressed for practical deployment in such scenarios? ### Q2. Variability in Skill Definitions How does the variability in skill definitions across different environments and tasks affect the proposed methods? Have the authors considered any strategies to mitigate the potential impact of inconsistent skill definitions on the robustness and generalizability of the methods? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors acknowledge the limitations related to the requirement for accurate skill definitions and the focus on specific benchmarks. However, a more detailed discussion on potential negative societal impacts and strategies to mitigate them would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your feedback, especially your comments on the scalability of the method and variability in skill definitions. We are also grateful for the reviewer's recognition of the novelty and contributions of our work, as well as the acknowledgement of our empirical validation and presentation. Please let us know if further clarification is required. **Re Q1. Scalability to Complex Environments:** To further explore the full potential and limitations of SaMI, we have added 4 new environments in the MuJoCo benchmark (Walker, Crippled Walker, Crippled Hopper, and HumanoidStandup), adopted broader test tasks (with more extreme unseen mass and damping values in testing tasks, shown in Table 3 in the pdf file we submitted in the global rebuttal), and conducted a more comprehensive analysis (i.e., video demos in our Anonymous GitHub repository, https://anonymous.4open.science/r/SaMI, demonstrating the different skills learned by various algorithms). Through additional experiments, we have gained a clearer understanding of SaMI's scalability to more complex, large-scale environments: 1) In more complex MuJoCo environments (Crippled Ant, Crippled Hopper, Crippled Half-Cheetah, SlimHumanoid, HumanoidStandup, and Crippled Walker) as well as the Panda-gym benchmark, SaMI helps the RL agents to be versatile and embody multiple skills, and leads to increased returns/success rates during training and zero-shot generalisation. These environments require the agent to master different skills. When the environmental features (e.g., cube mass, table friction) vary between tasks, the agent can use different skills to work across multiple tasks. 2) In the MuJoCo Ant, Half-Cheetah and Hopper environments, we observed that when the environment requires only a single skill, the improvement is not as significant. However, SaMI requires far fewer samples.
We believe that with the 10 environments in MuJoCo and Panda-gym, we can conclude that SaMI demonstrates significant advantages in complex, large-scale environments. It enables the agent to become more versatile, thereby performing well in these complex settings. As for the limitations, we noted in the Discussion section that we do not make independence assumptions about environmental features. Therefore, there is potential to apply our approach to more complex environments where environmental features interact with each other (e.g., the material of the cube affecting both its mass and friction). This will be explored in our future work. We also hope this study inspires others to focus on scalability in complex environments without relying on restrictive assumptions. We have also added another limitation in Section 5.3 “MuJoCo” under the “Results and Skill Analysis” paragraph, noting that the RL agent tends to learn a single skill to work across multiple tasks in some environments. For example, in the Hopper environment, we found that to adapt to different mass values, the TESAC/CCM/SaTESAC/SaCCM policy tends to learn only one skill, which is the Hopper hopping forward on the floor. As a result, we can see from Table 2 (in the pdf file we submitted in the global rebuttal) that the returns obtained by the four algorithms are very similar across the training tasks, moderate test tasks, and extreme test task settings. In this case, our conclusion is that "even though SaNCE uses fewer samples, it does not degrade RL performance when only one skill is required." Additionally, further discussion and research are needed to determine when multiple skills are required and what improvements multiple skills can bring beyond generalisation. **Re Q2.
Variability in Skill Definitions:** To more clearly explain the distinctiveness of skills, we have added a description in Section 4.4 "Skill-aware trajectory sampling strategy": "Methods that focus on skill diversity often rely heavily on accurately defining and identifying skills (https://arxiv.org/abs/1802.06070), with some requiring a prior skill distribution that is often inaccessible (https://arxiv.org/abs/2207.07560). This variability in skill definitions across tasks can affect the robustness and generalisability of these methods. For these reasons, our approach does not use such skill definitions and priors for specific environments or tasks. In this section, we characterise the distinctiveness of skills and propose a practical trajectory sampling method. In this study, we believe that semantic distinctiveness of skills is inherently difficult to achieve: a slight difference in states can make two skills distinguishable, and not necessarily in a semantically meaningful way. Instead, we should focus on whether the skills acquired by the agent can complete the task. For example, in high-friction tasks, the agent must acquire the Pick\&Place skill to avoid large frictional forces, while in high-mass tasks, the agent must learn the Push skill since it cannot lift the cube. In that way, without defining skills in a semantically meaningful way, we only need to train an agent on a set of tasks, and it will autonomously discover diverse skills to work across multiple tasks." Additionally, based on the reviewer's comments, we have added video demos (https://anonymous.4open.science/r/SaMI) in our experiments to help readers better understand how RL agents complete different tasks through different skills. Meanwhile, using the t-SNE and PCA plots provided in the original manuscript, we can understand the distinctiveness of the acquired skills in a semantically meaningful way.
For example, in the videos, we can see that SaCCM has learned the Push, Pick\&Place, Drag and Slide skills in Panda-gym. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I appreciate the authors for the detailed response, especially the video demos, which seem to be very effective. Most of my concerns are addressed, therefore I'm raising my assessment.
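The return-based positive/negative split described in these rebuttals (high-return trajectories within a task treated as positives for contrastive training of the context encoder) can be sketched as follows; the `top_fraction` threshold is our illustrative choice, not the authors' exact criterion.

```python
# Illustrative sketch (not the authors' code) of skill-aware trajectory
# sampling: rank trajectories by return and treat the top fraction as
# positives (generated by a "positive skill"), the rest as negatives.

def split_by_return(trajectories, top_fraction=0.5):
    """Sort trajectories by return; the top fraction become positives."""
    ranked = sorted(trajectories, key=lambda tr: tr["ret"], reverse=True)
    cut = max(1, int(len(ranked) * top_fraction))
    return ranked[:cut], ranked[cut:]

buffer = [
    {"id": "push", "ret": 9.0},
    {"id": "pick_place", "ret": 8.5},
    {"id": "stall", "ret": 0.5},
    {"id": "drop", "ret": -1.0},
]
positives, negatives = split_by_return(buffer)
# positives: push, pick_place ; negatives: stall, drop
```

Note that similarity here is defined purely by return, not by semantic behaviour, matching the rebuttal's point that two behaviours with very different returns are treated as different skills.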
Summary: Learning generalizable skills across different tasks is desirable in Reinforcement Learning. Some methods embed task information into a context latent space which is then used to train policies. This paper proposes a new objective Skill-aware Mutual Information (SaMI) that incentivizes the distinction of context embeddings according to skills. If optimized, the agent can then execute a skill depending on the underlying context. In addition to the objective, the paper proposes a K-sample estimator, Skill-aware Noise Contrastive Estimation (SaNCE), that enables an agent to optimize the SaMI objective in a sample-efficient manner. Strengths: - relevant topic: Generalising RL agents across tasks is an important topic that will enhance the field of RL - well-motivated: The example in the introduction in addition to Fig. 1 is helping the reader to visually understand the considered problem - The paper clearly states the contributions Weaknesses: only minor things such as an algorithm box would be useful to understand the learning procedure Technical Quality: 3 Clarity: 3 Questions for Authors: - Section 5.3 provides insights into how the positive and negative samples are determined. How prone is this to declare very similar samples, i.e. similar behaviors as positive and negative samples? For example, the return for sample 1 might be high and the return for sample 2 is low, but they are in essence executing the same behavior. - Could the authors elaborate on the training procedure? How is the sampling of features exactly done and what exactly is the training set as mentioned in Section 6.1 line 246? Does this mean the RL agent is building upon existing offline data? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Overall I found this a good paper with minor issues/unclear points (see Weaknesses/Questions). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your recognition of our work. We fully agree with the reviewer that generalising RL agents across tasks is an important topic that will enhance the field of RL, and we are very grateful for your positive feedback on it. We appreciate your acknowledgement of our presentation and our contribution statement. Based on the reviewer's comments, **to help readers understand the learning procedure,** we provide pseudo-code for how to use SaNCE during meta-training and meta-testing in Algorithms 1 and 2 in Appendix D.3 "Implementation Details" (you can find them in the pdf file we submitted in the global rebuttal). Please let us know if further clarification is required. **Re Q(1) “How prone is this to declare very similar samples, i.e. similar behaviours, as positive and negative samples? For example, the return for sample 1 might be high and the return for sample 2 is low, but they are in essence executing the same behaviour”:** In Section 5.3, we define samples (i.e., trajectories) with high returns as positive samples, and the skill that generates this positive sample is defined as a positive skill. In this way, if sample 1 and sample 2 do not have similar returns (i.e., sample 1 has a high return, and sample 2 has a low return), then their corresponding skills are also not similar (i.e., skill 1 is a positive skill, and skill 2 is a negative skill). Note that the similarity of skills mentioned here is not semantic similarity; it does not mean that all skills that exhibit "pick the cube off the table and place it to the goal position" are considered similar skills. Furthermore, the reviewer's concern involves the issue of the distinctiveness of skills. 
To more clearly explain the distinctiveness of skills, we have added a description in Section 4.4 "Skill-aware trajectory sampling strategy": "Methods that focus on skill diversity often rely heavily on accurately defining and identifying skills (https://arxiv.org/abs/1802.06070), with some requiring a prior skill distribution that is often inaccessible (https://arxiv.org/abs/2207.07560). This variability in skill definitions across tasks can affect the robustness and generalisability of these methods. For these reasons, our approach does not use such skill definitions and priors for specific environments or tasks. In this section, we discuss the distinctiveness of skills and propose a practical trajectory sampling method. In this study, we believe that distinctiveness of skills is inherently difficult to achieve — a slight difference in states can make two skills distinguishable, and not necessarily in a semantically meaningful way. Instead, we should focus on whether the skills acquired by the agent can complete the task. For example, in high-friction tasks, the agent must acquire the Pick\&Place skill to avoid large frictional forces, while in high-mass tasks, the agent must learn the Push skill since it cannot lift the cube. In that way, without defining skills in a semantically meaningful way, we only need to train an agent on a set of tasks, and it will autonomously discover diverse skills to work across multiple tasks." **Re Q(2): “How is the sampling of features exactly done and what exactly is the training set as mentioned in Section 6.1, line 246? Does this mean the RL agent is building upon existing offline data?”:** The original text in line 246 is “During training, we uniform-randomly select a combination of environmental features from a training set.” This sentence means that during meta-training, at the beginning of each episode, we sample a task from the training task set to interact with for one episode.
Since tasks are defined by environmental features, this means selecting a combination of environmental features from a training task set at the start of each episode. To ensure it is clear to readers that "training set" refers to data collected during training rather than offline data, we have revised the original text to: “During meta-training, we uniform-randomly select a combination of environmental features from a training task set.” Similarly, during meta-testing, at the beginning of each episode, we select a combination of environmental features from the moderate/extreme task set. Additionally, in response to the reviewer's comment, “Could the authors elaborate on the training procedure?”, we have added the description of the meta-training and meta-testing processes in the new manuscript. We have added descriptions in Section 3, “Preliminaries,” under the “Contrastive learning” subsection and in Figure 5. We also provided an example in the first paragraph of the introduction section illustrating the need for the agent to adapt within an episode: “When facing an unknown environment, the agent needs to explore effectively, understand the environment, and adjust its behaviour accordingly within an episode. For instance, if the agent tries to push a cube across a table covered by a tablecloth and finds it ‘unpushable,’ it should infer that the table friction is relatively high and adapt by lifting the cube to avoid friction, rather than continuing to push.” --- Rebuttal Comment 1.1: Comment: I highly appreciate the authors' responses that clarify my questions. The responses confirm my initial assessment in recommending accepting this paper.
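The procedure described in this thread (per-episode task selection and return-based positive/negative labelling) can be sketched roughly as follows; all names, feature values, and the trajectory format are hypothetical placeholders, not the paper's implementation:

```python
import random

def sample_task(training_task_set, rng):
    # At the start of each episode, uniform-randomly pick a combination
    # of environmental features (e.g. friction and mass values).
    return rng.choice(training_task_set)

def label_trajectories(trajectories):
    # Return-based, polarised labelling: the highest-return trajectory
    # is the positive sample; all lower-return ones are negatives.
    ranked = sorted(trajectories, key=lambda tr: tr["return"], reverse=True)
    return ranked[0], ranked[1:]

rng = random.Random(0)
tasks = [{"friction": f, "mass": m} for f in (0.1, 1.0) for m in (1.0, 5.0)]
task = sample_task(tasks, rng)

# Hypothetical episode returns collected under the sampled task.
trajectories = [{"return": 12.0}, {"return": 3.5}, {"return": -1.0}]
pos, negs = label_trajectories(trajectories)   # pos has return 12.0
```

Note that, as the rebuttal stresses, "similarity" here is purely return-based rather than semantic: two trajectories with very different returns form a positive/negative pair even if their behaviour looks alike.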
Summary: The paper proposes an alternative approach to infoNCE for learning contrastive task representations by using the structure of skills in a meta-learning setting. In doing so, the algorithm samples more negatives within the same task as the positive (but with lower reward), in a procedure that somewhat resembles hard negative mining. A benefit of this procedure is that it is less sensitive to the choice of infoNCE batch size K, which in large multi-task settings is often only able to approximate mutual information for infeasibly large values. Empirical results show this alternative contrastive loss improves performance on RL benchmarks. Strengths: - The log K curse is an important practical issue when using contrastive infoNCE losses in RL settings with many tasks—often the classification becomes trivial without hard negatives. Approaches like those proposed in this paper will be important to scaling contrastive learning methods for control. - Empirical results show strong benefits against the baselines tested in many environments. Weaknesses: - There seem to be issues with the mathematical formulation (see questions) - I found the presentation of the method to be confusing, in particular regarding how the learned representations were actually used for meta-learning Technical Quality: 1 Clarity: 1 Questions for Authors: - I'm confused by the statement in Lemma 1. The RHS, $\log K \le I(x;y)$, only depends on the data distribution. In some cases, it seems the mutual information between $x$ and $y$ can be less than $\log K$, regardless of the parameterization used. Is this a contradiction? - Similarly, I don't understand Figure 6—K is on the x-axis, but $\log K$ is drawn as a horizontal line. - From eq. 2, it seems the definition of the SaMI $I_{\mathrm{SaMI}}(c ; \pi_c ; \tau_c)=\mathbb{E}_{p(c, \pi_c, \tau_c)} \{\log \frac{p(\tau_c \mid c, \pi_c)}{p(\tau_c)}\}$ is equivalent to the mutual information $I(\tau_c;(c,\pi_c))$.
But by the chain rule we know $I(\tau_c;(c,\pi_c))=I(\tau_c;c)+I(\tau_c;\pi_c\mid{}c)$ which contradicts the next statement $I_{\mathrm{SaMI}}(c ; \pi_c ; \tau_c) \leq I(c ; \tau_c)$ - How is the "momentum encoder" parameterized in this setting? - Is there significance to the bars not lining up in Figure 2 (c)? - The authors may wish to discuss related work on hard negative mining Confidence: 4 Soundness: 1 Presentation: 1 Contribution: 2 Limitations: Addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
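The log K ceiling at issue in this review can be illustrated with a minimal numerical check: whatever critic scores are used, the K-sample InfoNCE estimate cannot exceed log K, because the softmax probability of the positive pair is at most 1 (the scores below are random placeholders, not the paper's critic):

```python
import math
import random

def infonce_estimate(scores):
    """K-sample InfoNCE estimate: mean of log(K * softmax score of the
    positive pair). scores[i][j] is the critic value f(x_i, y_j); the
    diagonal entries are the positive pairs."""
    K = len(scores)
    total = 0.0
    for i in range(K):
        row = [math.exp(s) for s in scores[i]]
        # K * row[i] / sum(row) <= K, so each term is at most log K.
        total += math.log(K * row[i] / sum(row))
    return total / K

rng = random.Random(42)
K = 8
scores = [[rng.gauss(0.0, 1.0) for _ in range(K)] for _ in range(K)]
estimate = infonce_estimate(scores)
assert estimate <= math.log(K)   # the log K ceiling always holds
```

A perfectly discriminating critic (very large diagonal scores) pushes the estimate arbitrarily close to log K but never past it, which is why the estimator saturates whenever the true mutual information exceeds log K.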
Rebuttal 1: Rebuttal: Thank you for the reviewer's feedback and for acknowledging our empirical results and recognising the importance of our research contributions to scaling contrastive learning methods for control. We hope our clarifications, corrections and analysis can convince the reviewer to reconsider their score. Please let us know if further clarification is required. **Re Q(1) the statement in Lemma 1 and typo in Figure 6:** Thank you for pointing out the typo in Figure 6 in the original paper; we’ve fixed it in the updated paper, including: 1) The x-axis should be "training epochs" instead of $K$. The main idea that Figure 6 aims to present is that as the training progresses, the lower bound of $I(x;y)$, $I_\text{SaNCE}$ (yellow line), gradually approaches $I(x;y)$ (dark blue line). 2) Considering the importance of Figure 6 to Lemma 1, we have moved it to the main text (so it's Figure 3 now, you can find it in the one-page pdf). Regarding the reviewer's question, "In some cases, it seems the mutual information between $x$ and $y$ can be less than $\log K$, regardless of the parameterization used." We have revised the statement of Lemma 1 in Section 4.1 to ensure we have considered all cases. We believe the following two changes will address the reviewer's concern: 1) We do not consider the case when $x \perp y$, i.e. $\log K > I(x;y) = 0$ ($\forall K > 1$), because a meta-RL agent learns a context encoder by maximising MI between trajectories $\tau_c$ and context embeddings $c$, which are not independent according to the MDP graph in Figure 2. We also added the aforementioned content in Section 4.1. More specifically, in Section 4.2, we define skills as follows: “A policy $\pi$ conditioned on a fixed context embedding $c$ is defined as a skill $\pi(\cdot|c)$, abbreviated as $\pi_c$. If a skill $\pi_c$ is conditioned on a state $s_t$, we can sample actions $a_t \sim \pi(\cdot|c,s_t)$.
After sampling actions from $\pi_c$ at consecutive timesteps, we obtain a trajectory $\tau_{c}=\{s_t, a_t, r_t, s_{t+1}, \ldots, s_{t+T}, a_{t+T}, r_{t+T}\}$ which demonstrates a consistent mode of behaviour.” Therefore, trajectories $\tau_c$ and context embeddings $c$ are not independent. 2) Given that we focus on generalisation, we consider cases with a finite sample size of $K$, hence the true $I(x;y) \geq \log K$. This is because we want to learn a context encoder to compress helpful information even with finite samples, which is crucial for generalisation. The key to achieving good compression is having a reliable measure of compression, and MI is a good measure of compression (https://arxiv.org/pdf/1810.05728). $K$-sample estimators like NCE or InfoNCE were originally proposed to estimate MI from finite samples. Therefore, we believe it is important to consider whether the $K$-sample estimator can approximate MI with finite samples, i.e., we should consider cases where the true $I(x;y) \geq \log K$. We also added the above statement in Section 4.1 on page 4 of the updated manuscript. We have included the restriction of a finite sample size $K$ and the condition that $x$ and $y$ are not independent in Lemma 1 to ensure that the Lemma covers all cases (as shown on page 4 of the updated manuscript): Lemma 1: Learning a context encoder $\psi$ with a $K$-sample estimator and finite sample size $K$, we always have ${I}_{\text{InfoNCE}}(x;y|\psi,K)$ $\leq$ $\log K$ $\leq$ $I(x;y)$, when $x \not \perp y$. (see proof in Appendix A) **Re Q(2) definition of SaMI in eq.2:** Thank you for pointing out our typo "$p(\tau_c|c,\pi_c)/p(\tau_c)$" in eq.2. We have corrected it, and it now aligns with the formula in Appendix B "Proof for Lemma 2". We have corrected eq.
2 to: $$I_{\text{SaMI}}(c;\pi_c;\tau_c) = E_{p(c,\pi_c,\tau_c)} \log \ \frac{p(c,\pi_c,\tau_c)}{p(c)p(\pi_c)p(\tau_c)} $$ Besides, we gave a proof in Appendix B of the original manuscript: $I_{\text{SaMI}}(c;\pi_c;\tau_c) = I(c;\tau_c) - I(c;\tau_c|\pi_c)$, hence $I_{\text{SaMI}}(c;\pi_c;\tau_c) \leq I(c;\tau_c)$. **Re Q(3) How is the "momentum encoder" parameterized?:** In Appendix D.3 "Implementation Details" of the original manuscript, the "momentum encoder" parameters $\theta_{\psi^*}$ are updated by $\theta_{\psi^*} \leftarrow m \cdot \theta_{\psi} + (1-m) \cdot \theta_{\psi^*}$. This is a similar setup to existing research (e.g. https://arxiv.org/abs/1911.05722). The momentum update makes $\theta_{\psi^*}$ evolve more smoothly by having it slowly track $\theta_{\psi}$ with $m \ll 1$ (e.g., $m=0.05$ in this research). **Re Q(4) Is there significance to the bars not lining up in Figure 2(c)?:** Yes, the bars not lining up in Figure 2 (c) (Figure 4 in the updated manuscript) illustrate that the negative sample space may differ across tasks. To address the reviewer's concern, we have added an explanation in Appendix C, "Sample Size of $I_{\text{Sa+InfoNCE}}$,": "Figure 4 illustrates the differences in the size of the negative sample space, in which the negative sample space may vary across tasks (as shown by the non-lining-up bars in Figure 4(c)). This is because we define negative samples as trajectories with low returns, so the size of the negative sample space is influenced by sampling randomness." **Re Q(5) related work on hard negative mining:** We thank the reviewer for pointing out this related work; such methods provide insight into defining positive/negative samples. We have carefully considered this suggestion, but our method does not make significant contributions in terms of sample hardness.
In the third paragraph of Section 4.4 “Skill-aware trajectory sampling strategy,” we briefly added the insight we gained from hard negative samples: “Positive samples are generated by the optimal skill for the current task, while lower return samples are classified as negative. This polarised definition helps the model select the optimal skill from among many skills with varying returns and avoids the issue of hard negative examples during training (https://arxiv.org/abs/2010.04592).” --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response. The new statement of eq. (2) seems to be equivalent to the definition of the *total correlation* (Watanabe, 1960). Could you comment on the relationship between the updated eq. (2) definition of $I_{\mathrm{SaMI}}$ and existing information-theoretic quantities like total correlation (and estimators such as Bai et al. (2023))? ### References Bai, K., Cheng, P., Hao, W., Henao, R., & Carin, L. (2023). Estimating total correlation with mutual information estimators. _Proceedings of The 26th International Conference on Artificial Intelligence and Statistics_, 2147–2164. [https://proceedings.mlr.press/v206/bai23a.html](https://proceedings.mlr.press/v206/bai23a.html) Watanabe, S. (1960). Information theoretical analysis of multivariate correlation. _IBM Journal of Research and Development_, _4_, 66–82. IBM Journal of Research and Development. [https://doi.org/10.1147/rd.41.0066](https://doi.org/10.1147/rd.41.0066) --- Reply to Comment 1.1.1: Title: Response to the official comment from Reviewer hnsz Comment: **Re “The new statement of eq. (2) seems to be equivalent to the definition of the total correlation”:** We thank the reviewer for pointing out this related work. The eq. (2) $I_{\text{SaMI}}(c;\pi_c;\tau_c)$ is the **interaction information**, and it differs from the **total correlation** in definition. Both interaction information and total correlation are generalisations of mutual information.
Specifically, let's first consider the **total correlation** for three variables. For a given set of 3 random variables $\{x,y,z\}$, the total correlation $TC(x,y,z)$ is defined as the Kullback–Leibler divergence from the joint distribution $p(x,y,z)$ to the independent distribution $p(x)p(y)p(z)$. This divergence simplifies to the difference of entropies: $$TC(x,y,z)=H(x)+H(y)+H(z)-H(x,y,z)$$ where $H(x)$ is the information entropy of variable $x$, and $H(x,y,z)$ is the joint entropy of the variable set $\{x,y,z\}$. In contrast, the **interaction information** $I(x;y;z)$ for three variables $\{x,y,z\}$ is given by: $$I(x;y;z)=(H(x)+H(y)+H(z)) - (H(x,y)+H(x,z)+H(y,z)) + H(x,y,z)$$ In Appendix B, “Proof for Lemma,” we refer to it as “interaction information,” drawing from references "Interaction Information for Causal Inference: The Case of Directed Triangle" (https://arxiv.org/pdf/1701.08868) and "Multivariate Information Transmission" (https://ieeexplore.ieee.org/document/1057469). There are several other names for Interaction Information, including the amount of information [1], information correlation [2], co-information [3], and simply mutual information [4]. To make eq. (2) clearer, we have moved the definition of Interaction Information into the main text. Therefore, we directly defined $I_{SaMI}$ using the Interaction Information definition: $$I_{SaMI}(c;\pi_c;\tau_c)= I(c;\tau_c)-I(c;\tau_c|\pi_c)$$ Therefore, we proposed a lower bound $I_{\text{SaMI}}(c;\pi_c;\tau_c)$ to approximate $I(c;\tau_c)$. We believe that other generalisations of mutual information (i.e., MI estimators) could also be helpful in approximating $I(c;\tau_c)$. **Re "the relationship between the updated eq. (2) definition of $I_{SaMI}$ and estimators such as Bai et al. 
(2023)":** The TC estimator [5] is an interesting study, and we fully agree with Bai et al.'s perspective [5] that when calculating mutual information among multiple variables, existing work often seeks ways to decompose interaction information, which leads to overly strong independence assumptions about the data distribution. For instance, as mentioned in [5], some studies make a strong assumption that all variables are conditionally independent [6]. Without such strong independence assumptions, the TC estimator [5] decomposes interaction information by assuming that the relationships between variables follow a tree-like or line-like structure, thus applying tree-like or line-like decomposition to the interaction information, resulting in tree-like or line-like MI estimators (i.e., TC estimators). Our proposed SaMI approach does not make any assumptions about the data, and therefore does not decompose the interaction information. This is primarily because we focus on mutual information among just three variables in the context of the RL problem, which is significantly simpler than the scenarios addressed in [5]. Additionally, through extensive experiments, we have verified that maximising the mutual information $I_{SaMI}$ significantly enhances the zero-shot generalisation ability of reinforcement learning. We thank the reviewer again for noting the typo in our original eq. (2) and helping us to refine Lemma 1. We would be grateful if the reviewer could reevaluate their score in light of our given clarifications. **References** [1] Ting, Hu Kuo. "On the amount of information." Theory of Probability \& Its Applications 7.4 (1962): 439-447. [2] Wolf, David R. "The Generalization of Mutual Information as the Information between a Set of Variables: The Information Correlation Function Hierarchy and the Information Structure of Multi-Agent Systems." (2004). [3] Bell, Anthony J. "The co-information lattice."
Proceedings of the fifth international workshop on independent component analysis and blind signal separation: ICA. Vol. 2003. 2003. [4] Yeung, Raymond W. "A new outlook on Shannon's information measures." IEEE transactions on information theory 37.3 (1991): 466-474. [5] Bai, Ke, et al. "Estimating total correlation with mutual information estimators." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. [6] Poole, Ben, et al. "On variational bounds of mutual information." International Conference on Machine Learning. PMLR, 2019. --- Rebuttal 2: Title: Response to the official comment from Reviewer hnsz Comment: Thank you for your response. **The second statement is the correct definition $I_{SaMI}(c;\pi_c;\tau_c)= I(c;\tau_c)-I(c;\tau_c|\pi_c)$, i.e., the 3-variable interaction information.** So we are optimising the interaction information $I_{SaMI}(c;\pi_c;\tau_c)$, defined as $I_{SaMI}(c;\pi_c;\tau_c) = I(c;\tau_c)-I(c;\tau_c|\pi_c)$. Please kindly check the updated manuscript for the updated eq. (2) at https://anonymous.4open.science/r/SaMI/SaMI.pdf. So the entire story of the version changes is as follows: 1) **The SaMI formula $I_{SaMI}(c;\pi_c;\tau_c)$, defined based on interaction information, has not changed,** and the proof (in Appendix A and B of our original manuscript) based on this formula still holds. 2) **The only change made was to the detailed version of the formula with the log probabilities, i.e., $E_{p(c,\pi_c,\tau_c)} \log \ \frac{p(\tau_c|c,\pi_c)}{p(\tau_c)}$.** This was corrected due to a typo pointed out by the reviewer, but it was not essential for the theory and has been removed from the paper. Since we cannot evaluate $p(c,\pi_c,\tau_c)$ directly, we approximate it by Monte Carlo sampling using $K$ samples from $p(c,\pi_c,\tau_c)$ (i.e., trajectories from environments). Therefore, extensive discussion about the details of the distribution $p(c,\pi_c,\tau_c)$ is not required. 
3) We ensure that our framework effectively maximises $I_{SaMI}$ through the $K$-sample estimator $I_{SaNCE}$ proposed in Section 4.3 of the original manuscript, and the skill-aware trajectory sampling strategy proposed in Section 4.4 of the original manuscript. **Please refer to Section 4.1 of our original manuscript for our motivation.** Since maximising MI $I(c;\tau_c)$ faces the $\log K$ curse, we propose $I_{\text{SaMI}}(c;\pi_c;\tau_c)$, which is a ground-truth MI that is smaller than $I(c;\tau_c)$. We state in our Lemma 1 that $I_{\text{InfoNCE}}(x;y|\psi, K) \leq \log K \leq I(x;y)$; please refer to Appendix A of our original manuscript for the proof of Lemma 1. We have presented **the motivation for this study in Section 4.1 of our original manuscript:** “Good compression is crucial for generalisation, and compressing valuable information from a limited number of $K$ samples requires MI as an effective measure of compression (https://arxiv.org/abs/1810.05728). We derive three key insights when learning a context encoder with finite sample size: (1) focus on a ground-truth MI that is smaller than $I(x;y)$; (2) develop a $K$-sample estimator tighter than the $I_{\text{InfoNCE}}$; (3) increase sample quantity $K$, however, this is usually impractical. A meta-RL agent learns a context encoder by maximizing MI between trajectories $\tau_c$ and context embeddings $c$. Driven by insight (1), we introduce Skill-aware Mutual Information (SaMI) in Section 4.2, designed to enhance the zero-shot generalization of downstream RL tasks. Corresponding to insight (2), we propose Skill-aware Noise Contrastive Estimation (SaNCE) to maximize SaMI with finite samples in Section 4.3. Finally, Section 4.4 demonstrates how to equip a Meta-RL agent with SaNCE in practice.” Besides, there are many formulas (or generalised versions of these formulas) for calculating the mutual information between variables $c,\pi_c,\tau_c$. **We aim to address the $\log K$ curse. 
Therefore, we proposed $I_{SaMI}(c;\pi_c;\tau_c)$, which aligns with our motivation: $I_{SaMI}$ serves as a ground-truth MI that is smaller than $I(c;\tau_c)$. Hence, we maximise the interaction information $I_{SaMI}$.** This allows for faster convergence in sample-limited tasks, thereby facilitating zero-shot generalisation. Achieving this is crucial for applying RL-based robots to sample-limited real-world scenarios and enabling rapid adaptation, which is one of the field's significant bottlenecks. There are various bounds on mutual information, both lower and upper (https://arxiv.org/pdf/1905.06922). For example, **the total correlation pointed out by the reviewer has great potential for calculating mutual information among multiple variables. However, it does not align with our motivation.** Since $TC \geq I_{SaMI}$, it is not suitable for zero-shot generalisation scenarios.
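The quantities compared in this exchange can be checked numerically on a toy distribution: for any joint pmf over three variables, the interaction information equals $I(c;\tau_c) - I(c;\tau_c|\pi_c)$ and is upper-bounded both by $I(c;\tau_c)$ and by the total correlation. A stdlib sketch over three binary variables (the pmf is arbitrary, chosen only for illustration):

```python
import itertools
import math
import random

def entropy(pmf):
    # Shannon entropy in nats of a pmf given as {outcome: probability}.
    return -sum(p * math.log(p) for p in pmf.values() if p > 0)

def marginal(joint, axes):
    # Marginal pmf over the given subset of variable indices.
    out = {}
    for xyz, p in joint.items():
        key = tuple(xyz[a] for a in axes)
        out[key] = out.get(key, 0.0) + p
    return out

# Random joint pmf over three binary variables (0 = c, 1 = pi, 2 = tau).
rng = random.Random(0)
weights = {xyz: rng.random() for xyz in itertools.product((0, 1), repeat=3)}
total = sum(weights.values())
joint = {xyz: w / total for xyz, w in weights.items()}

H = lambda axes: entropy(marginal(joint, axes))
# Mutual information I(c;tau) and conditional MI I(c;tau|pi).
I_c_tau = H((0,)) + H((2,)) - H((0, 2))
I_c_tau_given_pi = H((0, 1)) + H((1, 2)) - H((1,)) - H((0, 1, 2))
# Interaction information and total correlation.
interaction = (H((0,)) + H((1,)) + H((2,))
               - H((0, 1)) - H((0, 2)) - H((1, 2)) + H((0, 1, 2)))
tc = H((0,)) + H((1,)) + H((2,)) - H((0, 1, 2))

assert abs(interaction - (I_c_tau - I_c_tau_given_pi)) < 1e-9
assert interaction <= I_c_tau + 1e-9   # since I(c;tau|pi) >= 0
assert interaction <= tc + 1e-9        # TC >= interaction information
```

The last inequality follows from Han's inequality, $H(x,y,z) \leq \frac{1}{2}[H(x,y)+H(x,z)+H(y,z)]$, which is consistent with the rebuttal's point that $TC \geq I_{SaMI}$.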
Rebuttal 1: Rebuttal: Thank you very much for the reviewer's feedback. We appreciate the time and effort that the reviewers dedicated to providing feedback on our manuscript, and are grateful for the insightful comments and valuable improvements to our paper. We have addressed each of the reviewers' comments individually. Please find video demos in our Anonymous GitHub repository (https://anonymous.4open.science/r/SaMI). We also attach a one-page PDF file which contains all of the Tables and Figures mentioned in our responses to the reviewers. You can also find our updated anonymous manuscript in our Anonymous GitHub repository (named "SaMI.pdf"). Pdf: /pdf/2e6e7eb3d825aa34cfa14a4b787b849476525129.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper presents an interesting way to deal with the "log k curse" that is novel while being sensible. The results seem promising, even though their presentation needs improving, and I believe it can serve as a foundation for many future works to build upon. Overall I think the idea and execution are solid, but the paper as an artifact needs work, as I'll point out in detail in the Weaknesses section. Strengths: The paper presents an interesting novel technique that is intuitive without being trivial. There are experiments across 2 suites of tasks and solid mathematical work, and the overall idea is flexible enough for future work to build upon. Weaknesses: I believe the results, and more specifically their presentation, are a serious weakness of the paper. In Panda-Gym the authors claim their method "boosts success rates by an average of 20.23% in moderate tests and 18.36% in extreme tests", but in fact when taking into account the standard deviation of their and previous methods' performances we *cannot* ascertain whether there was any improvement whatsoever on the moderate and extreme tests. This also points to a misuse of the blue background that "indicates that our method surpasses PEARL, TESAC and CCM". Moreover, in the MuJoCo results, while we in general do see more statistically significant results for Test (Moderate) and Test (Extreme), there are still a few misuses of the blue background, and we in general don't see statistically significant improvement in the training setting, which leads to the question of why we see such different behaviours in Panda-Gym (non-statistically significant improvement in Moderate and Extreme, but improvement in Training) and MuJoCo (the opposite), a question it would have been good for the authors to explore a little more.
Finally, the interpretation of the t-SNE results seems quite problematic, where 2 clear clusters, one on the lower left and another on the lower right, containing context embeddings of the 4 kinds of settings for Panda-Gym seem to be ignored, but an artificial cluster is suggested by using a yellow bounding box. I consider such a cluster artificial as many of the points on the upper part of the bounding box seem closer to points outside it than to points inside it, and it's always good to remember that for t-SNE local distances are meaningful, but global ones are not. Technical Quality: 3 Clarity: 3 Questions for Authors: + Do the authors have any hypothesis on why the generalization behaviour is different when using SaNCE in Panda-Gym vs MuJoCo? + I believe fixing the plots and claims on improvements is fundamental for the paper to be a good research artifact and recommend the authors do so + Given what was said about the t-SNE analysis I believe it either should simply be removed, or a different analysis using clustering methods in higher dimensions should be used instead. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper is mostly about addressing a limitation of previous methods, and I do not believe it has any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the reviewer’s comments, especially the helpful comments on the presentation of experimental results. We appreciate your recognition of the novelty of our work and the potential of our method. We have made revisions to enhance the clarity and presentation of our experimental results based on your suggestions. Please let us know if further clarification is required. **Re statistical significance tests:** We have added a t-test to conduct a statistical hypothesis test to determine whether SaMI brought a statistically significant improvement over the respective base RL algorithm without SaMI. i) In the updated manuscript, we marked with an asterisk (*) in Table 1 and Table 2 where the algorithm with SaMI shows statistically significant improvement over the same algorithm without SaMI (you can also find them in the pdf file we submitted in the global rebuttal). The t-test results indicate that SaMI brings significant improvement to the extreme test set. This aligns with our conclusions, that SaMI leads to increased returns during training and zero-shot generalisation, especially in environments that require different skills. We have added this information in Section 5.3. ii) We also report the p-values of the t-tests in Appendix H in the updated manuscript. **Re Q(1) whether the generalisation behaviour is different for SaNCE in Panda-Gym vs MuJoCo:** The results from Panda-Gym and MuJoCo are consistent as they both show: i) SaMI helps the RL agents to be versatile and embody multiple skills; ii) SaMI leads to increased performance during training and zero-shot generalisation, especially in environments that require different skills. Specifically, *1) Panda-Gym:* The RL agent must equip itself with different skills to achieve a high success rate across multiple tasks because different environmental features require different skills. 
For example, tasks with high friction require the Pick\&Place skill (picking the cube off the table and placing it in the goal position), while tasks with high mass require the Push skill (pushing the cube to move it across the table). Therefore, in the Panda-Gym benchmark, we see that multiple skills are needed in training tasks, moderate test tasks, and extreme test tasks, so SaNCE brings improvement across all three. *2) MuJoCo:* In the MuJoCo environment, we found: i) In some environments (Ant, Hopper, and Half-Cheetah), the RL agent only needs to learn a single skill to generalise across different tasks. For example, in the Hopper environment, we found that to adapt to different mass values, the TESAC/CCM/SaTESAC/SaCCM policy tends to learn only one skill, which is the Hopper hopping forward on the floor. As a result, we can see from Table 2 that the returns obtained by the four algorithms are very similar across the training tasks, moderate test tasks, and extreme test task settings. In this case, our conclusion is that "even though SaNCE uses fewer samples, it does not degrade RL performance"; ii) When the environment (Crippled Ant, Crippled Hopper, Crippled Half-Cheetah, SlimHumanoid, HumanoidStandup, and Crippled Walker) requires different skills to complete tasks, SaNCE brings significant improvements. For example, in the Crippled Ant environment, when it has 3 or 4 legs available, the Crippled Ant Robot learns to roll to adapt to varying mass and damping in the training tasks and moderate tasks. However, during zero-shot generalisation in extreme tasks, when only 2 legs are available, the Ant Robot can no longer roll. Instead, it adapts by walking using its two legs. Therefore, we can see that SaNCE brings significant improvement, especially during zero-shot generalisation. 
To better showcase generalisation behaviour, we have added 4 new environments in the MuJoCo benchmark (Walker, Crippled Walker, Crippled Hopper, and HumanoidStandup), and adopted broader test tasks (with more extreme unseen mass and damping values in testing tasks, shown in Table 3 in the pdf file we submitted in the global rebuttal). This allows us to better demonstrate SaNCE's zero-shot generalisation ability across more environments and less familiar testing tasks. Additionally, to better present our results, we have utilised videos on GitHub to clearly demonstrate how SaNCE helps the RL algorithm learn different skills and perform zero-shot generalisation (https://anonymous.4open.science/r/SaMI). To help readers better understand the generalisation behaviour, we have added the aforementioned descriptions in Section 5.2 "Panda-gym" and Section 5.3 "MuJoCo" under the "Result and Skill Analysis" paragraph. **Re Q(2) and (3) t-SNE plots and claims on improvements:** We have updated the t-SNE plots in Figure 6 in the one-page pdf. The original plot represented 100 trajectories for each task, where each point in the t-SNE plot represents the context embedding of the final time step of each trajectory, encapsulating the skill information of the entire trajectory. However, when randomly initialising the cube's starting position, if the cube is very close to the target position, the agent only needs to slightly move the cube to complete the task. In this case, the embeddings for all 4 tasks cluster together. Additionally, there are failure cases across all 4 tasks where the agent attempts to move the cube, but the cube remains still, leading to clustering of the embeddings. Therefore, the 2 clear clusters containing all tasks represent these two scenarios, with one cluster on the lower left and another on the lower right. 
To better illustrate the clustering of embeddings based on different skills, we have removed the aforementioned two scenarios from the 100 tests for each task. In the revised t-SNE plot, we can clearly see two distinct clusters. **Regarding the presentation of results:** To better present our experimental results, we have made the following improvements: 1) We’ve deleted the blue background. 2) We have removed the average improvement percentage. --- Rebuttal Comment 1.1: Comment: Thank you for your response and the changes made to the paper. Could you tell me the 5 values for the performances of SaTESAC and TESAC on the Panda-Gym extreme test setting? I'm somewhat confused by the t-test results. I still do not understand how you can call the region inside the yellow box a cluster in T-SNE, as there are multiple points with x < -10 who look like they should be part of the so-called cluster, but which would be pretty bad for the push skill, which end up just making me doubt a little whether the method is truly doing what the authors claim it is doing. --- Reply to Comment 1.1.1: Title: Response to the official comment from Reviewer cm3M Comment: Thank you for your response. The table below provides the requested 5 values for the performances of SaTESAC and TESAC showing the average success rate ± standard deviation (on 8 extreme test tasks): | | seed 1 | seed 2 | seed 3 | seed 4 | seed 5 | | -------- | ------- | ------- | ------- | ------- | ------- | | TESAC | 0.19±0.13 | 0.20±0.26 | 0.25±0.19 | 0.18±0.14 | 0.28±0.31 | | SaTESAC | 0.37±0.37 | 0.37±0.35 | 0.36±0.36 | 0.36±0.34 | 0.38±0.35 | These 5 values in the table demonstrate that *the improvements made by SaTESAC over TESAC in extreme test tasks are significant*, which aligns with our paired t-test results. The paired t-test indicates that *the improvements made by SaTESAC over TESAC in all three settings (Training, Moderate, and Extreme) are statistically significant at a significance level of 0.05*. 
The paired t-test results are shown in the table below (you can also find these results in Appendix H of the updated paper at: https://anonymous.4open.science/r/SaMI/SaMI.pdf):

| | Training | Moderate | Extreme |
|-------------------|-----------|-----------|-----------|
| t-statistic | 12.180000 | 13.800000 | 8.020000 |
| p-value | 0.000260 | 0.000160 | 0.001310 |

Please find all the results for TESAC/SaTESAC/CCM/SaCCM across multiple seeds used to generate Table 1 and Table 2 in our anonymous GitHub repository (https://anonymous.4open.science/r/SaMI/data/MuJoCo.xlsx and https://anonymous.4open.science/r/SaMI/data/Panda-gym.xls). Please refer to the "success rate \& t-test" sheet in the file “data/Panda-gym.xls” to find all experimental results related to Panda-gym. The content of the table above can also be found in the "Mean Success Rate-Multiple Envs" sheet. Thank you for your comment on the projection method. We have added **UMAP** (https://arxiv.org/pdf/1802.03426) as an alternative projection method, which is similar to t-SNE but more efficient and tends to better preserve the global structure of the data than t-SNE (i.e., it more clearly separates groups of similar categories from each other). We have replaced the t-SNE visualisation in Figure 6 with UMAP in the updated paper. In the new context embedding, two clear clusters are visible, corresponding to Push and Pick\&Place, respectively. Please find the updated Figure 6 at https://anonymous.4open.science/r/SaMI/data/figure_6.png Finally, to better understand what the method is doing, it would be helpful to combine the newly added UMAP visualisations (in Appendix F of the updated paper at: https://anonymous.4open.science/r/SaMI/SaMI.pdf) with the heatmap visualisations (provided in the original manuscript in Appendix F) and the video demos (in our anonymous GitHub repository https://anonymous.4open.science/r/SaMI).
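As an aside for readers checking the arithmetic, the per-seed means reported in this thread are enough to recompute a paired t-statistic. The sketch below uses only the Python standard library; because the tabulated per-seed values are rounded, it only approximates the reported t = 8.02 for the Extreme setting:

```python
import math

# Per-seed mean success rates on the extreme Panda-gym test tasks (from the table above).
tesac   = [0.19, 0.20, 0.25, 0.18, 0.28]
satesac = [0.37, 0.37, 0.36, 0.36, 0.38]

def paired_t(a, b):
    """Paired t-statistic for two matched samples (df = n - 1)."""
    diffs = [y - x for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # unbiased sample variance
    return mean / math.sqrt(var / n)

t = paired_t(tesac, satesac)
print(round(t, 2))  # prints 8.35, well above the two-sided 0.05 critical value t(4) ≈ 2.776
```

The statistic computed from the rounded table values lands near the reported one, consistent with the claim that the improvement is significant at the 0.05 level.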
We have provided skill analysis in the original manuscript in sections 5.2 and 5.3 under the “Results and skill analysis” paragraph, which you can also find on page 9 (Panda-gym environment) and page 10 (MuJoCo environment) of the updated manuscript (https://anonymous.4open.science/r/SaMI/SaMI.pdf). Please let us know if further clarification is required.
null
null
null
null
null
null
Achievable Fairness on Your Data With Utility Guarantees
Accept (poster)
Summary: This paper addresses a significant limitation in the fairness literature, which is the use of uniform fairness requirements across diverse datasets. It proposes the YOTO framework to approximate the fairness-accuracy trade-off and reduce computational costs, whereas existing methods typically require training multiple models. Strengths: - Originality: This paper discusses the problems associated with using uniform fairness metrics across diverse datasets and provides a framework for choosing fairness guidelines based on the characteristics of individual datasets. While this issue has been addressed in the fairness literature, this paper proposes a new framework to tackle it. Unlike previous work that recovers a single point on the trade-off curve corresponding to pre-specified fairness constraints during the training procedure, this method estimates the entire trade-off curve. - Quality and Clarity: The paper is well-constructed and easy to follow. The motivation and research question are clearly stated. The theoretical analysis is solid. The experiments are conducted across three different practical applications (tabular data, images, and text), which is rigorous. - Significance: This paper proposes a novel framework to approximate the accuracy-fairness trade-off curve, which is an important and efficient evaluation paradigm for providing fairness guidelines in practice. The method has a practical advantage, allowing decision-makers to observe the minimum fairness violation that a particular dataset could have as the model accuracy increases. Weaknesses: - The content in lines 121-126 that illustrates the suboptimality problem, as shown in Figure 1, is highly similar to the content in lines 61-66. This content should be summarized to make it more concise. - Since this work is based on YOTO, YOTO should be introduced in the Preliminary section for better illustration.
- Section 3.1 should also be introduced in the Preliminary section, as it is the basic in-processing framework for fairness. - There is a lack of dataset and experiment setup descriptions, including data size, sensitive attributes chosen, held-out set size, and train-test split ratio. Technical Quality: 3 Clarity: 3 Questions for Authors: In Lemma 3.1, $\delta$ specifies the relationship between the confidence interval and the data size of the held-out set. Is there any requirement for the size of the held-out calibration dataset to guarantee an effective measure for the confidence intervals? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Since this work is mainly about fairness, I do not see any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's recognition of the quality and originality of our work. We clarify the questions raised below. > The content in lines 121-126 that illustrates the suboptimal problem, as shown in Figure 1, is highly similar to the content in lines 61-66. This content should be summarized to make it more concise [...] Since this work is based on YOTO, YOTO should be introduced in the Preliminary section for better illustration [...] Section 3.1 should also be introduced in the Preliminary section, as it is the basic in-processing framework for fairness [...] We thank the reviewer for their suggestions regarding the structure of our write-up and will make the proposed modifications to the final version of our paper. > There is a lack of dataset and experiment setup descriptions, including data size, sensitive attributes chosen, held-out set size, and train-test split ratio. Due to space constraints, we have included comprehensive details about our experimental setup in Appendix F of our paper. This includes dataset details (such as dataset sizes), the sensitive attributes chosen, the model architectures used for both our YOTO model and the baselines, and training details (such as the number of epochs, compute resources used, and the methodology for early stopping and fairness losses). For completeness, we will also include the key details in the final version of our main text. > In Lemma 3.1, $\delta$ specifies the relationship between the confidence interval and the data size of the held-out set. Is there any requirement for the size of the held-out calibration dataset to guarantee an effective measure for the confidence intervals? In general, Hoeffding's inequality provides finite-sample guarantees which remain valid regardless of the size of the calibration dataset. However, if the calibration dataset is small, the CIs obtained might be more conservative (and hence less informative).
The width of the CIs decreases as the calibration dataset size increases, thereby leading to more informative confidence intervals. In fact, it can be seen from Lemma 3.1 that the width of the CIs on $acc(h_\lambda)$ is $\mathcal{O}(1/\sqrt{|D_{cal}|})$. To investigate how the calibration dataset size impacts the informativeness of our CIs in practice, we have conducted comprehensive ablation studies across different datasets in Appendix F.4. Our empirical results show that while the informativeness of the CIs for a given calibration dataset size depends on the fairness metric and dataset under consideration, we observe that in many cases the CIs obtained are informative when the size of calibration dataset is at least 1000. We hope the above addressed the concerns raised by the reviewer and that the reviewer will consider increasing their score. --- Rebuttal Comment 1.1: Comment: I am satisfied with the authors’ replies, and they have addressed my concerns and questions. After checking the replies the authors provided to other reviewers, I am happy to raise my score.
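The $\mathcal{O}(1/\sqrt{|D_{cal}|})$ scaling of the CI width discussed in the rebuttal above is easy to see numerically. The following minimal sketch (an illustration of the Hoeffding half-width from Lemma 3.1, not the authors' code) shows, for instance, that a calibration set of 1000 points gives a half-width of roughly 0.043 at $\alpha = 0.05$:

```python
import math

def hoeffding_width(n_cal, alpha=0.05):
    """Half-width of the Hoeffding CI on accuracy from n_cal calibration points."""
    return math.sqrt(math.log(2 / alpha) / (2 * n_cal))

for n in (100, 1000, 10000):
    print(n, round(hoeffding_width(n), 4))  # widths shrink like 1/sqrt(n)
```

Multiplying the calibration size by 100 shrinks the half-width by exactly a factor of 10, matching the stated rate.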
Summary: Considering the inherent accuracy-fairness trade-off in real-world scenarios with data imbalance and bias, imposing strict fairness constraints could be impractical. To address this, the paper introduces an efficient method to contextualize and accurately estimate the fairness-accuracy trade-off curve for each dataset. The proposed YOTO approach avoids redundant model training when realizing the trade-off curves. The paper also presents a method to quantify the confidence of the estimate. Strengths: - Clarity of the writing: The paper is easy to follow and well-motivated. - Novel Approach: The paper introduces a novel method to approximate the fairness-accuracy trade-off curve efficiently, which is a significant improvement over traditional methods requiring multiple model trainings. - Computational Efficiency: By leveraging the YOTO framework, the proposed method reduces the computational burden, making it feasible for large datasets and complex models. - Practical Utility: The framework provides a practical tool for auditing model fairness, offering a principled way to select data-specific fairness requirements and assess compliance. Weaknesses: - Data Dependency: The methodology requires separate datasets for training and calibration, which may not be feasible in situations with limited data. - Assumptions for Statistical Guarantees: The statistical guarantees rely on assumptions that may not hold in all practical scenarios, potentially limiting the generalizability of the results. - Objective Formulation Dependency: The trend (scattered points) of YOTO and the corresponding CIs are specifically designed for a relaxed fairness metric rather than an explicit fairness notion, which may make the boundary inaccurate. In other words, the trade-off curve and CIs can be sensitive to the fairness loss formulation.
Technical Quality: 3 Clarity: 3 Questions for Authors: - Can I understand the trade-off curve estimated by the proposed method as similar to a Pareto frontier with CIs, inherent in the dataset once the model design is specified? If so, I think it would be worth mentioning related thresholding post-processing methods [1,2]. - I think we can easily interpret points residing in the upper-left as suboptimal in Figure 3. However, how can we interpret the ones residing in the lower-right region? Does this mean the proposed trade-off estimate and corresponding CI are not accurate? [1] Kim, Joon Sik, Jiahao Chen, and Ameet Talwalkar. "FACT: A diagnostic for group fairness trade-offs." International Conference on Machine Learning. PMLR, 2020. [2] Jang, Taeuk, Pengyi Shi, and Xiaoqian Wang. "Group-aware threshold adaptation for fair classification." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 6. 2022. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the questions and weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for acknowledging the novelty and practical utility of our approach and the clarity of our writing. We respond to the questions raised below. > Assumptions for Statistical Guarantees: The statistical guarantees rely on assumptions that may not hold in all practical scenarios, potentially limiting the generalizability of the results. We would like to clarify that while Theorem 3.4 relies on regularity assumptions, our statistical guarantee in Proposition 3.2 does not require any additional assumptions beyond the exchangeability of data. In particular, Proposition 3.2 provides the 'worst-case' optimal trade-off, accounting for finite-sample uncertainty, and the upper CIs obtained remain valid even if the YOTO classifier $h_\lambda$ is not trained well (and hence achieves a sub-optimal accuracy-fairness trade-off), although in such cases the CI may be conservative. This means that any trade-off which lies above the upper CIs is guaranteed to be suboptimal with probability $1-\alpha$ without any additional assumptions. > Objective Formulation Dependency: [...] trade-off curves and CIs can be sensitive to fairness loss formulation While the trade-offs and CIs obtained may be sensitive to the fairness loss formulation, it is important to emphasise that our coverage guarantees in Propositions 3.2 and 3.3 are independent of the fairness loss formulations. Specifically, our upper CIs obtained in Proposition 3.2 remain valid regardless of the fairness loss used. Similarly, for the lower CIs, our sensitivity analysis should adjust the lower CIs to account for any sub-optimalities in the estimated trade-off curve arising from our choice of fairness loss in practice. This is also evident from our experimental results in Section 5 where we consider baselines with various fairness loss formulations, and show that the obtained trade-offs are overall consistent with our CIs.
> Can I understand the trade-off curve estimated by proposed method similar to Pareto Frontier with CIs inherent in dataset when model design is specified? If so, I think it would be worth mentioning related thresholding post-processing methods [1,2]. We thank the reviewer for their question. It is important to clarify the distinct goals of our work and those in [1,2]. Our focus in this work is on reliably approximating the optimal accuracy-fairness trade-offs achievable within a specific model class, $\mathcal{H}$, using the finite data available. To achieve this, we construct CIs with probabilistic guarantees of the form $\mathbb{P}(\tau^*_{fair}(\Psi) \in \Gamma^\alpha) \geq 1-\alpha$. This is in contrast to [1, 2] which provide upper bounds on the best achievable accuracy-fairness trade-off when infinite data is available. While these works offer valuable theoretical insights, there is no guarantee regarding whether this upper bound is achievable by the model class under consideration, given available data. In fact, we consider the FACT Pareto frontiers proposed in [1] in Appendix F.6, where we show empirically that these Pareto frontiers are highly conservative and are not achieved by any of the SOTA baselines we considered. We will further highlight these distinctions in our updated manuscript. > I think we can easily interpret points reside in upper-left are suboptimal in Figure 3. However, how can we interpret the ones reside in lower-right region? Does this mean the proposed trade-off estimate and corresponding CI is not accurate? Our confidence intervals (CIs) quantify the range of most likely trade-offs achievable for a given finite dataset and model class. In particular, our CIs only offer coverage of at least $1-\alpha$. This means that if a baseline's trade-off falls below the CIs (i.e. 
in the blue region in Figure 1) then this would suggest that the model achieves an exceptionally good accuracy-fairness trade-off, although in practice this will rarely occur (i.e. with a probability of at most $\alpha$). This is also consistent with our experimental results, where we consider $\alpha = 0.05$. Our results in Table 1 verify that less than 5\% of the SOTA baseline trade-offs fall below our lower CIs, as promised by Proposition 3.3. We hope that the above has addressed the questions raised by the reviewer adequately and that the reviewer will consider raising their score.
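The coverage behaviour described above can be illustrated with a small simulation: a schematic check, under an assumed true accuracy and i.i.d. Bernoulli calibration draws, of how often a Hoeffding CI covers the truth. This is not the paper's experiment, only a toy model of the $1-\alpha$ guarantee:

```python
import math
import random

def coverage(true_acc=0.8, n_cal=1000, alpha=0.05, trials=2000, seed=0):
    """Fraction of simulated calibration sets whose Hoeffding CI covers true_acc."""
    rng = random.Random(seed)
    width = math.sqrt(math.log(2 / alpha) / (2 * n_cal))  # half-width from Lemma 3.1
    hits = 0
    for _ in range(trials):
        # empirical accuracy on a synthetic calibration set of n_cal i.i.d. points
        emp = sum(rng.random() < true_acc for _ in range(n_cal)) / n_cal
        hits += abs(emp - true_acc) <= width
    return hits / trials

print(coverage())  # Hoeffding is conservative, so coverage comfortably exceeds 0.95
```

Because Hoeffding's bound is distribution-free, the simulated miss rate is far below $\alpha$; this mirrors the empirical finding that fewer than 5% of baseline trade-offs fall below the lower CIs.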
Summary: This paper proposes a computationally efficient method to estimate the accuracy-fairness trade-off curve with a statistical guarantee, given the dataset and model class. Specifically, it first adopts an existing method, You-Only-Train-Once (YOTO), to get the trade-off curve and then proposes a systematic way to obtain the confidence intervals for fairness and accuracy. Strengths: 1. The research problem on the fairness-accuracy tradeoff is fundamental and practical. There is little literature that touches on this problem. 2. Considering the finite-sample error, the confidence interval is essential for tradeoff investigation. This paper proposes a systematic way to investigate this problem. 3. The paper is well-written and easy to follow. Weaknesses: 1. The research focus, the fairness-accuracy tradeoff, is computationally expensive to study since the auditing process usually involves training multiple models. Although the authors adopt the computation-efficient YOTO framework to tackle this issue, the main contribution on the efficiency side comes mainly from existing work. I suggest the authors highlight the value of the statistical guarantee. For example, given the statistical guarantee, can the authors provide a more comprehensive evaluation or any insights for model training? Otherwise, the audience may have a "so what" question in mind. 2. I understand the general approach for deriving confidence intervals on the fairness and accuracy side, but I am still confused about how the authors estimate confidence intervals in practice, especially for the sensitivity $\Delta(h_{\lambda})$. (1) How do you estimate or justify $\Delta(h_{\lambda})$ without knowing Pareto optimality? It seems to be counter-intuitive. (2) Do you consider Pareto optimality only for specific model classes? I am curious about "When YOTO satisfies Pareto optimality" (Line 693). If the main conclusion only holds under specific model classes, it would be better to highlight such a condition.
(3) I suspect the analysis is not fully rigorous. For example, for Hoeffding’s inequality in Line 215, $\tilde{acc}$ should be the expectation of $acc$. However, $\tilde{acc}$ is the accuracy of $h_{\lambda}$ on $D_{cal}$, which is a realization (not the mean) of the accuracy variable. 3. I notice that there is similar literature at https://openreview.net/forum?id=dSbbZwCTQI, which also trains once with flexible tradeoffs. The main idea of YODO is to learn a ``line'' in the weight space that connects the accuracy-optimum and fairness-optimum points using a single model, which differs from the YOTO approach. It would be better to highlight the difference and explain why the authors chose YOTO instead of YODO. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The method requires an in-distribution calibration dataset and only applies to in-distribution tests. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful review and for highlighting the strengths of our work, including its motivation and presentation. Below we address some of the questions raised. > [...] given the statistical guarantee, can the authors provide a more comprehensive evaluation or any insights for model training? [...] Our confidence intervals (CIs) are designed to equip practitioners with an auditing tool to assess fairness trade-offs for different model classes and datasets. If for a classifier $h_0$, the accuracy-fairness trade-off lies above our proposed upper CIs (i.e. in the pink region in Figure 1), then with probability $1-\alpha$ the classifier $h_0$ achieves a sub-optimal trade-off (i.e., there exists another classifier $h' \in \mathcal{H}$ with $acc(h') \geq acc(h_0)$ and $\Phi_{fair}(h') \leq \Phi_{fair}(h_0)$). In this case, one should explore alternative fairness methodologies and model architectures until the model achieves a trade-off in the "permissible trade-off region" (green area in Figure 1). In this way, our CIs can also guide algorithm development, as researchers can use the permissible region of trade-offs identified by our CIs as a target during the design of improved algorithms. It is worth emphasising that unlike prior works which only consider the empirical accuracy-fairness trade-offs, our CI-based methodology is the first to avoid false conclusions arising from finite-sampling errors, thereby offering more reliable insights in realistic data-limited scenarios, as we demonstrate empirically in Figure 1. > How do you estimate or justify $\Delta(h_\lambda)$ without known Pareto optimality? First, we recall that $\Delta(h_\lambda)$ (defined in Figure 2a) intuitively denotes the gap between the trade-off achieved by YOTO and the optimal achievable trade-off $\tau^*_{fair}$, and is an unknown quantity in general. 
Therefore, for practical purposes, we use sensitivity analysis to posit 'plausible' values for this quantity. The high-level idea behind our procedure is to calibrate the value of $\Delta(h_\lambda)$ using some additional models trained separately using standard methods with varying fairness constraints. **Sensitivity analysis** Explicitly, our sensitivity analysis approach is as follows: First, aside from our YOTO model, our sensitivity analysis procedure uses $k$ additional models $\mathcal{M} :=$ { $h_1, ..., h_k$ } trained separately using standard regularized loss (in Eqn. (2) of our paper). Let $\mathcal{M_0} \subseteq \mathcal{M}$ be the models which achieve a better empirical trade-off than our YOTO model (see Figure 2b in our main text). We choose $\Delta(h_\lambda)$ for our YOTO model to be the maximum gap between the empirical trade-offs of these models in $\mathcal{M}_0$ and the YOTO model. In practice, this shifts our lower CIs downward until all the trade-offs for models in $\mathcal{M}$ lie above our lower CIs. **Intuition** If we assume that training models separately for each fairness constraint recovers the optimal trade-off curve, then we could in principle obtain this optimal curve by training many different models, although at a significant computational cost. Our approach instead offers a compromise between the optimality of trade-offs and computational cost, as we instead train a few models with fairness regularization parameters $\lambda$ sampled uniformly and use these to posit plausible values for $\Delta(h_\lambda)$. (In our experiments, we found that using 2 separately trained models for sensitivity analysis was sufficient.) > Do you consider Pareto optimality only for specific model classes? Our methodology remains valid for any model class $\mathcal{H}$ which can be optimised using gradient-based methods. 
In this setting, the models in $\mathcal{H}$ can be trained using a YOTO-style architecture and hence our methodology remains applicable. While we mention this in Section 2.1 (lines 108-109), we will highlight this further in the updated manuscript. > [...] for Hoeffding’s inequality in Line 215, $\tilde{acc}$ should be the expectation of $acc$ [...] We wish to clarify that $\widetilde{acc(h_\lambda)}$ is indeed a random variable. Formally, $$ \widetilde{acc(h_\lambda)} := \sum_{(X_i, Y_i) \in D_{cal}} \frac{1(h_\lambda(X_i) = Y_i)}{|D_{cal}|}. $$ Then, Hoeffding's inequality shows that with probability at least $1-\alpha$ we have that $$ |\widetilde{acc(h_\lambda)} - \mathbb{E}[\widetilde{acc(h_\lambda)}] | \leq \sqrt{\log{(2/\alpha)}/(2|D_{cal}|)}. $$ Then, Lemma 3.1 follows from the fact that $acc(h_\lambda) = \mathbb{E}[\widetilde{acc(h_\lambda)}]$. We will clarify this further in the final version. > [...] It would be better to highlight the difference and why the authors chose YOTO instead of YODO Firstly, we thank the reviewer for pointing us to this manuscript. The YODO architecture comprises two sets of parameters, each corresponding to accuracy and fairness optimisation respectively. This means that, for a given size of the main model, the YODO architecture will require the optimisation of twice as many parameters as the standard model. In contrast, the YOTO architecture does not split the weights but instead uses the FiLM layers to dynamically adapt the hidden layers of the models depending on the fairness regularization parameter $\lambda$. Consequently, YOTO only adds a relatively small number of additional parameters as it only involves two additional MLPs which can be significantly smaller than the main model. (See Appendix F.1 for more details on model architectures.) As a result, training YOTO can be computationally cheaper, especially in cases where the main model is very large. 
For completeness, we will include a comprehensive discussion of this comparison in the final version of our paper. We hope that we were able to address all the reviewer's questions and hope that the reviewer would consider increasing their score. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 3zFr Comment: Thanks for the detailed response. I am satisfied with the response and will raise my score.
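The sensitivity-analysis procedure for positing $\Delta(h_\lambda)$ described in the rebuttal above, i.e. taking the maximum gap by which separately trained models beat the YOTO curve, can be sketched as follows. All names and inputs are hypothetical; this is only a schematic of the "maximum gap" rule, not the authors' implementation:

```python
def posit_delta(yoto_curve, reference_models):
    """Schematic Delta(h_lambda): largest accuracy gap by which a separately
    trained model beats the YOTO trade-off at a comparable fairness level.
    yoto_curve maps fairness violation -> YOTO accuracy; reference_models is a
    list of (fairness_violation, accuracy) pairs for the extra models."""
    levels = sorted(yoto_curve)
    delta = 0.0
    for fair, acc in reference_models:
        # compare each reference model against the nearest YOTO fairness level
        nearest = min(levels, key=lambda f: abs(f - fair))
        delta = max(delta, acc - yoto_curve[nearest])  # only positive gaps matter
    return delta

# Toy example: one of the two reference models beats the YOTO curve by 0.01.
yoto = {0.01: 0.80, 0.05: 0.85, 0.10: 0.88}
models = [(0.05, 0.86), (0.10, 0.87)]
print(posit_delta(yoto, models))  # Delta is roughly 0.01
```

Shifting the lower CIs down by this $\Delta$ then places all reference trade-offs above the lower CIs, as the rebuttal describes.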
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Dual Critic Reinforcement Learning under Partial Observability
Accept (poster)
Summary: This paper proposes a dual-critic architecture for the asymmetric RL regime where a policy deployable in partially observable settings is trained under full observability. The proposal is meant to be an improved version of Unbiased Asymmetric Actor Critic that, through its two critics, improves learning efficiency while reducing variance. This is demonstrated through theory and experiments in MiniGrid and MiniWorld. Strengths: **Originality**: The authors provide a new algorithm in the context of asymmetric RL. The contribution over related works is accurate and clearly stated. **Quality**: The theory supporting the work appears sound. The experiments seem solid and are conducted across a number of partially observable domains, including both discrete gridworlds and continuous control environments. The results show that the method outperforms existing methods in terms of sample efficiency, sometimes by a substantial margin. **Clarity**: The writing is mainly clear, particularly the motivation and the exposition of related works. A few technical details could be clarified, however. **Significance**: Deep RL for POMDP problems remains a significant challenge and new algorithms can have a substantial impact. Asymmetric RL has emerged as a promising paradigm for such problems and so I believe the work targets an important research topic. Weaknesses: **Originality**: Ultimately, the proposed method is somewhat derivative of two other well-known methods. The main contribution is the dual critic architecture, which is a linear combination of a typical recurrent critic $V(h)$ and the unbiased asymmetric critic $V(h,s)$. On its own, this would not be a major issue since the method is shown to perform well, but I believe the insights behind this approach are also discussed in the original Unbiased Asymmetric Actor Critic paper. **Quality**: While the theorems appear correct, I wonder how relevant they are to the topic at hand.
To my knowledge, what we *really* care about is the variance of the policy gradient Monte-Carlo estimator --- not the value estimator. The former is important since it plays a big role in the sample efficiency of policy learning. While the variance of the value estimator may indicate the amount of data required to train a reasonable value function, this is again only important insofar as reducing policy gradient variance through its role as a baseline. There also needs to be a discussion on *why* we don't always want to reduce variance all the way to zero by setting $\beta = 1$ (which is equivalent to the simple recurrent critic). What value does increasing the variance (in order to incorporate state information) have? I believe this is not totally well understood in the literature either, but the topic should at least be brought up to clarify the point above. An investigation that can properly answer this question would be a strong contribution. Regarding the implementation, it seems the choice of when to clip to $\beta = 0$ is very important but I don't understand why the particular heuristic in the paper was chosen. I'm not sure why the motivation of preventing large updates when the advantage is non-positive is relevant. The results of the experiments show a solid advantage but it's not totally clear where the benefits are coming from (likely because I didn't understand the motivation behind the clipping of $\beta$). I think the experiments would also benefit from stronger baselines such as the Believer algorithm cited in the related work, and partially observable tasks that involve more than simple navigation. **Clarity**: As previously mentioned, the reason why the state-based critic $V(h,s)$ is beneficial could be better explained, along with the heuristic for when to clip $\beta = 0$. Perhaps a toy example could be useful for this. I'm also unsure what the significance of the interchange method is. 
**Significance**: I believe there isn't enough insight into the proposed approach to justify its use compared to Recurrent PPO or Unbiased Asymmetric Actor-Critic, which are somewhat better understood. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Do you know how the dual critic architecture affects the variance of the policy gradient estimator? 2. How was the clipping method for $\beta$ chosen? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We provide the following clarifications in response to your comments. > Weakness 1: About the insight. > > ... the proposed method is somewhat derivative of two other well-known methods. ... there isn't enough insight ... **In this study, we address the issue of high variance stemming from an over-reliance on state information. We demonstrate that state information is not always advantageous for training purposes.** In many environments, relying solely on partial observations can yield superior results compared to using state information. **Incorporating state information can offer significant advantages for training, as it often encapsulates high-level features that improve the learning of $V(h,s)$ compared to $V(h)$.** For example, in an extreme scenario where the state information $s$ represents the true value of $V(h)$, the agent can effortlessly infer this true value through $V(h,s)$. Specifically, in the Empty Navigation task, when state information indicates the agent's relative distance to the goal, the value function learning process is significantly streamlined. In addition, the reward function is defined based on the state rather than the observation. Consequently, learning for the standard critic necessitates implicit consideration of the distribution $b(s|h)$. In contrast, **the oracle critic bypasses this requirement by directly learning the reward**, as the distribution is explicitly provided through the sampling operation during the policy gradient calculation. **However, state information is not always advantageous for training.** When the state information contains significant noise, **adding excessive noisy features to the input can hinder learning the value function.** In unfamiliar environments, whether state information genuinely supports the learning process remains to be determined. To address this issue, we introduce a DCRL framework that leverages the strengths of two critics. 
**We provide theoretical evidence indicating that DCRL mitigates the learning variance while maintaining unbiasedness. Moreover, our approach incorporates a dynamic weighting mechanism, which can be interpreted as a form of lower-bound-soft-Q-learning, distinguishing it from conventional linear weighting methods.** Compared to Recurrent AC and UAAC methods, our proposed DCRL demonstrates substantial improvements. > Weakness 2 and Question 1: About the variance reduction in policy gradient. > > ... the variance of the policy gradient Monte-Carlo estimator ... how the dual critic architecture affects the variance of the policy gradient estimator? We fully understand your concerns regarding the variance of the policy gradient Monte Carlo estimator. Traditional Actor-Critic methods employ the reward-to-go term to compute the policy gradient. Nonetheless, **the variance of the reward-to-go tends to be high**, which has led researchers to substitute this term with the Q-value to mitigate variance in the standard policy gradient method, i.e., $Q(h_t, a_t) \nabla_{\theta}\log\pi_{\theta}(a_{t}|h_{t})$. Similarly, in our study, we focus on the policy $\pi(a|h)$, which is linked to **a unique $Q(h, a)$ but can also correspond to multiple $Q(h, s, a)$ values** due to the non-one-to-one relationship between state and history. **Consequently, the variance of the policy gradient derived from $Q(h, a)$ is lower than that obtained from $Q(h, s, a)$.** As stated in Theorem 1 of our paper, $Q_{\mathrm{dual}}(h, s, a)$ in DCRL achieves a reduction in this variance compared to $Q(h, s, a)$ by integrating the standard critic, thereby decreasing the variance of the policy gradient. > Weakness 3 and Question 2: About the $\beta$. > > ... reduce variance all the way to zero by setting $\beta=1$ ... when to clip $\beta=0$ ... the motivation behind the clipping of $\beta$ ... How was the clipping method for $\beta$ chosen? 
In response to Weakness 1, we elaborated on the advantages of employing the oracle critic $V(h, s)$, which incorporates state information for policy training. The motivation behind the clipping mechanism is to leverage the standard critic when opportunities for policy enhancement remain following updates from the oracle critic. **This approach can be interpreted as a form of lower-bound soft Q-learning, wherein only samples exhibiting a positive advantage contribute meaningful insights into the optimal soft Q-value.** To validate this concept, we implemented a simplified weighting method that uses fixed ratios to weight the two critics (referred to as the DCRL No Clip Version). As illustrated in Figure 1 of the rebuttal PDF, this method shows performance improvements in only certain environments, confirming that the dynamic weighting mechanism of DCRL is crucial. To evaluate the impact of the parameter $\beta$, we conducted ablation experiments testing values of $\beta \in \\{1/5, 1/3, 1/2, 2/3, 4/5\\}$. **Performance is notably stable across these values of $\beta$, which is primarily attributable to the characteristics inherent in lower-bound soft Q-learning.** > Weakness 4: > > ... the experiments would also benefit from stronger baselines... Thank you for this valuable suggestion. We have followed your advice and considered the commonly used POMDP baseline DreamerV3. We compared the performance of DCRL, DreamerV3, and additional baselines across eight tasks in the MiniGrid environment. Figure 3 of the rebuttal PDF illustrates the average scores from five training seeds over $1e5$ steps for each method. **The results indicate that DCRL consistently outperforms DreamerV3 across all eight tasks.** Importantly, DCRL requires fewer computational resources and less training time than DreamerV3. Thank you for your insightful and thoughtful reviews, which have significantly improved the quality of our paper. 
If you find that these concerns have been resolved, we would appreciate it if you would reconsider your ratings of our paper. --- Rebuttal 2: Title: Have we addressed your concerns? Comment: Thanks again for your time and effort in reviewing our paper! As the discussion period is coming to a close, we would like to know if we have resolved your concerns expressed in the original reviews. We remain open to further feedback and are committed to implementing additional improvements if necessary. If you find that these concerns have been resolved, we would appreciate it if you would consider reflecting this in your rating of our paper! --- Rebuttal Comment 2.1: Comment: Thank you to the authors for clarifying my questions. I appreciate your clarification on the connection between the policy gradient variance and the value function variance. The paper should better distinguish these two concepts -- "variance" on its own is typically associated with the policy gradient estimator and it's important that these concepts not be conflated. Your explanation of how they are mutually related helps allay my concerns. I also appreciate the insight behind why we would sometimes opt for the higher variance $V(h, s)$ critic. **However, I'm not sure I'm convinced by the explanation of the dynamic weighting mechanism.** As I understand it, there are some situations where we want the $V(h, s)$ critic to have higher weight and others when we want $V(h)$ to have higher weight -- presumably $V(h)$ ought to be weighted higher near the end of training when we've converged on a good $V(h)$ critic and $V(h, s)$ unnecessarily adds (irreducible) sampling variance. The argument that the DCRL critic has lower variance and is unbiased compared to $V(h)$ is misleading since that would suggest we always want to choose $\beta = 1$. In reality, the question of *when* each critic is more effective is the key question here. That leads me to my next point. 
The authors implicitly claim that the dynamic weighting is an effective way to decide when to apply the DCRL critic vs the $V(h)$ critic. To me, the explanation in the paper (and in the rebuttal) is a bit brief and lacking -- it still doesn't quite answer why this works so well. Why is the sign of the advantage $\delta_\phi(h_t, a_t)$ a good determinant of how we should weight the two critics? Lastly, I appreciate the inclusion of Dreamer-v3 as a baseline that does not utilize state information. I still think including additional baselines that do leverage state information would be an improvement. --- Reply to Comment 2.1.1: Comment: Thank you very much for your response. We are happy to hear that we have addressed some of your concerns. We will incorporate your suggestions regarding the concepts of variance and provide a more precise description in the revised manuscript. For your remaining questions, we provide point-by-point responses as follows: > Q1: About the dynamic weighting mechanism. We apologize for the misunderstandings that have arisen. To clarify, our claim is that DCRL reduces variance compared to UAAC (which relies solely on $V(h,s)$) and is unbiased with respect to $V(h)$ in expectation, leading to an unbiased policy gradient. **However, this does not imply that we always want to choose $\beta=1$.** When $\beta=1$, state information cannot be utilized. In fact, $\beta=1$ is not used in either the derivations or experiments of DCRL. As shown in Figure 1 of the rebuttal PDF, we tested values of $\beta$ in $\\{1/5, 1/3, 1/2, 2/3, 4/5\\}$ and selected $\beta=1/2$ for reporting our results in the paper. We agree that the key question is when each critic is more effective. Our motivation is to enhance the balance between the two critics. DCRL achieves this by dynamically adjusting the weight rather than always setting $\beta$ to $1$ or $0$. Specifically, the adjustment mechanism is based on the advantage $\delta(h, a)$. 
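For concreteness, the advantage-based adjustment just described could be sketched as follows (our own minimal illustration; the function and argument names are hypothetical, not from the paper):

```python
def dual_value(v_h, v_hs, adv_h, beta=0.5):
    """Convex combination of the standard critic V(h) and the oracle
    critic V(h, s), with the advantage-based clipping: when the
    standard-critic advantage delta(h, a) is non-positive, beta is
    clipped to 0 so that only the oracle critic drives the update."""
    beta_t = beta if adv_h > 0 else 0.0
    return beta_t * v_h + (1.0 - beta_t) * v_hs
```

With a positive advantage both critics contribute (`dual_value(1.0, 3.0, adv_h=0.5)` gives `2.0`); with a non-positive advantage the standard critic is excluded and the oracle value is returned unchanged (`dual_value(1.0, 3.0, adv_h=-0.5)` gives `3.0`).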
This approach aims to reduce variance while preserving the acceleration provided by state information. From the perspective of lower-bound-soft-Q-learning, **the policy gradient induced by $V(h)$ supports the lower bound of the optimal Q-values only when the advantage is positive.** This allows the policy to update towards higher returns without altering the update direction from $V(h, s)$. Conversely, when the advantage is non-positive, $V(h)$ does not provide useful information about the optimal Q-values and may interfere with updates from $V(h, s)$, thus diminishing the benefit of state information. Therefore, $V(h)$ is excluded from training in DCRL under these conditions. As shown in Figure 1 of the rebuttal PDF, the results of the DCRL No Clip Version experiment support this conclusion. We also agree that $V(h)$ ought to be weighted higher near the end of training. However, this presents a challenge: accurately identifying the later stages of training in an unfamiliar environment can be difficult, and mismanaging this weighting may lead to suboptimal solutions. Therefore, we adopt a more intuitive and straightforward approach: **applying $V(h)$ whenever it reduces variance without disrupting the updates driven by state information.** Scheduling a higher weight on $V(h)$ near the end of training is a direction for our future work. > Q2: About additional baselines that leverage state information. Thank you for your valuable feedback. Our paper has included three recent relevant baselines that utilize state information: UAAC, AAC, and Oracle Guiding. Following your insightful suggestions, we are conducting experiments with Believer. As Believer focuses on representation models instead of learning frameworks like DCRL, it involves collecting random samples to pre-train the representation model for $5000$ epochs, which considerably extends training time. 
We will consider incorporating additional baselines in the revised version of our manuscript as per your recommendation. We sincerely appreciate the time and effort you have invested in these discussions! Please feel free to let us know if you have any further concerns.
Summary: The authors propose a method that uses a weighted dual-critic structure for tackling POMDPs. The dual-critic structure consists of one critic that receives global state information and another that receives only partial observations of the state. The authors provide some simple yet concrete analytical results showing that the method induces unbiased critic estimates and can reduce variance compared to the Unbiased Asymmetric Actor-Critic. Strengths: The paper is well written and easy to follow. The motivation for the approach is well laid out, and although the focus of the paper is to provide a new methodology supported by experiments, the paper includes some light-touch analysis that supports the overall ideas well. The idea is quite simple and intuitive; the presentation is nevertheless very clean, with the authors having laid out both the rationale and benefits of the method in a very clear way. Weaknesses: 1. While I understand the reason for doing so, such a strategy introduces a harsh jump-discontinuity in the dual advantage and critic - this may lead to numerical instabilities. The paper would benefit from a discussion on this issue. 2. The method also seems related to centralised training and decentralised execution in multi-agent reinforcement learning. There, the critic is trained with global information while the actor is trained using only local inputs. It would be a nice connection to make if the authors could include some discussion on the degenerate case of CT-DE with N=1. 3. Notwithstanding the discussions on why the method performs well in the chosen environments (lines 297-314), I would have liked to have seen some discussion and analysis of when one could expect this method to be most useful. Indeed, it's conceivable that including more information may slow training even if one can expect improved asymptotic performance. 
There may also be situations where we do not have access to the full global state even during training. I think the paper would benefit from such discussions. Technical Quality: 3 Clarity: 4 Questions for Authors: Q1. The authors state that $V^\pi(h)$ has nonzero variance. Using the definition of the dual value function, setting $\beta\equiv 1$ implies that $V_{\rm dual}^\pi(h,s)=V^\pi(h)$. However, setting $\beta=1$ in equation 8 implies that $Var_{s|h}[V_{\rm dual}^\pi(h,s)]=0$. Can the authors explain this? Q2. How does the method perform over a range of values of $\beta$? Q3. What environments should we expect the method to do well in (and not so well)? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: L1. The experiment section does not contain an ablation on the value of $\beta$. Without this, it is difficult to understand the range of values of $\beta$ for which the method performs well. L2. The experiment environments are quite simple - it would be useful to see how the method performs on complex benchmark environments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions. Below, we provide detailed responses to each point. > Weakness 1: > > ... harsh jump-discontinuity ... numerical instabilities. We fully understand your concerns. **The numerical instability arises primarily from the variance of $V(h, s)$, since the expectation of $V(h, s)$ under $b(s|h)$ equals $V(h)$.** Figure 8 in the appendix illustrates that our DCRL demonstrates a more stable value distribution and experiences fewer numerical instabilities than UAAC, attributable to its reduced variance. > Weakness 2: > > ... degenerate case of ct-de with N=1. We appreciate the valuable insights you have provided. The degenerate case of CT-DE when $N=1$ corresponds to a standard POMDP. In this context, the well-known multi-agent actor-critic algorithm COMA utilizes both state information ($s$) and history ($h$) as inputs for the critic during centralized training. Conversely, during execution, the actor relies solely on history as its input. **This situation can be interpreted as a specific instance of the UAAC framework.** We have incorporated relevant discussions on this topic in the revised manuscript. > Weakness 3.1 and Question 3: > > ... when one could expect this method to be most useful. > > What environments should we expect the method to do well in (and not so well)? **Incorporating state information can offer significant advantages for training, as it often encapsulates high-level features that improve the learning of $V(h,s)$ compared to $V(h)$.** For example, in an extreme scenario where the state information $s$ represents the true value of $V(h)$, the agent can effortlessly infer this true value through $V(h,s)$. Specifically, in the context of the Empty Navigation task, when the state information indicates the agent's relative distance to the goal, the value function learning process is significantly streamlined. 
Furthermore, the reward function is defined based on the state rather than the observation. Consequently, the standard critic's learning necessitates implicit consideration of the distribution $b(s|h)$. In contrast, **the oracle critic bypasses this requirement by directly learning the reward**, as the distribution is explicitly provided through the sampling operation during the policy gradient calculation. **However, state information does not always enhance training.** When the state information contains significant noise, **introducing excessively noisy features to the input can hinder learning the value function**. In an unfamiliar environment, whether state information genuinely supports the learning process cannot be determined in advance. The motivation behind this work is that state information may be accessible during training in many scenarios, yet introducing it could potentially lead to bias or variance. **In most cases, state information offers higher-level features that assist agents in decision-making; in these situations, DCRL enhances the process by further reducing variance. Conversely, when state information and partial observations correspond one-to-one, the oracle critic exhibits no variance, and DCRL offers no additional benefit.** > Weakness 3.2 > > ... more information may slow training ... do not have access to the full global state ... The scope of our study encompasses situations where state information is accessible during training. This assumption is prevalent in previous works and reflects common real-life scenarios. Regarding the potential issue of increased training duration due to the additional information, we provide the runtime in seconds for each method below. DCRL requires more time to execute than the other baselines, primarily because it trains an additional critic network. 
Nevertheless, **DCRL demonstrates significantly higher sample efficiency and performs better with fewer environment steps, thus compensating for the slower wall-clock time.** Table 1: Comparison of DCRL and other baselines in MiniGrid. The first number represents the mean ± std of performance, while the second number indicates the runtime in seconds.

| | A2C ($1e5$ frames) | A2C ($1e6$ frames) | PPO ($1e5$ frames) | PPO ($1e6$ frames) |
| :------------------------------: | :--------------------: | :---------------------: | :--------------------: | :---------------------: |
| Recurrent Actor-Critic | 0.32±0.13, 108.41s | 0.28±0.09, 1026.31s | 0.26±0.07, 111.90s | 0.52±0.03, 1130.45s |
| Asymmetric Actor-Critic | 0.26±0.10, 106.49s | 0.32±0.14, 1000.69s | 0.27±0.08, 110.21s | 0.60±0.06, 1107.77s |
| Unbiased Asymmetric Actor-Critic | 0.31±0.08, 108.89s | 0.39±0.09, 1066.49s | 0.28±0.10, 114.35s | 0.59±0.08, 1160.74s |
| **DCRL (Ours)** | **0.63±0.07, 130.12s** | **0.89±0.01, 1224.29s** | **0.47±0.08, 132.70s** | **0.84±0.03, 1331.69s** |

> Question 1: > > ... $V^{\pi}(h)$ has nonzero variance. We apologize for the confusion caused by a typo in our paper. In line 155, we clarify that $V(h,s)$ possesses nonzero variance, while $V(h)$ has zero variance. > Question 2: > > How does the method perform over a range of values of $\beta$? To address your concerns, we conducted ablation experiments on the $\beta$ parameter ($\beta \in \\{1/5, 1/3, 1/2, 2/3, 4/5\\}$). Figure 1 of the rebuttal PDF illustrates that **performance is notably stable across these values of $\beta$**. This stability primarily arises from the characteristics of lower-bound-soft-Q-learning. Consequently, we selected $\beta = 1/2$ for reporting our results in the paper, as it consistently exhibits superior performance. Thank you for your thoughtful review and insights into our work, which have significantly improved the quality of our paper. 
If you find that these concerns have been resolved, we would appreciate it if you would reconsider your ratings of our paper. --- Rebuttal 2: Title: Reviewer Response Comment: I would like to thank the authors for their detailed responses to my comments. I would also like to congratulate the authors for having produced useful additional results, particularly the ablation on $\beta$, in the short space of time. My opinion is that overall the paper has merit and the additional analyses have allayed some of my concerns. Given that parts of the main insights already exist, albeit within the multi-agent reinforcement learning literature, and that the corresponding methodology may be applied in the degenerate single-agent case, the contribution is somewhat constrained. In MARL, there are also techniques to learn how best to interpolate or switch between using a value function with global state inputs and one with local inputs, which are worth mentioning, e.g. [1]. This is an important consideration, as it is likely not known a priori which environments would benefit from global information during training should this information be available. For this reason, I will keep my score as is. **Other points** * I recommend the authors use a different notation for the value functions $V^\pi(h,s)$ and $V^\pi(h)$, and similarly for the action-value functions. * I think my earlier point about jump-discontinuities can be addressed by showing that $V^\pi_{\rm dual}$ is a smooth function w.r.t. the update parameters of $V^\pi(h,s)$ and $V^\pi(h)$ whenever the latter two functions are differentiable, and showing the corresponding statement for $Q^\pi_{\rm dual}$. [1] Mguni, David Henry, et al. "Mansa: Learning fast and slow in multi-agent systems." International Conference on Machine Learning. PMLR, 2023. --- Rebuttal 3: Comment: Thank you for your prompt response. 
We also appreciate your recognition of our supplementary experiments and are glad to have addressed some of your concerns. We will incorporate your suggestions regarding notation and jump discontinuities in our revision of the manuscript. For your remaining questions, we provide point-by-point responses: > Q1: Regarding the relevance to MARL problems. We apologize for the misunderstandings that have arisen. The setup presented in our paper differs from the MARL problem formulation. In MARL, the optimization objective is to maximize the reward under the global states, optimizing each agent accordingly. These methods mostly presuppose that the combination of actions optimized under partial observations leads to optimality under global states. However, this assumption does not always hold, even when $N=1$. The joint-action learner cannot be decomposed into independent actors. The policy that maximizes rewards under partial observations does not necessarily align with the policy that maximizes rewards under global states, i.e., $Q(h, a) \neq Q(s, a)$ [DRQN2015Arxiv]. In contrast to MARL, the optimization objective in the POMDP problem discussed in this paper does not rely on this assumption, focusing instead on maximizing rewards under partial observations. Therefore, the applicability of MARL methods in POMDP contexts is limited compared to our DCRL. > Q2: Regarding the effectiveness of state information. We fully understand your concerns. In POMDPs, rewards are directly defined based on state information rather than on observations. Consequently, learning the value function $V(h,s)$ with the reward as the target is typically simpler than learning $V(h)$. This characteristic makes state information effective in most practical applications, a conclusion that numerous previous studies have empirically validated [UAAC2022AAMAS, Suphx2020Arxiv, PerfectDou2022NIPS, Honor-of-Kings2020NIPS]. 
Additionally, our DCRL leverages the advantages of state information while reducing variance, significantly enhancing training efficiency. Extensive experiments demonstrate that our DCRL consistently outperforms methods that do not incorporate state information across all tested tasks. Thank you very much for helping us clarify our method. Please feel free to let us know if you have any further concerns. **Reference** - [DRQN2015Arxiv] Matthew Hausknecht, et al. Deep recurrent Q-learning for partially observable MDPs. arXiv, 2015. - [UAAC2022AAMAS] Andrea Baisero, et al. Unbiased asymmetric reinforcement learning under partial observability. In AAMAS, pages 44–52, 2022. - [Suphx2020Arxiv] Junjie Li, et al. Suphx: Mastering mahjong with deep reinforcement learning. arXiv, 2020. - [PerfectDou2022NIPS] Guan Yang, et al. PerfectDou: Dominating Doudizhu with perfect information distillation. In NIPS, pages 34954–34965, 2022. - [Honor-of-Kings2020NIPS] Deheng Ye, et al. Towards playing full MOBA games with deep reinforcement learning. In NIPS, pages 621–632, 2020. --- Rebuttal 4: Title: Re: Comment: Thanks again to the authors for their detailed response. My comment about the relationship with MARL is that the centralised training with decentralised execution (CT-DE) paradigm closely resembles the central idea being presented, albeit for a distributed setting with a different need to aggregate local information. In particular, the methods that stem from CT-DE degenerate into something similar when considering the case of $N=1$. While it is true that in MARL one has to consider how best to promote the so-called individual-global-max condition, in the degenerate $N=1$ case this may not be a concern, I believe, as the local and joint policies coincide (even if the critic is centralised). 
Note also that the solution to a POMDP is, in general, different from the solution when the agent is endowed with the missing data (which recovers the MDP setup) - this is not specific to the multi-agent setting. I would like to thank the authors for their response to point 2 - this is a helpful clarification. --- Rebuttal 5: Comment: Thank you very much for your feedback! We are glad to have addressed some of your concerns. **Indeed, there are situations where the local and joint policies coincide; this holds true when the state encompasses the history (rather than just the observation).** The centralized critic is then equivalent to using $V(h, s)$, which theoretically ensures that the policy is unbiased and converges to the optimal solution. Since MARL primarily deals with cooperation and competition among multiple agents, partial observability often stems from other agents; when the problem is reduced to $N=1$, most tasks fall into this situation. **In POMDPs, it is more common for the state not to fully include the history**, which is equivalent to using $V(s)$. The state alone does not typically reveal whether the agent has previously collected the necessary information and thus cannot adequately indicate whether the current state is favorable or adverse. **In this case, the policy is biased and may not converge to the optimal solution.** The UAAC paper proves this in Theorem 4.2 and includes a toy example illustrating the concept in Appendix B.2. We appreciate your constructive feedback and apologize for the lack of clarity in our description. We will provide a more detailed explanation in the revised version of our manuscript.
Summary: The paper presents Dual Critic Reinforcement Learning (DCRL), a framework designed to handle partial observability in RL. Traditional RL methods often struggle with high variance and instability when relying on full-state information. DCRL addresses this by integrating two critics: an oracle critic with access to complete state information and a standard critic operating within a partially observable context. This dual approach aims to improve efficiency and reduce variance during training, leading to optimized performance in online environments. Strengths: - This paper is well structured and easy to read. The authors provide thorough analysis and experiment results. - The paper provides theoretical proof that DCRL reduces learning variance while maintaining unbiasedness. The dual value function is defined as a weighted combination of the oracle and standard value functions, and it is proven to be an unbiased estimate of the true value function with lower variance compared to using the oracle critic alone. - The effectiveness of DCRL is validated through extensive experiments in the Box2D and Box3D environments. Results show that DCRL significantly outperforms baseline methods, including Recurrent Actor-Critic and Asymmetric Actor-Critic frameworks, across various tasks in the MiniGrid and MiniWorld environments. DCRL achieves faster convergence and higher returns, particularly in complex and high-uncertainty scenarios. Weaknesses: Refer to Questions. Technical Quality: 3 Clarity: 4 Questions for Authors: - There is a typo in the definition of $R(h,a)$ in line 122. - Can you explain in more detail how the weighting mechanism between the oracle critic and the standard critic is designed to reduce variance? Are there specific conditions or thresholds for adjusting the weights? - Is it necessary to ensure the alignment of $V(s)$ and $V(h,s)$ in DCRL? If yes, how can it be ensured? 
Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging words and constructive feedback. We appreciate your time reviewing our paper and provide point-by-point responses to your comments below. > Question 1: > > There is a typo in the definition of $R(h,a)$ in line 122. We greatly appreciate your pointing out this issue. We have corrected the typo in the revised version. > Question 2: > > Can you explain in more detail how the weighting mechanism between the oracle critic and the standard critic is designed to reduce variance? Are there specific conditions or thresholds for adjusting the weights? To address the high variance issue arising from an over-reliance on state information, we propose a weighting mechanism that leverages the benefits of the two critics. As demonstrated in Theorem 2, the zero-variance standard critic can effectively mitigate the variance introduced by the oracle critic, thereby reducing overall variance. **The weight adjustment is governed by the advantage function of the standard critic.** When the advantage function yields a non-positive value, $\beta$ is clipped to $0$. Conversely, when the advantage is positive, $\beta$ activates the standard critic to accelerate training and reduce variance. Theorem 5 establishes that this weighting mechanism is a form of lower-bound-soft-Q-learning. This approach significantly enhances the performance of DCRL while maintaining high stability with respect to hyperparameters. > Question 3: > > Is it necessary to ensure the alignment of $V(s)$ and $V(h,s)$ in DCRL? If yes, how can it be ensured? Thank you for this question. The answer is no: it is unnecessary to ensure the alignment of $V(s)$ and $V(h,s)$. In the POMDP setting, the policy operates based on partial observations, resulting in a standard policy gradient derived from $V(h)$. Maintaining the policy gradient's unbiasedness is crucial when incorporating state information to enhance training. 
**This stipulates that the value function must be unbiased with respect to $V(h)$, regardless of its relationship to $V(s)$.** Our DCRL framework emphasizes two critics: $V(h)$ and $V(h,s)$. It has been established that $V(h,s)$ is unbiased with respect to $V(h)$. Furthermore, Theorem 1 demonstrates that $V_{dual}(h,s)$ in the DCRL framework is also unbiased. Thank you very much for recognizing our work! Your time and effort are greatly appreciated! --- Rebuttal Comment 1.1: Title: Response Comment: Thank you to the authors for answering my questions! I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your support! We appreciate the time and effort you've invested in reviewing our manuscript!
Summary: In this work, the authors aim to learn policies in the POMDP setting. They make use of two critics - one that has privileged state information and one that does not and uses only the history - creating a dual value function that is a convex combination of these two value functions. They use this dual value function for policy optimization. They perform experiments in the Box2D and Box3D environments. Strengths: The authors use the bias-variance tradeoff well. Using the standard critic is biased, but using only the oracle critic has high variance. They come up with an estimator with low bias as well as low variance and empirically justify this in their experiments. Weaknesses: (1) Even assuming that you have the state information during training is a strong assumption. You might not always have that. A lot of works do not make that assumption and perform pretty well. (2) There should have been comparisons to commonly used POMDP works like Dreamer [] and others. But the comparisons were made only with (in other words) different variants of the algorithm. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) Do you think that in most of the real-world settings you described in the introduction, you will have access to the state information for training the oracle critic? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed their limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your constructive comments and suggestions. We have revised our paper accordingly. Below, we present detailed responses to each point. > Weakness 1 and Question 1: > > Even assuming that you have the state information during training is a strong assumption. You might not always have that. A lot of works do not make that assumption and perform pretty well. > > Do you think that in most of the real-world settings you described in the introduction, you will have access to the state information for training the oracle critic? We fully understand your concerns. To address them, we included a more detailed explanation of the assumptions regarding state information accessibility and provided a representative **real-world scenario (autonomous driving)** of the POMDP problem. Incorporating state information during the training process is a common assumption adopted by many previous works. In POMDP problems such as MiniGrid and MiniWorld, researchers often use state information to assist agents during training [UAAC2022AAMAS, Believer2023ICML]. In various partially observable games, including **card games (e.g., Mahjong, DouDiZhu) and MOBA games (e.g., Honor of Kings)**, we can readily obtain the state information (opponents' hands or attributes) during training. Consequently, researchers often leverage this information to enhance agent training, resulting in significant performance improvements [Suphx2020Arxiv, PerfectDou2022NIPS, Honor-of-Kings2020NIPS]. In autonomous driving applications, due to the high costs of specific sensors and the challenges of acquiring high-definition maps in some areas, existing solutions mainly rely on end-to-end visual models [Tesla, PhiGent]. 
During the training phase, autonomous driving companies typically equip a small fleet of training vehicles with expensive sensors and **use the collected high-definition maps and other information as state information to facilitate the training of these end-to-end visual models**. This approach aids in constructing 3D environments, thereby enhancing the performance of autonomous driving systems [BEV2023TPAMI]. > Weakness 2: > > There should have been comparisons to commonly used POMDP works like Dreamer [] and others. But the comparisons were made only with (in some words) different variants of the algorithm. Thank you for this valuable suggestion. We have followed your advice and considered the commonly used POMDP baseline DreamerV3 [DreamerV32023Arxiv].  We compared the performance of DCRL, DreamerV3, and additional baselines across eight tasks in the MiniGrid environment. Figure 3 of the rebuttal PDF illustrates the average scores from five training seeds over $1e5$ steps for each method. **The results indicate that DCRL consistently outperforms DreamerV3 across all eight tasks.** Importantly, DCRL requires fewer computational resources and less training time than DreamerV3. Thank you for your thought-provoking and discussion-worthy suggestions, which have significantly improved the quality of our paper. If you find that these concerns have been resolved, we would appreciate it if you would consider reflecting this in your rating of our paper. **References** - [UAAC2022AAMAS] Andrea Baisero, et al. Unbiased asymmetric reinforcement learning under partial observability. In AAMAS, pages 44–52, 2022. - [Believer2023ICML] Andrew Wang, et al. Learning belief representations for partially observable deep RL. In ICML, pages 35970–35988, 2023. - [Suphx2020Arxiv] Junjie Li, et al. Suphx: Mastering mahjong with deep reinforcement learning. Arxiv, 2020. - [PerfectDou2022NIPS] Guan Yang, et al. PerfectDou: Dominating Doudizhu with perfect information distillation. 
In NIPS, pages 34954–34965, 2022. - [Honor-of-Kings2020NIPS] Deheng Ye, et al. Towards playing full MOBA games with deep reinforcement learning. In NIPS, pages 621-632, 2020. - [Tesla] Tesla AI Day. [Online]. Available: https://www.youtube.com/watch?v=j0z4FweCy4M - [PhiGent] PhiGent: Technical Roadmap. [Online]. Available: https://43.132.128.84/coreTechnology - [BEV2023TPAMI] Hongyang Li, et al. Delving into the devils of bird's-eye-view perception: A review, evaluation and recipe. IEEE TPAMI, 2023. - [DreamerV32023Arxiv] Hafner Danijar, et al. Mastering diverse domains through world models. Arxiv, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response and additional experiments. I am increasing my score based on the clarifications. I agree that there might be environments where obtaining state information is possible and you can use that state observation during training, but that would severely limit your work in terms of applicability to any POMDP setting. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the time and effort you have invested in the discussions! The above inspiring discussion has greatly enhanced the quality of our paper. The scope of our study encompasses situations where state information is accessible during training, which is an essential topic in POMDPs with many applications like autonomous driving. For problem settings without access to state information, we can leverage a pre-trained teacher network to provide labels, thereby facilitating training without compromising deployment. We will incorporate this suggestion and provide a more comprehensive discussion in the revised manuscript. --Best wishes from all the authors!
Rebuttal 1: Rebuttal: We appreciate all the reviewers for their insightful and constructive feedback. In response to these helpful comments, we conducted supplementary experiments in the MiniGrid environment (see the attached PDF): 1. **Figure 1**: Ablation studies on different values of $\beta \in \\{1/5, 1/3, 1/2, 2/3, 4/5\\}$ and a simple weighting method (referred to as DCRL No Clip Version) with fixed ratios to weight the two critics. 2. **Figure 2**: Comparison of two different dual architectures: 1) $V(h)$ and $V(h,s)$ (our DCRL), and 2) $V(s)$ and $V(h,s)$ (referred to as DCRL State Version). 3. **Figure 3**: Comparison with the commonly used POMDP baseline, DreamerV3. These experiments further demonstrate the following: 1. The dynamic weighting mechanism of DCRL is crucial, and the choice of $\beta$ exhibits notable stability. 2. Our DCRL framework outperforms the DCRL State Version due to its unbiased characteristic, which contrasts with the biased nature of the DCRL State Version. 3. DCRL consistently surpasses DreamerV3 across all tasks, confirming its superior performance. We hope these additional experimental results provide further evidence of the effectiveness of our DCRL framework. Moreover, we made several revisions and improvements to our paper: 1. We included a more detailed explanation of the assumptions regarding state information accessibility in the introduction section. 2. We addressed the degenerate case of CTDE when $N=1$ in the related work section. 3. We added additional toy examples in the appendix to illustrate when DCRL is effective and when it may fail. Furthermore, we addressed each reviewer's questions or concerns with detailed point-by-point responses. As a result, the quality of our paper has markedly improved. We are profoundly grateful for the reviewers' contributions to our paper and sincerely hope that our revisions have adequately addressed your concerns. 
We remain open to further feedback and are committed to implementing additional improvements if necessary. Pdf: /pdf/017ddc0f7792c5fb167d29e45415a53e7501001f.pdf
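The dual-critic idea summarized in this rebuttal - a convex combination of the history-only critic V(h) and the history+state critic V(h, s), weighted by β - can be sketched in a few lines. This is a minimal illustration with assumed names (`v_h`, `v_hs`, `beta`); it is not the authors' exact DCRL update, which additionally uses a dynamic (clipped) weighting rather than the fixed ratio shown here:

```python
import numpy as np

def dual_value(v_h, v_hs, beta=0.5):
    """Convex combination of the history-only critic V(h) and the
    history+state critic V(h, s); beta is the mixing coefficient the
    rebuttal ablates over {1/5, 1/3, 1/2, 2/3, 4/5}."""
    return beta * np.asarray(v_h, dtype=float) + (1.0 - beta) * np.asarray(v_hs, dtype=float)

def advantage(returns, v_h, v_hs, beta=0.5):
    """Advantage against the dual baseline, as would feed an A2C-style
    policy-gradient update (illustrative only)."""
    return np.asarray(returns, dtype=float) - dual_value(v_h, v_hs, beta)
```

With `beta = 1/2` (the value the authors report), `dual_value(2.0, 4.0)` is simply the midpoint of the two estimates; the fixed-ratio variant here corresponds to the "DCRL No Clip Version" baseline from the ablation, not to full DCRL.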
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a dual critic architecture for learning POMDPs. One critic is the standard critic that uses the history information while the other critic is the unbiased asymmetric critic that uses the history and state information. The authors prove that using both critics reduces variance and propose a weighting mechanism between the two critics based on the advantage. Empirical results show that DCRL provides a substantial benefit over the asymmetric critic or unbiased asymmetric critic. Strengths: - The authors motivate the problem clearly and explain why a dual critic approach would be useful - The authors prove that variance is reduced - Experimental results show significant improvements over baselines on minigrid and miniworld. Weaknesses: The primary weakness of this paper is novelty. While I can appreciate that the dual critic architecture reduces variance, the proofs seem fairly obvious. In my opinion, the main novel contribution of this paper then relies on the weighting mechanism and the empirical results showing that DCRL can beat AC, UAAC, and also recurrent A2C. The novelty therefore seems marginal. Additionally, asymmetric actor critic (AC) was proposed to utilize additional state information to learn a policy more efficiently over the standard recurrent A2C where the critic only uses history information. UAAC then corrects the bias in AC by incorporating both the state and the history. However, this paper proposes going back to the standard recurrent A2C method for one of the critics to achieve unbiasedness. It would have been interesting to compare two dual critic architectures: 1) V(h) and V(h,s) and 2) V(s) and V(h,s). Although the second approach would be biased, I would assume that using V(s) would lead to more efficiency gains. This seems to be an obvious comparison that was not considered. 
Technical Quality: 4 Clarity: 3 Questions for Authors: See weaknesses Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The authors clearly discuss the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive comments, which have significantly enhanced the quality of our manuscript. Below, we provide a point-by-point response to your feedback. > Weakness 1: > > The primary weakness of this paper is novelty. While I can appreciate that the dual critic architecture reduces variance, the proofs seem fairly obvious. In my opinion, the main novel contribution of this paper then relies on the weighting mechanism and the empirical results showing that DCRL can beat AC, UAAC, and also recurrent A2C. The novelty therefore seems marginal. Thank you for your valuable feedback. In this study, we address the critical issue of high variance resulting from an over-reliance on state information, **a topic that has been largely overlooked but is highly relevant to the POMDP community**. To mitigate this issue, we propose a DCRL framework that harnesses the strengths of two critics. We provide theoretical evidence demonstrating that DCRL mitigates the learning variance while maintaining unbiasedness. Moreover, our DCRL incorporates **a dynamic weighting mechanism**, which can be interpreted as a form of **lower-bound-soft-Q-learning**, distinguishing it from conventional linear weighting methods. This mechanism substantially improves DCRL performance while maintaining **high stability concerning hyperparameters**. We conducted additional experiments to validate our claims: 1) We implemented a simplified weighting method that uses fixed ratios to weight the two critics (referred to as the DCRL No Clip Version). 2) We performed ablation experiments on the values of the $\beta$ parameter ($\beta \in \\{1/5, 1/3, 1/2, 2/3, 4/5\\}$). Figure 1 of the rebuttal PDF illustrates that the DCRL No Clip Version shows performance improvements in only some environments compared to UAAC. In contrast, DCRL consistently enhances performance across all tested environments, **demonstrating that the dynamic weighting mechanism of DCRL is crucial. 
Additionally, the choice of $\beta$ exhibits notable stability**. We selected $\beta = 1/2$ as the parameter for reporting our results in the paper, as it consistently exhibits superior performance. > Weakness 2: > > Additionally, asymmetric actor critic (AC) was proposed to utilize additional state information to learn a policy more efficiently over the standard recurrent A2C where the critic only uses history information. UAAC then corrects the bias in AC by incorporating both the state and the history. However this paper proposes going back to the standard recurrent A2C method for one of the critics to achieve unbiasedness. It would have been interesting to compare two dual critic architectures: 1) V(h) and V(h,s) and 2) V(s) and V(h,s). Although the second approach would be biased, I would assume that using V(s) would lead to more efficiency gains. This seems to be an obvious comparison that was not considered. We greatly appreciate your suggestion. Following your suggestion, we conducted **ablation experiments comparing two different dual architectures**: 1) $V(h)$ and $V(h,s)$ (our DCRL), and 2) $V(s)$ and $V(h,s)$ (referred to as the DCRL State Version). Figure 2 of the rebuttal PDF illustrates that the DCRL State Version demonstrates commendable performance among the various DCRL variants. However, the biased characteristics of the DCRL State Version contribute to instability in its effectiveness. **Notably, our DCRL achieves superior training performance in three out of four environments.** These findings further highlight the significance of our DCRL framework. We appreciate your constructive feedback. These questions are very inspiring and discussion-worthy. The quality of the manuscript has been greatly enhanced by incorporating these and other reviewers' valuable suggestions. If you find that these concerns have been resolved, we would appreciate it if you would consider reflecting this in your rating of our paper. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed and focused responses. I am honestly impressed that they were able to perform all of the new experiments and the new results strengthen the paper greatly. I've raised my score. I'm now more convinced that DCRL is a good approach and outperforms other possible reasonable methods. In particular, the fixed weighting $\beta$ experiments show that dynamic reweighting is crucial, increasing this paper's contribution. One minor point in the rebuttal is that I'm not convinced of the argument that the mechanism resembles lower-bound-soft-Q-learning since soft Q learning fundamentally optimizes a different objective (max entropy RL or soft Bellman equation), while here the critic is still optimized for the standard Bellman equation, but again this is minor. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the time and effort you have dedicated to the discussions! The above inspiring discussion has greatly improved the quality of our paper. Thank you for recognizing our efforts. We want to assure you that we are committed to incorporating these discussions into the final version. Regarding the lower-bound soft-Q-learning, we employ an approximation in which the entropy parameter approaches zero to meet the condition. We will incorporate this suggestion and provide a more comprehensive discussion in the revised manuscript. --Best wishes from all the authors!
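The "lower-bound-soft-Q-learning" point debated above - with the entropy parameter taken to approach zero, per the authors' reply - can be illustrated with a soft minimum of the two critic estimates: as the temperature goes to zero, the soft combination tends to the hard lower bound min(V(h), V(h, s)). A hedged sketch; the function name, variables, and the exact combination rule are assumptions for illustration, not the paper's mechanism:

```python
import numpy as np

def soft_lower_bound(v_h, v_hs, tau=1.0):
    """Soft minimum  -tau * log(exp(-v_h/tau) + exp(-v_hs/tau)).
    It is always <= min(v_h, v_hs), and approaches that hard minimum
    as the temperature tau -> 0 (the entropy parameter going to zero)."""
    v = np.stack([np.asarray(v_h, dtype=float), np.asarray(v_hs, dtype=float)])
    m = v.min(axis=0)
    # Shift by the minimum so the exponentials cannot overflow.
    return m - tau * np.log(np.exp(-(v[0] - m) / tau) + np.exp(-(v[1] - m) / tau))
```

At `tau=1e-6` the result of `soft_lower_bound(1.0, 3.0)` is numerically 1.0, the hard minimum, while larger temperatures give a strictly smaller (more conservative) value - which is the sense in which the combination acts as a lower bound.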
null
null
null
null
null
null
Bayesian Domain Adaptation with Gaussian Mixture Domain-Indexing
Accept (poster)
Summary: The paper proposes a novel method, "Gaussian Mixture Domain-Indexing" (GMDI), to address domain adaptation with inaccessible domain indices. The technique improves upon prior work by modeling the domain indices prior with a Gaussian Mixture. Empirically, it has been shown that the proposed method achieves state-of-the-art performance in classification and regression tasks. Strengths: **Novelty**: The paper proposes a novel technique to address the issue of domains in domain adaptation having multiple semantics. The method is a natural extension from prior work (VDI) by changing the Gaussian prior to a Gaussian Mixture. **Theoretical Support**: The paper proves the correctness of the proposed method with solid theoretical derivations. **Empirical Validation**: Extensive experiments show that GMDI significantly outperforms state-of-the-art methods in both classification and regression tasks, with substantial improvements in accuracy and mean squared error (MSE). I particularly like the illustration of the learned domain indices. Figures 4 and 5 clearly show that GMDI learns the domain indices more accurately compared to the prior state-of-the-art method, VDI. Good job! Weaknesses: **Clarity**: Some equations in the paper might have typos. Please see the details in the next section. **Computational Overhead**: The use of CRP and dynamic mixtures increases computational overhead, which might make the method less practical for large-scale or real-time applications without further optimization. It would be helpful if the authors could provide some comments and empirical analysis on the computational overhead of GMDI. Technical Quality: 3 Clarity: 3 Questions for Authors: **Typos?** Should the left side of Equation (1) be $p(z|x,\epsilon)$ or should $\int_{x}$ be added to the right side? A similar issue seems to exist in Equation (5) as well. 
**Abstract Claim** I do not understand the authors' claim in the abstract, “For classification, GMDI improves accuracy by at least 63% (from 33.5% to 96.5%).” It seems from Table 1, the result for DG-60, the authors picked the worst performance of all baselines, ADDA with an accuracy of 33.5%. It is unclear why the authors claim the improvement is at least 63%. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I do not see specific limitations that might lead to potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and positive comments. The following are our responses to the questions mentioned in the comments. **1. Computational Overhead: The use of CRP and dynamic mixtures increases computational overhead, which might make the method less practical for large-scale or real-time applications without further optimization. It would be helpful if the authors could provide some comments and empirical analysis on the computational overhead of GMDI.** We appreciate this useful suggestion from the reviewer. In the global "Author Rebuttal", we conduct detailed ablation and computational overhead experiments comparing the proposed GMDI with GMDI w/o CRP. Please kindly refer to the global "Author Rebuttal". **2. Typos? Should the left side of Equation (1) be $p\left ( z\mid x, \varepsilon \right )$ or should $\int_{x}$ be added to the right side? A similar issue seems to exist in Equation (5) as well.** Thanks for pointing out the typos. The left side of Equation (1) and Equation (5) should be corrected to $p\left ( z\mid x, \varepsilon \right )$. We will include the correction in the final version. **3. Abstract Claim I do not understand the authors' claim in the abstract, “For classification, GMDI improves accuracy by at least 63% (from 33.5% to 96.5%).” It seems from Table 1, the result for DG-60, the authors picked the worst performance of all baselines, ADDA with an accuracy of 33.5%. It is unclear why the authors claim the improvement is at least 63%.** We apologize for the misleading claim. We will correct it in the final manuscript. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 72He Comment: Thank you. My concerns are well addressed. I stay positive for this manuscript. --- Reply to Comment 1.1.1: Title: Thanks. Comment: Thank you very much for your positive support.
Summary: This paper proposes a Bayesian Domain Adaptation method with Gaussian Mixture Domain-Indexing (GMDI) to address the challenge of inferring domain indices when they are unavailable. Existing methods often assume a single Gaussian prior for domain indices, ignoring the inherent structures among domains. GMDI models domain indices as a mixture of Gaussian distributions, with the number of components dynamically determined by the Chinese Restaurant Process. This approach provides a higher level of flexibility and effectiveness in adapting to diverse target domains. Theoretical analysis demonstrates that GMDI achieves a more stringent evidence lower bound, closer to the log-likelihood. Extensive experiments on classification and regression tasks show that GMDI significantly outperforms baselines, achieving state-of-the-art performance. Strengths: 1. GMDI is the first to model domain indices as a mixture of Gaussian distributions, allowing it to capture the inherent structures among different domains. This approach provides a more flexible and powerful way to infer domain indices. 2. By using the Chinese Restaurant Process, GMDI can dynamically determine the number of mixture components, adapting to varying numbers of domains. This enhances its capability to handle complex datasets with an unknown number of domains. 3. The paper provides a detailed theoretical analysis, demonstrating that GMDI achieves a more stringent evidence lower bound and a tighter upper bound of the objective function compared to existing methods. This theoretical foundation supports the effectiveness of the proposed approach. Weaknesses: 1. GMDI relies on the availability of domain identities but cannot infer them as latent variables. This limits its applicability to scenarios where domain identities are also unknown. 2. The use of the Chinese Restaurant Process and Gaussian Mixture Model can be computationally intensive, especially for large-scale datasets with numerous domains. 
This could hinder the scalability of GMDI. 3. In the experiment, the binary classification task is not challenging, which makes the performance advantage unconvincing. Multi-class classification tasks are necessary. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments, the insightful questions, and helpful suggestions. The following are our responses to the questions mentioned in the comments. **1. GMDI relies on the availability of domain identities but cannot infer them as latent variables. This limits its applicability to scenarios where domain identities are also unknown.** Thank you to the reviewer for pointing out this weakness. Our proposed GMDI focuses on inferring domain indices following a mixture of Gaussian distributions, with the number of mixture components dynamically determined by a Chinese Restaurant Process (CRP). Extensive experiments on classification and regression tasks demonstrate the strong domain index modeling capability of GMDI, significantly outperforming state-of-the-art methods. However, we acknowledge that GMDI's reliance on domain identities limits its applicability in situations where domain identities are unknown. We plan to address this issue in future work. **2. The use of the Chinese Restaurant Process and Gaussian Mixture Model can be computationally intensive, especially for large-scale datasets with numerous domains. This could hinder the scalability of GMDI.** We appreciate the reviewer for highlighting this issue. In the global "Author Rebuttal", we conduct detailed ablation and computational cost experiments comparing the proposed GMDI with GMDI w/o CRP and VDI. Please kindly refer to the global "Author Rebuttal" for a comprehensive analysis. **3. In the experiment, the binary classification task is not challenging, which makes the performance advantage not convincing. The multi-classification tasks are necessary.** Thank you for bringing this to our attention. Following VDI, we have tested the performance of GMDI on the multi-classification dataset *CompCars*. The *CompCars* dataset is a 4-way classification dataset containing 18,735 samples across 30 domains. 
Due to the scarcity of datasets with ground-truth domain indices, we leave the extension of experiments on new datasets for future research. More details about the datasets used are available in Appendix H of the paper. --- Rebuttal Comment 1.1: Comment: Thank you. The authors have addressed all my concerns. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you very much for your positive support.
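As a concrete picture of the modeling assumption discussed in this exchange - a global domain index drawn from a mixture of Gaussians rather than a single Gaussian - a generative sketch might look like the following. All names (`pi`, `mus`, `sigmas`) are illustrative placeholders; GMDI infers these quantities variationally rather than fixing them:

```python
import numpy as np

def sample_domain_index(pi, mus, sigmas, rng=None):
    """Draw a global domain index theta from a Gaussian-mixture prior:
    first pick a component v ~ Categorical(pi), then draw
    theta ~ N(mu_v, sigma_v^2).  A single-Gaussian prior (as in VDI)
    is the special case len(pi) == 1."""
    rng = np.random.default_rng(rng)
    v = int(rng.choice(len(pi), p=pi))
    return v, float(rng.normal(mus[v], sigmas[v]))
```

The point of the mixture is that domains with different semantics can fall under different components `v`, instead of all being squeezed into one Gaussian.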
Summary: The paper introduces the Gaussian Mixture Domain-Indexing (GMDI) algorithm for domain adaptation when domain indices are unavailable. Unlike traditional methods that use a simple Gaussian prior, GMDI employs a Gaussian Mixture Model adjusted by a Chinese Restaurant Process, enabling adaptive determination of mixture components. Strengths: - The use of a Gaussian mixture to represent complex domain indices seems interesting. - The proposed method demonstrates superior performance over baseline models in experimental results. - The technical part (the proposed method) seems non-trivial and its complexity seems sufficient, yet it might not be necessary. Weaknesses: - The learning loss for the proposed model is overly complex, featuring multiple conditional Kullback-Leibler divergences, which might complicate implementation and interpretation. - The paper lacks clear definitions for the 'local' and 'global' domain index within the probabilistic graphical model, which could confuse readers about the model's scope and applicability. - While introducing the Chinese Restaurant Process (CRP) adds flexibility to the Gaussian Mixture Model, it also increases computational costs. For the problem described in Figure 1, a fixed-component GMM might have been a simpler and more effective solution, though the number of components may not be that flexible. - The authors should also seriously consider empirically comparing the proposed method with a fixed-component counterpart, e.g. by simply using Gumbel softmax to infer the Gaussian component. - The related work may not be sufficiently discussed. For example [a] discussed an end-to-end approach that learns the domain index using adversarial learning; [b] takes the domain index/identity as a latent dynamical system, coupled with adversarial learning. 
[a] Out-of-distribution Representation Learning for Time Series Classification [b] Extrapolative Continuous-time Bayesian Neural Network for Fast Training-free Test-time Adaptation ----------------Minor------------------ - The connection between Equation (5) and Equation (6) is not clearly explained, leaving a gap in understanding the sequential logic of the model's formulation. - The paper's clarity and accuracy in writing could be improved. For instance, the statement "DP requires a predefined number of components" is misleading, as Dirichlet Processes are inherently nonparametric and do not require a predefined number of components. - Several symbols used in the equations are not adequately explained (both in the main paper and appendix), making it difficult to fully grasp the proposed model and its mathematical foundations. - The proofs and the theoretical part seem to closely follow those of VDI, making it hard to evaluate its novelty. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The limitations are addressed in the Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and constructive suggestions. The following are our responses to the questions mentioned in the comments. **1. The learning loss for the proposed model is overly complex, featuring multiple conditional Kullback-Leibler divergences, which might complicate implementation and interpretation.** Thanks for pointing out this shortcoming. The learning loss for the proposed GMDI is indeed somewhat complex. GMDI follows the framework of VDI, which involves multiple latent variables and requires KL divergence constraints on these variables, making this complexity unavoidable. However, by modeling the domain index as a mixture of Gaussian distributions and using the Chinese Restaurant Process (CRP), GMDI demonstrates excellent performance on the relatively more complex *CompCars* dataset. This indicates its potential for application to even more intricate datasets, showcasing its broad applicability. **2. The paper lacks clear definitions for the 'local' and 'global' domain index within the probabilistic graphical model, which could confuse readers about the model's scope and applicability.** We apologize for the confusion. GMDI follows the setting of VDI. (1) Local domain index $u$: It contains instance-level information, meaning each data point has a unique local domain index. Essentially, the local domain index can be viewed as an intermediate latent variable. (2) Global domain index $\theta$: It contains domain-level information, meaning all data points within the same domain share the same global domain index. The global domain index is the true "domain index". Our proposed GMDI models the global domain index (which contains domain-level information) as a mixture of Gaussian distributions. This mixture provides a higher level of flexibility in a larger latent space, thereby enhancing performance. We will incorporate the above definitions into the final version. **3. 
While introducing the Chinese Restaurant Process (CRP) adds flexibility to the Gaussian Mixture Model, it also increases computational costs. For the problem described in Figure 1, a fixed-component GMM might have been a simpler and more effective solution, though the number of components may not be that flexible.** We appreciate the reviewer for highlighting this issue. In the global "Author Rebuttal", we conduct detailed ablation and computational cost experiments comparing the proposed method with a fixed-component GMM. Please kindly refer to the global "Author Rebuttal". **4. The authors should also seriously consider empirically comparing the proposed method with a fixed-component counterpart, e.g. by simply using Gumbel softmax to infer the Gaussian component.** Thank you for the insightful suggestion. In the global "Author Rebuttal", we conduct detailed ablation and computational cost experiments comparing the proposed method with a fixed-component counterpart using Gumbel softmax. Please kindly refer to the global "Author Rebuttal". **5. The related work may not be sufficiently discussed. For example [a] discussed an end-to-end approach that learns the domain index using adversarial learning; [b] takes the domain index/identity as a latent dynamical system, coupled with adversarial learning.** Thanks for the mention of these two related works. We follow the setting of VDI, where the concepts of "domain index" and "domain identity" indeed share significant similarities, though they are somewhat different. The two mentioned papers primarily focus on "domain identity", and they are both highly valuable and worthy of discussion. We will discuss them in the related work section of the final version. ----------Minor---------- **6. The connection between Equation (5) and Equation (6) is not clearly explained, leaving a gap in understanding the sequential logic of the model's formulation. - The paper's clarity and accuracy in writing could be improved. 
For instance, the statement "DP requires a predefined number of components" is misleading, as Dirichlet Processes are inherently nonparametric and do not require a predefined number of components.** We appreciate the reviewer for pointing out the issues. (1) Equation 5 and Equation 6 both represent the posterior factorization of data encoding $z$. $\theta$ in Equation 5 is the global domain index. While GMDI models the global domain index $\theta$ as a mixture of Gaussian distributions, $\theta ^{v}$ in Equation 6 indicates the $v$-th component of the mixture distribution of $\theta$ with the prior $p\left ( v \right )$. (2) Thanks for pointing out the misleading statement. The Dirichlet Process (DP) is indeed non-parametric. Due to the difficulty of direct construction, we apply the Chinese Restaurant Process by stick-breaking with a predefined bound $K$ to implement the DP. We will correct it with the explanatory sentences above. **7. Several symbols used in the equations are not adequately explained (both in the major paper and appendix), making it difficult to fully grasp the proposed model and its mathematical foundations.** We apologize for the missing explanations of some notations. Below is a detailed explanation of the missing notation: $\beta$: independent random variable with a Beta distribution in the stick-breaking representation; $\pi$: the probability vector in the stick-breaking; $\gamma$: parameter of $q(\beta)$; $\eta$: parameter of $q(v)$. We will include the explanations of the above notations in the final version. **8. The proofs and the theoretical part seem to closely follow that of VDI, making it hard to evaluate its novelty.** Thank you for bringing this to our attention. Compared to VDI, the theoretical contribution of the proposed GMDI is that it proves GMDI achieves a more stringent evidence lower bound, as well as a tighter upper bound of the objective. 
Theoretically, it demonstrates that GMDI can effectively infer optimal domain indices. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the efforts. The authors' response addressed some of my concerns, but my major concern about using a simpler solution such as a fixed-component GMM still exists. Though the additional results are provided, I am surprised that GMDI w/o CRP fails to beat VDI in terms of accuracy, while having a smaller MSE. This could be due to overfitting. Based on the general quality of the paper and the above concerns, I decided to downgrade my rating. --- Reply to Comment 1.1.1: Title: Incorrect comments and unfair downgraded rating from the reviewer uzxH Comment: Thanks for your response. However, we are quite sure that your following claim is not correct: "I am surprised that GMDI w/o CRP fails to beat VDI in terms of accuracy." In fact, on the *CompCars* dataset, GMDI w/o CRP achieves an accuracy of 43.0%, while the accuracy of VDI is 42.5%. Please see the corresponding details in the global "Author Rebuttal". This indicates that GMDI w/o CRP indeed performs better than VDI, contrary to your claim. We have conducted detailed experiments on the *TPT-48* dataset for the regression task and the *CompCars* dataset for the classification task. On all datasets, GMDI w/o CRP slightly outperforms VDI, while it lags far behind GMDI. Because of the above, the downgraded rating is unfair to us. Thank you very much, and we look forward to your positive response. --- Rebuttal 2: Title: Thanks for the prompt response Comment: Thank you very much for your prompt response. The performance gap between the two models, i.e., GMDI w/o CRP and VDI, is significant, which is what we expected and a good result. As you suggested, we further applied a two-tailed paired t-test and found that the observed performance difference between these two models is statistically significant at alpha=0.05. 
Given this positive experimental result, we hope you will at least restore the original rating. Thank you very much for your kind support. --- Rebuttal Comment 2.1: Comment: Please let us know the search space of the hyperparameter K for each dataset. Is K=1 for CompCars? If so, it is no wonder the results are not significantly different from those of VDI. Furthermore, the search space of K should be sufficiently large for GMM-based variational inference models. For now, I still have concerns about the over-complexity of the method. Such complexity may not be necessary for the multimodal assumption of the domain index. Therefore, I have decided to keep my current rating. --- Reply to Comment 2.1.1: Title: Thanks for the prompt response Comment: Thank you for your response. The search space of the hyperparameter K for each dataset ranges from 1 to 5. Due to limitations in computational resources, we did not perform a broader search for the hyperparameter K. To be clear, the hyperparameter K for the *CompCars* dataset is not 1. We have already stated in the global "Author Rebuttal" that "The number of components for GMDI w/o CRP is set to the upper bound K of GMDI," and we have detailed the upper bound K for each dataset in the Experimental Study section of the paper. For the *CompCars* dataset, the hyperparameter K is 3, not 1. In our previous response, based on your suggestion, we further applied a two-tailed paired t-test and found that the observed performance difference between GMDI w/o CRP and VDI is statistically significant at alpha=0.05. We are puzzled by your claim that "the results are not significantly different from those of VDI," as this contradicts the results of our significance testing. Moreover, while the search space of K indeed needs to be sufficiently large for GMM-based variational inference models, this is actually where GMDI shines.
Using a fixed-component GMM requires an extensive search over K, a process that is very time-consuming. In contrast, GMDI leverages CRP to dynamically adjust the number of components given just a simple upper bound on K. Although the use of CRP in GMDI is more complex than a fixed-component GMM, CRP's dynamic adaptability significantly reduces the cost of searching over K, making its use essential. You downgraded the rating because of the incorrect statement "I am surprised that GMDI w/o CRP fails to beat VDI in terms of accuracy." We addressed this issue in the first round of discussion (see the previous discussion), and we believe we have now also addressed the issue raised in the second round (the discussion above). Because these issues have been addressed, we hope you will restore at least the original rating for our paper rather than keeping the current downgraded score. Thank you very much for your positive support. --- Rebuttal 3: Title: Further explanation of the experimental results Comment: To avoid misunderstanding and further clarify the experimental results presented in the global "Author Rebuttal," we would like to provide the following additional explanation. VDI is our baseline, which models the domain index as a single Gaussian. Our proposed GMDI models the domain index as a mixture of Gaussians and incorporates CRP. GMDI w/o CRP is a version that follows your suggestion by not using CRP and instead utilizing a fixed-component GMM (a Gaussian Mixture Model with a fixed number of components). We conducted detailed experiments on the relatively complex *TPT-48* and *CompCars* datasets. The results show that GMDI w/o CRP performs better than VDI but falls short of GMDI. This indicates that using a fixed-component GMM improves performance, but the lack of CRP leads to results inferior to those of GMDI, demonstrating the importance of both GMM modeling and the use of CRP.
Please let us know if we need to provide further explanation. Thank you very much. --- Rebuttal Comment 3.1: Comment: Dear Reviewer uzxH, Just wanted to check if the authors' further response addressed your remaining concerns. Feel free to ask follow-up questions if needed. Thanks! Best, AC --- Rebuttal 4: Title: Thanks for the prompt response. Comment: Thank you for your detailed comments. The following are our responses to the concerns mentioned in the comments. **1. "However, the study primarily builds upon the existing framework of VDI, and its theoretical developments, which are closely aligned with those of NDI, are mostly relegated to the appendix without appropriate citations. This lack of clarity suggests that some key proofs might be incorrectly attributed to this work."** Thank you for pointing out this issue. Our theorems/lemmas are indeed partially based on VDI, and we will carefully review them to ensure that VDI is correctly cited in both the theorems/lemmas in the main body of our paper and the proofs in the appendix in the final version. Compared to VDI, the theoretical contribution of the proposed GMDI is that we prove GMDI achieves a more stringent evidence lower bound, as well as a tighter upper bound on the objective. Theoretically, this demonstrates that GMDI can effectively infer optimal domain indices. **2. "Additionally, the use of the Chinese Restaurant Process (CRP) to manage an unknown number of Gaussian components is not innovative, as thoroughly discussed in prior research [1]. Therefore, both the technical novelty and theoretical contributions of this paper seem incremental."** Thanks for pointing this out. The main innovation of our proposed GMDI is that, to the best of our knowledge, GMDI is the first to introduce a CRP-based Gaussian mixture model to represent the domain index. **3. "The experiments presented are not sufficiently comprehensive.
I can understand that the authors have difficulties in conducting experiments using additional GMM variants, yet such comparisons are essential for a robust evaluation. Notably in the experiments, the CRP model frequently identifies the optimal number of components as two or three. It is important to figure out why CRP outperforms fixed-component GMMs, given that the "optimal" number of Gaussian components is known. A more thorough ablation study could clarify these results."** Thanks for mentioning this. Fixed-component GMMs are fundamentally similar in that none of them can adaptively determine the number of mixture components, so it is neither realistic nor meaningful to compare against all variants of fixed-component GMMs within a short period. We have already conducted a detailed comparison with the Gumbel-softmax version (i.e., GMDI w/o CRP). The experimental results show that our proposed GMDI demonstrates clear advantages. We leave comparative experiments with additional GMM variants for future research. Regarding why the CRP model (i.e., GMDI) outperforms the fixed-component GMM (i.e., GMDI w/o CRP), it may be due to the complexity of the training process and the parameter search space. Firstly, during training, the varying sample distribution within each batch can make it difficult for the fixed-component GMM (i.e., GMDI w/o CRP) to adapt well under a fixed K, potentially leading to suboptimal performance and convergence challenges. In contrast, the CRP model (i.e., GMDI) can automatically adapt to the sample distribution during training, and we observed in experiments that GMDI converges faster to better performance, which aligns with our expectations. Secondly, due to the complex parameter space, it is challenging to fine-tune both the fixed-component GMM (i.e., GMDI w/o CRP) and GMDI to their optimal performance within a short time frame.
However, under fair comparison conditions, GMDI demonstrates superior performance and convergence speed compared to the fixed-component GMM (i.e., GMDI w/o CRP) within a limited search space, while significantly reducing the cost of parameter searching. As shown in the experimental results in the table below, we have tested various hyperparameters, K and $\tau$, as thoroughly as possible. The experiments indicate that even under the fairest possible comparison conditions, the fixed-component GMM (i.e., GMDI w/o CRP) struggles to achieve better performance, thereby validating our conclusions.

(1) GMDI w/o CRP on *CompCars* dataset:

| Accuracy(%) | K=1 | K=2 | K=3 | K=4 | K=5 |
| ----------- | ---- | ---- | -------- | ---- | ---- |
| $\tau$=100 | 42.6 | 42.1 | 42.4 | 41.8 | 40.0 |
| $\tau$=50 | 42.6 | 42.8 | **43.0** | 42.1 | 39.8 |
| $\tau$=10 | 42.6 | 41.8 | 42.0 | 42.9 | 42.4 |
| $\tau$=1 | 42.6 | 42.7 | 36.9 | 30.3 | 30.3 |
| $\tau$=0.1 | 42.6 | 38.5 | 41.8 | 30.4 | 30.3 |

--- Rebuttal Comment 4.1: Title: Thanks for the prompt response. Comment: **4. "Furthermore, the methodology should be tested on data featuring more complex multimodal distributions, which may require a larger number of Gaussian components for accurate representation of the index. Such evaluation would help validate the method’s effectiveness and demonstrate more advantages over a fixed components method."** Thank you for bringing this to our attention. We have already conducted comparative experiments on the most complex dataset used in VDI, *CompCars*, and the experimental results show that our proposed GMDI demonstrates clear advantages. Due to the scarcity of datasets with ground-truth domain indices, we leave the extension of comparative experiments on new datasets for future research.
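As context for the temperature sweep in the table above, here is a minimal illustrative sketch of Gumbel-softmax sampling over mixture components. The function name and setup are ours, not the GMDI w/o CRP implementation:

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    # Relaxed one-hot sample over components: lower tau pushes the sample
    # toward a hard (discrete) choice, higher tau toward uniform weights.
    y = (logits + rng.gumbel(size=logits.shape)) / tau
    y = y - y.max()  # numerical stability before exponentiating
    e = np.exp(y)
    return e / e.sum()

rng = np.random.default_rng(0)
sample = gumbel_softmax(np.log(np.array([0.2, 0.3, 0.5])), tau=1.0, rng=rng)
```

The temperature plays the role swept in the table: at $\tau$=0.1 samples are nearly one-hot, while at $\tau$=100 they are close to uniform, which is one plausible reason performance varies so strongly with $\tau$.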
Rebuttal 1: Rebuttal: We thank all the respected reviewers for their detailed comments and believe that all the mentioned issues can be properly addressed in the final version of our paper. The major concerns lie in the ablation and computational cost experiments. We take this opportunity to clarify these issues and present our responses accordingly. Regarding the ablation and computational cost experiments, we have followed the suggestions and provided detailed comparison results in the following Ablation Study section. We appreciate the reviewers for highlighting the issue of computational cost associated with the Chinese Restaurant Process (CRP). The CRP is indeed computationally intensive. To improve computational efficiency, the proposed GMDI employs the stick-breaking construction of the CRP by specifying an upper bound $K$ for the number of components in the Gaussian mixture. By selecting an appropriate $K$, we are able to leverage the benefits of the CRP while reducing computational cost. More details are available in Appendix A of the paper. To evaluate the impact and computational cost of the CRP, we conduct ablation and computational cost experiments. Following the reviewers' suggestions, we implement GMDI w/o CRP using Gumbel softmax. The number of components for GMDI w/o CRP is set to the upper bound $K$ of GMDI, and the hyperparameter temperature $\tau$ for Gumbel softmax ranges from 0.1 to 50 (with the best performance reported). "Total time" refers to the total training duration, which concludes when the loss converges. The experimental results on three datasets are shown in the following tables. 
(1) *TPT-48*(W->E) dataset:

| Method | MSE | Total time | Epochs | Time per epoch |
| ------------ | --------- | ---------- | ------ | -------------- |
| VDI | 2.496 | 1h 24m 18s | 400 | 13s |
| GMDI w/o CRP | 2.471 | 2h 14m 54s | 500 | 16s |
| GMDI | **2.087** | 1h 31m 13s | 300 | 18s |

(2) *TPT-48*(N->S) dataset:

| Method | MSE | Total time | Epochs | Time per epoch |
| ------------ | --------- | ---------- | ------ | -------------- |
| VDI | 3.160 | 1h 51m 43s | 500 | 13s |
| GMDI w/o CRP | 3.050 | 2h 22m 58s | 500 | 17s |
| GMDI | **2.493** | 2h 2m 35s | 400 | 18s |

(3) *CompCars* dataset:

| Method | Accuracy(%) | Total time | Epochs | Time per epoch |
| ------------ | ----------- | ---------- | ------ | -------------- |
| VDI | 42.5 | 3h 13m 15s | 600 | 19s |
| GMDI w/o CRP | 43.0 | 4h 3m 48s | 700 | 21s |
| GMDI | **44.4** | 4h 16m 14s | 600 | 26s |

We find that although the proposed GMDI has a longer "Time per epoch" than GMDI w/o CRP, it converges faster due to the flexible number of components adaptively controlled by CRP. Therefore, the "Total time" is roughly the same as that of GMDI w/o CRP. On the *TPT-48*(W->E) dataset, due to faster convergence, the "Total time" of GMDI is significantly less than that of GMDI w/o CRP and is even comparable to VDI. On all three datasets, the performance of GMDI w/o CRP is significantly worse than that of GMDI. On the two *TPT-48* datasets, compared to the baseline VDI, GMDI w/o CRP reduces MSE by only 1% and 3%, whereas GMDI reduces MSE by 16% and 21%, far surpassing GMDI w/o CRP. On the *CompCars* dataset, GMDI's accuracy is significantly higher than that of GMDI w/o CRP. These results indicate that although using a fixed-component GMM is simpler, its computational cost is roughly equivalent to using CRP while its performance is far inferior, demonstrating the significance of CRP in GMDI.
Additionally, compared to VDI, which models the domain index as a single Gaussian distribution, GMDI's computational costs are only slightly higher, yet its performance is far superior. For large-scale datasets with numerous domains, modeling the domain index as a simple single Gaussian distribution may result in poor performance due to the dataset's complexity. The experimental results indicate that GMDI has broad applicability.
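For readers unfamiliar with the truncated stick-breaking construction referenced in this rebuttal, a minimal sketch with an upper bound $K$ follows. This is illustrative only; the variable names and the concentration parameter `alpha` are ours, not GMDI's code:

```python
import numpy as np

def truncated_stick_breaking(alpha, K, rng):
    # Break a unit-length stick K times: beta_k ~ Beta(1, alpha) gives the
    # fraction of the remaining stick assigned to component k.
    betas = rng.beta(1.0, alpha, size=K)
    betas[-1] = 1.0  # truncation: the last component takes the remaining mass
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining  # mixture weights pi, summing to 1

rng = np.random.default_rng(0)
pi = truncated_stick_breaking(alpha=1.0, K=5, rng=rng)
```

Choosing a modest $K$ caps the per-epoch cost of the CRP while still letting the effective number of active components adapt to the data, which matches the trade-off described above.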
NeurIPS_2024_submissions_huggingface
2024
Humanoid Locomotion as Next Token Prediction
Accept (spotlight)
Summary: This paper proposes to use next-token prediction as a learning objective and trains a causal transformer for humanoid locomotion. Compared to previous RL-based methods, the advantage of this method is the ability to fuse data from different sources, including mocap data, videos, an RL controller, and an MPC controller. The authors successfully show that their model has lower tracking errors when trained with unlabeled data. The trained model is deployed on a Digit humanoid robot and shows robust locomotion across diverse real-world areas. Strengths: 1. This paper introduces the next-token-prediction framework to humanoid robots. The intuition is simple yet very interesting. 2. The usage of different data sources, such as the RL controller and Internet videos, is novel. 3. The real-world experiments are very solid. It is exciting to see that such a framework can bring benefits to real-world robot learning. Weaknesses: 1. As shown in Figure 8, the prediction error is correlated with the tracking error. I am curious about how much data the authors need to make these two errors correlated, since from my knowledge, the real world is much more complex and it is usually hard to obtain such a correlation. 2. Recently, there have been numerous impressive advancements in humanoid robotics. While these developments are exciting, it is unfortunate that most of these works do not open-source their code. I understand the reasons behind keeping the code proprietary, but open-sourcing it would provide significant benefits to the broader community. I believe there are a lot of engineering details behind the simple framework, both in the algorithm and in the real-world deployment, which however might not be shown in the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have mentioned the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and suggestions! Please find our responses below: > As shown in Figure 8, the prediction error is correlated to the tracking error. I am curious about how much data the authors need to make these two errors correlated, since from my knowledge, the real world is much more complex and it is usually hard to have such a correlation, We use a total of 30k trajectories for training. Since we evaluate the prediction error on a held-out validation set, we think the errors may be correlated in a lower-data regime as well (less training data may lead to both higher prediction error and higher tracking error). Please let us know if this answers your question. > Recently, there have been numerous impressive advancements in humanoid robotics. While these developments are exciting, it is unfortunate that most of these works do not open-source their code. I understand the reasons behind keeping the code proprietary, but open-sourcing it would provide significant benefits to the broader community. I believe there are a lot of engineering details behind the simple framework, both in the algorithm and in the real world deployment, which however might not be shown in the paper. Thank you for the suggestion. We commit to open-sourcing the code, models, and data used in this study to facilitate future research in this area. --- Rebuttal Comment 1.1: Title: Thank you for the reply Comment: Thank the authors for the reply. My questions are mostly addressed. I have raised the score in light of the authors' open-source commitment. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your response and for increasing the score. Please let us know if you have any further questions. We would be happy to address them.
Summary: Rather than training an RL model to learn how to walk, this paper focuses on an SSL based approach towards humanoid locomotion. Using autoregressive prediction of actions and sensor data, they pre-train and deploy zero shot to the real world. The results are strong, performing better than RL at times. Strengths: * The idea of using a masked token for unknown modalities is smart and works well * Generalization to walking backwards is impressive * The approach is scalable and outperforms RL in many cases Weaknesses: * There was a lack of details/comprehensiveness on some of the experiments. For example, it was not clear to me how many trials were used for Figure 5. The authors also did not compare to existing MPC and only compared to RL based approaches. Limited details for hparams were given. * There are more ablations that could be done to further confirm design decisions (see questions). Technical Quality: 4 Clarity: 4 Questions for Authors: * Did the authors do experiments with quantization/VQ? If so it would be nice to see the results for that. * Ablations for masked token? * I did not understand this comment: "Rather than predicting the next token in a modality-agnostic way, we make predictions in a modality aligned way. Namely, for each input token we predict the next token of the same modality." * Ablations on which way to pre-train--noisy and incomplete or joint? * Not much information on the modalities themselves/examples modalities. If there are tons of motors/sensory information used at any given point are all of these predicted at once, with the same model? In that case, how is the causal mask constructed? * What is the prediction error in Table 1c? * Was predicting both \hat{o}_{i+1} and \hat{a}_{i+1} at the same time tried? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and suggestions! Please find our responses below: **Additional details on some experiments.** Our goal in Figure 5 is to give a qualitative sense, and we use one trial for each command. The following table shows a quantitative comparison between our model, RL, and MPC. We report the mean velocity tracking error of the different models under different yaw commands (250 different yaw values sampled in total).

| Yaw (rad/s) | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 |
| ----------- | --------- | --------- | --------- | --------- | --------- |
| MPC | 0.108 | 0.112 | 0.112 | 0.123 | 0.119 |
| RL | 0.082 | 0.083 | 0.090 | 0.105 | 0.110 |
| Ours | **0.070** | **0.077** | **0.078** | **0.081** | **0.089** |

We can see that MPC has the highest tracking error among the three methods, while our model has the best performance across all commands. We will add the comparisons to MPC, as well as the hyperparameters, in the next version of our paper. **Ablation on quantization.** Our model uses an MSE loss on continuous observations and actions. We compare it to using a cross-entropy (CE) loss on quantized observations and actions. We use uniform binning for quantization, i.e., for each dimension of the observations and actions, we split the value range of that dimension into N uniform bins, and the observation/action prediction is formulated as a classification problem of predicting which bin the value of that dimension falls into. We use N=100 for all dimensions. The results are shown below:

| | Continuous (MSE loss) | Quantized (CE loss) |
| ---------------- | --------------------- | ------------------- |
| Prediction Error | **1.39** | 10.41 |

We can see that using the MSE loss performs better. Note that we only use a vanilla quantization method of uniform binning here; more advanced approaches such as non-uniform binning or vector quantization might provide better performance.
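The uniform-binning scheme described in the quantization ablation can be sketched as follows. This is our illustrative code, assuming known per-dimension ranges `lo`/`hi`; it is not the authors' implementation:

```python
import numpy as np

def quantize(x, lo, hi, n_bins=100):
    # Map continuous values in [lo, hi] to integer bin indices 0..n_bins-1,
    # turning regression into an n_bins-way classification target.
    idx = np.floor((x - lo) / (hi - lo) * n_bins).astype(int)
    return np.clip(idx, 0, n_bins - 1)

def dequantize(idx, lo, hi, n_bins=100):
    # Reconstruct each value as the center of its bin.
    return lo + (idx + 0.5) * (hi - lo) / n_bins
```

The round-trip error is at most half a bin width, which bounds the discretization noise this baseline introduces relative to the continuous MSE formulation.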
**Ablation on mask token.** Our model uses a mask token to replace the missing modalities in model-based, MoCap, and human video data. We compare using a learnable mask token with using a constant vector (e.g., a zero vector) to fill in the missing modalities. Results are shown below:

| | Learnable mask token | Constant vector |
| ---------------- | -------------------- | --------------- |
| Prediction Error | **1.39** | 1.42 |

We can see that using a learnable mask token is slightly better than using a constant vector. **Clarification on modality-aligned prediction.** Given a sequence of interleaved observation and action tokens [$o_1$, $a_1$, …, $o_{t-1}$, $a_{t-1}$, $o_t$, $a_t$, …], naive next-token prediction will predict $o_t$ from $a_{t-1}$ and then $a_t$ from $o_t$. For modality-aligned prediction, we predict $o_t$ from $o_{t-1}$ and $a_t$ from $a_{t-1}$, i.e., we always predict a token from the previous token of the same modality. **Ablation on noisy pre-training vs. joint training.** We compare two ways of training our model: 1) first pre-training on noisy observation prediction on MPC, MoCap, and human video data, and then fine-tuning on action prediction on RL data; 2) training the model jointly on observation and action prediction on all data. Results are shown below:

| | Noisy pre-training | Joint training |
| -------------- | ------------------ | -------------- |
| Tracking Error | 0.311 | **0.310** |

We can see that joint training is slightly better. The results are also reported in Table 1c in the paper. **Details on the modalities used.** There are in total two modalities: observation and action. The observation is a 102-dim vector which is the concatenation of the joint positions of each joint/motor, the velocities of each joint, the velocity of the body, the gravity, the command, etc. The action is a 36-dim vector which contains the commands it will send to each motor for the next step.
During inference, at each step the model predicts both the observation and action vector for that step. Assuming the input is the history of observations and actions [$o_1$, $a_1$, …, $o_{t-1}$, $a_{t-1}$], the model predicts $o_t$ and $a_t$ at the $t$-th step. The causal mask is constructed such that any token at a given step (e.g., $o_{t-1}$) can only attend to the other modality at the same step ($a_{t-1}$) as well as all tokens at previous steps (e.g., $o_{t-2}$ and $a_{t-2}$). **The prediction error for stage training (Table 1c).** For stage training, the model is pre-trained to predict observations in the first stage and then fine-tuned with action prediction in the second stage. Since the prediction error accounts for both observation prediction and action prediction, and the model is not supervised with observation prediction in the second stage, it is probably not fair to compare it with the model that is trained jointly on observation and action prediction, which is why we did not report the number. For reference, the prediction error for stage pre-training is 42.22, which is far higher than the prediction error of joint training (0.88). **Was predicting both observation and action at the same time tried?** Our model predicts both observation and action at the same time. Please also see the clarification on modality-aligned prediction above. --- Rebuttal Comment 1.1: Title: Author response summary Comment: We appreciate your positive review and helpful suggestions. We wanted to provide a summary of the additional information provided in our response for your reference.
Additional experiments on: - MPC comparisons - ours > RL > MPC - Quantization ablation - continuous > discrete - Mask token ablation - learnable > constant - Pre-training ablation - joint > staged Additional details on: - Modality prediction - predict next token of the same modality - Modalities used - observations and actions - Prediction error - provided for reference - Simultaneous prediction - used in our models Thank you again for your valuable feedback. We would be happy to provide any further information if needed. --- Rebuttal 2: Title: Response by Authors Comment: Thank you for your response. We are glad to hear that we addressed all of your initial comments. We would like to offer some clarifications regarding the related work discussion. We believe that reviewer FS4P’s suggestion about an additional reference in robotic manipulation was a minor point, which we agreed to include. Reviewer FS4P acknowledged that this concern was fully addressed in their response and also noted the novelty of our work in their review. We want to emphasize that we do not claim "Autoregressive Pre-training for Robotics" as our primary contribution. In fact, we already have the "Transformers for Robotics" section in the related work that discusses a number of papers on autoregressive pre-training in robotics. We are happy to expand this section further. The novelty of our work lies in casting real-world humanoid locomotion as a joint distribution learning problem with transformers. We further show that this formulation enables training with noisy or missing data (e.g., derived from videos) and achieves excellent results in the real world. While our work can have excellent impact on multiple areas, the paper and title focus on humanoid locomotion. We chose this problem as a test bed because it is particularly challenging due to its highly dynamic, complex nature. 
The success of our method in this domain is particularly noteworthy (few prior approaches used learning, and the ones that did relied on RL). Given these clarifications and the original review, we believe the original score more accurately reflects our contribution and that it would be unfair to reduce it. We respectfully ask you to reconsider maintaining your initial score. --- Rebuttal Comment 2.1: Comment: I appreciate the response. Perhaps this was a mistake on my end, as while reading the paper I believed that the idea of autoregressive pre-training for robotics was a primary contribution. Adding to this, I think that the lack of inclusion of autoregressive pre-training approaches in the "Transformer for Robotics" section is of slight concern, as *autoregressive pre-training approaches for robotics are the most similar to this work.* (I say lack of inclusion because all approaches listed were either not pre-training or not autoregressive as far as I could tell). Together, these enhanced my initial beliefs regarding the merit of this work. I acknowledge the contributions discussed and the excellent results. Hence, I think a score of 8 is fitting. This updated score is based on the planned inclusion of more autoregressive pre-training approaches in the related works. --- Rebuttal 3: Title: Response by Authors Comment: Thank you for your response. We appreciate your acknowledgement of our contributions and results. We would like to clarify a few points below: - Autoregressive pre-training is not a primary contribution of our work. While our approach can be used with pre-training, our method does not involve pre-training. We are "just" training a model from scratch and deploying it zero-shot, without multi-stage training or fine-tuning. - We are more than willing to expand the “Transformers for Robotics” section with work on autoregressive pre-training in robotics to provide a more comprehensive context.
- Our primary contribution lies in formulating real-world humanoid locomotion as a joint distribution learning problem, enabling training with noisy or missing data and achieving very strong results in a particularly challenging domain. Thank you for your consideration.
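The attention pattern described in the authors' responses above (each token attends to both tokens at its own timestep and to all tokens at earlier timesteps) can be sketched as a block-causal mask. This is our illustrative reconstruction; the function name is ours:

```python
import numpy as np

def stepwise_causal_mask(num_steps, tokens_per_step=2):
    # mask[i, j] = True means token i may attend to token j.
    # Tokens at the same timestep (an observation/action pair) see each
    # other; every token also sees all tokens from earlier timesteps.
    n = num_steps * tokens_per_step
    step = np.arange(n) // tokens_per_step
    return step[None, :] <= step[:, None]

mask = stepwise_causal_mask(num_steps=3)  # sequence [o1, a1, o2, a2, o3, a3]
```

This differs from a standard strictly lower-triangular causal mask only in the same-step blocks, which is what lets the model predict the observation and action of a step jointly.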
Summary: This paper trains an autoregressive transformer for humanoid robot walking control using four types of data. The data sources include trajectories rolled out by an RL policy and a scripted method, existing datasets, and human poses from YouTube videos. This work successfully enables the humanoid robot to walk in urban environments without falling. The study explores how to train using data with missing actions, specifically by utilizing both observation-only and observation-action data, and demonstrates the effectiveness of training with missing data. The robot in this work does not have visual capabilities. Strengths: 1. The research problem in this work is very novel, and it has been verified on a real humanoid. The effects shown in the video are excellent. 2. This work is dedicated to applying autoregressive transformers to humanoid locomotion, which is a promising direction. 3. This work explores the usefulness of training with action-free data. Weaknesses: 1. Claiming that the model is trained with video might be overclaiming; in fact, it only uses human poses extracted from the video, which is quite different from training with video. 2. Pre-training with action-free data is not a novel contribution. 3. Based on the model architecture and training data, it appears that the robot can only walk randomly without controlling speed or turning. However, in Figure 5, it seems the robot's walking is controlled. How is this achieved? What exactly is the observation? It doesn't seem to be observed images. How is the observation transformed into R^m? 4. For the four types of data collected—Neural Net Controller, Model-based Controller, MoCap, Internet Videos—the authors did not conduct ablation experiments to verify the usefulness of each type of data. The usefulness of the MoCap and Internet Videos data obtained via inverse dynamics is especially questionable.
Considering that the authors did not prove the usefulness of data from inverse dynamics, the paper would reduce to merely fitting some existing policies with a transformer, significantly limiting its practical value. 5. The paper does not specify the control frequency or how the control frequency changes with model size, nor does it explain how to deploy the robot in the field. Where is the GPU located? 6. The pre-training method used in GR1 [1] is very similar to this work. The only difference is that GR1 uses two-stage training while this paper uses joint training. Both output observations and actions, but this paper does not discuss this. [1] UNLEASHING LARGE-SCALE VIDEO GENERATIVE PRE-TRAINING FOR VISUAL ROBOT MANIPULATION Writing suggestions: 1. Figure 3 is currently unclear. I suggest adding sequence numbers to the tokens to distinguish their order and using superscripts to differentiate predicted tokens from given tokens. 2. Equations 5-10 are not very necessary and could be condensed into 1-2 lines. More space could be used to explain the training data for the transformer, which is more important. 3. Using subplots to redraw Figures 6 and 7 would be clearer. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How is the robot's walking controlled? 2. What exactly is the observation, and how is it transformed into R^m? 3. From my understanding, the Neural Net Controller is the RL policy, the Model-based Controller is a scripted policy (a non-learning method), and MoCap is a robot dataset, but not for the Agility Robotics robot used. Is my understanding correct? 4. Is it possible to conduct ablation experiments on each type of data? 5. What is the control frequency, what GPU and model size are used during deployment, and is the GPU placed on the robot or on a nearby PC? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. One key limitation of the article is the lack of visual input, which is not discussed in the paper. 2.
Another limitation of the article is that it does not show how the real humanoid turns; it appears that the robot in the video only walks randomly. 3. Is it compliant to use YouTube videos, especially those featuring humans as the main subject? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and suggestions! Please find our responses below: > Claiming that training with video might be overclaiming; in fact, it only uses human poses from the video, which is quite different from training with video. We pre-process human videos using a pre-trained transformer model to extract human poses. We will clarify in the revised manuscript. > Pre-training with action-free data is not a novel contribution. We did not intend to claim that training with action-free data is unique to our work. Indeed, there has been prior work that studied this problem (we will include a discussion in the revised version of the manuscript). In this work, we show that a benefit of our approach is that it enables us to leverage action-free data for humanoid locomotion. > it appears that the robot can only walk randomly without controlling speed or turning. However, in Figure 5, it seems to control the robot’s walking. How is this achieved? Our controller supports omni-directional walking at varying speeds in [-1, 1] m/s. We will include additional videos in the revised supplementary materials. The walking speed and direction are specified as a 3-dim vector containing the desired linear velocity along the x- and y-axes, and the desired angular velocity around the z-axis. We specify the command as a part of the observation vector. For real-world deployment, we vary the command in real time via a joystick. > What exactly is the observation? It doesn't seem to be the observed images. How is the observation transformed into R^m? The observation is a 102-dim vector which is the concatenation of the positions of each joint (26-dim), the velocities of each joint (26-dim), the linear and angular velocity of the body (6-dim), the gravity (3-dim), the clock input (2-dim), the command (3-dim) and the previous action containing the commanded position of each joint (20-dim) and the updated P and D gains for the PD controller (16-dim). 
The robot is blind and does not take visual information as input. Observations are projected using a single linear layer into the embedding dimension. We will include the additional details in the revised version of the manuscript. > the authors did not conduct ablation experiments to verify the usefulness of each type of data. Following the reviewer's suggestion, we perform the ablations below:

| Data | Pred. Err. |
| -------- | ---------- |
| NN | 4.17 |
| NN+MPC | 2.83 |
| NN+MoCap | 2.66 |
| NN+Video | 3.19 |

| Data | Pred. Err. |
| ------------------ | ---------- |
| NN | 4.17 |
| NN+MPC | 2.83 |
| NN+MPC+MoCap | 2.28 |
| NN+MPC+MoCap+Video | 2.23 |

We see that each of the data sources leads to gains in performance individually and in aggregate. > The paper does not specify the control frequency or how the control frequency changes with model size, nor does it explain how to deploy the robot in the field. Where is the GPU located? For all our experiments and all the model sizes we try in this work (1M, 2M, and 8M), the model is run at 50Hz during inference. The robot does not have a GPU on board and the model is run on the CPU of the on-board Intel NUC computer. In all of our outdoor experiments, everything is running on board. We use the 2M model by default for all experiments. > Related Work (GR1). Thank you for the reference. While we focus on a different task (humanoid locomotion vs. manipulation) and use a different training strategy (joint training vs. staged training), there are some shared observations, such as the finding that predicting observations can help action prediction. We will include the discussion in the revised version of the paper. > Writing and formatting. Thanks for the suggestions. We will incorporate them. > How to control the robot's walking? > What exactly is Observation, and how is it transformed into R^m? Please see our comments above. 
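To make the observation layout concrete, here is a minimal sketch of assembling the 102-dim observation vector described in the response above and projecting it with a single linear layer. The component names, the embedding size, and the random weights are all hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

# Hypothetical names; dimensions follow the rebuttal's description of the
# 102-dim observation vector.
PARTS = {
    "joint_pos": 26,           # joint positions
    "joint_vel": 26,           # joint velocities
    "body_vel": 6,             # body linear + angular velocity
    "gravity": 3,              # gravity vector
    "clock": 2,                # clock input
    "command": 3,              # desired x/y linear and z angular velocity
    "prev_joint_targets": 20,  # previous commanded joint positions
    "prev_pd_gains": 16,       # previous P and D gains
}
EMBED_DIM = 192  # hypothetical transformer embedding size

def build_observation(parts):
    """Concatenate the named components into one flat observation vector."""
    return np.concatenate([np.asarray(parts[k], dtype=float) for k in PARTS])

def embed_observation(obs, W, b):
    """Project the observation into a token embedding with one linear layer."""
    return W @ obs + b

rng = np.random.default_rng(0)
raw = {k: rng.standard_normal(d) for k, d in PARTS.items()}
obs = build_observation(raw)                              # shape (102,)
W = rng.standard_normal((EMBED_DIM, obs.size))
token = embed_observation(obs, W, np.zeros(EMBED_DIM))    # shape (EMBED_DIM,)
```

The dimensions in `PARTS` sum to the stated 102; the projection mirrors the "single linear layer" mentioned in the rebuttal.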
> From my understanding, the Neural Net Controller is the RL policy, the Model-based Controller is a scripted policy (non-learning method), and MoCap is a robot dataset, but not for the used Agility Robotics. Is my understanding correct? The neural network controller is an RL policy from [33]. Please note that the Model-based Controller is a state-of-the-art classical controller from Agility Robotics (for a dynamic task like humanoid locomotion it would be hard to script a policy). MoCap is not a robot dataset but a dataset of humans from [28, 24] (we retarget the human pose trajectories to humanoid locomotion trajectories via inverse kinematics). > One key limitation of the article is the lack of visual input, which is not discussed in the paper. We focus on blind locomotion in this work, which is a challenging problem. We intend to extend our approach to vision in future work. We will include a discussion in the revised manuscript. > Another limitation of the article is that it does not show how the real humanoid turns; Our controller supports omni-directional walking and can turn in any direction. Please see above for additional discussion. We will include videos in the updated version. > Is it compliant to use YouTube videos, especially those featuring humans as the main subject? The videos we use are a subset of public videos from research datasets (Kinetics, PoseTrack) and we pre-process the videos to extract poses, which anonymizes the individuals in the videos. --- Rebuttal Comment 1.1: Comment: Thank you very much for your response. I appreciate the effort in addressing my questions, and most of them have been satisfactorily resolved. However, I have some concerns regarding the reproducibility of the paper. It seems challenging to obtain the four datasets mentioned in the paper based solely on the descriptions provided. 
While I believe the paper offers valuable insights, the limited details, absence of detailed information in the appendix, and lack of accompanying code might affect its overall impact on the community. Considering these factors, I believe it may be best to maintain my previous score. Thank you once again for your thoughtful response. --- Reply to Comment 1.1.1: Title: Response by Authors Comment: Thank you for your response. Please find our responses below: > I appreciate the effort in addressing my questions, and most of them have been satisfactorily resolved We are glad to hear you found our responses satisfactory. To ensure we have fully addressed all of your concerns, please let us know if there are any specific questions or points that you feel remain unresolved or require further clarification. We would be happy to address them. > However, I have some concerns regarding the reproducibility of the paper. It seems challenging to obtain the four datasets mentioned in the paper based solely on the descriptions provided. While I believe the paper offers valuable insights, the limited details, absence of detailed information in the appendix, and lack of accompanying code might affect its overall impact on the community. We commit to release all of the materials to fully reproduce the paper, including the four datasets, accompanying code, and detailed documentation. We will further include detailed descriptions of the datasets in the appendix, covering data collection methods, pre-processing steps, and any other relevant details. Please let us know if there is any additional information that you found lacking, and we would be happy to include it in the revised manuscript. --- Rebuttal 2: Title: Response by Authors Comment: Thank you for your response and for raising the score. Please find our responses below: > The project page doesn't include videos of the robot turning; can it actually turn in the real world or walk for an extended period of time? 
> However, the project page seems to be inaccessible now. It would be interesting to see its obstacle avoidance capabilities under control. Our policy can turn in all directions and walk for extended periods of time, up to 2 hours on a single battery charge. We have extensively tested these capabilities during a week-long deployment in San Francisco. Thanks to the responsive and accurate command following, our robot can avoid obstacles and navigate busy city streets with pedestrians. As an additional data point, our omnidirectional walking capabilities are comparable to the state-of-the-art RL approach in Figure 3 of [33], with improved command following as demonstrated in Figures 5 and 6 (left) of our manuscript. We apologize for the inaccessible project page. It seems that our GitHub-hosted anonymous website was suspended by GitHub, which we are working with GitHub to resolve. Per the rebuttal instructions, it seems that we are not allowed to provide a new link to external pages in the rebuttal. We welcome any suggestions on how we might share the videos demonstrating these capabilities within the constraints of the rebuttal process and commit to include them in the release. > Additionally, if the robot needs to go uphill or downhill slightly, would it fail? And it will be fine to analyze its robustness. Our policy can handle gentle slopes, and we have tested it in the real world on inclines of up to 8.7% (5 degrees). For example, Figure 1 (row 3, column 1) shows the robot walking up a 7% slope in San Francisco. Regarding robustness, our approach trained purely on offline trajectories without any domain randomization has shown surprising effectiveness in walking in the real world. However, as we acknowledge in the limitations section, our policies may be less robust to sudden, large disturbances compared to RL policies. 
In follow-up experiments, we found that pre-training policies with our proposed framework and fine-tuning them with a small amount of RL leads to excellent results, enabling rapid acquisition of new capabilities like walking on steeper slopes and robustness to large disturbances. > If this method were extended to enable humanoid robots to go up and down stairs, what challenges would arise? To extend our method to enable the robot to go up and down stairs, we would need to incorporate vision to guide the foot placement. Our method can be readily extended to achieve this by incorporating images (potentially pre-processed using a vision encoder) as part of observations in addition to proprioception. One challenge would be in collecting training trajectories that contain visual inputs. This would be relatively straightforward for trajectories collected using the same robot body (e.g., via prior RL or MPC controllers). To incorporate human trajectories from mocap or videos, we would need to include ego vision inputs where available (e.g., first-person videos) and use our strategy for training with missing data for trajectories without ego vision (e.g., third-person videos). > Some of the suggestions may go beyond the scope of this paper Thank you for raising these questions. We will include the discussion of capabilities, limitations, and future work in the revised version of the manuscript with an extended discussion section. --- Rebuttal Comment 2.1: Title: Added videos of the robot turning to the project page Comment: We have resolved the hosting issue with GitHub and our anonymous project page is accessible again via the link provided in the original manuscript. As requested, we have included videos of the robot turning in the real world under the “Turning in the Real World” section at the bottom of the page. Please let us know if all your concerns are now fully addressed. We would be happy to provide any additional information. 
--- Rebuttal 3: Title: Response by Authors Comment: Thank you for your response. We are glad to hear that we have addressed your remaining concerns regarding robot capabilities and appreciate your acknowledgement of our results. We are more than willing to include a more detailed section on autoregressive models for robotics. We will incorporate the suggestion in the revised manuscript. We believe that we have addressed all of your concerns from the initial review and subsequent discussions. Given this and in light of the strengths you and all other reviewers have highlighted, we respectfully ask you to consider increasing the overall score to better reflect our contributions. --- Rebuttal Comment 3.1: Comment: I believe the score I have given is appropriate. This paper represents yet another success of Next Token Prediction in a different domain, a success that has been replicated many times. There are few new techniques introduced. Essentially, papers in this paradigm primarily involve preparing data, training, and testing, without much technical complexity. I also took into account that this paper does not achieve significant breakthroughs; previous methods could also achieve tasks such as walking and turning, just not using next token prediction. A notable drawback is that the scenarios considered are particularly simple, with only 100 dimensions of the robot's body state being considered as observations. The challenge might lie in processing and unifying large amounts of data. The primary reason this paper achieves such a high score is perhaps that it effectively addresses the problem of studying humanoid locomotion using AI methods and works well in the real world. If the same approach were applied to robotic manipulation, it might not score as highly. However, the manipulation problem involves more complex considerations than humanoid locomotion, such as language instructions, visual perception, and the robot's state. 
Despite some limitations, I still believe this work is valuable because it demonstrates that next token prediction works well at such high degrees of freedom, given the appropriate data. Considering the authors' commitment to open-sourcing their work, which could significantly advance the progress of humanoid locomotion—a field that has been relatively unexplored—I believe a score of 7 is reasonable.
Summary: This work views the robot locomotion control problem as a next-token prediction problem. A causal transformer is trained autoregressively on various sources of data. The performance on a full-sized humanoid robot's locomotion indicates that this formulation can be a promising path for complex robotic control problems to incorporate diverse sources of data. Strengths: 1. This work presents a promising way to scale robotic learning in terms of data via generative modeling. 2. Real-world experiments and some metrics indicate the comparable performance of the proposed method to RL-based / model-based controllers. Weaknesses: 1. In Sec. 3.6 Model inference, the first step might not follow the statement. > At inference time, our transformer model will always have access to observation-action pairs. 2. It lacks a description of the inference frequency, which matters considering the large-scale pretraining and the real-world locomotion task. 3. It might be beneficial to ablate how different sources of the data affect the performance. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. In Table 1(c), why is there no Pred. Err. for staged training? 2. In Sec. 4.3, can the authors detail the inverse kinematics problem they solve? > In order to use these trajectories for training a robot, we solve an inverse kinematics problem to find the corresponding robot poses. 3. In Sec. 4.4, can the authors detail the filtering strategy? > Once we retarget the motion from the Internet videos to humanoid trajectories, we filter the trajectories with the low optimization cost. 4. How much Internet data is used? This seems not to be stated in the paper. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: As the authors noted, data from human videos come with the cost of being noisy, reflecting the current state of computer vision techniques. It would be helpful to elaborate on the current results of using Internet videos. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments and suggestions! Please find our responses below: > In Sec. 3.6 Model inference, the first step might not follow the statement. Thanks for pointing this out. Indeed, at the first step we have the current observation but not the action from the previous step. We use zero padding to fill in the missing actions for the initial step in both training and inference. We will clarify this in the next version of the paper. > It lacks description on the inference frequency During inference, our neural network model predicts desired joint positions as well as PD gains at 50 Hz. These are used as targets for a low-level PD controller that runs at 2000 Hz. > It might be beneficial to ablate on how different sources of the data affect the performance Thank you for the suggestion. We perform detailed ablations below:

| Data | Pred. Err. |
| -------- | ---------- |
| NN | 4.17 |
| NN+MPC | 2.83 |
| NN+MoCap | 2.66 |
| NN+Video | 3.19 |

| Data | Pred. Err. |
| ------------------ | ---------- |
| NN | 4.17 |
| NN+MPC | 2.83 |
| NN+MPC+MoCap | 2.28 |
| NN+MPC+MoCap+Video | 2.23 |

We find that each of the data sources leads to gains in performance individually (first table) and in aggregate (second table). > In Table 1(c), why no Pred. Err. for staged training? For staged training, the model is pre-trained to predict observations in the first stage, and then fine-tuned with action prediction in the second stage. Since the prediction error counts both observation prediction and action prediction, and the model is not supervised with observation prediction in the second stage, it is probably not fair to compare with the model that is trained jointly on observation and action prediction. For reference, the prediction error for staged pre-training is 42.22, which is considerably higher than the prediction error of joint training (0.88). > In Sec. 4.3, can the authors detail the inverse kinematics problem they solve? 
We formulate an inverse kinematics optimization problem as follows: $$ \begin{align} \min_{\substack{\mathbf{q}[t], \mathbf{\dot{q}}[t]}} ~ & \sum_{t=1}^{N} \varphi^{\text{traj}}[t] + \varphi^{\text{reg}}[t] \\ \text{s.t.} ~ & \mathbf{q}[t+1] = \mathbf{q}[t] + \frac{\mathbf{\dot{q}}[t+1] + \mathbf{\dot{q}}[t]}{2} dt, \\ & \mathbf{q} \in \mathcal{Q}, \mathbf{\dot{q}} \in \mathcal{V} \end{align} $$ where $\mathbf{q}$ is the robot state in the generalized coordinates, and $N$ and $dt$ are the optimization horizon and sampling time. The optimization variables are $\mathbf{q}$ and $\mathbf{\dot{q}}$. For constraints, we include the Euler integration of the posture $\mathbf{q}$ and constrain $\mathbf{q}$ and $\mathbf{\dot{q}}$ to their admissible sets $\mathcal{Q}$ and $\mathcal{V}$. In the cost function, $\varphi^{\text{traj}}$ tracks keypoint locations from human trajectories, and $\varphi^{\text{reg}}$ represents the regularization costs that include joint velocity minimization and smoothness. > In Sec. 4.4, can the authors detail on the filter strategy? We empirically set a threshold on the reconstruction loss of the inverse kinematics retargeting and keep only the trajectories whose reconstruction loss is below the threshold. > How many Internet data are used? This seems not stated in the paper. We use 1k human videos, which is the same number of trajectories we use for MoCap data. > It would be helpful to elaborate on current results of using Internet videos. Please see the additional ablations reported above. We believe that this is a promising signal for using video data. --- Rebuttal Comment 1.1: Title: Kind reminder for feedback - 2 days left for discussion Comment: Thank you again for your time and effort spent on providing a careful review of our paper. We performed new experiments on how different data sources affect performance. 
We find that (1) each of the data sources improves the performance individually and (2) that different data sources are complementary and lead to greater gains in aggregate. These detailed ablations demonstrate that our approach can benefit from different data sources. We also provided additional details on: - model inference - inference frequency - prediction errors - inverse kinematics - filtering strategy - number of videos To ensure we have fully addressed all of your concerns, please let us know if there are any specific questions or points that you feel remain unresolved or require further clarification. We would be happy to address them. Many thanks --- Rebuttal 2: Title: Thank you for the rebuttal. Comment: I thank the authors for the detailed response. My questions are mostly addressed. My remaining concern is whether the largest model will work at 50Hz, i.e., whether the increased model size from generative pretraining hinders inference. It would be great to discuss the minimum required frequency for functioning, and its relation to model size. I have raised the score according to the detailed open-source commitment of the authors. --- Rebuttal 3: Title: Thank you Comment: Thank you for your response and for increasing the score. We think that the challenge of balancing model size and inference speed in real-time settings like ours is very interesting. Please find some comments below. The minimal frequency for functioning is determined by the highly dynamic nature of the problem (unstable bipedal robot with a big upper body with a lot of mass and inertia). We might be able to reduce it a bit (e.g., 40Hz) but probably not much. Our current models are relatively small (up to 8M parameters) and can run within 50Hz on the on-board CPU computer. 
We think that it is likely that the improvements in inference software (e.g., low precision inference), on-board compute (e.g., access to a GPU), training recipes (e.g., distillation), and modeling (e.g., sparse architectures) will offset these challenges and enable us to keep using larger models within the inference speed constraints. We find the modeling angle to be a particularly interesting direction for future work. For example, one could imagine architectures with more parameters than flops, where different parameters are activated at different frequencies and different parts of the model are executed asynchronously. We will include the additional discussion and an analysis of the model size and inference speed in the revised manuscript. Thank you for the suggestions.
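As a numerical illustration of the inverse kinematics formulation given earlier in this thread, the sketch below implements the trapezoidal integration constraint $\mathbf{q}[t+1] = \mathbf{q}[t] + \frac{\mathbf{\dot{q}}[t+1] + \mathbf{\dot{q}}[t]}{2} dt$ and the cost-threshold filtering of retargeted trajectories. Function names and the threshold are hypothetical; this is a sketch of the stated constraints, not the authors' solver.

```python
import numpy as np

def integrate_posture(q0, qdot, dt):
    """Trapezoidal integration constraint from the IK problem:
    q[t+1] = q[t] + (qdot[t+1] + qdot[t]) / 2 * dt."""
    q = [np.asarray(q0, dtype=float)]
    for t in range(len(qdot) - 1):
        q.append(q[-1] + 0.5 * (qdot[t + 1] + qdot[t]) * dt)
    return np.stack(q)

def filter_by_ik_cost(trajectories, costs, threshold):
    """Keep only retargeted trajectories whose IK reconstruction cost is
    below the (hypothetical) threshold, per the filtering strategy."""
    return [traj for traj, c in zip(trajectories, costs) if c < threshold]

# Constant unit joint velocity for 10 steps at dt = 0.1 moves q from 0 to 1.
q = integrate_posture(np.zeros(2), np.ones((11, 2)), dt=0.1)
```

The integration loop enforces exactly the equality constraint in the rebuttal's optimization problem; an actual IK solver would optimize `q` and `qdot` subject to it.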
NeurIPS_2024_submissions_huggingface
2024
MVSDet: Multi-View Indoor 3D Object Detection via Efficient Plane Sweeps
Accept (poster)
Summary: This paper mainly focuses on the problem of 3D object detection from multi-view images. It introduces an MVSNet-like method for depth prediction and brings in probabilistic sampling, soft weighting, and pixel-aligned Gaussian Splatting to improve the correctness and robustness of depth prediction, especially with sparse images. This improves the projection of 2D features into 3D space and therefore the performance of 3D object detection. Strengths: This paper is clearly written. It introduces probabilistic sampling, soft weighting, and pixel-aligned Gaussian Splatting to improve MVSNet-based depth prediction and thereby the performance of 3D object detection without the need for ground-truth geometry. Weaknesses: 1. The 2D feature extractor and the detection head of this work are not clearly described. 2. The MVSNet-like depth prediction with probabilistic sampling and soft weighting, as well as the pixel-aligned Gaussian Splatting, are all common methods or from previous works; I consider the novelty to be limited. 3. Also, I do not agree with the opinion of the article that ground truth geometry for supervision at the training stage is difficult to obtain. As far as I know, to obtain supervision for 3D object detection (AABB or OBB bounding boxes), you need to use a lidar or an RGB-D camera to obtain the geometry of a 3D scene, and then label the bounding boxes. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I do not know about the 2D feature extractor and the detection head of this work; are they the same as in NeRF-Det? 2. Please further explain the novelty of your method, or the novel discovery that led you to your method. 3. I do not agree with the opinion of the article that ground truth geometry for supervision at the training stage is difficult to obtain; please further explain this claim, since your results are not better than CN-RMA and ImGeoNet, which used ground truth geometry for supervision. 4. Following my question 3, introducing ground truth geometry may cause significant time and memory consumption in training, and makes the network far more complicated, therefore increasing time and memory cost in inference. Could you give me detailed time and memory consumption of your own method and all the baselines you have run, on both training and inference stages? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[R3/Q1] Feature extractor and detection head.** Yes, they are the same as in NeRF-Det. We use ResNet50 to extract image features at multiple stages and fuse them via a feature pyramid network into the final feature map for each image. The 3D U-Net (Line 149) outputs feature maps at three scales, all of which are sent into the detection head. The detection head consists of three 3D convolutional layers for classification, location, and centerness prediction, respectively. From each voxel center at each of the three scales, the head estimates a class probability, a centerness score, and a 3D bounding box offset. Please refer to NeRF-Det for more details. **[R3/Q2.1] Limited novelty.** We do not agree. **(1)** Compared to other MVS models (which predict a single depth location as the probability-weighted sum over all depth planes), we are the first to _probabilistically sample_ multiple top-scored depth locations. As agreed by **Reviewer YYFF**, our probabilistic sampling and soft weighting show strong superiority over standard MVS models when only a few depth planes are used. **(2)** Pixel-aligned Gaussian Splatting was originally used only for novel view synthesis. We are the first to apply it in the detection task to regularize the depth prediction with light computational overhead. **[R3/Q2.2] Novel discovery leading to our method.** As shown in Fig 1 of our main paper, NeRF-Det shows many wrong backprojections of 2D pixel features to points in free space due to the inaccurate geometry learned from NeRF. Therefore, we propose to use plane sweep to better estimate geometry. Compared to NeRF, which does not have sufficient surface constraints (it only predicts a density score per point), the plane sweep approach can accurately predict the surface. 
However, the standard plane sweep method requires sampling many depth planes to estimate the depth accurately, which leads to intractable computation for our multi-view 3D detection task. To tame the computational complexity while maintaining accuracy, we propose probabilistic sampling and soft weighting together with the novel use of pixel-aligned Gaussian Splatting. **[R3/Q3] GT geometry is needed for labeling bboxes and thus should be used in supervision.** We disagree. 3D bounding boxes can be annotated _without_ using ground-truth (dense) geometry obtained from lidar or RGB-D cameras. An example is the Objectron Dataset **[1]**, which annotates the 3D bounding boxes on a _sparse point cloud_ obtained via feature tracking on an AR device, and refines the bounding boxes by re-projecting them onto the multi-view images (see Sections 3.2 and 3.3 of their paper). Due to the requirement of dense geometry to compute TSDF or surface voxels for supervision, CN-RMA and ImGeoNet are largely restricted to certain datasets such as ScanNet and ARKitScenes, where tedious, heavy postprocessing of the raw data is needed to ensure high-quality dense geometry. In contrast, our proposed method offers better versatility on datasets without dense geometry since we do not rely on the GT geometry for supervision. **[1] Objectron: A Large Scale Dataset of Object-Centric Videos in the Wild with Pose Annotations. CVPR 2021** **[R3/Q4] Time and memory comparison.** **Tab. R3-ZuP3/Q4** of the attached PDF shows the comparison of time and memory in the training and testing stages on ScanNet. We omit comparison with ImGeoNet since it does not release any code. All models are run on 2 A6000 GPUs. Due to the complexity of CN-RMA (as mentioned in Sec 3.5 of their paper), it requires much longer time to train and evaluate than the other models. 
Furthermore, CN-RMA consumes much more memory in the training stage because it requires joint end-to-end training of the 3D reconstruction and detection network. Although NeRF-Det is efficient in time and memory during both training and testing, its performance is much worse than ours as shown in Tables 1 and 2 of our main paper. --- Rebuttal Comment 1.1: Comment: The rebuttal clearly addresses my questions. While its performance does not surpass CN-RMA, it is efficient in terms of time and memory cost during both training and testing stages. I strongly recommend including Tab.R3-ZuP3/Q4 in the final version. I would like to raise my rating.
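The probabilistic sampling and soft weighting defended in the rebuttal above can be illustrated with a minimal sketch: per pixel, a softmax over depth-plane scores is computed, the top-k planes are kept, and their probabilities are renormalized into weights for feature backprojection. All names and shapes are assumed for illustration; this is not the authors' code and omits the depth offset refinement.

```python
import numpy as np

def soft_topk_depths(logits, depth_planes, k=3):
    """Per pixel: softmax over depth-plane scores, keep the k most probable
    planes, renormalize their probabilities into soft weights."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)           # (pixels, planes)
    top = np.argsort(probs, axis=-1)[:, ::-1][:, :k]    # indices, best first
    w = np.take_along_axis(probs, top, axis=-1)
    w = w / w.sum(axis=-1, keepdims=True)               # soft weights sum to 1
    return depth_planes[top], w

rng = np.random.default_rng(1)
scores = rng.standard_normal((10, 8))    # 10 pixels, 8 depth planes
planes = np.linspace(0.5, 4.0, 8)        # candidate depths in meters
depths, weights = soft_topk_depths(scores, planes)
```

Only the k selected depth locations per pixel receive the pixel feature (weighted by `weights`), which is why the memory cost stays low even with few depth planes.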
Summary: The manuscript proposes MVSDet, a multiview 3d object detection model that is evaluated on indoor scene datasets. Multiview information is lifted to 3D via an efficient per-frame depth sampling scheme. The most probable top-k depth values per pixel are used to lift 2D features into a global feature volume in a weighted way. Based on the accumulated 3d feature volume, a 3d network regresses 3d bounding box parameters. In order to regularize the depth regression, which is essential for constructing an occlusion-aware 3d feature volume, the manuscript proposes leveraging pixel-aligned Gaussian splats to construct a rendering loss against nearby views. Both the probabilistic depth sampling and the rendering loss are shown to contribute significantly to the performance of the model. Overall the model outperforms other models that do not use GT depth supervision during training. Strengths: The probabilistic depth estimation shows strong performance improvement over the dense planesweep depth approach when considering memory. Tab 3 is very effective in convincing me of the need for the probabilistic depth sampling. I do wonder how the numbers change without the regressed depth offset correction? Showing that a Gaussian Splat-based rendering loss supports the performance of the object detection model is useful since not all 3d object detection datasets have surface GT for training. Tab 4 shows a small improvement for the more strict mAP threshold. These two contributions are strongly supported by the ablation studies in the experiment section and useful to be shared with the community. Weaknesses: I do wonder how large the difference is when supervising with GT depth instead of Gaussian splats. (The experiment in Fig 6 is similar but not quite the same since placing features at GT depth locations does not need the probabilistic depth model to regress the depth). 
This last experiment might drive home the effectiveness of Gaussian splats and allow direct comparison to the ImGeoNet and CN-RMA related works in Tab 1 and 2. The writing and illustrations are mostly clear and support the understanding of the manuscript. There are quite a few open questions (see questions) that should be addressed to improve the presentation of the method. Technical Quality: 3 Clarity: 2 Questions for Authors: - Fig 1: it is kind of hard to see what is going on with the red points in the renderings. - Fig 2: "ray intersects at 3 points (shown as dots)" -- do you mean as red triangles? - Fig 2: the arrows in the gaussian splats orange box of the diagram are not very clear. This could be polished more. - l 151: it is unclear what is meant by 27 location candidates being selected for each target object. - l 184: how is the depth offset predicted in more detail, and how is it used during sampling? Is it used for example in Eq 8 to adjust the depth planes per pixel? - an ablation that supervises the proposed model with GT depth instead of Gaussian splat rendering? - Tab 4: is the memory change per batch (I assume, since B=1)? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations are addressed adequately in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
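For concreteness, the per-pixel top-k depth sampling and soft weighting that the summary describes could be sketched as below; the array names, shapes, and renormalization choice are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def topk_depth_samples(depth_probs, depth_planes, k=3):
    """Pick the k most probable depth values per pixel and renormalize
    their probabilities into soft weights for lifting 2D features.

    depth_probs:  (H, W, D) per-pixel scores over D depth planes.
    depth_planes: (D,) candidate depth values.
    Shapes and names are illustrative, not MVSDet's API.
    """
    idx = np.argsort(depth_probs, axis=-1)[..., -k:]           # (H, W, k)
    weights = np.take_along_axis(depth_probs, idx, axis=-1)    # (H, W, k)
    weights = weights / weights.sum(axis=-1, keepdims=True)    # soft weights
    return depth_planes[idx], weights                          # (H, W, k) each
```

Each of the k lifted features would then be accumulated into the voxel nearest to its sampled depth, scaled by the corresponding soft weight.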
Rebuttal 1: Rebuttal: **[R2/Q1] No depth offset and how to use depth offset.** The first row of **Tab. R2-YYFF/Q1** of attached PDF shows the result of removing the depth offset, which is worse than our model. Please refer to **R1/Q6** on how to predict and use the depth offset. **[R2/Q2] Use GT depth as supervision.** **Tab. R2-YYFF/Q2** of attached PDF shows the ablation study of replacing Gaussian splats with ground truth depth supervision. The performance of our model is very close to using ground truth depth as supervision, which strongly verifies the effectiveness of Gaussian Splats. In addition, the 'GT Depth' model still cannot directly be compared with ImGeoNet or CN-RMA because the two models require dense 3D geometry for supervision, which needs tedious 3D reconstruction procedures on top of the raw RGB-D data. **[R2/Q3] Red points in Fig 1.** The red points are the voxel centers of the 3D volume for detection (see Sec 3.1 in our main paper). They never appear in the rendering branch. The rendering is done by splatting the Gaussian primitives predicted from the selected nearby views. **[R2/Q4,Q5] Fig 2.** The red triangles denote the depth locations selected by our probabilistic sampling. The three points (one green and two red) are the intersections of the ray in the 3D volume. Only the green point receives the corresponding pixel feature because it resides near the selected depth locations. The red points are the invalid backprojection locations. We will refine Fig 2 in the final version. **[R2/Q6] 27 locations.** Following NeRF-Det and ImVoxelNet, we apply center sampling to select candidate voxels in the 3D volume that are responsible for regressing each target object. For each target object, we sample the 27 voxel locations that are closest to the target object center. We will add this clarification in the final paper. **[R2/Q7] Memory change per batch.** Yes. We set batch size to 1 and Tab 4 of the main paper shows the memory change per batch.
--- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions and the two evaluations that support the claims in the paper (depth offset helps, GS rendering is nearly the same as GT depth supervision). From looking at the other reviews I don't see any other problems that were not addressed. The consensus seems to move to weak accept, which I support.
Summary: This work presents a method for multi-view 3d object detection. The method computes an MVS cost volume using a few planes, then samples k likely depth values per pixel and builds a 3d feature volume based on the voxels close to the sampled depth values, weighted by their confidence. Additionally, during training pixel-aligned Gaussian splatting (GS) is used to provide an additional rendering loss to guide the depth estimation. Strengths: The proposed 3d feature volume construction seems to outperform the existing baselines. The use of GS during training seems to also slightly improve the 3d object detection. Weaknesses: It is unclear how the nearby views are selected. Is it the same as in the existing works? To evaluate the contribution of the probabilistic sampling alone it would be good to add a line in table 3 without probabilistic sampling and soft weighting. What is the benefit of using the PAGS rendering loss over the classical MVS photometric losses? Technical Quality: 3 Clarity: 3 Questions for Authors: Why does it not improve with more planes? Is the 3d grid too coarse? It seems that 2 views are used for MVS, but 3 for GS. Why not use more? How is the depth offset predicted? Like in MVSNet? Is it ever used for the object detection? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations were mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[R1/Q1] How to select nearby views.** We compute the Euclidean distance between the camera location of the reference view and the input views to find the nearby views. **[R1/Q2] No probabilistic sampling or soft weighting.** By comparing Rows 2 and 3 of Tab 3 in our paper, we can already evaluate the effectiveness of probabilistic sampling, as it shows a severe performance decrease without probabilistic sampling. We also conduct an experiment of removing probabilistic sampling and soft weighting in **Tab. R1-N557/Q2** (attached PDF). Compared with Row 2 of the same table, not using probabilistic sampling causes a large detection performance drop, which again verifies the effectiveness of probabilistic sampling. **[R1/Q3] Compare with photometric loss.** The first row of **Tab. R1-N557/Q3** (attached PDF) shows the result of replacing PAGS with the photometric loss. The photometric loss assumes consistent pixel colors across nearby views and would fail in the case of occlusion (e.g., cluttered objects) or non-Lambertian surfaces (e.g., the varnish layer of furniture), which are common in indoor scenes. Consequently, it only brings marginal improvement. In contrast, PAGS can reconstruct the scene with 3D Gaussians, and the spherical harmonics of each Gaussian model the view-dependent colors. **[R1/Q4] Why not improve with more planes?** We conjecture the saturation of performance at mAP@.25 vs. the increase of performance at mAP@.5 from 12 to 16 planes in **Tab. R1-N557/Q4** (attached PDF) is likely due to the less strict evaluation at mAP@.25, which requires less accurate bounding box localization and consequently does not gain more information from the increase in the number of depth planes. **[R1/Q5] Why not use more views?** We follow MVSNet[23] to use 2 views for MVS. We set 3 views in GS because we empirically find 3 nearby views are enough to cover the local area, and adding more views does not bring significant performance gains, as shown in **Tab.
R1-N557/Q5** of attached PDF. 'GS=5' means using 5 views for GS and 2 views for MVS. **[R1/Q6] How to predict and use depth offset.** The depth offset is predicted through an MLP on top of the refined cost volume for each depth bin. Each offset is added to its corresponding depth bin to adjust the discrete depth planes. They are used in the probabilistic sampling of $\text{d}_{\text{idx}_k}$ (Line 191) and the depth map D (Eqn 8) of Gaussian Splatting. --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: Thank you for your detailed answers and additional evaluations. As all of my questions have been answered, I would consider raising my rating to weak accept if no further discussions arise.
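The nearby-view selection described in R1/Q1 (smallest Euclidean distance between camera centers) reduces to a few lines; the function name and array layout below are assumptions for illustration, not the released code.

```python
import numpy as np

def select_nearby_views(ref_center, cam_centers, num_views=2):
    """Return indices of the num_views candidate views whose camera
    centers are closest (Euclidean) to the reference view's center.

    ref_center:  (3,) camera location of the reference view.
    cam_centers: (N, 3) camera locations of the candidate input views.
    """
    dists = np.linalg.norm(cam_centers - ref_center, axis=1)
    return np.argsort(dists)[:num_views]
```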
Rebuttal 1: Rebuttal: We thank all reviewers for their affirmation of the effectiveness of the proposed probabilistic sampling and soft weighting and the use of Gaussian Splatting for multi-view 3D object detection without using ground-truth geometry as supervision. We strongly agree with **Reviewer YYFF** that the two contributions deserve to be shared with the community. **All the experiment tables are included in the attached PDF**. Pdf: /pdf/f9295501fdbd40a66b6c2b7fb0843b7548f727f2.pdf
NeurIPS_2024_submissions_huggingface
2024
SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning
Accept (poster)
Summary: This paper explores the problem of Shapley-based data selection for instruction tuning. Specifically, the proposed approach is composed of three steps: clustering the target samples, Shapley-style evaluation for each cluster, and resampling from the clusters based on the Shapley scores. The paper validates its approach on two instruction tuning benchmarks and obtains favorable results against several baselines. Strengths: The paper is clearly written and very easy to read. The problem is well-contextualized and well-formulated. The structure is complete and the language is neat. Evaluations are standard and a number of baselines are considered. Weaknesses: The paper is probably a bit simple for a full publication at NeurIPS. It has concrete merits; still, I have concerns about several aspects of its contributions. 1. The paper adopts the expensive Shapley approach for data selection, citing its advantage in factoring in the combinatorial effects between samples. Yet, after estimating the Shapley score of the clusters, the final data selection is only conducted via resampling based on the score or ranking of the clusters, where the combinatorial effect between clusters is also not considered. In this sense, I am not sure whether using the costly Shapley approach delivers actual benefits. A grid search over the sampling ratio from each cluster could have better results. 2. Instruction mining has been "extensively studied" during the past year, and a wealth of papers have emerged after [LIMA: Less is more for alignment]. Most of those papers claim to reach comparable performance using just a few percent of the original instruction tuning dataset. Yet, none of these papers are included as a baseline in this work. For the baselines compared within this work, DSIR is for pre-training data selection, where its scalability and computation overhead are not at all comparable with the proposed method (and similarly for DQ). 3.
It is very well known that Shapley-style evaluations are notably expensive, as they rely on repetitive model retraining. It is important to benchmark the computation overhead of the proposed method. 4. The paper claims the proposed method is suitable for any objectives. This very strong argument is not validated in this paper. For many tasks this may not be true. For example, its computational complexity is prohibitive for pre-training data selection. Also, this approach only applies to cluster-level data selection but not to sample-level selection. Its effectiveness seems highly dependent on the embedding space used for clustering. This work uses Sentence-Transformer in the experiments. It may not work for other tasks. At least, these major limitations are not discussed in this paper. 5. As mentioned before, a notable number of methods have been proposed for the problem of instruction mining. This paper does not clearly identify a research gap to fill. A similar Shapley-style approach, TS-DSHAPLEY, has already been proposed. Experiments conducted in this paper are also limited. In general, I have concerns about the novelty and contribution of this paper. Technical Quality: 2 Clarity: 2 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: The paper claims the proposed method is suitable for any objectives. This very strong argument is not validated in this paper. For many tasks this may not be true. For example, its computational complexity is prohibitive for pre-training data selection. Also, this approach only applies to cluster-level data selection but not to sample-level selection. Its effectiveness seems highly dependent on the embedding space used for clustering. This work uses Sentence-Transformer in the experiments. It may not work for other tasks. At least, these major limitations are not discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
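To make the cost point concrete: exact Shapley values require all 2^|D| subsets, and even the standard permutation-sampling estimator still pays one utility evaluation per marginal contribution, which in data valuation usually means a model (re)training per step. A generic sketch of that estimator (not SHED's proxy-based variant) over an arbitrary `value_fn`:

```python
import random

def monte_carlo_shapley(items, value_fn, num_permutations=200, seed=0):
    """Permutation-sampling estimate of Shapley values.

    value_fn(frozenset) -> utility. In data valuation this is usually
    'train on the subset, score on a held-out set', so every call is a
    retraining run -- the cost the review is pointing at.
    """
    rng = random.Random(seed)
    shapley = {i: 0.0 for i in items}
    for _ in range(num_permutations):
        perm = list(items)
        rng.shuffle(perm)
        subset, prev_value = [], value_fn(frozenset())
        for i in perm:
            subset.append(i)
            value = value_fn(frozenset(subset))
            shapley[i] += value - prev_value  # marginal contribution of i
            prev_value = value
    return {i: total / num_permutations for i, total in shapley.items()}
```

With |items| data points and P permutations, this costs about |items| * P utility evaluations, which is why SHED computes values only for a small number of cluster proxies.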
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. **W.1:** We appreciate the reviewer's concern but note a possible misunderstanding about SHED. SHED does consider the combinatorial effects between clusters: - **Combinatorial Effects:** SHED calculates Shapley Values (SVs) for cluster proxies, reducing computational cost while capturing inter-cluster relationships. Each cluster is represented by a proxy data point (closest to the centroid). **SVs are calculated by considering possible combinations of these proxies, thus considering interactions between clusters.** The SV compresses the combinatorial space into a single score, encapsulating individual importance, synergies, and potential redundancies. - **Score-based Selection:** The selection step uses these scores, which already encapsulate combinatorial effects. **A high SV indicates a high likelihood of positive impact when combined with instances from other clusters.** Grid search for data selection incurs high costs: if there are C clusters, the sampling ratio takes x values, and training time is T, grid search's complexity is O(Tx^C), scaling exponentially with C, which is impractical. In contrast, SHED's complexity is O((Ck/n)[(C+n)t/2 + T_m]). As Figure 3 shows, SHED can scale linearly with the number of clusters by setting C/n to a fixed value, making it feasible for large datasets. Thus, SHED offers a significant computational advantage over grid search. **W.2:** We appreciate the reviewer's concern. To address this, we compared SHED with LIMA and Cherry-LLM: **LIMA:** - MMLU: SHED (1k) - 41.71 vs. LIMA (1k) - 34.9 - ARC: SHED (1k) - 48.12 vs. LIMA (1k) - 46.16 - Note: SHED is fully automated, unlike LIMA's manual curation. **Cherry-LLM (using WizardLM):** - MMLU: SHED (1k) - 33.62, SHED (7k) - 35.63 vs. Cherry (7k) - 33.08 - ARC: SHED (1k) - 51.36, SHED (7k) - 50.11 vs.
Cherry (7k) - 52.90 **SHED achieves comparable or better performance with less data.** Since SHED used MMLU as the test set during data selection, its performance on ARC could be improved if ARC were used as the test set during selection, showing SHED's effectiveness for task-specific data selection. Baseline Criteria: We use DSIR because it, like SHED, selects data by importance. DSIR samples data based on estimated importance, similar to SHED's use of SVs. We included DQ as it was also used for fine-tuning in its paper. We avoided methods requiring human or LLM evaluations due to high costs. **W.3:** We appreciate the reviewer's concern about SHED's computational cost and would like to clarify: **Transferability & Cost Amortization:** - **One-time Computation for Multiple Uses:** SHED-refined datasets perform well across various model sizes and architectures. This transferability makes them reusable for different models or tasks. **This one-time computational investment ensures long-term efficiency, reducing the need for repeated data selection. SHED's amortized cost over several models significantly lowers the per-use cost.** - **Comparison with Non-transferable Methods:** Unlike methods requiring computation for each new model or task, SHED's transferability **eliminates recurrent computational expenses.** - **Efficiency Gains:** SHED-refined datasets lead to comparable or superior model performance with smaller datasets, directly translating to lower computational and time costs in the fine-tuning phase. **Efficiency Design:** SHED reduces the computational cost by approximating SVs for cluster proxies instead of individual data points. Our analysis and experiments show its complexity can grow linearly with the number of clusters, not exponentially with dataset size. **Benchmark:** Figure 3 (c) shows the time for SV calculations across different numbers of clusters, which also indicates SHED's scalability.
**W.4:** We appreciate the reviewer's comment but believe there may be a misunderstanding of our claims: - **Clarification of Objectives:** As highlighted in our paper, **SHED focuses on fine-tuning.** However, Figure 3 (c) shows linear growth in time, suggesting potential for large-scale pre-training data selection. SHED is adaptable to various **fine-tuning objectives**, such as task-specific performance, model fairness, and domain focus, by modifying the value function in the SV. **This adaptability is intended within the fine-tuning context.** - **Not Pure Cluster-Level Selection:** While SHED uses clustering to reduce computational complexity, it does not solely rely on cluster-level selection. Instead, it considers individual samples within clusters. The final selection step, QWCS, allows for the selection of individual samples. - **Embeddings:** We use Sentence-Transformer, which is well-suited for NLP tasks and produces semantically meaningful embeddings. Trained on a large and diverse corpus, these embeddings are robust across various NLP tasks. SHED's framework can adapt to different tasks with suitable embeddings. **W.5:** We emphasized SHED's advantages over TS-DSHAPLEY in Section 2.3, including lower computational overhead, transferability, a flexible framework, and data diversity. The unavailability of TS-DSHAPLEY code and datasets, along with significant computational demands, poses challenges for comparison. Nonetheless, the results presented in our paper demonstrate the superiority of SHED: - **Efficiency:** Figure 3 (c) shows SHED's computational cost scales linearly with the number of clusters, **while TS-DSHAPLEY's cost scales with data size.** - **Transferability:** Tables 6 and 7 show SHED's robust performance across different models, underscoring its transferability. TS-DSHAPLEY lacks evidence of effectiveness across varied models.
- **Human alignment:** Table 5 shows SHED-refined datasets not only enhance accuracy but also align well with human preferences. TS-DSHAPLEY does not evaluate human preference alignment, leaving it unclear whether its selected datasets improve the model’s ability to follow human instructions. --- Rebuttal Comment 1.1: Title: Thanks for the responses. Comment: Thanks for the responses. I have carefully read the rebuttal and other reviews. I appreciate the authors' effort, yet most of my opinions remain. I don't think there is a particular, critical flaw in the paper. It is a nice work that I would like to see in some venues, (e.g., as a short paper or at an ACL conference). Given the existence of similar works such as TS-DSHAPLEY, I don't think it is in the best interest to publish it at NeurIPS. --- Rebuttal 2: Comment: Thank you for carefully reading our rebuttal and providing your feedback. **We greatly appreciate your recognition that our paper does not contain any technical flaws. However, we are concerned about the score of 3, which is typically reserved for papers with significant technical issues or weak evaluations.** Since you acknowledged there are no critical flaws, a score of 3 seems inconsistent with the paper's merit. We would like to further clarify the distinctions and advantages of SHED compared to TS-DSHAPLEY: TS-DSHAPLEY randomly samples multiple subsets of the training data, estimates the contributions of data points in these subsets, and then repeats the process multiple times and aggregates the results to estimate the Shapley values for the entire training set. Notably, this method will calculate the Shapley value for all instances in the training set. Advantages of SHED: - **Improved computational efficiency:** SHED only computes Shapley values for representative proxy data samples selected from clusters, rather than for each individual data point as in TS-DSHAPLEY. 
This dramatically reduces the computational overhead, making SHED more scalable to large datasets. - **Model-agnostic data representation:** SHED uses model-agnostic sentence embeddings (e.g., from Sentence Transformers) to represent data samples, while TS-DSHAPLEY relies on representations extracted from the target language model. The model-agnostic approach enhances the transferability of the curated datasets across different language models, amortizing the computational cost of data selection. Table 7 showcases the transfer performance of SHED-curated datasets across different models. SHED is trained on the MMLU and WizardLM datasets using the LLaMA-7B model, and the resulting optimal subsets are used to fine-tune various models, including LLaMA-13B, Vicuna-7B, and GPT-2. The results demonstrate that SHED-constructed datasets exhibit robust performance across a range of models, confirming their applicability across models and even different model families. This strongly supports the transferability of SHED datasets. - **Clustering-based data sampling:** SHED employs clustering algorithms to group similar data samples and selects representative proxy data for each cluster. This helps capture the diversity and complexity of the original dataset while reducing redundancy. TS-DSHAPLEY does not explicitly consider data diversity in its sampling process. - **Flexible optimization objectives:** SHED's value function in the Shapley value calculation can be customized for various optimization objectives, such as accuracy and fairness, allowing the curated datasets to align with task-specific requirements. TS-DSHAPLEY focuses primarily on model predictive accuracy. - **Unified and extensible framework:** SHED presents a unified framework that integrates clustering, Shapley value computation, and data sampling, with the flexibility to accommodate different clustering algorithms, optimization objectives, and sampling strategies. 
This makes SHED more adaptable to various scenarios than TS-DSHAPLEY. We hope this clarification helps to further understand the unique contributions and strengths of SHED, and we kindly request reconsideration of the score in light of this information. **The existence of a related but less effective method should not be grounds for rejection.** Thank you once again for your feedback and consideration. --- Rebuttal Comment 2.1: Title: Thanks for the response. Comment: Thanks to the authors for the response. Let me explain where I feel this work can be improved. "Model-agnostic data representation" relies on the quality of the embedding space. Compared with random sampling, the effectiveness of clustering also depends on the embedding quality. Sentence-Transformer often works quite nicely despite being task-agnostic. This has been witnessed in many works. But in the larger picture, how we move forward from here remains unclear. On tasks where Sentence-Transformer cannot capture the nuances, what are we going to do? "Clustering-based data sampling" is a standard practice for reducing computation complexity when the scale of the problem becomes an issue. For example, [Gio: Gradient information optimization for training dataset selection] also implemented the idea of clustering-based data selection. The work strives for "Flexible optimization objectives" and a "Unified and extensible framework". But the development of this paper concentrates on "selecting instruction samples". First, selecting instruction samples is not a particularly computationally intensive task such that many methodologies can be applied. There is now a wealth of papers on this topic. Since the overall size of these instruction-tuning datasets is typically not very large, there has been a constant debate on whether research should focus on "selecting a subset of instruction-tuning examples".
Given that this is a crowded field, the work could greatly improve its impact if extended to broader applications beyond "instruction mining". --- Reply to Comment 2.1.1: Title: Thank you for your feedback Comment: Thank you for your thoughtful feedback. We appreciate the points raised, and would like to address each one in turn: 1. Embedding Quality and Model-Agnostic Data Representation: - While there may be edge cases where Sentence-Transformer does not fully capture nuances, **SHED's flexible framework allows for integrating alternative or task-specific embeddings, which can be chosen based on the task's particular needs.** For example, CodeBERT can replace the Sentence-Transformer for tasks involving code understanding. In addition, if a task requires nuanced domain-specific understanding, the Sentence-Transformer can be fine-tuned to these needs, similar to LLaVA's approach, at only a small cost. This adaptability ensures that SHED remains effective across a diverse set of tasks, even beyond the current scope. - **However, if one insists on a truly "perfect" task-agnostic model, even widely-used powerful models like LLaMA3 cannot claim to be suitable for every possible task. The task-agnostic nature we discuss is within a reasonable and practical range.** 2. Clustering-Based Data Sampling: While clustering is commonly used to reduce computational complexity, **SHED's innovation lies in considering combinatorial effects between data points, model-agnostic data selection, reduced computation cost, and a flexible framework.** Combining clustering with Shapley value calculations allows us to account for the combinatorial effects between data points, enhancing the selection process by considering their interactions. The transferability of SHED-refined datasets is the key advantage over other methods.
Additionally, while the classic Shapley value calculation is impractical, SHED significantly reduces computational costs by utilizing an approximation method and only calculating Shapley values for cluster proxies rather than individual data points, making it both efficient and scalable. Its flexible framework design gives SHED the potential to be applied to various tasks. **We believe that utilizing well-established methods as a part of SHED should not be grounds for rejection.** 3. Importance of Instruction Selection - **The need for instruction-tuning data selection is particularly important as fine-tuning datasets grow increasingly large and complex.** For example, datasets like P3, which includes over 100 million examples, the Pile with 825 GB of text, and LAION-400M with 400 million image-text pairs, illustrate the scale at which fine-tuning can occur. Efficient, scalable methods like SHED are essential for managing such large datasets. **Besides, as the theoretical and experimental analysis shows, our approach is designed to scale to much larger datasets, proving its value in broader applications.** - **Furthermore, most researchers lack the resources to support pre-training from scratch, making fine-tuning an essential strategy.** SHED-selected datasets have demonstrated strong performance across multiple models, and releasing these curated datasets to the community could provide a significant contribution, helping researchers save substantial resources. - **The field of instruction-tuning data selection may indeed be crowded, but the crowdedness does not diminish the value of contributions that address real and pressing challenges. The computational advantages, transferability, and flexible design of SHED make it a valuable tool in this area.** Moreover, as instruction-tuning datasets continue to grow in size and complexity, the need for efficient, scalable data selection methods like SHED will only increase.
By providing a framework that is both efficient and adaptable, we believe SHED makes a meaningful contribution to the field, with potential applications extending well beyond the current scope.
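As one illustrative instantiation of the score-based resampling discussed in this thread, clusters could be drawn in proportion to a softmax over their Shapley scores, with members picked uniformly within each drawn cluster. This is an assumption-laden sketch, not the paper's exact QWCS rule.

```python
import numpy as np

def quality_weighted_sample(labels, cluster_scores, budget, seed=0):
    """Sample `budget` instance indices: clusters proportional to a
    softmax of their Shapley scores, members uniform within a cluster.

    labels:         (N,) cluster id per data point.
    cluster_scores: (C,) Shapley score per cluster id, aligned with
                    np.unique(labels). The softmax weighting is an
                    illustrative choice, not SHED's documented rule.
    """
    rng = np.random.default_rng(seed)
    clusters = np.unique(labels)
    w = np.exp(cluster_scores - cluster_scores.max())  # stable softmax
    w = w / w.sum()
    picks = []
    for c in rng.choice(clusters, size=budget, p=w):
        members = np.flatnonzero(labels == c)
        picks.append(int(rng.choice(members)))
    return picks
```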
Summary: The paper proposes a data refinement framework that refines datasets for fine-tuning LLMs by using the Shapley value. Based on the description, SHED is able to create smaller, high-quality datasets from large, extensive datasets without human intervention or commercial LLMs. This process involves three key components: model-agnostic clustering, a proxy-based Shapley calculator, and optimization-aware sampling. Extensive experiments show that datasets curated by SHED achieve comparable or superior performance to full datasets while significantly reducing the data size and computational costs. Moreover, SHED-curated datasets exhibit strong transferability across various LLMs. Strengths: 1. The studied problem is critical, as the proposed method is general enough to create small yet high-quality datasets that benefit LLM fine-tuning in terms of performance improvement and computational cost reduction. 2. The experiments are extensive, covering multiple datasets and fine-tuning tasks. Weaknesses: 1. While SHED reduces the computational complexity of calculating Shapley values based on clustering and proxy-based calculations, it is unclear how efficient (e.g., in terms of time) it is compared to the classical calculation method. 2. As the authors illustrated, the reliance on clustering may inadvertently reduce data diversity, which can result in overlooking rare but important samples. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Is it possible to evaluate SHED on a larger, industry-scale dataset, considering that the datasets used in the paper are still relatively small? 2. How will the selection of initial clustering groups affect the final model performance? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: No potential negative societal impact is observed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We'd like to thank the reviewer for the insightful and positive feedback. We are encouraged that the reviewer found our work meaningful, novel, and effective. We also thank the reviewer for the thoughtful questions and constructive suggestions; our responses are below. **W.1:** We appreciate the reviewer's observation regarding the efficiency of SHED compared to classical Shapley value calculation methods. In our paper, **we provide a detailed analysis of SHED's computational complexity and actual runtime. As shown in Figure 3 (c), we present the computational time for Shapley value calculations across different cluster numbers. This graph clearly illustrates how SHED's runtime scales with the number of clusters, providing concrete evidence of its computational efficiency.** In contrast, for the classical Shapley value calculation method, the time complexity is exponential in the size of the dataset. Given our dataset size is about 100,000 instances, this results in a computational cost that is prohibitively large. As a result, it is not feasible to provide actual runtime measurements for the classical method on our dataset. However, we can analyze the efficiency of SHED compared with the classic Shapley value method under the experimental settings of this paper. **For the classical Shapley value calculation** method with a dataset of |D| = 100,000 instances: We need to consider all possible subsets (2^|D|). For each subset S, we fine-tune the model (taking time |S|t, where |S| is the size of the subset) and evaluate it on the test set (taking time T_m). Therefore, **the time complexity would be: O(2^|D| * (|D|t/2 + T_m)) = O(2^100,000 * (50,000t + T_m)). This is computationally infeasible.** In contrast, **SHED's complexity, as detailed in our paper's discussion section, is: O((Ck/n)[(C+n)t/2 + T_m]) = O(500 * [1530t + T_m]).
This demonstrates a significant reduction in computational complexity compared to the classical method.** **W.2:** We appreciate the reviewer's insightful observation regarding the potential limitation of our clustering approach in SHED. **Our use of clustering and proxy data points is a trade-off we make to significantly reduce computational complexity while still capturing the overall data distribution effectively.** However, **we have taken steps to mitigate the overlooking issue.** The Quality-Weighted Cluster Sampling (QWCS) variant of our method allows for more diverse sampling. As shown in our experiments, despite this potential limitation, SHED still outperforms other methods across various tasks, suggesting that **the benefits of our approach outweigh this drawback in practice.** Nevertheless, we agree that this is an important area for improvement. In our future work, we plan to explore more sophisticated clustering algorithms and sampling strategies that could better preserve rare but important samples. We thank the reviewer for highlighting this important aspect, which will help us improve our method. **Q.1:** We appreciate the reviewer's question about evaluating SHED on industry-level datasets. We'd like to address this point: - **Scalability:** SHED's design, particularly its use of clustering and proxy data points, makes it **inherently scalable to larger datasets. By setting the ratio C/n to a fixed value, as done in the experimental setup, the computational complexity of SHED grows linearly with the number of clusters**, not exponentially with the dataset size like traditional Shapley value calculations. - **Current Limitations:** Our current experiments were constrained by the computational resources available to us in an academic setting. Our current computational resources are limited and insufficient to conduct experiments on industry-scale datasets within a short timeframe. 
In the future, **we plan to rent additional server resources to apply SHED on much larger datasets. We are committed to open-sourcing the curated datasets from these experiments, contributing valuable resources to the research community. These efforts will further validate SHED's effectiveness at industrial scales and support broader research.** We thank the reviewer for this suggestion, which allows us to emphasize our commitment to contributing to the research community despite current resource constraints. **Q.2:** We appreciate the reviewer's insightful question about the impact of initial clustering on SHED's performance. The initial clustering step in SHED plays a significant role as it determines the proxy data points used for Shapley value calculation. Our method employs several strategies to ensure robustness to variations in initial clustering: - **Semantic-preserving embeddings:** We use Sentence Transformers to generate embeddings, which effectively capture semantic similarities across various domains. This helps ensure that semantically similar data points are likely to be clustered together, even if the exact cluster boundaries may vary. - **Multiple initializations:** In fact, we conducted multiple experiments with different random initializations to verify the robustness. **We found that the number of clusters is a factor that affects the performance.** Therefore, we conducted a sensitivity analysis of the number of clusters. Our sensitivity analysis (Figure 3) shows that performance improvements plateau when the number of clusters exceeds 3√|D|. This suggests that **as long as we choose a sufficiently large number of clusters, the exact choice is less critical.** - **Shapley value calculation:** This step considers the contribution of each proxy data point across multiple subsets, which helps to average out some of the variability introduced by clustering.
- **Optimization-aware sampling:** The final selection step considers the Shapley values of all clusters, providing another layer of robustness against individual cluster variations. We thank the reviewer for this valuable question, which helps us clarify the robustness of SHED. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. They partially address my question related to the computation time of SHED. I have updated my scores.
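The exact-versus-approximate complexity argument in W.1 above can be illustrated with a small, self-contained sketch. The `utility` function here is a toy stand-in for fine-tuning and evaluating a model on a subset; it is purely illustrative and not the paper's actual pipeline.

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, utility):
    """Exact Shapley values by enumerating all subsets of the other players.

    The subset enumeration is what makes the classical method infeasible:
    for |D| = 100,000 data points this would visit on the order of
    2^100,000 subsets, each requiring a fine-tune-and-evaluate step.
    """
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        phi = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += w * (utility(set(S) | {i}) - utility(set(S)))
        values[i] = phi
    return values

# Toy utility with diminishing returns in subset size (a stand-in
# for validation accuracy after fine-tuning on the subset).
utility = lambda S: len(S) ** 0.5
vals = exact_shapley([0, 1, 2], utility)
```

Because the three toy players are symmetric, each receives one third of the total utility; the point of the sketch is only that the inner loops scale as 2^n, motivating the cluster-level approximation.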
Summary: Tuning large language models to domain tasks is a difficult challenge requiring a dataset of high quality examples. Since noisy examples can significantly degrade performance, it is important to be able to curate these small, high-quality fine-tuning datasets. The authors propose filtering using each data point’s estimated contribution to the model performance using Shapley values. The proposed SHED method clusters the data, then iteratively deletes clusters based on the Shapley values of representative samples until a small dataset remains. Strengths: - The topic of the paper is well-motivated. It is known that large models need high quality data during the fine-tuning process, and are sensitive to noisy data in this stage. High quality data can be prohibitively expensive to collect - Shapley values are a known and well-studied method to assess the importance of training points to a model’s predictions - Unlike previous data selection methods employing Shapley values (TS-DSHAPLEY), SHED uses clustering and representative sample selection to significantly reduce the computational cost of data subselection. SHED also considers different target objectives (e.g. fairness), although it is not clear how easy it would be to adapt other methods to this objective. This is an interesting novelty that can help make Shapley values more practically useful in this data selection setting. - Experiments are conducted on good language model eval datasets (e.g., MMLU) - Results show that datasets filtered via SHED achieve improved performance compared to other sampling methods, and also compared to full dataset fine-tuning - Results are shown across a variety of evaluation datasets. Weaknesses: - The related work leaves room for improvement. In particular, the primary focus of the paper is a method that uses the estimated effect of each data point on accuracy to filter data points. Yet, the discussion of such related methods is lacking. Some recent works (e.g.
DsDm https://arxiv.org/abs/2401.12926) also use the impact of individual data points to subselect datasets. - SHED uses a proxy data point (closest to the centroid) to estimate the predictive influence of the entire cluster. While this saves significant computational cost, the effectiveness of this stage depends significantly on the quality of the text embeddings used to compute similarity, as well as the size and number of clusters - The experimental training methodology leaves room for improvement. All methods are stated to train for the same number of epochs and same other hyperparameters. However, as training can be sensitive to hyperparameters, it could make sense to conduct a sweep based on some val set for each method, and present results for the best hyperparameters on the test set. - It is not clear how the filtered data size was chosen. A variety of tables (e.g., Tables 6 and 7) use different counts of QOCS and QWCS of 10k and 13k. Why not use the same subselected data size for all methods for a clearer comparison? Technical Quality: 3 Clarity: 4 Questions for Authors: Why are the different data selection methods compared with different final selected dataset sizes? How were the training hyperparameters chosen? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have sufficiently discussed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. We are encouraged by the positive comments on our well-motivated, clearly presented, and highly effective work. Below are our responses to the thoughtful questions and constructive suggestions: **W.1:** We sincerely thank the reviewer for highlighting this issue and the reference to DsDm. We acknowledge that our original manuscript overlooked DsDm, which shares similarities with our approach in using the impact of data points for selection. However, like other works, DsDm does not consider the combinatorial effects of data points in the same way that SHED does. The key differences are: - Individual Impact: DsDm primarily focuses on estimating the impact of individual data points on model performance. It uses data models to approximate how each training example affects the model's predictions on target tasks. - Linear Approximation: DsDm's data models are typically implemented as linear functions, which implicitly assume that each data point's contribution is independent of other points in the dataset. In contrast, SHED's use of Shapley values allows it to consider the marginal contribution of data points, **thereby capturing potential interaction effects between data points**. We will revise our manuscript to include a detailed discussion of DsDm and clearly articulate SHED's unique contributions. We appreciate this feedback, which will significantly improve our paper's comprehensiveness and scholarly value. **W.2:** We appreciate the reviewer's observation regarding SHED's use of proxy data points. Indeed, the effectiveness of this approach is influenced by the quality of embeddings and clustering parameters. We'd like to address these points: - **Sensitivity Analysis:** As detailed in our paper, we have already conducted a sensitivity analysis on the number of clusters. Figure 3 demonstrates how performance and computational time vary with different cluster numbers. 
This analysis shows that performance improvements plateau when the number of clusters exceeds 3√|D|, providing empirical guidance for choosing this parameter. - **Embedding Quality:** We use Sentence Transformers for generating embeddings, which are known for their ability to capture semantic similarities between sentences. By clustering on these embeddings, we group semantically similar text, which helps in discovering and organizing potential patterns and relationships in the text data, enhancing the representativeness of our proxy points. While our current approach has shown robust performance, we acknowledge the importance of this aspect. In our future work, we plan to further investigate the impact of different embedding techniques and clustering methods on SHED's performance. We thank the reviewer for this valuable feedback, which will help us improve our work. **W.3:** We appreciate the reviewer's insightful comment on our experimental methodology. We'd like to clarify our approach and reasoning: **Fairness in Comparison and Focus on Data Impact:** Our decision to use the same hyperparameters across all methods was deliberate, aiming to ensure a fair comparison that isolates the impact of data selection on model performance. This approach allows us to directly observe how different data selection methods affect performance without the confounding influence of varying hyperparameters. We acknowledge that our current approach of using fixed hyperparameters across all methods may not fully capture the optimal performance of each method. To address this concern, we propose the following improvements for our revised manuscript: - Hyperparameter Sweep: We will conduct a hyperparameter sweep for each method, including learning rate, batch size, and number of epochs. - Best Configuration Reporting: We will report results on the test set using the best hyperparameters found for each method during the sweep. 
We believe these additions will strengthen our experimental methodology. We thank the reviewer for this suggestion, which will enhance the reliability of our results. **W.4:** We appreciate the reviewer's observation regarding the varying data sizes used in our experiments. We acknowledge that this may have caused some confusion in interpreting our results. To address this concern: - **Comprehensive Comparison:** To ensure a fair comparison, **we have included results for a fixed data size of 10k samples across all methods in Table 2**. Additionally, as detailed **in Appendix B of our paper, we have conducted experiments comparing different methods across various sample sizes**. These provide a comparison of method performance when using identical amounts of data. - **Method-Specific Optimization:** In Appendix B, we provide comprehensive comparisons of different methods across various sample sizes. Our analysis reveals that QOCS often performs better with smaller subsets. This is likely because it prioritizes the highest quality, most informative data, making it particularly effective when working with limited data. On the other hand, QWCS often benefits from slightly larger subsets. This method balances quality with diversity, allowing for the inclusion of a broader range of data patterns. As a result, QWCS can capture more comprehensive representations of the dataset. These characteristics explain the different optimal subset sizes for each method. By presenting the best-performing subsets in the main text, we aim to demonstrate the full potential of each approach. We will revise our manuscript to more clearly explain our rationale for presenting the best-case scenarios and to direct readers to the appendix for these fixed-size comparisons. We thank the reviewer for this feedback, which will help us improve the clarity of our results presentation. **Q1:** Please see the responses to W.3 and W.4 above. We have also presented all the training hyperparameters in Appendix C.
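The proxy-point construction discussed in W.2 of this thread (cluster the embeddings, then take the point nearest each centroid as that cluster's representative) can be sketched as follows. The synthetic embeddings, cluster count, and plain NumPy k-means are illustrative stand-ins for the Sentence-Transformer pipeline, not the authors' actual code.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns centroids and point-to-cluster assignments."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centroids[j] = X[assign == j].mean(axis=0)
    return centroids, assign

def proxy_points(X, k):
    """One representative per (non-empty) cluster: the point nearest its centroid.
    Shapley values are then computed over these k proxies instead of all |D| points."""
    centroids, assign = kmeans(X, k)
    proxies = []
    for j in range(k):
        idx = np.where(assign == j)[0]
        if idx.size == 0:
            continue  # skip empty clusters
        d = np.linalg.norm(X[idx] - centroids[j], axis=1)
        proxies.append(int(idx[d.argmin()]))
    return proxies, assign

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))        # stand-in for sentence embeddings
proxies, assign = proxy_points(X, k=8)
```

In this sketch the sensitivity to the cluster count discussed in Q.2 corresponds to the choice of `k`; the 3√|D| heuristic from the rebuttal would set `k` from the dataset size.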
--- Rebuttal Comment 1.1: Title: Looking forward to your response Comment: Dear Reviewer, As we are approaching the end of the rebuttal period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Should any lingering points require further attention, please rest assured that we are enthusiastic about the opportunity to provide comprehensive responses to any subsequent queries or comments you may have. Your constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration. Best, Authors --- Rebuttal Comment 1.2: Title: Thank you for your response. Comment: Thank you for your detailed response to my feedback. Please find an additional note below. W.1: Although Shapley does help consider interactions between data points, since you're re-computing the values after every cluster removal I don't think it actually matters here, i.e. compared to models that consider the contribution of each data point independently. Further, the claim that you are "capturing potential interaction effects between data points" is also not quite accurate, as the data points are grouped into larger clusters over which the Shapley values are computed. --- Reply to Comment 1.2.1: Title: Thank you for the feedback Comment: Thank you for your insightful comment. We would like to address your concerns regarding the interaction effects between data points. The primary objective of removing clusters in our method is to calculate the marginal contribution, or boundary effect, of each cluster relative to the remaining clusters. By sequentially removing clusters and observing their impact on model performance, we can see how each cluster contributes to the overall model accuracy. After calculating the boundary contribution of a cluster in one iteration, we restart the process with a new randomized removal order.
This repetition is crucial because it allows us to capture the variation in each cluster's (and its constituent data points') marginal contribution across different combinations of clusters. **By performing this process multiple times, with different sequences of cluster removal, we can average the different marginal contributions to accurately estimate the true Shapley value for each cluster. This allows us to observe how the presence or absence of a cluster in different cluster combinations influences the overall model performance, thereby capturing the interaction effects between the data points in different clusters.** **We agree that the claim about capturing "interaction effects between data points" might not have been entirely accurate. It is more precise to state that our method considers the interaction effects between data points in different clusters. The clustering process itself is designed to group data points that are likely to interact in similar ways with the model and data points in other clusters.** By using Shapley values on these representative clusters, we capture a higher-level interaction effect, which is crucial for scaling the computation to larger datasets. While it’s not the same as evaluating every individual interaction, **this method strikes a balance between computational feasibility and the need to preserve interaction effects,** which we believe adds value to the data selection process. We sincerely appreciate your insightful feedback and will revise our manuscript to accurately reflect this aspect of our method. Your feedback is incredibly helpful in improving our paper.
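The randomized removal-order procedure described in this exchange is, in effect, the standard permutation-sampling (Monte Carlo) estimator of Shapley values, sketched here at the cluster level. The additive `utility` is a toy stand-in for fine-tuning on the union of the given clusters and measuring validation accuracy.

```python
import random

def mc_shapley(clusters, utility, n_perms=200, seed=0):
    """Monte Carlo Shapley: average each cluster's marginal contribution
    over randomized orders in which clusters are added one by one."""
    rng = random.Random(seed)
    phi = {c: 0.0 for c in clusters}
    for _ in range(n_perms):
        order = clusters[:]
        rng.shuffle(order)
        subset, prev = set(), utility(set())
        for c in order:
            subset.add(c)
            cur = utility(subset)
            phi[c] += cur - prev      # marginal contribution in this order
            prev = cur
    return {c: v / n_perms for c, v in phi.items()}

# Toy additive utility: each cluster contributes a fixed amount of "accuracy".
weights = {0: 0.3, 1: 0.1, 2: 0.6}
utility = lambda S: sum(weights[c] for c in S)
vals = mc_shapley([0, 1, 2], utility)
```

For this additive toy utility the estimate recovers the weights exactly; with a real, non-additive utility the averaging over removal orders is what captures how a cluster's contribution depends on which other clusters are present.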
NeurIPS_2024_submissions_huggingface
2024
Sparse Bayesian Generative Modeling for Compressive Sensing
Accept (poster)
Summary: This paper introduces a new type of sparsity inducing generative prior for the inverse problem. The authors theoretically underpin their approach by proving that its training maximizes a variational lower bound of a sparsity inducing log-evidence. Strengths: This work can learn from a few corrupted data samples and, thus, requires no ground-truth information in its training phase. Weaknesses: 1. The comparison methods include Lasso, CKSVD, and CSGAN, all of which were presented before 2018. The comparison methods are too old. 2. I think the authors should clarify the specific application scenarios for this research direction. 3. In Figure 7, compared with CSGAN, the results of CSVAE and CSGMM are not very clear. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In Table 2: Resources for simulations on MNIST (M = 200, Nt = 20000, Fig. 2 a)), CSGAN has much fewer parameters than the proposed CSVAE; why does CSGAN take much more training time? 2. Could the reconstruction results be influenced by the amount of noise? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: I think the application scenarios of this work are not very clear. The datasets used for comparison are very naive; colorful images with more textures should be used. And the authors should include more recent comparison methods; the existing comparison methods were all presented before 2018. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 34q1 for the comprehensive review. In the following, we address the reviewer’s raised weaknesses and questions. **To Weaknesses 1:** We would very much appreciate it if the reviewer could specify which comparison methods are missing in our work. While it is difficult to track all the latest literature, we are aware of the rapidly improving diffusion-based and unfolding-based compressive sensing techniques recently published. However, to the best of our knowledge, these techniques generally rely on the assumption of being trained on many ground-truth training samples. CSGAN seems to us to be the current state-of-the-art generative prior, which also relaxes the requirements on the training data by incorporating the knowledge that the prior is to be used for compressive sensing, which is why we compare our method to CSGAN. **To Weaknesses 2:** While we do not think that we can cover all possible applications, we give an overview of three possible applications we can think of, in which the ability to learn from compressed data is crucial and where many state-of-the-art machine learning-based compressive sensing techniques might not be applicable. 1) ECG denoising: The sensors in wearable electrocardiography (ECG) monitoring devices generally provide noisier signals, with more artifacts compared to those typically used in the hospital [1]. However, if one wants to do patient-specific training, the ability to directly train from the data provided by these sensors comes with significant benefits, as it is rather unrealistic to first capture clean ground-truth training samples in hospitals for each patient individually. 2) Electron microscopy: In electron microscopy, one generally wants to restrict the total amount of electron dose used for measuring so as not to interact with (change) or even destroy the measured sample of interest [2]. However, this typically leads to noisy acquisitions with low contrast and resolution.
Thus, being able to learn from these corrupted acquisitions also comes with significant benefits. 3) Wireless communication: In, e.g., the current 5G wireless communication standard, mobile users during communication receive compressed observations on a frequent basis (see Eq. (1) in [3]). While current machine learning techniques in wireless communication rely on simulated data or require expensive measurement campaigns for the training data, our proposed method can directly learn from the compressed data that mobile users receive during communication. All these applications have in common that they are rather low-dimensional. Moreover, latency plays an important role in ECG denoising as well as wireless communication, rendering our proposed method specifically interesting for those applications. **To Weaknesses 3:** These observations come from the different reconstruction aspects of CSGAN and CSVAE/CSGMM. Our estimators approximate the point conditional mean estimator, which minimizes the MSE distortion measure. Therefore, CSVAE- and CSGMM-based reconstruction emphasizes the reconstruction of details, while CSGAN emphasizes the visual contrast. This can be explained by the perception-distortion tradeoff [4]. By comparing the ground truth (first row) with the reconstructed images more closely, one can see that CSGAN misses out on details, which are reconstructed by CSVAE and CSGMM. **To Question 1:** The different training times result from the distinct training algorithms between CSGAN and CSVAE/CSGMM. More specifically, the training of CSGAN builds on a min-max optimization problem, where additionally after each update step, a Lasso-like reconstruction algorithm has to be applied. On the other hand, the training of CSVAE optimizes the ELBO derived in Eq. (15). **To Question 2:** The noise level indeed plays an important role. In fact, since our proposed model can incorporate the noise level in the training process (see Eq.
(2)), our proposed method is quite robust against additional noise. To evaluate this, we simulated different noise levels on MNIST and plotted the nMSE and SSIM results in the attached PDF (Fig. 4 a) and b)). One can see that our proposed method exhibits significantly less performance decrease for higher noise levels (i.e., smaller SNR) compared to all other baselines. **To Limitations:** The choice of low-resolution grayscale images and 1D signals in our work is mainly motivated by the exemplary applications stated above, which are all low-dimensional and can be interpreted as either 1D or grayscaled 2D signals/images. Additionally, the used datasets in our work have been considered as the benchmark datasets for our baselines. We would like to thank the reviewer again for the valuable feedback. We will revise our final paper to point out exemplary applications more specifically. Moreover, we plan to also include the simulations on varying noise levels. [1] G. Revach, T. Locher, N. Shlezinger, R. J. G. v. Sloun, R. Vullings, "HKF: Hierarchical Kalman Filtering with Online Learned Evolution Priors for Adaptive ECG Denoising," 2023, arXiv:2210.12807. [2] T.-O. Buchholz, M. Jordan, G. Pigino, and F. Jug, “Cryo-care: Content-aware image restoration for cryo-transmission electron microscopy data,” in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), 2019, pp. 502–506. [3] W. Kim, Y. Ahn, J. Kim, and B. Shim, “Towards deep learning-aided wireless channel estimation and channel state information feedback for 6G,” Journal of Communications and Networks, vol. 25, no. 1, pp. 61–75, 2023. [4] Y. Blau and T. Michaeli, “The perception-distortion tradeoff,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, Jun. 2018. --- Rebuttal 2: Comment: Thank you for the careful rebuttal. I think the authors have addressed my concerns. I hope you can revise the final paper to point out exemplary applications more specifically.
I have improved my rating.
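For reference, the noise-robustness discussion in this thread reports nMSE and SNR. A minimal sketch, assuming the standard definitions (these exact conventions are an assumption; the rebuttal does not spell them out):

```python
import numpy as np

def nmse(x_hat, x):
    """Normalized mean squared error: ||x_hat - x||^2 / ||x||^2."""
    return np.sum((x_hat - x) ** 2) / np.sum(x ** 2)

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels; smaller SNR means stronger noise."""
    return 10.0 * np.log10(signal_power / noise_power)

x = np.array([1.0, 2.0, 2.0])
assert nmse(x, x) == 0.0          # perfect reconstruction
assert snr_db(100.0, 1.0) == 20.0 # 100x power ratio is 20 dB
```

Under these definitions, "less performance decrease for smaller SNR" means the nMSE curve stays flatter as the noise power grows relative to the signal power.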
Summary: The authors present an elegant new approach for dictionary based compressive sensing wherein the sparsity inducing prior is tuned from data by maximizing a lower bound on the evidence. This is a new paradigm for compressive sensing which appears to improve on reconstruction error over standard approaches and does not require solving an optimization problem at inference time. Strengths: - Approaching dictionary based compressive sensing in this way is novel as far as I'm aware. Not having to solve an optimization problem at inference time has the potential to be impactful. - I found the paper to be very well-written. The authors did a good job of motivating their approach by relating it back to relevant background literature and discussing necessary theory. - Their results appear to be promising and should hopefully motivate future work in this paradigm. Weaknesses: - Since their approach offers a new perspective on sparse Bayesian learning, it would have been helpful to compare the sparsity of predictions in addition to the nMSE. - I understand that space is limited, but I found the description of the experiments to be a bit terse. - It wasn't clear to me when a practitioner should choose CSVAE vs. CSGMM Technical Quality: 3 Clarity: 4 Questions for Authors: - How does your approach compare to methods for sparse Bayesian learning with proper priors; for example Louizos et al. "Bayesian Compression for Deep Learning." NeurIPS 2017? - Have you tried evaluating the quality of error bars from the predictive posterior so that your approach might be used in the context of UQ? - Can you explain the reason for the numerical instabilities in the noise free simulations for some of the image datasets? - Have you analyzed the histogram of the posterior over $s$? Can you suggest how the practitioner who is interested in a sparse $s$ might set entries of the vector to $0$?
Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: I think the authors have done a good job addressing the potential limitations of their approach in section 3.4. It would be interesting to analyze how well generative priors trained on a particular dataset transfer to similar datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank reviewer WoRD for the positive review and the appreciation of our work. In the following, we address the reviewer’s raised weaknesses and questions. **To Weaknesses (bullet point 1):** We thank the reviewer for this suggestion and will give it serious consideration for the final manuscript. **To Weaknesses (bullet point 2):** As the final manuscript is allowed to have one additional page, we plan to reorganize the experimental section and describe the simulations in more detail. **To Weaknesses (bullet point 3):** There are a few aspects to consider when choosing between CSGMM and CSVAE, including the following: 1) After training CSGMM with training samples observed with one fixed measurement matrix, it is possible to apply the model to reconstruct signals from different measurement matrices, even with different dimensions. This is not possible with CSVAEs, as the encoder is trained measurement-matrix specific. 2) When the training set has been observed using varying measurement matrices, the covariance matrices in Eq. (17) become sample-dependent. Moreover, as the EM algorithm requires all training samples for one update step, storing all these sample-specific covariance matrices might lead to memory-related issues for the CSGMM. This issue does not arise with CSVAEs since there, the update step is done via small training batches. 3) For the online reconstruction, CSGMM has to evaluate $p(k|\mathbf{y})$ for all components $k$. For very high dimensions, this can get computationally more expensive than the simple forward operation through the equivalent CSVAE's NN encoder. **To Question 1:** In the mentioned paper, the priors are assigned directly to the weights of the neural network.
However, in our neural network-based parameterization of CSVAE, we keep the parameters (i.e., $\mathbf{\theta}$) deterministic but rather combine the sparse Bayesian learning framework with the output $\mathbf{\gamma}_{\mathbf{\theta}}(\mathbf{z})$ of the CSVAE by interpreting it as the diagonal of a zero-mean Gaussian with diagonal covariance matrix. From our perspective, both settings are rather different and are difficult to compare to each other. We hope the reviewer agrees with us on this point. **To Question 2:** While we think that the idea to explicitly evaluate the error bars of the posteriors is interesting, we did not try this out. We appreciate the idea and will give it serious consideration for the final manuscript. Additionally, in the attached PDF, we now included error bars in the results for the nMSE and SSIM. **To Question 3:** We implemented CSVAE in pytorch. The numerical instabilities concern the backpropagation of Eq. (24) in our work, which includes the pseudoinverse. Although (24) is theoretically differentiable w.r.t. $\mathbf{\theta}$, it turned out that utilizing the equivalent (8) instead of (24) with some small artificial noise variance (equivalent to 40dB SNR) led to stable training, while directly using (24) sometimes resulted in numerical issues. **To Question 4:** From our perspective, this question seems to relate to Question 2. We did not analyze the posterior's histogram. We think there are several possibilities to approximate the compressible estimate of $\mathbf{s}$ by a sparse one. One way could be to decide on some decision threshold. We refer to [1] for more details. We would like to thank the reviewer again and will give the reviewer's suggestions regarding the sparsity and posterior analysis serious consideration for the final manuscript. Additionally, we will extend the Appendix by a discussion about when to choose CSGMM over CSVAE and vice versa. [1] G. Dziwoki, M.
Kucharczyk, "On a Sparse Approximation of Compressible Signals," 2020, Circuits Syst. Signal Process. 39(4): 2232-2243 --- Rebuttal 2: Comment: Thank you for your thoughtful response. I think extending the Appendix to include a more concrete discussion of CSGMM versus CSVAE would be helpful. While not a requirement, I hope you do end up including some analysis of the posterior and sparsity since this will make your work relevant to a broader community.
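The component-wise reconstruction mentioned in point 3 of the CSGMM-versus-CSVAE discussion above (evaluating $p(k|\mathbf{y})$ for every component $k$) can be sketched for a zero-mean Gaussian mixture prior under the stated linear model y = A s + n. This is a generic conditional-mean computation consistent with that setup, not the authors' actual implementation.

```python
import numpy as np

def gmm_cme(y, A, weights, covs, sigma2):
    """Conditional mean E[s|y] for s ~ sum_k w_k N(0, C_k), y = A s + n,
    n ~ N(0, sigma2 I). Each component yields an LMMSE estimate; the
    estimates are mixed with the responsibilities p(k|y)."""
    m = len(y)
    log_resp, means = [], []
    for w, C in zip(weights, covs):
        Cy = A @ C @ A.T + sigma2 * np.eye(m)   # covariance of y under component k
        L = np.linalg.cholesky(Cy)
        z = np.linalg.solve(L, y)               # z^T z = y^T Cy^{-1} y
        log_det = 2.0 * np.log(np.diag(L)).sum()
        # Gaussian log-density up to a constant (it cancels when normalizing).
        log_resp.append(np.log(w) - 0.5 * (z @ z + log_det))
        means.append(C @ A.T @ np.linalg.solve(Cy, y))  # per-component LMMSE
    log_resp = np.array(log_resp)
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()                          # responsibilities p(k|y)
    return sum(r * mu for r, mu in zip(resp, means))

# Toy setup with placeholder dimensions and diagonal component covariances.
rng = np.random.default_rng(0)
n, m = 8, 4
A = rng.normal(size=(m, n))
covs = [np.diag(rng.uniform(0.1, 1.0, n)) for _ in range(3)]
s_hat = gmm_cme(rng.normal(size=m), A, [0.5, 0.3, 0.2], covs, sigma2=0.1)
```

As a sanity check, with a single component, A = I, C = 2I, and sigma2 = 1, the estimator reduces to the scalar Wiener shrinkage (2/3) y, matching the closed-form LMMSE solution.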
Summary: The paper introduces a novel training algorithm for generative models used as priors in linear inverse problems, with a specific focus on compressed sensing. The authors propose a training principle that regularizes the prior to learn a sparse representation of the signal of interest, implemented in Variational Autoencoders (VAEs) and Gaussian Mixture Models (GMMs). This is achieved by learning two posterior distributions p(y∣s) and p(s∣z) over a set of parameters through optimization of the Evidence Lower Bound (ELBO) derived in Equation 15. During the inversion phase, the true signal is estimated by sampling from the posterior distribution E[x∣y] as given in Equation 9. Unlike many contemporary generative priors, this method does not require solving an optimization algorithm to achieve the estimate. This approach simplifies the inversion process, making it more computationally efficient. The performance of these implementations is validated on datasets containing different types of compressible signals. The results demonstrate the effectiveness of the proposed method in accurately reconstructing signals. Additionally, the paper provides theoretical support on the tightness of the lower bound of the estimate, further validating the robustness and reliability of the proposed approach. Strengths: The proposed method appears to be novel and aligns with an intriguing line of work that focuses on training generative priors with the intention of using them for inverse problems. Section 2 of the paper is well-crafted, effectively elucidating the connection between the proposed method and existing literature. For the given baselines, the proposed method yields better reconstructions of the true signal, as verified by commonly used metrics and visual inspection. Weaknesses: The authors provide theoretical motivation; however, they do not address how the potential estimation of the measurement y impacts the estimation of the true signal x. 
Additionally, building on this point, the CSVAE may experience representation error, as noted in [5], which is not mentioned as a limitation. While the proposed method does improve upon [5], this remains a source of error. Section 4 is limited in the following ways: * The only metric reported for natural images is the normalized mean square error, while other commonly used metrics, such as PSNR and SSIM, are not included. * Given the computational focus of this paper, it is concerning that all experiments were conducted on low-resolution grayscale images, especially considering the A40 GPU budget. * The baselines appear to be less contemporary compared to current literature on compressed sensing inversion algorithms. [Normalizing Flow](https://proceedings.mlr.press/v119/asim20a/asim20a.pdf) and [Diffusion Models](https://openreview.net/pdf/4f6f0e2347a3d6f9a88b39e445f77c1e7503064e.pdf) have largely replaced GANs due to issues with representation error. Similar to the proposed method, Diffusion Generative Priors do not require gradient-based approaches to estimate the posterior. There are hyperlinks to the work in the responses. * Considering that this paper trains a generative prior before it is used downstream, there seem to be no reported metrics about training other than computational resources. Technical Quality: 2 Clarity: 2 Questions for Authors: * Could the authors please provide perceptual metrics (i.e. SSIM, LPIPS) for the reconstruction comparison experiments in Figures 2 and 3? * Could the authors elaborate on the computational overhead provided in footnote 4 and explain the reason for utilizing only grayscale images when reporting having a computational budget of NVIDIA A40 GPU for CSVAE? * Could the authors please explain the advantages the proposed method has over contemporary generative priors such as normalizing flows and diffusion-based priors?
* One concept I am unclear about is how to check whether the proposed method is overly biased towards the measurements y. This issue might arise because the conditional distribution P(y|s) has equally likely probabilities when Y=AS, but does not guarantee S will be consistent with training samples, depending on how sparse s is in practice. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Please refer to the weakness portion of the review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer y1C8 for the detailed review. In the following, we address the reviewer’s raised weaknesses and questions. **To Summary:** We would like to point out that we do not learn $p(\mathbf{y}|\mathbf{s})$. In fact, keeping $p(\mathbf{y}|\mathbf{s})$ fixed via a known dictionary is a key property of our proposed method. Instead, we learn the parameters of a statistical model for $p_{\mathbf{\theta},\mathbf{\delta}}(\mathbf{s})$. Moreover, we do not reconstruct $\mathbf{x}^*$ by posterior sampling, but rather approximate the point conditional mean estimator. This estimation methodology is different from the diffusion-based posterior sampling methods or the MAP-like GAN-based reconstruction with which the reviewer might be familiar. **To Weaknesses (before bullet points):** The representation error of GAN-based compressive sensing originates from the discrepancy between the learned GAN’s range (i.e., its learned manifold) and the ground-truth signal, which is supposed to be reconstructed. Our proposed method learns no manifold but rather a parameterized statistical model, which is why our proposed method does not experience this source of error. **To Weaknesses (bullet point 1 and Question 1):** The PSNR is a function of the nMSE and therefore encodes the same information, so including both seems to us to provide no additional information. However, we agree with the reviewer that the SSIM provides additional information, which is why we plotted the SSIM for Fig. 2 and 3 in the attached pdf (Fig. 3). Our proposed methods either perform comparably or outperform CSGAN in all settings and outperform all other baselines. **To Weaknesses (bullet point 2) and Question 2:** The choice of low-resolution grayscale images and 1D signals in our work is motivated by the exemplary applications we have in mind for our proposed method. 
We think that there are several applications that crucially depend on the ability to learn from strongly compressed signals and exhibit signals/images in this dimensional regime. For example, in wireless communication (either 1D or 2D low-dimensional) as well as in ECG denoising (1D signals) in wearable technology, data is typically low-dimensional, noisy, and compressed, and ground-truth training data is difficult to obtain. Moreover, in those applications, latency plays a crucial role, which is why we think that our proposed method matches those applications. However, we decided on images and generic 1D signals in our work, as these are the benchmark datasets used for our baselines, and we wanted to introduce our method independent from specific applications. We refer to our answer to reviewer 34q1 (Reviewer 4) - Weaknesses 2) for more details about exemplary applications. Having said that, there is indeed a memory-related limitation of our proposed method related to the computation and processing of the calculated covariance matrices. Sparse Bayesian learning-based techniques generally share this limitation. While it is possible to circumvent the explicit computation of the posterior covariance matrices in Eq. (8) for training the CSVAE and CSGMM, the covariance matrices in Eq. (17) have to be computed and inverted. This is also what we refer to in footnote 4. In the final version of the paper, we will revise our limitations and explicitly discuss this point in more detail, as this property might limit the application of our model in extremely high-dimensional image settings. Overcoming this issue by combining our approach with efficient Sparse Bayesian learning methods is considered future work. **To Weaknesses (bullet point 3) and Question 3:** To our best knowledge, contemporary generative priors for compressive sensing generally rely on the assumption of being trained on many ground-truth training samples. 
However, as pointed out in our previous paragraph, there are several applications in which this assumption is difficult or even impossible to realize. In contrast, our proposed method does not require ground-truth training samples in the training phase and, thus, can be applied for those applications. CSGAN seems to us to be the current state-of-the-art generative prior, which also relaxes the requirements on the training data by incorporating the knowledge that the prior is to be used for compressive sensing. This is why we compare our method with CSGAN. **To Weaknesses (bullet point 4):** In the attached PDF (Fig. 4 c) and d)), we included the tracking of the objective functions for CSVAE and CSGMM as an additional metric for successful training. **To Question 4:** Unfortunately, we cannot fully comprehend the reviewer’s question. Can the reviewer specify what is meant by ”equally likely probabilities when $\mathbf{y} = \mathbf{A}\mathbf{s}$”? The reviewer might refer to a distributional shift between the training and test samples. While we think that it is an important question whether the proposed method is robust against distributional shifts in the test set, we consider this question to be out of the scope of our work, as this paper is supposed to introduce the general concept and demonstrate good performance on benchmark datasets. We would like to thank the reviewer again. We will revise our final paper and plan to explicitly address the potential limitations with very high-dimensional data, which Sparse Bayesian learning generally suffers from. Moreover, we plan to also include the results on the SSIM distortion metric. --- Rebuttal 2: Comment: I would like to thank the authors for their detailed response; I will update my score accordingly based on the rebuttal. 
--- Rebuttal 3: Title: Further Questions Comment: Yes, I agree with both points "to our best knowledge, contemporary generative priors for compressive sensing generally rely on the assumption of being trained on many ground-truth training samples" and "the ability to learn from strongly compressed signals and exhibit signals/images in this dimensional regime." However, one of the datasets used is MNIST, which has sufficient data for diffusion-based priors or normalizing flows, and these priors are not limited to images; they can perform inference on 1-D signals. Please correct me if something was done differently to the training set that could not be adopted for this framework. Furthermore, even in the case where there is limited data, there are untrained priors such as [Deep Decoder](https://arxiv.org/pdf/1810.03982) or [Deep Image Prior](https://arxiv.org/pdf/1711.10925) that can be used as comparisons. Based on the response above you have implicitly answered my question about the equally likely probabilities along the line of Y=AS. --- Rebuttal 4: Title: Answer to Reviewer y1C8 Comment: We would like to thank the reviewer for commenting on our rebuttal. We agree with the reviewer that the MNIST dataset generally provides enough ground-truth training samples for training state-of-the-art machine learning-based compressive sensing techniques. However, we indeed modified the dataset for training our proposed method. In all our simulations in the main paper (Fig. 2 and 3) and most results in the Appendix (all except the dashed results in Fig. 6), the parameter $M$ refers not only to the dimension of the compressed observation to be reconstructed but also to that of all training samples (Section 4.1 - Measurement matrix and evaluation metric). 
For example, all training samples in Fig 2 a) have been compressed by a Gaussian measurement matrix to have dimension $M$ before being used for training, which is why none of the state-of-the-art methods trained on ground truth data could have been used in all these settings. We will revise our work to make this more prominent in our final version. We agree that Deep Image Prior and Deep Decoder could have been considered as further baselines. Both are cited in our introduction. However, we decided against both as these methods only apply to natural images and cannot be trained [1], whereas our setting is about reconstructing any compressible type of signal based on learning from corrupted training samples. Therefore, CSGAN and CKSVD seem to us to be the closest baselines from the "generative model" and "dictionary learning" communities. We hope the reviewer agrees with us that both untrained neural-network baselines are, thus, not required for our work. [1] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” International Journal of Computer Vision, vol. 128, no. 7, p. 1867–1888, Mar. 2020.
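The training-data modification described above (compressing every training sample to dimension $M$ with one fixed Gaussian measurement matrix before any learning takes place) can be illustrated with a minimal sketch. All names, shapes, and the noise level here are our own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, num_samples = 784, 200, 1000  # e.g. flattened 28x28 images, with M < N
sigma = 0.01                        # assumed (illustrative) noise level

# One fixed Gaussian measurement matrix, as is common in compressed sensing.
A = rng.standard_normal((M, N)) / np.sqrt(M)

# Stand-in for ground-truth signals; in the described setup these are
# never handed to the learner -- only their compressed versions are.
X = rng.standard_normal((num_samples, N))
Y = X @ A.T + sigma * rng.standard_normal((num_samples, M))

# Training then sees only (Y, A); X stays hidden.
print(Y.shape)  # (1000, 200)
```

The key point the rebuttal makes is that a method trained on `Y` alone cannot be replaced by a baseline that requires `X` at training time.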
Summary: The paper proposes a set of methods to learn a generative model over the sparse representations of a signal. The dictionary basis is fixed and given, and the signal of interest is assumed to be sparse in this basis. Therefore, the generative model learns to provide such sparse representations. The method learns the generative prior from the noisy and compressed observations. The authors leverage the notion of conditional Gaussianity and propose VAE and GMM based models for learning such priors. The choice enables the inference with only forward operation. Strengths: The idea of using purely forward operations for inference is quite nice, since most of the solutions based on generative priors require gradient descent on the latent space and suffer from latency among other problems. Furthermore, learning the prior from compressed observations makes the approach more practical as it does not need ground truth signal. Weaknesses: Overall, the paper combines some of the existing ideas from prior works (for example posterior updates from SBL, VAE ELBO derivations), and therefore, the novelty seems marginal. To be more concrete, it *seems* straightforward that if one wants to do variational Bayes, then the VAE style decomposition is the standard approach. If there are any additional challenges, the authors should clarify. The idea of forward pass only inference has been tried before (see for example the paper “Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models” NeurIPS 2023). Learning from the compressed samples can be compelling although not new as mentioned by the authors themselves (for example CSGAN). The proposed method requires further elaboration on the performance loss with respect to learning from uncompressed measurements, substantiated with more experiments. The experiment section is a bit thin given the limited technical novelty, particularly with respect to the baselines. 
Given a lot of prior works in the literature on learning-based solvers for inverse problems (for example, all variations of generative priors based on flows and diffusions, unfolded algorithms like LISTA and its variations), the authors have a more difficult task of placing their contribution within the prior work. The paper is not very convincing in showing what is the unique direction in which the field is moved forward. Technical Quality: 3 Clarity: 2 Questions for Authors: Some of my questions about the novelty were implicit in my comments above. - Can authors provide some comment on the performance loss with respect to uncompressed generative prior? - How many MC samples are used at inference for (9)? - Generative priors generally suffer from convergence issues and require multiple restarts. Could the authors comment if their forward-only operation has a similar issue? - It would be good to incorporate error bars in the plots and numerical results. Given the MC step, I expect a higher variance for the proposed approach. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer K63t for the thorough review and the appreciation of the proposed method’s ability to reconstruct via a single forward operation. In the following, we address the raised weaknesses and questions. **To Weaknesses (first paragraph):** We agree with the reviewer that the idea of realizing variational Bayes via a VAE-style decomposition is straightforward. However, this is not our only technical contribution, and there are additional challenges and contributions from our side. 1) One technical contribution from our side is to extend the statistical model of SBL for compressive sensing in Eq. (2) by an additional (conditional) latent variable $\mathbf{z}$ with an arbitrarily parameterized distribution (Eq. (7)). While this extension might look minor, this idea enables the learning of a nontrivial sparsity inducing prior with training data. In contrast, the original SBL for compressive sensing in Section 2.2 cannot learn from data but rather applies the EM algorithm to adjust a naive prior during inference. To our best knowledge, we propose the first model that incorporates SBL for compressive sensing while simultaneously being able to learn from training data in an initial offline phase. 2) We also rigorously prove that our introduced statistical model in Eq. (7) is sparsity inducing (Theorem 3.1). We consider this as a further technical contribution. 3) The idea of using a VAE-style decomposition might be straightforward. However, we think that it is not trivial to show that there exists a tractable ELBO for the statistical model in Eq. (7) (which is different from the standard VAE ELBO). We derive this ELBO from (12) to (15) with additional closed-form formulas in Appendix D. Moreover, for the CSGMM, we derive the closed form of its M-step in Lemma 3.2, which solves the non-trivial optimization problem in Eq. (30). 
Altogether, we would like to kindly disagree that our work has ”limited technical novelty” and hope the reviewer agrees on this point. **To Weaknesses (fourth paragraph):** We understand the reviewer’s perspective that our baselines might seem to ignore some of the state-of-the-art compressive sensing methods. We are aware of the rapidly improving diffusion-based and unfolding-based compressive sensing techniques recently published. We also partially cite the corresponding pioneering work (such as the ALISTA paper) in our introduction. However, these state-of-the-art models have to learn from large amounts of ground-truth data. We want to emphasize that we neither aim nor claim to outperform these techniques, as we think that it is illusory to expect that a model trained on a few heavily compressed data samples (as ours is) might outperform state-of-the-art techniques pre-trained on many ground-truth samples. From a (natural) image processing perspective, the capability to learn from compressed data might seem minor as many ground-truth natural images are often accessible. However, we think there are several applications that strongly rely on this ability, and being able to outperform what seems to us to be the state-of-the-art model in this regime (CSGAN) is certainly a contribution from our perspective. We additionally would like to refer to our answer to 34q1 (Reviewer 4) - Weaknesses 2) for exemplary applications where the currently available state-of-the-art models are not applicable, as this reviewer asked explicitly for applications. Having said that, we will revise our work in the final version to better place our work in the current research area. **To Question 1 and Weaknesses (third paragraph):** In Appendix I, we included the comparison between CSGAN and the proposed CSGMM and CSVAE trained on ground-truth and compressed data. 
Moreover, in the original work of CSGAN, it was shown that CSGAN can outperform the standard GAN when being trained on ground-truth data for compressive sensing. Since our main focus lies on the capability to train from compressed data, the informational gain from further baselines trained on ground-truth data seems to us rather limited (see also our previous answer). We hope the reviewer agrees with us on this. **To Question 2 and Question 4:** The number of Monte Carlo samples is 64 (Section 4.1 - Hyperparameters). While we understand the reviewer’s intuition that the CSVAE estimator might exhibit high variance, the opposite is the case. In fact, the proposed approach is highly robust w.r.t. the Monte Carlo approximation, and one or a few samples are already sufficient for the approximation. In fact, CSGAN also relies on (Monte-Carlo-like) repeated estimation of $\mathbf{x}^*$, which is why it is possible to explicitly compare both methods in this regard. In Fig. 1 (left) of the attached pdf, the nMSE over MC samples for CSVAE and CSGAN is shown for MNIST reconstructions with observations of dimension 200. It can be seen that taking more than one sample yields almost negligible gains for CSVAE, while a single trial of CSGAN performs significantly worse than computing many trials. We additionally agree with the reviewer that error bars help the reader evaluate the results, which is why we included error bars representing the standard deviations of the estimations. In Fig. 1 (right) and Fig. 2 of the attached PDF, we included the same error bars in all the plots of our main paper. **To Question 3:** As shown in Fig. 1 (left) in the attached PDF, the inference of our proposed method is robust and requires no multiple restarts. This property is based on the differences between the estimation methodologies of GANs and our proposed methods. 
While GANs enforce their estimation to lie on a learned manifold, our proposed method regularizes the estimation by means of a probabilistic prior and aims to approximate the conditional mean estimator. We thank the reviewer again. Based on the feedback, we will revise our paper to place our work better in the research area as well as include error bars and the simulations regarding the MC samples. --- Rebuttal Comment 1.1: Title: Follow-up on the answers Comment: I would like to thank the authors for taking time to answer my questions and comments. I will discuss some of the responses a bit further, with the intention of building more confidence and increasing my score, so hopefully the authors can bear with the process and their time will be well spent in the end. * To begin, I agree with the authors that “the capability to learn from compressed data” is not minor, as I mentioned in my original review. * I am also satisfied with the answers to my questions regarding MC samples, error bars, and convergence issues. * **Regarding deriving the ELBO from SBL:** As far as I can see, the inequality in (7) is the standard step in ELBO bounds. The decomposition in (13) follows from the basic probability decomposition for $\mathbf{z}, \mathbf{s}$ and $\mathbf{y}$ (given in Appendix B). Then, the computation of the bound follows from MC sampling like the classical VAE (also referenced by the authors themselves, namely ref. 36), and the tractable computation of some terms in Appendix D. The derivations of Appendix D (eq. 26-29) are not exactly the same as in the VAE paper but build very similarly on the Gaussian likelihood assumption and the linearity of expectation. So, unfortunately, I still feel that these derivations are based on standard linear algebra and probability techniques, very similar to the standard VAE. We can therefore focus on other comments regarding the novelty. * **Regarding “1. 
One technical contribution from our side is to extend the statistical model…”:** I agree with the authors that the proposed method enables “the learning of a nontrivial sparsity inducing prior with training data”; however, the actual contribution is “the learning of a nontrivial sparsity inducing prior” from **compressed measurements**, as the authors emphasize (although it is not so clear from the title and not so pronounced in the abstract). So, the question is: does the method show the same advantage when learning from uncompressed data? If not, what makes the method better in the case of learning from compressed data? I feel combining two contributions (namely learning a nontrivial sparsity prior AND learning from compressed data) and selecting the baselines accordingly makes the message convoluted: is the paper about a better generative prior or about learning from compressed data? Answering such questions is important for the paper, which is proposing a general method for solving a particular problem. --- Rebuttal 2: Title: Answer to Reviewer K63t Comment: We thank the reviewer for the comments on our rebuttal and for the willingness to increase the score if all of the reviewer's concerns are addressed. Moreover, we thank the reviewer for pointing out that our contribution might be misunderstood, as we focus more on the learning aspect in our abstract and contributions instead of the "learning from compressed measurements" aspect. It seems to us that the key aspects of our contributions depend on the individual background of the reader. Coming from a "SBL for compressive sensing" perspective, emphasizing the ability to learn from data in an initial training phase might be more natural. However, we agree that for the "generative models for compressive sensing" community, the additional aspect of being able to learn from compressed measurements is crucial to accurately placing our work's contribution within the concurrent literature. In Fig. 
6 and 7, we compare CSGAN, CSVAE, and CSGMM trained on uncompressed (ground-truth) data (dashed curves) as well as compressed data (solid curves). Perceptually, CSGAN benefits more from uncompressed (ground-truth) information during training than our proposed CSVAE and CSGMM. In the case of the distortion metric nMSE, all approaches benefit approximately equally from being trained on uncompressed data. In total, the performance gains over the CSGAN baseline seem to be more prominent when it comes to being trained on compressed training samples. From our perspective, this indicates that our proposed approach is particularly relevant in those cases where only compressed training samples are accessible and/or the online (inference) latency plays a crucial role. The intuition behind why the approach is capable of successfully learning from compressed data is the following. After each parameter update step during training, the proposed CSGMM and CSVAE: - estimate the corresponding ground-truth samples in the training data (Eq. (8) right side); - additionally, and arguably even more importantly, quantify the uncertainty of this estimation in terms of the estimation's covariance matrix (Eq. (8) left side). - After doing so, both proposed models incorporate both (the estimation as well as the error quantification) in the subsequent update step. In the case of CSGMM, this is done by its corresponding M-step (Lemma 3.2). In the case of CSVAE, this is done via its modified reconstruction loss in Appendix D, as well as the additional KL divergence in Eq. (15), which does not appear in standard VAEs. This mechanism (especially the aspect of quantifying the estimation's error and incorporating it in the next update step) does not appear in CSGAN's methodology of learning from compressed data. 
This is because CSGAN estimates the ground-truth samples after each update step but only provides these estimations to its discriminator without considering any uncertainty. From an intuitive point of view, this is the reason why our proposed approach seems to be superior when it comes to compressed training samples. We would like to thank the reviewer again, and based on the reviewer's comments, we will revise our work to better phrase our work's contribution regarding learning from compressed measurements. Moreover, we will consider extending our discussion section to explicitly discuss the intuitive learning aspects of our proposed method described above. --- Rebuttal Comment 2.1: Title: Thanks for the response Comment: I would like to thank the authors for the answer. I increased my score to weak accept. I feel the above discussion would be helpful to include in the revised version of the paper, particularly to make the story more coherent. They might even think about reflecting it in the title and abstract if permitted. I still feel that the story of the paper is about learning from compressed measurements, as it does not have a detailed case study on uncompressed measurements. Their argument that SBL is particularly useful because it incorporates uncertainties and estimates the ground-truth samples is compelling to me. As a stretch goal, the authors could think of ablation experiments to verify this hypothesis (for example by freezing the variance params in SBL - not sure if it makes sense though).
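The estimate-plus-uncertainty mechanism discussed in this thread can be sketched with the standard conditionally Gaussian posterior used in Sparse Bayesian learning: for a linear model with Gaussian prior and noise, both the point estimate of the hidden sample and its error covariance are available in closed form. Whether this matches the paper's Eq. (8) exactly is our assumption; the notation and function names below are ours:

```python
import numpy as np

def gaussian_posterior(y, A, Gamma, sigma2):
    """Posterior mean and covariance of s given y = A s + n,
    with prior s ~ N(0, Gamma) and noise n ~ N(0, sigma2 * I).
    Standard SBL-style closed form (our notation, not the paper's)."""
    S = A @ Gamma @ A.T + sigma2 * np.eye(A.shape[0])
    K = Gamma @ A.T @ np.linalg.inv(S)  # gain matrix
    mean = K @ y                        # point estimate of the hidden sample
    cov = Gamma - K @ A @ Gamma         # uncertainty of that estimate
    return mean, cov

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))               # heavily compressed: 6 -> 3
Gamma = np.diag(rng.uniform(0.1, 1.0, 6))     # diagonal prior covariance
s_true = rng.standard_normal(6) * np.sqrt(np.diag(Gamma))
y = A @ s_true + 0.1 * rng.standard_normal(3)

mean, cov = gaussian_posterior(y, A, Gamma, 0.01)
# cov is symmetric PSD and never larger (in trace) than the prior Gamma.
```

The point made above is that a learning rule can feed back both `mean` and `cov` into the next parameter update, whereas a GAN-style pipeline would pass only `mean` to its discriminator.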
Rebuttal 1: Rebuttal: Dear Program Chair, Senior Area Chair, Area Chair and Reviewers, We would like to thank you for taking the time to review our paper and for the valuable feedback. For our response, please refer to our point-by-point responses to each reviewer below. You will also find attached a PDF with additional simulation results, in which the plots for the corresponding reviewers are marked with blue boxes and their names. Sincerely, The authors. Pdf: /pdf/1a56e47896f58bbf71d01ad74081e28144fd4d7a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Universal In-Context Approximation By Prompting Fully Recurrent Models
Accept (poster)
Summary: In-context learning has emerged as one of the puzzling properties of language models at scale. While a lot of work has been done to understand the mechanics supporting this behavior in attention-based models, much less has been done on recurrent models. Given the renewed interest in these architectures (e.g. Mamba or Griffin), a better understanding of in-context learning in these models is needed. The paper proposes to study these abilities through the notion of in-context approximation, that is, whether a model can produce a certain function if it's given the right prompt. The authors show two universality results by introducing a low-level programming language that implements the canonical proof ideas for this kind of result within the neural dynamics. Strengths: - The paper studies in-context learning from an interesting perspective: in-context approximation theory. - Abstracting neural dynamics in a programming language is an elegant way to prove the approximation results of the paper. - The paper is overall very well written and easy to follow. Weaknesses: - The review of the different recurrent architectures is inaccurate. For example, Mamba and Hawk do not have an A matrix that is constant: it is x-dependent. On top of that, this matrix is diagonal with real values, which makes the implementation of the rotation algorithm presented in 4.2 impossible. More clarity on these questions, starting in Section 2, is necessary. - Some synthetic numerical experiments involving training networks to solve the task, e.g. comparing the solution found by gradient descent to the one from the construction or confirming the importance of gating to get more compact solutions, would be an interesting add-on. I am aware that it is a theoretical paper and I am not expecting such experiments to recommend acceptance. Yet, it would still nicely complement the paper. - Section 6 does not bring much to the story. 
Technical Quality: 4 Clarity: 4 Questions for Authors: - Can the authors position their work compared to: - [Zucchet et al. 2023](https://arxiv.org/abs/2309.01775): they study the link between gated RNNs (very similar to the one you consider) and attention, therefore connecting to the literature on how attention solves in-context learning. Additionally, they also highlight the importance of gating in practice. - [Orvieto et al. 2023](https://arxiv.org/abs/2307.11888) and the references mentioned there on the universal approximation capabilities of linear RNNs. It would be valuable to compare with those results in more detail, cf. next question. - I am wondering about how fundamentally different in-context approximation is compared to universal approximation. Is one a subset of the other? How are they related / what are their differences? Given that one of the contributions of the paper is the introduction of this notion, answers to these questions would be particularly valuable. - What is the interest of introducing such a low-level programming language? It feels to me that LSRL is a tiny abstraction on top of the neural equations that is not necessarily worth it. Why not directly include the primitives used in e.g. Figure 3 in this language? Some more open questions: - The paper shows that deep linear RNNs can implement a variety of functions in context, somehow supporting the argument that linear recurrence + nonlinear instantaneous processing is quite powerful. Does the framework provided by the authors give hints about which kinds of functions are difficult to learn with such networks? The authors mention something around these lines in lines 165-167, but more detail would be appreciated. - Are there some links between the in-context learning approximation notion introduced here and the notion of controllability in control theory? 
Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have correctly addressed the limitation of the applicability of their construction to more practical settings as well as highlighted that better understanding in-context learning is critical in a world where LLMs are available through APIs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
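The review's point about Mamba- and Hawk-style transitions (a diagonal, input-dependent "A matrix" rather than a constant one) can be made concrete with a toy sketch. The parameterization, names, and gating choices below are ours for illustration, not the architectures' actual equations:

```python
import numpy as np

def gated_diagonal_rnn(xs, Wa, Wb):
    """Toy diagonal gated linear RNN: h_t = a(x_t) * h_{t-1} + b(x_t) * x_t.
    The recurrence is elementwise (diagonal transition) and the gates
    a(.), b(.) are input-dependent, unlike a constant-A linear RNN."""
    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    h = np.zeros(xs.shape[1])
    for x in xs:
        a = sigmoid(Wa @ x)  # diagonal, x-dependent transition, entries in (0, 1)
        b = sigmoid(Wb @ x)  # input gate
        h = a * h + b * x    # purely elementwise state update
    return h

rng = np.random.default_rng(0)
T, d = 5, 4
xs = rng.standard_normal((T, d))
Wa = rng.standard_normal((d, d))
Wb = rng.standard_normal((d, d))
h = gated_diagonal_rnn(xs, Wa, Wb)
```

Because the state update multiplies `h` only elementwise by positive reals, no single such layer can rotate state dimensions into one another; the rebuttal's argument is that channel-mixing operations between layers compensate for this.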
Rebuttal 1: Rebuttal: > The review of the different recurrent architectures is inaccurate. For example, Mamba and Hawk do not have an A matrix that is constant: it is x-dependent. On top of that, this matrix is diagonal with real values, which makes the implementation of the rotation algorithm presented in 4.2 impossible. The reviewer is right that Mamba and Hawk do have an $A$ matrix that is input-dependent and diagonal. However, the $A$ matrix of the Linear RNN does not directly map to the $A$ matrix of the Mamba/Hawk model. The Mamba/Hawk architectures have a number of channel-mixing operations which we can leverage to this end. This is the basis of our proof in Appendix E that any Gated Linear RNN can be expressed as a Hawk or a Griffin model. Our construction respects their diagonal transition matrix structure, as can be seen in Eq. 43. > Can the authors position their work compared to: Zucchet et al. 2023: they study the link between gated RNNs [...] and attention, therefore connecting to the literature on how attention solves in-context learning. [...] Thank you for bringing the work of Zucchet et al. 2023 to our attention. The connection is quite interesting and possibly opens a different route for proving the same properties via transformers. However, while it has been shown that transformers with softmax attention are universal in-context approximators (Petrov et al., 2024), to the best of our knowledge this hasn't been proved for linear attention yet. However, our intuition is that this is likely possible, and a proof for the universal in-context approximation properties of linear attention transformers might be possible via our present work. Still, it is not clear what the prompt complexity in this case would be: whether it would be the most efficient regime for fully recurrent models from this paper or the less efficient regime for transformers from (Petrov et al., 2024). 
> I am wondering about how fundamentally different in-context approximation is compared to universal approximation. Is one a subset of the other? How are they related / what are their differences? [...] Universal _in-context_ approximation is a very different beast than the classical universal approximation. Classically, one can choose the model parameters conditional on the target function. With universal _in-context_ approximation, however, the model parameters are _fixed_ and independent of the target function. In other words, a single model needs to approximate all target functions (given a prompt that specifies the target function). In our experience, proving _in-context_ properties has been more difficult because of the complex interactions between different prompt tokens. While it might appear that universal _in-context_ approximation is a more demanding property than universal approximation and hence should be a subset of it, the two hypothesis classes could be very different, and hence we could not formally define a notion of an inclusion or subset. For example, _in-context_ approximation makes sense only for sequence-to-sequence models. However, intuitively, it does feel that if an architecture could be a universal _in-context_ approximator, then it should also be able to be a universal approximator. > What is the interest of introducing such a low-level programming language? It feels to me that LSRL is a tiny abstraction on top of the neural equations that is not necessarily worth it. Why not directly include the primitives used in e.g. Figure 3 in this language? Great question! We found the bookkeeping required to work directly with the layer equations to be very messy, error-prone and not particularly intellectually stimulating. LSRL helped us streamline the proof process significantly. In a way, you could also think of it as a proof assistant of sorts, where one specifies the high-level strategy and LSRL fills in the formal details. 
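The distinction between the two notions discussed above amounts to a quantifier swap. In notation of our own choosing (not the paper's formal definitions), for a model $g$ with parameters $\theta$ acting on inputs $x$ and prompts $p$:

```latex
% Classical universal approximation: parameters may depend on the target f.
\forall f \; \forall \varepsilon > 0 \; \exists \theta :\quad
  \sup_{x} \bigl\| g_\theta(x) - f(x) \bigr\| \le \varepsilon .

% Universal in-context approximation: one fixed model, prompt chosen per target.
\exists \theta \; \forall f \; \forall \varepsilon > 0 \; \exists p :\quad
  \sup_{x} \bigl\| g_\theta(p, x) - f(x) \bigr\| \le \varepsilon .
```

In the first statement a new parameter vector may be trained for every target function; in the second a single fixed model must be steerable to every target purely through its prompt, which is why the prompt-token interactions make such proofs harder.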
> The paper shows that deep linear RNNs can implement a variety of functions in context, somehow supporting the argument that linear recurrence + nonlinear instantaneous processing is quite powerful. Does the framework provided by the authors give hints about which kinds of functions are difficult to learn with such networks? [...] For any neural network architecture, one can always construct functions that are difficult for it to learn. Especially in the current "war" between Transformers and Linear RNNs, the literature is full of carefully constructed examples that trip one or the other. Linear RNNs will always be bad at recall within large context sizes due to their fixed-size hidden state. No non-linear activations, gating, or any other fancy mechanism can fix this problem without breaking the full recurrence and bringing some flavour of attention. That is why we put the query before the prompt, as that changes the problem into search rather than memorization and recall (See footnote 1). Simply flipping the order, i.e., putting the query _after_ the prompt, would not be possible for arbitrary precision and fixed hidden state. > Are there some links between the in-context learning approximation notion introduced here and the notion of controllability in control theory? Another very good question! We did look into it but there is a key difference in the setup. With controllability, one seeks a control signal $u$ that can bring a (linear) system to a target state $x'$. However, $u$ depends on the target state $x'$. Instead, in the universal approximation setting, we are looking for a single control signal that is a map from _all possible inputs_ to their respective target states. Controllability would be more closely related to the ability of a model to be prompted to produce any desirable output $x'$. Note that our universal in-context approximation results subsume this prompt controllability setting as one can always define a constant function that produces $x'$. 
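The query-ordering point above can be made concrete with a hypothetical sketch (names and streaming format are illustrative, not from the paper): with the query seen first, a constant-size state suffices to scan the key-value pairs in one pass, whereas a query-last order would force the model to memorize the entire dictionary before the query arrives.

```python
# Hypothetical sketch of "query before prompt": two state slots suffice.
# Slot 1 stores the query when it arrives; slot 2 latches the answer when
# a streamed key matches the stored query. Query-last would instead
# require retaining all pairs in the fixed-size state.

def stream_query_first(tokens):
    query = None   # state slot 1: the query, seen first
    answer = None  # state slot 2: the latched answer
    for kind, payload in tokens:
        if kind == "query":
            query = payload
        elif kind == "pair":
            key, value = payload
            if answer is None and key == query:
                answer = value
    return answer

stream = [("query", 2), ("pair", (1, "p")), ("pair", (2, "q")), ("pair", (3, "r"))]
print(stream_query_first(stream))  # prints: q
```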
A. Petrov, P. HS Torr, and A. Bibi. Prompting a pretrained transformer can be a universal approximator. In ICML 2024. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their answers! I'll keep my score as it is. [Minor point] In their rebuttal, the authors mention that the universality of linear RNNs was only proven in 2023. To the best of my understanding, Boyd and Chua already proved a universality result in 1985 (https://stanford.edu/~boyd/papers/pdf/fading_volterra.pdf). --- Reply to Comment 1.1.1: Comment: Thank you for highlighting Boyd and Chua's work! The connection between what we now call Linear RNNs and the results of Boyd and Chua (due to Wang and Xue) was made only recently, but the fundamental results are indeed quite old. Thank you for the insight, we will add the Boyd and Chua reference to our updated manuscript for completeness.
Summary: This paper designs a new programming language LSRL that compiles to a fully recurrent structure and shows that multiple recurrent architectures including RNNs, LSTMs, and SSMs can serve as universal in-context approximators. They also show that multiplicative gating allows more numerically stable constructions. Strengths: 1. This paper reveals interesting properties of multiple recurrent structures and shows they can perform universal in-context approximation. 2. The discovery of the numerical instabilities when gating is missing is also interesting and provides intuitions on the role of gating. 3. The programming language created by this paper may be of independent interest. Weaknesses: 1. In the introduction, the paper mentions that RNNs can be prompted to act as any token-to-token function over a finite token sequence. However, in the actual construction, the prompt specifying the function must be put after the query. This claim, while necessary for RNNs, is not very standard. Clarity will be improved if this caveat is mentioned earlier in the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. If one considers a hybrid architecture that consists of both RNN and Transformer Layer, can one use a combination of LSRL and RaSP language to compile such a model? 2. In the conclusion, it is mentioned that the compiled transition matrix is often diagonal. Are those matrices exactly diagonal here? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's positive opinion of LSRL and our investigation into the numerical instabilities without gating. > In the introduction, the paper mentions that RNNs can be prompted to act as any token-to-token function over a finite token sequence. However, in the actual construction, the prompt specifying the function must be put after the query. This claim, while necessary for RNNs, is not very standard. Clarity will be improved if this caveat is mentioned earlier in the paper. We have mentioned at the beginning of Section 2 (Line 74/footnote 1) why it is necessary to put the query before the prompt. However, we agree with the reviewer that mentioning this in the Introduction could further improve the readability of the paper. > If one considers a hybrid architecture that consists of both RNN and Transformer Layer, can one use a combination of LSRL and RASP language to compile such a model? This is indeed an interesting proposition. We do not see any immediate reasons why that would not be possible. As long as one is willing to specify different layers in different languages, the layers could be compiled with the respective compilers and then fused together. Some care would have to be exercised to align how variables are represented in order to ensure interoperability but that should not be a problem. > In the conclusion, it is mentioned that the compiled transition matrix is often diagonal. Are those matrices exactly diagonal here? The transition matrix depends on the program that is to be implemented. For the two universal in-context approximation programs we have in the paper the transition matrices are not diagonal but highly sparse with many blocks being diagonal. --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed response. I will keep my score.
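As a hypothetical illustration of the structure described in the rebuttal above (transition matrices that are not diagonal overall, but highly sparse with many diagonal blocks), one might picture something like the following; the block sizes and values are made up for illustration.

```python
import numpy as np

# Illustrative block-sparse transition matrix: two diagonal blocks
# (per-channel decays and accumulator states) plus a single off-diagonal
# coupling entry, so the matrix is sparse but not diagonal.

n = 3
A = np.zeros((2 * n, 2 * n))
A[:n, :n] = np.diag([0.9, 0.8, 0.7])  # diagonal block: per-channel decay
A[n:, n:] = np.eye(n)                 # diagonal block: accumulator states
A[n, 0] = 1.0                         # off-diagonal coupling entry

sparsity = 1.0 - np.count_nonzero(A) / A.size
print(f"nonzeros: {np.count_nonzero(A)}, sparsity: {sparsity:.2f}")  # nonzeros: 7, sparsity: 0.81
```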
Summary: The paper shows that sequence-to-sequence networks like RNNs, LSTMs, and Mamba are universal in-context approximators, which was defined as the ability to compute any function with an appropriate prompt. The authors propose a programming language called LSRL which can express any language expressible by a linear RNN (and vice-versa). The authors discuss numerical stability issues in their proposed language and discuss possible solutions. Overall, the paper streamlines a theoretical framework to understand expressivity of recurrent models and will prove to be an interesting contribution to the wider community. Strengths: The strength of the paper lies in its simplistic exposition of its motivation, proposed framework, and the discussions of various frailties and solutions. Sequence-to-sequence models have proven to be competitive to transformers, but the gaps between the two architectures are still under exploration. Expressivity for transformers has been studied through RASP (Weiss et al.'21), and this paper shows a similar programming language to understand linear RNNs. Through LSRL, the authors argue about the kind of languages linear RNNs can express and the impact of gating in RNNs. Overall, the paper takes an important step towards understanding sequence-to-sequence models. Weaknesses: As such, I don't see clear weakness with the work. However, I would like the authors to be clearer on their contributions. How is the theoretical framework different from the expressivity studies on feedforward networks and RNNs [1, 2]? That is, as far as I understand, the constructions in section 4.1 and 4.2 still construct a RNN that can do a dictionary lookup for each possible query, very similar to ones constructed in previous works. If I understand correctly, the strength of the paper is more on creating a systematic programming language to compile languages into RNNs. 
The universal expressivity happens to be a by-product of this framework and is very similar to previous constructions. References: 1: Andrew R Barron. 1993. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945. 2: Recurrent Neural Networks Are Universal Approximators. AM Schäfer, HG Zimmermann. In Artificial Neural Networks–ICANN 2006: 16th International Conference, Athens, Greece, September 10-14, 2006. Proceedings, Part I 16, pages 632–640. Springer. Technical Quality: 4 Clarity: 4 Questions for Authors: Please check my questions above. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors discuss the limitations of their work in section 7, and clearly indicate the future directions that the community can pursue starting from their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for recognizing our work's contributions towards understanding sequence-to-sequence models. > As such, I don't see clear weakness with the work. However, I would like the authors to be more clearer on their contributions. How is the theoretical framework different from the expressivity studies on feedforward networks and RNNs [1, 2]? That is, as far as I understand, the constructions in section 4.1 and 4.2 still construct a RNN that can do a dictionary lookup for each possible query, very similar to ones constructed in previous works. If I understand correctly, the strength of the paper is more on creating a systematic programming language to compile languages into RNNs. The universal expressivity happens to be a by-product of this framework and is very similar to previous constructions. There are two key differences between the classical results the reviewer has shared and this work. First, removing the non-linearity from the state update (going from an RNN (Eq.2) to a Linear RNN (Eq. 3)) is non-trivial and complicates the analysis significantly. In fact, it was only last year that it was shown that Linear RNNs can be universal approximators (see Wang and Xue, 2023). Second, and that's the key novelty of our work, universal _in-context_ approximation is a very different beast than the classical universal approximation. Classically, one can choose the model parameters conditional on the target function. With universal _in-context_ approximation, however, the model parameters are _fixed_ and independent of the target function. In other words, a single model needs to approximate all target functions. That makes our proofs very different and, at least in our view, the results much more interesting as they extend to how we often currently interact with large pretrained models: via prompting, rather than via training. Shida Wang and Beichen Xue. 
State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory. NeurIPS 2023 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. After going through the response, I am maintaining my score.
Summary: The paper explores the potential of various recurrent neural network architectures (RNNs, LSTMs, GRUs, Linear RNNs, and gated architectures) to act as universal in-context approximators, crucial for zero-shot and in-context learning without fine-tuning. It introduces the LSRL programming language to facilitate this, highlighting the role of multiplicative gating in enhancing model stability for practical applications. Strengths: The concept of universal approximation in the context of in-context learning is both intriguing and innovative. This research presents a unique angle that adds valuable insights to the field. Weaknesses: - The paper is very dense and hard to read and follow. The construction is rather tricky. Do you have any high-level idea in the proof? - I don't understand the practical value of proving that linear RNNs are in-context universal approximators. Linear RNNs are known to be inferior at in-context learning and retrieval (see references [1], [2], [3]). How does demonstrating that linear RNNs are in-context universal approximators mitigate this issue in practice? Isn't it somewhat contradictory to claim that RNNs are in-context universal approximators but cannot perform in-context learning well? - Many theoretical papers claim that non-linear RNNs are universal approximators or Turing-complete, often under impractical assumptions like infinite precision and exponential hidden state sizes. What about the assumptions in this paper? Do you think all assumptions are practical? - The definition of gated (linear) RNNs in this paper is very unusual. Typically, gating refers to the recurrence itself being gated by forget gates or data-dependent decays, rather than gating the output. 
[1] In-Context Language Learning: Architectures and Algorithms https://arxiv.org/abs/2401.12973 [2] Simple linear attention language models balance the recall-throughput tradeoff https://arxiv.org/abs/2402.18668 [3] RNNs are not Transformers (Yet): The Key Bottleneck on In-context Retrieval https://arxiv.org/abs/2402.18510 Technical Quality: 3 Clarity: 2 Questions for Authors: See weakness. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding our work to be _intriguing and innovative_ and for _adding valuable insights to the field_. We would like to address their concerns. > The paper is very dense and hard to read and follow. The construction is rather tricky. Do you have any high-level idea in the proof? The high-level ideas behind the two LSRL programs in Listing 1 and Listing 2 are rather succinct, although their implementation might indeed be a bit tricky. For the _continuous case_ (Listing 1, Section 4.1), the high-level idea is to discretize the domain into cells and to approximate the target function $f$ with a piece-wise constant function $g$ that is constant in each cell with a value equal to $f$ evaluated at the centre point of the cell. Then, evaluating this approximation at an input $x$ in an iterative fashion requires one to go through the cells one by one, find the cell containing $x$ (Lines 16-19) and return the value of $f$ at the centre of that cell (Lines 21-23). This is a variant of the classic universal approximation results that used similar discretization constructions. For the _discrete case_ (Listing 2, Section 4.2), the high-level idea is to recognize that any discrete function can be represented as a map or a key-value dictionary. This dictionary can be provided in-context, interleaving keys and values. Then, given a query, we can process the key-value pairs one by one until we find the key that matches the query (Lines 19-25). Then, we know that the following value is the one we should output, so we copy it into a state variable and output it repeatedly (Lines 26-30). We understand that this is the trickiest part of the paper and will add the above explanations as a way to improve the presentation for the camera-ready version of the paper. > I don't understand the practical value of proving that linear RNNs are in-context universal approximators. 
Linear RNNs are known to be inferior at in-context learning and retrieval (see references [1], [2], [3]). How does demonstrating that linear RNNs are in-context universal approximators mitigate this issue in practice? Isn't it somewhat contradictory to claim that RNNs are in-context universal approximators but cannot perform in-context learning well? The reviewer is raising some very interesting questions. It is true that, especially in deep learning, there is often a tension between theory and practice. The fact that we provide a proof by construction that something is possible does not mean that these are solutions that models will learn via gradient descent on real world data. However, our universal in-context approximation results might imply that perhaps the low empirical performance is an issue with how we train, prompt, or evaluate these models, rather than some fundamental limitations of the in-context learning abilities of Linear RNNs. The very fact that models with good in-context learning abilities exist for these architectures (as shown in our results) might indicate that we might be doing something suboptimal with the training and inference of these models. Furthermore, there is no free lunch: for any given architecture, we can design problems that it won't perform well on. In our view, the question should not be to figure out if Linear RNNs or Transformers are the "better" architecture. They offer different trade-offs and are useful in different settings. We would like to highlight that we have discussed the fundamental properties of such theoretical results in our Limitations section. > Many theoretical papers claim that non-linear RNNs are universal approximators or Turing-complete, often under impractical assumptions like infinite precision and exponential hidden state sizes. What about the assumptions in this paper? Do you think all assumptions are practical? We make no assumptions about infinite precision and exponential hidden state sizes. 
As we use discretization anyway, infinite precision is neither necessary nor helpful for our constructions. The hidden state sizes of our constructions are also independent of the target precision $\epsilon$ (in the continuous setting) or the key-value dictionary size (in the discrete setting). The error bound on line 228 does assume Lipschitzness, but that is a standard condition. Therefore, we work with very few assumptions on the architecture and, hence, our assumptions are practical. > The definition of gated (linear) RNNs in this paper is very unusual. Typically, gating refers to the recurrence itself being gated by forget gates or data-dependent decays, rather than gating the output. This paper focuses on Gated Linear RNNs and Gated RNNs. By definition, the gating of Gated Linear RNNs cannot be on the recurrence itself as the model would not be linear anymore. Similarly, the gating of Gated RNNs cannot be on the recurrence because the model would not be an RNN anymore (the standard RNN is a linear state update followed by an element-wise non-linearity). Perhaps the reviewer has LSTMs and GRUs in mind where the gating is indeed on the recurrence. However, as we show in Appendices B, C and D, our models subsume LSTMs, GRUs and the Hawk/Griffin architecture and hence they apply to these architectures as well. --- Rebuttal Comment 1.1: Comment: - How does the proof of universal in-context learning of linear RNNs guide the practical development of these models? While I am not opposed to theoretical papers, I'm not convinced that this proof will significantly benefit the development of the field. As a researcher who leans more towards empirical work, I cannot recommend acceptance for this paper unless it clearly guides practice or explains some successes or flaws in existing empirical phenomena. However, I would not oppose others in the theory community who may find this work valuable. 
I believe my perspective is typical among empirical researchers; theory should either guide empirical study or explain existing empirical phenomena. - Regarding gated RNNs, gating the recurrence remains linear if no activation is involved in the middle of the recurrence. For example, current linear models like Griffin, HGRN, Mamba, GLA, and RWKV6 all incorporate such gating in the recurrence and can leverage either parallel scan [1] or chunkwise form [2] for parallel training. Gated linear recurrence should definitely reference these linear RNN models with data-dependent forget gates or decays. The definition used in this work seems questionable to me. [1] https://arxiv.org/abs/1709.04057 Parallelizing Linear Recurrent Neural Nets Over Sequence Length [2] https://arxiv.org/abs/2312.06635 Gated Linear Attention Transformers with Hardware-Efficient Training --- Reply to Comment 1.1.1: Comment: > How does the proof of universal in-context learning of linear RNNs guide the practical development of these models? While I am not opposed to theoretical papers, I'm not convinced that this proof will significantly benefit the development of the field. As a researcher who leans more towards empirical work, I cannot recommend acceptance for this paper unless it clearly guides practice or explains some successes or flaws in existing empirical phenomena. However, I would not oppose others in the theory community who may find this work valuable. I believe my perspective is typical among empirical researchers; theory should either guide empirical study or explain existing empirical phenomena. Our paper has the following contributions that are directly applicable to empirical research: 1. **LSRL is a tool with practical applications in itself.** We develop LSRL, a new programming language that is isomorphic to a large class of recurrent models. 
Using this language and the compiler we have developed, one can directly program behaviours that can be then incorporated into RNNs, LSTMs, GRUs and Hawk/Griffin models. Due to this isomorphism, conversely, given a model with these architectures, we can “decompile” it as an LSRL program. This would be quite useful for people doing interpretability, explainability and safety research for models with these architectures. 2. **We explain the successes of architectures with multiplicative gating.** Recurrent architectures with multiplicative gating such as Mamba and Hawk have outperformed architectures without it. With our experiments, we show that multiplicative gating is much more numerically stable when one wants the model to follow an algorithm in-context precisely, which is how a lot of the evaluation and benchmarks are set up. We also show that multiplicative gating enables a much more compact implementation of concrete algorithms, and hence can explain why smaller models with multiplicative gates are sometimes comparable in performance to larger models that do not have them. Therefore, our work **explains one aspect that can drive the success** of gating architectures and **guides the practice to use multiplicative gating**. We believe that beyond serving as a basis for engineering and applied research, the foundation of science is a desire to understand the world around us for the joy of discovery and knowledge. The development of machine learning is not restricted to building new models, but also includes understanding, characterizing and organizing the properties of the methods, tools and mathematical models we already have. We are positive that a large part of the NeurIPS audience feels the same way, hence why “Theory” is one of the areas explicitly mentioned in this year’s Call for Papers. 
In this way, our work fills key gaps in the characterization of the universal approximation properties of neural network models, an area with a long history, while also serving as a guide for design decisions for recurrent architectures and explaining some successes of multiplicative gating. > Regarding gated RNNs, gating the recurrence remains linear if no activation is involved in the middle of the recurrence. For example, current linear models like Griffin, HGRN, Mamba, GLA, and RWKV6 all incorporate such gating in the recurrence and can leverage either parallel scan [1] or chunkwise form [2] for parallel training. Gated linear recurrence should definitely reference these linear RNN models with data-dependent forget gates or decays. The definition used in this work seems questionable to me. Perhaps we are using two different definitions of “linear recurrence”. We mean it in the sense of a “linear time-invariant system” (LTI). As such, models like Mamba and Griffin would not satisfy this definition. They would be something like a “linear time-variant system” (LTV). However, as every time-invariant system is a trivial time-variant system, the class of LTV systems is a proper superset of the class of LTI systems. Therefore, if there is an instance of an LTI system with a certain property (in our case, universal in-context approximation), then it immediately follows that there is an instance for an LTV system with that property. In our work we show that the more restricted setting already is a universal in-context approximator, automatically making the more general setting you are referring to a universal in-context approximator as well. We show this connection explicitly and formally in Appendix D, where we show that the Hawk/Griffin architecture is also a universal in-context approximator. 
Hence, the definition we use is not limiting our results in any way whatsoever, it is just the most general (or, if one wants, _abstract_) definition that serves the goal of our work.
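The LTI-as-special-case-of-LTV containment argued above can be sketched numerically; the gate parameterization below is illustrative (not the update rule of any specific model): an input-dependent diagonal transition $A_t = \mathrm{diag}(\sigma(Wx_t + b))$ reduces to a constant transition $A = \mathrm{diag}(\sigma(b))$ once the gate is frozen by setting $W = 0$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative gated (time-variant) diagonal recurrence:
#   h_t = diag(sigmoid(W x_t + b)) h_{t-1} + B x_t
# With W = 0 the gate ignores the input, recovering the LTI recurrence
#   h_t = A h_{t-1} + B x_t  with  A = diag(sigmoid(b)).

d = 2
b = np.array([0.0, 1.0])
W_zero = np.zeros((d, d))  # frozen gate: no input dependence
B = np.eye(d)
xs = np.random.default_rng(0).standard_normal((5, d))

def run_gated(W, xs):
    h = np.zeros(d)
    for x in xs:
        A_t = np.diag(sigmoid(W @ x + b))  # possibly input-dependent decay
        h = A_t @ h + B @ x
    return h

A_const = np.diag(sigmoid(b))  # the LTI transition the frozen gate recovers
h_ltv = run_gated(W_zero, xs)

h_lti = np.zeros(d)
for x in xs:
    h_lti = A_const @ h_lti + B @ x

assert np.allclose(h_ltv, h_lti)  # the LTI model is one point in the LTV class
```

So any property exhibited by some LTI instance, such as universal in-context approximation, is automatically exhibited by some instance of the more general gated (LTV) class.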
NeurIPS_2024_submissions_huggingface
2024
Adaptive Preference Scaling for Reinforcement Learning with Human Feedback
Accept (poster)
Summary: To learn versatile rewards essential for downstream policy optimization, this paper introduces a novel adaptive preference loss function inspired by distributionally robust optimization (DRO). The proposed approach incorporates a learnable instance-specific scaling factor to accommodate varying uncertainties of preference strength and makes the scaling between the preference distribution and reward difference non-linear. The proposed loss function for learning the scaling parameters is strictly convex, and thus induces only negligible additional computational overhead. Experiment results on robotic control tasks and LLMs verify the proposed method. Strengths: 1. Adapting the optimization strength by factoring in the characteristics of each preference pair is an interesting direction. 2. The proposed method is well (theoretically) motivated. 3. The proposed method works well in practice. Weaknesses: 1. The adaptive scaling is defined on a per-instance basis, which necessitates significant compute costs and hinders real-world mini-batch learning. This is evident from the Appendix, where the algorithm box shows that the training process operates in a per-datapoint manner. 2. Some of the derivations may be simplified, both to make the paper more succinct and to leave room for the algorithm box as well as other technical details. 3. L221: could you theoretically justify the incorporation of adaptive preference scaling into DPO, e.g., in the framework of KL-control as in the original DPO paper? Technical Quality: 3 Clarity: 2 Questions for Authors: 1. L71 "Prior work on this topic is very limited.": There is actually a vast pool of literature concerning loss functions for (RLHF's) reward modeling; a quick pointer is [1] for language tasks and [2] for image generation, as well as the literature review in them. I encourage the authors to add proper discussions and citations to make the claims more precise. 2. 
L140-141: Could you elaborate this sentence more: "For some applications, it may lead to a reward function that is not flexible enough to differentiate a pair of segments"? In particular, why is the resulted reward function not flexible enough? And how is this related to the linear scaling? 3. In Eq. (3), why do we need to have $KL(p, 1/2)$ in both the objective function and constraint? *** [1] Yang, Shentao, et al. "Preference-grounded token-level guidance for language model fine-tuning." Advances in Neural Information Processing Systems 36 (2023). [2] Yang, Shentao, Tianqi Chen, and Mingyuan Zhou. "A Dense Reward View on Aligning Text-to-Image Diffusion with Preference." Forty-first International Conference on Machine Learning. 2024. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: A discussion on limitations and/or potential negative societal impact seems missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your constructive comments! In the following, your comments are first stated and then followed by our point-by-point responses. **W1: The adaptive scaling is defined on a per-instance basis, which necessitates significant compute costs and hinders real-world mini-batch learning.** Response to this concern is included in point 3 in the global rebuttal. The additional cost of our proposed method is negligible and it does not hinder mini-batch learning. **W2: Some of the derivation may be simplified, both to make the paper more succinct and leave room for algorithm box as well as other technical details.** Thank you for the suggestion. We will simplify some duplicated derivations in Section 3 and include the algorithm box and technical details, such as a discussion on additional computational costs, in the main text. **W3: could you theoretically justify the incorporation of adaptive preference scaling into DPO. e.g., in the framework of KL-control as in the original DPO paper?** DPO starts from the policy optimization objective: $$\max_\pi \mathbb{E}_{s,a\sim\pi(a|s)}[r(s,a)] - \beta D_{\mathrm{KL}}[\pi(a|s), \pi_{\mathrm{ref}}(a|s)]$$ and makes use of the optimality condition given that the horizon is 1: $\pi_r(a|s) = \frac{1}{Z(s)} \pi_{\mathrm{ref}}(a|s) \exp \left( \frac{1}{\beta}r(s,a) \right).$ It then reparameterizes the reward function as $r(s,a)=\beta\log(\pi_r(a|s)/\pi_{\mathrm{ref}}(a|s))+\beta\log Z(s)$ and employs the standard cross-entropy loss for reward learning. Our proposed Ada-DPO retains the same reparametrization and only modifies the reward learning loss to Equation (6). The derivation and analysis for this change in the reward loss are included in Sections 3.2 and 3.3. We assume the reviewer is suggesting moving the adaptive scaling factors to the KL-constrained policy optimization objective. Unfortunately, we found this difficult due to the regularization on the scaling factor. 
However, at a high level, as mentioned in Line 222, we can merge the regularization parameter $\beta$, which controls the KL term, with the scaling factors in the final objective. Thus, our method can be viewed as adapting a different KL-control term for each preference pair. For strong preference data, we learn a large scaling factor, corresponding to a smaller KL-control term, allowing the model to deviate more from the reference policy. In contrast, a larger KL-control term is used for ambiguous preference data. **Q1: "Prior work on this topic is very limited.": There are actually a vast pool of literature concerning loss functions for (RLHF's) reward modeling, a quick pointer is [1] for language tasks and [2] for image generation, as well as the literature review in them. I encourage the authors to add proper discussions and citations to make the claims more precise.** Thank you for pointing out the related works. We will include discussions and citations of these references in our next version. After carefully reviewing the mentioned works, we find that both focus on changing the reward function, but the used loss is still based on cross-entropy. This makes them complementary to our work, as we are improving the loss function. Specifically: [1] introduces a token-level preference reward learning loss instead of the standard sequence-level objective, addressing the granularity mismatch between preferences and LM training losses. Their approach employs a more fine-grained reward function. However, the cross-entropy loss is still employed when there are two responses, which means their method can potentially be combined with ours. [2] focuses on the preference alignment of text-to-image diffusion models and proposes temporal discounting DPO-style objectives that consider the sequential nature of the generation process. However, their method is specific to diffusion models and cannot be applied to general RLHF as ours can. 
Additionally, the loss used is still based on cross-entropy. In summary, we consider these two works to be orthogonal and complementary to our work. **Q2: Could you elaborate on this sentence more: "For some applications, it may lead to..."** Response to this concern is included in point 2 in the global rebuttal. **Q3: In Eq. (3), why do we need to have $\mathrm{KL}(p,1/2)$ in both the objective function and constraint?** Including the KL term in both the objective and the constraint is a common practice in constrained distributionally robust optimization (DRO) problems, as shown in Eq. (1) in [3]. Without the KL constraint, the loss reduces to the cross-entropy loss with temperature $\tau_0$, making it unable to adapt to each pair of samples. Without the KL term in the objective, the objective would not be smooth and would be hard to optimize. **L1: A discussion on limitations and/or potential negative societal impact seems missing.** Response to this concern is included in point 4 in the global rebuttal. ### References [1] Yang, Shentao, et al. 'Preference-grounded token-level guidance for language model fine-tuning.' Advances in Neural Information Processing Systems 36 (2023). [2] Yang, Shentao, Tianqi Chen, and Mingyuan Zhou. 'A Dense Reward View on Aligning Text-to-Image Diffusion with Preference.' Forty-first International Conference on Machine Learning. 2024. [3] Qi Qi, Jiameng Lyu, Kung-Sik Chan, Er-Wei Bai, Tianbao Yang. 'Stochastic Constrained DRO with a Complexity Independent of Sample Size.' Transactions on Machine Learning Research, 2023. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Dear authors, Thank you so much for the helpful responses. I hope these discussions and revisions can be incorporated into the next version of the manuscript. I've increased my rating to 6.
--- Reply to Comment 1.1.1: Title: Thank you for the thorough review Comment: Dear Reviewer, Thank you for your thoughtful feedback and for increasing the scores toward acceptance. We will incorporate your suggestions in our next version. Best regards, The Authors
Summary: This paper studies the problem of learning from preference data and introduces a learnable scaling parameter for each preference sample. The authors propose an adaptive preference loss function that assigns small scaling parameters to ambiguous preference pairs and large scaling parameters to clear preferences. Experiments demonstrate improved policy optimization performance and more efficient hyperparameter tuning. Strengths: Considering that different samples have different preference strengths, using adaptive preference scaling makes sense for preference learning. Experiments are conducted on both robotic control and natural language generation tasks. Weaknesses: The increase in flexibility of the proposed reward function is not verified. The authors claim that one of the limitations of the BT model is that the logit of the preference distribution scales linearly with the reward difference. However, according to Proposition 3.1, the limitation still exists. There is no empirical result that supports the claim. Missing the ablation study of the regularization term in the proposed loss function. Missing the RLHF baseline in the natural language generation task. “Ada-Pref demonstrates greater resistance to performance degradation than Pref, indicating its superior ability to align the learned reward function with policy optimization.” It seems that this is not true in the Hopper task? Minors: Line 120: The notation for the state transition function conflicts with that of the preference distribution. Please clarify the meaning of M and K in Algorithm 1. Technical Quality: 3 Clarity: 3 Questions for Authors: I do not believe the proposed loss function is convex; can you provide the proof of Remark 3.1? Why can the reward models trained with the proposed method better guide policy model selection, considering the reward model performance is not improved? How do the authors obtain the true reward difference in Section 4.3?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The computation overhead introduced by learning the scaling parameter is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments! We sincerely appreciate your time in reading the paper, and our point-by-point responses to your comments are given below. **W1: The increase in flexibility of the proposed reward function is not verified. The authors claim that one of the limitations of the BT model is the logit of the preference distribution scales linearly with the reward difference. However, according to Proposition 3.1, the limitation still exists. There is no empirical result that supports the claim.** Thank you for the suggestion. We provide additional clarification to better demonstrate the increased flexibility of the proposed method: * We would like to clarify that the proposed method no longer maintains a restrictive linear relationship between the logit of the preference distribution and the reward difference. In Proposition 3.1, we show that the relationship between the optimal reward difference and the logit of the true preference distribution differs between strong preference pairs and ambiguous pairs, resulting in a more flexible non-linear relationship. Additionally, during training, the learned logit is often not optimal, meaning the scaling factor is not strictly on the bounds. This is illustrated in Figures 5a and 6, where scaling factors strictly between the bounds create a more complex correspondence. In contrast, the BT model always maintains the same linear relationship between the learned logit and the learned reward difference during training, restricting flexibility. * Directly measuring the flexibility of the reward function empirically can be challenging since it is difficult to quantify. To better support our claim, we provide some supporting empirical evidence in the paper. For instance: * Figure 5(c) shows that Ada-Pref learns smaller reward differences for pairs with ambiguous preferences and larger reward differences for pairs with strong preferences, indicating a wider range of learned reward differences.
* Figure 7 provides examples showing that Ada-DPO learns larger reward differences than DPO for clear pairs and smaller reward differences than DPO for ambiguous pairs. We also include more examples in the response to the second reviewer. **W2: Missing the ablation study of the regularization term in the proposed loss function.** The ablation study of the regularization term $\rho$ in the proposed loss function has been provided in Appendix D.3 due to space constraints. If we remove the regularization term, the proposed loss collapses to the standard DPO loss with a modified $\beta' = \beta \cdot \tau_0$, which performs significantly worse than the reported DPO results. Consequently, we did not include it in the main text. For reference, removing the regularization term achieves a win rate of 24.56 on the summarization task. **W3: Missing the RLHF baseline in the natural language generation task.** Thank you for pointing out the absence of an RLHF baseline in the natural language generation task. We understand the importance of experimenting with PPO instead of DPO to fully comprehend the impact of our method on RM's signal. Due to resource constraints, we prioritized experiments requiring PPO for robotic control tasks. Conducting PPO experiments for natural language generation tasks presents significant computational challenges, including the need to store and manage both reward and critique models (which are LLMs). PPO also involves more training steps and numerous hyperparameter tunings, making it difficult to implement within a resource-limited environment. Therefore, we opted for DPO as an alternative in our submission. Due to the limited resources and the need to run other experiments, we are currently unable to provide PPO results for natural language generation tasks. Nonetheless, we will try our best to update the results before the discussion deadline. 
**W4: It seems that Ada-Pref's greater resistance to performance degradation is not true in the Hopper task?** This holds in the Hopper task as well. Comparing Tables 1 and 2, we can see that Ada-Pref drops by 32.9\% in the Hopper task, while Pref drops by 43.7\%. **Q1-A: I do not believe the proposed loss function is convex, can you provide the proof of Remark 3.1?** The proposed loss function is strictly convex with respect to each $\tau_i$ when $r(z_w) \neq r(z_l)$. For simplicity, we omit the index $i$ from $\tau_i$ and set $C = r(z_w) - r(z_l)$. Then the instance-level loss function is written as: $$f(C, \tau) = -\tau \log \sigma(C/\tau) + \rho\tau. $$ Since $\frac{\partial^2 f}{\partial \tau^2} = \frac{C^2}{\tau^3} \sigma'(C/\tau) > 0$ for all $\tau \in [\tau_0, \tau_{\mathrm{max}}]$ whenever $C \neq 0$, $f$ is strictly convex with respect to $\tau$. **Q1-B: Why can the reward models trained with the proposed method better guide policy model selection, considering the reward model performance is not improved?** We cannot directly evaluate the performance of the reward model solely based on preference prediction accuracy. This is because preference prediction accuracy does not fully capture the effectiveness of the reward model in the context of policy optimization. It only reflects the sign of the reward difference, not its scale. Our reward function is designed to be more flexible, providing distinct rewards for clear pairs and comparable rewards for ambiguous pairs. This flexibility allows the reward function to generate a wider range of rewards, which is crucial for effective downstream policy optimization. **Q2: How do the authors obtain the true reward difference in Section 4.3?** PyBullet environments provide the ground-truth reward for each state-action pair, which is then used to obtain the true reward difference in Section 4.3.
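As a quick numerical sanity check of this convexity claim (an illustrative script with arbitrary grid values, not part of the paper), one can verify that a finite-difference estimate of $\partial^2 f / \partial \tau^2$ is positive wherever $C \neq 0$:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def f(C, tau, rho=0.3):
    """Instance-level adaptive loss in the scaling factor tau,
    where C = r(z_w) - r(z_l) is the learned reward difference."""
    return -tau * math.log(sigmoid(C / tau)) + rho * tau

def curvature(C, tau, h=1e-3):
    """Central finite-difference estimate of d^2 f / d tau^2."""
    return (f(C, tau + h) - 2.0 * f(C, tau) + f(C, tau - h)) / h**2

# Analytically, d^2 f / d tau^2 = (C^2 / tau^3) * sigmoid'(C / tau) > 0
# for C != 0, so every estimate on this grid should be strictly positive.
for C in (-2.0, -0.5, 0.5, 2.0):
    for tau in (0.5, 1.0, 2.0):
        assert curvature(C, tau) > 0.0
print("convex on the grid")
```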
**Q3: The computation overhead introduced by learning the scaling parameter is not discussed.** Response to this concern is included in point 3 in the global rebuttal. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thank you for the informative response! Can you further visualize situations in which the BT model cannot express, maybe in your synthetic setting where the true preference strengths can be captured? The learned scaling parameter also seems linear regarding preference strength, as shown in Figure 5(b). If this motivation is not well-established, adding Adaptive Preference Scaling that has complicated implementation is not worth the cost. I also wonder if using simpler approaches, such as adding a reward margin to ranking loss or adding another layer of (non-) linear function outside the sigmoid function, can achieve similar or better performance. --- Reply to Comment 1.1.1: Title: Thank you for the discussion Comment: Thank you for your thoughtful comments! Due to NeurIPS regulations, we are unable to upload figures or provide links. However, we would like to clarify the following points: The BT model assumes a linear relationship between the reward difference and the logit of the preference distribution. This means it cannot accurately represent situations where the relationship is non-linear. In contrast, our method models this relationship as non-linear. Specifically, with linear regularization, the relationship becomes piecewise-linear (see Proposition 3.1), and with quadratic regularization, the relationship is more complexly non-linear (see Proposition 3.2). In both cases, it is important to note that when the logit value is small, our method learns a smaller reward difference. Conversely, when the logit value is large, our method learns a larger reward difference compared to the BT model. 
This adaptive, non-linear relationship makes our method more flexible and better suited for capturing complex preference dynamics that the BT model cannot handle. Further examples of the benefits of this non-linear relationship are provided in our response to Q1 of reviewer E9fU. In Example 1 (low preference strength), Example 2 (moderate preference strength), and Example 3 (large preference strength), the reward differences for DPO are 0.64, 1.07, and 1.47, respectively. Meanwhile, the reward differences for Ada-DPO are 0.31, 0.88, and 2.83, respectively, demonstrating that Ada-DPO scales more appropriately across varying levels of preference strength. Regarding the learned scaling parameter shown in Figure 5(b), we want to clarify that the near-linearity of the scaling factor does not imply a linear relationship between the learned logits and preference strength. Lastly, in the additional experiment, we implemented the reward margin method from [1] on three robotic control tasks. We did not include other reward margin methods, such as those from [2] and [3], because they require additional data called "score", which is expensive to obtain, making a fair comparison difficult. As shown in the table below, the reward margin method from [1] not only fails to match our method’s performance but also performs significantly worse than standard RLHF. | Task | Method | Return | |--------------|----------|---------| | HalfCheetah | Pref | 2724.42 | | | Margin | 577.98 | | | Ada-Pref | **2875.45** | | Ant | Pref | 2917.81 | | | Margin | 866.5 | | | Ada-Pref | **3177.11** | | Hopper | Pref | 1324.91 | | | Margin | 40.13 | | | Ada-Pref | **1692.1** | [1] Qin, Bowen, Duanyu Feng, and Xi Yang. "Towards Understanding the Influence of Reward Margin on Preference Model Performance." arXiv preprint arXiv:2404.04932 (2024). [2] Touvron, Hugo, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov et al. 
"Llama 2: Open foundation and fine-tuned chat models." arXiv preprint arXiv:2307.09288 (2023). [3] Amini, Afra, Tim Vieira, and Ryan Cotterell. "Direct preference optimization with an offset." arXiv preprint arXiv:2402.10571 (2024). --- Rebuttal 2: Comment: I appreciate the authors' detailed reply. They have addressed most of my concerns. However, there is no evidence (even without a toy example) demonstrating that the BT model would fail due to its linear relationship. This makes it unclear for researchers to determine when the proposed method, which seems hard to re-implement, should be used. Therefore, I would keep my previous evaluation. --- Rebuttal Comment 2.1: Comment: Thank you for your insightful comment. We would like to clarify that the BT model does not completely fail but is indeed insufficient in some cases. For example, when the relationship between the ground truth reward difference and the logit is non-linear, the BT model cannot fully capture and learn this correspondence. In our synthetic case, we only know the ground truth reward, not the logit of the preference distribution, so we can't provide direct evidence even in a toy example. However, the superior performance of our method across various tasks suggests that non-linear relationships do exist in real data. While the overall performance of the BT model is ok, our method offers greater flexibility and consistently delivers better results.
Summary: The paper identifies a limitation in RLHF methods, noting that ranking over pairs of trajectory segments often fails to capture the varying strengths of preferences across different pairs. To address this, the paper proposes a new adaptive preference loss (Ada-DPO), underpinned by distributionally robust optimization (DRO). The proposed method improves policy performance and is supported by theoretical proofs on convexity and univariate analysis. Strengths: - The paper identifies a key limitation in previous RLHF methods and provides a clear theoretical analysis of its methods and claims. - It presents robust experiments and quantitative analysis on both robotic controls and natural language generation tasks. Weaknesses: - It seems that only a single run was conducted for the experiments in the NLP tasks. More repeated runs would be more convincing, especially since the differences in performance are so close. Additionally, p-value testing for that domain would help determine if the differences are truly significant. Technical Quality: 3 Clarity: 3 Questions for Authors: - I would like to see more examples like Figure 7, comparing the learned reward differences with different scaling factors. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - More discussion on the limitations of the proposed method and future directions can be added. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for appreciating the features of the proposed method and are grateful for the constructive comments! In the following, your comments are first stated and then followed by our point-by-point responses. **W1: It seems that only a single run was conducted for the experiments in the NLP tasks. More repeated runs would be more convincing.** Thank you for pointing that out. We have included additional results and analysis for more runs in point 1 of the global rebuttal. We can observe from the table that our method consistently achieves stable and significant improvements over the baselines in both tasks. Given the large margin of improvement, we believe a p-value test is not necessary. **Q1: I would like to see more examples like Figure 7, comparing the learned reward differences with different scaling factors.** We provide three examples with different learned scaling factors from the summarization task below. We do not include the complete prompts here for better readability and due to the space limit. ---- ### Example 1: * **Original text**: I need some advice. I've been talking with this girl for about 2 weeks now. We went out last weekend and it went great. We were working on setting up another date and she told me that she was concerned about distance ... * **Chosen summary**: agreed to meetup for coffee but haven't heard from her since tuesday night. Want to know what i can do to make it happen again. * **Rejected summary**: agreed to meetup but haven't heard from since. Tried texting a couple times just trying to understand whats going on. This sample pair gets $\tau=0.325$, and the learned reward difference is 0.3071 (Ada-DPO) vs 0.6419 (DPO). We can observe that the two responses in this example are similar, with the chosen summary being slightly more specific. Our method learns a small scaling factor and a smaller reward difference since the gap between the two responses is not significant.
---- ### Example 2: * **Original text**: I'm not sure if I can even do anything, and if the person in question wasn't an ... * **Chosen summary**: store franchise owner is probably stealing from their store by under ordering groceries/charging less than customer pays. possibly unethical behavior by owner of store? not sure how to proceed/act. help pls reddit. * **Rejected summary**: store franchise owner probably is hiding their grocery bill from the rest of the store staff and is getting some kind of unethical benefit out of it, not sure if I can do anything. Advice? This sample pair gets $\tau=0.93$, and the learned reward difference is 0.8786 (Ada-DPO) vs 1.066 (DPO). This example presents a moderate difference, with the chosen summary being more accurate and summarizing the text better. Our method and the DPO baseline obtain similar reward differences, and the scaling factor is around 1. ----- ### Example 3: * **Original text**: So my dream is do stand up comedy, improv comedy, writing and/or sketch comedy full time... * **Chosen summary**: I want to pursue stand up comedy full time but I am afraid of losing my brothers rent money and my family. Do I follow my dreams or play it safe? Any advice/criticism is greatly appreciated! * **Rejected summary**: Follow your dreams or play it safe? This sample pair gets $\tau=3.00$, and the learned reward difference is 2.832 (Ada-DPO) vs 1.467 (DPO). Our method learns a large scaling factor and a larger reward difference for this last sample. As we can see from the two responses, the preference is obvious, with the rejected summary omitting important information. ------ **Q2: More discussion on the limitations of the proposed method and future directions can be added.** Response to this concern is included in point 4 of the global rebuttal.
Summary: The paper focuses on redesigning the loss function with adaptive scaling parameters to deal with the uncertainty in the preferences, thus improving reward-modeling flexibility. In the context of both robotics and NLP, the algorithm with the new loss shows improved performance. Strengths: 1. The proposed loss is flexible and can be incorporated into the majority of current RLHF frameworks 2. The objective has strong connections to DRO, which is critical since preferences will be noisy and sub-optimal in practical scenarios. Weaknesses: 1. The experimental ablation is weak and doesn't provide concrete indications of the strength of the proposed algorithm. For example, in Figure 3, the win-rate gap is marginal and not significant. Why the performance in summarization is better in Figure 4 is not entirely clear. 2. The experimental evaluations lack important benchmarks and comparisons. For the robotics environment, it's crucial to compare with Pebble, SURF, and PARL to understand the true efficacy of the proposed approach. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Can the authors provide more concrete mathematical justifications regarding the issue with linear dependence with the logit of the preference distribution? Specifically, connecting with the relation to noisy preferences? 2. "For some applications, it may lead to a reward function that is not flexible enough to differentiate a pair of segments, which are supposed to have significantly different rewards." Can the authors please highlight such examples intuitively or mathematically, why such a collapse might happen? 3. The authors mention "Our proposed method is inspired by DRO, it serves a distinct purpose: improving reward learning in RLHF, which is orthogonal to distributional robustness". Can the authors please justify this statement and discuss why it's orthogonal?
DRO in reward learning will also improve the robustness and flexibility of the reward model to noisy preferences, in what sense it does better than DRO? 4. Can the authors provide a more concrete definition of the best win rate and best prediction accuracy? 5. It will be helpful if the authors can provide more details on how the trajectories are constructed for the robotics experiments. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Check above Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the valuable feedback that you have provided! Please see our response to each of your concerns below: **W1: Weak experimental ablation; why the performance in summarization is better in Figure 4 is not clear.** The response regarding the experimental ablation is included in the global rebuttal. In Figure 4, we select the best model based on preference prediction accuracy, and the improvement indicates that our method better aligns the learned reward function with policy optimization, while the DPO baseline results in a model with high preference accuracy but a low win rate. The gap may be because our Ada-DPO is more robust to overfitting, and because the learned reward is more flexible and better guides policy optimization. **W2: The experimental evaluations lack important benchmarks and comparisons. For the Robotics environment, it's crucial to compare with Pebble, SURF, and PARL to understand the true efficacy of the proposed approach.** The proposed method is orthogonal to PEBBLE [1], SURF [2], and PARL [3] due to the distinct focus of our work. Specifically: * **PEBBLE** aims to enhance both sample and feedback efficiency by integrating unsupervised pre-training with off-policy RL. * **SURF** focuses on improving feedback efficiency by utilizing unlabeled data for reward learning. * **PARL** tackles the issue of the alignment objective's dependence on data generated by the optimal policy by introducing a bilevel formulation. In contrast, our method focuses on enhancing reward model training by considering the varying preference strength between each sample pair. This approach is distinct from the goals of improving feedback efficiency or resolving the entanglement between the alignment objective and the trajectory-collecting policy.
Our proposed loss function can be readily adapted to enhance the performance of these methods (PEBBLE, SURF, and PARL), serving as a complementary approach rather than a direct comparison. As discussed in Remark 3.2 of the paper, Eq. (4) in PEBBLE [1], Eq. (3) in SURF [2], and Eq. (5) in PARL [3] show that these frameworks still optimize the standard cross-entropy loss for reward learning. Our proposed adaptive loss function can replace this standard cross-entropy loss to potentially enhance their performance. **Q1: Can the authors provide more concrete mathematical justifications regarding the issue with linear dependence with the logit of the preference distribution? Specifically, connecting with the relation to noisy preferences?** We want to clarify that the issue of the linear dependence of the BT model is not directly related to noisy preferences. Rather, it limits the flexibility of the learned reward difference, preventing the reward function from providing a wider range of rewards in downstream policy optimization. Our proposed method incorporates scaling factors to achieve a non-linear relationship, allowing the learned reward difference to be more flexible. We provide a mathematical analysis in Section 3, demonstrating that our method increases the complexity of the reward model compared to the baseline. Specifically, we show that our method learns a smaller reward difference for ambiguous preference data and a larger reward difference for strong preference data. The experimental results further prove that this improved flexibility translates to better performance. We believe this sufficiently demonstrates the linear dependence is not flexible enough and hence can be suboptimal. **Q2**: Included in global rebuttal. **Q3: Can the authors pls justify the statement and discuss why the proposed method is orthogonal to DRO?** While our proposed method borrows the technical concept from DRO, the aims are fundamentally different. 
DRO focuses on improving robustness against data distribution shifts between training and test sets, which does not necessarily translate to the increased flexibility of the reward model that we want to address. In contrast, our method aims to improve flexibility by incorporating scaling parameters to help reward learning, not targeting robustness against data distribution shifts or noisy preferences. This distinct focus is why we consider our method orthogonal to DRO. More technical differences are discussed in Line 108. **Q4: Can the authors provide a more concrete definition of the best win rate and best prediction accuracy?** We use two different criteria to select the model among all those trained with different hyperparameter configurations: * Best Win Rate: We identify the best policy based on its end performance, specifically the policy with the highest win rate, and then report the win rate and the preference prediction accuracy of the corresponding reward function (Figure 3). * Best Prediction Accuracy: We identify the best policy as the one whose corresponding reward function achieves the highest preference prediction accuracy (Figure 4). We present these results because win-rate evaluation involves costly proprietary models, and tuning based on preference accuracy can be much more efficient in practice. **Q5: It will be helpful if the authors can provide more details on how the trajectories are constructed for the robotics experiments.** Thank you for pointing that out. Due to the wide variety of experiments and the numerous details about the experimental settings, we were unable to include all the detailed explanations in the main text. However, a brief explanation of how the trajectories are constructed for the robotics experiments is provided in Appendix C.1. ### Reference [1] Pebble: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training.
[2] SURF: Semi-supervised reward learning with data augmentation for feedback-efficient preference-based reinforcement learning. [3] PARL: A unified framework for policy alignment in reinforcement learning --- Rebuttal Comment 1.1: Title: Response to Rebuttal by Authors Comment: Thanks for providing concrete answers to my queries and issues. Most of my concerns have been resolved, hence I am increasing my score. Please add the references mentioned in detail and explain the reason for not comparing them with the baselines. However, the mathematical justification regarding the issue with linear dependence with the logit of the preference distribution is not very clear, and I would request the authors to provide a clear mathematical explanation. --- Reply to Comment 1.1.1: Title: Thank you for the discussion Comment: Thank you for your valuable comments. We will include the mentioned references and the discussion in the paper. Additionally, we would like to address your remaining concern regarding the mathematical justification related to the issue of linear dependence. The BT model assumes a linear relationship between the reward difference ($\Delta r$) and the logit of the preference distribution ($\ell$), expressed as $\Delta r = \ell$. This assumption inherently limits the model's ability to accurately capture scenarios where this relationship is non-linear. In contrast, our method models this relationship as non-linear. Specifically, in the case of linear regularization, the relationship is piecewise-linear and can be expressed as: $$\Delta r = \begin{cases} \tau_0\,\ell, & -t < \ell \leq t, \\ \tau_{\max}\,\ell, & \text{otherwise}, \end{cases}$$ where $t$ is a threshold related to $\rho$, and $\tau_0 < 1 < \tau_{\max}$ (see Proposition 3.1). For quadratic regularization, the relationship becomes even more complexly non-linear (see Proposition 3.2).
Importantly, in both cases, when $\ell$ is small, $\Delta r$ is smaller, and when $\ell$ is large, $\Delta r$ is larger compared to the BT model. This adaptive, non-linear relationship enhances our method’s flexibility and allows it to better capture complex preference dynamics that the BT model cannot. Although we intended to include visualizations of these relationships to make them more intuitive, NeurIPS regulations prevent us from doing so in the rebuttal. However, we will add these plots in the paper to provide better clarification. Additionally, please refer to the examples on the summarization task in our response to Q1 for reviewer E9fU, which further illustrate how this non-linearity adapts to the data. Lastly, we noticed that, despite your mention of potentially increasing the score, it has remained unchanged. We would greatly appreciate it if you could reconsider the score in light of the explanations we provided.
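The piecewise-linear mapping described in our reply above can be illustrated with a short script (hypothetical values for $\tau_0$, $\tau_{\max}$, and the threshold $t$, chosen only to satisfy $\tau_0 < 1 < \tau_{\max}$; in the paper, $t$ is determined by $\rho$):

```python
def bt_reward_diff(logit):
    """BT/DPO model: the reward difference equals the logit (slope 1)."""
    return logit

def adaptive_reward_diff(logit, tau0=0.5, tau_max=3.0, t=1.0):
    """Piecewise-linear logit-to-reward-difference map under linear
    regularization (illustrative tau0, tau_max, t; cf. Proposition 3.1)."""
    return tau0 * logit if -t < logit <= t else tau_max * logit

# Ambiguous pair (small logit): compressed reward difference vs. BT.
assert adaptive_reward_diff(0.4) < bt_reward_diff(0.4)
# Strong preference (large logit): amplified reward difference vs. BT.
assert adaptive_reward_diff(2.0) > bt_reward_diff(2.0)
```

This compression/amplification is exactly the behavior that yields smaller reward differences for ambiguous pairs and larger ones for strong pairs.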
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for the valuable feedback! Before we respond to each reviewer individually, we list and address common concerns below: > **1. The experimental ablation doesn't provide concrete indications of the strength of the proposed algorithm. Only a single run was conducted for the experiments in the NLP tasks. More repeated runs would be more convincing.** We would like to highlight that the proposed method achieves non-marginal improvements across multiple tasks. It outperforms the DPO baseline by 9-25% in terms of return on the Ant and Hopper tasks (Table 1), and achieves an 11% higher win rate than DPO on the Dialogue task (Figure 3). While the improvements on HalfCheetah and summarization are not as large, they demonstrate that our method is consistently better and effective. Furthermore, the smaller improvement on summarization could be due to Claude 2 not being a strong enough judge for evaluation. We conducted new experiments on the NLP tasks replacing Claude 2 with Claude 3, a stronger judge, using three different seeds. The results are presented in Table 1 in the attached file, showing that our method significantly improves over DPO and is stable across seeds. > **2. Clarification on the sentence: "For some applications, it may lead to a reward function that is not flexible enough to differentiate a pair of segments...”** Human preferences are often influenced by numerous factors that interact in non-linear ways, making the BT model suboptimal as a reward model. For example, when the reward difference is small, even slight changes in certain features might lead to significant shifts in preference. The BT model may struggle to capture such rapid shifts due to its slower transition. Conversely, when the reward difference is already large, its effect on the preference distribution could be marginal, meaning that a small increase in preference probability corresponds to a large increase in the reward difference.
The BT model, with its linear correspondence, will not learn a large enough reward difference for strong preference data due to its small gradient. Our method increases the gradient (as shown in Figure 1) in this case to more flexibly learn a wider range of rewards. We would like to emphasize that the scale of the reward difference is crucial, as it affects policy optimization, and a larger or smaller reward difference can benefit the learning of some downstream tasks. > **3. The compute costs of the algorithm are not discussed and could be significant. It hinders real-world mini-batch learning.** We would like to clarify that the additional cost of our proposed method is negligible, and it does not hinder mini-batch learning. Computationally, since the proposed loss is strictly **convex** and **univariate** in the scaling factor, the inner minimization problem (Lines 5-7 in Alg. 1) can be solved efficiently with a few iterations (k=5) of Newton's method. The additional computational cost of each update is **negligible** compared to the overall RLHF pipeline. Memory-wise, we need to temporarily store $\tau_i$ for each sample, which is minor considering the high-dimensional nature of the data, and we can free the memory after the backward update. Regarding mini-batch learning, although the algorithm is presented in a per-sample manner to show how each $\tau_i$ is optimized, it can be directly adapted to mini-batch learning. In practice, we update all $\tau_i$ in a mini-batch in parallel. Our experiments also utilize mini-batch learning to ensure efficiency. > **4. More discussion on the limitations of the proposed method, future directions and societal impact can be added.** We provide our discussion below and will include these points in our next version: One main limitation of our proposed method is the introduction of three hyperparameters to control the scaling ($\tau_0, \tau_{\max}, \rho$), which induces additional tuning costs.
In the paper, we propose an extension to the quadratic penalty in Section 3.6 to eliminate $\tau_{\max}$, but the performance of this extension is not as good. Developing more efficient tuning approaches or regularization techniques to reduce the need for these hyperparameters will be considered as future work. Additionally, extending our adaptive loss to handle ranking data with more than two responses for preference optimization is another potential future direction. For societal impact, our method can be used to better align the LLM to user preferences, which presents some societal risks. Primarily, as with DPO, it may reinforce and amplify existing biases if the preferences and feedback used in training are skewed or prejudiced. Ethical concerns also arise if the model is aligned with harmful or unethical preferences. Pdf: /pdf/d3c4a5e56a8a61f13ad4d49da4f8feb3590470da.pdf
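The inner minimization described in item 3 of the rebuttal above — a strictly convex, univariate problem in the scaling factor solved with a few (k=5) Newton iterations — can be sketched as follows. The loss here is a toy convex stand-in, not the paper's actual objective, and the box projection onto $[0, \tau_{\max}]$ is an assumption mirroring the hyperparameter mentioned in the rebuttal:

```python
import math

def newton_minimize(f_prime, f_second, tau0, steps=5, tau_max=10.0):
    """Minimize a strictly convex univariate function via Newton's method,
    projecting each iterate onto [0, tau_max] (a hypothetical box constraint)."""
    tau = tau0
    for _ in range(steps):
        tau = tau - f_prime(tau) / f_second(tau)  # Newton update
        tau = min(max(tau, 0.0), tau_max)         # keep the scaling factor in range
    return tau

# Toy convex loss g(tau) = exp(tau) - 3*tau, minimized at tau* = ln(3).
tau_star = newton_minimize(lambda t: math.exp(t) - 3.0,
                           lambda t: math.exp(t),
                           tau0=0.0, steps=5)
print(tau_star)  # ≈ 1.0986 = ln(3)
```

Because the objective is strictly convex in one variable, five iterations already reach the minimizer to sub-millesimal accuracy here, which is consistent with the rebuttal's claim that the per-update cost is negligible.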
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Physical Consistency Bridges Heterogeneous Data in Molecular Multi-Task Learning
Accept (poster)
Summary: This paper presents a method to improve different tasks simultaneously by combining datasets of different fidelities and focuses, utilizing scientific laws that connect the tasks. Predicting the energy and equilibrium structure of molecules is used as an example. Two forms of consistency losses are developed based on (1) the rule that the equilibrium structure minimizes energy, and (2) the relation between the probability distribution of structure and energy. The benefits of consistency losses in multi-task learning are demonstrated on quantum chemistry datasets. Strengths: - Developing consistency loss functions is an elegant approach to incorporating scientific laws into machine learning. - The proposed method does not require additional data to improve performance. Weaknesses: - The demonstration of the method is limited to one scenario (energy + equilibrium structure), where the formulation of consistency losses is ad hoc. The broader applicability is thus questionable. - The review of related works is not comprehensive (see Questions). Technical Quality: 4 Clarity: 4 Questions for Authors: - This work essentially incorporates scientific laws by modifying loss functions. There are other works following a similar approach, e.g., PINNs; should they be discussed as related works? - In the experiments, the demonstrated benefits of consistency learning are mainly about accuracy. While for structure prediction via diffusion, efficiency (e.g., the diffusion steps required to attain a near-equilibrium structure) is also important. Could you comment on the efficiency? - Training on 8× Nvidia V100 GPUs for a week is a considerable computational cost. How does the consistency loss affect computational cost and model convergence? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Discussed in Sec. 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your dedicated effort in reviewing our paper! We deeply appreciate your acknowledgement of our contributions, as well as your informative feedback and suggestions. ## Broader applicability Thank you for the opportunity to elaborate on this point. Please refer to the global rebuttal (item 1). ## Related work about PINN Thank you for the suggestion! We have included a discussion on related works on PINNs as follows: Physics-informed neural networks (PINNs) [R1] are another example of incorporating physical laws in neural network training. The principle of PINNs involves representing the unknown target function with a neural network and optimizing it using a loss function derived from a system of partial differential equations (PDEs), such as the variational form of the PDE [R2]. This approach has shown promise in solving PDEs across various applications, including higher-dimensional problems [R3, R4]. PINNs are also applied to solve inverse problems by optimizing the parameters of PDEs [R5, R6]. More relevant to our case, PINNs can also be used to tackle data heterogeneity. For instance, HFM [R7] uses the Navier-Stokes equation to establish a connection between the concentration of contrast agents in the bloodstream and the dynamic quantities of blood flow such as velocity and pressure; the latter are then derived from the concentration variations inferred from medical imaging. Similarly, PhySR [R8] utilizes the underlying physical laws in the system, such as those governing the 2D Rayleigh-Bénard convection system, to reconstruct high-resolution results from low-resolution data. While sharing the same spirit, our work has some technical differences. Physical laws in molecular properties often do not come in the form of PDEs (unless solving the Schrödinger equation in the original form) but as algebraic or statistical equations.
There is no need for grids, but multiple quantities are involved, e.g., energy and structure in our case, so the laws are used to bridge different prediction models rather than to learn a single model. Moreover, neural networks can be treated as black-box models in PINNs, while in our case, in-depth analyses are needed to connect model output to the desired quantities: Sec. 3.2 analyzed how to produce a rough structure prediction from the output of the denoising model for optimality consistency, and Sec. 3.3 analyzed how to compute the score using the denoising model for score consistency. [R1] Raissi M, Perdikaris P, Karniadakis G E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations[J]. Journal of Computational Physics, 2019, 378: 686-707. [R2] Yu B. The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems[J]. Communications in Mathematics and Statistics, 2018, 6(1): 1-12. [R3] Lu L, Meng X, Mao Z, et al. DeepXDE: A deep learning library for solving differential equations[J]. SIAM Review, 2021, 63(1): 208-228. [R4] Han J, Jentzen A, E W. Solving high-dimensional partial differential equations using deep learning[J]. Proceedings of the National Academy of Sciences, 2018, 115(34): 8505-8510. [R5] Lu L, Pestourie R, Yao W, et al. Physics-informed neural networks with hard constraints for inverse design[J]. SIAM Journal on Scientific Computing, 2021, 43(6): B1105-B1132. [R6] Yu J, Lu L, Meng X, et al. Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems[J]. Computer Methods in Applied Mechanics and Engineering, 2022, 393: 114823. [R7] Raissi M, Yazdani A, Karniadakis G E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations[J]. Science, 2020, 367(6481): 1026-1030. [R8] Ren P, Rao C, Liu Y, et al. PhySR: Physics-informed deep super-resolution for spatiotemporal data[J].
Journal of Computational Physics, 2023, 492: 112438. ## Efficiency for structure prediction via diffusion Thank you for bringing up the efficiency consideration. Our methods introduce some cost in the training stage in order to go beyond the level of accuracy of the training data, but they do not alter the way structures are generated in the inference stage, hence they do not affect the efficiency of structure prediction. To improve the efficiency, we can directly leverage prevailing techniques to reduce the number of diffusion steps, e.g., DDIM [43], Heun's method [R9], and DPM-Solver [R10]. [R9] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas. Gotta go fast when generating data with score-based models. CoRR, abs/2105.14080, 2021. [R10] Lu C, Zhou Y, Bao F, Chen J, Li C, Zhu J. DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems. 2022 Dec 6;35:5775-87. ## Computational cost Thank you for your careful read! Consistency training indeed introduces more training effort, but we adopted the implementation design described in Appendix B.3, so the additional cost is still manageable. The main reason we need such computational cost is that we used a relatively large model (around 130M parameters) to sufficiently capture the information in PM6, which is the largest public molecular dataset with DFT-level energy labels. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' comprehensive response, which addresses my concerns well. I could raise the score for Presentation to 4 with the clarification questions answered. --- Reply to Comment 1.1.1: Comment: Thank you for your update! We are glad to know that our reply has addressed your concerns. We will include these additional clarification contents in our paper.
Summary: The paper proposes a scientific-consistency-based improvement of the molecular structure and energy prediction tasks. On top of the diffusion process for structure prediction, the authors incorporated energy-guided losses, which enable direct information exchange between the two tasks. On the two benchmark datasets, the proposed consistency losses improved the naive multi-task learning framework. Strengths: - Well-written in Methods - It is notable to improve prediction performance based on the scientific correlation between molecular structure and energy without additional data Weaknesses: - The title and abstract of the paper are too general in relation to the specific task performed. - The proposed method's utility is limited as it can only be applied to tasks with a strong correlation, such as molecular structure and energy. - The paper specifies 200 test molecules, but it does not explain the criteria for their selection or why the number 200 was chosen. - In the experimental process, PM6 was used for pretraining, and identical molecules from PCQ and QM9 were removed; however, there is no mention of removing structurally similar molecules. Technical Quality: 2 Clarity: 3 Questions for Authors: - Do the authors think the proposed method would be useful for other tasks besides molecular structure and energy? If not, how about limiting the scope of the paper to molecular structure? - How was the number 200 determined for the test molecules? - What do the authors think the results would be if molecules from PCQ and QM9 with a similarity above a certain threshold (e.g., Tanimoto similarity 0.7) were additionally removed and the experiments were conducted? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: - The effectiveness of the proposed method is uncertain as there are no experiments applying it to the latest diffusion-based methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your dedicated effort in evaluating our paper! We appreciate your informative feedback and solid suggestions. ## About the title and abstract Thank you for your feedback. Please refer to the global rebuttal (item 2). ## Utility of the proposed method Thank you for the opportunity to elaborate on this point. Please refer to the global rebuttal (item 1). ## About test molecules The 200 test molecules are uniformly randomly selected from the PCQ (or QM9) dataset to guarantee that they are in-distribution with the whole PCQ (or QM9) dataset, while excluding molecules that also appear in the PM6 (training) dataset. This setting, including the number 200, follows previous mainstream structure generation works, including ConfGF [39], GeoDiff [53], and DMCG [62]. ## Structurally dissimilar test molecules Thank you for the insightful suggestion! It makes sense to exclude structurally similar molecules from the test set. Nevertheless, we found our selected 200 test molecules are already sufficiently dissimilar from the PM6 (training) dataset. For each test molecule, we take the fraction of PM6 molecules that have a Tanimoto similarity larger than 0.7 with the test molecule as a measure of similarity to the PM6 dataset. Figure R2 in the pdf file from the global rebuttal shows the distribution of this measure. We can see that for most (almost all) of the 200 molecules, the fraction of similar molecules in PM6 is below 1e-7 (2.5e-7). This indicates that the presented results already show the performance on structurally dissimilar test molecules. ## Applicability to latest diffusion-based methods We'd like to mention that our consistency learning methods are model-agnostic and can be applied to any diffusion-based model.
This is because they all predict the score at each diffusion time step (predicting the noise or the clean sample (denoising) is equivalent to predicting the score), so at small time steps, the score should align with the energy gradient (score consistency), and at larger time steps, the score can be used to predict the equilibrium structure through the denoising formulation (Eq. 4) (optimality consistency). Due to limited time and computational resources during the rebuttal period, we are unable to provide results on more diffusion-based methods, but we'd be happy to try if you could specify the method you have in mind. --- Rebuttal Comment 1.1: Comment: Thank you for the time and effort of the authors. The authors' rebuttal has addressed most of the concerns, but there are still doubts regarding the applicability of the paper and structurally dissimilar molecules between PM6 and training set of PCQ (or QM9). For this reason, I will maintain my original score. --- Reply to Comment 1.1.1: Title: Thank you for the follow-up message Comment: Thank you for sharing your updated comments! We are glad to know that our reply has addressed most of your concerns. ## Regarding the applicability of the paper We have made a clarification in the global rebuttal (item 1). We'd like to further highlight that beyond the utility of improving equilibrium structure prediction with energy, in Sec. 3.4 and 4.3 we also showed the utility of leveraging force labels and off-equilibrium structures to further improve structure prediction. This type of data heterogeneity is perhaps more ubiquitous than it seems. On the one hand, these tasks are of central importance and cover most problems in molecular science. The equilibrium structure provides a direct understanding of important properties of a molecule, e.g., a quick judgement of whether it can bind to a protein target, and is the prerequisite for calculating many properties, e.g., the phonon spectrum.
Energy and force are central to molecular dynamics simulation, which is perhaps the most important way to study the functions and macroscopic properties of a molecule. On the other hand, data heterogeneity between energy, force and equilibrium structure is ubiquitous. As we mentioned in Lines 35-40, generating an equilibrium structure data point requires repeated energy calculation, which is inherently orders of magnitude more costly than generating an energy label, so structure data are usually generated using a less costly but also less accurate method (there is a long-standing accuracy-efficiency trade-off in data generation methods), causing data heterogeneity. Moreover, we have explained in the Conclusion and in the global rebuttal that the proposed methods can be directly applied to leverage energy and force to improve thermodynamic ensemble sampling, which is a different task from structure prediction but is also of wide interest since it can estimate statistical properties and functions of molecules. We would be more than happy if you could specify your doubts regarding the applicability. ## Regarding evaluation on structurally dissimilar molecules There seems to be a misunderstanding (if not a misspelling) in your description: we did not include any PCQ (or QM9) molecules in training (which only contains PM6 molecules). The test molecules are from PCQ (or QM9) and do not appear in the PM6 dataset. In the previous reply, we have shown that our results already constitute an evaluation on dissimilar molecules. Figure R2 in the pdf file attached in the global rebuttal shows that the test molecules have very rare, if any, similar (Tanimoto similarity > 0.7 as you suggested) molecules in the training dataset. In particular, 49% of the PCQ test molecules do not have any similar molecules in the PM6 (training) dataset, and more than 80% of them have less than 0.000008% (8e-8) similar molecules in PM6.
We'd also like to point out that even if there were identical molecules (in terms of the same _molecular graph_, or equivalently, _SMILES_) in the test set, a well-learned model would not necessarily give a good evaluation result: note that only the less accurate PM6-level equilibrium structures (recall that this refers to the _3D structure_ of a molecule; see Lines 125-126; note "A molecule (a given SMILES) in physical reality can take different structures") are available in training, while the evaluation compares model-predicted structures against DFT-level (more accurate) equilibrium structures. So the improvement in the evaluation results is solid evidence that the proposed consistency learning takes effect. For a completely sanitized evaluation on dissimilar molecules, below we provide the results evaluated on the 49% of PCQ test molecules that do not have any similar molecules in the PM6 (training) dataset. Due to limited time, we provide the results (in terms of both RMSD and Coverage (see Appendix C.1)) only in the denoising generation setting corresponding to Tables 1 and 2, and will provide results in other settings in the revision.

|Training Set|Method|Mean RMSD (Å) $\\downarrow$|Min RMSD (Å) $\\downarrow$|Mean Cov $\\uparrow$|Median Cov $\\uparrow$|
|---|---|---|---|---|---|
|PM6 (Table 1)|Multi-Task| 1.175 | 0.642 | 0.613 | 0.675 |
| |Consistency| **1.135** | **0.625** | **0.644** | **0.745** |
|PM6 + SPICE force (Table 2)|Multi-Task| 1.136 | 0.609 | 0.639 | 0.735 |
| |Consistency| **1.121** | **0.579** | **0.672** | **0.790** |
|PM6 + subset force (Table 2)|Multi-Task| 1.174 | 0.653 | 0.612 | 0.660 |
| |Consistency| **1.099** | **0.616** | **0.697** | **0.830** |

We can see that the proposed consistency learning method still universally outperforms the baseline. The improvement is even larger than that in Tables 1 and 2. This result is direct, solid verification that the improvement does not come from memorizing training data.
If any doubts remain, we would be more than happy to address them once specified.
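The Tanimoto-based dissimilarity check described in this thread (counting training molecules with similarity > 0.7 to each test molecule) can be sketched as follows. In practice one would use RDKit Morgan fingerprints; the set-based fingerprints and function names below are illustrative assumptions, not the paper's actual pipeline:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def similar_fraction(test_fp, train_fps, threshold=0.7):
    """Fraction of training fingerprints with similarity above the threshold
    to a test fingerprint (the per-molecule measure used in the rebuttal)."""
    hits = sum(1 for fp in train_fps if tanimoto(test_fp, fp) > threshold)
    return hits / len(train_fps)

# Toy fingerprints as sets of "on" bit indices.
a = {1, 2, 3, 4}
b = {2, 3, 4, 5}      # tanimoto(a, b) = 3/5 = 0.6, below the 0.7 threshold
c = {1, 2, 3, 4, 5}   # tanimoto(a, c) = 4/5 = 0.8, above the threshold
print(tanimoto(a, b))              # 0.6
print(similar_fraction(a, [b, c])) # 0.5 (only c counts as similar)
```

A test set is then "sanitized" in the rebuttal's sense when this fraction is zero for every test molecule.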
Summary: The authors consider the multitask learning setting for molecular structure and energy prediction where the fidelity of the labels differs between tasks [1]. The authors note that they can leverage the relationship between high fidelity labels (energy) and low fidelity labels (structure) to design loss functions, a) the optimality consistency loss and b) the score consistency loss that operate as inductive biases in the multitask learning setting. Given that the proposed method is straightforward and appears to work well empirically, I consider the paper borderline at the moment since the code has not been supplied to reproduce the experimental results. I will revise my score if the code can be provided to ensure the reproducibility of the reported results. Strengths: The method introduced by the authors is straightforward and appears to work well empirically. Weaknesses: __MAJOR POINTS__ 1. The title is not fully descriptive of the authors' contribution. I would recommend the authors revise the title to be more descriptive of the paper content. Specifically, the contribution does not apply to molecular science as a whole, but rather to structure and energy prediction. From the title alone, it is also unclear what the meaning of scientific consistency is, given that this appears to be a neologism coined by the authors. __MINOR POINTS__ 1. It would be great if the references appeared in numbered order. 2. Line 2, typo, "at scale". 3. There are some discrepancies in capitalization in the references e.g. 3d and 3D. 4. Line 38, typo, "hundreds of times more costly". 5. The original paper on molecule generation with VAEs [2] should probably be cited at some point in the text. 6. Given that the standard deviations are crucial to establishing the improvement afforded by the consistency loss approach, I would recommend providing them in the main text instead of in the appendix.
It may also be possible to perform a paired t-test to assess the statistical significance of the results e.g. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_rel.html 7. The axis labels of Figure 2 are somewhat confusing. The y-axis label is the predicted energy on the model-generated structure whereas the x-axis label is the energy of the equilibrium structure in the dataset. Would it be more appropriate to label the x-axis ground truth energy? The axis labels are confusing because the "predictor" for both energies is different. 8. In Figure 2, why is there a systematic deviation in the predicted energy of R_pred relative to R_eq? 9. ADAM, reference 22, was published at ICLR 2015. 10. The details of the model should be provided in the main paper rather than the appendix. 11. In Section 4.4 the authors refer to the models from Section 4.2. It is not clear what these models are or how they were pre-trained. I would suggest including this in the main text. 12. Line 375, "by the abundance of the data involved". 13. It would be worth mentioning [3] as a reference source for multitask learning at some point in the text. __REFERENCES__ [1] Peherstorfer, B., Willcox, K. and Gunzburger, M., 2018. Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3), pp.550-591. [2] Gómez-Bombarelli, R., Wei, J.N., Duvenaud, D., Hernández-Lobato, J.M., Sánchez-Lengeling, B., Sheberla, D., Aguilera-Iparraguirre, J., Hirzel, T.D., Adams, R.P. and Aspuru-Guzik, A., 2018. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 4(2), pp.268-276. [3] Caruana, R., 1997. Multitask learning. Machine Learning, 28, pp.41-75. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Section C.1, the authors state that Tables 4 and 5 contain the std of the predictions, however, the caption states that the table contains test coverage values.
The caption of Table 4 also indicates that lower is better. Is this the correct direction? 2. Could the authors produce standard errors for the results presented in Table 7 of the appendix? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. One limitation of the work is the realm of applicability of the method since it applies specifically to molecular structure and energy prediction. 2. At the current point the authors have yet to provide code to reproduce the results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your dedicated effort in evaluating our paper! We appreciate your careful read, and are grateful for your feedback and suggestions. ## About the title Thank you for your informative feedback. Please check item 2 of the global rebuttal. ## The minor points 1. Thank you for the professional suggestion. We have revised the references to be in numbered order. 2. Thank you for pointing it out. We have revised it. 3. Thank you for your careful read! We have revised the capitalization. 4. Thank you. We have revised it. 5. Thank you for your suggestion. We have cited the paper in the Introduction. 6. Thank you for your suggestion. We have managed to insert standard deviations (in a smaller font) in the main tables. Following your instruction, we also conducted paired t-tests on all results in the main tables (1-3). Please check Table R1 in the pdf file from the global rebuttal. There are only 4 out of 32 cases that have a p-value > 0.05, and those are all under the "Min" case where multi-task learning may also have a chance to hit the target structure. The means are indeed close in those 4 cases. We have included these results in the revision. 7. Thank you for your feedback. We'd like to clarify that for the x-axis of Figure 2, the structures are from the PCQ dataset (vs. predicted by the model for the y-axis), but the energies of the structures are predicted by the energy prediction model (this part is the same as for the y-axis). The purpose of this figure is to verify that the improved structure prediction accuracy from consistency training is indeed due to the predicted structures achieving a lower DFT-level energy (instead of, e.g., a better fit to the PM6 structures), hence it is the consistency learning that makes the model predict structures closer to DFT-level equilibrium structures. We did not compare the ground-truth energies since we do not have DFT energy labels for the model-predicted structures.
The energy prediction model is trained on DFT-level energy labels on PM6 structures, hence can be used as a surrogate to evaluate DFT-level energy. 8. Alignment of predicted energy of R_pred and R_eq (i.e., points are on the diagonal) means the model predicts the same as R_eq, which is not what we expected, since the model does not see any DFT-level equilibrium structures in training (note only the less accurate, PM6-level equilibrium structures are available for training, while R_eq are DFT-level equilibrium structures from the PCQ dataset as the ground truth for evaluation). Instead, the point of Figure 2 is that consistency learning (still no DFT-level equilibrium structure data!) drives the model to predict structures closer to R_eq, which is verified in Figure 2 as the orange points (consistency) lie closer to the diagonal than the blue points (multi-task). 9. Thank you. We have revised the citation. 10. Thank you for your suggestion. We have added model details in Sec. 4.1. 11. Thank you for your feedback. We meant that the pre-trained models for the finetuning experiments in Sec. 4.4 are those trained under the settings described in Sec. 4.2 (the models corresponding to the results in Table 1). The detailed training settings for Sec. 4.2 are provided in Appendix B.3, which we have moved to Sec. 4.1 in the revision. 12. Thank you. We have revised it. 13. Thank you for your suggestion. We have added the reference in the introduction of multitask learning. ## About the questions 1. Thank you for your careful read! We apologize for the confusion. We have revised Sec. C.1 such that Tables 4 and 5 contain the coverage results, and revised the caption such that it is the higher the better. The corresponding std results are presented in Tables 9 and 10. 2. 
As the scale of energy depends on the size of the molecule (energy is an extensive quantity), which also affects the standard deviation of the error, we provide a box plot (Figure R1 in the pdf file from the global rebuttal) for the energy MAE in each case corresponding to Table 7, which provides more detail about the error distributions. Comparing results in each column, we see that training with consistency losses does not hurt energy prediction. Comparing results in each row, we find that including force data in training leads to a more accurate energy model, which explains why consistency learning performs better in this case (comparing Table 2 to Table 1). This aligns with the observations from Table 7. ## About the limitations 1. Please refer to the global rebuttal (item 1) for broader applicability. 2. As we mentioned in the Paper Checklist (item 5), releasing code requires an internal asset release review process within our organization. We have started the process, but cannot guarantee availability during the review period. We have been pushing the process, but unfortunately it is still not completed. We have provided implementation details in Appendix B, and will provide more to guarantee reproducibility. We will push further to guarantee the release before the time of publication. --- Rebuttal Comment 1.1: Title: Many Thanks to the Authors for their Rebuttal Comment: Many thanks to the authors for their rebuttal. I will confine my response to the outstanding points given that the remainder have been addressed by the authors. 6. I would recommend using "null hypothesis" in place of "postulate" for the paired t-test. 7. I think the confusion arises because the model-generated structure is denoted as R_pred. As far as I understand there is a) the model-generated molecule and b) the model-predicted energy of the molecule. Currently, both the molecule itself and its energy are referred to as predictions.
Perhaps a notation such as R_gen for the model-generated molecule would help disambiguate these cases? 8. Rather than the meaning of the figure, I was inquiring more as to the **systematic deviation** present in the figure. In other words, all predicted energies lie higher than the predicted energy of the equilibrium structure. Could the authors remind me as to whether there is a constraint that enforces this in the model prediction? In terms of code release, it is unfortunate that the authors are subject to an internal review process which will delay the release of the code. Please notify me in case the internal review process completes before the rebuttal period ends. One option might be to provide a blank anonymous GitHub repository link and to update this post-rebuttal once the internal code review process completes. In that way the code release may still be accounted for prior to the final decision on the paper. --- Reply to Comment 1.1.1: Title: Thank You for the Follow-up Comments Comment: Thank you for your attention to our response, and for sharing your further thoughts and suggestions! We are glad to know that we have addressed most of your concerns, and are happy to discuss the rest further. 6. Thank you for your suggestion! We will change to the professional term "null hypothesis" in explaining the meaning of the presented numbers in the table. 7. Thank you for your informative suggestion. In the submission, we treated "model-predicted structure" and "model-generated structure" as the same. We will change the term to the latter and correspondingly change the label R\_pred to R\_gen, in response to the point that the latter could reduce ambiguity.
(In case of potential misunderstanding, we would like to mention that the energy, denoted as $E_{\mathcal{G}}(\mathbf{R})$ in our paper, is a function of both a molecule (in terms of its molecular graph $\mathcal{G}$) and a 3D structure $\mathbf{R}$ of the molecule (the coordinates of its atoms). In Figure 2, each point corresponds to one molecule $\mathcal{G}$ in the PCQ dataset (as the test dataset). Its x- and y-coordinates are the model-predicted energies of the ground-truth equilibrium structure $\mathbf{R}_\mathrm{eq}$ of the molecule $\mathcal{G}$ (available from the dataset) and of the model-generated structure $\mathbf{R}_\mathrm{gen}$ (formerly denoted as $\mathbf{R}_\mathrm{pred}$) of the molecule $\mathcal{G}$, respectively. Your description is accurate if "the model-generated molecule" is meant to be "the model-generated structure of a given molecule".) 8. To understand why the predicted energy of the model-generated structure $\mathbf{R}_\mathrm{gen}$ is higher than the predicted energy of the equilibrium structure $\mathbf{R}_\mathrm{eq}$ for all the molecules, please recall that the equilibrium structure is defined as the structure that achieves the minimal energy (Line 74). In Figure 2, the equilibrium structures are from the PCQ dataset and are ground-truth DFT-level equilibrium structures (produced by carrying out actual DFT calculations), while the energy-prediction model is trained on DFT-level energy labels from the PM6 dataset. If the energy-prediction model is well learned, then no structure (including the model-generated structure) should have a lower energy than the ground-truth DFT-level equilibrium structure, for any molecule.
Note that the structure-generation model is trained on the PM6 dataset, whose structures are not DFT-level equilibrium structures, so the model-predicted energies (approximating the DFT-level energies) of model-generated structures are systematically larger than those of the ground-truth DFT-level equilibrium structures. With the proposed consistency learning technique, the systematic deviation becomes smaller, indicating that the model-generated structures become closer to the corresponding ground-truth DFT-level equilibrium structures. Thank you for the suggestion regarding code release. We have set up an anonymous GitHub repository, and will upload the code there once we have completed the internal review process. According to the NeurIPS review policy, this link should be sent to the area chair for verification before we can provide it to you. We will provide the link once we hear back from the area chair. --- Rebuttal 2: Title: Many Thanks to Your Further Comments Comment: Thank you for taking the time. We are glad that our clarification regarding point 8 has resolved your question. We appreciate your willingness to upgrade the score based on the code release. However, as the deadline draws near, we have yet to receive permission from the Area Chair regarding the code link. We have reached out but have not received a reply. If possible, could you assist by asking the Area Chair for the code link during the reviewer-AC discussion period? We would be grateful if you could consider raising your score accordingly. We look forward to your continued discussion in the reviewer-AC discussion phase.
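The relation behind point 8 above can be illustrated with a toy sketch (all names and the quadratic energy surface are hypothetical stand-ins, not the paper's learned model): since the equilibrium structure minimizes the energy by definition, a well-learned energy model assigns every generated structure an energy no lower than that of the ground-truth equilibrium structure, which is why all points in Figure 2 lie on or above the diagonal.

```python
import numpy as np

# Hypothetical stand-in for a learned energy model: a quadratic surface whose
# minimum sits exactly at the equilibrium structure R_eq of each molecule.
def predicted_energy(R, R_eq, k=1.0):
    return 0.5 * k * np.sum((R - R_eq) ** 2)

rng = np.random.default_rng(0)
gaps = []
for _ in range(100):                                   # 100 toy "molecules"
    R_eq = rng.normal(size=(8, 3))                     # ground-truth equilibrium structure
    R_gen = R_eq + rng.normal(scale=0.2, size=(8, 3))  # model-generated structure
    # Systematic deviation: E(R_gen) - E(R_eq), non-negative for every molecule.
    gaps.append(predicted_energy(R_gen, R_eq) - predicted_energy(R_eq, R_eq))

assert min(gaps) >= 0.0  # every point lies on or above the diagonal
```

As the generated structures move closer to the equilibrium structures (smaller `scale` here, better consistency training in the paper), the gaps shrink toward zero, mirroring the reduced systematic deviation reported for consistency learning.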
Summary: To handle heterogeneity in molecular data and differing computational costs, the authors propose to exploit molecular tasks that are connected by scientific laws. Their results show that the more accurate energy data can improve the accuracy of structure prediction. The authors highlight that, in contrast to conventional machine learning tasks defined by data, scientific tasks originate in fundamental scientific laws. Scientific laws impose explicit constraints between tasks, defining the scientific consistency between model predictions on these tasks. By enforcing such consistency, model predictions for different tasks are connected and the information in the data of one task can be explicitly shared with the predictions for other tasks, hence bridging data heterogeneity. The authors demonstrate the practical value of the scientific consistency between energy prediction and equilibrium structure prediction. The authors demonstrate the advantages of incorporating the proposed consistency losses into multi-task learning. The authors design consistency losses to enforce scientific laws between inter-atomic potential energy prediction and equilibrium structure prediction. Strengths: To handle heterogeneity in molecular data and differing computational costs, the authors propose to exploit molecular tasks that are connected by scientific laws. Their results show that the more accurate energy data can improve the accuracy of structure prediction. The authors highlight that, in contrast to conventional machine learning tasks defined by data, scientific tasks originate in fundamental scientific laws. Scientific laws impose explicit constraints between tasks, defining the scientific consistency between model predictions on these tasks. By enforcing such consistency, model predictions for different tasks are connected and the information in the data of one task can be explicitly shared with the predictions for other tasks, hence bridging data heterogeneity.
The authors demonstrate the practical value of the scientific consistency between energy prediction and equilibrium structure prediction. The authors demonstrate the advantages of incorporating the proposed consistency losses into multi-task learning. The authors design consistency losses to enforce scientific laws between inter-atomic potential energy prediction and equilibrium structure prediction. Weaknesses: As the authors point out, this work is limited to the consistency between energy and molecular structure prediction, while more consistency laws could be considered in molecular science, and the significance of the improvement in this work is still limited by the abundance of the involved data. Technical Quality: 2 Clarity: 3 Questions for Authors: As the authors point out, the significance of the improvement in this work is still limited by the abundance of the involved data. Can the authors provide a measure to quantify the abundance of data for molecular equilibrium structure prediction? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: As the authors point out, this work is limited to the consistency between energy and molecular structure prediction, while more consistency laws could be considered in molecular science, and the significance of the improvement in this work is still limited by the abundance of the involved data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your effort in evaluating our paper. Your informative feedback is greatly appreciated. ## Consistency Beyond Energy and Structure Thank you for the opportunity to elaborate on this point. Please refer to the global rebuttal (item 1). ## Abundance of involved data In consistency training, the energy landscape matters: the energy model needs to rank different structures of each molecule for optimality consistency, and to provide gradients (slopes) for score consistency. Although the PM6 dataset [31] is already the largest public DFT-labeled molecular dataset (to the best of our knowledge), it provides energy on only one structure per molecule, which may be insufficient for learning the landscape. So we tried leveraging more data (Sec. 3.4, 4.3): we generated force (negative energy gradient) labels on a subset of PM6 molecules, and leveraged the force labels of multiple structures per molecule in the SPICE dataset [11]. They indeed improve the energy landscape (Table 7: better energy prediction results) and lead to more accurate structure prediction with consistency (Table 2 vs. Table 1), but data abundance is still limited: the generated data do not cover multiple structures, and there is a mismatch in the DFT settings of the SPICE labels. We did not find public datasets providing energy or force labels under the same DFT setting as the PM6 dataset on multiple structures per molecule, and we did not have sufficient resources to generate such data. We expect more significant benefits of consistency training if such datasets become available. We will include these discussions in our next version.
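The score-consistency relation mentioned above (force labels must equal the negative energy gradient, F = -dE/dR) can be sketched with a toy model. This is a minimal illustration under stated assumptions, not the paper's implementation: the quadratic energy surface, the finite-difference gradient (a learned model would use autograd), and all function names are hypothetical.

```python
import numpy as np

def energy(R, R_eq, k=1.0):
    """Toy quadratic energy model with its minimum at the equilibrium structure R_eq."""
    return 0.5 * k * np.sum((R - R_eq) ** 2)

def energy_grad(R, R_eq, k=1.0, eps=1e-5):
    """Central-difference gradient of the energy model (autograd in a real model)."""
    g = np.zeros_like(R)
    for idx in np.ndindex(*R.shape):
        Rp, Rm = R.copy(), R.copy()
        Rp[idx] += eps
        Rm[idx] -= eps
        g[idx] = (energy(Rp, R_eq, k) - energy(Rm, R_eq, k)) / (2.0 * eps)
    return g

def score_consistency_loss(R, F_label, R_eq, k=1.0):
    """Mean squared deviation between the labeled force and -grad E."""
    F_pred = -energy_grad(R, R_eq, k)
    return float(np.mean((F_pred - F_label) ** 2))

rng = np.random.default_rng(0)
R_eq = rng.normal(size=(5, 3))                  # equilibrium structure (5 atoms, 3D)
R = R_eq + rng.normal(scale=0.1, size=(5, 3))   # off-equilibrium structure
F_label = -1.0 * (R - R_eq)                     # exact force on the toy surface
loss = score_consistency_loss(R, F_label, R_eq) # ~0 when model and labels agree
```

Because the loss supervises the gradient of the energy at off-equilibrium points, force labels on multiple structures per molecule shape the landscape far more than a single per-molecule energy label, which is why the data-abundance limitation above matters.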
Rebuttal 1: Rebuttal: We thank all the reviewers for their careful reading, informative feedback, and sincere suggestions. We provide responses to two common questions in this global rebuttal. 1. Applicability beyond energy and structure (for Reviewers B2Rx, fhSX, p18M, 3Qs1) Within the presented content, beyond energy and equilibrium structure, we would like to mention that we also showed in Sec. 3.4, 4.3 that leveraging _force_ labels on _off-equilibrium structures_ in the proposed consistency learning methods can further improve the accuracy of equilibrium structure prediction. As we mentioned in Sec. 5, the proposed methods can be used for connecting energy and the thermodynamic distribution (going beyond predicting a single equilibrium structure to generating a thermodynamic ensemble of structures), since the score consistency still holds, and the optimality consistency can be adjusted to match model-derived structure statistics to macroscopic observations. This consistency training can potentially improve the accuracy of the distribution beyond data-based training, as data samples are often only available from unconverged simulations (and are hence biased). We left it as future work due to the trickier and more elaborate training settings, evaluation protocols, and benchmarks it requires, and chose to develop and demonstrate the methods in the structure-prediction scenario, which has already taken up the capacity of a conference paper. We also mentioned broader possibilities following the same idea. Molecular properties (e.g., energy) are derived from the electronic structure of the molecule following physical laws, and a coarse-grained structure distribution is a partial integration of a fine-grained structure distribution. We can design consistency training losses according to such laws to connect these tasks, tackling data heterogeneity in these cases. 2.
About the title (for Reviewers fhSX, p18M) To better describe the content, we plan to revise the title to "Tackling Data Heterogeneity in Molecular Energy and Structure by Enforcing Physical Consistency". We have also revised the abstract to focus on molecular energy and structure and their consistency. * The new title highlights the major scenario ("Energy and Structure") for the proposed consistency techniques. We'd like to mention that we also involved force and off-equilibrium structure data (Sec. 3.4, 4.3), extending the relevance beyond energy and (equilibrium) structure. * We hope that "Enforcing Physical Consistency" could convey the sense that there is a _physical_ law connecting the two quantities, which defines a _consistency_ between the two quantities and can be _enforced_ by a loss. It is challenging to convey the precise meaning in the title, and we'd be more than glad if you have any suggestions. Pdf: /pdf/56249636fb5ae8bd6964488d8148bcd3323099fa.pdf
NeurIPS_2024_submissions_huggingface
2024