title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Faster Local Solvers for Graph Diffusion Equations | Accept (poster) | Summary: The paper, "Faster Local Solvers for Graph Diffusion Equations," addresses the efficiency of computing Graph Diffusion Equations (GDEs) such as Personalized PageRank (PPR), Katz centrality, and the Heat kernel, which are essential for various graph-related problems like clustering and training neural networks. Traditional methods for solving GDEs are computationally intensive for large-scale graphs. This paper introduces a novel framework for approximating GDEs using local diffusion processes, which significantly reduces computational time and improves scalability by leveraging the localization property of diffusion vectors. The proposed local solvers are highly parallelizable and suitable for GPU implementation, offering up to a hundred-fold speed improvement and applicability to large-scale dynamic graphs. The paper also discusses the potential for these methods to enhance local message-passing mechanisms in Graph Neural Networks (GNNs).
Strengths: The introduction of a novel framework for localizing the computation of GDEs using local diffusion processes is a significant contribution. This approach reveals the suboptimality of existing local solvers and provides a more efficient solution.
The paper offers a solid theoretical foundation, proving that popular diffusion vectors have strong localization properties using the participation ratio. It demonstrates that Approximate Personalized PageRank (APPR) can be treated as a special case of the proposed framework, providing better diffusion-based bounds. The design of simple and fast local methods based on standard gradient descent and the local Chebyshev method for symmetric propagation matrices is well-founded.
Experimental results on GDE approximation for PPR, HK, and Katz show significant acceleration over standard methods. The proposed local solvers demonstrate up to a hundred-fold speed improvement. The paper also shows that the new methods can be naturally adopted to approximate dynamic diffusion vectors, and they outperform standard PPR-based GNNs in training speed.
Weaknesses: Precision Limitations: While the paper shows significant speedups for lower precision settings, the performance gain diminishes as higher precision is required. This limitation could affect the applicability of the methods in scenarios where high precision is crucial.
Sequential Nature: (Please correct if I am wrong.) Despite improvements, the reliance on sequential updates in some methods (like LocalSOR) still poses challenges for achieving maximal parallel efficiency.
Complexity of Analysis: The runtime analysis for some of the proposed methods, such as the Local Chebyshev method, is noted as complex and remains an open problem. Maybe simplifying this analysis or providing more intuitive explanations could enhance the paper’s accessibility.
Technical Quality: 3
Clarity: 3
Questions for Authors: What are the potential trade-offs between precision and computational efficiency, and how can practitioners balance these in practical applications?
The paper includes quite a few datasets for testing, the biggest being ogbn-papers100M. How does the proposed method handle very large-scale dynamic graphs in real-time applications, and what are the potential bottlenecks?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time and effort to review our paper. We appreciate your positive perspective. We resolve your concerns as follows.
---
**Q1.** What are the potential trade-offs between precision and computational efficiency, and how can practitioners balance these in practical applications? The performance gain diminishes as higher precision is required, which could affect the methods' applicability in scenarios where high precision is crucial.
**A:** Please see our general response to this concern. It is true that as $\epsilon$ decreases, the runtime of our local methods approaches that of the global ones. Over the wide precision range $\epsilon \leq 10^{-4}/n$, however, they remain competitive and use less time. Moreover, high precision is not required in most empirical settings of graph learning problems, including local clustering, training GNN models, and learning graph embeddings. Whether the minimal complexity remains proportional to $\log(1/\epsilon)$ when $\epsilon$ is small is still an open problem. For training dynamic GNNs, the required precision is about $\mathcal{O}(1/m)$; for local graph clustering, it is about $1/n$. The essential question is whether our methods are optimal, or close to optimal, among local methods. We believe this question is worth studying further in future work.
---
**Q2.** Despite improvements, relying on sequential updates in some methods (like LocalSOR) still poses challenges for achieving maximal parallel efficiency.
**A:** Yes, the sequential updates of LocalSOR are not suitable for parallelization. This is why we propose local gradient descent and the local Chebyshev method to speed these methods up further. Nevertheless, we still think LocalSOR is valuable when GPU resources are unavailable, and it is more power-efficient. In practice, when high precision is needed and GPUs are unavailable, we recommend LocalSOR, as both LocalSOR and LocalCH enjoy accelerated rates in the global setting.
---
**Q3.** Comments on the runtime analysis for some proposed methods, such as the Local Chebyshev method.
**A:** Thank you for this great point. We are still investigating this strategy. Please also see a more detailed explanation in our general response. The sublinear runtime analysis for LocalCH is complicated since it does not follow the monotonicity property during the updates. One cannot directly follow existing techniques. One possible solution is that the diffusion process can be effectively characterized by the residuals of LocalCH, which is a type of second-order difference equation with parameterized coefficients. We then use this second-order difference equation to provide a bound. However, the core difficulty is that this difference equation is difficult to analyze when the coefficients are parameterized. We are actively investigating this direction.
---
**Q4.** How does the proposed method handle very large-scale dynamic graphs in real-time applications, and what are the potential bottlenecks?
**A:** The potential bottleneck is memory usage when the precision is high. We tried to apply our local solvers to the ogbn-papers100M graph, but with our GPU memory limited to 24GB, training fails due to out-of-memory issues. One reason is that the graph itself is very large; another could be that memory usage is linear in $1/\epsilon$. We will investigate this further and explore more efficient ways to reduce memory usage in this case. Nevertheless, our paper shows that the speedup is very significant when the participation ratio is low: on mid-scale dynamic graphs, we tested these local solvers for training dynamic GNN models on the ogbn-arxiv and ogbn-products datasets.
**We are happy to discuss any further concerns you may have.** | Summary: This paper proposes a novel framework for approximately solving graph diffusion equations using a local diffusion process. In addition, the proposed method effectively localizes standard iterative solvers via simple, provably sublinear-time algorithms.
Strengths: + The problem is well motivated and the paper is well-written.
+ Extensive experiments are conducted to showcase the efficiency of the proposed framework in approximating graph diffusion equations.
+ The paper provides a good summary of existing graph diffusion equations.
+ The authors provide code and details of implementations.
Weaknesses: - In this paper, the authors explore 18 different graphs; can the authors show/provide which local GDE solver achieves better performance on which type(s) of graphs?
- Computational complexity/cost is missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: See comments in Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time and effort to review our paper. We believe there may have been some misunderstandings, and your two concerns can be effectively resolved as follows:
---
**Q1.** Can the authors show/provide which local GDE solver achieves better performance on which type(s) of graphs?
**A:** **We answered this question in Section 5.1 (Line 272 to Line 282).** Our results in the lower precision setting suggest that LocalSOR and LocalGD perform best overall. Figure 6 illustrates the performance of LocalSOR and LocalGS for PPR and Katz. As $\epsilon$ becomes smaller, the speedup of LocalSOR over LocalGS becomes more significant. In practice, when GPU resources are available, using LocalCH (corresponding to an accelerated convergence rate) is recommended instead of LocalGD in the high precision range. When GPU resources are unavailable, using LocalSOR (corresponding to an accelerated convergence rate with optimal $\omega$) instead of APPR is recommended in the high precision range.
**More importantly, our local algorithms do NOT depend on graph types.** A fundamental assumption of our paper is that the computed diffusion vectors are highly localizable, measured by the participation ratio. As demonstrated in Figure 1, almost all diffusion vectors have low participation ratios collected from 18 real-world graphs. It is worth studying whether high-participation ratio diffusion vectors exist in real-world graphs. Interestingly, when the participation ratio is high, local solvers still offer some level of speedup compared to global ones. To verify this, we conducted a simple experiment where the graphs are grid graphs, and the diffusion vectors have high participation ratios. The results indicate that local solvers are more efficient than global ones, even when the participation ratios are high. **Please check our attached PDF file to see the detailed experimental results on grid graphs.**
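As an aside for readers of this thread, the participation ratio discussed above can be computed directly. The sketch below uses one common normalization, $(\sum_i x_i^2)^2 / (n \sum_i x_i^4)$, which may differ from the paper's exact definition; the function name and example vectors are illustrative only.

```python
# Illustrative localization measure for a diffusion vector x.
# One common convention is PR(x) = (sum_i x_i^2)^2 / (n * sum_i x_i^4):
# close to 0 for a vector concentrated on a few entries, close to 1 for
# a uniformly spread vector.  The paper's exact normalization may differ.
def participation_ratio(x):
    n = len(x)
    s2 = sum(v * v for v in x)
    s4 = sum(v ** 4 for v in x)
    return (s2 * s2) / (n * s4)

localized = [1.0] + [0.0] * 99        # all mass on one node
uniform = [0.1] * 100                 # mass spread over all nodes
print(participation_ratio(localized))  # 0.01 (highly localized)
print(participation_ratio(uniform))    # approximately 1.0 (delocalized)
```

Under this convention, grid-graph diffusion vectors would score near the high end, matching the "high participation ratio" regime tested in the attached experiments.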
---
**Q2.** Computational complexity/cost is missing.
**A:** **There seems to be some misunderstanding. The analysis of the computational complexity of local solvers is actually one of our core contributions.** The main contribution of our paper is to show the local runtime complexity of these two methods for different GDEs. Our framework is quite general and applicable to a broad range of GDEs. Specifically, these runtime complexities are stated in Theorem 3.3 (LocalSOR for PPR), Corollary 3.4 (LocalSOR for Katz), and Corollary 3.6 (LocalGD for PPR and Katz). Therefore, we are unsure whether you mean the runtime complexity of LocalSOR and LocalGD for Katz and PPR, or something else.
**We are ready and happy to discuss any further concerns you may have.**
---
Rebuttal Comment 1.1:
Title: Thank you.
Comment: I appreciate the author for the detailed response. After carefully reading the rebuttal, I am retaining my score.
---
Reply to Comment 1.1.1:
Title: Request for Clarification on Remaining Concerns (We are confused)
Comment: We sincerely appreciate your response and your careful reading of our rebuttal. However, we are still uncertain whether all your concerns have been sufficiently addressed. Based on your score, it seems there are reasons to reject the paper, such as limited evaluation, that outweigh the reasons to accept it, such as a good evaluation. But, currently, you only have one question and one misunderstanding point. So, we are confused.
To facilitate a valid discussion, please specify any other concerns you might have so we can address them appropriately. Thank you. | Summary: This paper proposes a new local iterative framework for solving graph diffusion equations (GDEs). Specifically, the framework approximates GDEs through a local diffusion process, leveraging the strong localization properties of diffusion vectors such as personalized PageRank, Katz centrality, and Heat Kernel. The proposed local solvers can achieve sublinear runtime complexity under certain monotonicity assumptions. Empirical results demonstrate that these solvers significantly accelerate their standard counterparts on several large-scale benchmark datasets.
Strengths: 1. The paper is well-structured and clear.
2. The theoretical analysis is rigorous and sound, providing runtime complexity bounds for some of the proposed local solvers, which are better than their standard counterparts.
3. The effectiveness of the proposed methods is well-supported by experimental results on large-scale benchmark datasets.
Weaknesses: 1. It is not clear how graph structures, such as sparsity and spectral properties, impact the runtime complexity of the local solvers. A discussion on the potential influence of graph structures on runtime complexity would be helpful.
2. The runtime complexity analysis assumes that the updates of local solvers satisfy monotonicity properties. However, LocalCH does not satisfy these properties during updates. Establishing runtime bounds for LocalCH may require different techniques.
3. In Figure 7, when $\epsilon \geq 2^{-31}$, the running time of LocalGD is the same as GD, indicating that LocalGD may not speed up the standard counterparts when $\epsilon$ is sufficiently small.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to Weaknesses.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy that you like our work. We appreciate your time and effort in reviewing our submission. We addressed your concerns as follows:
---
**Q1.** It is unclear how graph structures, such as sparsity and spectral properties, impact the runtime complexity of the local solvers.
**A:** In terms of graph structure, our local algorithms do not depend on graph types. A key assumption of our paper is that the computed diffusion vectors are highly localizable, measured by the participation ratio. It is worth studying whether high-participation-ratio diffusion vectors exist in real-world graphs. To verify this further, we conducted a simple experiment on grid graphs, where the diffusion vectors have high participation ratios. The results indicate that local solvers are more efficient than global ones even when the participation ratios are high. **Please also check our general response and our attached PDF file to see the detailed experimental results.**
In terms of spectral properties, the performance of both local and global solvers is affected by the spectrum of the underlying graph. For example, let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of the normalized Laplacian of the matrix defined by the graph. The condition number is then determined by $\lambda_1$ and $\lambda_n$, which dominates the convergence rate of all methods. Furthermore, when the eigenvalues are clustered together, all methods converge more easily for the same reason. Thank you for this great point; we will add a discussion of this to our manuscript.
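To give a rough sense of this spectral dependence: for PPR, the thread elsewhere notes the effective condition number is $\kappa = 1/\alpha$, and the PPR fixed-point iteration contracts with factor $(1-\alpha)$ per step. A back-of-envelope sketch (illustrative only; `iterations_to_tolerance` is not from the paper):

```python
import math

# Illustrative: Richardson iteration for PPR contracts the error by a
# factor of (1 - alpha) per step from an initial error of at most 1, so
# reaching tolerance eps needs roughly log(eps) / log(1 - alpha) steps.
# Smaller alpha (larger condition number 1/alpha) means more iterations.
def iterations_to_tolerance(alpha, eps):
    return math.ceil(math.log(eps) / math.log(1.0 - alpha))

print(iterations_to_tolerance(0.5, 1e-6))   # -> 20
print(iterations_to_tolerance(0.05, 1e-6))  # -> 270
```

This mirrors the rebuttal's point: a better-conditioned problem (eigenvalues away from the extremes, or larger $\alpha$) converges in far fewer iterations for every solver, local or global.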
---
**Q2.** Comments on the difficulty of accelerated bound analysis for LocalCH.
**A:** The sublinear runtime analysis for LocalCH is complicated since it does not follow the monotonicity property during the updates. One cannot directly follow existing techniques. One possible solution is that the diffusion process can be effectively characterized by the residuals of LocalCH, which is a type of second-order difference equation with parameterized coefficients. We then use this second-order difference equation to provide a bound. However, the core difficulty is that this difference equation is difficult to analyze when the coefficients are parameterized. We are actively investigating this direction.
---
**Q3.** When $\epsilon$ is sufficiently small, LocalGD may not speed up the standard counterparts.
**A:** We admit that when $\epsilon \rightarrow 0$, it is hopeless to speed up the standard counterparts, as the optimal first-order methods need runtime $\mathcal{O}(m/\sqrt{\alpha})$ for computing PPR vectors. In the high-precision regime of Figure 7, when $\epsilon \geq 2^{-31}$, the running time of LocalGD is slightly less than that of GD. More importantly, computing high-precision diffusion vectors is not our main goal; we aim to develop faster local methods when high precision is not a strong requirement. Many applications of graph learning problems, such as local clustering and training GNN models, only need lower precision.
To summarize, for the first time, we propose a novel framework for designing efficient local solvers that can be applied to many graph diffusion equations (GDEs). These GDE solvers can be applied to local clustering and training graph neural networks, improving the efficiency of these algorithms. We are happy to have further discussions if needed!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I continue to recommend acceptance and retain my score. | Summary: This paper proposes a suite of fast methods to approximately compute graph diffusion vectors such as Personalized PageRank, Katz centrality and the heat kernel. A notable feature of the proposed methods is that they are easily parallelizable and hence can achieve further acceleration on GPU. The authors also provide a running time bound for each method that they introduce. Empirical results show that the new local methods can achieve up to a hundred-fold speedup when compared to their global counterpart.
Strengths: - The paper is well-written and easy to follow.
- The local diffusion framework is applicable to computing several important graph diffusion vectors.
- The experiments are reasonably comprehensive.
Weaknesses: - When compared with APPR, the speedup is not really captured by Theorem 3.3 and Corollary 3.6. The authors only showed that $\overline{\mbox{vol}}(\mathcal{S}_T)/\bar\gamma_T \le 1/\epsilon$, but strict inequality is required to achieve a nontrivial speedup in terms of worst-case running time. It is not clear how tight this bound is in general; if the bound is tight, then in the worst case there is no speedup. The authors should comment on whether there are classes of graphs over which there is a notable gap between the two quantities $\overline{\mbox{vol}}(\mathcal{S}_T)/\bar\gamma_T$ and $1/\epsilon$, so that the result provides a meaningful improvement.
- Because the theorems concerning the worst-case running time do not seem to capture a clear improvement over simple baseline local methods, it is unclear where exactly the speedup reported in Table 2 comes from.
- In LocalGD and LocalCH, the authors did not provide an update rule (or even a definition) for $\mathcal{S}_t$. Since $\mathcal{S}_t$ is part of the local diffusion process, the authors should specify how $\mathcal{S}_t$ is updated or defined at each step. I guess that $\mathcal{S}_t$ depends on the termination condition, but I could not find where the authors mention about it in the main paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Line 188: The open problem is recently solved by Martínez-Rubio et al, Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond, 2023.
- When eps goes to 0, do the approximate solutions converge to the exact global solutions?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time and effort to review our paper carefully. We appreciate your positive perspective on our paper. Your concerns and our responses are listed as follows.
---
**Q1.** The concern of two quantities $ \overline{\operatorname{vol}}\left(\mathcal{S}_T\right) / \overline{\gamma}_T $ and $ 1 / \epsilon$: It is unclear how tight this bound is in general.
**A:** Thank you for this great point. Our bound is slightly loose. The actual inequality is $ \overline{\operatorname{vol}}\left(\mathcal{S}_T\right) / \overline{\gamma} _T < 1 / \epsilon$ as long as $T \geq 1$. The inequality used in Line 664 $(\leq)$ is actually strict $(<)$, i.e., $\sum _{i=1}^{ |\mathcal S_t|} r _{u_i} / ||\boldsymbol{r}^{(0)} ||_1 < \sum _{i=1}^{ |\mathcal S_t|} r _{u_i} / ||\boldsymbol{r}^{(t)} ||_1$ as long as $t \geq 1$. This strict inequality is achieved whenever the algorithm runs more than one iteration, because $|| \boldsymbol r^{(t)} ||_1$ is strictly monotone, $|| \boldsymbol r^{(0)} ||_1 > || \boldsymbol r^{(1)} ||_1 > \cdots > || \boldsymbol r^{(t)} ||_1 > \cdots$, since some magnitude is moved from $\boldsymbol r^{(t)}$ to $\boldsymbol x^{(t)}$ in each iteration. Therefore, in the worst case, the improvement of our bound is significant if the factor $1/\epsilon$ is the main concern.
More importantly, it is better to compare two local bounds presented in Theorem 3.3 instead of only considering two quantities. We conducted a simple experiment to compare these two quantities (please see our detailed experimental results in the attached file) and two local bounds. Our results indicate the effectiveness of our local bounds. The time-evolving bound is more suitable and mirrors the actual performance.
---
**Q2.** It is unclear where exactly the speedup reported in Table 2 comes from.
**A:** Table 2 presents the speedup ratio of computing PPR vectors comparing the local and global solvers where we set $\epsilon = 1/n$ and report the value $\mathcal{T} _{\mathcal A} / \mathcal{T} _{ \text{Local} \mathcal A} $ where $\mathcal{T} _{\mathcal A}$ is the total number of operations needed for algorithm $\mathcal{A}$. The results of different local solvers are in Figure 4. These results indicate that local solvers speed up significantly compared to their global counterparts.
---
**Q3.** The definition of $\mathcal{S}_t$ in LocalGD and LocalCH is missing.
**A:** Thank you for pointing this out. The active set $\mathcal{S} _t$ indeed depends on the termination condition. In Section 3.1, we define $\mathcal{S}_t = \\{ u _1, u _2, \ldots, u _{\left| \mathcal{S} _t \right|} \\}$ to be the set of active nodes processed in iteration $t$. Specifically, at any time $t$, for computing PPR, $\mathcal S _t$ maintains the set $\\{ u \in \mathcal{V} : |r_u| \geq \alpha \epsilon d_u \\}$. The definition for the Katz score is similar. We will clarify $\mathcal{S} _t$ for all local solvers in our manuscript.
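To make the active-set bookkeeping concrete, here is an illustrative APPR-style local push for PPR on an undirected graph given as adjacency lists. The activity threshold $|r_u| \geq \alpha \epsilon d_u$ follows the description in this thread, but the push rule is the classic Andersen–Chung–Lang variant, not the paper's LocalSOR/LocalGD update, and all names are ours.

```python
from collections import deque

# Illustrative local push for PPR.  The queue lazily maintains the
# active set S_t = {u : |r_u| >= alpha * eps * deg(u)} from the rebuttal:
# a node enters when a push makes it active, and leaves when processed.
def local_push_ppr(adj, s, alpha=0.15, eps=1e-6):
    x, r = {}, {s: 1.0}          # estimate and residual, both sparse

    def is_active(u):
        return abs(r.get(u, 0.0)) >= alpha * eps * len(adj[u])

    queue, in_queue = deque(), set()
    if is_active(s):
        queue.append(s)
        in_queue.add(s)
    while queue:
        u = queue.popleft()
        in_queue.discard(u)
        ru = r.pop(u, 0.0)
        x[u] = x.get(u, 0.0) + alpha * ru          # keep alpha fraction
        spread = (1.0 - alpha) * ru / len(adj[u])  # push rest to neighbors
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + spread
            if is_active(v) and v not in in_queue:
                queue.append(v)
                in_queue.add(v)
    return x, r

# Tiny example: a 4-cycle; the mass stays concentrated near seed node 0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
x, r = local_push_ppr(adj, s=0)
```

Each pop moves an $\alpha$ fraction of a node's residual into the estimate, so the total residual mass strictly decreases and the loop terminates with every remaining residual below the threshold, while $\sum_u x_u + \sum_u r_u = 1$ is preserved.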
---
**Q4.** For the PPR computation, the open problem was recently solved by Martínez-Rubio et al. (2023, COLT)
**A:** Thank you for pointing out this latest work. We were aware of Martínez-Rubio et al.'s work when submitting our manuscript. First of all, Algorithm 4 in their work is proposed for solving PPR, not general graph diffusion equations (GDEs); the problem in our paper is solving GDEs, which is more general than the open problem.
When we empirically compared our method, LocalSOR, with ASPR, we found that ASPR is much slower than our proposed LocalSOR. **The main reason is that ASPR is based on a nested subspace pursuit strategy, and the corresponding iteration complexity is bounded by $\mathcal O (|\mathcal S^\star| / \sqrt{\alpha} )$ where $\mathcal S^\star$ is the support of the optimal solution. This bound deteriorates to $\tilde {\mathcal {O}} (n / \sqrt{ \alpha})$ when the solution is dense, with $n$ representing the number of nodes in $\mathcal G$, which could be less favorable than that of standard solvers under similar conditions. Please see the detailed experimental results in our attached file.** Moreover, the nested computational structure introduces a constant-factor overhead, which could be significant in practice.
---
**Q5.** When $\epsilon \rightarrow 0$, do $\boldsymbol{x}^{(t)}$ converge to $\boldsymbol{x}^{*}$?
**A:** Yes. For PPR using LocalSOR in Theorem 3.3, we proved that the estimate $\boldsymbol{x}^{(T)}$ satisfies $||\boldsymbol{D} ^{-1}(\boldsymbol{x} ^{(T)}-\boldsymbol{f} _{\text{PPR}})|| _{\infty} \leq \epsilon$. This means $\boldsymbol{x} ^{(T)} \rightarrow \boldsymbol{f} _{\text{PPR}}$ as $\epsilon \rightarrow 0$. For Katz using LocalSOR in Corollary 3.4, the estimate $\hat{\boldsymbol{f}} _{\text {Katz }} =\boldsymbol{x}^{(T)}-\boldsymbol{e} _s$ satisfies $||\hat{\boldsymbol{f}} _{\text {Katz }}-\boldsymbol{f} _{\text {Katz }} ||_2 \leq ||(\boldsymbol{I}-\alpha \boldsymbol{A})^{-1} \boldsymbol{D} ||_1 \cdot \epsilon$. This means $\hat{\boldsymbol{f}} _{\text {Katz }} \rightarrow \boldsymbol{f} _{\text{Katz}}$ as $\epsilon \rightarrow 0$. One can obtain similar results for LocalGD.
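As an illustrative sanity check of this limit (not the paper's LocalSOR): solving the PPR fixed point $\boldsymbol{x} = \alpha \boldsymbol{e}_s + (1-\alpha) \boldsymbol{W} \boldsymbol{x}$ by plain global Richardson iteration on a tiny 4-cycle, the error against a high-accuracy reference shrinks with the stopping tolerance $\epsilon$, so the approximate solution converges to the exact one.

```python
# Illustrative global Richardson iteration for PPR on a small graph
# given as adjacency lists; W is the random-walk matrix A * D^{-1}.
# Stops when the max coordinate change drops below eps.
def ppr_richardson(adj, s, alpha, eps):
    n = len(adj)
    x = [0.0] * n
    while True:
        wx = [sum(x[v] / len(adj[v]) for v in adj[u]) for u in range(n)]
        nxt = [(alpha if u == s else 0.0) + (1.0 - alpha) * wx[u]
               for u in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, x)) < eps:
            return nxt
        x = nxt

adj = [[1, 3], [0, 2], [1, 3], [0, 2]]       # a 4-cycle
ref = ppr_richardson(adj, 0, 0.15, 1e-13)    # high-accuracy reference
for eps in (1e-2, 1e-4, 1e-6):
    approx = ppr_richardson(adj, 0, 0.15, eps)
    err = max(abs(a - b) for a, b in zip(approx, ref))
    # err shrinks roughly in proportion to eps
```

Because the iteration is a $(1-\alpha)$-contraction, the error at termination is within a constant factor of $\epsilon$, which is exactly the $\epsilon \rightarrow 0$ convergence the rebuttal describes.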
---
We hope that our clarifications can address your concerns. **We are happy to have further discussions if needed!**
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their detailed responses. My questions have been properly addressed. I will increase my score. | Rebuttal 1:
Rebuttal: **Our General Responses**
We thank all reviewers for their time and effort in carefully reading our paper. To address some general concerns, we have included experimental results in the attached PDF file. Furthermore, some general concerns are worth responding to as follows:
---
**Q1. Potential trade-offs between precision and computational efficiency: When high precision is required, i.e., $\epsilon \rightarrow 0$, the efficiency of local solvers diminishes (Reviewer kfsj, JA1M, X6sG).**
**A:** To clarify, the effective speedup of local solvers over the standard ones occurs when $\epsilon \leq 10^{-4}/n$ empirically. This already covers many downstream applications, including local clustering (where $\epsilon = 10^{-6}$ and $n \approx 10^{6}$; see Fountoulakis et al. [1]) and training graph neural networks (where $\epsilon = 10^{-4}$ but $n \geq 10^{5}$; see Bojchevski et al. on the PPRGo model [2]), as far as we know. It is a reasonable observation that the efficiency of local solvers diminishes when $\epsilon \rightarrow 0$, as the number of iterations needed for both local and global solvers is proportional to $\log(1/\epsilon)$; in this regime, the local solvers behave more like their global counterparts. Specifically, the optimal first-order global solvers require $\mathcal{O}(m \sqrt{\kappa} \log(1/\epsilon))$ operations, where $m$ is the number of edges in the graph and $\kappa$ is the condition number (e.g., $\kappa = 1/\alpha$ when the graph diffusion equation is PPR). Similarly, for the local solvers, as proved in our Theorem 3.3, the number of *local iterations* is also proportional to $\log(1/\epsilon)$. It remains an interesting open question whether the runtime of *optimal first-order local methods* is proportional to $\log(1/\epsilon)$. We will study this problem in future work.
---
**Q2. Comments on the difficulty of accelerated bound analysis for LocalCH (Reviewer JA1M, X6sG)**
**A:** We are actively investigating this direction and have some preliminary results. The sublinear runtime analysis for LocalCH is complicated since it does not follow the monotonicity property during the updates. One cannot directly follow existing techniques. One possible solution (we are investigating it) is that the diffusion process can be effectively characterized by the residuals of LocalCH, which is a type of second-order difference equation with parameterized coefficients. We then use this second-order difference equation to provide a bound. However, the core difficulty is that this difference equation is difficult to analyze when the coefficients are parameterized. Due to the space limit, we believe the convergence analysis of more advanced local solvers, such as LocalCH and LocalSOR with optimal parameters, can be treated as independent work.
---
**Q3. The effectiveness of local bounds compared with existing local bounds. (Reviewer kfsj)**
**A:** Our bound is effective. By a refined analysis, the actual inequality is $ \overline{ \operatorname {vol} } (\mathcal{S} _T ) / \overline \gamma _T < 1 / \epsilon$ as long as $T \geq 1$. The inequality used in Line 664 $(\leq)$ is actually strict $(<)$, i.e., $\sum _{i=1}^{ |\mathcal S_t|} r _{u_i} / ||\boldsymbol{r}^{(0)} ||_1 < \sum _{i=1}^{ |\mathcal S_t|} r _{u_i} / ||\boldsymbol{r}^{(t)} ||_1$ as long as $t \geq 1$. This strict inequality is achieved whenever the algorithm runs for at least two iterations, because $|| \boldsymbol r^{(t)} ||_1$ is strictly monotone, $|| \boldsymbol r^{(0)} ||_1 > || \boldsymbol r^{(1)} ||_1 > \cdots > || \boldsymbol r^{(t)} ||_1 > \cdots$, since some magnitude is moved from $\boldsymbol r^{(t)}$ to $\boldsymbol x^{(t)}$ in each iteration. Therefore, in the worst case, the improvement of our bound is significant if the factor $1/\epsilon$ is the main concern. We conducted a simple experiment to compare these two quantities. **Please check our attached PDF file to see the experimental explanations.**
---
**Q4. Comments on the performance of local solvers over different graph structures. (Reviewer JA1M, X7w6)**
**A:** Our local algorithms do not depend on graph types. A fundamental assumption of our paper is that the computed diffusion vectors are highly localizable, measured by the participation ratio. As demonstrated in Figure 1, almost all diffusion vectors have low participation ratios collected from 18 real-world graphs. It is worth studying whether high-participation ratio diffusion vectors exist in real-world graphs. Interestingly, when the participation ratio is high, local solvers still offer some level of speedup compared to global ones. To verify this, we conducted a simple experiment where the graphs are grid graphs, and the diffusion vectors have high participation ratios. The results indicate that local solvers are more efficient than global ones, even when the participation ratios are high. **Please check our attached PDF file to see the detailed experimental results.**
To summarize, for the first time, we propose a novel framework for designing efficient local solvers that can be applied to many graph diffusion equations (GDEs). These GDE solvers can be applied to local clustering and training graph neural networks, improving the efficiency of these algorithms.
---
**References**
- [1] Fountoulakis, K., Roosta-Khorasani, F., Shun, J., Cheng, X., & Mahoney, M. W. (2019). Variational perspective on local graph clustering. Mathematical Programming, 174, 553-573.
- [2] Bojchevski, Aleksandar, Johannes Gasteiger, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. "Scaling graph neural networks with approximate PageRank." In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2464-2473. 2020.
Pdf: /pdf/b92f4bcceefb5bd5ecbb9e2268c75c087ceb5f6e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Swiss Army Knife for Heterogeneous Federated Learning: Flexible Coupling via Trace Norm | Accept (poster) | Summary: The authors present a new Swiss-Army-Knife-like approach that flexibly handles the heterogeneity challenges of a federated multi-task learning framework. The framework uniquely integrates the tensor trace norm to handle cross-client data, model, and task heterogeneity. The title of the approach is interesting, but some problems remain.
Strengths: 1.The paper is easy to understand and comprehend, allowing even readers less familiar with the field to follow the authors' ideas.
2.The authors have made significant progress in addressing data heterogeneity, model heterogeneity, and task heterogeneity, and this ability to address heterogeneity in an integrated manner is a major highlight of this research.
3.The authors provide convergence guarantees and generalization bounds for the proposed FMTL framework in a non-convex setting, which further demonstrates the robustness and efficiency of FedSAK. These theoretical supports not only enhance the reliability of the method, but also demonstrate its potential in practical applications.
Weaknesses: 1.The authors consider the common assumptions that 1) the gradient is Lipschitz continuous and 2) the variance is bounded. These assumptions are very stringent, and the authors do not provide a sufficient basis for them, adequate explanations, or empirical verification.
2.For a better understanding of the proposed technique, the authors may want to clarify some technical details. For example which layers of the model actually need to be shared in different heterogeneous setups.
3.The experimental results seem to show that the proposed algorithm has marginal gain compared to FedProto. At the same time, I would like to know why FedMTL has a faster convergence rate in Fig. 2.
4.The authors' model may significantly increase its computational cost as the size of the dataset and the network structure grow. Therefore, the authors should address such issues to enhance the generalization of the model.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.Can you provide more detailed information on how the communication cost in Figure 5 is calculated? The plotted scale makes the curve too close to FedAvg to distinguish this baseline.
2.Is the task heterogeneity proposed by the authors a sub-case of model heterogeneity?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see weaknesses and questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions; your positive comments help us a lot. We respond to your concerns below.
> **Weaknesses 1: Reasonableness of assumptions**
Thank you for your question, and I understand that these assumptions may be overly idealized in some cases, but Lipschitz continuous gradient, unbiased gradient, and bounded variance are standard assumptions for FL analysis [1-3]. Although these nonconvex assumptions are relatively strict, it remains an open question how to provide FL analysis without them.
The **Lipschitz continuous** gradient is a technical condition on the shape of the loss function and is the standard for nonconvex analysis. In contrast to previous assumptions on convex loss functions that are common in analyzing FL and stochastic gradient descent, the Lipschitz continuity assumption actually relaxes these conditions, allowing our analysis to cover a wider range of application scenarios.
The **bounded gradient** assumption allows our convergence analysis to better accommodate the heterogeneity of the data distribution, which is one of the core challenges of federated learning. In particular, the bounded gradient assumption is especially relevant in the case of certain activation functions (e.g., sigmoid functions) and bounded input features. Thus, this assumption not only provides an analytical basis in theory, but can also help cope with uneven data distribution in FL in practice.
Our theory builds on the convergence results of the algorithm in [3]; in addition, we verify convergence empirically in the experimental section, showing that our algorithm converges.
[1] On the Convergence of FedAvg on Non-IID Data. ICLR 2019.
[2] Fedproto: Federated prototype learning across heterogeneous clients. AAAI 2022.
[3] Heterogeneous federated learning with generalized global header. MM 2023.
***
> **Weaknesses 2: Technical details of model sharing**
Thank you for your valuable comments, and we apologize for the difficulty in understanding the shared layer structure due to the lack of detailed description in the paper. To address this issue, we provide a clearer description here and will make corresponding changes in the paper to enhance readability and reproducibility.
In our approach, the local model is decoupled into a feature extraction part $\theta$ and a prediction head part $\varphi$.
In the **data heterogeneity** scenario, we choose to share the whole model structure since it is the same for all the clients, as emphasized in line 286 of the paper.
In contrast, in the **model heterogeneity** scenario, we share only the prediction head part, as illustrated in line 298, i.e., the final fully connected layer shown in Table 4 of the paper.
For the **task heterogeneity** scenario, we instead share the feature extractor part, i.e., all structures in the model except the final classification layer, as described in line 309.
***
> **Weaknesses 3: Description of experimental results**
FedSAK shows advantages not only in data-heterogeneous scenarios but also in **model- and task-heterogeneous** scenarios. From the results, no baseline is comparable to FedSAK across all heterogeneous scenarios and datasets. We computed the average relative accuracy improvement over each baseline in the data-heterogeneity setting:
| | FedAvg | pFedMe | Ditto | FedU | FedProto | FedGH |
| :-------------------------- | ------ | ------ | ----- | ---- | -------- | ----- |
| Relative improvement in acc | 13.2 | 3.06 | 2.47 | 3.03 | 1.22 | 3.24 |
Our improvements are valuable.
In addition, the effectiveness and flexibility of FedSAK is an important step forward compared to previous work that focused primarily on data heterogeneity. FedMTL's regularization term differs from ours: it primarily employs the L2 norm, which enables FedMTL to converge faster in the early stages of training.
In contrast, our approach better captures inter-task differences by stacking models from different tasks and computing the tensor trace norm, encouraging these task models to be represented in a shared low-dimensional space. As a result, our approach may be more sensitive to the regularization term; e.g., **in Fig. 6**, we show the significant effect of the regularization parameter $\lambda$ on model convergence.
***
> **Weaknesses 4 & Questions 1: Communications overhead**
Q1: In the case of data heterogeneity (i.e., model homogeneity), our communication cost is the same as FedAvg, because we upload all models to the server. In contrast, in the case of model heterogeneity and task heterogeneity, we share only part of each model, which greatly reduces our communication cost.
W4: FedSAK's flexibility in choosing the shared model structure makes it possible to flexibly cope with different model sizes and structures in large-scale federated learning environments. We provide a description of the computational complexity in **Appendix A.5.** In addition, due to the word limit, the experimental results for this part can be found in **Weaknesses 1 of our response to reviewer yu6g.**
***
> **Questions 2: Task heterogeneity**
In our study, although each task is associated with its unique classifier, these classifiers typically share the same model architecture. The task heterogeneity primarily manifests in the differences in classification objectives and data characteristics. Therefore, we do not classify this scenario under model heterogeneity. **For example**, in the context of drug classification, two clients may have tasks that involve binary classification for toxicity and activity, respectively. While both tasks use the same binary classifier architecture, their labels and objectives are entirely different, exemplifying task heterogeneity rather than model heterogeneity.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks. My issues have been addressed, so I raise the score to 7.
---
Reply to Comment 1.1.1:
Comment: Dear reviewers,
We thank you for scrutinizing the discussion and raising the score. We are glad we were able to address your concerns. If you have any further questions, please feel free to let us know. We are more than happy to answer them for you. | Summary: This paper focuses on the issues of heterogeneous federated learning, including data heterogeneity, model heterogeneity, and task heterogeneity. To address these issues, the authors introduce a federated multi-task learning framework based on tensor trace norm.
Strengths: 1. Innovativeness: The authors focus on task heterogeneity scenarios, which sets this paper apart from other common federated learning studies. The ideas and topics presented by the authors are both intriguing and relevant.
2. Flexibility: FedSAK demonstrates strong adaptability to various forms of heterogeneity in federated environments, showcasing its generalization capabilities. Additionally, the framework enhances its adaptability through flexible upload and sharing structures.
3. Theoretical Analysis: This paper provides theoretical analysis as a guarantee.
4. Experiments: Evaluations on six real-world datasets show that FedSAK outperforms existing state-of-the-art models in terms of accuracy and efficiency, proving its effectiveness in handling heterogeneous federated learning scenarios.
5. Code: The authors provide the code, and the results can be reproduced.
Weaknesses: 1. Computational complexity: As far as we know, the calculation of tensor trace norm is complex, so it is worth considering how to apply FedSAK to larger models.
2. Contribution: Although the authors' innovation is commendable, they did not clearly emphasize in the paper why the trace norm makes an outstanding contribution. Could relevant ablation experiments be designed to further confirm this?
3. Convergence analysis: The author followed the common convergence analysis of federated algorithms. Can the author explain how the local convergence proposed by the author in Section 5.1 ensures consistency with achieving global optimization objectives?
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Does the author's stacking operation on the server result in a significant increase in algorithmic memory?
2. By which dimension are the authors stacking convolutions in model heterogeneous scenarios?
3. Does stacking in different dimensions affect the model? The authors should discuss more details of the model.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The method's main limitation may be its computational cost, but this is acceptable. It would have been more reasonable if the paper provided a short and clear section on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your supportive suggestions, which are of great help to us. We answer your questions one by one below.
> **Weaknesses 1: Computational complexity**
The tensor trace norm is defined as the sum of matrix singular values, and its computational complexity on the server is
$O(\min_k d_k^2 \cdot \prod_{i \neq k} d_i)$, where $d_i$ denotes the $i$-th dimension of the tensor (with $p$ modes in total).
This may indeed increase as the size of the network or dataset increases. However, compared to traditional methods that require uploading the entire model parameters (e.g., FedAvg, FedProx), our FedSAK method offers flexibility in dealing with larger networks or datasets. We can **selectively share part of the model's structure**, which makes it possible to flexibly cope with different model sizes and structures in large-scale federated learning environments. We provide a description of the computational complexity in **Appendix A.5**, which we tested using **ResNet18** (FedSAK only uploads the last FC):
| | FedAvg | pFedMe | Ditto | FedMTL | FedProto | FedGH | FedSAK |
| :--------- | ------ | ------ | ----- | ------ | -------- | ----- | ------ |
| ACC (%) | 68.05 | 75.84 | 76.86 | 73.68 | 78.34 | 75.95 | 77.69 |
| TIME (s)   | 6729   | 57406  | 19366 | 11757  | 12076    | 7508  | 7118   |
| MEMORY (G) | 1.75 | 2.63 | 3.42 | 2.58 | 1.75 | 1.71 | 0.918 |
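For intuition about the complexity figure above, here is a minimal sketch of one standard tensor trace norm: the sum of nuclear norms of the mode-$k$ unfoldings. This is illustrative only; the exact variant and implementation used by FedSAK may differ.

```python
import numpy as np

def mode_unfold(T, k):
    """Unfold tensor T along mode k into a (d_k, prod of other dims) matrix."""
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def tensor_trace_norm(T):
    """Sum over modes of the nuclear norm (sum of singular values) of each
    mode-k unfolding -- a common convex surrogate for low tensor rank.
    The SVD of the mode-k unfolding costs O(d_k^2 * prod_{i != k} d_i),
    matching the per-mode cost in the complexity stated above."""
    return sum(
        np.linalg.svd(mode_unfold(T, k), compute_uv=False).sum()
        for k in range(T.ndim)
    )

# A rank-1 tensor built from unit vectors has nuclear norm 1 per mode.
u = np.array([1.0, 0.0, 0.0])
T = np.einsum('i,j,k->ijk', u, u, u)
print(tensor_trace_norm(T))  # 3.0 (one per mode)
```

The per-mode SVD cost is what makes selectively sharing only a small layer (as in the ResNet18 experiment above) attractive: it keeps every $d_k$ small.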
***
> **Weaknesses 2: Contribution**
Thank you for recognizing our innovativeness.
The core idea of federated multi-task learning is to improve learning across tasks by sharing information, where different tasks share certain features or structures. **The tensor trace norm** encourages models of different tasks to be represented in a shared low-dimensional space by constraining the low-rank nature of the weight matrices, thus effectively capturing the common information among tasks.
For example, the parameters of multiple tasks can be viewed as different slices of a tensor, and the tensor trace norm automatically models inter-task dependencies by constraining this tensor to be low-rank, inducing the parameter matrices of different tasks to share a similar low-rank structure. Such dependencies facilitate information sharing between tasks and improve overall learning performance.
Unlike common ablation experiments, in our approach, after removing the trace norm the model degrades to a purely local model if no aggregation is performed, and to FedAvg if the models are simply weighted and aggregated. Therefore, we give this part a special note in the **Appendix, see A.4 and Fig. 8.**
***
> **Weaknesses 3: Convergence analysis**
We provide guarantees for the global optimization in the convergence analysis through the derivation of **Assumption 3 and Lemma 2**.
In the convergence analysis, **Assumption 1** shows that the local objective function is continuous and smooth, **Assumption 2** that the variance of the stochastic gradient over a batch of data is constrained by a constant, and **Assumption 3** that the difference between the parameters of the local shared layer and the updated parameters of the shared layer on the server side is bounded (here associated with global optimization).
Based on the above assumptions, we show by **Lemma 1** that the loss of the local model for any client is bounded.
**Lemma 2** further shows that after each round of communication, when the client replaces its local structure with the server's latest global shared layer, the loss of the local model remains bounded, thus helping to ensure the convergence of the model during training. Finally, substituting Lemma 1 into Lemma 2 yields **Theorem 4 (see Eq. 24)**, which shows that the loss of any client's local model decreases in each communication round relative to the previous round; this constitutes our convergence analysis.
***
> **Questions 1: Memory problem**
Thank you for your valuable input and inspiration. The memory overhead of our method is comparable to the baseline FedAvg because both need to upload the whole model to the server. Our method only adds the operation of stacking the models together, which imposes no significant additional memory cost.
***
> **Questions 2 & Questions 3: The stacked form problem**
In our scenario, we adopt a client dimension stacking model-based approach for processing.
Specifically, suppose we have $M$ clients, and the model weight matrix of each client is denoted $\theta \in \mathbb{R}^{d_{1} \times d_2}$. After training is completed, each client uploads its local model weights to the server. The server then stacks these model weights from different clients along the client dimension to form a new tensor $\Theta \in \mathbb{R}^{d_{1} \times d_2 \times M}$. The third dimension of this tensor corresponds to the different clients, so that the model weights of all clients are integrated into a unified tensor structure. This stacking effectively centralizes the model information of each client, facilitating further global model updates or analysis.
It is worth noting that we stack parameters of the same size. In this way, the correlation among different client models can be better explored. We discuss the models for model heterogeneity in **Table 4**, and we will revise this section in detail and mention it in the main text.
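The client-dimension stacking described above can be sketched in a few lines; the shapes below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical setup: M clients, each uploading a shared layer of the
# same size d1 x d2 (e.g. a final fully-connected layer).
M, d1, d2 = 5, 64, 10
rng = np.random.default_rng(0)
client_weights = [rng.standard_normal((d1, d2)) for _ in range(M)]

# Server side: stack along a new third (client) dimension, giving the
# tensor Theta in R^{d1 x d2 x M} on which the trace norm is applied.
Theta = np.stack(client_weights, axis=2)
print(Theta.shape)  # (64, 10, 5)
```

Because stacking only concatenates existing arrays along a new axis, the server's extra memory is essentially the already-uploaded weights themselves, consistent with the memory discussion above.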
---
Rebuttal Comment 1.1:
Comment: The authors replied carefully and basically solved my concerns.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer.
Thank you for your careful review of our discussion. We are happy to address your concerns. If you have any other questions, please feel free to let us know. We are more than happy to answer them. | Summary: This paper introduces a federated learning method called FedSAK. Compared to existing methods, FedSAK is more flexible and can accommodate data heterogeneity, model heterogeneity, and task heterogeneity. To achieve knowledge transfer between client models in a heterogeneous environment, this method employs tensor trace norm regularization. The authors provide both theoretical and empirical evidence to demonstrate the effectiveness of this approach.
Strengths: This paper is highly motivated and presents a clear and easy to understand methodology. The authors explain the methodology thoroughly and the logic is clear and easy to understand.
Experiments conducted on several different datasets demonstrate the effectiveness of the proposed methodology. The results consistently demonstrate superior performance compared to existing methods, highlighting the robustness and versatility of FedSAK.
Moreover, the method is theoretically sound and experimentally validated.
Weaknesses: The authors should have provided more specific ablation experiments to assess the effectiveness of the key components of the method.
The authors should explain why such a division of the data was used in the common case of data heterogeneity and whether this is more realistic. For example, in Table 1, there are four combinations of M and S on CIFAR-10 and CIFAR-100, but only two on the first two datasets.
The reviewer wonders why the authors focus on the task heterogeneity scenario. As can be seen in the paper, under task heterogeneity each task has a different classifier, so is this also a model heterogeneity scenario? If yes, why does it need to be treated separately? If not, the authors should provide more explanation of this scenario.
Technical Quality: 3
Clarity: 4
Questions for Authors: See above
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your recognition and valuable suggestions, I will respond to your questions next.
***
>**Weaknesses 1: Ablation experiments**
We appreciate your valuable comments. We understand the importance of ablation experiments in verifying the validity of a method. Due to specific design choices, we provide the results of the ablation experiments in **Appendix A.4, see Fig. 8**.
In the scenario of data heterogeneity, the models uploaded by FedSAK are isomorphic, which implies that FedSAK will degrade to the standard Federated Averaging (FedAvg) algorithm if weighted aggregation is performed only on the server side, as shown in the results in **Table 1.** This effectively amounts to an implicit ablation experiment, demonstrating the performance degradation that will occur when key innovations in our approach are removed.
For the model-heterogeneous and task-heterogeneous scenarios, what we upload and share is a portion of the local model. If nothing is done with this shared part, our method degrades to a locally trained model only, i.e., the Local model in **Tables 2** and **3**. On the other hand, if simple weighted aggregation is used, our method degenerates into the FedAvg-c model in **Table 3**. These degradation scenarios can be considered a form of ablation experiment, demonstrating the importance of the individual components of our method.
We show the results of the ablation experiments based on the above analysis as follows, which are described in detail in Appendix A.4 of this paper:
| | Local | FedAvg | FedSAK |
| :-------------------------------- | ----- | ------ | --------- |
| Data heterogeneity (CIFAR10) | 68.72 | 66.55 | **76.47** |
| Model heterogeneity (PACS) | 59.13 | 60.98 | **68.05** |
| Task heterogeneity (Adience Face) | 72.26 | 74.73 | **76.03** |
Thus, in **Appendix A.4**, we provide the results of the ablation experiments described above. These results demonstrate the degradation in performance when key features of our method are removed or simplified. We will make this clearer in the revision so that readers can better understand the effectiveness of our method and the role of its individual components.
***
> **Weaknesses 2: Explanation of dataset division**
Thank you for your question. We use the dataset partitioning method commonly used in prior federated multi-task learning work, **see [1, 2]** for references. A detailed explanation of the datasets can be found in **Appendix B.1**. As mentioned in A.1, the HumA dataset contains the behavioral actions of 30 individuals, each fixed to correspond to 6 action categories, so we keep the number of categories per client constant.
Similarly, the MNIST dataset is a federated multi-task learning dataset partitioned following prior work. In contrast, the CIFAR-10 and CIFAR-100 datasets are more complex than MNIST, with more categories and more complex images. Using different numbers of clients and labeled categories allows the robustness of the model in dealing with complex data distributions to be better evaluated.
[1] Smith V, Chiang C K, Sanjabi M, et al. Federated multi-task learning[J]. Advances in neural information processing systems, 2017, 30.
[2] Dinh C T, Vu T T, Tran N H, et al. Fedu: A unified framework for federated multi-task learning with laplacian regularization[J]. arXiv preprint arXiv:2102.07148, 2021, 400.
***
> **Weaknesses 3: Differences between task heterogeneity and model heterogeneity**
Thank you for your question, first of all task heterogeneity scenarios are very common in real world applications, where different tasks usually have different feature distributions and classification needs. **Task heterogeneity emphasizes the fact that each task has its own unique features and goals**, which often require independent classifiers to handle. Therefore, although each task uses a different classifier, we focus primarily on the heterogeneity of the task itself rather than the heterogeneity of the model.
In our work, task heterogeneity and model heterogeneity are indeed somewhat related, but they are **not the same concept**. Task heterogeneity refers to differences in data distribution and objectives between tasks, while model heterogeneity refers to the use of different model architectures for different tasks.
In our study, although each task has its own unique classifier, these classifiers do not necessarily use completely different model architectures, and thus we did not categorize this as a model heterogeneity scenario. For example, in drug classification, two clients' tasks may be binary classification tasks determining the presence or absence of toxicity and of activity, respectively. Both use binary classifiers, but the task labels are completely different.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with your responses. Thus, I improve my score slightly.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for scrutinizing our discussion, and we are pleased to have been able to address your concerns. Thank you for improving your score for our paper. If you have any other questions, please feel free to let us know. We are more than happy to answer them. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for providing detailed and thoughtful feedback on our submission. We thank the reviewers for appreciating our innovation and value to a large extent while suggesting improvements. We have addressed reviewer comments and questions in individual responses to each reviewer and in the accompanying pdf file. If any questions were not answered or our responses were unclear, we would appreciate the opportunity to engage further with our reviewers.
Briefly, the main points of our response are as follows:
**1. Regarding the ablation experiments**: reviewers kdUe and yu6g point out that more specific ablation experiments should be provided. We thank the reviewers for their attention to this issue, and due to our specific experimental setup, our experimental results actually provide an implicit ablation. We show the experimental results in our kdUe response.
**2. Regarding the computational complexity of the model:** reviewers yu6g and zrPd are concerned about the computational complexity of our model. We recognize that the computational complexity of the tensor trace norm increases with the model size. However, since our framework can flexibly choose which part of the model to share and upload, we can largely alleviate the pressure caused by increasing model size, as reflected in our experiments; we draw attention to the brief experimental results in Appendix A.5.
**3. Explanation on task heterogeneity:** Both reviewers kdUe and zrPd raised questions about our proposed task heterogeneity scenario. We regret that we did not describe this task scenario carefully enough in the paper. We will revise it in a subsequent version.
We thank you again for your time and effort in reviewing this submission, and we are confident that these comments will enhance the clarity and motivation of our manuscript.
Pdf: /pdf/a5fd6f27ccbca1d3cd4d23ea58809e59f26c28be.pdf | NeurIPS_2024_submissions_huggingface | 2,024
Handling Learnwares from Heterogeneous Feature Spaces with Explicit Label Exploitation | Accept (poster) | Summary: The paper designs a learnware system to handle the heterogeneous feature spaces. It applies the label information to manage the heterogeneous models. Then it also proposes a strategy to match the model with the additional conditional distribution. The paper also conducts experiments to compare with other learnware systems.
Strengths: 1. The observation that matching with only the marginal distribution is not enough is interesting and novel. The paper improves the matching with the conditional distribution to tackle this problem.
2. The experimental results are good.
3. The paper is well presented and easy to follow.
Weaknesses: 1. The paper focuses on the heterogeneous feature spaces in learnwares. However, it seems that the paper uses subspace learning to handle the heterogeneous feature spaces. Subspace learning is a standard and commonly used method to handle heterogeneous features. Therefore, the contributions to handle heterogeneous features of this paper seem limited. The proposed technology, such as to use the conditional distribution, seems to be irrelative to handling the heterogeneous features because it can also be used in a homogeneous setting. The paper should clarify the relations between each technical contribution and the heterogeneous feature spaces.
2. In the experiments, the paper should compare with more SOTA methods which handle the heterogeneous features. In current version, most compared methods are not designed for heterogeneous features.
3. In the experiments, the heterogeneous feature spaces are randomly divided from the original datasets. However, in real applications, the distribution of heterogeneous features is not like this. For example, in real-world applications, each feature space has its own semantics and is not random. Therefore, it would be better to conduct experiments on at least one real-world heterogeneous dataset.
Technical Quality: 2
Clarity: 3
Questions for Authors: In Eq.(2), there is a term $L_{similar}$, but in Section 5.2, there is a term called $L_{contrastive}$. Are they the same? Is this a typo, or are they two different losses?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The paper claims the limitations in Checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Clarify the relations between each technical contribution and the heterogeneous feature spaces.**
Thank you for your comments. We will provide a more detailed discussion in the revised version. The following is a brief explanation:
**Subspace learning is a standard method for handling heterogeneous features. The critical aspects are from what to learn and how to learn it.** The learnware framework protects model providers' privacy without accessing their raw data and avoids collecting auxiliary data across feature spaces. **Without raw and auxiliary data, how can the learnware dock system effectively learn a unified subspace?**
In the learnware paradigm, subspace learning can be performed based on the RKME specification of the model. The RKME specification compresses the marginal distribution $P(X)$ through a small number of weighted sample points. However, **previous methods that lack label information rely on unsupervised learning techniques, leading to suboptimal subspace learning results**. Without label information, categories from different tasks may overlap in the subspace, impacting model recommendation and reuse effectiveness.
To address this, **we introduce supervised information into subspace learning to better align samples from each class of different tasks within the subspace**, enhancing the heterogeneous model hub's performance. Thus, we **extend the original RKME specification representation $\\{\beta\_m,\boldsymbol{z}\_m\\}\_{m=1}^M$ to include label information**, obtaining $\text{RKME}\_\text{L}$, represented as $\\{\beta\_m,\boldsymbol{z}\_m, y\_m\\}\_{m=1}^M$. We propose two methods: compressing the marginal distribution first and then generating the label, or **simultaneously compressing both the marginal distribution $P(X)$ and the conditional distribution $P(Y|X)$**. By incorporating label information, we **add supervised term for subspace learning**, obtaining a subspace with better properties and corresponding mapping functions to original feature spaces.
In summary, our main techniques are designed to enhance subspace learning.
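To make the RKME idea above concrete, the following sketch computes the squared MMD between raw data $X$ and a weighted reduced set $(Z, \beta)$ under a Gaussian kernel; minimizing this over $(Z, \beta)$ yields an RKME-style compression of the marginal $P(X)$. This is an illustrative sketch, not the paper's implementation, and the choice of kernel is an assumption.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2), evaluated pairwise."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def rkme_mmd2(X, Z, beta, gamma=1.0):
    """Squared MMD between the empirical distribution of X (n x d, uniform
    weights) and a weighted reduced set (Z, beta) (m x d, weights beta).
    An RKME-style specification picks (Z, beta) to make this small, so a
    few weighted points summarize P(X) without exposing the raw data."""
    n = X.shape[0]
    a = np.full(n, 1.0 / n)
    return (a @ gaussian_kernel(X, X, gamma) @ a
            - 2.0 * a @ gaussian_kernel(X, Z, gamma) @ beta
            + beta @ gaussian_kernel(Z, Z, gamma) @ beta)

# Sanity check: the data itself as reduced set gives MMD^2 = 0, while a
# shifted reduced set gives a strictly positive value.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 3))
print(rkme_mmd2(X, X, np.full(16, 1.0 / 16)))           # ~0.0
print(rkme_mmd2(X, X + 5.0, np.full(16, 1.0 / 16)) > 0)  # True
```

The extension to $\text{RKME}_\text{L}$ described above additionally attaches a label $y_m$ to each reduced-set point $(\beta_m, \boldsymbol{z}_m)$, so the supervised term in subspace learning can use it.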
---
**Q2: In the experiments, the paper should compare with more SOTA methods which handle the heterogeneous features. In current version, most compared methods are not designed for heterogeneous features.**
Thank you for your comments. **Indeed, most of the compared methods are designed for heterogeneous features and are SOTA methods.** We will describe the contenders more clearly in the revised version.
**This paper explores using knowledge from tasks with heterogeneous feature spaces to improve user task performance.**
First, we evaluate user self-training performance using LightGBM with the same training setup as the learnware preparation for fairness.
Next, **we compare our approach with recent SOTA methods that use the heterogeneous model hub for the user task**:
- **Align_unlabel (KDD 2024):** Minimizes MMD distance between the user task and learnware task based on RKME. We modified it to traverse the best model in the hub, improving performance.
- **Align_label (KDD 2024):** Builds on Align_unlabel by using user-labeled data for feature augmentation, further improving performance. We also traverse the hub for the best model.
- **Hetero (IJCAI 2023):** Generates a subspace for model recommendation/reuse using basic RKME without label information, creating a linear mapping function.
These solutions learn feature space mapping without label information. Our paper fully utilizes label information, resulting in significantly better performance.
**We also compare with other SOTA methods that use raw heterogeneous task data, compromising privacy**:
- **TabPFN (ICLR 2023):** Uses a Transformer to fit the posterior predictive distribution from synthetic tasks and applies it to real-world tasks. It outperforms other deep methods in the small data paradigm.
- **Transtab (NeurIPS 2022):** Uses feature descriptions and values to generate a unified subspace and trains a shared Transformer backbone, pioneering deep network training across heterogeneous tasks.
- **Xtab (ICML 2023):** Trains a shared backbone for different tasks with a specific input processor and prediction head, without using feature descriptions, offering flexibility in real-world scenarios.
**In summary, the contenders are highly relevant to heterogeneous feature spaces. We compared not only with SOTA methods that exploit the heterogeneous model hub but also with other SOTA methods that use raw data from heterogeneous tasks for new user tasks.**
---
**Q3: Conduct experiments on at least one real-world heterogeneous data.**
Thanks for your suggestion. We have added **two real-world projects** to demonstrate the efficacy of the proposed method; please see the author rebuttal for details.
---
**Q4: In Eq.(2), there is a term Lsimilar, but in Section 5.2, there is a term called Lcontrastive. Are them the same? Is this a typo or are they two different losses?**
We apologize for any confusion caused. Thanks for your careful reading.
**In short, $L\_{\text{similar}}$ is a general term for similarity loss, and $L\_{\text{contrastive}}$ is a specific implementation.**
In Section 4.2, we define the general objective for subspace learning as Equation (2), based on all model specifications $\\{\boldsymbol{s}\_i^{\text{dev}}:=\\{(\boldsymbol{\beta}\_{ij},\boldsymbol{z}\_{ij},y\_{ij})\\}\_{j=1}^{m\_i}\\}\_{i=1}^N$:
$$\min\_{\\{h\_k,g\_k\\}\_{k=1}^Q} L=\alpha\_1L\_{\text{reconstruction}} + \alpha\_2L\_{\text{similar}} + \alpha\_3L\_{\text{supervised}}.$$
The similarity loss ensures that embeddings of different slices of the sample $\boldsymbol{z}_j$ are similar. This can be implemented in various ways, such as through contrastive loss, manifold loss, etc.
In Section 5.2, we provide a detailed implementation of the loss function, specifically using contrastive loss to measure similarity, which is a quite popular loss for self-supervised learning.
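To make the relationship concrete, here is a minimal, self-contained sketch (illustrative only, not the paper's implementation; the temperature `tau`, the coefficients `a1..a3`, and all array shapes are placeholders) that assembles Equation (2) with contrastive loss as the concrete choice of $L_{\text{similar}}$:

```python
import numpy as np

def l_reconstruction(x, x_rec):
    # Decoder output should reconstruct the original features.
    return np.mean((x - x_rec) ** 2)

def l_contrastive(za, zb, tau=0.5):
    # One concrete implementation of L_similar: embeddings of two slices
    # of the same sample (matching rows of za and zb) should be close,
    # while the other rows in the batch act as negatives (InfoNCE form).
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                   # positives on the diagonal

def l_supervised(probs, y):
    # Subspace embeddings should predict the RKME_L labels y_m.
    return -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))

def total_loss(x, x_rec, za, zb, probs, y, a1=1.0, a2=1.0, a3=1.0):
    # L = a1 * L_reconstruction + a2 * L_similar + a3 * L_supervised
    return (a1 * l_reconstruction(x, x_rec)
            + a2 * l_contrastive(za, zb)
            + a3 * l_supervised(probs, y))
```

Swapping `l_contrastive` for another similarity loss (e.g., a manifold loss) leaves the general objective unchanged, which is exactly the relationship between $L_{\text{similar}}$ and $L_{\text{contrastive}}$.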
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. The authors have addressed some of my concerns. Since I'm not quite familiar with the learnwares for heterogeneous features (as shown in my Confidence), I'm not sure about the contribution to this community. Especially Reviewer PBMH also proposes some questions about the contributions.
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our manuscript. Please feel free to let us know your other concerns. We are ready to provide further explanations to address any issues you may have. The following is a brief discussion of our paper contribution:
The learnware paradigm allows users to reuse existing models instead of training from scratch. Expanding its use to heterogeneous feature spaces can broaden its applications, but current approaches either compromise privacy (MLJ 2024) or are less effective in subspace learning (MLJ 2024, IJCAI 2023).
**Our paper is the first to utilize label information to enhance the handling of heterogeneous learnwares, marking a significant advancement of handling heterogeneous feature spaces from unsupervised to supervised.** We have refined the specification used for subspace learning to incorporate label information, which for the first time encodes the model's capabilities. This new specification also facilitates better task matching between the learnware and the user task by considering the model's abilities in the matching process, which is superior to previous methods based solely on the marginal distribution.
Broader Impact: As the specification is the most fundamental part of the learnware paradigm, its improvement can enhance the entire framework. **The previous specification lacked supervision information, but our new one includes it. This upgrades the specification-based top-layer procedures from unsupervised to supervised, which is a significant advancement.** With the more powerful specification, the learnware paradigm can address more difficult problems effectively. For example, the inclusion of supervision information in the newly proposed specification will be crucial in tackling challenges such as simultaneously heterogeneous feature and label spaces. This enhanced specification also opens the possibility of applying the learnware paradigm to more complex and diverse heterogeneous scenarios.
---
Summary: In this submission, the authors focus on the learnware paradigm, which aims to help users leverage numerous existing high-performing models instead of starting from scratch. They find that label information, including model predictions and the user's minor labeled data, is crucial yet previously unexplored, and they explicitly exploit this information to address the problem of handling learnwares from heterogeneous feature spaces.
Strengths: 1. Learnware is a useful learning paradigm, and handling learnwares from heterogeneous feature spaces has wider application scenarios.
2. To my knowledge, for handling learnwares from heterogeneous feature spaces, it might be the first attempt towards explicitly exploiting label information, including model prediction and user's minor labeled data.
3. For this new setting, comparative studies are designed, and the experimental results validate the effectiveness of the proposed method.
Weaknesses: 1. To my knowledge, there have been some works aiming at handling learnwares from heterogeneous feature spaces:
[a] Peng Tan, Zhi-Hao Tan, Yuan Jiang, and Zhi-Hua Zhou. Handling learnwares developed from heterogeneous feature spaces without auxiliary data. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence, pages 4235–4243, 2023.
[b] Peng Tan, Zhi-Hao Tan, Yuan Jiang, and Zhi-Hua Zhou. Towards enabling learnware to handle heterogeneous feature spaces. Machine Learning, 113(4):1839–1860, 2024.
However, these works are only cited in reference list while not adequately discussed.
2. The main contribution might be handling learnwares from heterogeneous feature spaces via explicitly exploiting label information. The current abstract does not focus on this contribution. Moreover, RKME is the most commonly used specification in learnware, and this paper is also based on RKME, so a discussion of the newly proposed specification should be given in detail, especially its differences from the existing RKME specification. If I misunderstand something, the authors can reply in the rebuttal phase.
3. It is mentioned that "the recommended heterogeneous learnware significantly outperforms user self-training with limited labeled data", which is similar to semi-supervised learning. Is it fair to compare the proposed method with user self-training with limited labeled data?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please clarify the weaknesses mentioned above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: In this paper, authors discussed the limitations as follows:
This paper assumes that all models with heterogeneous feature spaces share the same label space. However, this assumption can be extended to include heterogeneous label spaces as well, through the use of multiple learnware recommendations proposed in previous work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: **Q1: To my knowledge, there have been some works aiming at handling learnwares from heterogeneous feature spaces. However, these works are only cited in reference list while not adequately discussed.**
Thanks for your careful reading. We will discuss more related papers in the revised version. Here is a brief discussion:
These works focus on **constructing and utilizing heterogeneous learnware markets in specific scenarios** with relatively fixed feature combinations, such as relational databases constructed from multiple related tables. For a specific task, the complete feature space consists of several blocks, and the user and learnware feature spaces are combinations of these blocks.
- **MLJ 2024:** The **first work** to consider heterogeneous learnware. **The original training data is accessible, and auxiliary data across the entire feature space is collected.** The paper assigns specifications to heterogeneous models, learns the subspace using the model's original data and auxiliary data, and then generates the RKME specification based on the projection results.
- **IJCAI 2023:** This study explores **organizing and utilizing a heterogeneous model hub without accessing the original data or auxiliary data across the feature space.** It performs subspace learning via the RKME specification. Developers upload models and specifications to the learnware market. The market learns the subspace and mapping function based on the specification, aligning learnware specifications with user requirements.
- **This paper:** This work **identifies the limitations of the RKME specification,** noting its inability to effectively support subspace learning and model recommendation due to the lack of label information. Unsupervised subspace learning can mix data from different tasks and categories, leading to poor model recommendation and reuse. The RKME specification only describes the marginal distribution, matching models based solely on this without considering their capabilities. **This paper embeds label information and model capabilities into the specification** to improve the heterogeneous learnware market mechanism. This **allows subspace learning to incorporate supervised information**, better aligning samples from each class of different tasks. In model recommendation, the model's classification boundary is matched with the user distribution, enhancing the performance of the heterogeneous model hub.
---
**Q2: Discussion of newly proposed specifications**
Thank you for your advice. We will revise the abstract and provide a detailed discussion on the newly proposed specification in the revised version. Below are the major differences between the two specifications:
- **Summary:** Specifications describe a model's ability without exposing raw data. $\text{RKME}$ (TKDE 2023) sketches the marginal distribution of the training data but lacks label information, which is insufficient for handling heterogeneous learnwares. We extend $\text{RKME}$ to $\text{RKME}_\text{L}$ by adding label information to better encode the model's ability through its conditional distribution.
- **Representation Form:** The original $\text{RKME}$ specification includes minor weighted samples $\\{\beta\_m,\boldsymbol{z}\_m\\}\_{m=1}^M$. The extended $\text{RKME}\_\text{L}$ specification includes minor *labeled* weighted samples $\\{\beta\_m,\boldsymbol{z}\_m,y\_m\\}\_{m=1}^M$, where $\beta\_m$ is the weight, $\boldsymbol{z}\_m$ is the generated sample, and $y\_m$ is the label.
- **Generation Mechanism:** $\text{RKME}$ outlines the marginal distribution $P(X)$ of a dataset. In contrast, $\text{RKME}\_\text{L}$ includes label information, outlining both the marginal distribution $P(X)$ and the conditional distribution $P(Y|X)$. $\text{RKME}$ uses minor weighted samples to approximate the kernel mean embedding of the original data, with distance measured by MMD distance. $\text{RKME}\_\text{L}$ can either compress the marginal distribution $P(X)$ first and then generate labels or compress both $P(X)$ and $P(Y|X)$ simultaneously based on model predictions.
- **Impact on Subspace Learning:** With $\text{RKME}$, the lack of label information can result in poor model recommendation and reuse as different task categories may overlap in the subspace. In contrast, $\text{RKME}\_\text{L}$ allows for better alignment of samples from different tasks within the same class, leading to improved performance of the heterogeneous model hub.
- **Impact on Recommendation Mechanism:** $\text{RKME}$ only recommends models with a similar marginal distribution to the user task, ignoring the model's ability. $\text{RKME}\_\text{L}$ enables the system to match the model boundary with the user task boundary by comparing the conditional distribution $P(Y|X)$, resulting in better model recommendation outcomes.
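As an illustrative sketch of the compression objective shared by both specifications (assuming a Gaussian kernel with a placeholder bandwidth `gamma`; the paper's kernel choice may differ), the squared MMD between a weighted reduced set $\{(\beta_m, \boldsymbol{z}_m)\}_{m=1}^M$ and the raw data $X$ can be computed as:

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(beta, Z, X, gamma=1.0):
    # Squared MMD between the weighted reduced set {(beta_m, z_m)} and the
    # empirical distribution of the raw data X (uniform weights 1/n).
    n = X.shape[0]
    w = np.full(n, 1.0 / n)
    return (beta @ rbf_gram(Z, Z, gamma) @ beta
            - 2.0 * beta @ rbf_gram(Z, X, gamma) @ w
            + w @ rbf_gram(X, X, gamma) @ w)
```

RKME selects $\beta$ and $Z$ to approximately minimize this quantity; $\text{RKME}_\text{L}$ additionally attaches a label $y_m$ to each reduced sample, via either of the two generation mechanisms described above.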
---
**Q3: It is mentioned that "the recommended heterogeneous learnware significantly outperforms user self-training with limited labeled data", it is similar to semi-supervised learning. Is it fair to compare the proposed method with user self-training with limited labeled data?**
We apologize for any confusion caused and appreciate your careful reading. We will address this in the revised version.
**Our problem setup aligns with supervised learning; however, the user's training data may be insufficient.** In such cases, training a model with a small expected loss is challenging due to the lack of labeled data. By leveraging the heterogeneous learnware dock system to identify and reuse existing well-trained models, users can significantly enhance task performance, as demonstrated in the paper experiments.
---
**Q4: Extension for heterogeneous label space**
Thanks for your suggestion. Please see author rebuttal for a brief discussion.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. Some of my concerns have been addressed. I am open to hear discussions from other reviewers.
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our manuscript. Please feel free to let us know your other concerns. We are ready to provide further explanations to address any issues you may have.
---
Summary: This paper introduces a novel approach for utilizing heterogeneous models with diverse feature spaces in the learnware paradigm. Key innovations include incorporating label information to improve model specification and subspace learning, and constructing a heterogeneous learnware dock system that uses pseudo labels to encode model capabilities. Extensive experiments show that the recommended heterogeneous learnwares outperform user self-training with limited labeled data and improve task performance as more labeled data becomes available.
Strengths: 1. Addresses a practical and important problem of handling heterogeneous models with diverse feature spaces in the learnware paradigm, which is often encountered in real-world scenarios.
2. Innovative approach of leveraging label information, including model predictions and user's minor labeled data, to enhance the model specification and subspace learning, going beyond previous methods that relied on raw data or auxiliary co-occurrence data.
3. Thorough experimental evaluation on real-world tasks, demonstrating the significant performance improvements over user self-training with limited labeled data.
Weaknesses: 1. The potential challenges or limitations of the proposed model are suggested to be given.
2. The paper does not provide a thorough analysis of the computational complexity and scalability of the proposed framework, which could be important for large-scale, real-world deployments.
3. The experiments are conducted on a single dataset, and a more diverse evaluation of different types of tasks and datasets would strengthen the generalizability of the findings.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: **Q1: The potential challenges or limitations of the proposed model are suggested to be given.**
Thanks for your advice. We discuss the limitations of the proposed method in the checklist (see Q2: limitations).
In this paper, we consider scenarios where the feature spaces of models in the learnware dock system are heterogeneous. However, real-world applications may also have heterogeneous label spaces. We will add more discussions on this in the revised version and please see the author rebuttal for a brief discussion.
---
**Q2: The paper does not provide a thorough analysis of the computational complexity and scalability of the proposed framework, which could be important for large-scale, real-world deployments.**
The process of the learnware paradigm is outlined as follows:
* **Building the Learnware Market:** Developers generate specifications, package them with models into learnware, and submit them to the learnware dock system. This system organizes the learnware and assigns system-level specifications to the models.
* **Utilizing the Learnware Market:** Users generate requirements and submit them to the system, which then recommends appropriate learnware. Users can then reuse the learnware.
Below, we analyze the complexity of each process. The relevant symbols are defined as follows: the size of the raw dataset is $ n $, the number of categories is $ c $ (1 for regression), the dimension is $ d' $, and the size of the specification is $ m $. There are a total of $ N $ learnwares in the learnware dock system, and the complete feature space contains $ K $ sub-blocks with a total dimension of $ d $.
**1. Specification/Requirement Generation**
* **Initialization:** k-means clustering, with a complexity of $ O(nmcd') $.
* **Iterative Optimization:** Achieved through alternating optimization, with the bottleneck being the inverse matrix computation of the kernel matrix, which has a complexity of $ O(cn^3) $.
The total complexity is $ O(c(nmd' + T\_s n^3)) $. Typically, the number of iterations $ T\_s $ is small, around 10, and the size of the specification $ m $ is much smaller than the dataset size $ n $. When the sample size $ n $ is large, the complexity of specification generation is $ O(cn^3) $, scaling cubically with the size of the raw dataset $ n $. However, **with GPU acceleration, the time required for specification generation is quite small. For the dataset openml__volkert__168331 with shape (58310, 180), the specification generation time is 2.826s (1×A100 80G).**
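As a sketch of one alternating step (assuming the common reduced-set formulation; the exact per-class handling is not shown), fixing the reduced samples $Z$ turns the weight update into a linear solve against a kernel matrix. Here the solve is against the small $m \times m$ matrix $K_{zz}$; the $O(cn^3)$ bottleneck above arises when an $n \times n$ kernel matrix over the raw data must be handled instead:

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def optimal_beta(Z, X, gamma=1.0, ridge=1e-8):
    # With reduced samples Z fixed, minimising the squared MMD
    #   || (1/n) sum_i phi(x_i) - sum_m beta_m phi(z_m) ||^2
    # over beta is least squares in the RKHS, with closed form
    #   beta = K_zz^{-1} K_zx (1/n) 1
    # (a small ridge keeps K_zz numerically invertible).
    n, m = X.shape[0], Z.shape[0]
    K_zz = rbf_gram(Z, Z, gamma) + ridge * np.eye(m)
    K_zx = rbf_gram(Z, X, gamma)
    return np.linalg.solve(K_zz, K_zx @ np.full(n, 1.0 / n))
```

When the reduced set coincides with the data, the closed form recovers uniform weights $1/n$, as expected.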
**2. Subspace Learning**
* **Single Epoch, Single Specification:** The complexities for calculating the contrastive loss, reconstruction loss, and supervised loss are $ O(m^2 K^2 d) $, $ O(mK d^2) $, and $ O(mcd + md^2) $ respectively, totaling $ O(m^2 K^2 d + mK d^2) $. The complexity of updating the model using the loss functions (both are two-layer fully connected networks) is $ O(m d^2) $. Thus, the complexity for subspace learning with a single specification is $ O(m^2 K^2 d + mK d^2) $.
* **All Epochs, All Specifications:** Considering all specifications and the number of iterations, the complexity for subspace learning is $ O(N T\_{sub} (m^2 K^2 d + mK d^2)) $.
**This complexity is linearly related to the number of learnwares $ N $ in the market and quadratically related to the size of the specifications $ m $, the dimension $ d $ of the feature space, and the number of feature blocks $ K $.**
**3. Learnware Recommendation**
The time complexity is $ O(Ncm^2) $, **linearly related to the number of learnwares $ N $ in the market**.
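A minimal sketch of where this cost comes from (the function names are invented; the comparison here uses only marginal specifications, whereas the paper's $\text{RKME}_\text{L}$ matching also compares the conditional distribution, contributing the factor $c$): each pairwise distance costs $O(m^2)$ kernel evaluations, repeated over the $N$ learnwares.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def spec_mmd2(spec_a, spec_b, gamma=1.0):
    # Squared MMD between two weighted specifications (beta, Z):
    # O(m^2) kernel entries per pair of size-m specifications.
    (ba, Za), (bb, Zb) = spec_a, spec_b
    return (ba @ rbf_gram(Za, Za, gamma) @ ba
            - 2.0 * ba @ rbf_gram(Za, Zb, gamma) @ bb
            + bb @ rbf_gram(Zb, Zb, gamma) @ bb)

def recommend(user_spec, hub_specs, gamma=1.0):
    # One distance per learnware in the hub: O(N m^2) overall, and
    # O(N c m^2) if each of the c classes is matched separately.
    dists = [spec_mmd2(user_spec, s, gamma) for s in hub_specs]
    return int(np.argmin(dists))
```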
**4. Model Reuse**
When reusing a model, it only requires mapping the raw data through a two-layer fully connected network to the feature space corresponding to the heterogeneous learnware and making predictions using the learnware, which has low time complexity.
**5. Summary**
**The most time-consuming procedure of the learnware paradigm is the subspace learning for heterogeneous learnware recommendation. Even with a large number of learnwares $ N $, the total training data $ Nm $ for subspace learning remains manageable due to the small size of each specification (typically $ m = 50 $). Furthermore, our methods train an encoder and decoder for each feature block instead of each model. Although the number of models can be quite large, the number of feature blocks is relatively small.** In an extreme case with $ K $ blocks, there are $ 2^K $ possible feature spaces, each with its own models; our method trains only $K$ encoders and decoders, not $2^{K}$.
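The per-block design can be illustrated with a toy sketch (linear maps stand in for the two-layer networks; the block sizes and subspace dimension below are invented): $K$ encoder/decoder pairs cover every feature space assembled from the $K$ blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

block_dims = {0: 4, 1: 6, 2: 5}   # K = 3 feature blocks (sizes assumed)
subspace_dim = 8

# One encoder/decoder pair per *block*, not per model: 3 pairs suffice
# for all 2^3 - 1 possible block combinations.
enc = {k: rng.normal(size=(d, subspace_dim)) for k, d in block_dims.items()}
dec = {k: rng.normal(size=(subspace_dim, d)) for k, d in block_dims.items()}

def embed(sample_blocks):
    # A task's feature space is some subset of the blocks; embed each
    # block that is present and aggregate in the shared subspace.
    zs = [x @ enc[k] for k, x in sample_blocks.items()]
    return np.mean(zs, axis=0)

# A task over blocks {0, 1} and another over {1, 2} reuse the same
# 3 block-level encoders -- no exponential blow-up in trained modules.
z = embed({0: rng.normal(size=(1, 4)), 1: rng.normal(size=(1, 6))})
```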
Beyond the theoretical analysis, we also provide the actual running times in the attached PDF to demonstrate the efficiency of the proposed framework. Our method requires much less time for preparation (constructing the learnware market) compared to the pre-training methods, and the time for utilization (recommending and reusing learnware) is also efficient. Please see the author rebuttal for details.
---
**Q3: The experiments are conducted on a single dataset, and a more diverse evaluation of different types of tasks and datasets would strengthen the generalizability of the findings.**
We have conducted experiments on real-world projects, please see author rebuttal for details. Thanks for your suggestion.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal, I do not have any questions.
---
Reply to Comment 1.1.1:
Comment: Thank you again for reviewing our manuscript; your suggestions are valuable to us. We hope that our response adequately addresses your concerns. Please kindly let us know if you need any further clarification, and we are prepared to provide any additional information that might be helpful. We would also greatly appreciate it if you could consider increasing the score.
---
Summary: The paper focuses on the learnware paradigm and finds that label information plays an important role in it, which is both practical and interesting. It proposes a new specification that enhances subspace learning and improves learnware management. Extensive experiments demonstrate the superiority of the proposed methods.
Strengths: 1. The paper is well-written and easy to follow.
2. The paper addresses a significant gap in the learnware paradigm by proposing a method for handling heterogeneous feature spaces, a common real-world scenario.
3. The experiments are thorough, covering both classification and regression tasks, and comparing the proposed method against a wide range of baseline methods.
4. The code is provided in the appendix, enhancing the reproducibility of the results.
Weaknesses: 1. The method seems complex. Thus, a running time comparison is necessary to show its efficiency.
2. I wonder if the method can be used for larger data to show its scalability.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors could clarify the limitations and future work for better understanding.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: **Q1: The method seems complex. Thus, a running time comparison is necessary to show its efficiency.**
Thank you for your suggestion. In the author rebuttal, we provided a brief discussion; here is a more detailed explanation.
**1. Evaluation Setup**
The evaluation setup includes the preparation and utilization phases. Since the preparation phase is done once by non-user entities, we focus on the utilization phase time required for user tasks. The evaluation setup for each method is as follows:
- **lightgbm**: Utilization phase only, training from scratch on user-labeled data.
- **TabPFN, Transtab, Xtab**: Both phases. Preparation involves obtaining a pre-trained model; utilization involves fine-tuning on user data.
- **Align_unlabeled, Align_labeled, Hetero, Our_unify, and Our_cls**: Both phases are involved. Preparation includes constructing and organizing a model hub, while utilization involves generating task requirements, recommending models, and reusing models on user data.
- Our_unify requires a self-trained model for pseudo-labeling during specification generation and is influenced by the model type and training parameters. Therefore, $\textbf{Our}_{\textbf{unify}}$ records the time assuming a pre-existing self-trained model, whereas $\textbf{Our}\_{\textbf{unify}}^{\text{lightgbm}}$ includes the entire process, including model training time using the time-consuming lightgbm method. Users can opt for a simpler model to reduce time costs.
**2. Summary of Evaluation Results**
- Classification Tasks:
- $\textbf{Our}\_{\textbf{unify}}$ ranks 1st, $\textbf{Our}\_{\textbf{cls}}$ ranks 3rd
- $\textbf{Our}\_{\textbf{unify}}^{\text{lightgbm}}$ ranks 7th (lightgbm ranks 5th).
- Regression Tasks:
- $\textbf{Our}\_{\textbf{unify}}$ ranks 2nd
- $\textbf{Our}\_{\textbf{unify}}^{\text{lightgbm}}$ ranks 4th (lightgbm ranks 3rd).
**3. Analysis of Utilization Phase**
Comparing $\textbf{Our}\_{\textbf{unify}}$ with the other methods:
- **lightgbm**: Requires training a model from scratch, whereas our methods reuse relevant heterogeneous models, making them faster.
- **TabPFN, Transtab, Xtab**: Our methods do not need the entire labeled user data for fine-tuning, making them quicker.
- **Align_unlabeled, Align_labeled**: Our methods do not traverse the model hub but directly find the most suitable model, making them faster.
- **Hetero**: Similar process, but Hetero might recommend multiple models, whereas our methods recommend only one, making them faster.
Among our methods, $\textbf{Our}\_{\textbf{unify}}$ is faster than $\textbf{Our}\_{\textbf{cls}}$. $\textbf{Our}\_{\textbf{unify}}$ compresses $X$ first, then uses the model to predict on the reduced set, generating the specification quickly if the user model is already prepared. $\textbf{Our}\_{\textbf{cls}}$ predicts first, then compresses the $X$ and $Y$ distributions simultaneously, making it slower.
**By efficiently generating user task requirements and quickly reusing heterogeneous models, our method reduces the utilization phase time compared to other methods.**
**4. Analysis of Preparation Phase**
- **TabPFN**: Pre-trained on simulated datasets generated by structural causal models, learning the posterior distribution $P(y\_{test}|x\_{test},D\_{train})$ and applying it to small-scale real tasks. Pre-training time: 20 hours (8×RTX 2080 Ti).
- **Xtab**: Learns a shared backbone network from many tasks; pre-training time unknown.
- **Transtab**: Uses feature descriptions and values from related tasks to generate a unified subspace, training a shared Transformer backbone. Average pre-training time: 672 seconds (classification) and 1425 seconds (regression).
- **Align_unlabeled, Align_labeled**: No model library organization time.
- **Hetero**: Organization time: 2.44 seconds (classification) and 2.55 seconds (regression).
- **Our_unify**: Organization time: 41.39 seconds (classification) and 40.78 seconds (regression).
- **Our_cls**: Organization time: 45.69 seconds (classification).
Times are based on a single A100 GPU.
**Compared to methods using pre-trained models, our methods significantly reduce preparation time by not requiring training on complete datasets but using specifications, which are smaller. We also do not train a unified backbone network, only performing simple subspace learning.**
---
**Q2: I wonder if the method can be used for larger data to show its scalability.**
Thank you for your question. Indeed, our experiments test datasets of various sizes and show that even with more labeled data, our paradigm enhances user performance. We also discuss the scalability of the learnware dock system with numerous heterogeneous learnware in the author rebuttal.
**1. Our experiments test datasets of various sizes**
We tested datasets with varying sizes. For classification tasks, sample sizes range from 1,000 to 58,310, feature dimensions from 7 to 7,200, and classes from 2 to 10. For regression tasks, sample sizes range from 418 to 108,000, with feature dimensions from 8 to 128.
**2. Our method boosts performance even with more labeled data**
Tables 1 and 2 in our paper show that with only 100 labeled samples, using the recommended heterogeneous model significantly outperforms user self-training. Figures 4 and 5 illustrate that combining the heterogeneous model with user self-trained models improves performance across different labeled data amounts. **On average, with 500 labeled samples, reusing the recommended model still outperforms self-training. With 2,000 labeled samples, the heterogeneous model continues to enhance performance. Even with 5,000 labeled samples, learnware improves 21% of classification cases and 50% of regression cases.** In some cases, like kin8nm, even using the entire training dataset, the heterogeneous model improves user performance.
**3. The cost of organizing a learnware dock system with numerous heterogeneous learnware is relatively small**
Please refer to the author rebuttal for details.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the rebuttal. As my original scoring is optimistic, I retain my scoring to this paper with higher confidence.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for increasing your confidence level to 4 (confident but not absolutely certain). We greatly appreciate your thorough review of our manuscript and your valuable suggestions. If you have any further questions, please feel free to ask. We are ready to provide any additional information that might be helpful.
---
Rebuttal 1:
Rebuttal: Dear Reviewers,
**Please see the attached one-page PDF with a summary of additional experimental results regarding time analysis and performance on real-world projects.**
We would like to thank all reviewers for their constructive feedback, which has greatly improved our paper. We are encouraged by the reviewers' positive remarks, including:
* The **practical and important problem** of handling learnwares from heterogeneous feature spaces (Reviewer NnRb, 8wdE) and the recognition of learnware as a useful learning paradigm (Reviewer PBMH).
* The **novel observation** that matching with only the marginal distribution is insufficient (Reviewer HLyF).
* The well-written and **easy-to-follow** paper (Reviewer NnRb, HLyF), with thorough (Reviewer NnRb) and good (Reviewer HLyF) experiments.
---
We have diligently worked on addressing your suggestions and have summarized the additional experiments and analysis below.
**1. Running Time and Scalability**
**Running Time.** Our experiments demonstrate the efficiency of our method through comparative analysis (see Tables 1 and 2 in the attached PDF). The evaluation is divided into the preparation phase (pretraining a model/building the model hub) and the utilization phase (fine-tuning the pre-trained model/recommending and reusing models), with a primary focus on the latter as it involves user tasks. **When a user has a self-trained model for pseudo-labeling a reduced set during specification generation, our method** $\textbf{Our}\_{\textbf{unify}}$ **ranked 1st in classification tasks and 2nd in regression tasks.** Even when including our method with a self-training procedure using the time-consuming LightGBM, $\textbf{Our}\_{\textbf{unify}}^{\text{lightgbm}}$, it is still much quicker than most pretrained methods (Transtab, Xtab) and the model hub method that requires fine-tuning the model $\textbf{Align}\_{\textbf{label}}$. **The efficiency of our method is heavily based on specifications, which are much smaller than the original dataset.** Specifications quickly sketch the user task and help match the relevant heterogeneous learnware, while the mapping functions learned during the learnware dock system construction help quickly reuse heterogeneous learnware. **All procedures avoid using the whole dataset for fine-tuning.** In the preparation phase, our method is much quicker than pre-trained methods (detailed results in the Response to Reviewer NnRb), because our method only reuses specifications for subspace learning with simple mapping functions, rather than training a shared, complicated Transformer-based backbone with raw data.
**Scalability.** Here we discuss the case where the learnware dock system contains a large number of heterogeneous learnwares. The most time-consuming aspect of the learnware paradigm is the subspace learning required for organizing heterogeneous learnwares. **Even with a large number of learnwares $N$ , the total training data $Nm$ for subspace learning remains manageable due to the small size of each specification (typically $m=50$ ).** Additionally, our method **trains an encoder and decoder for each feature block rather than for each model**. Despite the potentially large number of models, the number of feature blocks is relatively small. For $K$ blocks resulting in $2^K$ feature spaces and models, our method trains only $K$ encoders and decoders, not $2^K$. This also helps address user tasks with combinations of feature blocks not present in the learnware dock system.
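To make the per-block (rather than per-model) training concrete, here is a toy sketch (all names hypothetical): $K$ trained encoders suffice to serve every combination of feature blocks, including combinations never seen during system construction.

```python
from itertools import combinations

# K feature blocks -> train only K (encoder, decoder) pairs, yet they
# cover every non-empty combination of blocks (2^K - 1 feature spaces).
K = 4
blocks = [f"block_{k}" for k in range(K)]
encoders = {b: f"encoder_{b}" for b in blocks}  # placeholders for trained encoders

def encoders_for_task(task_blocks):
    # Compose the per-block encoders matching a user task's feature space,
    # even for block combinations absent from the learnware dock system.
    return [encoders[b] for b in task_blocks]

# Number of distinct non-empty feature-space combinations served by K encoders:
n_feature_spaces = sum(1 for r in range(1, K + 1)
                       for _ in combinations(blocks, r))
```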
**2. Performance on Real-World Projects**
We tested our method on two real-world projects: a popular sales-forecasting competition on Kaggle (Predict Sales Forecasting, PFS for short), a regression task whose raw data contains 6 tables, and MIMIC-III [Johnson et al., 2016], a widely used large clinical database of critical-care units containing 26 tables.
For PFS, we adopted a popular feature-engineering pipeline from the competition, and for MIMIC-III, we used the preprocessing of the MIMIC-III benchmark [Harutyunyan et al., 2019] and selected the in-hospital mortality task, a binary classification task. For the processed data, we split the feature space according to its semantics for PFS and randomly for MIMIC-III due to its high feature dimension (714). For PFS, we further split the data by location, as sales forecasting is evaluated locally. The data for each location is on the order of hundreds of thousands of samples.
**Our methods ranked first, demonstrating superior performance over other contenders in both real-world medical classification and business regression tasks.**
---
Finally, we discuss the extension of our method. Although our method is designed for heterogeneous feature spaces, it can also be naturally extended to handle both heterogeneous feature and label spaces. We give a brief discussion as follows:
**3. Extension for heterogeneous feature & label spaces**
We assume that the overall feature space and label space for a task are represented as $\mathcal{X}=\mathcal{X}\_1\times\cdots\times\mathcal{X}_Q$ and $\mathcal{Y}$, respectively. When the learnwares have heterogeneous feature spaces $\mathcal{X}\_i=\times\_{k\in C\_i}\mathcal{X}\_k$ and label spaces $\mathcal{Y}\_i\subseteq\mathcal{Y}$, our method can be naturally extended to address this problem. We present the following extension as an example. The subspace can be learned in the same way, **with the supervised term playing an important role in label space alignment within the subspace**. For learnware recommendation, the learnware dock system can recommend the most suitable model **for each class** of the user's task using the MMD distance. For the recommended learnwares, reuse can be achieved through **dynamic classifier selection**, as demonstrated in previous work (TKDE 2023).
---
For other questions, please refer to our reviewer-specific feedback for further details. We hope our responses are satisfactory and address your concerns.
Pdf: /pdf/f60abb91aa1365b898c7e27e9afe2c56f480221c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling | Accept (poster) | Summary: This paper adds an information bottleneck objective for training a hidden representation and reward head in RLHF. They then empirically investigate advantages in policy optimization using such a reward model: there is less reward model overoptimization as judged by a gold RM, and overoptimization can be spotted by outlier detection in the latent space.
Strengths: The main strength is the apparently strong empirical results, highlighting the potential practical applicability for frontier model training.
Weaknesses: In my weaknesses, I fully focus on the theoretical analysis, hoping that other reviewers touch on scrutinizing the empirical investigations. Overall, the theoretical derivations contain numerous local mistakes. Sometimes, these mistakes "cancel out" again, leading to correct overall conclusions, but in one case, I remain skeptical or confused. There is a chance that the authors actually implemented everything correctly by good intuitions, but the theoretical investigations themselves are lacking in clarity and correctness. I now elaborate on this further.
*I am willing to raise my score if my theoretical concerns are fully addressed, though this potentially depends on how convinced the other reviewers are of the experiments.*
**1. Lower bound of $I(Y; S)$:** The authors give several *different* formulas for this lower bound:
- Equation (4): $I(Y; S) \geq E_{(x, y)}\Big[\int p_{\phi}(s, x) \log q_{\psi}(y \mid s) ds\Big]$.
- Equation (9): $I(Y; S) \geq \int p_{\phi}(x, s) \log q_{\psi}(y \mid s) ds dy$.
- Equation (27): $I(Y ; S) \geq \int p_{\phi}(s, x) \log q_{\psi}(y \mid s) ds$.
All three formulas are incorrect. The correct formula is:
- $I(S; Y) \geq E_{(x, y)} \Big[ \int p_{\phi}(s \mid x) \log q_{\psi}(y \mid s) ds \Big]$.
This can be verified by reading their reference [1], "Deep variational information bottleneck", Equation (12). Equation (4) is incorrect since it contains a factor $p(x)$ too much. Equation (9) is incorrect since it's not coherent: $x$ is not integrated over, but appears in the integrand. Same for (27): $x$ and $y$ appear in the integrand, but are not integrated over.
However, despite these many local mistakes, their end result, Equation (29), seems to contain the correct (!) lower bound once everything is written down in concrete samples. This, however, requires one more unstated assumption, namely that $p_{\phi}(s \mid x) = \delta_{f_{\phi}(x)}(s)$, i.e., that the distribution in the hidden state is deterministically the output of $f_{\phi}$. If this is assumed, then we have an explanation for why the integral over $S$ in Equation (29) disappears, and everything seems correct.
**2. Upper Bound of $I(X, S \mid Y)$:**
- The authors claim there is a Markov chain $X \to S \to Y$. This is incorrect: There is a Markov chain $S \to X \to Y$. It is obvious that the first Markov chain cannot hold in general since the probability of $S$ given $X$ is given by the encoder, so if the encoder removes ALL information, then no information about $Y$ can be left. I think the authors may have confused themselves by thinking about the Markov chain $X \to S \to \hat{Y}$, where $\hat{Y}$ is the RV of *outputs of the model* instead of the true data distribution.
- The authors derive locally incorrect conclusions from the Markov chain property in Equations (22) and (23).
- Equation (24) is also wrong in general: E.g., if the encoder deterministically maps each $x$ to the same point $s = \star$, then $I(X; S) = 0$, but the RHS would usually still be positive.
Despite these many mistakes, the authors somehow conclude with the correct (!) inequality $I(X; S \mid Y) \leq I(X; S)$ in Equation (25), and the rest of the derivation does not make use of the mistakes before anymore. Note, however, that the inequality in (25) is very elementary and well-known, and so does not need a one page long proof. One could either state it without proof or very easily find references for it (e.g., it's a sort-of dual of the data processing inequality and e.g. [follows directly from the proof given in wikipedia](https://en.wikipedia.org/wiki/Data_processing_inequality)). Though my true suggestion, actually, would be to just replace $I(X; S \mid Y)$ by $I(X; S)$ entirely since this is closer to the variational bound, increasing clarity on what is done in this paper.
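For concreteness, the elementary argument (under the correct Markov chain $S \to X \to Y$) is just two applications of the chain rule for mutual information:

```latex
\begin{aligned}
I(S; X, Y) &= I(S; X) + I(S; Y \mid X) \;=\; I(S; Y) + I(S; X \mid Y),\\
I(S; Y \mid X) &= 0 \qquad \text{(Markov chain } S \to X \to Y\text{)},\\
\Rightarrow\quad I(S; X \mid Y) &= I(S; X) - I(S; Y) \;\leq\; I(S; X),
\end{aligned}
```

where the last inequality uses the nonnegativity of $I(S; Y)$.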
In the final formula for the upper bound of $I(X;S \mid Y)$ in Equation (29), there appears the following KL divergence: $KL[f_{\phi}(x_w), r(S)]$. **I do not understand what this is:** As established before, to make sense of the lower bound of $I(S; Y)$, I assumed that $f_{\phi}$ is a deterministic function. However, to put it into the KL divergence, it would need to be a distribution, so the type signatures do not match. Perhaps the authors mean the Dirac delta distribution on the unique output? (I did not check whether that case then makes sense)
**3. Minor Weaknesses (Addressing those won't change my score):**
- Footnotes should be placed after punctuation marks, not before
- Sometimes, some clarity is missing. E.g., I needed a bit of time to figure out that $X$ denotes *input pairs* and $Y$ denotes *choices* in Section 3.2. I first thought it would be inputs and rewards. I'd recommend clarifying this. The confusion arose, e.g., because you denoted $Y$ as the "reward model output". (Besides, it's not even the preference model output but instead the true preference of the gold RM! Clarity on this might have prevented the wrong Markov chain I mention above.)
- I recommend the notation $I(X; Y)$ over $I(X, Y)$, since the latter could be confused with the entropy of the joint variable $(X, Y)$.
Technical Quality: 2
Clarity: 2
Questions for Authors: See above.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes, though my impression was that the broader impacts and limitations should have been explained within the 9-page limit, though I am not entirely sure. In the submitted paper, both are on page 10. Maybe it's worth clarifying, should the paper be accepted, whether these sections should be within 10 pages or whether they're then allowed to even be on *page 11*.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging our strong empirical results. We will address each of your comments below and also in our revised manuscript.
---
> **W1:** Some typos and missing assumptions in variational lower bound derivation.
>
**RW1:** Thank you for carefully checking our derivation process and pointing out the typos and missing assumptions. Based on your comments, we will modify the variational lower bound derivation process as follows:
- First, we will correct the typo in the lower bound of $I(Y; S)$ to $\mathbb{E} _{(x,y)} \left[ \int p _{\phi}(s \mid x) \log q _{\psi}(y \mid s) ds \right]$ in the revised version.
- Second, we will add the assumption about the hidden state distribution for Equation (29) in Appendix A of our paper:
“*The distribution of the hidden state $p_{\phi}(s \mid x)$ follows a Gaussian distribution with the mean and variance determined by the output of the encoder* $f_{\phi}(\cdot)$”
- Third, we will remove the proof of $I(X; S \mid Y) \leq I(X; S)$ in Appendix A of our paper and directly use it as a well-known inequality.
It is worth noting that although there are some local issues in our variational lower bound derivation, such as typographical errors and missing assumptions, **they are primarily located in Appendix A and do not affect the final derivation results.** Therefore, **they barely impact other parts of the paper, and our overall work remains valid.**
> **W2:** Question about **$KL[f_{\phi}(x_w), r(S)]$.**
>
**RW2:** Thanks for pointing this out. Based on the hidden state distribution assumption in RW1, this term should be corrected to $KL[p_{\phi}(s \mid x), r(s)]$. In the revised version, we will double-check our paper to avoid similar errors.
> **W3:** Some unclear notations.
>
**RW3:** Thanks for your careful suggestion. In the revised version, we will correct the position of the footnotes, further clarify the definition of $X$ and $Y$, and replace $I(X,Y)$ by $I(X;Y)$.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal!
> Second, we will add the assumption about the hidden state distribution for Equation (29) in Appendix A of our paper: “The distribution of the hidden state follows a Gaussian distribution with the mean and variance determined by the output of the encoder”
- Could you write down in formulas the relationship between $f_{\phi}$ and $p_{\phi}$? Including the precise formula for mean and variance?
- As I said, assuming a Dirac distribution, I could make sense of the first half of Equation (29). Could you provide a full derivation based on your Gaussian assumption?
> Third, we will remove the proof of $I(X;S \mid Y) \leq I(X;S)$ in Appendix A of our paper and directly use it as a well-known inequality.
I actually recommend (and also stated so) to directly start with $I(X;S)$ in the objective in Equation (3) since this is closer to what you actually optimize and the standard in the literature.
---
Rebuttal 2:
Title: Response to Reviewer YUjH [Part I]
Comment: > **Q1:** The relationship between $f_{\phi}$ and $p_{\phi}$.
>
**Response:** By assuming latent representation distribution $p_{\phi}(\boldsymbol s| \boldsymbol x)$ follows a multivariate Gaussian with a diagonal covariance structure, whose mean and covariance are determined by the output of the encoder $f_{\phi}(\boldsymbol x)$, the relationship between $f_{\phi}(\boldsymbol x)$ and $p_{\phi}(\boldsymbol s|\boldsymbol x)$ can be formulated as follows:
$p_{\phi}(\boldsymbol s \mid\boldsymbol x) = \mathcal{N}(\boldsymbol s \mid f_{\phi}^{\boldsymbol \mu}(\boldsymbol x), f_{\phi}^{\boldsymbol \sigma}(\boldsymbol x))=\frac{1}{\sqrt{(2\pi)^K |\boldsymbol \Sigma|}} \exp\left( -\frac{1}{2} (\boldsymbol s - f_{\phi}^{\boldsymbol \mu}(\boldsymbol x))^\top \boldsymbol \Sigma^{-1} (\boldsymbol s - f_{\phi}^{\boldsymbol \mu}(\boldsymbol x)) \right)$,
where $\boldsymbol x$ is the network input, $\boldsymbol s$ is the latent representation, $f_{\phi}^{\boldsymbol \mu}(\boldsymbol x)$ is the mean of $\boldsymbol s$, and $\boldsymbol \Sigma$ is the diagonal covariance matrix determined by $f_{\phi}^{\boldsymbol \sigma}(\boldsymbol x)$. **Specifically, the encoder $f_{\phi}(\boldsymbol x)$ generates two outputs: $f_{\phi}^{\boldsymbol \mu}(\boldsymbol x)$ and $f_{\phi}^{\boldsymbol \sigma}(\boldsymbol x)$. The first output, $f_{\phi}^{\boldsymbol \mu}(\boldsymbol x)$, represents the $K$-dimensional mean of the latent representation $\boldsymbol s$. The second output, $f_{\phi}^{\boldsymbol \sigma}(\boldsymbol x)$ is squared to form the diagonal elements of the $K \times K$ diagonal covariance matrix $\boldsymbol \Sigma$.**
In the revised version, we will further clarify the relationship between $f_{\phi}$ and $p_{\phi}$.
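To make this relationship concrete, here is a minimal NumPy sketch (the encoder below is a toy stand-in, not the actual InfoRM network): the encoder emits $f_{\phi}^{\boldsymbol \mu}(\boldsymbol x)$ and $f_{\phi}^{\boldsymbol \sigma}(\boldsymbol x)$, and a latent $\boldsymbol s \sim \mathcal{N}(f_{\phi}^{\boldsymbol \mu}(\boldsymbol x), \mathrm{diag}(f_{\phi}^{\boldsymbol \sigma}(\boldsymbol x)^2))$ is drawn via the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    # Hypothetical stand-in for f_phi: returns the K-dimensional mean
    # f_phi^mu(x) and a positive per-dimension std f_phi^sigma(x),
    # whose square forms the diagonal covariance.
    mu = np.tanh(x)
    sigma = np.exp(-np.abs(x))
    return mu, sigma

def sample_latent(x, eps=None):
    # Reparameterization: s = f_phi^mu(x) + f_phi^sigma(x) * eps, eps ~ N(0, I),
    # so s ~ N(f_phi^mu(x), diag(f_phi^sigma(x)**2)).
    mu, sigma = encoder(x)
    if eps is None:
        eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

x = np.array([0.5, -1.0, 2.0])
s = sample_latent(x)
```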
---
Rebuttal 3:
Title: Response to Reviewer YUjH [Part II]
Comment: > **Q2:** Full derivation of Equation (29) based on Gaussian assumption.
**Response:** The full derivation of Equation (29) in our paper is as follows:
Let $\boldsymbol X$, $\boldsymbol S$, and $Y$ denote the random variables of the reward model input, the latent representation, and the human preference ranking, respectively. The variational lower bound of our IB objective can be formulated as follows:
$J(\boldsymbol{\theta})=I(\boldsymbol S;Y)-\beta I(\boldsymbol X;\boldsymbol S) \geq \mathbb{E} _{(\boldsymbol x,y)}\left[\int p _\phi(\boldsymbol s|\boldsymbol x) \log q _\psi(y | \boldsymbol s) d\boldsymbol s \right] - \beta\ \mathbb{E} _{\boldsymbol x}\left[KL(p _{\phi}(\boldsymbol S|\boldsymbol x)||r(\boldsymbol S))\right]=L$,
where $r(\boldsymbol s)=\mathcal{N}(\boldsymbol{s};\mathbf{0},\mathbf{I})$ is the variational approximation of the marginal distribution $p(\boldsymbol s)$. Notably, $p_{\phi}(\boldsymbol{s}|\boldsymbol{x})$ is modeled as a multivariate Gaussian with a diagonal covariance structure, where the mean and covariance are both determined by the output of the encoder $f_{\phi}(\boldsymbol{x})$, specifically $f_{\phi}^{\boldsymbol{\mu}}(\boldsymbol{x})$ for the mean and $f_{\phi}^{\boldsymbol{\sigma}}(\boldsymbol{x})$ for the covariance; please see the response to Q1 for their detailed relationship. Then, given a latent representation $\boldsymbol s$ drawn from $p_{\phi}(\boldsymbol s|\boldsymbol x)$, the decoder $g_{\psi}(\boldsymbol s)$ estimates the human preference ranking $y$ based on the distribution $q_{\psi}(y|\boldsymbol s)$.
By estimating the expectation on $(\boldsymbol x, y)$ using the sample estimate based on the preference dataset $\mathcal{D}=[\boldsymbol x_n,y_n] _ {n=1}^N$, where $\boldsymbol x_{n}$ comprises a human-chosen sample $\boldsymbol x_{n}^w$ and a human-rejected sample $\boldsymbol x_{n}^l$, with $y_n$ representing the corresponding human preference ranking, the variational lower bound of our IB objective can be approximated as follows:
$L \approx \frac{1}{N} \sum_{n=1}^{N} \left[ \int p_{\phi}(\boldsymbol s|\boldsymbol x_n) \log q_{\psi}(y_n|\boldsymbol s)d\boldsymbol s - \beta \ KL(p_{\phi}(\boldsymbol S|\boldsymbol x_n) ||r(\boldsymbol S)) \right].$
Based on the Gaussian distribution assumption on $p_{\phi}(\boldsymbol s|\boldsymbol x)$, we can use the reparameterization trick to write $p(\boldsymbol s|\boldsymbol x)d\boldsymbol s = p(\boldsymbol \epsilon)d\boldsymbol \epsilon$, where $\boldsymbol \epsilon$ is an auxiliary Gaussian random variable with independent marginal $p(\boldsymbol \epsilon)$. In this way, $\boldsymbol s$ can be expressed by a deterministic function $\boldsymbol s = h_{\phi}(\boldsymbol x,\boldsymbol \epsilon)=f _ {\phi}^{\boldsymbol \mu}(\boldsymbol x)+ f _ {\phi}^{\boldsymbol \sigma}(\boldsymbol x)\boldsymbol \epsilon$.
Hence, we can get the following objective function:
$L \approx \frac{1}{N} \sum _ {n=1}^{N} \left[ \mathbb{E} _ {\boldsymbol \epsilon \sim p(\boldsymbol \epsilon)} \left[\log q _ {\psi}(y _ n | h _ {\phi}(\boldsymbol x _ n, \boldsymbol \epsilon)) \right] - \beta \ \text{KL} \left[ p _ {\phi}(\boldsymbol S|\boldsymbol x_n), r(\boldsymbol S) \right]\right]$
In our experiments, we further employ a sample estimate to determine $\mathbb{E} _ {\boldsymbol \epsilon \sim p(\boldsymbol \epsilon)} \left[\log q_{\psi}(y_n | h_{\phi}(\boldsymbol x_n, \boldsymbol \epsilon)) \right]$ by sampling a single $\boldsymbol \epsilon$ from $p(\boldsymbol \epsilon)$, balancing estimation accuracy against computational cost. Thus our objective can be estimated as follows:
$L \approx \frac{1}{N} \sum_{n=1}^{N} \left[ \log q_{\psi}(y_n | h_{\phi}(\boldsymbol x_n, \boldsymbol \epsilon)) - \beta \ \text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n), r(\boldsymbol S) \right]\right]$ .
**Due to word count limitations, the derivation continues in the next block, i.e., Response to Reviewer YUjH [Part III].**
---
Rebuttal 4:
Title: Response to Reviewer YUjH [Part III]
Comment: **Continuing from the previous block, i.e., Response to Reviewer YUjH [Part II], we now complete our derivation.**
As established in the previous block, we have derived that our objective can be estimated as follows:
$L \approx \frac{1}{N} \sum_{n=1}^{N} \left[ \log q_{\psi}(y_n | h_{\phi}(\boldsymbol x_n, \boldsymbol \epsilon)) - \beta \ \text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n), r(\boldsymbol S) \right]\right]$ .
According to the Bradley-Terry Model, the human preference distribution $p(y_n)$ can be formulated as:
$p(y_n) = p(\boldsymbol x_{n}^w \succ \boldsymbol x_{n}^l)= \sigma(r(\boldsymbol x_{n}^w)-r(\boldsymbol x_{n}^l))$,
where $\sigma(\cdot)$ is the logistic function, and $r(\cdot)$ is the reward model. Notably, in this work, reward model $r(\cdot)$ consists of the previously mentioned encoder $f_{\phi}(\cdot)$ and decoder $g_{\psi}(\cdot)$ and can be expressed as follows:
$r(\boldsymbol x) = g_{\psi}(h_{\phi}(\boldsymbol x, \boldsymbol \epsilon))= g_{\psi}(f_{\phi}^{\boldsymbol \mu}(\boldsymbol x)+ f_{\phi}^{\boldsymbol \sigma}(\boldsymbol x)\boldsymbol \epsilon)$.
Combining the two equations, we obtain:
$\log q_{\psi}(y_n | h_{\phi}(\boldsymbol x_n, \boldsymbol \epsilon)) = \text{log}\ \sigma(g_{\psi}(h_{\phi}(\boldsymbol x_n^{w}, \boldsymbol \epsilon)) - g_{\psi}(h_{\phi}(\boldsymbol x_n^{l}, \boldsymbol \epsilon)))$ .
Now, our estimation of the objective becomes:
$L \approx \frac{1}{N} \sum_{n=1}^{N} \left[ \text{log}\ \sigma(g_{\psi}(h_{\phi}(\boldsymbol x_n^{w}, \boldsymbol \epsilon)) - g_{\psi}(h_{\phi}(\boldsymbol x_n^{l}, \boldsymbol \epsilon))) - \beta \ \text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n^w), r(\boldsymbol S) \right] - \beta \ \text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n^l), r(\boldsymbol S) \right]\right]$ ,
in which $\text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n), r(\boldsymbol S) \right]$ is replaced by $\text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n^w), r(\boldsymbol S) \right] + \text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n^l), r(\boldsymbol S) \right]$.
Recalling that $h_{\phi}(\boldsymbol x,\boldsymbol \epsilon)=f_{\phi}^{\boldsymbol \mu}(\boldsymbol x)+ f_{\phi}^{\boldsymbol \sigma}(\boldsymbol x)\boldsymbol \epsilon$, we abbreviate $g_{\psi}(h_{\phi}(\cdot, \boldsymbol \epsilon))$ as $g_{\boldsymbol \psi} \circ f_{\boldsymbol \phi} (\cdot)$ for clarity and ease of understanding, leading to the final objective in our paper:
$L \approx \frac{1}{N} \sum_{n=1}^{N} \left[ \text{log}\ \sigma(g_{\boldsymbol \psi} \circ f_{\boldsymbol \phi}(\boldsymbol x_n^{w}) - g_{\boldsymbol \psi} \circ f_{\boldsymbol \phi}(\boldsymbol x_n^{l})) - \beta \ \text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n^w), r(\boldsymbol S) \right] - \beta \ \text{KL} \left[ p_{\phi}(\boldsymbol S|\boldsymbol x_n^l), r(\boldsymbol S) \right]\right]$,
where $\sigma(\cdot)$ is the logistic function.
In the revised version, we will include this detailed derivation in Appendix A.
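For illustration, here is a minimal NumPy sketch of this per-pair objective (a toy linear reward head standing in for $g_{\psi}$, and explicit Gaussian parameters standing in for the encoder outputs; not the paper's implementation), using the closed-form KL between a diagonal Gaussian and $\mathcal{N}(\mathbf{0},\mathbf{I})$:

```python
import numpy as np

def kl_to_standard_normal(mu, sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ).
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

def log_sigmoid(z):
    # Numerically stable log of the logistic function sigma(z).
    return -np.logaddexp(0.0, -z)

def info_rm_pair_loss(mu_w, sigma_w, mu_l, sigma_l, g_psi, beta, eps):
    # Per-pair variational IB objective (to be maximized):
    # log sigma(g(s_w) - g(s_l)) - beta * (KL_w + KL_l),
    # with s = mu + sigma * eps via the reparameterization trick.
    s_w = mu_w + sigma_w * eps
    s_l = mu_l + sigma_l * eps
    pref_term = log_sigmoid(g_psi(s_w) - g_psi(s_l))
    kl_term = (kl_to_standard_normal(mu_w, sigma_w)
               + kl_to_standard_normal(mu_l, sigma_l))
    return pref_term - beta * kl_term

rng = np.random.default_rng(0)
w = rng.standard_normal(4)          # toy linear reward head g_psi
g_psi = lambda s: float(w @ s)
eps = rng.standard_normal(4)
L = info_rm_pair_loss(0.1 * np.ones(4), 0.9 * np.ones(4),
                      -0.1 * np.ones(4), 0.9 * np.ones(4),
                      g_psi, beta=0.01, eps=eps)
```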
---
Rebuttal Comment 4.1:
Title: Response to Reviewer YUjH [Part IV]
Comment: > **Q3:** Suggestion about directly starting with $I(X;S)$ instead of $I(X;S|Y)$
>
**Response:** Thank you for your valuable suggestion. In the revised version, we will directly start by minimizing $I(X;S)$ and use this term consistently throughout the paper for greater clarity.
---
Rebuttal 5:
Title: Response to Reviewer YUjH
Comment: > **Q1:** Some clarification questions about $\boldsymbol S$.
**Response:** Thank you for your valuable suggestion. You are indeed correct, and we recognize that the notation for $\boldsymbol S$ has caused some confusion. In the revised version of our manuscript, we intend to make the following clarifications to address the points you raised:
1. We will clarify that the notation $\boldsymbol S$ represents the tuple $(\boldsymbol S^w, \boldsymbol S^l)$, where $\boldsymbol S^w$ and $\boldsymbol S^l$ denote the latent representation corresponding to the accepted and rejected samples, respectively.
2. We will replace $p_{\phi}(\boldsymbol s|\boldsymbol x_n^w)$ with $p_{\phi}(\boldsymbol s^w|\boldsymbol x_n^w)$, and similarly, $p_{\phi}(\boldsymbol s|\boldsymbol x_n^l)$ will be replaced with $p_{\phi}(\boldsymbol s^l|\boldsymbol x_n^l)$.
3. We will clarify that $p_{\phi}(\boldsymbol s|\boldsymbol x_n)$ is equivalent to $p_{\phi}(\boldsymbol s^w|\boldsymbol x_n^w) \cdot p_{\phi}(\boldsymbol s^l|\boldsymbol x_n^l) .$
4. We will further clarify that $r(\boldsymbol S)$ is the independent product distribution over the tuple $(\boldsymbol S^w, \boldsymbol S^l)$, i.e., for an instance $(\boldsymbol s^w, \boldsymbol s^l)$, $r(\boldsymbol s)= r(\boldsymbol s^w) \cdot r(\boldsymbol s^l)$, where $r(\boldsymbol s^w)$ and $r(\boldsymbol s^l)$ each represent a single standard Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$.
Furthermore, we will thoroughly review our entire paper to prevent similar questions. We appreciate your valuable suggestion, which has significantly helped us enhance the clarity and precision of our derivations.
---
Rebuttal Comment 5.1:
Title: Summary of the discussion
Comment: Thank you for answering my last clarification questions!
Here's a summary of the discussion (e.g., for the area chair):
In the discussion, the authors provided a correct derivation of their optimization objective. Crucially, the discussion **revealed that the authors sample Gaussian noise via a reparameterization trick**. This was entirely omitted from the original submission, and thus the optimization objective in the discussion looks a bit different from the one found in the original submission. Importantly, **this is not just a detail in the derivation, but impacts the concrete implementation of the method**.
I am increasing my score to 5 (borderline accept), which comes with some trust that:
- The authors are able to use the derivations in the discussion as a basis to write up a coherent derivation in the paper;
- It is actually true that the authors have used the reparameterization trick in their implementation as revealed in this discussion.
In my opinion, it depends on the area chair's trust in these two points (together with the other reviewer's discussions) whether the paper can be accepted.
---
Rebuttal 6:
Title: Confirmation of the implementation of the reparameterization trick
Comment: Reviewer YUjH,
We sincerely appreciate your recognition of our work and your timely responses during the rebuttal process. Additionally, **the reparameterization trick has indeed been used in our implementation**. **The related evidence can be found in our submitted code** from Line 331 to Line 338 in `InfoRM_code/utils/model/reward_model.py` for your kind reference.
Best regards. | Summary: This paper presents a novel way to train reward models from human preferences using an information bottleneck architecture and training method. They provide a derivation of how to train a reward model with an information bottleneck, and then produce empirical evidence of improved performance when using the InfoRM vs several baselines in the AlpacaFarm instruction-following setting. Compared with a single reward model and mean-ensemble, their results show their method has better performance and a better reward-KL frontier. The authors also introduce a method, Cluster Separation Index (CSI), which utilises the latent space of the IB to detect when overoptimisation of the reward model may be happening. They provide qualitative evidence that this CSI does somewhat correspond to overoptimisation occurring.
Strengths: The issue of overoptimisation, and improving RLHF in general, is one that is very important to the community, and so this paper's focus on that problem is beneficial.
The paper's utilisation of the IB approach to reward modelling is novel (to my knowledge), and demonstrates encouraging performance improvements.
The quality of the work is reasonable. The method is demonstrated to work well vs various baselines empirically, and several evaluations are used to back up this conclusion.
The dual use of the method as both increasing RLHF performance and producing an interpretable latent space for detecting overoptimisation is an added benefit of the methodology.
Weaknesses: ## Bigger Points
### Insufficient and imperfect comparison to baselines
In general, the comparison to baselines in all the settings you consider isn't sufficient to justify the method is outperforming the existing SOTA. You don't compare against WCO or UWO from https://arxiv.org/abs/2310.02743, or WARM from https://arxiv.org/abs/2401.12187 (or a variant of it that just uses the ensemble methods), or https://arxiv.org/abs/2403.05171. Some of these are contemporary, but WCO, UWO, and WARM aren't, given they're all at least 4 months old as of the time of submission, so comparing against them is important.
Further, there is a lack of clarity about how hyperparameters were chosen for each method (e.g. KL, learning rate), including baselines, which makes it unclear how much the hyperparameters were optimised for your method vs the baselines. Picking 1 learning rate for all methods means that if that learning rate is optimal for your method, it's not necessarily optimal for other methods, which is an unfair advantage to your method. Having a systematic way of choosing hyperparameters for all methods would make the comparison to the baselines much more compelling. This is particularly important as your method introduces two new hyperparameters on top of some of the baselines.
One of the PPO training choices is suboptimal - "The critic model was initialized with the weight of the SFT model" - it should be set to the reward model, as that's been shown in previous work to improve performance (https://arxiv.org/abs/2009.01325). This could affect all methods, or it could affect different methods differently, so it would be useful to correct this choice in the evaluations of your method.
Finally, while you do compare to some baselines in the simulated setting, you only compare to very weak baselines in the realistic setting, meaning it's impossible to know whether your method is actually outperforming existing work in this setting.
### Limited Evaluation
You only perform experiments on one dataset, with a single policy size and a small range of reward model sizes. It would be beneficial to perform experiments on another dataset (for example TL;DR summarisation or anthropic-HH). Additional sizes would also be beneficial but are less important than another dataset to demonstrate the generality of your method.
Further, it seems that all results are only for a single seed. Running multiple seeds is important in this setting, as performance can vary substantially between seeds.
### Non-neutral language describing their own method and contributions.
Throughout the paper, the language describing your method and contributions is not sufficiently neutral, which makes the paper harder to read and less clear. It would be better if this over-enthusiastic language could be toned down through the paper. Some examples:
* 52-53 "Which cleverly addresses" - you don't need "cleverly" here
* 66 "We surprisingly discover" - no need for "surprisingly"
* 68 "meticulously" - as above
* 73 "significantly *mitigates* the risk" - this is specifically egregious as "significantly" is often taken to mean statistically significantly, but you perform no statistical tests throughout the paper.
* 74 "*marked* improvement in RLHF performance"
## Smaller Points
* There's a typo in the legend of figure 5: sandard -> standard.
### Unclear motivation for your method
Throughout the paper you argue that misgeneralisation is an important cause of overoptimisation, that existing methods don't tackle this but that your method does. Due to the reliance on this argument to motivate your method, it would be useful to see a clearer definition of what you mean by misgeneralisation, and how existing methods don't tackle it. In my mind, increasing model size or training an ensemble are both ways of producing models that generalise better and are more robust (not just when training RMs), so claiming that these methods don't tackle misgeneralisation needs further clarity and justification.
### Lack of quantitative evaluation of CSI
The CSI method and interpretability of the IB space are promising additional benefits of your method. However, it would be useful to see quantitative results of how well using CSI as an early stopping method actually works vs other baselines (e.g. no early stopping, or early stopping on other candidate metrics), to get a sense of whether this method would actually be useful or not in practice.
## Summary
Overall, I think the proposed method has some promise, but the existing empirical results aren't sufficient to demonstrate its effectiveness vs the current literature, and the clarity and motivation of the paper are unclear and limited by overly enthusiastic writing. I'm currently recommending a reject. If the issues with comparisons to baselines were fully addressed and the method still performed better than existing approaches I would raise my score to a weak accept, and additionally if the language of the paper was adjusted, better evaluation of CSI was provided and evaluation with multiple seeds and over additional datasets was provided then I would raise my score to an accept. However, as it stands I don't believe the paper is worthy of acceptance.
[EDIT]: I have raised my score to a 6. The new results show much more convincingly that this method is better than the baselines compared against.
Technical Quality: 3
Clarity: 2
Questions for Authors: (Some of these are covered in the weaknesses section, but I will repeat them here).
* How did you select the hyperparameters for all your methods?
* Could you report reward model accuracy scores under standard training and your method, both In-distribution and out-of-distribution (e.g. to human preferences on the alpaca_eval test set). That would help with seeing whether your method is producing better RLHF performance because of better RM accuracy/generalisation, or due to another reason.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discuss some of the limitations of their work, but they don't discuss any of those I brought up in the Weaknesses section, which I think are all major limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the novelty, reasonability, and performance improvements of our method. We will address each of your comments below and also in our revised manuscript.
---
> **W1:** Comparison with UWO or WARM.
**RW1**: Thanks for your valuable comment. We would like to clarify that we have compared our method with Ensemble RM, i.e., UWO proposed in [1], in our simulated experiments. However, we did not include this comparison in our real-world experiments focusing on larger RMs due to computational constraints, as UWO requires loading multiple RMs during the RL process.
Following your suggestion, **we further include WARM [2] in our real-world experiments.** The results, reported in Table 1 of the submitted PDF, demonstrate that **our InfoRM achieves better RLHF performance as compared with WARM.**
Additionally, we would like to highlight that **our InfoRM is a fundamental and flexible framework that can easily integrate with other techniques to complement each other.** The results in Table 1 of the submitted PDF indicate that integrating InfoRM with WARM further enhances RLHF performance.
> **W2**: Systematic hyperparameters selection for compared methods.
**RW2:** Thanks for your suggestion. **For all methods in our real-world experiments, we directly use the recommended hyperparameters from the widely recognized technical reports** [3,4], i.e., a learning rate of 5e-7 and a KL penalty of 0.001 where applicable.
To further address your concern, we report the performance of each compared method with different hyperparameter settings in Table 2 of the submitted PDF. The results show that **a learning rate of 5e-7 and a KL penalty of 0.001 are indeed the optimal settings for all methods**, validating the fairness and reliability of our experiments.
> **W3:** The choice of initializing critic model from SFT model.
**RW3:** Thanks for your careful feedback. We would like to clarify that while we modify the network structure of the reward model, we do not alter the structure of the critic model to control the variables. **Due to these structural differences, we cannot initialize the critic model from our modified reward model**. To ensure fair comparisons, all critic models in our experiments are initialized from the SFT model.
Additionally, we would like to kindly argue that **initializing the critic model using the SFT model is also a viable option.** As stated in [4]: "Initializing the critic model with a reward or SFT model will converge to similar results."
> **W4:** More datasets and reward/policy model sizes in our experiment.
**RW4:** We would like to kindly argue that **our experiments are not limited to one dataset, one policy model, and a small range of reward model sizes.** We clarify our experimental settings in the table below:
||Datasets|Reward Model Size|Policy Model Size|
|---|---|---|---|
|Simulated Experiments|AlpacaFarm| 70M, 410M, and 1.4B|1.4B|
|Real-world Experiments|Anthropic-HH|7B|7B|
Following your suggestion, **we also add the TL;DR summarization dataset to our real experiments**. The results, reported in Table 1 of the submitted PDF, **show that our InfoRM outperforms the compared methods on this task as well.**
> **W5:** Running multiple seeds is important.
**RW5:** Following your suggestion, we conduct our real experiments with a new seed (100) and report the results on the AlpacaFarm dataset in the table below. Due to time and resource constraints, we will include the results from more seeds in the revised version to ensure a robust evaluation.
|Model|Opponent|Win|Tie|Lose|
|---|---|---|---|---|
||Standard RM|53.1|21.3|25.6|
|InfoRM|Standard RM w/KL|42.3|28.6|29.1|
||WARM|38.3|31.5|30.2|
> **W6:** More neutral descriptions.
**RW6:** Thank you for your suggestion. We will carefully review the entire paper and revise any descriptions that lack neutrality to enhance the readability of our paper.
> **W7:** Why existing methods of increasing model size or using ensemble models cannot effectively solve misgeneralization?
**RW7:** The underlying principle behind these existing methods is to ***implicitly* remove spurious features by increasing the reward model's capability**, which fails to directly address this issue and results in an inefficient solution.
In contrast, **our method *explicitly* identifies and eliminates spurious features more efficiently**. Specifically, our InfoRM achieves this by maximizing the utility of the latent representation for reward prediction while minimizing the information irrelevant to human preferences within it. Our experimental results show InfoRM's superiority over existing methods with the same model size.
> **W8:** Effectiveness of our CSI as an early stopping method.
**RW8:** Based on your suggestion, we report the comparison results of our InfoRM with and without early stopping in Table 3 of the submitted PDF. As shown, **the early stopping strategy according to our CSI metric indeed enhances RLHF performance**, particularly on the Anthropic-Harmless dataset.
> **Q1:** The accuracy comparison between our InfoRM and Standard RM.
**RQ1:** Thanks for your valuable feedback. To address your concern, we report the accuracy of InfoRM and Standard RM on in-distribution reward model benchmarks (Anthropic-Helpful and Anthropic-Harmless) and out-of-distribution reward model benchmarks (AlpacaEval and Truthful QA) in Table 4 of the submitted PDF. The results demonstrate that **our InfoRM achieves better RM accuracy and generalization, leading to improved RLHF performance**.
[1] Coste, Thomas, et al. "Reward Model Ensembles Help Mitigate Overoptimization." ICLR 2024.
[2] Rame, Alexandre, et al. "WARM: On the Benefits of Weight Averaged Reward Models." ICML 2024.
[3] Bai, Yuntao, et al. "Training a helpful and harmless assistant with reinforcement learning from human feedback." *arXiv preprint* (2022).
[4] Zheng, Rui, et al. "Secrets of RLHF in large language models part i: PPO." *arXiv preprint* (2023).
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your detailed response, and all the new results.
> We would like to clarify that we have compared our method with Ensemble RM, i.e., UWO.
To clarify, when you write "Ensemble RM" you mean UWO? It would be better if this was made clearer in the paper, as I would take "Ensemble RM" to just describe using a mean ensemble rather than UWO. Given you are using UWO, how do you select the variance coefficient hyperparameter?
It would also be beneficial to compare to WCO and mean ensemble from Coste et al. for a more thorough set of baselines.
> the early stopping strategy according to our CSI metric indeed enhances RLHF performance
How exactly do you do early stopping with the CSI metric? Could you describe the algorithm here.
Overall, the new results present a much more convincing story of the methods improved performance compared to the baselines. I will raise my score to a 6 (weak accept). I would still like to see comparisons with ensemble methods in the realistic setting, even if they are more computationally expensive. If these were presented and InfoRM outperformed them I would raise my score to a 7.
---
Rebuttal 2:
Title: Response to Reviewer XuFQ [Part I]
Comment: > **Q1:** To clarify, when you write "Ensemble RM" you mean UWO? It would be better if this was made clearer in the paper, as I would take "Ensemble RM" to just describe using a mean ensemble rather than UWO. Given you are using UWO, how do you select the variance coefficient hyperparameter?
>
**RQ1:** Thank you for your valuable suggestion. In the revised version of our paper, we will use "Ensemble RM (UWO)" to refer to the UWO method. Additionally, for the two new baselines that we plan to add, namely Mean and WCO, we will use "Ensemble RM (Mean)" and "Ensemble RM (WCO)" to refer to them, respectively. The experimental results for the new baselines are reported in the response to Q2.
Regarding **the selection of the variance coefficient hyperparameter for Ensemble RM (UWO)** in our simulated experiments, we directly use the recommended value (i.e.,$\lambda$ = 0.1) for PPO from [1], as our simulated experimental setup aligns closely with that in [1]. To ensure fairness and reliability, we report the simulated RLHF performance of Ensemble RM (UWO) with different variance coefficients in the table below. The results demonstrate that $\lambda$ = 0.1 is indeed the optimal choice in our simulated experiments.
| Variance coefficient $\lambda$ | 0.05 | 0.1 | 0.5 | 1.0 |
| --- | --- | --- | --- | --- |
| Final gold score | 5.71 | 5.96 | 5.68 | 5.43 |
> **Q2:** It would also be beneficial to compare to WCO and mean ensemble from Coste et al. for a more thorough set of baselines. & I would still like to see comparisons with ensemble methods in the realistic setting
>
**RQ2:** Following your suggestion, we further include the Ensemble RM (UWO), Ensemble RM (WCO), and Ensemble RM (Mean) [1] in our real-world experiments. The comparison results are presented in the following table. To ensure fairness and reliability, we also report the performance of Ensemble RM (UWO) with different variance coefficients. The optimal settings, selected based on the highest win ratio, are highlighted in bold. The results show that **our InfoRM using a single RM achieves better RLHF performance compared to the ensemble methods**, and **integrating InfoRM with the ensemble methods further enhances RLHF performance**. We will also include the Ensemble RM (WCO) and Ensemble RM (Mean) in our simulated experiments in the revised version.
| Model | Opponent | | | Anthropic Helpful | | | Anthropic Harmless | | | AlpacaFarm | | | TL;DR Summary |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Win | Tie | Lose | Win | Tie | Lose | Win | Tie | Lose | Win | Tie | Lose |
| | Ensemble RM (WCO) | 47.2 | 35.5 | 17.3 | 52.1 | 39.9 | 8.0 | 39.8 | 35.1 | 25.1 | 63.9 | 23.4 | 12.7 |
| | Ensemble RM (Mean) | 48.7 | 32.9 | 18.4 | 54.0 | 35.9 | 10.1 | 41.7 | 38.3 | 20.0 | 65.3 | 24.7 | 10.0 |
| InfoRM | Ensemble RM (UWO) ($\lambda$=0.5) | 46.9 | 31.8 | 21.3 | 50.5 | 33.5 | 16.0 | 38.2 | 35.9 | 25.9 | 62.9 | 27.5 | 9.6 |
| | **Ensemble RM (UWO) ($\lambda$=0.1)** | 43.1 | 33.1 | 23.8 | 49.3 | 34.8 | 15.9 | 37.3 | 37.8 | 24.9 | 61.4 | 28.1 | 10.5 |
| | Ensemble RM (UWO) ($\lambda$=0.05) | 43.5 | 33.6 | 22.9 | 50.1 | 35.1 | 14.8 | 37.8 | 35.3 | 26.9 | 61.6 | 29.3 | 9.1 |
| InfoRM + Ensemble RM (UWO) ($\lambda$=0.1) | Ensemble RM (UWO) ($\lambda$=0.1) | 48.7 | 35.7 | 15.6 | 52.5 | 35.1 | 12.4 | 41.2 | 38.2 | 20.6 | 63.3 | 30.1 | 6.6 |
[1] Coste, Thomas, et al. "Reward Model Ensembles Help Mitigate Overoptimization." ICLR 2024.
---
Rebuttal 3:
Title: Response to Reviewer XuFQ [Part II]
Comment: > **Q3:** How exactly do you do early stopping with the CSI metric? Could you describe the algorithm here.
>
**RQ3:** We will **first provide the details of our previous early stopping validation experiments**, explaining how we use the CSI metric to select the stopping point during model training. **Following this, we will provide an automated early-stopping algorithm** based on our CSI metric.
**Early Stopping Validation Experimental Details**: In this experiment, we implemented early stopping by saving multiple checkpoints, visually inspecting their CSI values, and selecting the one before a significant increase in the CSI metric as the final checkpoint. This process is validated by our observations that overoptimization correlates with a significant increase in the CSI metric, making visual inspection effective, as demonstrated in Section 5 of our paper. However, we acknowledge that automating this process by quantifying CSI metric changes would be more cost-effective. Below, we provide an automated early-stopping algorithm based on the CSI metric.
**Automated Early Stopping Algorithm Based on the CSI Metric**: The CSI-based early stopping algorithm is detailed as follows:
1. Set a maximum tolerable CSI change rate, $\epsilon_{\text{max}}$, which is empirically set to a relatively large value of 10. Let $C_t$ represent the CSI value at the $t$-th evaluation step. The change in CSI at this step is given by $\Delta_t = |C_t - C_{t-1}|$.
2. Calculate the ratio of the CSI change at the $t$-th evaluation step, $\Delta_t$, to the average change across all previous steps, $\frac{1}{t-1} \sum_{i=1}^{t-1} \Delta_i$. This ratio is denoted as $\epsilon_t=\Delta_t / (\frac{1}{t-1} \sum_{i=1}^{t-1} \Delta_i)$.
3. If $\epsilon_t > \epsilon_{\text{max}}$, trigger early stopping and exit the iteration. Otherwise, continue training.
To facilitate understanding, we summarize this algorithm as follows:
---
**Input:** Maximum tolerable CSI change rate $\epsilon_{\text{max}}$, initial CSI value $C_0$, maximum steps $T$
**Initialize:** $C_{\text{prev}} \gets C_0$
1. **For** $t \gets 1$ to $T$ **do**:
1. Update model parameters.
2. $C_t \gets$ `evaluate_CSI(model)`
3. $\Delta_t \gets |C_t - C_{\text{prev}}|$
4. $\epsilon_t=\Delta_t / (\frac{1}{t-1} \sum_{i=1}^{t-1} \Delta_i)$
  5. **If** $\epsilon_t > \epsilon_{\text{max}}$ **then**: trigger early stopping and **break**
6. $C_{\text{prev}} \gets C_t$
**Output:** Final model before early stopping.
---
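The stopping rule above can be sketched in plain Python. This is a minimal, illustrative implementation, assuming the CSI values are supplied as a list (the `evaluate_CSI` callback from the pseudocode is external); the guard against a zero average change is an added assumption, since the original steps leave the $t=1$ and flat-trace cases undefined:

```python
def should_stop(csi_history, eps_max=10.0):
    """Return True if the latest CSI change exceeds eps_max times the
    average change over all previous evaluation steps (steps 1-3 above)."""
    if len(csi_history) < 3:
        return False  # need at least one previous delta to form an average
    # Delta_t = |C_t - C_{t-1}| for each consecutive pair
    deltas = [abs(b - a) for a, b in zip(csi_history, csi_history[1:])]
    latest, previous = deltas[-1], deltas[:-1]
    avg_prev = sum(previous) / len(previous)
    if avg_prev == 0:
        return False  # guard (assumption): flat early trace, keep training
    return latest / avg_prev > eps_max

# Steady CSI drift -> keep training; a sudden spike -> trigger early stop.
print(should_stop([1.0, 1.1, 1.2, 1.3]))       # small, steady changes
print(should_stop([1.0, 1.1, 1.2, 1.3, 9.0]))  # abrupt jump in CSI
```

In practice this check would run once per evaluation step inside the RLHF training loop, with the checkpoint preceding the trigger kept as the final model.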
---
Rebuttal 4:
Title: Thanks for your response!
Comment: Dear Reviewer XuFQ,
Thank you for your positive feedback and for raising your score. We appreciate your detailed review and valuable comments. We hope our latest responses further address your concerns.
Best regards. | Summary: This paper proposes a variational information bottleneck (IB) objective for rewarding modeling in RLHF to mitigate the reward misgeneralization issue, which can cause overoptimization. The authors propose a variational information bottleneck objective to filter out irrelevant information and identify a correlation between overoptimization and outliers in the IB latent space. They also introduce the Cluster Separation Index (CSI) as an indicator for detecting reward overoptimization. Experimental results reveal the representation of IB latent space can be used to form an indicator of reward overoptimization.
Strengths: The paper is well-motivated, addressing a critical issue in RLHF, and is well-written with clear explanations of the introduced concepts.
Mitigating the reward misgeneralization issue using IB seems to be a good idea. The introduction of CSI as an overoptimization detection tool is a valuable contribution, though it is only constrained to InfoRM.
Weaknesses: Lack of reward model evaluations. Since the motivation of the paper is to mitigate the reward misgeneralization issue, there is no results to directly support such the claim as far as I am concerned.
It is not clear whether IB loss would affect the reward modeling abilities, such as accuracies and OOD generalization abilities.
Technical Quality: 3
Clarity: 4
Questions for Authors: Does IB loss fully address the misgeneralization issue? If not, when is IB inefficient? Could you provide the upper bound of the generalization error?
Using GPT-4 to identify overoptimization is promising. Nevertheless, it’d be nice to study how much it align with human identifications.
What is the architecture of RM encoder?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The additional computational overhead of training InfoRM is not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the clarity of our paper and recognizing the potential of using IB to address reward misgeneralization in RLHF. We appreciate your positive feedback on the introduction of CSI as a valuable contribution. We will address each of your comments and concerns below and also in our revised manuscript.
---
> **W1:** Direct results to support reward misgeneralization/ hacking mitigation.
**RW1:** Thanks for your feedback. Notably, there is no direct metric to assess reward misgeneralization/hacking. Currently, the only way to demonstrate reward misgeneralization/hacking is by observing the different trends of gold and proxy scores during the RLHF process in the simulated experiments, **which we have conducted in our paper and the results demonstrate the effectiveness of our method in reward misgeneralization/hacking mitigation**.
In addition, we also propose CSI as an auxiliary metric to measure reward hacking from the perspective of latent embeddings. Measuring hacking more effectively remains an open research question.
> **W2:** Whether IB loss would affect the reward modeling abilities.
**RW2:** Thanks for your feedback. To address your concern, we report the accuracy of InfoRM and Standard RM on both in-distribution reward model benchmarks (Anthropic-Helpful and Anthropic-Harmless) and out-of-distribution reward model benchmarks (AlpacaEval and Truthful QA) in Table 4 of the submitted PDF. Our results show that **IB loss can also enhance reward modeling abilities**. Furthermore, the notable improvement in RLHF performance achieved by our InfoRM, as demonstrated in our paper, further substantiates this claim.
> **Q1:** Does IB loss fully address the misgeneralization issue? If not, when is IB inefficient?
**RQ1:** Our InfoRM specifically addresses the issue of reward misgeneralization from an optimization perspective. While our method significantly reduces misgeneralization, we acknowledge that **completely addressing the issue is challenging due to the uncontrollable quality of preference datasets in practical applications**. Specifically, IB loss may be less effective in scenarios where the dataset quality is poor or highly variable. Under such conditions, the balance parameters of our method must be carefully adjusted to optimize the trade-off between accurate reward modeling and effective mitigation of reward misgeneralization. We will add these analysis in the revised version.
> **Q2:** Could you provide the Upper bound of the generalization error?
**RQ2:** The upper bound of the generalization error for our method is provided in Theorem 1 below, with the proof available in [1]. **Theorem 1 demonstrates that the mutual information between the latent representation and observations, as well as the latent space dimensionality, upper bound the expected generalization error of our InfoRM method.**
***Theorem 1:*** *Let $|S|$ be the cardinality of the latent representation space of InfoRM, $l(\cdot)$ be the loss function following sub-$\sigma$-Gaussian distribution, $X$ be the reward model input, $S$ be the latent representation of InfoRM, and $\Theta$ be the network parameters, we have the following upper bound for the expected generalization error of our InfoRM:*
$$E[R(\Theta) - R_T(\Theta)] \leq \exp \left( -\frac{L}{2} \log \frac{1}{\eta} \right) \sqrt{\frac{2\sigma^2}{n} \log I(X,S)}\leq \exp \left( -\frac{L}{2} \log \frac{1}{\eta} \right) \sqrt{\frac{2\sigma^2}{n} \log |S|},$$
*where $L$, $\eta$, and $n$ are the effective number of layers causing information loss, a constant smaller than 1, and the sample size, respectively. $R(\Theta) = \mathbb{E}_{X \sim D}[l(X, \Theta)]$ is the expected loss value given $\Theta$ and $R_T(\Theta) = \frac{1}{n} \sum _{i=1}^{n} l(X_i, \Theta)$ is a sample estimate of $R(\Theta)$ from the training data.*
> **Q3:** How much GPT-4 align with human in reward hacking identifications?
**RQ3:** We follow your valuable suggestion and **conduct a human evaluation to validate GPT-4 as the hacking annotator**. Specifically, we randomly sample 100 cases each from Anthropic Helpful, Anthropic Harmless, and AlpacaFarm. We then engage two expert annotators proficient in alignment studies of LLMs and fluent in English. We ask them to evaluate the hacking phenomenon in these cases based on our pre-given descriptions of the hacking phenomena, and the inter-annotator agreement rate is 96%. For cases where the annotators disagreed, we requested that both annotators reassess their evaluations to reach a consensus. The annotation serves as the reference to calculate the accuracy of the GPT-4-based evaluator in reward hacking identification. **We find that the human-GPT agreement rate averages a remarkable 96.7%, indicating the enhanced reliability of GPT-4 annotations in hacking detection.** We will include these results in the revised version.
||Anthropic Harmless|Anthropic Helpful|AlpacaFarm|
|---|---|---|---|
|human-GPT agreement|95%|98%|97%|
> **Q4**: What is the architecture of the RM encoder?
**RQ4:** In our experiments, the RM encoder is derived from the standard RM, with modification to the final layer.
> **L1**: The additional computational overhead of training InfoRM.
**RL1**: Thanks for your feedback. In fact, the primary consumption of time in RLHF occurs during the RL process. **Although the introduction of IB loss does indeed introduce some additional computational overhead for InfoRM training, its impact on the overall RLHF process is minimal.** To demonstrate this, we report the empirical estimates of the time costs for reward model training and RL process using Standard RM and our InfoRM in the table below.
||RM Training|RL Process|Overall|
|---|---|---|---|
|Standard RM| 0.35h | 9.00 h | 9.33 h |
|InfoRM|0.55h| 9.00 h | 9.55 h |
[1] Zhang, Sen, Jing Zhang, and Dacheng Tao. "Information-Theoretic Odometry Learning." IJCV 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' responses and additional experiments. My concerns are clarified. I lean to hold my previous decision.
---
Reply to Comment 1.1.1:
Title: Thanks for your response!
Comment: Dear Reviewer roDN,
We sincerely appreciate your feedback regarding our efforts to address your concerns, and we would like to express our gratitude for your positive support.
Best regards. | Summary: The paper proposes a regularization method to mitigate reward hacking using a variational information bottleneck objective. Their experiments show the potential that their method might be an alternative to KL divergence for preventing reward hacking.
Strengths: Reward hacking is an important problem in the field that is worth investigating.
Using an information bottleneck objective is an interesting idea. The derivation of the computable objective is insightful.
Weaknesses: Although the method sounds interesting and neat, the experimental evaluation seems to have several issues that make it difficult to evaluate the practical benefit of the proposed method.
- The study is motivated by a phenomenon called reward overgeneralization. It does not provide any evidence of solving reward overgeneralization. Length bias is mentioned as an example, but it is not evaluated in the paper.
- Why the variational information bottleneck reduces the spurious correlation is not discussed in the paper. It’s not clear to me why it would be better than a normal LLM-based RM.
- The experimental results show that it outperforms RM without KL penalty in Figures 7-13. We already know that RM without KL penalty is prone to reward hacking and KL penalty is one of the solutions. Still, the evaluations in the Appendices compare InfoRM against this weakened baseline.
Technical Quality: 3
Clarity: 2
Questions for Authors: - I would like to see the performance of KL regularized PPO with KL penalty larger than 0.01, say 0.1. Given that RM+KL is only marginally better than RM in Figure 4, I wonder if a larger KL penalty would improve the performance of RM+KL.
- It would be a great addition to the paper if RM+KL are compared in Figures 7-13.
- Does the proposed method solve the length bias problem? It was implied in the Introduction that it is one of the overgeneralization phenomena that the proposed method may solve.
- How is ensemble RM implemented?
- line 716: IB dimensionality is set to 3. Should we interpret it that ultimately the reward of the outputs can be explained by just three real numbers?
- Appendix D.2. Figure 15: Wouldn't removing ties make the estimation of the win rate worse than including ties as 0.5 wins? At least it would be informative to report the tie rate if it is removed from Figure 15.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I don't see any problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive feedback on the use of an information bottleneck objective and your recognition of the insightful derivation of the computable objective. We will address each of your comments and concerns below and also in our revised manuscript.
---
> **W1**: Evidence of solving reward overgeneralization, such as length bias.
**RW1:** Thanks for your comment. In fact, **we have discussed the role of our method in solving reward overgeneralization in Appendix C and cited it in Line 64, Introduction of our paper**. In this section, we demonstrate the significant impact of our method in mitigating length bias across twelve datasets. Furthermore, we also discuss other reward overgeneralization phenomena that our method can effectively mitigate, such as excessive caution.
> **W2:** Why does our method reduce the spurious correlation?
**RW2:** As stated in the Methodology section of our paper, by using the variational information bottleneck, **our InfoRM is trained to maximize the utility of the latent representation for reward prediction while minimizing the preference-irrelevant information within it**. This process eliminates irrelevant information (i.e., spurious correlations), resulting in superior reward modeling compared with the standard LLM-based RM, especially in alleviating reward overgeneralization, as verified in Appendix C of our paper. We will clarify this point further in the revised version.
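As a rough illustration of this trade-off (a generic variational-IB sketch, not the authors' exact loss), the "compression" side is commonly realized as the closed-form KL divergence between a diagonal-Gaussian latent and a standard-normal prior, weighted by a balance coefficient `beta`; all names and shapes here are assumptions for demonstration:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ): the rate
    term that penalizes information kept in the latent representation."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - log_var - 1.0, axis=-1)

def vib_loss(pred_loss, mu, log_var, beta=0.01):
    """Generic VIB objective: reward-prediction utility (pred_loss)
    traded off against compression of the latent via beta."""
    return pred_loss + beta * float(np.mean(kl_to_standard_normal(mu, log_var)))

# At the prior (mu = 0, log_var = 0) the KL term vanishes, so only the
# prediction loss remains; larger beta compresses the latent harder.
mu = np.zeros((4, 8)); log_var = np.zeros((4, 8))
print(vib_loss(0.5, mu, log_var))
```

In a real reward model the encoder would output `mu` and `log_var` per input and `pred_loss` would be the preference (e.g. Bradley-Terry) loss on rewards decoded from sampled latents.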
> **W3:** Comparison with a weakened baseline in Figures 7-13.
**RW3:** Thanks for your feedback. Due to space constraints, we illustrate the **CSI values across various RMs (including RM+KL with differing KL penalties) during the RLHF processes using the AlpacaFarm, Anthropic-Harmless, and Anthropic-Helpful datasets in Figure 2 of the submitted PDF**. Additionally, the corresponding RLHF performance, as evaluated by GPT-4, is listed in Table 2 of the submitted PDF. Our results reveal the following:
1. As the KL penalty increases, the growth trend of CSI values is gradually suppressed, indicating a less pronounced reward over-optimization phenomenon. **This observation aligns with our intuition and further demonstrates the effectiveness of our CSI metric in detecting reward over-optimization**.
2. Although RM+KL achieves comparable performance to InfoRM in mitigating over-optimization, InfoRM consistently outperforms RM+KL in final RLHF performance. **This demonstrates that our method can significantly suppress the reward over-optimization phenomenon without largely compromising the final RLHF performance.**
We will include relevant results on all testing datasets in the revised version.
> **Q1**: Whether a KL penalty larger than 0.01 improve the performance of RM+KL in the simulated experiments?
**RQ1**: To address your concern, we provide the simulated RLHF results for RM+KL with different KL penalty values, which can be found in Figure 1(a) of the submitted PDF. **Our findings indicate that a KL penalty value of 0.001 yields the best performance. When the KL penalty exceeds 0.01, the RLHF performance significantly degrades.** We will include relevant discussion in the revised version.
> **Q2:** It would be a great addition to the paper if RM+KL are compared in Figures 7-13.
**RQ2:** Please see the response to W3.
> **Q3:** Does the proposed method solve the length bias problem?
**RQ3:** Please see the response to W1.
> **Q4:** How is ensemble RM implemented?
**RQ4:** Ensemble RM in our experiments is implemented by combining the average reward across all models in the ensemble with the intra-ensemble variance, strictly following the UWO implementation in [1]. We will include this detail in the revised version.
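For concreteness, a minimal sketch of this combination, assuming the UWO form from Coste et al. (mean ensemble reward penalized by $\lambda$ times the intra-ensemble variance); the function name and array shapes are illustrative, not taken from the paper's code:

```python
import numpy as np

def uwo_reward(member_rewards, lam=0.1):
    """UWO-style ensemble reward: mean across ensemble members minus
    lam * intra-ensemble variance. `member_rewards` has shape
    (n_models, n_samples); returns one reward per sample."""
    member_rewards = np.asarray(member_rewards, dtype=float)
    return member_rewards.mean(axis=0) - lam * member_rewards.var(axis=0)

# The ensemble agrees on sample 0 (no penalty) but disagrees on
# sample 1, so its reward is discounted by the variance term.
r = [[1.0, 2.0],
     [1.0, 0.0],
     [1.0, 1.0]]
print(uwo_reward(r, lam=0.1))
```

Raising `lam` makes the policy more conservative on samples where the ensemble members disagree, which is the mechanism discussed in the variance-coefficient sweep above.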
> **Q5:** Question about “IB dimensionality is set to 3." in Line 716.
**RQ5:** We apologize for this typo. As analyzed in Appendix D of our paper, **the IB dimensionality in our experiments is set to 128, indicating that the final reward can be represented by a vector of this length**. We will correct this typo and double-check our paper.
> **Q6:** Report the win rate considering ties in Figure 15.
**RQ6:** Thanks for your feedback. In Figure 15 of our paper, our calculation of the win rate closely follows [2]. Following your suggestion, **we also report the win rate considering ties in Figures 1(b) and 1(c) in the submitted PDF**. We will include these results in the revised version.
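The two conventions under discussion (dropping ties, as in Figure 15 following [2], versus counting each tie as half a win) can be made explicit in a small helper; the counts below are illustrative, not results from the paper:

```python
def win_rate(win, tie, lose, count_ties=False):
    """Win rate either excluding ties from the denominator, or
    counting each tie as half a win over all comparisons."""
    if count_ties:
        return (win + 0.5 * tie) / (win + tie + lose)
    return win / (win + lose)

# Illustrative counts: 50 wins, 30 ties, 20 losses out of 100 comparisons.
print(win_rate(50, 30, 20))                   # ties dropped
print(win_rate(50, 30, 20, count_ties=True))  # ties counted as 0.5 wins
```

With many ties the two conventions can diverge noticeably, which is why reporting the tie rate alongside either number (as the reviewer suggests) is informative.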
[1] Coste, Thomas, et al. "Reward Model Ensembles Help Mitigate Overoptimization." ICLR 2024.
[2] Li, Yuhui, et al. "RAIN: Your Language Models Can Align Themselves without Finetuning." ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the clarification.
Now I think the contribution of the paper is clear and the empirical results it brings are interesting for a wide range of audiences.
---
Rebuttal 2:
Title: Thanks for your response!
Comment: Dear Reviewer rGMr,
Thank you very much for your positive feedback. We appreciate your recognition of the paper's contributions and the significance of our empirical results to a broad audience.
Best regards. | Rebuttal 1:
Rebuttal: Dear all Reviewers,
Thank you for your effort in reviewing our paper. The submitted PDF file includes the tables and figures referenced in our responses to your comments. The main contents of this PDF are listed as follows:
- Table 1 presents **the comparison results of RLHF models using different RMs, including a recently proposed method WARM and an extra dataset TL;DR Summary**, demonstrating the superiority of our InfoRM.
- Table 2 presents **the results of different hyperparameter settings for all compared methods**, ensuring the fairness and reliability of our experiments.
- Table 3 presents **the improvement in RLHF performance brought by using the proposed CSI metric as an early stopping method**, demonstrating the effectiveness of our CSI metric.
- Table 4 presents the accuracy of Standard RM and our InfoRM on in-distribution and out-of-distribution testing datasets, demonstrating that our InfoRM achieves better accuracy and generalization.
- Figure 1(a) presents **the simulated RLHF results for Standard RM with varying KL penalty values**, as well as for our proposed InfoRM, further demonstrating the superiority of our method.
- Figures 1(b) and 1(c) present **the parameter sensitivity analysis of our InfoRM, where the win rate is calculated considering ties**.
- Figure 2 presents **the proposed CSI metric values during the RLHF processes of Standard RM with varying KL penalty values**, as well as our InfoRM, validating the effectiveness of our CSI metric for detecting reward overoptimization.
Thanks for your time!
Sincerely, Paper 9339 Authors.
Pdf: /pdf/4439b3328b0983745ce316019a5370bfdce790ed.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GS-Hider: Hiding Messages into 3D Gaussian Splatting | Accept (poster) | Summary: This paper presents GS-Hider, a novel framework for steganography in 3D Gaussian Splatting (3DGS) models. The key innovation is a coupled secured feature attribute that replaces the original spherical harmonics coefficients, allowing the embedding of hidden 3D scenes or images into the original scene without compromising rendering quality. The framework uses a scene decoder and a message decoder to disentangle the original and hidden information. The authors demonstrate the effectiveness of GS-Hider across various experiments, showing its ability to hide multiple 3D scenes or single images while maintaining high fidelity, security, and robustness.
Strengths: Novelty: GS-Hider presents the attempt at steganography for 3D Gaussian Splatting, addressing an important challenge in protecting 3D assets.
Technical innovation: The coupled secured feature attribute and parallel decoder architecture effectively balance security, fidelity, and computational efficiency.
Versatility: The method can hide both 3D scenes and 2D images, demonstrating flexibility for various applications.
Comprehensive evaluation: The authors provide extensive experiments on fidelity, security, robustness, and capacity, comparing against baselines and ablation studies.
Real-time capability: Despite the additional complexity, the method maintains near real-time rendering speeds (45 fps), which is crucial for practical applications.
Weaknesses: Main weakness:
* The main weakness is the additional computational overhead introduced by using the scene decoder. Gaussian splatting significantly simplifies the neural rendering paradigm of NeRF into a point vector + rasterization approach, achieving very fast rendering - this is one of the most valued aspects of GS. However, the introduction of the scene decoder greatly undermines this advantage. This makes such an improvement meaningless. I am confident in this inference because the authors do not seem to report any FPS-related metrics, nor do they compare the FPS increase with the original 3DGS.
* The decoder is trained per scene, rather than being a generalizable decoder, which suggests it may be "memorizing" scene watermarks. This can be inferred from the fact that a separate decoder is trained for each scene and only a single scene is hidden per cover scene. At the same time, the geometrically inconsistent signals present in the cover scene and hidden scene are accommodated together well, further indicating that the signals are to some extent being "memorized" in the decoder. Therefore, I speculate that the capacity of the decoder is not small. This further corroborates the aforementioned weakness.
Some other weakness:
* While the empirical results are strong, there's a lack of theoretical justification for why the coupled feature representation works so well.
* Comparison to recent work: The paper could benefit from comparing against more recent steganography methods, particularly those designed for other 3D representations that might be adaptable to 3DGS.
* Generalization: The experiments are limited to a single dataset (Mip-NeRF360). It would be valuable to see how the method performs on a wider range of 3D scenes and different types of hidden information.
Technical Quality: 3
Clarity: 3
Questions for Authors: Main issues that could be considered in rebuttal:
* What is the added inference time overhead due to using the scene decoder?
* The FPS metrics for all experiments, so that we can see the impact on practical usage. If the rendering speed is affected, the improvement in privacy is not useful.
* Try to prove that the hidden scene is not simply memorized by the decoder. Hiding more than one scene, or curating a general decoder that can be applied to more than one scene, could be convincing.
Other issues:
please see the weakness
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provide a brief discussion of limitations in Section B of the appendix, acknowledging issues such as the lack of view-dependency and slightly reduced rendering quality. They also mention future work directions, including enhancing model expressiveness and extending to tasks like tampering detection. However, this discussion could be expanded to more thoroughly address potential failure modes or edge cases of the proposed method.
The broader impacts section (A.2) touches on both positive applications (e.g., copyright protection) and potential risks (sensitive data concerns). However, a more in-depth exploration of potential misuse scenarios and mitigation strategies would strengthen the paper's ethical considerations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! If there are any additional comments to be added, please continue the discussion with us.
$\textcolor{red}{\textbf{The supplementary rebuttal PDF file can be found at the bottom of the overall response}}$.
> **Weakness #1, Question #1 and Question #2: Additional computational overhead.**
- We kindly remind the reviewer that we have already provided the rendering time of each scene in $\textcolor{red}{\textbf{Table 2}}$ of the main paper and presented the FPS of our GS-Hider in $\textcolor{red}{\textbf{Line 236}}$. It achieves a 45 FPS rendering speed, which surpasses the real-time rendering requirement of 30 FPS. Meanwhile, the added inference time of the scene decoder (5 Conv layers) is only $\textcolor{red}{\textbf{0.006s}}$, which accounts for only $\textcolor{red}{\textbf{0.006/0.0222 ≈ 27}}$% of the total rendering time.
- The rendering speed of GS-Hider can be **further improved by adjusting some hyperparameters**, such as reducing the number of convolutions in the scene decoder ($N$) and the dimension of the feature attributes ($M$), or sequentially pruning Gaussian points in ascending order of opacity. We report the FPS of GS-Hider under different settings and compare it with the original 3DGS in $\textcolor{red}{\textbf{Table 1}}$ of the rebuttal PDF. The FPS of our lightest GS-Hider ($M=8, N=5$) reaches $\textcolor{red}{\textbf{71.429}}$ and is comparable to that of the original 3DGS, with acceptable rendering quality.
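As a quick sanity check of the overhead arithmetic above (45 FPS total, 0.006 s decoder time, both figures taken from the rebuttal):

```python
# Sanity check of the overhead figures quoted in the rebuttal:
# 45 FPS total rendering speed, 0.006 s added by the 5-layer scene decoder.
frame_time = 1.0 / 45            # ≈ 0.0222 s per frame at 45 FPS
decoder_time = 0.006             # reported scene-decoder inference time
overhead = decoder_time / frame_time
print(f"{overhead:.0%}")         # ≈ 27% of the per-frame budget
```

This confirms the 27% figure is consistent with the 45 FPS total.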
> **Weakness #2 and Question #3: Prove the hidden scene is not memorized by the decoder.**
- We have conducted experiments of hiding two hidden scenes, as detailed in $\textcolor{red}{\textbf{Section 4.6}}$ of the main paper. This proves that our decoder is not storing or memorizing the secret information.
- Our message decoder is very lightweight with only $\textcolor{red}{\textbf{5}}$ convolution layers. It only contains $\textcolor{red}{\textbf{0.465 M}}$ parameters, which is far from enough to memorize complex 3D scenes. In fact, the geometrical and structural information is mainly embedded in the coupled feature, and the role of the decoder is merely to $\textcolor{red}{\textbf{extract and decouple the secret information}}$, not to memorize the scene watermark.
- To show the role of our decoder, we further input the rendered coupled feature from another scene like 'playroom' to the message decoder that is trained to hide the scene 'bicycle'. The results are presented in $\textcolor{red}{\textbf{Figure 4}}$ of the rebuttal PDF. We find that the rendered scene retains most of the geometric structure of the 'playroom' scene, with only some colors resembling those of the 'bicycle' scene. This indicates that our decoder itself does not have the capability to memorize secret information.
> **Weakness #3: A lack of theoretical justification for why the coupled feature works.**
- The reason the coupled feature representation works so well is that the feature attribute $\boldsymbol{f}_i$ has sufficient capacity and high flexibility. Combined with our decoder and the designed loss function, it can effectively fuse two scenes and hide secret information in visually insensitive areas and redundant feature channels. More analysis can be found in our response to **Question #1** of reviewer X8jQ.
- Considering the black-box nature of deep learning, thoroughly analyzing the process of embedding information into high-dimensional features, and providing theoretical proof is an open, challenging, and interesting issue. We appreciate your valuable suggestions and will leave this for future work.
> **Weakness #4: Comparison to recent work.**
According to your valuable suggestion, we have tried our best to migrate the pipeline and decoding network of StegaNeRF [1] to the 3DGS steganography task. Specifically, we feed the output of 3DGS to the decoding network of StegaNeRF and let it approximate the hidden 3D scene. The results are reported in $\textcolor{red}{\textbf{Table 2}}$ of the rebuttal PDF. We find that GS-Hider is much better than 3DGS+StegaNeRF in the reconstruction quality of hidden scenes. If you find other comparison methods suitable for 3DGS steganography, please let us know and we will compare against them accordingly.
> **Weakness #5: Generalization to other datasets and hidden information.**
- **Different datasets:** We have supplemented experiments on two datasets, namely Tanks&Temples and Deep Blending, in $\textcolor{red}{\textbf{Table 3}}$ and $\textcolor{red}{\textbf{Figure 5}}$ of the rebuttal PDF. For Tanks&Temples, we hide the scene 'bicycle' into the two original scenes. For Deep Blending, we insert and hide the 'lego' and 'hotdog' scenes from the NeRF synthetic dataset into the original scenes. It can be seen that our GS-Hider still achieves good results on these two datasets.
- **Different types:** Our GS-Hider supports hiding scenes or a copyright image in our main paper. It can also be applied to hide bits or audio, since these can also be treated as special images. We will explore this in future work.
> **Limitations and broader impacts**
- We have reorganized the limitations of our approach and potential improvements in depth in our response to **Weakness #1** of reviewer koxG.
- Our GS-Hider needs to be used in conjunction with an online copyright protection platform to record the exclusive copyrights of different users and scenarios to prevent copyright conflicts. Meanwhile, GS-Hider can be combined with key technology for further protection to prevent users from abusing our models and sensitive data.
> **Reference**
[1] Steganerf: Embedding invisible information within neural radiance fields, in ICCV 2023.
---
Rebuttal 2:
Comment: Thank you for your response. It addresses my concerns regarding the potential added inference-time overhead, so I raise the rating. I recommend that the revision include the results supplied in your rebuttal.
---
Rebuttal Comment 2.1:
Title: Thank Reviewer crVb for recognizing our work
Comment: Dear Reviewer crVb:
We sincerely appreciate your prompt response, valuable suggestions, and recognition of our work. We will include the additional experiments from the rebuttal and provide detailed explanations in the final version to make our paper more rigorous and complete.
Best Regards,
Authors of #902 | Summary: The paper "GS-Hider: Hiding Messages into 3D Gaussian Splatting" proposes a steganography framework for 3D Gaussian Splatting (3DGS). GS-Hider embeds messages into 3D scenes by replacing spherical harmonics coefficients with a secured feature attribute and uses decoders to extract hidden and original scenes without compromising quality. Unlike traditional NeRF methods, 3DGS offers explicit 3D representation and real-time rendering. Experiments show GS-Hider maintains high fidelity, security, and robustness, making it suitable for copyright protection, encrypted communication, and 3D asset compression.
Strengths: 1. Innovative Steganography Framework for 3DGS:
GS-Hider is the first framework designed specifically for 3D Gaussian Splatting, allowing for the embedding and extraction of hidden messages within 3D scenes without compromising their fidelity and rendering quality.
2. Robust Security and High Fidelity:
The framework introduces a coupled secured feature attribute and parallel decoders, ensuring the secure and accurate extraction of hidden messages while minimally altering the original 3DGS structure, maintaining high fidelity of the rendered scenes.
Weaknesses: The authors discussed the limitations of their approach, which appear to be relatively minor.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am curious about the scenario where an eavesdropper, let's say Eve, downloads Alice's publicly available model, renders it, and then trains her own GS. The purpose of Alice's steganography in Fig. 1 is to verify that the "Table" GS corresponds to her. However, if Eve can train her own "Table" GS, it seems Alice may not be able to protect her "Table" GS effectively. Could you please clarify how this concern is addressed?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments! If there are any additional comments to be added, please continue the discussion with us.
> **Weakness #1: Minor Limitations.**
Due to limited space, we only discussed the limitations of our method in terms of rendering quality and speed in the supplementary material. We will delve deeper into these two aspects, provide additional insights, and introduce our potential improvements. Our limitations are reorganized as follows.
- **Compromised rendering quality:** Since the feature attribute $\boldsymbol{f}_i$ does not consider view-dependency compared to spherical harmonics, and we need to hide the secret scene while representing the original scene, our rendering quality is somewhat inferior to the original 3DGS. In fact, we inevitably need to make a trade-off between rendering quality and steganography capacity. However, **our GS-Hider is a universal framework that can be integrated with the latest 3DGS variants**, such as Mip-splatting [1], to enhance rendering performance. Details can be found in the response to **Weakness #2** of reviewer S9BH. Meanwhile, the current designs of the scene and message decoder are relatively simple. Integrating more efficient neural rendering and decoding designs (such as Scaffold-GS [2]) can also help improve the overall rendering quality of the framework.
- **Decreased rendering speed:** Due to the rasterization of high-dimensional features and the use of network decoding, although we can still achieve real-time rendering, the rendering speed has decreased compared to the original 3DGS. However, we can easily improve rendering speed by **pruning Gaussian points, reducing the dimension of feature attributes $\boldsymbol{f}_i$, and decreasing the number of convolution layers or feature kernels**. As shown in $\textcolor{red}{\textbf{Table 1}}$ of the rebuttal PDF, these approaches do not significantly impact rendering quality.
> **Question #1: How to resist Eve's re-rendering?**
Thank you for presenting such an interesting scenario. If Eve re-renders a GS using Alice's trained model, our copyright protection still holds for the following reasons:
- First, if Eve re-renders a GS, she cannot decode any pre-embedded copyright image or secret scene from it, making Eve's GS unauthorized. Therefore, Eve cannot prove that she owns the copyright of this GS, and ownership of the GS model still belongs to Alice.
- Second, our GS-Hider works in conjunction with an online copyright database. When Alice uploads her model, the copyright image is registered in the database. If a similar GS scene is encountered later, the decoded copyright image must be matched with the one in the database. Otherwise, it will be judged as infringement.
- Third, the cost of re-training a GS is high. Eve would need to spend almost the same computational resources as Alice to steal such a 3D scene, which is difficult for the average thief to achieve.
> **Reference**
[1] Mip-Splatting: Alias-free 3D gaussian splatting, in CVPR 2024.
[2] Scaffold-gs: Structured 3d gaussians for view-adaptive rendering, in CVPR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It has resolved my main concerns. Regarding the adversarial scenario we discussed, I believe revisiting it would be beneficial to ensure preparedness.
---
Reply to Comment 1.1.1:
Title: Thank Reviewer koxG for recognizing our work
Comment: Dear Reviewer koxG:
We are very grateful for your recognition of our work. We will include your valuable suggestion in the revised version. Your insights have significantly contributed to the improvement of our work.
Best Regards,
Authors of #902 | Summary: The paper presents GS-Hider, a novel steganography framework designed for 3D Gaussian Splatting (3DGS). The framework enables the invisible embedding of 3D scenes and images into 3DGS point clouds, ensuring accurate extraction of hidden messages without compromising rendering quality. Extensive experiments demonstrate GS-Hider's effectiveness in concealing multimodal messages while maintaining exceptional security, robustness, capacity, and flexibility.
Strengths: (+) The paper introduces an interesting steganography framework for 3DGS, which is a novel and emerging field in 3D scene reconstruction and rendering. GS-Hider maintains the rendering quality of the original scene while securely embedding hidden messages, addressing the challenges of fidelity and security effectively. The framework has significant potential applications in copyright protection, encrypted communication, and 3D asset compression.
(+) Comprehensive experiments are conducted to validate the performance, security, robustness, and flexibility of GS-Hider. The experiment results demonstrate robustness against various forms of degradation and support hiding multiple 3D scenes or images, showcasing its versatility. Further Applications effectively explain the results of the proposed method when dealing with other scenarios.
Weaknesses: (-) The implementation of GS-Hider involves some techniques and may require substantial computational resources, limiting its accessibility and usability for some ordinary users without deep learning backgrounds.
(-) The comparison with existing methods is somewhat limited, as it primarily focuses on a specific type of 3DGS. It overlooks a broader range of state-of-the-art techniques in 3DGS. It may be beneficial for the author to consider implementing their methods in other 3DGS variants like [1] to highlight the advantages of the proposed method.
[1] Mip-Splatting: Alias-free 3D Gaussian Splatting.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! We hope that our responses can address your concerns. If there are still aspects that need further clarification, please feel free to continue the discussion with us!
> **Weakness #1: Limited accessibility and usability.**
- Our method is actually very simple, efficient, and user-friendly. It does not require more computational resources than the original 3DGS. In fact, our storage footprint is even smaller than the original 3DGS because we use a more compact feature representation.
- Additionally, our method can be optimized end-to-end on a $\textcolor{red}{\textbf{single RTX 3090 Ti GPU}}$, with training and decoding easily encapsulated into an interface. **In the training phase**, users only need to input the original scene and the hidden scene or a copyright image to render a secure and private 3DGS for publishing or sharing. **In the verification phase**, users only need to use a private message decoder to extract the encrypted information from the Gaussian point cloud. This makes it easy for users without a deep learning background to use our method.
> **Weakness #2: Extension to other variants of 3DGS.**
- Following your valuable suggestion, we implemented a variant of GS-Hider based on Mip-Splatting. Specifically, we retain the 3D smoothing filter and 2D mip filter from Mip-Splatting, only replacing the color attributes with high-dimensional features $\boldsymbol{f}_i$ to fit the GS-Hider framework. We then conducted 3D scene hiding experiments on the Mip-NeRF 360 dataset. The results are reported below, and some visualization results are presented in $\textcolor{red}{\textbf{Figure 3}}$ of the rebuttal PDF. This demonstrates that our GS-Hider is a universal steganography framework, not limited to a specific 3DGS method.
| Method | $\text{PSNR}_S$ | $\text{SSIM}_S$ | $\text{LPIPS}_S$ | $\text{PSNR}_M$ | $\text{SSIM}_M$ | $\text{LPIPS}_M$ |
| ------------------ | ------ | ------ | ------- | ------ | ------ | ------- |
| Mip-Splatting | 27.79 | 0.83 | 0.20 | - | - | - |
| Mip-GSHider (Ours) | 26.25 | 0.79 | 0.24 | 25.26 | 0.76 | 0.34 |
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for your detailed response, which has addressed my concerns. After reading the rebuttal and the other reviews, I believe this work has practical applications in copyright protection of 3D assets. Meanwhile, the authors implemented a variant of GS-Hider based on Mip-Splatting in the rebuttal, verifying that the proposed method is a general framework. Therefore, considering the novelty of this paper in 3D steganography, its good rendering quality, and its real-time rendering speed, I have decided to raise my score to 8. However, please also remember to address my concerns in your final version, especially those parts that you have promised.
---
Reply to Comment 1.1.1:
Title: Thank Reviewer S9BH for recognizing our work
Comment: Dear Reviewer S9BH:
Thank you for your prompt response and for acknowledging our work. We sincerely appreciate your valuable suggestions and assure you that we will include the additional experiments and explanation from the rebuttal in the final version.
Best regards,
Authors of #902 | Summary: The paper introduces GS-Hider, a novel steganography framework for 3D Gaussian Splatting (3DGS). Protecting the security and fidelity of 3D assets while embedding information into transparent 3DGS point clouds is challenging, and the method addresses this by invisibly embedding 3D scenes and images into original GS point clouds. It employs a coupled secured feature attribute, scene decoder, and message decoder. Extensive experiments demonstrate its effectiveness in concealing multimodal messages without compromising rendering quality a lot.
Strengths: 1) The first work to perform steganography for the 3D gaussian splatting, it might inspire more research into this direction.
2) The method works well and exhibits robustness and high capacity shown by the empirical results.
Weaknesses: 1) The rendering quality and speed are compromised a bit, but not much.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) It is interesting to understand where and how exactly your method hides the secret information. As you show in Figure 6, at first glance it seems that the rendered coupled feature map only contains information about the original scene. Does this mean that the hidden message is encoded as (spatial) high-frequency details in the feature channels? Or did it perhaps learn to hide information in the last bits, like the least-significant-bit method? Have you conducted any experiments to investigate this further?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! We hope that our response will address all of your concerns. All discussions and supplementary analyses will be included in our revised version. If there are any additional comments to be added, please continue the discussion with us.
$\textcolor{red}{\textbf{The supplementary rebuttal PDF file can be found at the bottom of the overall response}}$.
> **Weakness #1: Compromised rendering quality and speed.**
- First, our rendering quality and speed are still $\textcolor{red}{\textbf{acceptable}}$.
- In the case of hiding a 3D scene, our rendering quality only drops a little bit (about 1dB) and is comparable to the original 3DGS and other 3D rendering methods (such as Instant NGP).
- Meanwhile, our rendering speed can reach 45 FPS, $\textcolor{red}{\textbf{which far exceeds the real-time rendering requirement of 30 FPS}}$.
- Second, our method has $\textcolor{red}{\textbf{unique advantages}}$ compared with the original 3DGS.
- As shown in Table 1 of the main paper, our GS-Hider significantly reduces the storage space by $\textcolor{red}{\textbf{385.05MB}}$ compared to the original 3DGS.
- Our GS-Hider has better privacy and can hide a 3D scene or a copyright image with high quality, which is suitable for tasks such as encrypted transmission and copyright protection.
- Third, the rendering speed of our GS-Hider can be $\textcolor{red}{\textbf{easily improved}}$.
- We can apply GS compression methods to reduce the number of Gaussian points and improve rendering speed. As shown in Table 2 of the main paper, sequentially pruning Gaussian points by 25% hardly affects the rendering quality and can bring a **20%** improvement in rendering speed.
- We can improve rendering speed by reducing the number of convolutions in the scene decoder or using a more efficient and lightweight network design. More FPS results of GS-Hider under different settings can be found on $\textcolor{red}{\textbf{Table 1}}$ of the rebuttal PDF.
> **Question #1: Where and how GS-Hider hides the secret information?**
- First, the hidden scene information is concealed in the **spatial high-frequency details of the coupled feature and some visually insensitive areas (such as artifacts, noise, and edges)**. The invisible hidden information in the coupled feature map will be amplified and decoupled by the message decoder, eventually forming an RGB hidden scene. We visualize the intermediate feature of the message decoder in the $\textcolor{red}{\textbf{Figure 1}}$ of the rebuttal PDF to illustrate this process.
- Second, the secret information is hidden in some **redundant feature channels** of the coupled feature field $\mathbf{F}\_{coup}$. To prove this, we randomly set some channels in $\mathbf{F}\_{coup}$ to $\mathbf{0}$ and eventually find that the hidden decoder cannot reconstruct the complete secret scene. The results are presented in $\textcolor{red}{\textbf{Figure 2}}$ of the rebuttal PDF. This indicates that multiple feature channels are coupled and interact with each other, collectively storing the hidden information.
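The channel-ablation probe described above can be sketched in a few lines; this is a hypothetical toy stand-in for the rendered coupled feature map, not the authors' code:

```python
import numpy as np

# Toy stand-in for a rendered coupled feature map of shape (channels, H, W).
# The real F_coup is rasterized from per-Gaussian feature attributes; random
# data here only illustrates the mechanics of the channel-ablation probe.
rng = np.random.default_rng(0)
f_coup = rng.normal(size=(16, 8, 8))

def zero_channels(feat, channel_idx):
    """Return a copy of the feature map with the given channels zeroed."""
    ablated = feat.copy()
    ablated[list(channel_idx)] = 0.0
    return ablated

# Zero 4 randomly chosen channels; feeding `ablated` to the message decoder
# and observing a failed reconstruction suggests those channels carried part
# of the hidden scene.
dropped = rng.choice(16, size=4, replace=False)
ablated = zero_channels(f_coup, dropped)
print(sorted(int(c) for c in dropped))
```

Repeating this over many random channel subsets would show whether the hidden information is spread across channels or concentrated in a few.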
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank Reviewer X8jQ for recognizing our work
Comment: Dear Reviewer X8jQ:
Thank you for your response and for recognizing our work. We will include the content from the rebuttal in the final version.
Best Regards,
Authors of #902 | Rebuttal 1:
Rebuttal: We sincerely appreciate all the constructive comments from the reviewers! Below is our brief overall response.
> **First, we are very honored to receive recognition from all the reviewers for various aspects of our work.**
- All reviewers have acknowledged the **soundness, presentation, contribution, and effectiveness** of our GS-Hider.
- All reviewers have recognized the **innovation** of GS-Hider in 3DGS steganography and consider it an interesting and important work.
> **Second, we would like to emphasize the value and contribution of our work.**
- GS-Hider is **the first attempt** at 3DGS steganography, which can be applied to the encrypted transmission and copyright protection of 3D assets, serving as inspiration for the future development of 3DGS steganography.
- GS-Hider presents **unique advantages** in security, privacy, robustness, and versatility, while having acceptable rendering quality and real-time rendering speed.
> **Third, we have tried our best to address all of the concerns raised by reviewers and added detailed analysis.**
- Regarding concerns on rendering speed raised by reviewer X8jQ and crVb, we list the added inference time by the scene decoder, present the comparison between GS-hider under different settings and original 3DGS, and give **potential improvement** methods.
- We implemented **a variant of GS-Hider based on Mip-Splatting**, **supplemented the experiments on other datasets**, and **compared with more methods** to make our paper more comprehensive and rigorous.
- We further analyze why the coupled feature representation is effective, explore the limitations of our approach, and clarify some application scenarios.
We sincerely thank all the reviewers for their suggestions to improve our paper and kindly request the reviewers to thoroughly consider the value and contribution of our work. The additional experiments and analyses will be added to the final version of the paper. The detailed rebuttals for each reviewer can be found below.
Additionally, we have attached a $\textcolor{red}{\textbf{PDF}}$ file containing some figures and tables for the reviewers' reference.
Pdf: /pdf/e9937de797defef868f03c41cb16786f35ddc3db.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SimPO: Simple Preference Optimization with a Reference-Free Reward | Accept (poster) | Summary: This paper presents SimPO, an offline preference optimization method for LLM alignment. SimPO replaces the KL term in DPO with a length-regularized log-probability and adds a margin value for regularization. Extensive experiments in chat benchmarks show that SimPO significantly outperforms DPO and other preference optimization variants, despite its simplicity. The authors conclude that the success of SimPO can be attributed to the better alignment between training and decoding objectives, as well as reduced exploitation of generating lengthy responses.
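For reference, the two objectives being contrasted can be sketched as follows, in the notation of the SimPO paper ($\beta$ the reward scaling, $\gamma$ the target reward margin, $\pi_{\mathrm{ref}}$ the frozen reference policy); this is a paraphrase of the losses, not the authors' exact presentation:

```latex
% DPO: implicit reward is a KL-anchored log-ratio against a reference policy
\mathcal{L}_{\mathrm{DPO}} = -\log\sigma\Big(
  \beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
  - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\Big)

% SimPO: reference-free, length-normalized reward plus a target margin
\mathcal{L}_{\mathrm{SimPO}} = -\log\sigma\Big(
  \frac{\beta}{|y_w|}\log\pi_\theta(y_w\mid x)
  - \frac{\beta}{|y_l|}\log\pi_\theta(y_l\mid x) - \gamma\Big)
```

The length normalization by $|y_w|$ and $|y_l|$ is what aligns the training reward with the average log-likelihood used at decoding time.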
Strengths: + This paper is well-written and easy to follow. The motivation is clear, and the main approach is quite understandable.
+ The experiments are extensive and well-designed. In addition to benchmark numbers, the authors also provide an in-depth analysis of generation lengths, log probabilities, and differences in rewards for DPO and SimPO models. This greatly helps readers understand the source of the performance gains.
Weaknesses: I do not find any significant weaknesses beyond the limitations proposed by the authors. Since I generally agree with these limitations, I am reiterating them in my review:
+ A theoretical understanding of SimPO is lacking. Although SimPO's design is not theoretically grounded, it performs well in practice, as validated by the authors' experiments.
+ The experiments in the paper solely focus on evaluating helpfulness, disregarding safety, honesty, etc. I think this is the major weakness. SimPO removes the KL regularization from the reference model, so it might intuitively suffer from safety issues. This problem should be further studied in the paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: + Could the authors explain why length-controlled WR is important? From my perspective, controlling response length is helpful during training because it encourages the model to generate short but useful responses. However, forcing the generation of short responses during evaluation is not necessary. Good responses are good regardless of length. Evaluating length-controlled WR seems to set a privileged metric for SimPO.
+ Since SimPO removes the KL regularization, how does it perform with poor data quality, which is a common application scenario? Is it more fragile and sensitive to data quality?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations have been addressed by the authors in Section 6 and in my weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the clarity and comprehensive evaluation of our paper. We address your raised points as follows.
**KL regularization and safety**:
We would like to present our most recent results based on the gemma-2-9b-it model. We measured the original gemma-2-9b-it model, its DPO variant, and the SimPO variant using Sorry-Bench (Xie et al., 2024 [1]), a benchmark for testing the refusal behavior of language models when faced with unsafe queries. Our findings indicate that further tuning the Gemma model on the UltraFeedback dataset with either DPO or SimPO increases the attack success rate, probably due to the lack of safety-related data in UltraFeedback. However, we also find that SimPO is safer than DPO. Therefore, we believe that SimPO does not raise more safety concerns than DPO.
| model | Sorry-Bench attack success rate |
|----------------------|:------------------------------:|
| gemma-2-9b-it | 10.89% |
| w/ DPO | 50.44% |
| w/ SimPO (without safety preference data) | 33.33% |
We further direct the reviewer to the PDF attached to our general response, where we demonstrate that even without explicit KL regularization, SimPO achieves a comparable KL divergence from the initial SFT model as KL-regularized objectives like DPO.
[1] [SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors](https://arxiv.org/abs/2406.14598)
**Why is length-controlled WR important**:
The length-controlled win rate (LC-WR), introduced by Dubois et al. (2024) [2], effectively addresses the length bias issue prevalent in model-based evaluations such as AlpacaEval. This metric has demonstrated a better correlation with human judgments compared to raw win rates [2]. Hence, it has been adopted as the preferred metric for ranking models on the AlpacaEval leaderboard. Given its significance and widespread acceptance in the field, we have incorporated the length-controlled WR as one of our primary evaluation metrics in this study.
[2] [Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators](https://arxiv.org/abs/2404.04475v1)
**Is SimPO more fragile and sensitive to data quality?**
Our extensive experiments across various settings do not indicate that SimPO is more susceptible to low-quality data than other methods. In fact, the UltraFeedback dataset used in our study inherently includes a variety of data quality levels, as the responses are generated by randomly-sampled models from a diverse pool, including relatively weak models (e.g., Alpaca-7B). Despite this variability in data quality, SimPO consistently demonstrates significant empirical advantages over other baselines.
We hypothesize that SimPO's robustness to data quality issues stems from two key factors:
1. The use of a small learning rate, which is common practice in preference optimization algorithms and naturally prevents significant divergence from the initial model.
2. The inherent robustness of some large language models (e.g., gemma models), which allows them to maintain performance even when exposed to some degree of low-quality input.
These factors combined may provide sufficient protection against catastrophic forgetting, even without the need for explicit KL regularization.
---
Rebuttal Comment 1.1:
Title: Thank you for your response.
Comment: I've read the authors' comments to all reviewers. I don't have any additional concerns and would like to maintain my score, voting for acceptance.
---
Rebuttal 2:
Title: Thank you for the support!
Comment: Dear Reviewer Wuuc,
Thank you once again for your thoughtful review and valuable feedback! We appreciate your support!
Sincerely,
Authors | Summary: This paper proposes a new offline-RLHF algorithm SimPO, which significantly improves current DPO variants on a collection of benchmarks.
Strengths: The SimPO algorithm is intuitive, simple to implement, and works well, with a good presentation.
A few concrete strong points are listed below.
1. The algorithm is intuitive and the overall design is novel in the current literature. The paper proposes two simple techniques: directly optimizing the generation probability and adding a discrepancy term. Both techniques are intuitive. Overall, few efforts in offline RLHF have systematically explored these two directions.
2. The experiments are solid. This paper conducts experiments on a diverse collection of chat benchmarks. The results are significant.
3. The overall presentation is great and easy to follow.
Weaknesses: The following comments are for potential improvement of this work and future research directions. These are not complaints.
1. Some fine-grained case studies would be beneficial. The paper primarily focuses on quantitative evaluation, but it would be interesting to see concrete changes between DPO and SimPO, i.e., some representative responses generated by each, to see what has changed. The LN experiments are great, but things could be better.
2. With the current formulation, the SimPO objective reminds me of online RLHF methods. Assuming a -1/1 reward function, many online RLHF methods optimize similar objectives, i.e., directly optimizing responses with reward 1 (winning responses) while reducing the reward of bad ones (losing responses). It would be interesting to see some discussion of the relationship between SimPO and online RLHF methods. Also, many recent practices have observed that online DPO works well; it would be interesting to see whether the same phenomenon can be observed for SimPO, which looks much closer to online RLHF variants. This is a potential direction to make the paper a much stronger one.
3. Another possible direction to explore is tasks other than chat, e.g., reasoning tasks like coding and math. The evaluations in this paper are sufficient as they are, but some discussion of broader domains would be appreciated.
Technical Quality: 4
Clarity: 4
Questions for Authors: I do not have any specific questions. As a final comment, if possible, I personally like to see some concrete examples of responses generated by SimPO and DPO (beyond the response lengths) to have a better understanding of the actual improvement.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Most limitations have been declared by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novelty, simplicity, and significant empirical results of SimPO!
**Fine-grained case studies**:
Thank you for the suggestion! We refer the reviewer to Figures 8 and 9 in our original submission, as well as Figure 2 in the PDF attached to our general response. These examples intuitively demonstrate why SimPO outperforms DPO:
* SimPO can produce better-structured answers than DPO. (Figures 8 and 9 in our paper)
* SimPO can generate more concise and clear responses than DPO. (Figure 2 in the rebuttal PDF)
We acknowledge that qualitative analysis of model outputs is becoming increasingly challenging, given the breadth and depth of the questions in these benchmarks. We will provide more qualitative examples in our next revision.
**Online methods**:
Thank you for the suggestions! We are actively working on an online version of SimPO. Specifically, we alternate between generating online preference optimization data and training the model with the generated data. Here are some initial results we have for a stronger version of the model — gemma-2-9B-it:
| model | LC Win rate | Raw Win Rate |
|----------------|:-----------:|:------------:|
| gemma-2-9b-it | 51.1 | 38.1 |
| SimPO (1 iter) | 74.6 | 66.7 |
| SimPO (2 iter) | 76.5 | 76.2 |
We find that the online version of SimPO continues to improve performance with each additional iteration.
**Tasks other than chat, e.g., reasoning tasks like coding and maths**:
In our initial manuscript (also shown in the table below), we did find that after further training the Llama-3-Instruct-8B model, the chat ability of the model improves at the cost of degradation on GSM8k (math) and MMLU (knowledge). However, after more exploration, we find that this performance degradation is largely attributable to the lack of robustness of the base model (i.e., Llama-3) rather than to the SimPO objective.
We’d like to present the following new results from the gemma-2-9b-it model. The last two rows in the table below demonstrate that SimPO largely retains general knowledge (MMLU) and even slightly improves the original model’s math ability (GSM). Additionally, it significantly enhances coding ability, as demonstrated by the Arena-Hard benchmark, which primarily consists of real-world coding questions.
|models | AlpacaEval 2 LC | AlpacaEval 2 WR | Arena-Hard | GSM (0 shot) | MMLU (0 shot) |
|-----|:------:|:------:|:----------:|:----:|:---------:|
| Llama-3-Instruct-8B | 26.0 | 25.3 | 22.3 | 78.5 | 61.7 |
| Llama-3-Instruct-8B-SimPO | 44.7 | 40.5 | 33.8 | 71.3 | 58.5 |
| gemma-2-9b-it | 51.1 | 38.1 | 40.8 | 87.4 | 72.7 |
| gemma-2-9b-it-SimPO | 72.4 | 65.9 | 59.1 | 88.0 | 72.2 |
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for the reply and I appreciate the authors' efforts.
---
Reply to Comment 1.1.1:
Title: Thank you for the support!
Comment: Dear Reviewer TXhi,
Thank you once again for your thoughtful review and valuable feedback! We appreciate your support!
Sincerely,
Authors | Summary: SimPO introduces a simple new method of alignment that leverages the log-probability of the sequence, eliminating the need for the SFT policy and making the implementation computationally less complex. By introducing a notion of margin in the loss function, SimPO outperforms several existing alignment algorithms, including DPO and its variants, without increasing response length. Overall, it provides an interesting lightweight alignment procedure with strong empirical benefits.
Strengths: 1. The approach eliminates the need for the reference model which can be computationally heavy or might not be available in practice
2. The method shows improved empirical performance over DPO and existing baselines through extensive experiments.
3. The method provides an interesting way of regularizing the length and shows improved performance without increasing the length of the response.
Weaknesses: 1. The primary formulation of RLHF constrains the policy to the SFT policy through KL regularization, which prevents the model from over-optimizing the reward function. It also ensures the base performance of the LLM remains intact on several other domains and tasks. However, the proposed method removes the KL term to the SFT policy, which makes it unclear what to attribute the improvement to. Specifically, one can violate the constraint and do better on the reward for that specific or similar task, but what about the performance on other tasks that the pre-trained/SFT model was good at?
2. DPO has a closed-form unique solution due to the strong convexity of the KL-regularized RLHF objective, which yields its particular objective. However, without KL regularization, it is not clear what original optimization problem is being solved. Is it still strongly convex? If not, what insights suggest this objective will have good convergence properties?
3. Given these questions, the motivation for the objective is heuristic, and it is unclear where it comes from.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Can you mention if your original objective is strongly convex or how the particular objective is derived?
2. Can you provide the KL regularization of your model to the SFT and show how much it deviates from the SFT in comparison to baselines?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Check above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We'd like to thank the reviewer for the thoughtful feedback. We address your questions as follows.
**Without KL to the SFT, one can violate the constraint and do better with the reward for that specific or similar task, but what about the performance in other tasks that the pre-trained/SFT model was good at?**
While reward hacking is theoretically possible without explicit regularization, several practical factors mitigate this risk: (1) the learning rate, (2) the preference dataset, and (3) the SFT model’s robustness to forgetting.
1. The learning rate of preference optimization algorithms is typically very small (e.g., 5e-7), which naturally prevents significant divergence from the SFT model.
2. The preference datasets are often constructed to cover a broad range of tasks and domains, helping retain the model's existing knowledge and task versatility.
3. Large language models generally have a substantial capacity to learn from new data without catastrophic forgetting of previously learned tasks. This robustness can mitigate the need for explicit regularization, similar to how instruction tuning (SFT) retains the model’s pretrained knowledge without KL regularization.
The above factors combined might be sufficient to ensure that the model learns human preferences while retaining its generalization capabilities. Empirically, with appropriate hyperparameters, SimPO can result in a KL divergence similar to DPO's (see our response to your last question) and comparable academic benchmark performance (see Table 9 in the Appendix). Additionally, the new results in the table below demonstrate that fine-tuning gemma-2-9b-it with SimPO yields significantly improved instruction-following ability without degradation on academic benchmarks, including math (GSM) and general knowledge (MMLU).
|models | AlpacaEval 2 LC | GSM (0 shot) | MMLU (0 shot) |
|-----|:------:|:------:|:----------:|
| gemma-2-9b-it | 51.1 | 87.4 | 72.7 |
| gemma-2-9b-it-DPO | 67.8 | 88.5 | 72.2 |
| gemma-2-9b-it-SimPO | 72.4 | 88.0 | 72.2 |
**DPO has a closed-form unique solution due to the strong convexity of the KL regularized RLHF objective resulting in the particular objective. Without KL, is the objective still strongly convex? If not, what are the insights that this objective will have some good convergence properties?**
The derivation of the DPO objective from RLHF relies on the specific condition of achieving the optimal policy $\pi^*_{\theta}$. However, this assumption is rarely met in practice due to the inherent challenges in optimizing deep neural networks. Therefore, there's no guarantee that DPO faithfully implements the original RLHF objective. Our work departs from this assumption: we directly address the potential misalignment between the DPO objective's reward metric and the decoding objective. This focus allows us to tackle the practical limitations of DPO without relying on idealized assumptions.
Furthermore, comparing preference optimization (PO) objectives in terms of convexity or convergence is inherently difficult. All these objectives, including RLHF (PPO), DPO, and SimPO, are generally non-convex with respect to the model parameters $\theta$. This is because the policy model $\pi_{\theta}$ is parametrized by multi-layer, non-linear Transformer architectures, making it a non-convex function with respect to $\theta$. Consequently, any PO objective that depends on $\pi_{\theta}$ will also be inherently non-convex with respect to the model parameters.
**The motivation for the objective is heuristically driven and unclear from where it comes from**:
Our SimPO objective is systematically derived by aligning the reward in training with the generation likelihood in decoding:
1. We use the sequence likelihood objective, optimized by decoding algorithms, as the reward metric for winning/losing responses during preference optimization.
2. We then apply the Bradley-Terry objective over these reward metrics to formulate the SimPO objective.
3. The target reward margin hyperparameter in SimPO is analogous to the margin loss in SVMs and represents the home advantage in Bradley-Terry models.
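For concreteness, the construction above can be sketched in a few lines of Python (an illustrative sketch only; the `beta` and `gamma` values below are placeholders, not our tuned settings):

```python
import math

def simpo_loss(logp_w, len_w, logp_l, len_l, beta=2.0, gamma=1.5):
    """SimPO loss for a single preference pair (illustrative sketch).

    logp_w / logp_l: summed token log-probabilities of the winning /
    losing response under the policy; len_w / len_l: token counts.
    beta and gamma are placeholder values, not tuned settings.
    """
    # Step 1: length-normalized sequence log-likelihood as the implicit reward.
    r_w = beta * logp_w / len_w
    r_l = beta * logp_l / len_l
    # Steps 2-3: Bradley-Terry objective with a target reward margin gamma.
    return -math.log(1.0 / (1.0 + math.exp(-(r_w - r_l - gamma))))
```

The loss shrinks only when the winning response's length-normalized likelihood exceeds the losing one's by more than the margin $\gamma$.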
**Can you provide the KL regularization of your model to the SFT and show how much it deviates from the SFT in comparison to baselines?** In the PDF attached to our general response, we illustrate the KL divergence of SimPO and DPO. Figure 1 shows that (a) increasing $\beta$ in DPO can reduce the KL divergence, but (b) it can also overly constrain the model from effective learning of the preference dataset, resulting in lower generation quality. Therefore, there is a trade-off between learning preference data and staying close to the initial model. With appropriate hyperparameters, SimPO can achieve similar KL divergence to DPO.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal by Authors
Comment: Thanks for the points. I generally agree with the comment regarding the closed form of DPO and that it does not directly hold in a parametrized setting, which is OK. However, at least for the tabular class of policies, it holds true and indeed has the closed form. However, in your problem, it is not clear how the optimization expression arises once we remove the KL.
Also, can you explain why, even without any KL constraint, SimPO can achieve a similar KL divergence to DPO? Which hyperparameters cause it to stay close, and why?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer oFKm
Comment: Thank you for the reply! We address your questions as follows:
**How the optimization expression comes is not clear, once we remove the KL**:
While DPO is derived from the KL-regularized RLHF objective, it's important to highlight that this equivalence only holds at the "optimal policy" (i.e., $\pi_{\theta} = \pi^*$). However, this optimal condition is rarely, if ever, achieved during the actual training process due to the complexities of optimizing deep neural networks, thereby breaking the correspondence between RLHF and DPO in practice. Therefore, we believe that a full correspondence between RLHF and *PO is not the prerequisite for *PO algorithms to perform well empirically.
The primary motivation behind SimPO is to **address the training-decoding discrepancy in DPO** which is a significant practical issue: In DPO’s objective, the implicit reward metric being optimized during training incorporates the reference model, which is absent during decoding. This mismatch can lead to a counterintuitive outcome: even if a winning response $y_w$ has a higher reward than a losing response $y_l$ during training (i.e., $r(x, y_w) > r(x, y_l)$), the model might still be more likely to generate $y_l$ during decoding, as illustrated in Figure 4 (b) in our manuscript.
**Also, can you explain why even after not having any KL constraint, SimPO can achieve similar KL divergence to DPO? Which hyperparameters will cause it to stay close and why?**
For *PO algorithms, the reference model is the initial SFT model where preference optimization begins (that’s why the KL divergence of DPO/SimPO always starts from 0). Two key hyperparameters in SimPO directly influence the KL divergence by controlling how much the model deviates from its initial state:
* **Learning rate**: The learning rate of DPO/SimPO is typically very small (e.g., 5e-7), which naturally constrains model updates from the initial checkpoint, resulting in small KL divergence from the reference model. Increasing the learning rate can lead to greater KL divergence and performance degradation for both methods.
* **The $\beta$ hyperparameter in the training objective**: The $\beta$ hyperparameter scales the reward difference in the SimPO objective: $\mathcal{L} = -\log \sigma \left( \frac{\beta}{|y_w|} \log \pi_\theta(y_w|x) - \frac{\beta}{|y_l|} \log \pi_\theta(y_l|x) - \gamma \right)$. A large $\beta$ amplifies even small differences between the winning likelihood $\frac{\beta}{|y_w|} \log \pi_\theta(y_w|x)$ and the losing likelihood $\frac{\beta}{|y_l|} \log \pi_\theta(y_l|x)$. Consequently, even a slight advantage of the winning response will lead to a near-zero loss, minimizing model updates and keeping the KL divergence small.
In practice, we observe that for both DPO and SimPO, hyperparameters can significantly impact the results, further supporting the point that deriving from RLHF does not guarantee empirical robustness.
---
Reply to Comment 1.1.2:
Comment: Dear reviewer oFKm,
As the discussion period is coming to an end, we kindly ask that you review our responses and let us know if you have any additional concerns. If possible, we would appreciate it if you could adjust your scores accordingly. Thank you! | Summary: The paper introduces SimPO (Simple Preference Optimization), an extension of Direct Preference Optimization (DPO), by replacing the reference-policy-dependent implicit reward with a reference-free reward.
Specifically, SimPO utilizes the average log probability of a sequence as the implicit reward, aligning it with the generation process and eliminating the need for a reference model. This results in improved computational and memory efficiency. Additionally, a target reward margin is introduced into the Bradley-Terry objective to ensure a significant reward difference between winning and losing responses. Extensive experiments show that SimPO consistently outperforms DPO and its variants across various benchmarks, demonstrating its effectiveness in improving model performance.
Strengths: - The focus on addressing the discrepancy between reward and generation in DPO is novel and valuable, providing a new perspective on extending DPO.
- SimPO enhances computational and memory efficiency by eliminating the need for a reference model.
- Extensive experiments across various benchmarks demonstrate that SimPO consistently outperforms DPO and its variants.
Weaknesses: - Although SimPO simplifies the optimization process by removing the reference model calculation of response probabilities, it introduces a new hyperparameter, the target reward margin $\gamma$, which requires tuning.
- While the motivation to address the discrepancy between reward and generation in DPO is understandable, there is a lack of mathematical discussion on how the proposed implicit reward is effective for preference optimization. For instance, standard random sampling with temperature 1 could eliminate the discrepancy without length normalization. However, the experimental results suggest that length normalization is crucial, raising the question of whether the initial motivation to align optimization and decoding is definitively effective for preference optimization.
- The target reward margin plays a critical role in SimPO, but there is no discussion or comparative experiments with "DPO with an offset" (Amini et al., 2024), which also introduces a similar margin into the DPO loss. This comparison is necessary to understand the relative advantages of SimPO.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is it possible to add a careful discussion and conduct comparative experiments about DPO with a target reward margin (Amini et al., 2024)?
- The paper suggests that reducing the discrepancy between reward and generation in DPO results in a more preferable reward for preference optimization. While theoretical analysis might be challenging, is it possible to experimentally validate this hypothesis? For instance, can you evaluate the relationship between the discrepancy's size and preference optimization's performance? This could include testing different decoding methods, such as a beam search with larger beam widths in the length-normalized version that will align more with the average log-likelihood reward.
- Since SimPO omits KL divergence regularization from the initial policy, assessing how the KL divergence behaves would be helpful. Evaluating the model's performance in terms of KL divergence, as analyzed by Gao et al. (2023), could provide valuable insights.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We’d like to thank the reviewer for acknowledging the novelty and simplicity of our proposed approach. We address your raised points as follows.
**The target reward margin $\gamma$ requires extra tuning**:
We acknowledge that the newly introduced target reward margin requires additional tuning. However, we’d like to emphasize that within a reasonable range of $\gamma$ values, SimPO consistently outperforms the DPO baseline, as shown in the table below (also Figure 3a in the manuscript). Furthermore, we find that using $\gamma$=1.5 generally yields sufficiently good results across all settings. While slight tuning can further improve performance, it is not mandatory.
| Models | AlpacaEval LC Win Rate |
|------------------|:-------------------------:|
| SimPO ($\gamma$=0) | 16.8 |
| SimPO ($\gamma$=0.8) | 20.2 |
| SimPO ($\gamma$=1.6) | 22.0 |
| SimPO ($\gamma$=2.4) | 16.8 |
| DPO | 15.1 |
**Lack of mathematical understanding**:
While a rigorous theoretical understanding remains challenging, we offer two possible explanations:
1. SimPO's reward, the average log-likelihood of a sequence, closely aligns with the objective used during decoding. Although it's challenging to prove rigorously, random sampling with a temperature of 1 is likely to match SimPO's reward metric better than either (1) DPO or (2) SimPO without length normalization.
* DPO involves a reference model in its formulation, which is not used during decoding.
* Analogous to beam search, which ranks candidate sequences of varying lengths using average log-likelihood, length normalization is crucial when comparing sequences of different lengths (i.e., winning and losing responses) for reward calculation in SimPO. The reward without length normalization is biased, as evidenced by the tendency to assign higher likelihoods to longer sequences (Figure 2). This can result in excessively long outputs, such as degeneration into repetitive tokens.
2. Another perspective for understanding the method is that the SimPO reward decouples the reward from the sequence length. According to [1], sequence length positively correlates with model-based judgment. SimPO’s disentanglement allows us to train the policy without biasing it to generate longer sequences merely to achieve a higher reward. Instead, it encourages the model to learn and focus on the differences beyond sequence lengths.
We hope this helps clarify the effectiveness of our approach!
[1] [A Long Way to Go: Investigating Length Correlations in RLHF](https://arxiv.org/abs/2310.03716)
**DPO with an offset (Amini et al., 2024)**: We conducted additional experiments to determine whether adding an offset (Amini et al., 2024) would further enhance DPO. After tuning $\gamma$, we found that it does not lead to any further improvement. Specifically, we used $\gamma$=0.1 for mistral-base and $\gamma$=0.0 for mistral-instruct; increasing $\gamma$ beyond these values led to worse results.
| | | Mistral-base 7B | | | Mistral-instruct 7B | |
|--------------|:---------------:|:------------:|:----------:|:-------------------:|:------------:|:----------:|
| | AlpacaEval 2 LC Win Rate | AlpacaEval 2 Win Rate | Arena-Hard | AlpacaEval 2 LC Win Rate | AlpacaEval 2 Win Rate | Arena-Hard |
| DPO | 15.1 | 12.5 | 10.4 | 26.8 | 24.9 | 16.3 |
| DPO w/ $\gamma$ | 15.2 | 12.1 | 10.3 | 26.8 | 24.9 | 16.3 |
The objective of DPO inherently includes an instance-wise target reward margin $\gamma_{\text{ref}}$, as shown below:
$$
\mathcal{L} = -\log \sigma \left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\\
= -\log \sigma \bigg( \beta \log \pi_\theta(y_w \mid x) - \beta \log \pi_\theta(y_l \mid x) - \big(\beta \log \pi_{\text{ref}}(y_w \mid x) - \beta \log \pi_{\text{ref}}(y_l \mid x)\big)\bigg)
$$
where
$\gamma_{\text{ref}} = \beta \log \pi_{\text{ref}}(y_w \mid x) - \beta \log \pi_{\text{ref}}(y_l \mid x)$.
This may explain why adding an extra margin to DPO will not be as effective as it is with SimPO. We will add these results and discussions to our revision!
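This algebraic identity can be verified numerically (a quick sanity check with made-up log-probabilities; the values are illustrative only, not from our experiments):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

beta = 0.1
lp_w, lp_l = -45.0, -60.0          # policy log-probs (illustrative)
lp_w_ref, lp_l_ref = -50.0, -55.0  # reference log-probs (illustrative)

# DPO loss in its usual log-ratio form.
dpo = -math.log(sigmoid(beta * (lp_w - lp_w_ref) - beta * (lp_l - lp_l_ref)))

# The same loss rewritten with the instance-wise margin gamma_ref.
gamma_ref = beta * lp_w_ref - beta * lp_l_ref
rewritten = -math.log(sigmoid(beta * lp_w - beta * lp_l - gamma_ref))

assert abs(dpo - rewritten) < 1e-9
```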
**Discussion of KL divergence**:
In the PDF attached to our general response, we illustrate the KL divergence of SimPO and DPO across different $\beta$, measured on the winning responses from a held-out set during training. In summary:
* We find that SimPO, even without KL regularization, yields a reasonably small KL divergence from the policy model to the SFT model, which is comparable to DPO under its optimal $\beta$.
* Within the range of $\beta$ explored, increasing $\beta$ reduces the KL divergence for both DPO and SimPO, but it can also overly constrain the model from effective learning of the preference dataset, resulting in lower generation quality. This suggests that KL divergence is not monotonically related to reward scores or model quality (Gao et al., 2023 present a similar phenomenon).
We hope this helps clarify the reviewer’s concern about KL divergence!
---
Rebuttal 2:
Title: Explore beam search with a large beam
Comment: Thanks for the insightful suggestion to test the discrepancy between the reward metric and decoding methods! We followed the reviewer's suggestion and tested beam search with different beam sizes (e.g., 1, 5, 10) using the mistral-base-SimPO model. We find that increasing the beam size does increase generation quality as measured by win rate, as shown in the following table. It's even better than the number reported in our paper! We think this helps validate that aligning reward and decoding metrics more explicitly is beneficial. However, a clear downside is that runtime increases significantly with beam size, and we think this may be why people stick with greedy decoding/sampling instead of beam search for LLM generation.
| Decoding Method | LC Win Rate | WR | runtime |
|:---------:|:-----------:|:----:|:--------:|
| sampling | 21.5 | 20.8 | 5min |
| greedy (beam=1) | 22.6 | 22.3 | 5min |
| beam=5 | 22.2 | 21.6 | 1h 51min |
| beam=10 | 24.5 | 23.5 | 3h 56min |
---
Rebuttal Comment 2.1:
Title: Looking forward to your feedback!
Comment: Dear Reviewer a8Vq,
Thank you once again for your thoughtful review and valuable feedback! As the discussion period is ending tomorrow, we would greatly appreciate knowing whether our response has adequately addressed your questions.
If you have any additional comments or concerns, please feel free to share them with us.
Sincerely,
Authors
---
Rebuttal 3:
Title: Response to Reviewer a8Vq
Comment: Thank you for the thoughtful comments; this is an excellent question! To clarify how length normalization (LN) impacts preference learning, we need to consider three factors:
- **Length bias of total log-likelihood**: As you mentioned, longer sequences typically accumulate a lower (more negative) summed log-likelihood, inherently disadvantaging longer sequences.
- **Length difference in preference pairs**: We found that, in many instances, the winning response ($y_w$) is longer than the losing response ($y_l$).
- **Objective of preference learning**: Preference learning aims to assign higher rewards to winning responses ($y_w$) compared to losing ones ($y_l$).
Without LN, the reward metric $r_{\text{SimPO w/o LN}}(x,y) = \log \pi_\theta(y \mid x)$ is based on the total log-likelihood. Due to the length bias, longer winning responses ($y_w$) generally have a lower total log-likelihood compared to shorter losing responses ($y_l$). To correctly rank these $y_w$ higher, the model must overcome this bias by assigning disproportionately high probabilities to each token in $y_w$ to ensure the reward on $y_w$ exceeds that on $y_l$. This compensatory effect can cause the model to become miscalibrated, as illustrated in Figure 2(c), where it begins to excessively favor longer sequences. As a result, during decoding, such an ill-calibrated model tends to generate longer sequences. Essentially, LN acts as a calibration mechanism to prevent this overcompensation in the training stage.
To further illustrate this mathematically, let's analyze the loss of SimPO with and without LN. For simplicity, we omit the expectation symbol in the following formulas:
- With LN: $ L_{\text{SimPO}} (\pi_\theta) = - \log \sigma \left( \frac{\beta}{|y_w|} \log \pi_\theta(y_w|x) - \frac{\beta}{|y_l|} \log \pi_\theta(y_l|x) - \gamma \right)$
- Without LN: $ L_{\text{SimPO w/o LN}} (\pi_\theta) = - \log \sigma \left( \log \pi_\theta(y_w|x) - \log \pi_\theta(y_l|x) - \gamma \right)$
It can be seen that without LN, the reward difference $\Delta r = \log \pi_\theta(y_w|x) - \log \pi_\theta(y_l|x)$ is based on **total log-likelihoods**, and is generally more negative when $y_w$ is longer than $y_l$ due to the length bias. Consequently, $\sigma(\Delta r - \gamma)$ approaches 0, and thus the final loss for such instances, $- \log \sigma(\Delta r - \gamma)$, will be very large. This heavily biases the model towards learning such preference pairs. As demonstrated in Figure 2(a), this can result in the failure of learning the opposite cases where $y_w$ is shorter than $y_l$ (the loss for these cases will be very small by applying a similar reasoning).
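To make the length-bias effect concrete, here is a minimal numeric sketch of the two losses above for a single preference pair. The per-token log-likelihoods, sequence lengths, and the $\beta$, $\gamma$ values are illustrative assumptions, not tuned SimPO hyperparameters:

```python
import math

def simpo_loss(logp_w, logp_l, len_w, len_l, beta=2.0, gamma=0.5, length_norm=True):
    # Reward is the (optionally length-normalized) log-likelihood, as in the
    # two loss formulas above; beta and gamma are illustrative values.
    if length_norm:
        delta = beta * logp_w / len_w - beta * logp_l / len_l
    else:
        delta = logp_w - logp_l
    return -math.log(1.0 / (1.0 + math.exp(-(delta - gamma))))  # -log sigmoid

# Identical per-token log-likelihood (-0.5), but the winner is twice as long,
# so its *total* log-likelihood is more negative.
logp_w, len_w = -0.5 * 200, 200   # winner: total log-prob = -100
logp_l, len_l = -0.5 * 100, 100   # loser:  total log-prob = -50

loss_no_ln = simpo_loss(logp_w, logp_l, len_w, len_l, length_norm=False)  # ~50.5
loss_ln    = simpo_loss(logp_w, logp_l, len_w, len_l, length_norm=True)   # ~0.97
```

Without LN, the pair receives a huge loss purely because the winner is longer, which is exactly the overcompensation pressure described above; with LN, per-token quality is compared directly and the loss stays moderate.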
We hope this addresses the reviewer's question, and we’ll include these clarifications in the revision. If you find this explanation satisfactory, we kindly ask you to consider raising the score, thank you so much for your time! We are also happy to help further clarify any questions you have!
---
Rebuttal Comment 3.1:
Comment: Thanks for the reply. The explanations provided were convincing and significantly contributed to a deeper understanding of SimPO. I appreciate the detailed discussion and would encourage careful inclusion of these points in the revised paper. I have also reviewed the other reviewers' comments and the corresponding responses. I have no additional concerns, so I have raised the score by one level. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their thoughtful feedback, and we'd like to share some additional analysis and results since the submission!
**Additional Analysis in the attached PDF**
We include the following additional studies:
* KL divergence plots of SimPO vs. DPO
* Qualitative studies of SimPO vs. DPO
**Exceptionally strong results from applying SimPO to gemma-2-9b-it**
Furthermore, we’d like to present our new results from training gemma-2-9b-it with SimPO. The resulting model tops the AlpacaEval 2 leaderboard and the Arena-Hard benchmark among similar-sized models. Importantly, we find that training the gemma model with SimPO retains its knowledge (MMLU) and even slightly improves the original model’s math ability (GSM). This exciting new set of results demonstrates the effectiveness of SimPO across different model types without performance degradation on other benchmarks.
|models | AlpacaEval 2 LC | AlpacaEval 2 WR | Arena-Hard | GSM (0 shot) | MMLU (0 shot) |
|-----|:------:|:------:|:----------:|:----:|:---------:|
| gemma-2-9b-it | 51.1 | 38.1 | 40.8 | 87.4 | 72.7 |
| gemma-2-9b-it-SimPO | 72.4 | 65.9 | 59.1 | 88.0 | 72.2 |
Pdf: /pdf/a4b6b78c19ea1d4094fd17901fb6dc9c27f3f013.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding | Accept (poster) | Summary: This paper presents a comprehensive analysis of leveraging visual foundation models for complex 3D scene understanding. The authors unify three kinds of feature representation: image, video and 3D in a unified paradigm and analyze the effectiveness of those representations on different kinds of 3D tasks.
Strengths: 1. The study is thorough and meaningful. I think it provides interesting findings to the 3D vision community.
2. Experiments are extensive and comprehensive. The chosen tasks and methods are representative.
Weaknesses: 1. This work lacks a unified conclusion. The experimental observations are independent (to some extent like an experimental report rather than a research paper). It is better for the authors to summarize the core underlying principles, or design some models according to the observation, which may make this work more applicable.
2. Since indoor 3D perception is mainly applied in embodied AI systems, I suggest the authors further study 3D scene understanding in an online setting [1, 2] rather than the current offline setting, which can be directly adopted in real-world robotic tasks.
[1] Fusion-aware point convolution for online semantic 3d scene segmentation, CVPR 2020
[2] Memory-based Adapters for Online 3D Scene Perception, CVPR 2024
Technical Quality: 3
Clarity: 4
Questions for Authors: No
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitation is well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments, and address your concerns as follows:
***
1. *Q: This work lacks a unified conclusion. The experimental observations are independent (to some extent like an experimental report rather than a research paper). It is better for the authors to summarize the core underlying principles, or design some models according to the observation, which may make this work more applicable.*
We appreciate the reviewer's suggestion. We would like to highlight that our work presents a **unified and principled evaluation and probing framework**, emphasizing simplicity and generalizability. Hence, the conclusions we arrive at are accurate and universal. Empirically, we did not observe a single unified conclusion: for example, no single VFM can uniformly dominate all visual tasks. However, we think this diversity in performance aligns with the varied pretraining tasks and input modalities of these 2D foundation models.
To address the reviewer's concern, we summarize our **key findings as general principles**:
* Leveraging 2D VFMs in 3D scene perception and multi-modal reasoning tasks consistently yields significant improvements.
* Pretraining tasks and input modality have significant influence over a foundation model’s strengths and weaknesses, which can be effectively interpreted with our simple and unified probing framework.
* The straightforward yet efficient concatenation-based Mixture-of-Vision-Experts (MoVE) effectively leverages complementary knowledge from different VFMs.
These principles demonstrate the applicability of our work and provide valuable insights for future research and applications in the field. They guide the selection of appropriate VFMs for specific tasks and highlight areas for improvement in existing models. Our comprehensive evaluation framework, while yielding diverse results, offers a unified approach to understanding and comparing VFMs in 3D vision tasks.
***
2. *Q: Since indoor 3D perception is mainly applied in embodied AI systems, I suggest the author further study 3D scene understanding in an online setting [A, B] rather than the current offline setting, which can be directly adopted in real-world robotic tasks.*
We appreciate the reviewer's suggestion to explore online 3D scene understanding. We will include the discussion of these works in the revised manuscript.
* First, we would like to emphasize that our current exploration in the offline setting holds significant value.
* It provides a clean, simplified scenario suitable for pure evaluation of visual embedding qualities.
* To our knowledge, there hasn't been a systematic exploration under the offline setting in the community until now.
* Regarding the interesting online setting, based on insights from our offline evaluation, we can offer a few insights and some preliminary thought experiments:
* 2D visual foundation models (VFMs), especially video-based ones, are likely to excel in online scenarios, as most perception devices capture video modality rather than point clouds directly.
* Inference time would become a crucial metric for selecting VFMs. For high-resolution or high-frame-rate videos, acceleration methods like key-frame selection may be necessary to ensure timely computation.
* For extremely long videos, memory-bank or feature compression techniques might be required to optimize storage.
* While the online setting is an intriguing direction, it lies beyond the scope of our current submission. Due to time constraints during the rebuttal phase, we consider this a promising avenue for future work, building upon the insights gained from our offline analysis.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
The author-reviewer interaction period has started. Please read the responses provided by the authors, respond to them early on in the discussion, and discuss points of disagreement.
Thanks
---
Rebuttal Comment 1.2:
Comment: The authors' rebuttal has solved most of my concerns. I will raise the score from 5 to 6.
---
Reply to Comment 1.2.1:
Title: Thank you for your positive feedback!
Comment: We appreciate the reviewer for the positive feedback. Your constructive comments and suggestions are indeed helpful for improving the paper. Also, many thanks for raising the score.
We will continue to improve our work and release the code. If the reviewer has any follow-up questions, we are happy to discuss! | Summary: This paper explores the importance of scene encoding strategies in the context of 3D scene understanding, an area gaining significant attention. The authors investigate the optimal encoding methods for various scenarios, addressing the lack of clarity compared to image-based approaches. They conduct an in-depth study of different visual encoding models, evaluating their strengths and weaknesses across multiple scenarios. The study examines seven foundational vision encoders, including image-based, video-based, and 3D models, across four key tasks: Vision-Language Scene Reasoning, Visual Grounding, Segmentation, and Registration. The findings reveal that DINOv2 performs exceptionally well overall, video models are particularly effective for object-level tasks, diffusion models excel in geometric tasks, and language-pretrained models have unexpected limitations in language-related tasks. These results challenge existing assumptions, provide fresh insights into the use of visual foundation models, and underscore the need for adaptable encoder selection in future vision-language and scene-understanding research.
Strengths: 1. The paper is well-written and easy to understand.
2. It surveys most of the current visual foundation models.
Weaknesses: 1. I do not see any new insights from this work. Numerous previous studies [1, 2, 3] have already demonstrated that leveraging foundation models can improve 3D understanding.
2. This work simply projects visual foundation models from different views onto the 3D point cloud and fine-tunes them. There is no specific design to better integrate/distill these features.
3. The latest 3D foundation model, Uni3D, is not discussed in this work.
4. Inference time is a significant issue, as the input requires multiple-view images.
[1] Multi-View Representation is What You Need for Point-Cloud Pre-Training
[2] Bridging the Domain Gap: Self-Supervised 3D Scene Understanding with Foundation Models
[3] CLIP2Scene: Towards Label-efficient 3D Scene Understanding by CLIP
[4] Uni3D: Exploring Unified 3D Representation at Scale
Technical Quality: 2
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments, and address your concerns as follows:
***
1. *Q: New insights from this work? Numerous previous studies [A, B, C] have demonstrated that leveraging foundation models can improve 3D understanding.*
We appreciate the references provided, and will include the discussion of these methods in the revised manuscript. We would like to emphasize that **our work aims to comprehensively understand the strengths and limitations of a large group of VFMs in various 3D scenarios**, rather than to merely “*demonstrate that leveraging VFMs can improve 3D perception*”. The unique novelty of our work lies in:
* **Scope**: Unlike these prior methods [A,B,C] that focus on improving specific perception tasks (detection and segmentation) using a limited set of models (usually CLIP and DINOv2), our work is the first systematic study of a broad range of VFMs across diverse tasks, including 3D perception, shape registration, and multi-modal grounding and reasoning. These VFMs (CLIP, DINOv2, StableDiffusion, LSeg, StableVideoDiffusion, V-JEPA, Swin3D) are pretrained on diverse data using different objectives, and some of them, such as the image/video diffusion-based ones are never explored as visual encoders for 3D understanding.
* **Objective**: Our primary aim is to comprehensively understand the strengths and limitations of different VFMs, rather than merely improving performance on downstream tasks. As mentioned by Reviewers **vFsT** and **4P6H**, “the insights and findings from the analysis are meaningful and crucial for the 3D vision and language community.”
* **Novel findings**: Our analysis reveals several new insights, as discussed in Lines 50-61, including
* The effectiveness of video foundation models in object-level tasks (Section 3.3)
* Previously overlooked limitations of language-pretrained models (Sections 3.2, 3.3)
* Advantages of generative-pretrained models in geometric tasks (Section 3.4)
These findings demonstrate the uniqueness of our work and provide valuable insights for future research and applications in the field. They guide the selection of appropriate VFMs for specific tasks and areas for improvement in existing models, which are not provided in the study of [A,B,C].
***
2. *Q: This work simply projects visual foundation models from different views onto the 3D point cloud and fine-tunes them. There is no specific design to better integrate/distill these features.*
Our approach is intentionally designed to be simple and straightforward. It is the simplicity of our probing framework that makes our evaluation generalizable. More specifically, we choose this simple design for these reasons:
* **Focus on VFM comparison**: Our primary goal is to analyze and compare different VFMs, rather than to propose new architectures or improve task performance. A simple architecture allows us to minimize confounding factors and focus on the intrinsic capabilities of the VFMs. In comparison, having a more advanced integration or distillation design would introduce more entanglement in the evaluation. But we agree with the reviewer that a more advanced design can be direct future work based on our insights.
* **Consistent with prior work**: This approach aligns with recent studies like Probe3D (CVPR 2024, [6]) and ATO2F (NeurIPS 2023, [97]), which also employ simple architectures to prioritize model analysis. Compared with these methods, our work performs investigations on a broader range of foundation models, and is the first to systematically study them in 3D scene-level multi-modal tasks.
* **Effectiveness of simple integration**: Despite its simplicity, our Mixture-of-Experts approach using straightforward concatenation demonstrates significant performance improvements, as evidenced in Section 4.2 and Figure 8.
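As a concrete illustration, the concatenation-based mixture described above can be sketched in a few lines; the feature dimensions, the random features, and the linear probe are all hypothetical stand-ins for the actual per-point VFM features and decoder:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 1000                            # points in a toy 3D scene

# Hypothetical per-point features from three frozen "experts"
# (dimensions chosen for illustration only).
feat_img   = rng.standard_normal((n_points, 768))    # e.g. an image VFM
feat_video = rng.standard_normal((n_points, 1024))   # e.g. a video VFM
feat_3d    = rng.standard_normal((n_points, 512))    # e.g. a 3D VFM

# Mixture-of-Vision-Experts via simple channel-wise concatenation.
move_feat = np.concatenate([feat_img, feat_video, feat_3d], axis=1)

# A lightweight linear probe maps the combined features to semantic logits;
# only this head would be trained, while the expert encoders stay frozen.
n_classes = 20
W = rng.standard_normal((move_feat.shape[1], n_classes)) * 0.01
logits = move_feat @ W
pred = logits.argmax(axis=1)               # per-point class prediction
```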
***
3. *Q: The latest 3D foundation model Uni3D is not discussed.*
We will include the discussion of this method in the revised manuscript. While Uni3D is a general transformer-based 3D VFM, it is pretrained on object-centric datasets (Objaverse, ShapeNet, etc.), with restrictions on its input and output dimensions. In contrast, our research focuses on more challenging and universal scene-level understanding.
**To demonstrate the performance of Uni3D**, we conduct experiments with its features on our evaluation benchmarks. Due to the restriction of rebuttal length, we include the results and observations in the **General Response**. Please refer to that part for more details.
***
4. *Q: Inference time is a significant issue, as the input requires multiple-view images.*
* **Study Objective**: Our study's primary aim is to provide a simple, unified framework for evaluating and analyzing different VFMs, rather than proposing a new method optimized for low latency. Our analysis, particularly in Table 6 and Figure 6, clearly illustrates the trade-off between performance and inference time in existing methods.
* **Model-Dependent Inference Time**: The inference time of our evaluation method depends on the specific type of VFM we use, rather than on our probing framework. For example, when probing the Swin3D model, we directly take 3D point clouds as input, rather than multi-view images, leading to faster inference time.
* Regarding inference time of multi-view images, our work also reveals several key insights:
* For scenes captured by a large number of multi-view images (long videos), video foundation models like V-JEPA can be more efficient than single-frame models.
* We propose a simple yet effective keyframe sampling strategy (Figure 7, Section 4.2) that significantly reduces inference time while maintaining performance for video foundation models.
* **Encoding Context**: Many indoor scene understanding and reasoning tasks [34, 35, 40, 54] use an approach where models first encode the input, and then perform inference using lightweight decoder heads. In this context, longer encoding time does not significantly impact overall system performance.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
The author-reviewer interaction period has started. Please read the responses provided by the authors, respond to them early on in the discussion, and discuss points of disagreement.
Thanks
---
Rebuttal 2:
Comment: Thanks for the clarification. I will increase my score to 5.
---
Rebuttal Comment 2.1:
Title: Thank you for your positive feedback!
Comment: We appreciate the reviewer for the positive feedback. Your constructive comments and suggestions are indeed helpful for improving the paper. Also, many thanks for raising the score.
We will continue to improve our work and release the code. If the reviewer has any follow-up questions, we are happy to discuss! | Summary: This paper examines various scene encoding methods for 3D scene understanding, encompassing image, video, and 3D models. It explores four distinct tasks: registration, scene reasoning, visual grounding, and segmentation. The experimental results indicate that different encoding techniques excel in different tasks, underscoring the importance of selecting appropriate encoders for enhanced understanding in 3DVL.
Strengths: 1. This paper provides a thorough analysis of the performance of different 2D and 3D models on scene understanding. Currently, these features have not been compared within a single framework.
2. This probing framework and the insights from the experimental results are crucial for the 3D vision-language community.
Weaknesses: 1. Details of using 3D feature field for these tasks should be discussed. What is the baseline model for 3D grounding, QA and registration. These information should be elaborated in appendix.
2. The combination of 3D and 2D features should be studied in different tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In 2D visual grounding, results using image feature and swin3D features are worse than M3DRef, which model do you use to conduct visual grounding?
2. See weakness above, details of these experiments should be included to make these results more convincing.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: More 3D encoders should be considered, like point transformer, pointnet++, and sparse conv UNet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments, and address your concerns as follows:
***
1. *Q: Details of using 3D feature fields for these tasks should be discussed. What is the baseline model for 3D grounding, QA, and registration.*
* **3D QA**: We use 3D-LLM [35] as our baseline and backbone. We replace the original visual embedding with outputs from different VFMs, while retaining the same visual projector and large language model for answer decoding.
* **3D grounding**: We employ Multi3DRefer [103] as our backbone. We substitute the original visual embedding with outputs from different VFMs and use the same attention-based decoder head for object-text matching.
* **Segmentation**: Given per-point features from our unified architecture, we directly use a lightweight linear probing decoder to output semantic labels for each point.
* **Registration**: We use REGTR [94] as our backbone, adopting its transformer encoder followed by a lightweight decoder to determine the corresponding positions of points between point clouds. However, the original REGTR evaluation with point clouds is not compatible with 2D foundation models. Hence, we modified the problem setting and created our own evaluation dataset, as detailed in Lines 277-285.
For all models, we adjust the dimensions of projection and decoding layers to match the output embedding channels of the VFMs. We thank the reviewer for the suggestion, and will clarify these points in our revised manuscript.
***
2. *Q: The combination of 3D and 2D features should be studied in different tasks.*
In Section 4.2 Line 320 and Figure 8, we studied the combination of two 2D VFMs and one 3D VFM on the semantic segmentation task, and demonstrated that combining 3D and 2D features leads to great improvement in this visual perception task.
Here, as requested, we conduct the combination of 2D and 3D VFMs in registration, grounding, and VQA tasks. We select three models, including 2D, video, and 3D models, and leverage the same setting from Section 4.2.
| 3DQA | ScanQA (CIDEr) ↑ | SQA3D (CIDEr) ↑
| - | - | -
| CLIP | 70.3 | 124.5
| CLIP+Swin3D | 71.4 | 127.2
| CLIP+SVD | 71.8 | 128.5
| CLIP+SVD+Swin3D | **73.3** | **129.9**
| Grounding | Overall (accuracy) ↑
| - | -
| V-JEPA | 52.9
| V-JEPA+Swin3D | 54.1
| V-JEPA+DINOv2 | 53.5
| V-JEPA+DINOv2+Swin3D | **54.9**
| Registration | RRE (°) ↓ | RTE (m) ↓
| - | - | -
| SVD | 0.83 | 0.060
| SVD+DINOv2 | 0.79 | 0.055
| SVD+Swin3D | 0.73 | 0.053
| SVD+DINOv2+Swin3D | **0.71** | **0.050**
Results show that the combination of 2D image, video, and 3D features consistently improved performance across all tasks. These results reinforce our earlier findings and lead to an important observation: the mixture of vision experts, especially those with different modalities, is a simple yet effective method for improving performance across various 3D vision tasks. It also suggests the potential for future research in developing fusion techniques for these complementary features.
***
3. *Q: In 2D visual grounding, results using image features and swin3D features are worse than M3DRef, which model do you use to conduct grounding?*
Thank you for your question. We'd like to clarify several points regarding our visual grounding methodology and results:
* **Backbone Model**: As we elaborated in the answer to Q1, we use M3DRef [103] as our backbone visual grounding method, replacing only the original "vision detection module" with various visual foundation models (VFMs).
* **Comparability**: Note that the numbers achieved by our probing model are not directly comparable to the numbers achieved by M3DRef. The reasons are:
* Multiple visual encoders: M3DRef utilizes both 3D object features and 2D image features during its feature extraction, effectively employing an internal mixture-of-expert mechanism to enhance their visual features and achieve better performance. In contrast, our method uses only one feature map from a single VFM in each experiment.
* Model finetuning: M3DRef finetunes its visual encoders on the training dataset, while we keep the visual encoders fixed and only probe their feature embeddings to clearly demonstrate different VFMs’ generic capabilities.
* While not directly comparable, we include M3DRef's original results as a reference point, to give readers a general idea of how well various VFMs perform in a zero-shot setting relative to an established benchmark.
We will clarify these points in our revised manuscript to prevent any misinterpretation of the results.
***
4. *Q: More 3D encoders should be considered, like point transformer, pointnet++, and sparse conv UNet.*
We appreciate the reviewer's suggestion to consider additional 3D encoders. The encoders mentioned by the reviewer were carefully considered and not included in our submission due to the following reasons:
* Point Transformer and PointNet++:
* These encoders are not typically considered visual foundation models as they lack official, generalized pretrained checkpoints.
* Using ScanNet-specific checkpoints (either trained by ourselves or from third parties) could introduce severe data leakage and lead to unfair comparisons with other visual foundation models.
* Sparse Conv Unet:
* This architecture, exemplified by MinkowskiNet, is already utilized as the backbone for Swin3D [92], which is included in our study.
* We believe that the inclusion of the more advanced and scalable Swin3D effectively covers the capabilities of Sparse Conv UNet.
* **Uni3D**: However, in the general response, we include Uni3D, a **large-scale pretrained object-centric 3D foundation model**. Please refer to the general response for the observations and analysis.
We will include a discussion of these considerations in our revised manuscript to provide clarity on our methodology and model selection criteria.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
The author-reviewer interaction period has started. Please read the responses provided by the authors, respond to them early on in the discussion, and discuss points of disagreement.
Thanks
---
Rebuttal Comment 1.2:
Title: Thank the authors for the rebuttal
Comment: I have read the rebuttal and other reviews. Most of my concerns have been solved, so I will maintain my original score as 6: Weak Accept.
---
Reply to Comment 1.2.1:
Title: Thank you for your positive feedback!
Comment: We appreciate the reviewer for the positive feedback. Your constructive comments and suggestions are indeed helpful for improving the paper.
We will continue to improve our work and release the code. If the reviewer has any follow-up questions, we are happy to discuss! | Summary: This paper conducts a large-scale study to answer the unexplored question: which method (among image-based, video-based, and 3D foundation models) performs the best in 3D scene understanding? The results show that DINOv2 demonstrates superior performance, video models excel in object-level tasks, diffusion models benefit geometric tasks, and language-pretrained models show unexpected limitations in language-related tasks.
Strengths: The paper is well-structured overall. The investigated question is very interesting to me. I also like the extensive experiments involved in this paper.
Weaknesses: (1). What about more advanced object-centric encoders like Segment Anything (SAM) for complex 3D scene understanding? Are they better or worse than LSeg?
(2). Will the results be different when a different probing method is used (e.g., a pyramid network to aggregate multi-scale features from the foundation model)?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see 'Weaknesses'.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of this paper, and this paper has no direct negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments, and address your concerns as follows:
***
1. *Q: How about Segment Anything (SAM)? Does it perform better than LSeg?*
Thank you for the suggestion. Here we include the performance of SAM as a visual foundation model for our evaluation benchmarks. We use the official pretrained model checkpoint with ViT-L as the backbone encoder, matching the model size with other visual foundation models in our submission.
| Model | 3D VQA (CIDEr) ↑ | 3D Grounding (Accuracy) ↑ | Segmentation (mIoU) ↑ | Registration (RTE) ↓
| - | - | - | - | -
| LSeg | 71.0 | 50.4 | 47.5 | 0.59
| SAM | 68.6 | 50.1 | 30.9 | 0.09
With the results shown above, we offer the following analysis:
* First, it is crucial to highlight the fundamental differences between LSeg and SAM. LSeg is designed to conduct language-driven **semantic** image segmentation, providing semantic-aware representations. In contrast, SAM is primarily an **instance** segmentation model that focuses on local representations and excels in detecting edges, as illustrated in **Figure 1** of our PDF rebuttal. These distinctions result in varied performances across the four tasks in our evaluation.
* Among the four tasks, *3D VQA* and *Semantic segmentation* require a deep semantic understanding of the 3D scenes, where LSeg naturally outperforms SAM. For *3D Grounding*, both semantic and spatial understanding are necessary; hence, LSeg and SAM exhibit similar, yet suboptimal, performance in this task. The *Registration* task, however, demands matching point clouds using distinguishable local features. Here, SAM's ability to provide precise local features positions it as a strong performer in this geometry-oriented task.
* Overall, SAM is not well-suited for numerous downstream tasks, particularly those requiring semantic comprehension. This conclusion is consistent with previous studies, such as [A, B]. However, we additionally reveal that it excels in tasks benefiting from robust local feature representation.
* *[A] AM-RADIO: Agglomerative Vision Foundation Model - Reduce All Domains Into One, CVPR 2024*
* *[B] SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding, CVPRW 2024*
***
2. *Q: Will the results be different when a different probing method is used (e.g., a pyramid network to aggregate multi-scale features from the foundation model)?*
We have conducted additional experiments to evaluate the impact of multi-scale feature aggregation on various visual foundation models (VFMs). Our analysis focused on CLIP, SAM, StableDiffusion, and Swin3D, which represent a diverse range of VFMs with different pretraining settings and input modalities.
For CLIP and SAM which use ViTs as backbones, we follow [6,97] and split the layers into four equally sized blocks and extract features after the last three blocks (i.e., 12-th, 18-th, and 24-th layers in ViT-L). For StableDiffusion and Swin3D, the decoding portion of the UNet and MinkowskiNet consists of feature upsampling blocks. We extract features after three of these upsampling blocks.
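The layer-splitting scheme described here can be sketched as follows; the toy "layers" are random maps standing in for transformer blocks, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tokens, n_layers = 64, 16, 24          # toy stand-ins for ViT-L (24 layers)
tap_after = {12, 18, 24}                    # last three of four equal blocks

# Hypothetical "transformer blocks": random linear maps with a nonlinearity.
layers = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_layers)]

x = rng.standard_normal((n_tokens, d))      # toy token embeddings
multi_scale = []
for i, W in enumerate(layers, start=1):
    x = np.tanh(x @ W)                      # stand-in for one block
    if i in tap_after:                      # tap intermediate features
        multi_scale.append(x.copy())

# Aggregate the tapped features by channel-wise concatenation.
agg = np.concatenate(multi_scale, axis=1)
```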
| Model | Feature Configuration | 3D VQA (CIDEr) ↑ | 3D Grounding (Accuracy) ↑ | Segmentation (mIoU) ↑ | Registration (RTE) ↓
| - | - | - | - | - | -
| CLIP | Single-scale | 70.3 | 50.4 | 3.4 | 0.44
| | Multi-scale Aggregation | 70.9 | 51.1 | 3.8 | 0.28
| SAM | Single-scale | 68.6 | 50.1 | 30.9 | 0.09
| | Multi-scale Aggregation | 69.0 | 50.5 | 31.7 | 0.09
| StableDiffusion | Single-scale | 68.2 | 50.6 | 42.6 | 0.09
| | Multi-scale Aggregation | 69.8 | 51.7 | 29.8 | 0.07
| Swin3D | Single-scale | 62.3 | 43.6 | 18.1 | 0.71
| | Multi-scale Aggregation | 70.0 | 51.6 | 35.2 | 0.23
From the results we observe:
* **Performance improvement**: Most models and tasks showed improved performance with multi-scale feature aggregation. This improvement was more pronounced in CNN-based architectures compared to Transformer-based methods.
* **Architectural differences**: CNN-based (or UNet-based) models benefited more due to the complementary nature of features from different convolutional layers with varying receptive fields. In contrast, Transformer-based (or ViT-based) models, with their fully connected and attended layers, showed less significant improvements.
* **Model-specific observations**: StableDiffusion exhibited a performance drop after multi-scale aggregation for the semantic segmentation task. Further analysis revealed that its final layers focus on high-frequency textures, which are unsuitable for tasks like segmentation (yielding only 16.2 mIoU when used alone). These results provide valuable insights for the mixture-of-experts approach, suggesting the importance of carefully selecting which layers or models to include in the expert pool to achieve optimal performance.
We will add these results and discussion into the analysis section of our revised manuscript.
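For concreteness, the aggregation step described above can be sketched as follows. This is a toy numpy sketch with made-up shapes; the helper names (`upsample_nearest`, `aggregate_multiscale`) are ours, not the paper's, and the actual models operate on ViT tokens or sparse voxel features rather than dense CNN-style maps.

```python
import numpy as np

def upsample_nearest(feat, target_hw):
    """Nearest-neighbor upsampling of a (C, H, W) feature map."""
    c, h, w = feat.shape
    th, tw = target_hw
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    return feat[:, rows][:, :, cols]

def aggregate_multiscale(feature_maps):
    """Upsample all maps to the finest resolution, then concatenate channels."""
    target = max((f.shape[1], f.shape[2]) for f in feature_maps)
    return np.concatenate(
        [upsample_nearest(f, target) for f in feature_maps], axis=0
    )

# Toy stand-ins for features taken after three blocks of a backbone.
f1 = np.random.rand(8, 16, 16)
f2 = np.random.rand(16, 8, 8)
f3 = np.random.rand(32, 4, 4)
agg = aggregate_multiscale([f1, f2, f3])
print(agg.shape)  # (56, 16, 16)
```

The design choice here is the simplest one (upsample and concatenate); pyramid networks add learned fusion on top of this, but the channel-stacking structure is the same.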
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
The author-reviewer interaction period has started. Please read the responses provided by the authors, respond to them early on in the discussion, and discuss points of disagreement.
Thanks
---
Rebuttal Comment 1.2:
Comment: Thank you for the rebuttal. My concerns are addressed by the authors. Therefore, I will keep my score.
---
Reply to Comment 1.2.1:
Title: Thank you for your positive feedback!
Comment: We appreciate the reviewer for the positive feedback. Your constructive comments and suggestions are indeed helpful for improving the paper.
We will continue to improve our work and release the code. If the reviewer has any follow-up questions, we are happy to discuss! | Rebuttal 1:
Rebuttal: # General Response
***
We are thankful for the feedback and suggestions from all the reviewers. We are glad that the reviewers recognize our intriguing and meaningful insights for the entire 3D vision and multi-modal community (4P6H, vFsT), representative tasks and wholesome coverage of visual foundation models (yLXP, vFsT), extensive and comprehensive experiments and analysis (q2Jg, 4P6H, vFsT), and well-structured manuscript (q2Jg, 4P6H, yLXP).
We address each of the reviewers’ concerns in the individual response. Here, we would like to highlight the **key objectives and contributions** of our paper:
* Being the first comprehensive study on the role of image, video, and 3D visual foundation models (VFMs) in 3D multi-modal perception and reasoning scenarios, our work focuses on thoroughly understanding the strengths and limitations of different VFMs. Instead of optimizing the performance for a single or a few tasks, we primarily promote the breadth and generalizability of our discovery. We achieve this by employing the most straightforward, simplest, and unified design across a wide range of tasks, including 3D question answering, object grounding, semantic segmentation, and geometric registration.
* In addition to our key observations and insights as demonstrated in our original manuscript (Lines 50-61), we also summarize several universal and generalizable principles: (1) Empirically, no single VFM can uniformly dominate all visual tasks. However, leveraging 2D VFMs in 3D scene perception and multi-modal reasoning tasks consistently yields significant improvements. (2) Pretraining tasks and input modality have significant influence over a foundation model’s strengths and weaknesses. However, the straightforward yet efficient mixture of vision-experts boosts performance for all tasks by effectively leveraging complementary knowledge from different VFMs.
Therefore, we kindly suggest that our contributions lie in presenting the applicability of our discovery and the intriguing emergent behaviors across a wide range of tasks, instead of focusing on scaling up and optimizing individual tasks. We are grateful that reviewers (4P6H,vFsT) approach our paper from this perspective.
**Additional Experiments**: In addressing the reviewers’ main concerns, we provide additional experiments in the individual responses. A summary of new results is shown below:
* **Mixture-of-Vision-Experts (MoVE)**: We validate that combining multi-layer features from the same visual model and features from multiple visual models both lead to a consistent boost of performance across different tasks. This reinforces our earlier findings about the significance of leveraging Mixture-of-Vision-Experts (MoVE) in 3D scene understanding and multi-modal reasoning scenarios. Please refer to the response to reviewers q2Jg and 4P6H for more details.
* **SAM**: We provide the evaluation of the Segment Anything Model and its comparison with LSeg. Due to the max word restriction for a response, please refer to the separate response to reviewer q2Jg for more details.
* **Uni3D**: We provide the evaluation of a latest large-scale object-centric pretrained model Uni3D.
**Implementation**: Following the part segmentation details in Uni3D's appendix (Sec. B), we used Uni3D-giant, selecting features from the 16th, 28th, and 40th (last) layers to form grouped point patches. We then employed PointNet++'s feature propagation to upsample group features into point-wise features. It's worth noting that Uni3D's ScanNet visualizations in their paper were achieved by applying Uni3D to each object instance based on ground truth instance segmentation, not by direct application to the whole scene.
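As a rough illustration of the feature-propagation step mentioned above, here is a minimal numpy sketch of inverse-distance-weighted interpolation in the spirit of PointNet++'s FP layer. Function names, shapes, and the choice `k=3` are ours for illustration, not the implementation used in the paper.

```python
import numpy as np

def feature_propagation(src_xyz, src_feat, dst_xyz, k=3, eps=1e-8):
    """Interpolate features from a sparse point set (src) onto a denser one
    (dst) via inverse-distance weighting over the k nearest source points,
    as in PointNet++'s feature-propagation layer.
    src_xyz: (M, 3), src_feat: (M, C), dst_xyz: (N, 3) -> (N, C)."""
    d2 = ((dst_xyz[:, None, :] - src_xyz[None, :, :]) ** 2).sum(-1)  # (N, M)
    idx = np.argsort(d2, axis=1)[:, :k]                # k nearest sources
    w = 1.0 / (np.take_along_axis(d2, idx, axis=1) + eps)
    w /= w.sum(axis=1, keepdims=True)                  # normalized weights
    return (src_feat[idx] * w[..., None]).sum(axis=1)

# Sanity check: constant source features stay constant after interpolation.
src = np.random.rand(32, 3)
feat = np.ones((32, 4))
dst = np.random.rand(128, 3)
out = feature_propagation(src, feat, dst)
print(np.allclose(out, 1.0))  # True
```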
The results are shown in the following Table.
| Model | 3D VQA (CIDEr) ↑ | 3D Grounding (Accuracy) ↑ | Segmentation (mIoU) ↑ | Registration (RTE) ↓
| - | - | - | - | -
| Swin3D | **70.9** | **51.6** | **35.2** | 0.23
| Uni3D | 63.1 | 51.1 | 2.7 | **0.08**
* **Uni3D Results and Observations**:
* Scene-level tasks (3D VQA and semantic segmentation): Uni3D underperforms compared to the scene-level pretrained Swin3D model. This is likely due to Uni3D's object-centric pretraining recipe, which causes feature extraction to fail on large scenes with orders of magnitude more points than single objects.
* Object-centric tasks (3D object grounding): Uni3D achieves comparable results with Swin3D. However, some grounding questions require not only object-level semantics, but also inter-object relationship and global room information, which Uni3D lacks. We believe combining object-centric and scene-level representations would be a great future direction to achieve better object grounding in a complex 3D scene.
* Registration: Uni3D achieves better performance than Swin3D, suggesting that geometric knowledge from object-centric pretraining generalizes well to scene-level geometric matching, especially given the task's use of downsampled partial scenes bridging the distribution gap between object-level and scene-level point clouds.
Pdf: /pdf/73c637cbd695542e69cd369ccc40d02ffffed83f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Theoretical Foundations of Deep Selective State-Space Models | Accept (poster) | Summary: This work establishes density results of input-controlled differential equations, which formulate different types of state-space models such as S4 and Mamba in the continuous-time idealization. Under this framework, different closures of these models are derived, indicating their distinct inductive biases in the (universal) approximation sense, particularly implying the advantage of data-dependent modeling.
Strengths: 1. This work provides fundamental theoretical justifications on the expressivity of different types of SSMs.
2. The derived inductive biases clearly distinguish the recent data-dependent modeling (e.g. Mamba) from the classic data-independent modeling (e.g. S4) through an input-selective framework, which rigorously shows the superiority of the former architecture.
3. Other theoretical results also lead to useful insights. For instance, diagonal recurrence weakens the approximation capability, but this can be alleviated via stacking.
4. All these insights are basically verified by numerical experiments.
Weaknesses: 1. In theory, only density-type results are provided. That is, although models can approximate certain targets universally as their parameters go to infinity, the convergence *rates* are not characterized. This is much more important, since convergence can be rather slow in certain situations (e.g. curse of dimensionality), leading to possibly vacuous bounds. The theoretical part would be strengthened and more convincing if the authors discussed the approximation rates of SSMs, especially the improvements (in parameter efficiency) of data-dependent modeling.
2. In experiments, the simulations are conducted on low-dimensional (2 & 3) and synthetic datasets. Is it due to the ability of path signature? Can the path signature be powerful and efficient when handling high-dimensional (and real-world) input data?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Please provide more details of questions raised in the weaknesses section above.
2. In formulation, the mentioned "gating" mechanism in this work seems to have a different meaning as usual. Here, gating refers to the transformation of inputs (i.e. data pre-processing). However, in practice, gating often means the (multiplicative) interaction between the hidden states and readout layers (i.e., to get outputs). Can authors explain more about this?
3. Minor issues: What is the definition of a (linear) NCDE (see lines 311, 312, 315, 317, and 643)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As is stated by authors, this work applies a continuous-time idealization in the formulation. The role of concrete discretization schemes is also worthy of explorations, particularly under the case of data-dependent discretization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback on our paper. We appreciate the positive assessment of our work's soundness, presentation, and contribution. Below, we address the raised points.
## Weaknesses
- **Rates of convergence**: While our current theoretical framework in general provides only density-type results, we agree that the characterization of convergence rates is a crucial question. We envisioned this paper as one establishing the language and theoretical groundwork for further analysis in such a direction.
We want to point out that for Theorem 4.2, the bounds are explicit in $N$, as given in equation (55).
Nevertheless, a study of the rates in the general case, particularly for the stacked diagonal systems, would be more technically involved and is considered beyond the scope of the current work. However, we acknowledge its importance and will highlight it as a critical direction for future research.
- **Experiments**: We would like to clarify a potential misunderstanding: our approach does not utilize the path signature as part of the method itself. Instead, we are simply using specific terms in the path signature as the labels for the synthetic datasets.
We opted for low-dimensional, synthetic datasets to provide empirical evidence for the theoretical results in our paper in a simple setting. These experiments were not intended to comment on the effectiveness of the path signature when handling high-dimensional data. There are a number of interesting works which apply variants of the signature to real-world data, with some recent examples being the signature kernel (https://arxiv.org/abs/2006.14794, https://arxiv.org/abs/2006.05805), the path development layer (https://arxiv.org/abs/2204.00740), the randomized signature (https://arxiv.org/abs/2201.00384), and log neural controlled differential equations (https://arxiv.org/abs/2402.18512).
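To make the labeling idea concrete, here is a minimal sketch (our illustration, not the authors' code) of how one such signature term, the antisymmetric level-2 term of a 2D path, can be computed from a discretized path. For a closed loop this term equals the signed area the loop encloses.

```python
import numpy as np

def signed_area(path):
    """Antisymmetric level-2 signature term of a 2D path,
    A = 0.5 * integral (x dy - y dx), approximated by a
    left-endpoint Riemann sum over the discretized path.
    For a closed loop this equals the enclosed signed area."""
    x, y = path[:, 0], path[:, 1]
    dx, dy = np.diff(x), np.diff(y)
    return 0.5 * np.sum(x[:-1] * dy - y[:-1] * dx)

# A counterclockwise unit circle encloses area pi.
theta = np.linspace(0.0, 2.0 * np.pi, 1001)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(abs(signed_area(circle) - np.pi) < 1e-3)  # True
```

Such terms serve as regression targets for the synthetic datasets; the models are trained to map the path to the label, without computing any signature internally.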
## Questions
- See above
- **Gating**: Our definition of gating is aligned with that of the forget and input gates in the LSTM, GRU, and SSM literature. In particular, what the reviewer refers to is known as the output gate; what $\omega$ and $\xi$ modulate instead is how inputs are selected and forgotten. The multiplicative interaction crucial in this setting is $Z_t^X d\omega_t^X$: this can be thought of as a forget gate on the hidden state, with an input-dependent gating function induced by $\omega$. We note that the gate terminology was introduced in the LSTM paper, but is also used in recent SSM papers such as Mamba and Griffin.
- **NCDE**: NCDE stands for Neural CDE. In fact a linear NCDE is just a linear CDE, hence there was no need to mention the “Neural” part – we used this terminology since it is linked to some results on rough path theory applied to neural networks (e.g. https://arxiv.org/abs/2005.08926) but here this connection is not necessary. We thank the reviewer for pointing this out. We will revise the manuscript to use the term “CDE” instead of “NCDE” in the relevant sections to avoid any confusion.
## Limitations
We agree with the reviewer on this point. We believe that our casting of SSMs in the language of Rough Paths theory might open the way for such a study of discretizations, using the honed tools of the field.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I still feel that the convergence rate characterization is much more important than the density analysis. Certainly, eq. (55) derives the dependence on hidden dimensions, but for sequence modeling problems, the approximation rate regarding the *input dimensions* and *dynamical properties* is more crucial, and the numerical verification here is performed only for low-dimensional tasks. Does the current analysis framework have the potential to solve this difficulty?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the message and the additional questions.
We agree that including insights on the effects of width is interesting; see next paragraph. However, note that width alone is insufficient to capture all the high-order terms of the signature in the diagonal (Mamba-like) case. Our objective in this paper, besides completely characterizing the density, is to highlight a fundamental limitation of diagonal models that is independent of width (this is not an upper bound - our result is tight). Our insights can guide further analysis, such as the potential design of minimal epsilon-sparse recurrences that still capture the full effects of the signature without severely affecting speed or parameter count.
Regarding your question: It is possible to derive width-dependent bounds, yet these are unlikely to be tight, similar to what is observed in standard MLPs.
We will develop this in a new subsection in our revision. Techniques and results going in this direction are actively researched by the Rough Paths in ML community. As a prime example, we refer the reviewer to the paper titled "Generalization Bounds for Neural Controlled Differential Equations" (https://arxiv.org/abs/2305.16791). This work provides a generalization bound for a broader class of learners, specifically Neural Controlled Differential Equations (NCDEs), which, as explained in our previous response, are closely related to Linear CDEs, offering a detailed analysis of both generalization bounds and approximation biases. To do so, the data streams and models are studied from a rough path perspective; dynamical bounds are produced by considering Lipschitz properties of the vector fields and the regularity of the input streams via their 1-variation. These are the natural tools to employ with CDEs like the ones we specify in this work.
What we just described is an example of how our work establishes a solid theoretical connection between State Space Models (SSMs) and Rough Paths, setting the stage for further studies, and allowing researchers to leverage the rich literature of existing results. | Summary: This paper proposes a framework for better understanding key features that allow the success of SSMs. To be specific, the authors first show that recent SSM-based models are linear controlled differential equations (CDEs). Then, the expressive power of linear CDEs are explored, depending on whether the matrices $A_i$ are diagonal.
Strengths: * This paper focuses on timely and important research problem.
* I could not fully read the proof, but the results seem correct.
Weaknesses: * It would be better to provide preliminaries on rough path theory for readers who are not familiar with it.
* Although the mathematical results are interesting, I am not sure about the implications of the theoretical results in practice. It would be better if the empirical results are designed to provide the messages that are useful in practice.
Technical Quality: 3
Clarity: 2
Questions for Authors: * How can we derive Eq. 3? Why does an exponential term suddenly appear? Is there a reference for the zero-order hold discretization? (I searched for this terminology, but could not make a connection between it and Eq. 3.)
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: * Check the weakness part & questions part
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback on our paper. We appreciate the positive remarks regarding the timeliness and importance of the research problem we address. We would like to address the points raised concerning weaknesses and questions.
## Weaknesses
- **Preliminaries on Rough Path Theory**: We understand the importance of making our paper accessible to readers unfamiliar with Rough Path Theory. In Appendix A (referenced at line 268), we provide an introduction to the Signature Transform and reference several in-depth studies and presentations of Rough Path Theory in the context of machine learning. Additionally, in Appendix E, we provide a self-contained theory studying the solutions to general Linear CDEs. To enhance clarity and ease of access, we will ensure to reference both Appendix A and Appendix E more prominently in the main body of the paper.
- **Implications in Practice**: Our results fit into an ever-growing theoretical literature on the implicit limitations deriving from architectural choices in the context of SSMs (see e.g. https://arxiv.org/pdf/2404.08819, https://arxiv.org/abs/2406.05045), which we enhance by providing explicit, analytical characterizations and by establishing a theoretical foundation, in terms of Rough Path theory, which we envision as a base for further study. The take-away from this literature is clear: devising a non-diagonal, input-dependent, and efficiently computable transition mechanism would allow you to overcome the expressive limitations of present SSMs without the need for an arbitrarily high number of layers. A promising avenue is using low-rank or even highly sparse weights, as suggested by recent works (e.g. https://arxiv.org/abs/2310.16597, https://arxiv.org/abs/2407.08459, https://arxiv.org/abs/2406.05045).
- **Further experiments**: As a further example of the practical implications of our theoretical results, we have performed additional experiments focused on the A5 benchmark introduced by https://arxiv.org/pdf/2404.08819. This benchmark is designed to evaluate models on their state-tracking, a crucial ability for solving problems that involve permutation composition, such as tracking chess moves. A key result of their paper is that state-space models such as Mamba require stacking in order to perform well on this benchmark. Our experiments have shown that even on the longest sequence length in the benchmark, where Mamba requires 4 stacked layers to achieve >90% test accuracy, a linear CDE with a trainable transition matrix requires only one layer to achieve >90% test accuracy. We intend to include a full discussion of these results in the final version of our paper.
## Questions
- **Derivation of Eq.3 and the Appearance of the Exponential Term**: We appreciate the reviewer's attention to the details of our derivation. A self-contained explanation of Zero-Order Hold (ZOH) discretization is provided in Appendix F (referenced just before 164). To increase clarity, we will highlight the reference to this section in the main body of the paper. We also note that ZOH discretization is a conventional nomenclature for such a scheme in the SSM literature (see e.g. https://arxiv.org/abs/2303.06349).
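For reference, the standard ZOH derivation runs as follows (our notation, which may differ slightly from that of Eq. 3): hold the input constant over one sampling interval and solve the linear ODE exactly by variation of constants.

```latex
% ZOH: hold u(t) = u_k on [t_k, t_k + \Delta] and apply variation of
% constants to \dot{x} = A x + B u:
x(t_k + \Delta)
  = e^{A\Delta} x(t_k) + \left( \int_{0}^{\Delta} e^{A s}\, ds \right) B u_k
  = e^{A\Delta} x(t_k) + A^{-1}\!\left(e^{A\Delta} - I\right) B u_k .
```

The discrete transition is thus $\bar{A} = e^{A\Delta}$ and $\bar{B} = A^{-1}(e^{A\Delta} - I)B$: the exponential is simply the exact flow of the linear ODE over one step, not an ad hoc modeling choice.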
We hope these clarifications address the reviewer's concerns and enhance the overall readability and impact of our paper. | Summary: This paper proposes a framework of using Rough Path Theory to understand the expressivity of SSMs and Mamba. The paper establishes connections to linear CDEs and then uses tools from Rough Path Theory to explain why gates are so powerful in SSM models.
Strengths: This is a really nice theory explaining SSMs. It gives a new way of looking at the expressivity/quality results that have been observed in prior work. Hopefully this theory can lead to new insights about how to design better SSMs in the future.
Really solid work.
Weaknesses: Not much weaknesses in the work itself. The ultimate test of theory is its predictive power - the paper would be stronger if the theory could "close the loop" and propose a modification to SSM layers that would enable further performance. However, this is a high bar to clear, and I think the work stands on its own as a solid contribution even without a methodological contribution.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you think of any improvements to SSM layers/models that the RPT theory would suggest?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback on our paper. We appreciate the positive remarks on the theory's presentation and contribution. We would like to address the points raised regarding “closing the loop”.
Our paper shows how non-diagonal transitions lead to a substantial increase in expressivity. However, this theoretical insight is currently impeded by the reality of computation with dense layers, which is practically infeasible due to the associated computational costs.
At the same time, recent literature (e.g. https://arxiv.org/pdf/2402.19427, section 4.2) has shed light on how the computation of linear diagonal RNNs is dominated by memory transfers. This indicates that there is leeway allowing for an increase in the complexity of the sequential mechanism, presenting itself as a very promising area of research.
Such a mechanism would have to be non-diagonal but efficiently computable. Recent works (e.g. https://arxiv.org/abs/2310.16597, https://arxiv.org/abs/2407.08459, https://arxiv.org/abs/2406.05045) suggest that low-rank or even highly sparse weights should lead to the same limiting behaviors found in the dense case. This is a promising avenue, which we have not studied in this work, as it would require substantial theoretical justification as well as thorough empirical evaluations at scale. As the purpose of this paper is to outline the hypothesis class and power of SSM variants, we decided to leave this avenue for future research.
However, given the importance and interest of such observations, we will augment the concluding section with them in the camera ready version, using the additional available space.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the rebuttal. I will be keeping my high score. | Summary: This work analyses the modeling capability of different SSMs (S4-S6 and others) using Rough Path Theory by viewing SSMs as (input-) controlled differential equations (CDE). To this end, the authors show that SSMs with with dense transition matrices (A) are able to approximate arbitrarily close any continuous functions of the inputs. In contrast, SSMs with diagonal transition matrices lack this property. However, when stacking/chaining multiple diagonal CDEs (as is the case for the deep architectures), full expressivity is gained again as the depth approaches infinity. The work is mostly theoretical, and, hence, there is only a single toy experiment showing RMSE for Mamba and S5 models with depth 1 and 2, showing that the more limited models are not able to solve the toy task, as they cannot approximate the corresponding function.
Strengths: - The paper is generally well written and the authors managed to make the very involved theory and proofs more accessible by walking through the interpretation and takeaways on a more high level.
- The theoretical results are important contributions for categorizing the expressivity of existing SSM models and may potentially inform about future architecture design of SSMs and related models.
Weaknesses: Despite the well-described theoretical contributions and insights, it is still not very clear to me what to take away practically.
In particular, it is not clear how useful the result regarding chaining diagonal CDEs is in practice: while it is important to have a proof that an infinite number of chains of diagonal CDE with mixing can recover the full expressiveness of the dense CDEs, the infinite case is not practically important. Instead, it would be important for practical considerations (computational efficiency vs model expressiveness) to know whether one should use dense CDEs or diagonal CDEs with mixing to achieve a certain level of expressivity. However, as far as I understand, there is no result that quantifies e.g. how many diagonal plus mixing layers have a similar expressiveness as a certain number of dense CDEs. The authors speculate in the summary that architectures with non-diagonality might improve performance, but I do not see how this actually follows from this work, since there is no comparison between diagonal+mixing and dense in the regime of finite number of parameters.
The experimental section is very weak. While this work is mostly theoretical, this work would nevertheless be improved a lot by a greater experimental section that validates the claims. Showing error curves with a single random seed is not that interesting. I would be better to have a table of final performance that includes error bars. Furthermore, it would be great to explore a few different tasks and several ablations such as number of layers. This would make it at least empirically possible to compare dense and diagonal + mixing and random dense + learned C (or MLP) in the finite parameter setting, where its not clear what to take away from the theorems.
Technical Quality: 3
Clarity: 3
Questions for Authors: - what is NCDE? This was not introduced.
- Does the random linear CDE perform best for Area computation, while also being a strong baseline for Volume? I understand that with high probability we can take a randomly initialized A matrix and train only the C matrix (Theorem 4.2) to achieve full expressivity. However, that model should still not be better than a dense CDE where everything is trained, such as S5. Or did you choose S5 with a diagonal transition?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Just a single limitation is mentioned in the conclusion. I am sure, the authors can think of many more limitations, such as my main concern raised above regarding the comparison of different mixed diagonal + mixing vs dense architectures. Or the limited experimental evidence.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback on our theoretical work. We are pleased you found our expressivity results important and our proofs accessible. We spent a considerable amount of time making our manuscript easy to parse despite the high technicality of the content.
## Weaknesses
The reviewer is right in saying that our work does not explore the finite-parameter regime and does not compare architectural options in terms of the number of model parameters needed to achieve a desired level of expressivity. This would be highly desirable but is not an easy task; in addition, results can be confounded by issues such as optimization, inductive biases (OOD generalization), GPU bottlenecks (e.g. memory dependency on sequence length), etc. Indeed, current research has not yet produced such clean comparisons even for Transformers vs. SSMs: the attention mechanism has more parameters than the S6 block in Mamba, but exactly how expressivity compares at a fixed compute budget or model dimension is unclear.
While touching on the issues above is certainly needed when proposing a new model, we would like to point out that the purpose of our work is not to suggest a new mixing component (i.e. dense RNNs linear in the state), but instead to explore the source of limitations of modern sequence-mixing components at generic widths. To do so, we study dense linear (in the state) recurrences, and then dive into the diagonal setting. As such, our dense linear CDE framework is not intended for direct implementation in a deep model, but provides an upper bound on the expressivity achievable with one RNN (SSM) that is linear in the state but nonlinear in the input.
Note that
- **What happens at finite width**. Our results prove that one linear CDE layer can approximate any nonlinear transformation, while diagonal CDEs (e.g. Mamba) cannot. This is however only a subset of our results. Leaving chaining out of the discussion here, our theorems identify the functional form for the dense and diagonal settings in closed form, and this holds at any width. Specifically, the output in the dense setting can be written as $\int_0^t \Phi(\omega^{X}_{[s,t]}) \cdot d\xi^{X}_s$ (eq 7), for a width-dependent function $\Phi$ determined by the model parameters. In the diagonal setting, instead, the functional form of the output is restricted to $\int_0^t \phi(\omega^{X}_t - \omega^{X}_s) \cdot d\xi^{X}_s$. Note that width can only modify the complexity of the nonlinear functions $\Phi$ and $\phi$, but cannot modify the output structure.
- **Experiments**. While we believe that experiments can provide valuable insights in some settings, our objective here is to provide a theoretical foundation, hence the title. We remark that our results are not mere bounds: they provide a tight description of the fundamental elements interacting when outputs are constructed, with model dimension controlling only the functions $\Phi$ and $\phi$ above. We also note that our results are novel, including within the realm of Rough Path theory as an independent mathematical subject. We refer the reviewer to the supplementary PDF for augmented and new results, as discussed in the **general comment**.
## Questions:
- **NCDE**: NCDE stands for Neural CDE. In fact a linear NCDE is just a linear CDE, hence there was no need to mention the “Neural” part – we used this terminology since it is linked to some results on rough path theory applied to neural networks (e.g. https://arxiv.org/abs/2005.08926) but here this connection is not necessary. We thank the reviewer for pointing this out. We will revise the manuscript to use the term “CDE” instead of “NCDE” in the relevant sections to avoid any confusion.
- **S5**. As you stated, we expect that a dense CDE with a trainable transition matrix would be the best performing model on this benchmark. However, the computational burden of training such a model is very high. In practice, S5 is parameterized using a diagonal transition matrix, as discussed in Section 3.2 of the S5 paper (https://arxiv.org/abs/2208.04933). Consequently, the same theoretical results on expressivity apply to both S4 and S5. In fact, any model where $d\omega$ is $1$-dimensional (i.e. any model with only one transition matrix $A_1$) is equivalent over $\mathbb{C}$ to a diagonal model, by virtue of $A_1$ being similar to a diagonal matrix. To obtain the full expressivity results more than one $A_i$ must be present, and these matrices must not be simultaneously diagonalizable. As demonstrated in the proof of Theorem 4.3, in the diagonal case, the relevant terms of the signature are those appearing in its symmetric part. The signature of a $1$-dimensional path being fully symmetric shows consistency of our findings. On reflection, given this equivalence, we believe introducing S5 in our experiments is redundant. To streamline our presentation and strengthen the connection between our theoretical and experimental results, we have chosen to replace S5 with S4 in our experiments. We hope this revision addresses your concerns.
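To make the simultaneous-diagonalizability point concrete, here is a small numpy check (illustrative only, not from the paper): each matrix below is diagonalizable over $\mathbb{C}$ on its own, but the two do not commute, and commuting is a necessary condition for simultaneous diagonalizability, so no single basis change diagonalizes both.

```python
import numpy as np

def is_diagonalizable(A, tol=1e-9):
    """Crude check: diagonalizable over C iff np.linalg.eig returns a
    full (invertible) set of eigenvectors. Works for this illustration;
    a defective matrix yields a (near-)singular eigenvector matrix."""
    _, V = np.linalg.eig(A)
    return abs(np.linalg.det(V)) > tol

A1 = np.array([[1.0, 1.0], [0.0, 2.0]])  # distinct eigenvalues 1, 2
A2 = np.array([[2.0, 0.0], [1.0, 1.0]])  # distinct eigenvalues 2, 1

# Each A_i alone is similar to a diagonal matrix over C ...
assert is_diagonalizable(A1) and is_diagonalizable(A2)

# ... but they do not commute, so they cannot be simultaneously
# diagonalized: a model using both is genuinely non-diagonal.
commutator = A1 @ A2 - A2 @ A1
print(np.linalg.norm(commutator) > 1e-9)  # True
```

This mirrors the point above: with a single transition matrix $A_1$ the model is equivalent over $\mathbb{C}$ to a diagonal one, so full expressivity requires several $A_i$ that are not simultaneously diagonalizable.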
## Limitations:
We will augment the list of limitations in the camera ready version, using the additional available space.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your response.
Note that I did not question the novelty of this work and also agree that the theoretical results are a great contribution, which is why I voted to accept the paper. However, I also remain of the opinion that the theoretical results do not offer a direct and very practical suggestion regarding the design of new architectures. This is because the proofs are indeed *not* independent of the width if you consider practical settings with multiple layers (chaining) and non-linearities in between. I do not want to suggest that such a proof should have been made, because it very likely is not possible. However, it remains unclear from this work, e.g., how much chaining would be sufficient with diagonal matrices and whether this would actually be a problem for Mamba, given that it is a very deep model.
I also understand that the contributions of this work are of a theoretical nature and do not expect a very extensive experimental section. However, I do still think that a bit more than Fig. 1 to support the theoretical findings would have been not only insightful, but important to underscore these findings.
---
Reply to Comment 1.1.1:
Comment: We appreciate your continued feedback and your recognition of the theoretical contributions of our work. We understand your concerns regarding the practical implications of our results and the empirical validation to support our theoretical findings.
**Regarding your point about the practical impact of our work**:
We agree that understanding the implications of chaining diagonal matrices and the expressivity of models like Mamba in practical scenarios with multiple layers is important. While our work does not offer a direct blueprint for designing new architectures, we believe it lays the groundwork for future research in this area by identifying key components that could influence expressivity. In particular, the fact that diagonal systems capture only the symmetric part of the signature could be leveraged to extend the chaining results to the non-linear case: approximating the result with a linear functional on the symmetric part of the signature, then leveraging the algebra structure of the shuffle product. This procedure would show how even $n$ chained diagonal (Mamba-like) layers, interleaved with non-linearities, would fundamentally capture the non-symmetric part of the signature only up to level $n$. Note that width alone cannot capture the higher terms of the signature; what we point out is a fundamental and *width-independent* drawback of diagonal models, a limitation which can serve as a guide for further avenues of analysis. We also note that it may be possible to derive width-dependent bounds; however, those would likely not be tight (as in the standard MLP case).
**Regarding the empirical validation**:
We understand the importance of supporting theoretical claims with empirical evidence. As mentioned in our initial response, we have indeed conducted new experiments, including a novel analysis using the A5 benchmark of Merrill et al. 2024 (the “Illusion of State” paper). These experiments were designed to provide additional insights into the practical performance of different architectures, particularly focusing on the depth required for models like Mamba to achieve high performance on state tracking (a task that cannot be easily solved by attention). We included these results in the supplementary material and in the general response PDF.
Given your feedback, it seems that these new experiments may not have been fully considered. We apologize if we did not emphasize them sufficiently. If this is the case, we encourage you to review the supplementary material PDF and the general response, where these additional results are discussed in detail.
We value your constructive feedback and would be open to any further suggestions on how we could better present or highlight these results in the final version of our paper. | Rebuttal 1:
Rebuttal: We would like to extend our gratitude to all reviewers for their insightful comments and valuable feedback. We appreciate the time and effort invested in evaluating our work. Below, we address the primary clarifications about relevance and practical implications.
- **Our Contribution**. Ours is a theoretical paper studying approximation of sequence to sequence maps with modern (gated) state-space models. Our results are to be inserted in the vast literature on expressivity and computational power of recurrent mechanisms, but with a fresh look at modern architectural components (e.g. the Mamba block). Our results are novel, and our tools draw a strong connection to powerful techniques in Rough Path Theory. We provide a closed-form analytical characterization of the class of learnable functions implemented by SSM variants, and discuss how this is affected by computationally critical choices such as the use of diagonal matrices. Our results on universality hold in the width limit, as common in much of the deep learning literature, but crucially the input mixing mechanics we identify provide valuable insights also at finite width, acting as upper limits to computational power.
While our paper is mainly intended for readers interested in deepening our theoretical understanding of new deep learning blocks, we are also concerned with practical implications and the road ahead for future research:
- **Relevance of the results**. Most importantly, the functional forms we identify using Rough Path Theory reveal how input tokens are processed in dense (idealized) versus diagonal (Mamba-like) models. In diagonal models only first order information about the input sequence is used when producing an output, whereas, in the dense setting the entire history contributes to the computation. This is a significant difference, independent of width. Recent investigations, such as those found in https://arxiv.org/abs/2209.11895, https://arxiv.org/pdf/2402.01032, https://arxiv.org/pdf/2404.08819, and https://arxiv.org/abs/2312.04927, have explored similar distinctions in token processing strategies. Compared to these works, our results also offer tight guarantees on expressive power and on the effects of chaining.
- **New Experiments**. We have rerun the signature prediction experiment for each model with 5 different random seeds. We have augmented our plot of the validation RMSE to show the range of the validation RMSE over the 5 runs, and this can be found in the supplementary PDF file. We have also conducted additional experiments on the A5 benchmark introduced by https://arxiv.org/pdf/2404.08819. This benchmark is designed to evaluate models on their state-tracking ability, which is crucial for tasks involving permutation composition, such as tracking chess moves. A key finding from their paper is that state-space models like Mamba require stacking to perform well on this benchmark. Our experiments demonstrate that even for the longest sequence length in the benchmark, where Mamba needs 4 stacked layers to achieve over 90% test accuracy, a linear CDE with a trainable transition matrix achieves over 90% test accuracy with only one layer. We have included plots of these new results in the supplementary PDF file and we plan to include a broader discussion of these results in the final version of our paper.
- **Practical Implications**. Our results outline some inherent limitations of diagonal recurrent computation. We are not making claims of the type "diagonal is less expressive, use dense", especially since dense recurrences are more expensive. However, the implications and directions for future research are clear: recent studies, such as https://arxiv.org/pdf/2402.19427, indicate that linear diagonal RNNs are memory-bound, with computation dominated by memory transfers. Increasing the complexity of the sequential mechanism is thus a promising research area. Our paper shows that advancements towards efficient (perhaps sparse) non-diagonal computation are supported by increased expressivity. Our analysis also highlights specific components to target for enhancing compute with direct impacts on expressivity. Note that https://arxiv.org/pdf/2404.08819 arrives at similar conclusions with drastically different tools, but without completely identifying the hypothesis classes, as we do here.
Pdf: /pdf/fed588bfcc5786e766af58c5d2c087857c3a0615.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Generative Adversarial Model-Based Optimization via Source Critic Regularization | Accept (poster) | Summary: The paper proposes GABO, a novel Bayesian optimization method for offline model-based optimization problems. GABO regularizes the surrogate model with a source critic so that the BO procedure remains in-distribution. Experimental results validate that GABO outperforms several baselines in terms of mean rank.
Strengths: - Bayesian optimization is a widely used algorithm for black-box optimization but unexplored in offline MBO settings. As far as I know, it is the first paper that improves BO for offline MBO settings
- The mathematical formulation seems to be valid, and a practical algorithm is given
Weaknesses: - It seems that the tasks used for evaluation from the Design Bench are discrete, **biological sequence design tasks**. There are several continuous tasks, such as Superconductor, Ant, and Dkitty, which are also high-dimensional and challenging problems. I think the research scope is then limited to offline biological sequence design, not general offline optimization tasks. Then, authors should compare their method with papers specialized in biological sequence designs, such as BIB[1] and BootGen[2].
- If my understanding is correct, the **evaluation procedure is a bit different from the offline MBO conventions**. The authors first train a surrogate model and choose the top-1 candidate among 2048 candidates according to the predicted score of the surrogate model. As the surrogate model gives inaccurate predictions on the data points outside of the offline dataset, it is not convincing to choose the best one with the surrogate model for evaluation. The authors should elaborate on the reason why they changed the evaluation setting.
[1] Chen, Can, et al. "Bidirectional learning for offline model-based biological sequence design." International Conference on Machine Learning. PMLR, 2023.
[2] Kim, Minsu, et al. "Bootstrapped training of score-conditioned generator for offline design of biological sequences." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 3
Clarity: 2
Questions for Authors: - In the experiment part, there is a new task called Warfarin, which is a conditional offline MBO task. I think the problem setting is practical and important. However, it seems that the proposed method lags behind other baselines in terms of performance. Could authors elaborate on why this phenomenon happens?
- It is also hard for me to accept the results of BONET and DDOM in the Branin task, as both papers deeply analyze the behavior of their methods in the Branin function. Is it due to the different evaluation settings?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: As written in the weakness section, authors should specify the scope of the research and compare their method with proper baselines. Furthermore, the authors should explain why the evaluation setting has been changed.
There are a few minor comments on the manuscript.
- In the related work part, it might be beneficial that authors clearly state the limitations of prior methods rather than just explain the methods.
- In the background part, it might mislead readers if we define offline MBO as solving the optimization problem in Eq (2). There are several methods that formulate the problem as conditional generative modeling. Even for forward approaches, they do not solve the problem in Eq (2) and propose various approaches such as regularization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer yPeb for their thoughtful comments and insights. We address their outstanding questions in our response below.
**Research Scope**
While we focus on the discrete, biological sequence design tasks from Design-Bench, it is important to recognize that GABO and Source Critic Regularization (SCR) can be used for both continuous and discrete tasks (unlike BIB and BootGen). This is why we evaluate SCR and GABO on continuous optimization tasks in our experiments, such as the Branin and Warfarin tasks. In choosing which Design-Bench tasks to evaluate, we focus on the biological sequence design tasks because these tasks have been the most reproducible across recent work, as highlighted by Reviewer bMXC. The research scope is *not* limited to offline biological sequence design.
Nonetheless, we evaluate BootGen on the discrete biological sequence design tasks to compare its performance against GABO on this subset of tasks. Our results are shown here:
| Top $k=1$ Evaluation | LogP | TFBind | GFP | UTR | ChEMBL | Avg. Rank |
| --- | --- | --- | --- | --- | --- | --- |
| BootGen | -59.1 $\pm$ 69.2 | 0.398 $\pm$ 0.002 | **3.60 $\pm$ 0.03** | **7.61 $\pm$ 0.55** | **0.63 $\pm$ 0.02** | 6.3 |
| GABO | **21.3 $\pm$ 33.2** | **0.570 $\pm$ 0.131** | **3.60 $\pm$ 0.40** | 7.51 $\pm$ 0.39 | 0.60 $\pm$ 0.07 | **4.3** |
| Top $k=64$ Evaluation | LogP | TFBind | GFP | UTR | ChEMBL | Avg. Rank |
| --- | --- | --- | --- | --- | --- | --- |
| BootGen | 30.8 $\pm$ 14.2 | 0.401 $\pm$ 0.000 | 3.62 $\pm$ 0.00 | **8.29 $\pm$ 0.57** | 0.65 $\pm$ 0.00 | 6.6 |
| GABO | **98.0 $\pm$ 37.6** | **0.942 $\pm$ 0.026** | **3.74 $\pm$ 0.00** | 8.25 $\pm$ 0.17 | **0.67 $\pm$ 0.02** | **2.1** |
The main conclusions of our manuscript are not changed after the inclusion of BootGen as an additional baseline method. We are unable to compare GABO against BIB due to limitations in the BIB authors' publicly available code implementation.
**Top-1 Experimental Evaluation**
Thank you for this comment. As discussed in **Section 1** of our manuscript, the primary motivation for evaluating the top-1 candidates is that **real-world use cases for offline generative design do not have access to large oracle query budgets**. This evaluation schema is actually studied in other related offline MBO work (e.g., [Kim et al. Proc NeurIPS 2024](https://arxiv.org/abs/2306.03111)). This evaluation schema is important because offline optimization is most helpful when evaluating newly proposed molecules requires expensive experimental laboratory setups, or when we want to optimize a patient's drug regimen without being able to test multiple doses on that patient. In these settings, the evaluation of a large number of designs (sometimes as many as 256) as in prior work is not feasible and not representative of how an algorithm would perform in practice. This change in evaluation setup is also why BONET and DDOM perform worse on the Branin task in this more realistic setting.
In fact, GABO actually performs ***better*** using the more standard top-64 metric (**Supp. Tables B1 and B2**). In particular, GABO achieves an average rank of 2.1, which is better than any other reported method. Thus, our motivation for showing top-1 results is only because we believe it is more practical and relevant for real-world applications of offline model-based optimization.
We also cite Reviewer bMXC's review as well for additional insights. In particular,
- "*One thing I really like about the paper is its evaluation which truly mirrors the offline optimization setting. The paper compares the proposed approach and baselines on a single evaluation from the oracle which I believe is the right way to evaluate algorithms for offline model-based optimization.*"
**Performance on the Warfarin Task**
Thank you for raising this point, and we appreciate that the Reviewer recognizes the importance of introducing practical tasks such as the Warfarin task. Compared with the other tasks assessed, the landscape of the true oracle function for the Warfarin task (a LASSO model reported previously by domain experts) is uniquely smooth and convex, and the trained surrogate likely captures similar properties over the design space. As a result, we hypothesized that this task was more conducive to first-order offline optimization methods, which is exactly what is shown in our results.
**Definition of Offline MBO in Eq (2)**
Thank you for this comment; we used the definition of offline MBO as in Eq (2) because (1) it is the definition of offline MBO that best frames the problem setup for our proposed approach of SCR; and (2) it is the definition consistent with the original Design-Bench publication from [Trabucco et al. CoRR 2022](https://arxiv.org/abs/2202.08450). In related works that study forward approaches to offline MBO (e.g., [Yu et al. Proc NeurIPS 2021](https://arxiv.org/abs/2110.14188); [Trabucco et al. Proc ICML 2021](https://arxiv.org/abs/2107.06882); [Chen et al. Proc NeurIPS 2022](https://arxiv.org/abs/2209.07507)), the authors similarly first consider a motivating problem setup for offline MBO identical to Eq (2), and then extend it to solve a related problem through their own methodological contributions, as pointed out by the Reviewer. This is how we approached framing the problem formulation to motivate SCR as well.
However, we also agree with the Reviewer that a few recent related work have used alternative formulations of offline MBO to motivate their work (e.g., [Kim et al. Proc NeurIPS 2023](https://arxiv.org/abs/2306.03111); [Krishnamoorthy et al. Proc ICML 2023](https://arxiv.org/abs/2206.10786); [Krishnamoorthy et al. Proc ICML 2023](https://arxiv.org/abs/2306.07180)). In the final manuscript, we will be sure to clarify that (1) such other methods for offline MBO exist; (2) we consider one possible definition of offline MBO in Eq (2); and (3) how these additional problem formulations relate to Eq (2) considered in our work.
---
Rebuttal 2:
Comment: Thank you for your detailed response. Unfortunately, several questions remain on the paper. Here are some follow-up questions.
**Research Scope)**
While authors conduct continuous tasks, Branin and Warfarin, those tasks are close to toy experiment settings. Branin is a 2D function, and Warfarin is a smooth convex function. While the superconductor task in Design-Bench has a reproducibility issue, other tasks, such as Ant and Dkitty, do not suffer from that issue.
Furthermore, it seems the results of BootGen significantly deviate from the reported scores. It is hard for me to accept the result that the maximum score for $k=64$ on TFBind8 is 0.401, which is even below the maximum of the dataset. As shown in Figure 4.1 of BootGen, the performance of even $k=1$ seems higher than 0.8.
**Top-1 Experimental Evaluation)**
While I also agree with the motivation that we do not have large oracle query budgets, evaluating top-1 candidates may lead to high variance. Furthermore, using a proxy to select one candidate is also not convincing as a proxy is fragile to out-of-distribution errors.
I strongly recommend that authors compare the proposed method with a commonly used evaluation setting, sample 128 designs without filtering with proxy (I think this part is also a procedure of the proposed algorithm and should not be included in baselines), and report the maximum and median scores.
**Performance on the Warfarin Task)**
The authors say that the oracle function of Warfarin is a convex function. However, the motivation of offline MBO is optimizing high-dimensional and complex black-box functions. I think the Warfarin task may not be appropriate for evaluating MBO algorithms.
Furthermore, it may be appreciated if the proposed algorithm is compared with [1], which also deals with conditional MBO problems.
[1] Chen, Mingcheng, et al. "ROMO: Retrieval-enhanced Offline Model-based Optimization." Proceedings of the Fifth International Conference on Distributed Artificial Intelligence. 2023.
---
Rebuttal Comment 2.1:
Comment: We thank Reviewer yPeb for their continued feedback and discussion of our work. Please find our responses to the questions below.
**On the difference in our results compared with those reported by the BootGen authors:** This difference is because the authors normalize their reported scores and report $z=\frac{y-y_{\text{min}}}{y_{\text{max}}-y_{\text{min}}}$, where $y_{\text{min}}$ and $y_{\text{max}}$ are the worst and best scores in the offline dataset, respectively. In contrast, we report the unnormalized score $y$ in our work. For the TFBind task, an unnormalized score of $y=0.398$ (as reported by our experiments above) corresponds to a normalized score of $z\approx (0.398 - 0.0)/(0.439 - 0.0)\approx 0.906$, which is aligned with the expected results reported in the original BootGen paper.
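The min-max conversion above is easy to verify in a couple of lines (a small sketch; the extrema $0.0$ and $0.439$ are the TFBind dataset bounds quoted in our reply):

```python
def normalize(y, y_min, y_max):
    # Min-max normalization used for the BootGen paper's reported scores
    return (y - y_min) / (y_max - y_min)

# TFBind: unnormalized score 0.398 with dataset extrema [0.0, 0.439]
z = normalize(0.398, 0.0, 0.439)
print(f"{z:.3f}")  # 0.907, i.e. ~0.906 as quoted above
```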
**On the inclusion of experimental results using the Ant and D'Kitty datasets:** We are happy to evaluate our method on the D'Kitty task:
| Method | D'Kitty Score (Top-1 Evaluations) | Final Average Rank |
| --- | --- | --- |
| $\mathcal{D}(\text{best})$ | 199 | N/A |
| Grad. Ascent | -185 $\pm$ 228 | 11.0 |
| L-BFGS | -504 $\pm$ 0 | 9.4 |
| CMA-ES | -503 $\pm$ 0.7 | 11.8 |
| Sim. Anneal | -204 $\pm$ 216 | 7.3 |
| BO-qEI | -140 $\pm$ 184 | 9.9 |
| TuRBO-qEI | -343 $\pm$ 225 | 10.0 |
| BONET | 74.6 $\pm$ 0.0 | 5.9 |
| DDOM | -501 $\pm$ 97.4 | 12.8 |
| COM | 218 $\pm$ 20.4 | 6.8 |
| RoMA | -517 $\pm$ 327 | 10.5 |
| BDI | -59.0 $\pm$ 0.0 | 12.5 |
| ExPT | -70.7 $\pm$ 239 | 10.3 |
| ROMO | -96.1 $\pm$ 332 | 11.3 |
| **GABO** | -10.2 $\pm$ 9.9 | 4.4 |
| Method | D'Kitty Score (Top-64 Evaluations) | Final Average Rank |
| --- | --- | --- |
| $\mathcal{D}(\text{best})$ | 199 | N/A |
| Grad. Ascent | -127 $\pm$ 206 | 14.4 |
| L-BFGS | -504 $\pm$ 0 | 11.9 |
| CMA-ES | 199 $\pm$ 0.0 | 10.0 |
| Sim. Anneal | 199 $\pm$ 0.0 | 9.1 |
| BO-qEI | -4.1 $\pm$ 7.5 | 5.7 |
| TuRBO-qEI | -1.4 $\pm$ 0.6 | 6.4 |
| BONET | 78.1 $\pm$ 0.0 | 6.7 |
| DDOM | 3.4 $\pm$ 1.6 | 9.1 |
| COM | 230 $\pm$ 22.0 | 10.7 |
| RoMA | -225 $\pm$ 68 | 8.7 |
| BDI | -59.0 $\pm$ 0.0 | 12.9 |
| ExPT | 76.6 $\pm$ 65.2 | 8.7 |
| ROMO | 191 $\pm$ 43.6 | 14.3 |
| **GABO** | -2.2 $\pm$ 3.9 | 2.1 |
After including all 8 offline MBO tasks, GABO still ranks as the top method according to both the top-1 and top-64 evaluation metrics. These new experimental results further support the utility of GABO and SCR across a wide variety of different domains.
Regarding the Ant task, we cite [this GitHub issues link](https://github.com/brandontrabucco/design-bench/issues/23) describing a critical issue with the Ant task that we replicated in our own experiments. For this reason, we argue that it would be best to include only results on the D'Kitty task instead of the Ant task, although we are happy to discuss this further with the Reviewer as needed.
**On the high variance of top-1 candidate evaluation:** We run 10 random seeds to reduce the variance of the top-1 estimate. Importantly, running the top-1 sample multiple times with different random seeds is very different from taking the top-k from one random seed; in fact, the latter is potentially still a high-variance evaluation, since the $k$ different samples can be highly correlated. Furthermore, the fact that the top-1 value is high-variance is a *very important reason* to present top-1 candidates with multiple random seeds instead of top-k results, as the use of top-k metrics for $k\gg1$ can misleadingly hide this variability.
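This correlation effect can be seen in a toy simulation (our own illustrative assumption, not from the paper's experiments: candidate scores within one run share a common Gaussian component with pairwise correlation $\rho$):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, k, n_seeds, rho = 20000, 64, 10, 0.9

# Top-k from ONE run: the k candidate scores share a run-level component
# (pairwise correlation rho), so their maximum inherits that shared noise.
common = rng.standard_normal((n_trials, 1))
idiosyncratic = rng.standard_normal((n_trials, k))
scores = np.sqrt(rho) * common + np.sqrt(1 - rho) * idiosyncratic
topk_max = scores.max(axis=1)

# Top-1 averaged over several INDEPENDENT seeds: variance shrinks ~ 1/n_seeds.
seed_scores = rng.standard_normal((n_trials, n_seeds))
top1_mean = seed_scores.mean(axis=1)

print(topk_max.std(), top1_mean.std())  # the correlated top-k estimate is far noisier
```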
Finally, we note that we have included the top-64 candidate evaluations in **Supplementary Tables B1 and B2**, which would be useful for readers that may be interested in settings with a potentially larger oracle query budget. In addition, as requested by the Reviewer, we have also run similar results for top-128; GABO again performs by far the best:
| Method | Top-128 Evaluation |
| --- | --- |
| Grad. | 15.3 |
| Grad. Mean. | 13.9 |
| Grad. Min. | 13.6 |
| L-BFGS | 10.7 |
| BOBYQA | 13.6 |
| CMA-ES | 11.1 |
| Sim. Anneal | 9.4 |
| BO-qEI | 5.6 |
| TuRBO-qEI | 6.3 |
| BONET | 7.8 |
| DDOM | 9.7 |
| COM | 12.0 |
| RoMA | 8.9 |
| BDI | 14.0 |
| ExPT | 9.7 |
| BootGen | 6.0 |
| ROMO | 14.7 |
| Ours | 2.7 |
We will add these additional experimental results to our final manuscript.
---
Reply to Comment 2.1.1:
Comment: **On using a proxy surrogate function to select one candidate:** First, we note that using the proxy surrogate function to rank and select candidates has been used extensively in virtually all prior work that use a proxy forward predictive model for offline MBO (i.e., [Yu et al. Proc NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/hash/24b43fb034a10d78bec71274033b4096-Abstract.html); [Trabucco et al. Proc ICML 2021](https://proceedings.mlr.press/v139/trabucco21a.html); [Fu et al. Proc ICLR 2021](https://arxiv.org/pdf/2102.07970)).
In addition, we agree that naive selection of the design using just the function $f_{\theta}$ is a poor strategy in offline MBO due to out-of-distribution errors. This is why in our method, GABO and GAGA select the optimal designs according to the ***penalized surrogate***, i.e. the Lagrangian $\mathcal{L}(\mathbf{x}; \lambda)$ shown in Equation (8) on page 4 of our manuscript. In this way, proposed designs that are wildly out-of-distribution would have a lower penalized surrogate score, which helps make sampling based on the Lagrangian proxy more robust. Such a "sampling via regularized proxy approach" is also employed by other offline MBO works (e.g., [Trabucco et al. Proc ICML 2021](https://proceedings.mlr.press/v139/trabucco21a.html)). We will be sure to better highlight this feature of our approach in our final manuscript.
Finally, as mentioned above, we have included top-64 results in our manuscript (and will add top-128 results as shown above), and GABO achieves very strong performance on this metric (in fact, stronger than top-1 performance).
**On evaluating baselines without filtering via the proxy:** We clarify that for each baseline, we have used the strategy implemented by that baseline to choose the top-k candidates. In particular, the following baselines use the proxy to rank and select candidates: [Yu et al. Proc NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/hash/24b43fb034a10d78bec71274033b4096-Abstract.html); [Trabucco et al. Proc ICML 2021](https://proceedings.mlr.press/v139/trabucco21a.html); [Fu et al. Proc ICLR 2021](https://arxiv.org/pdf/2102.07970). And, the following baselines use the last $k$ samples: [Mashkaria et al. Proc ICML 2023](https://proceedings.mlr.press/v202/mashkaria23a.html); [Krishnamoorthy et al. Proc ICML 2023](https://proceedings.mlr.press/v202/krishnamoorthy23a); [Nguyen et al. Proc NeurIPS 2023](https://arxiv.org/abs/2310.19961).
**On the inclusion of the Warfarin Task as a new offline MBO task:** We thank the Reviewer for this comment. We note that **the key motivation of offline MBO is optimizing expensive-to-evaluate black-box functions with real-world significance**. In many cases, such real-world functions are high-dimensional and non-convex, as noted by the Reviewer. However, this is by no means a *requirement* for offline MBO to be useful; as demonstrated by the Warfarin task, real-world tasks can also have convex objectives that are *just as important* for offline MBO methods to perform well on. Many prior works on offline MBO have exclusively focused on evaluating nonconvex real-world tasks, although convex real-world objectives also have important utility in evaluating the real-world significance of methods such as GABO.
**On including ROMO as a baseline method:** **Across all 8 tasks, ROMO has an average rank of 11.3 on the top-1 evaluation; 14.3 on the top-64 evaluation; and 14.7 on the top-128 evaluation.** Of specific interest is how ROMO compares with GABO on the Warfarin task, which is an example of a constrained MBO (CoMBO) problem of the kind specifically tackled by ROMO:
| Warfarin Task | Top-1 Evaluation | Top-64 Evaluation | Top-128 Evaluation |
| --- | --- | --- | --- |
| ROMO | -0.71 $\pm$ 2.10 | -0.27 $\pm$ 0.55 | 0.76 $\pm$ 1.91 |
| GABO | 0.60 $\pm$ 1.80 | 1.00 $\pm$ 0.28 | 1.00 $\pm$ 0.03 |
GABO outperforms ROMO on the above evaluation metrics. Furthermore, based on the method rankings across the 8 tasks, ROMO evidently struggles on general, unconstrained MBO tasks (i.e., all tasks assessed in our work other than the Warfarin task). In contrast, our method (SCR) is not specifically designed for such problems, and is a task- and optimizer-agnostic framework for tackling a wide variety of continuous, discrete, and constrained MBO problems.
We thank Reviewer yPeb again for their continued engagement and interest in our work! We would be happy to answer any additional questions they might have. | Summary: This work tackles offline Bayesian optimization using adaptive source critic regularization. The authors propose generative adversarial Bayesian optimization, which optimizes against a learned surrogate model without querying the true oracle function during optimization. It utilizes a Lipschitz-bounded source critic model to constrain the optimization trajectory. Finally the authors provide experimental results against various baseline methods.
Strengths: - The setup of offline Bayesian optimization seems interesting.
Weaknesses: - The rationale behind the proposed method is unclear.
- Details of experiments are missing.
Technical Quality: 2
Clarity: 1
Questions for Authors: - Could you explain how these examples "Evaluating newly proposed molecules requires expensive experimental laboratory setups, and testing multiple drug doses for a single patient can potentially be dangerous" agree with your problem formulation?
- This sentence "Leveraging source critic model feedback for adversarial training of neural networks was first introduced by Goodfellow et al. (2014)" is not true. The authors should revise it.
- I think that this work just proposes a better regression method in the offline setting.
- Can you provide the details of $f_\theta$? Why did you choose such parameterization?
- How does a learned surrogate model provide additional information without querying the true oracle function? Source critic regularization doesn't add anything.
- When do you evaluate a true objective function? Didn't you evaluate query points when comparing your method to baseline methods?
- Equation (10): Is argmin correct? Should it be min?
- Can you explain the rationale behind $\lambda = \alpha / (1 - \alpha)$?
- Tables 1 and 2: How can the baseline methods show the results over the best in $\mathcal{D}$? I think that it is bounded in $\mathcal{D}$.
- Tables 1 and 2: What is the purpose of the row of the best? How can I interpret the results better than the best?
- It is a minor thing, but "bolded" is not a correct word. It should be just "bold."
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: There is no particular societal limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We respond to Reviewer AH6U's comments below:
**Rationale of SCR**
To summarize our work and as discussed in our manuscript, **the primary rationale behind our proposed source critic regularization (SCR) method is to stop optimization algorithms from extrapolating against offline surrogate models in generative design.** We accomplish this by showing how to optimally and tractably balance the tradeoff between exploring the design space for potentially better designs while ensuring we don't over-extrapolate against the learned surrogate model.
**Experimental Details**
We have described the details of our experiments in **Section 5** and **Appendix A** of our manuscript, and also have made our code available for easy reproducibility and transparency. If the Reviewer has any remaining questions on experimental details, please feel free to let us know.
**SCR is not a regression method.**
To clarify, our work is not proposing a regression method; instead, **the primary contribution of SCR is to show how to balance the tradeoff between (1) optimization against a regressor model and (2) staying in-distribution in a computationally tractable way in offline MBO**.
To validate this primary contribution empirically, we make use of generic, task-agnostic surrogate regression models trained using standard machine learning techniques (see **Section 5.2**, L271-5 for details). We do not optimize the hyperparameters of our learned surrogate regression models in any way, and use the same generic parametrization of $f_{\theta}$ across all tasks (L271-2). There are no special training methods that were used in constructing the surrogate regressor models. We are happy to clarify further if the Reviewer has any additional questions regarding the primary motivation and contributions of our work.
> ***How does a learned surrogate model provide additional information without querying the true oracle function? Source critic regularization doesn't add anything.***
The learned surrogate model provides information on the true oracle function because it is trained on historical designs and their corresponding true oracle function scores (represented as the dataset $\mathcal{D}$ in our manuscript). In regions of the design space that contain many examples of these historical designs in $\mathcal{D}$, the surrogate model agrees with the true oracle function because it has been trained on past observations of the true oracle function. However, in regions of the design space that do *not* contain many examples of historical designs, the surrogate model and true oracle function likely disagree due to the problem of **extrapolation**. A good illustration of this is shown in **Figure 1** of our manuscript.
**The value added by source critic regularization is in preventing optimization algorithms from taking advantage of these extrapolated regions of the design space that result in falsely "optimal" designs.** In offline optimization, we want to avoid such designs that "look promising" according to the surrogate but in reality score poorly on the true oracle due to extrapolation of the surrogate model.
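To make this failure mode concrete, here is a minimal, self-contained sketch (ours, not from the manuscript; the sine "oracle" and cubic surrogate are illustrative stand-ins): a surrogate fit only on the narrow region covered by the offline data agrees with the oracle in-distribution but diverges badly outside it, which is exactly the regime a naive optimizer would exploit.

```python
import numpy as np

# Hypothetical "oracle" objective and an offline dataset D covering only x in [0, 2].
oracle = np.sin
x_train = np.linspace(0.0, 2.0, 50)
y_train = oracle(x_train)

# Generic surrogate: a cubic polynomial fit to the offline data.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# In-distribution: surrogate and oracle agree closely.
in_dist_err = np.max(np.abs(surrogate(x_train) - y_train))

# Out-of-distribution: the surrogate extrapolates wildly, so a naive optimizer
# chasing the surrogate's maximum would pick a falsely "optimal" design.
x_ood = 8.0
ood_err = abs(surrogate(x_ood) - oracle(x_ood))

print(f"in-distribution max error: {in_dist_err:.4f}")
print(f"error at x = {x_ood}: {ood_err:.1f}")
```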
> ***When do you evaluate a true objective function?***
For each of the optimization methods (including GABO), we query the true objective function once (for **Table 1**) as the final step after an offline optimization algorithm is finished running and proposes a single design to evaluate using the hidden oracle objective function. This evaluation strategy is aligned with the motivation for this work; after we perform offline optimization experiments to propose a candidate design(s), we need to evaluate these designs with the actual oracle function to see how good these designs actually are. This evaluation schema is consistent with that used in most (if not all) other related work in offline generative design.
**Reparametrization of $\lambda$**
The rationale behind re-parameterizing $\lambda$ in terms of $\alpha$ is that the feasible search space for $\lambda$ is all non-negative real numbers, which cannot be tractably searched over in a clear way. Our approach to solve this problem is to instead re-parameterize $\lambda$ in terms of $\alpha$ so that searching over a finite $\alpha\in[0, 1]$ is "equivalent" to searching over $\lambda\in[0, \infty)$.
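Concretely, $\lambda = \alpha/(1-\alpha)$ is a monotone bijection from $\alpha \in [0, 1)$ onto $\lambda \in [0, \infty)$, so a uniform grid over $\alpha$ covers the full range of $\lambda$. A minimal illustrative sketch (variable names are ours, not the paper's):

```python
def lam_from_alpha(alpha: float) -> float:
    """Map alpha in [0, 1) to the regularization strength lambda in [0, inf)."""
    return alpha / (1.0 - alpha)

# alpha = 0   -> lambda = 0    (no source critic regularization)
# alpha = 0.5 -> lambda = 1    (equal weighting)
# alpha -> 1  -> lambda -> inf (regularization dominates)
alphas = [i / 200 for i in range(200)]         # finite uniform grid over [0, 1)
lambdas = [lam_from_alpha(a) for a in alphas]  # monotone, spanning [0, 199] here
```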
**Interpretation of $\mathcal{D}$(best) in Tables 1 and 2**
Recall that $\mathcal{D}$ refers to the dataset of historical designs and their corresponding true oracle function scores that was used to train the learned surrogate model. In Tables 1 and 2, $\mathcal{D}(\text{best})$ refers to the best oracle function score observed from this set of prior historical designs. For example, in **Table 1**, the value of $\mathcal{D}(\text{best})=11.3$ for the LogP task means that out of all the 79,564 unique molecules and their corresponding oracle LogP values in the base offline dataset associated with the LogP task, the maximum score achieved by any given molecule is 11.3.
Therefore, if an optimization algorithm like GABO proposes a design with an oracle score greater than the best in $\mathcal{D}$ (e.g., a molecule with an oracle LogP score greater than 11.3), this means that we have "discovered" a new design not previously seen in the historical dataset that is even better than the best design in the historical dataset. Finding designs better than $\mathcal{D}(\text{best})$ is the main goal of offline optimization for generative design.
Our choice of notation in reporting $\mathcal{D}(\text{best})$ is consistent with that used in other related work ([Krishnamoorthy et al. Proc ICML 2023](https://arxiv.org/abs/2206.10786), [Krishnamoorthy et al. Proc ICML 2023](https://arxiv.org/abs/2306.07180), [Trabucco et al. Proc ICML 2021](https://arxiv.org/abs/2107.06882), [Chen et al. Proc NeurIPS 2022](https://arxiv.org/abs/2209.07507)).
---
Rebuttal Comment 1.1:
Comment: Hi, hope you're doing well! Thank you again for your feedback and consideration of our manuscript. We hope that we have been able to answer your questions, and highlight how GABO and SCR may be a useful resource for the scientific community. We wanted to check if there are any remaining questions we can help address during this discussion period? | Summary: This paper considered an offline optimization problem where a surrogate objective function instead of the oracle objective can be queried. The surrogate objective function is trained using a reference offline dataset and thus may falsely predict the optimum due to overestimation errors. To resolve this issue, this paper proposed a Generative Adversarial Bayesian Optimization (GABO) algorithm which exploits the adaptive Source Critic Regularization (SCR) to achieve robust oracle function optimization. The proposed method is demonstrated to outperform other tested offline optimization baselines on several synthetic and real-world datasets.
Strengths: 1. The idea of adding a regularization penalty to the original offline optimization problem is interesting. The technical issues are clearly introduced.
2. The empirical performance improvement of the proposed GABO is significant compared to the other BO and model-based optimization algorithms.
Weaknesses: I have reviewed this paper before and had multiple rounds of discussions with the authors in the previous review process. My major concern is still about the motivation and positioning of this work. The key contribution of this work is the design of the surrogate objective in (7) and its dual formulation in (8) for a model-based optimization (MBO) problem. This new objective is independent of BO and can actually be optimized using zeroth-order, first-order, or any other optimization algorithms. Even though this paper has highlighted and empirically shown that BO is an effective choice among multiple optimization methods, I still think the combination of SCR and BO is trivial and should not be the focus of this work.
I cannot consider the proposed GABO algorithm as a novel **BO** framework since all the new techniques proposed are not specifically designed for BO. The challenges of applying BO in the generative adversarial optimization problem are unclear. I highly suggest that the authors reconsider the title and motivation of this work so that its contributions can be clearer.
Technical Quality: 3
Clarity: 2
Questions for Authors: In Lines 210-213, it is mentioned that the objective of GABO is time-varying, which is an interesting and non-trivial issue of applying BO to this problem. Do you have any idea about how to resolve this issue?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are clearly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Yd2B for their thoughtful review of our work, and appreciate that they recognize our proposed source critic regularization (SCR) formulation and strategy as a key contribution of our work.
**Framing of the Manuscript's Narrative**
We agree that our main contribution is Source Critic Regularization (SCR) in **Algorithm 1**, which is optimizer-agnostic and can be used with any optimization method. Our experimental evaluation of SCR using Bayesian Optimization (BO) is because we find it to be the most effective optimizer to use in conjunction with SCR, likely due to BO's well-documented effectiveness as an offline optimizer as outlined in **Section 3.2** (L110-7). Indeed, as acknowledged by the Reviewer, we have already included evidence of the effectiveness of BO compared to alternatives such as Gradient Ascent (GA); in particular, our comparison to Generative Adversarial Gradient Ascent (GAGA) in **Supplementary Table B5**. In addition, GABO also outperforms vanilla BO in all our experiments, demonstrating that SCR has significant added value as a methodological contribution.
In alignment with the feedback from the Reviewer, we are happy to revise the title of our work to the following: **Generative Adversarial Model-Based Optimization via Source Critic Regularization**. This improvement will help better emphasize that our main contribution is SCR independent of the choice of baseline optimizer. We will also do our best to revise our paper to make it more clear that SCR is our main contribution, and that BO is simply the most effective instantiation of our method.
**Future Work on Time-Varying Optimization**
Thank you for this comment; indeed time-varying BO is an interesting and nascent field of research and a complex problem in general. We hypothesize that there may be opportunities to exploit existing methods in time-varying BO for SCR if the updates to the source critic function $c^*$ follow some sort of pattern over time. In practice, we were unable to identify any such patterns in preliminary studies largely due to the limited number of updates to $c^*$ made over the course of the optimization process. Furthermore, factoring in time-varying BO was not necessary to demonstrate that GABO and SCR are effective empirically. Given the complexity of this problem and the empirical success of GABO and SCR already demonstrated in our work, we leave this opportunity to explore time-varying BO for future work.
---
Rebuttal Comment 1.1:
Comment: Hi, hope you're doing well! Thank you again for your continued consideration of our manuscript. We hope that the title change and focus of SCR as the primary contribution of our work better align with your helpful feedback. We have also included [new results](https://openreview.net/forum?id=3RxcarQFRn&noteId=n2sQJLH7uv) comparing both GABO and GAGA against other baseline optimization methods to help readers better assess the utility of SCR in the main text of our updated manuscript. We wanted to check if there are any remaining questions we can help address during this discussion period? | Summary: The paper considers the problem of offline black-box optimization where only limited (zero-shot or few-shot) online interactions with the objective function is available. Existing approaches commonly train a neural net parametrized surrogate model of the objective using the offline data. The paper proposes to use a source critic model, inspired by discriminator training in generative adversarial networks, to regularize this surrogate. This is accomplished by formulating a constrained optimization problem which constrains the optimization (over the surrogate model) trajectory samples to be similar to the training data using the source critic model. The Lagrangian version of this constrained optimization is used as the objective for a standard batch mode Bayesian optimization algorithm. Experiments are performed on tasks from Design-bench benchmark and a new Warfarin task.
Strengths: - One thing I really like about the paper is its evaluation which truly mirrors the offline optimization setting. The paper compares the proposed approach and baselines on a single evaluation from the oracle which I believe is the right way to evaluate algorithms for offline model-based optimization.
- The proposed approach shows good performance across all the tasks on both zero-shot and few-shot evaluation. The ablation study in 5.5 also shows that adaptively tuning the lagrangian coefficient is also important for good performance.
- The constrained optimization formulation is well-formulated and principled approach to tackle this problem. The design choice of constraining the surrogate model using the source critic model makes sense and is relevant for the problem.
- Some of the tasks in the design-bench benchmark have multiple errors, which makes them not so informative for evaluation. I like that the paper didn't include the superconductor task, where the original offline dataset itself has multiple copies of the same inputs but with different outputs. Similarly, I think the ChEMBL and UTR tasks are also not useful. In my practical experience, samples searched over ChEMBL generate a lot of syntactic errors. I like the fact that the paper introduces new tasks like LogP and Warfarin, which will be useful to the broader community.
Weaknesses: - One question I have is about the broader picture of using Bayesian optimization (BO) for this problem space. Why should we first train the neural network surrogate model and then fit the Gaussian process (GP) on top of the neural surrogate model to do a few steps of BO? Why can't we just fit a GP (with a latent space, presumably, to handle high/structured dimensionality) directly on the offline data and do one step of BO? I am assuming the TuRBO-qEI baseline doesn't do that and also fits TuRBO's GP on the neural network surrogate.
- It would be nice to see more justification for the BO hyperparameters (like sampling budget (T) and batch size (b)). Is the method sensitive to the choice of these parameters?
- One thing I like about the paper is that it can work with any choice of the surrogate model. I would also like to point out one very recent related work that also constrains the optimization given any surrogate model by formulating the search as an offline RL problem.
- Yassine Chemingui, Aryan Deshwal, Nghia Hoang, Janardhan Rao Doppa. Offline Model-based Black-Box Optimization via Policy-guided Gradient Search. Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI), 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer bMXC for their thoughtful comments and insights, and share their enthusiasm for the proposed significance and strengths of our work. We address their outstanding questions in our response below.
**The Role of the Surrogate NN in MBO**
We thank the reviewer for this comment. We use this setup for generality; in many applications, the surrogate objective is not under our control. For instance, it might be a physics simulator or other domain-specific function that has been optimized by domain experts to give more information about the hidden oracle function across the entire design space than what can be accomplished with GPs alone. (Note that task-specific optimization of the surrogate is *not* done in our work.) By fitting a GP to this surrogate instead of just the observations of the hidden oracle function, we can take advantage of the information encoded in a more complex (and hopefully accurate) surrogate.
To demonstrate this empirically, we explore this potential strategy of using the single GP from BO directly as both the surrogate function *and* for BO sampling. We leverage Source Critic Regularization (i.e., **Algorithm 1**) using this framework. We refer to this as Generative Adversarial Bayesian Optimization with a GP Surrogate (GABO GP-Surrogate), and compare this method with GABO:
| One-Shot Oracle Evaluation | Branin | LogP | TFBind | GFP | UTR | ChEMBL | Warfarin | Avg. Rank |
| -------------------------- | ------ | ---- | ------ | --- | --- | ------ | -------- | --- |
| **GABO GP-Surrogate** | -37.4 $\pm$ 4.4 | -57.9 $\pm$ 159 | **0.576 $\pm$ 0.058** | 3.51 $\pm$ 0.69 | 6.84 $\pm$ 1.24 | **0.65 $\pm$ 0.01** | -0.27 $\pm$ 2.13 | 7.7 |
| **GABO** | **-2.6 $\pm$ 1.1** | **21.3 $\pm$ 33.2** | 0.570 $\pm$ 0.131 | **3.60 $\pm$ 0.40** | **7.51 $\pm$ 0.39** | 0.60 $\pm$ 0.07 | **0.60 $\pm$ 1.80** | **4.3** |
| $k=64$-Shot Oracle Evaluation | Branin | LogP | TFBind | GFP | UTR | ChEMBL | Warfarin | Rank |
| ----------------------------- | ------ | ---- | ------ | --- | --- | ------ | -------- | --- |
| **GABO GP-Surrogate** | -8.4 $\pm$ 1.6 | 0.66 $\pm$ 0.01 | 0.720 $\pm$ 0.068 | 3.74 $\pm$ 0.00 | **8.27 $\pm$ 0.08** | 0.66 $\pm$ 0.01 | 0.98 $\pm$ 0.05 | 6.6 |
| **GABO** | **-0.5 $\pm$ 0.1** | **98.0 $\pm$ 37.6** | **0.942 $\pm$ 0.026** | **3.74 $\pm$ 0.00** | 8.25 $\pm$ 0.17 | **0.67 $\pm$ 0.02** | **1.00 $\pm$ 0.28** | **2.1** |
As we can see, using a more complex, neural-network surrogate function for GABO leads to better optimization results than directly using the GP as the surrogate function.
Our evaluation strategy of using a neural network surrogate for GABO is also identical to the one used in the existing offline MBO literature to make it easier to compare our work with pre-existing methods: for instance, [Trabucco et al. (ICML 2021)](https://arxiv.org/abs/2107.06882), [Yu et al. (NeurIPS 2021)](https://arxiv.org/abs/2110.14188), [Chen et al. (ICML 2023)](https://arxiv.org/abs/2301.02931), [Krishnamoorthy et al. (ICML 2023)](https://arxiv.org/abs/2306.07180), and [Krishnamoorthy et al. (ICML 2023)](https://arxiv.org/abs/2206.10786) in addition to our work.
**Sensitivity to Sampling Budget and Batch Size**
Thank you for this comment. Prior work has shown that BO methods are relatively robust to perturbations in the batch size and other hyperparameters (i.e., Figure 7 in [Eriksson et al. (NeurIPS 2019)](https://arxiv.org/abs/1910.01739)), and we have observed that this similarly applies to GABO. We will include a discussion on the sensitivity to the optimization hyperparameters in an updated version of our manuscript.
**Additional Citation**
Thank you for sharing this related work from Chemingui et al. with us. We will include a citation to this work and appropriate associated discussion in the final manuscript.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the response to my questions. I am happy with the response and believe this paper will be a good addition to the offline model based optimization literature. Hence, I will keep my positive score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your careful consideration and support of our work! | Rebuttal 1:
Rebuttal: # Summary of Revisions Made to the Paper
We thank the Reviewers for their thoughtful comments and consideration of our paper. We are grateful that the Reviewers find our method novel (Reviewer czFU, bMXC, Yd2B, yPeB), well-justified (Reviewer bMXC, Yd2B, yPeB), well-written (Reviewer U2YG), and a useful resource (Reviewer bMXC, yPeB) for the broader scientific community.
We have the following general comments, including several changes we have made to our manuscript to address reviewer comments:
1. Several reviewers asked about the importance of Bayesian Optimization (BO) in our framework. Indeed, our main contribution is to use Source Critic Regularization (SCR) for Model Based Optimization (MBO), as illustrated in **Algorithm 1**. While SCR can be applied to several different optimization methods, we find BO to be the most effective (**Section 3.2**, **Supplementary Table B5**). This motivates using BO as our vehicle to evaluate SCR. To illustrate that SCR can also be used with other optimization methods, we have also demonstrated how SCR can be used to improve upon Gradient Ascent, resulting in Generative Adversarial Gradient Ascent (GAGA). To better emphasize that our main contribution is SCR, we are happy to change our title to **Generative Adversarial Model-Based Optimization via Source Critic Regularization**.
2. We have included a plot of the distribution of scores for the Penalized LogP task for different optimizers in the attached PDF file for the interested reader to analyze how the distribution of oracle scores vary across different methods.
3. We have included further justification for our top-1 evaluation of designs using the oracle, which is different from prior work as noted by some Reviewers. Similar to Reviewer bMXC, we believe that this evaluation metric is the "right way to evaluate algorithms for offline model-based optimization." Nevertheless, we emphasize that we have already included top-64 scores in **Supplementary Tables B1 and B2**, which is a "more common" metric in related work. In fact, GABO performs even better in terms of the top-64 metric than it does at our top-1 metric; our focus on top-1 is solely because we believe it to be more relevant to real-world tasks.
4. We have included additional experimental details regarding the computational time and resources to run our experiments compared with baseline optimization methods, which we discuss in more detail in responses to individual reviewers.
We look forward to discussing further with the Reviewers to answer any outstanding questions or concerns. Thank you!
Pdf: /pdf/314ba2f2291496b480f1807a8061337dacb68800.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a novel approach for offline MBO that combines Source Critic Regularization and Bayesian Optimization. Offline MBO aims to train a surrogate model from offline data and subsequently extrapolates it to find an optimal design (as opposed to online MBO which collects more data as it trains the surrogate model). One key challenge of offline MBO is knowing when to trust the surrogate model, since it could be highly erroneous at ood inputs. This paper handles said challenge by incorporating a critic constraint that penalizes candidate that are not sufficiently similar to the training samples.
Strengths: - Clear description, easy to follow
- Practical and novel method for MBO. As a regularization method it can be incorporated into many other MBO frameworks.
- The hyperparameter $\lambda$ is chosen methodically
- Perform well on 100th percentile metric
Weaknesses: - The method doesn't seem to be very good on the 90th percentile metric (i.e., mean rank ~ 8). Does this mean that the method is very good at picking out high-reward outliers, but the overall quality of the surrogate is about the same as everything else? If that's the case, is there any justification for this behavior? I would prefer if the authors plotted out the entire scatter plot of 100 candidates for each method, as that would allow us to compare performance on a distributional level.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Why run BO on the learned surrogate when it is already differentiable and can be numerically optimized using GA? (I saw the ablation study on GAGA but just want to hear an argument for the benefit of GABO).
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No potential negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer U2YG for their thoughtful feedback on our work, and address their outstanding questions and concerns below.
**Performance of GABO on the 90th percentile metric**
Unlike the other evaluation metrics assessed in our work, GABO indeed does not perform as well on the 90th percentile metric. However, this is not surprising since GABO is not designed to target this metric (and it is not our metric of interest). In particular, in our analysis, we have found that this is largely due to the nature of the underlying Bayesian optimization (BO) algorithm. Because BO is not an iterative first-order algorithm, the designs proposed by any BO-based algorithm often have high variance in practice. This is what we observe across all of our experiments, including in **Table 1** and **Supplementary Tables B1 and B3**.
We also note that in most offline optimization applications, the 90th percentile metric (or any metric that does not focus on the best proposed design(s)) is not as useful as the other metrics assessed, where GABO *does* perform well. This is because in offline optimization tasks with a restricted budget to query the hidden, expensive-to-evaluate oracle function, we are not interested in "wasting" this budget on subpar design candidates. While the 90th percentile and similar metrics can be helpful to understand the limitations of algorithms (as in this case), we believe that the 100th percentile metrics reported in the main text are more useful and practical in assessing each of the optimization algorithms.
For completeness, we also show the distributions of the proposed designs across all of the optimization methods for the LogP task to allow interested readers to compare the performance of the different methods on a distributional level. The plot is available here: [Link](https://bashify.io/files/dxnMON). We will include this result in the final manuscript.
**Why run BO on the learned surrogate when it is already differentiable and can be numerically optimized using GA?**
Thank you for this question. We choose to use BO for the basis of our work because it is a well established principle in recent Bayesian Optimization (BO) literature that BO is useful even for differentiable objectives, outperforming baselines such as GA [Eriksson et al. (NeurIPS 2019)](https://arxiv.org/abs/1910.01739), [Maus et al. (NeurIPS 2022)](https://arxiv.org/abs/2201.11872), [Hvarfner et al. (2024)](https://arxiv.org/abs/2402.02229), [Eriksson et al. (UAI 2021)](https://proceedings.mlr.press/v161/eriksson21a/eriksson21a.pdf), [Astudillo et al. (ICML 2019)](https://arxiv.org/abs/1906.01537). It is the dimensionality and non-convexity of the search space that makes the optimization problems in our benchmark challenging, leading first order methods like GA to struggle. Based on these results, we believe BO is a natural basis for our work. We discuss this in detail in **Section 3.2** (L110-7).
Nevertheless, we emphasize that our proposed source critic regularization (SCR) algorithm can be used with other optimization methods as well, such as Gradient Ascent (GA) as suggested. Indeed, as already noted by the Reviewer, we also evaluated SCR with GA, which we call Generative Adversarial Gradient Ascent (GAGA), and include results in **Supplementary Table B5**. Comparing GABO with GAGA helps motivate our decision to use BO for experimental evaluation of our SCR algorithm. To better emphasize that our main contribution is SCR, we plan to revise the title of our work to the following: **Generative Adversarial Model-Based Optimization via Source Critic Regularization**.
---
Rebuttal Comment 1.1:
Comment: Hi, hope you're doing well! Thank you again for your feedback and consideration of our manuscript. We hope that (1) the additional distribution plot results included in the summary of revisions; and (2) the detailed rationale of why we evaluate SCR with BO has helped address your initial concerns. We wanted to check if there are any remaining questions we can help address during this discussion period? | Summary: This paper proposes to use an adversarial regulariser in the BO setting. In particular, the authors propose a systematic way to compute the regularization parameter through a lagrange duality. Overall the regularized method performs on average better than existing methods across a suite of benchmark datasets.
Strengths: - The paper is the first to propose an adversarial regularization objective for BO.
- The method is well motivated and a practical way to obtain the hyperparameter is given, albeit through a grid.
- The proposed method performs on par with baselines and on average is better across several benchmark datasets.
Weaknesses: - To clarify, the way you choose alpha is through a grid. Wouldn't that mean your computational cost is significantly larger than that of baseline methods? Please add a section on computational time as well as cost to better understand the tradeoffs of the proposed method over existing methods, and please clarify any misunderstandings. Especially this part: "To ensure a fair evaluation schema, all MBO methods were evaluated using a fixed surrogate query budget of 2048". Can you confirm that no extra data was used to obtain the alpha in your algorithm? There might be a misunderstanding here, in particular regarding how exactly to pick the best alpha based on the grid of 200 and the associated costs.
- VAEs as well as GANs are often hard to train and need very specific architectures to work well. How were these chosen, and is there any ablation study? How does the quality of "c" affect the problem setting? My concern is that there are so many moving parts that it becomes hard to understand what the contribution of each part is.
- I don't understand D(best): how come some values in the table are able to be better than the oracle? Am I reading the table correctly?
I am more than happy to change my score if the above have been clarified
Technical Quality: 3
Clarity: 3
Questions for Authors: see above
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer czFU for their thoughtful review of our work, and appreciate that they recognize the technical and empirical contributions of GABO compared with prior work. Please find our responses to their questions below.
**Search for $\alpha$**
Importantly, we confirm that no extra data is used to compute the value of $\alpha$ in our algorithm: because only the source critic $c^*$ is retrained dynamically over the optimization process, we only need to query the inexpensive neural network $c^*$ when computing $\alpha$.
We also confirm that we compute $\alpha$ through a grid search. However, in our implementation used for our experiments (which will be made publicly available), the grid search to compute $\alpha$ is highly vectorized such that the computational time is non-limiting even using a single-GPU setup. To benchmark our implementation, we evaluate BO and Gradient Ascent (GA), each with and without our generative adversarial source critic regularization algorithm, on the Branin and Penalized LogP optimization tasks:
| **Method** | **Branin Task Compute Time (sec)** | **LogP Task Compute Time (sec)** |
| --- | --- | --- |
| BO-qEI | 92.1 $\pm$ 10.2 | 965 $\pm$ 17.9 |
| GABO | 328 $\pm$ 146 | 1245 $\pm$ 55.2 |
| GA | 9.68 $\pm$ 0.23 | 765 $\pm$ 6.64 |
| GAGA | 75.6 $\pm$ 25.4 | 818 $\pm$ 10.5 |
Compute time is reported as mean $\pm$ standard deviation over 10 random seeds. As a reminder, the Branin task is a standard benchmarking task for offline optimization, and the Penalized LogP task is subjectively the most challenging task assessed in our manuscript with the highest dimensional design space out of the seven assessed tasks.
While there are obviously additional compute costs associated with running our Source Critic Regularization method (i.e., **Algorithm 1**), as made evident by the above results, we note that in most applications of offline optimization, obtaining labeled data is the main bottleneck in many practical applications; thus, it is often worth spending this extra compute to ensure the best results for the given budget.
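As a rough sketch of what the vectorized $\alpha$ grid search described above can look like (this is our illustrative reconstruction, not the authors' implementation; the regularized score $(1-\alpha)\,f_\theta(x) - \alpha\,c(x)$ and the selection rule are hypothetical stand-ins for the paper's objective):

```python
import numpy as np

def best_alpha(surrogate_vals, critic_penalties, n_grid=200):
    """Pick alpha from a uniform grid by scoring every candidate at once.

    surrogate_vals:   (n_candidates,) surrogate scores f_theta(x) for sampled designs
    critic_penalties: (n_candidates,) source-critic penalties (higher = more OOD)
    """
    alphas = np.linspace(0.0, 1.0, n_grid, endpoint=False)          # (n_grid,)
    # Broadcast: one regularized score per (alpha, candidate) pair, no Python loop.
    scores = ((1.0 - alphas)[:, None] * surrogate_vals[None, :]
              - alphas[:, None] * critic_penalties[None, :])        # (n_grid, n_cand)
    # Hypothetical selection rule: the alpha whose best candidate scores highest.
    return alphas[np.argmax(scores.max(axis=1))]
```

Because the whole `(n_grid, n_candidates)` score matrix is produced by a single broadcasted operation, sweeping 200 values of $\alpha$ costs only one extra array operation per step, consistent with the authors' claim that the grid search is non-limiting on a single GPU.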
**Selection of Model Hyperparameters**
The exact same generic FCNN-based source critic as introduced by the initial WGAN authors [Arjovsky et al. (2017)](https://arxiv.org/abs/1701.07875) is used across all of the different tasks assessed in our work. Similarly, we perform **no** hyperparameter fine-tuning of VAEs and use standard task-agnostic VAE architectures across the different tasks in our work. By using generic model architectures that have *not* been optimally tuned to each task, we can focus on assessing our contributions that are primarily algorithmic in nature. That is, the VAEs and source critic models may perform well on certain tasks and may perform poorly on others that were assessed. By evaluating our source critic regularization algorithm across a wide variety of different benchmarking tasks, we are able to give a good picture of the "average" performance of GABO and other optimizers independent of how well VAEs and source critics are tuned to the specific optimization task at hand.
**What is D(best)?**
In our results, $\mathcal{D}$(best) refers to the best oracle score achieved by any previously observed design in the offline dataset. For example, in **Table 1**, the value of $\mathcal{D}_{\text{best}}=11.3$ for the LogP task means that out of all the 79,564 unique molecules and their corresponding oracle LogP values in the base offline dataset associated with the LogP task, the maximum score achieved by any given molecule is 11.3. Therefore, if an optimization method (such as GABO) proposes a molecule design that achieves an oracle LogP score greater than 11.3, then we have found a design better than any of the designs previously seen in the offline dataset. Our choice of notation is consistent with that used in related work ([Krishnamoorthy et al. Proc ICML 2023](https://arxiv.org/abs/2206.10786), [Krishnamoorthy et al. Proc ICML 2023](https://arxiv.org/abs/2306.07180), [Trabucco et al. Proc ICML 2021](https://arxiv.org/abs/2107.06882), [Chen et al. Proc NeurIPS 2022](https://arxiv.org/abs/2209.07507)).
---
Rebuttal Comment 1.1:
Title: response
Comment: Thanks for the rebuttal.
All my questions have been addressed and hence I am happy to increase my score to 6.
---
Reply to Comment 1.1.1:
Comment: We are grateful for Reviewer czFU's thoughtful consideration and support of our manuscript. Thank you! | null | null | null | null |
Spatiotemporal Predictive Pre-training for Robotic Motor Control | Reject | Summary: This paper studies how to extract useful visual features from out-of-domain, action-free human videos to enhance robotic visuomotor control. Specifically, the authors argue that naively extracting spatial features via MAE is insufficient for robotic control; in contrast, jointly capturing spatial content and temporal movement is more effective. To this end, the authors propose STP, a new self-supervised learning method that simultaneously performs MAE on the current frame to extract spatial information and predicts future frames to extract temporal motion cues. The overall motivation, idea, and method are straightforward and reasonable. The authors evaluate STP on diverse benchmarks comprising 21 tasks spanning simulation and real-world settings, using imitation learning.
Strengths: 1. The paper is well-motivated, highlighting the importance of pre-training visual features for robotic foundation models.
2. The logic in the paper is clear and easy to follow.
3. The proposed method is straightforward and simple to implement.
Weaknesses: 1. The high costs associated with evaluating real-world tasks using different random seeds make it challenging to report variances. However, assessing the impact of multiple random seeds in simulated tasks could provide more reliable statistical insights. As shown in Table 1, STP's performance improvement over baselines is marginal (STP 63.7 vs. VC-1 61.8, and STP-L/16(Post PT) 78.4 vs. MAE-L/16(Post PT) 76.7). Given the inherent stochastic nature of imitation learning and reinforcement learning, evaluations across multiple episodes and various random seeds are crucial to validate the proposed methods effectively.
2. Some previous methods also consider temporal movement when extracting visual features. For instance, the video-language alignment loss in R3M [1] tries to align language with the correct visual transitions, which can extract semantic information about visual movement. Voltron [2] and DecisionNCE [3] also try to extract semantic features of the temporal movement between two frames. VIP [4] and LIV [5] use RL to extract visual features, which may also capture long-term movement via bootstrapping. Therefore, the authors could strengthen their paper by highlighting these related works, demonstrating awareness of existing methods, situating their contributions, and highlighting the differences between STP and these baselines.
[1] R3M: A Universal Visual Representation for Robot Manipulation. CoRL 2023
[2] Language-Driven Representation Learning for Robotics. RSS 2023.
[3] DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning. ICML 2024.
[4] VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training. ICLR 2023.
[5] LIV: Language-Image Representations and Rewards for Robotic Control. ICML 2023
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weakness for details.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have properly discussed the limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **C1: The high costs associated with evaluating real-world tasks using different random seeds make it challenging to report variances. However, assessing the impact of multiple random seeds in simulated tasks could provide more reliable statistical insights. As shown in Table 1, STP's performance improvement over baselines is marginal (STP 63.7 vs. VC-1 61.8, and STP-L/16(Post PT) 78.4 vs. MAE-L/16(Post PT) 76.7). Given the inherent stochastic nature of imitation learning and reinforcement learning, evaluations across multiple episodes and various random seeds are crucial to validate the proposed methods effectively.**
**R1:** Thanks for your suggestions. To address your concern, we first report results with multiple random seeds, then clarify some points of confusion about Tab. 1, and finally report extra experimental results with more evaluation.
**(1) We reran the experiments and report the mean and standard deviation, demonstrating the stability of our experiments.**
For the same BC seed, our policy training is **fully reproducible**, with the only uncertainty being the **slight differences that still exist in MuJoCo rendering** even with the same policy and evaluation seed. Therefore, we **further reran** our paper's STP-B and MAE-B baselines twice in the limited time available, obtaining three results and their mean and variance, as shown below. These demonstrate the stability of our experiments. In addition, the rendering in Trifinger is not subject to randomness, hence the results are **fully reproducible**.
| | Number of BC seeds × Number of evaluation seeds × Number of runs | STP-B | MAE-B |
|:------------:| :------------: |:-------:|:-------:|
| Meta-World | 25×3×3 | 94.1 93.6 94.1 **93.9±0.2** | 85.1 84.8 84.3 **84.7±0.3** |
| Franka-kitchen | 50×3×3 | 42.5 43.5 43.8 **43.3±0.6** | 36.7 37.9 37.1 **37.2±0.5** |
| DMControl | 25×3×3 |61.6 60.3 60.7 **60.9±0.5** | 59.2 60.3 60.3 **59.9±0.5** |
| Adroit | 25×3×3 |47.3 48.7 48.0 **48.0±0.6** | 43.4 45.3 44.7 **44.5±0.8** |
| WA | 25×3×3 |63.9 63.8 64.1 **63.9±0.3** | 58.3 59.1 58.7 **58.7±0.3** |
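As a quick sanity check on the aggregates in the table above, the per-suite mean and spread can be recomputed in a few lines; note it is our assumption (not stated in the rebuttal) that the population standard deviation (`ddof=0`) was used.

```python
import numpy as np

# STP-B success rates over the three runs, per benchmark suite (from the table above).
runs = {
    "Meta-World":     [94.1, 93.6, 94.1],
    "Franka-kitchen": [42.5, 43.5, 43.8],
    "DMControl":      [61.6, 60.3, 60.7],
    "Adroit":         [47.3, 48.7, 48.0],
    "WA":             [63.9, 63.8, 64.1],
}

for suite, vals in runs.items():
    arr = np.asarray(vals)
    # np.std defaults to ddof=0, i.e. the population standard deviation.
    print(f"{suite}: {arr.mean():.1f} ± {arr.std():.1f}")
```

The same two lines with `MAE-B` values reproduce the right-hand column.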
**(2) On the confusion about Tab. 1: we use the same pre-training data to conduct a fair comparison between MAE and STP.**
Our STP (EgoClip) utilizes less pre-training data than the publicly released VC-1 (full Ego4D+MNI). Under the same pre-training data and number of training epochs, our STP-B outperforms the MAE-B baseline `(63.7 vs. 59.6)`, a significant improvement of `4.1`. Note that VC-1 shares the same technique as MAE, differing only in pre-training data.
**(3) About the performance improvement of ViT-Large (PT).**
We analyze that under the post-pre-training and ViT-Large setting, the representation might be overfitted to the target domain, and the performance improvement tends to saturate. In the future, we may resort to larger pre-training data.
**(4) Our initial evaluation scale was already large, hence the performance improvement is solid and stable.**
The Number of evaluation episodes = Number of tasks × Number of BC seeds × Number of evaluation seeds × Number of camera views. Therefore, the total number of our evaluation episodes is
`5×3×25×1 + 5×3×50×2 + 5×3×25×1 + 2×3×25×1 + 2×1×25×1 = 2450`.
| | Number of BC seeds | Number of evaluation seeds | Number of camera views |
| :-------: | :------: | :---------: | :---------: |
| Meta-World | 3 | 25 | 1 |
| Franka-Kitchen | 3 | 50 | 2 |
| DMControl | 3 | 25 | 1 |
| Adroit | 3 | 25 | 1 |
| Trifinger | 1 | 25 | 1 |
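The episode total above can be reproduced from these factors with a short script; the per-suite task counts of 5/5/5/2/2 are read off the written-out sum.

```python
# (tasks, BC seeds, evaluation seeds, camera views) per suite,
# matching the written-out product-sum above.
suites = {
    "Meta-World":     (5, 3, 25, 1),
    "Franka-Kitchen": (5, 3, 50, 2),
    "DMControl":      (5, 3, 25, 1),
    "Adroit":         (2, 3, 25, 1),
    "Trifinger":      (2, 1, 25, 1),
}

total = 0
for suite, (tasks, bc, evals, views) in suites.items():
    episodes = tasks * bc * evals * views
    print(f"{suite}: {episodes} episodes")
    total += episodes

print("Total evaluation episodes:", total)  # → 2450
```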
**(5) We add evaluation results from RLBench, demonstrating the generality of our improvements.**
In addition, we have added a performance comparison between STP and VC-1 on `20` randomly selected tasks in RLBench. We use the advanced RVT [6] as the policy module, training a multi-task policy with only `20` demonstrations per task. Each task is evaluated over `25` episodes, and the results are in the PDF file; our `STP (52.0)` outperforms `VC-1 (43.2)` by `8.8`, further proving the effectiveness of our STP.
[6] Rvt: Robotic view transformer for 3d object manipulation.
**C2: The authors could strengthen their paper by highlighting these related works, demonstrating awareness of existing methods, situating their contributions and highlighting the differences between STP and these baselines.**
**R2:** Thank you for your suggestion. We will further strengthen the discussion of these related works and highlight our STP's differences in the revised paper.
Our STP has distinct differences, in both objectives and techniques, from these works that consider temporal movement. R3M, LIV, and DecisionNCE capture the alignment between language instructions and task progression through contrastive learning for representation learning or reward learning. VIP focuses on self-supervised contrastive learning to pre-train a reward function. Voltron performs language-guided reconstruction for multimodal video representation.
In contrast, our STP is a purely **self-supervised** method using **mask-based generative modeling** for spatiotemporal prediction, with an asymmetric masking and decoder architecture that jointly captures content and motion features. Our method does not require language descriptions. This generative pre-training paradigm differentiates STP from these contrastive methods from a technique perspective.
In summary, our STP has the following advantages: it is more **general** (learning a standard image representation instead of focusing on a specific application), **scalable** (self-supervised learning and a plain ViT), **efficient** (high masking ratio), and **effective** (the asymmetric masking and decoder architecture design ensures joint learning of content and motion features).
---
Rebuttal Comment 1.1:
Title: More in-depth discussion about Voltron should be provided
Comment: Thanks for the efforts and these detailed responses! I acknowledge the evaluation efforts the authors made to solidify their claims. However, it seems that this paper shares many similarities with Voltron: `they both use masked reconstruction on current and future frames to extract the temporal as well as the spatial features for downstream robot learning`.
In my view, the core differences are three folds:
1. `Settings`: Voltron studies the multi-modal setting, but this paper studies the uni-modal setting.
2. `Methods`: Voltron uses the same mask ratios for both current and future frames, but this paper adopts different ratios.
3. `Methods`: Voltron adopts the same transformer decoder to reconstruct current and future frames, but this paper designs a spatial and a temporal decoder to decode them separately.
So, for me, it looks like this paper degenerates from more complex multi-modal settings in Voltron to simpler uni-modal setups, with the overall objective (reconstructing both the current and future frames) very similar. In this sense, the technical contributions look more like some detailed implementation improvements. Therefore, it would be better for the authors to discuss more on the very similar Voltron.
---
Reply to Comment 1.1.1:
Title: Discussion on the difference with Voltron
Comment: Thanks for your prompt reply to our response and your acknowledgement of our evaluation efforts. On the differences between our STP and Voltron, we would like to make the following clarifications and hope they address your concern.
1. Our STP shares a simpler design than Voltron just as you mentioned from multi-modal setting to uni-modal setting, which makes our STP a more scalable approach than Voltron as uni-modal data is more easily obtained.
2. The key difference is that our STP decouples the current frame and future frame for separate modeling, while Voltron employs the MAE-ST (VideoMAE) pre-training to jointly model the whole clip (2 frames). Our decoupled design leads to several important differences in technique design:
a. The encoder design is different. Our STP encoder processes each frame independently, with no attention operations between frames. In contrast, the Voltron encoder operates on a clip to learn a video-level representation and includes cross-frame attention operations. Our design encourages the encoder to learn an image-level representation that is temporally sensitive for prediction. We find this image-level encoder friendlier for downstream adaptation than the video-level representation (a video encoder has a higher computational cost).
b. The decoder design is different. Our STP has two decoders, one for spatial prediction and the other for temporal prediction, treating current-frame prediction and future-frame prediction separately. For the spatial decoder, we use `joint self-attention` to process the current frame; for the temporal decoder, we add `cross-attention` to capture the interaction between the current frame and the future frame (see Fig. 2 in the paper). `This design ensures the predictive property of STP, in the sense that the representation of the current frame does not see the future and acts as the condition for future prediction`. Voltron employs only a single decoder to reconstruct the whole video directly.
c. The masking strategy is different. The decoupled design allows STP to assign different masking ratios to the current frame and the future frame. Specifically, we use ratios of `75%` and `95%` for them, while Voltron uses the same ratio for all frames.
3. The ablation studies in Tab. 2(a), Tab. 2(b), and Tab. 2(c) demonstrate that the different technique designs mentioned above are `crucial` for achieving excellent performance. Meanwhile, during the rebuttal, we added a direct comparison with MAE-ST (the architecture is the same as Voltron's without language input). Our STP is better (63.7 vs. 52.6; see our response to Reviewer QdYc).
4. Finally, our evaluation is more comprehensive than Voltron's in terms of backbone scaling, pre-training data, and robotic motor control downstream tasks. For pre-training data, Voltron only uses the `small-scale` Something-Something-v2 dataset; at the same time, the model size of Voltron (V-gen) is only `small`. In contrast, STP uses larger-scale training data and trains `ViT-Large`. Our self-supervised, uni-modal setting ensures the scalability and generality of STP.
If you have further concerns, please feel free to comment; we would be happy to address them.
---
Rebuttal 2:
Comment: I agree with the authors that this paper conducts more comprehensive ablations and evaluations to support the effectiveness of each design choice. In this sense, I am very happy to increase my score. Meanwhile, considering the similarity to Voltron (the authors provide more detailed technique insights about the differences to Voltron, but the high-level differences are as I pointed out), I decide to increase to a 5 (borderline accept).
---
Rebuttal Comment 2.1:
Comment: Thanks for your comments and the recognition of our responses. | Summary: The paper presents a new spatio-temporal pretraining algorithm for representation learning for robotics. The authors propose using masked autoencoding for reconstructing the current frame (for spatial reasoning) and a future frame (for temporal reasoning). The authors provide extensive experimentation across simulated and real-world settings and provide ablation studies to justify their design choices.
Strengths: - The paper addresses the important topic of including temporal dynamics in video data for pretraining robot representations.
- The paper does a good job of explaining the method and detailing the various experimental settings.
- The authors provide policy performance using both the pre-trained representations and post-pre-trained representations which helps assess both the quality of representations learned from internet data as well as the advantage of finetuning representations on the task-specific data. Overall, the proposed method has been extensively evaluated over varied settings across a variety of simulated settings.
- The authors provide an insightful ablation study to justify their design choices.
Weaknesses: - It is unclear where the diverse image data for STP trained with Ego+I in Table 1 is obtained from. Some information about this would be helpful.
- The real-world experiments seem limited with only two real-world tasks where the MAE also performs reasonably well.
- The authors must include comparisons with prior works using MAE for spatiotemporal learning [1].
[1] Feichtenhofer, Christoph, Yanghao Li, and Kaiming He. "Masked autoencoders as spatiotemporal learners." Advances in neural information processing systems 35 (2022): 35946-35958.
Technical Quality: 3
Clarity: 3
Questions for Authors: It would be great if the authors could address the “Weaknesses” listed above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **C1: It is unclear where the diverse image data for STP trained with Ego+I in Table 1 is obtained from.**
**R1:** Sorry for the confusion. "STP trained with Ego+I" means that we perform **hybrid pre-training** using EgoClip and ImageNet data. Specifically, we first initialize the ViT with the ImageNet-MAE weights. During pre-training, for image data from ImageNet, we conduct MAE pre-training; for video data from EgoClip, we conduct STP pre-training. This results in a 0.5 performance improvement, indicating that our STP can also benefit from more diverse image data.
**C2: The real-world experiments seem limited with only two real-world tasks where the MAE also performs reasonably well.**
**R2:** Our STP has achieved a significant advantage on the pouring task (**45% -> 65%**): it can more accurately align with the moving bowl and the pot. In addition, although MAE and STP achieve the same success rate on the picking task, STP tends to execute the grasp from a better position. Works such as [2] have demonstrated that real-world environments and simulation settings yield similar conclusions, hence we have not carried out more of the costly real-world evaluations. We will release our STP weights in the future, hoping that they can contribute to the community and be applied to more real-world environments and tasks.
[2] What do we learn from a large-scale study of pre-trained visual representations in sim and real environments?
**C3: The authors must include comparisons with prior works using MAE for spatiotemporal learning [1].**
**R3:** Thanks for your suggestion. Our STP differs from MAE-ST in that STP uses an **asymmetric** masking and decoder architecture design for **decoupled** spatial and temporal prediction on a **2D image model**. On the contrary, MAE-ST jointly performs spatiotemporal reconstruction to pre-train a **3D video model**, treating the temporal and spatial dimensions **symmetrically**.
We pre-trained a 4-frame MAE-ST on the EgoClip dataset, and the results are shown below. We believe the poorer performance of MAE-ST is due to the gap in temporal interaction between the upstream data and diverse downstream environments, which leads to a significant risk of cumulative error under the imitation learning paradigm.
| | Meta-World | Franka-Kitchen | DMControl | Adroit | Trifinger | WA |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| STP (EgoClip)| 92.0 | 40.9 | 62.1 | 48.0 | 69.3 | **63.7** |
| MAE-ST (EgoClip) | 68.5 | 30.5 | 53.9 | 47.3 | 70.5 | 52.6 |
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: I thank the authors for the clarifications and the additional results. After considering the rebuttal, I would like to keep my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive response
Comment: Thanks for your quick reply and acknowledgement of our response.
Strengths: 1. The proposed method is simple yet effective, utilizing a masked spatial-temporal prediction objective to learn visual representations for robotics.
2. The paper presents extensive experimental results in both simulation and real-world settings, comparing with proper visual representation baselines.
Weaknesses: 1. Many works have considered temporal information for robot visual representation learning. This paper should mention these and highlight the differences. For example, R3M [1] uses temporal contrastive learning, while VIP [2] and V-PTR [3] use temporal difference.
2. Though STP outperforms the baselines in many benchmarks, the performance gap is not significant (Table 1). The slight performance difference may be due to hyperparameter selection and randomness, as the paper did not provide error bars over multiple seeds.
[1] R3m: A universal visual representation for robot manipulation, 2023
[2] Vip: Towards universal visual reward and representation via value-implicit pre-training, 2022
[3] Robotic Offline RL from Internet Videos via Value-Function Pre-Training, 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I think VIP and V-PTR should be included as baselines.
2. What is the evaluation protocol for downstream tasks? Does all evaluation use an expert dataset and perform imitation learning to learn a policy? How did you collect the dataset for real-world experiments?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations. These cannot be addressed within the scope of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **C1: This paper should highlight its differences with R3M, VIP, and V-PTR.**
**R1:** Thank you for your suggestion. We will highlight these differences in the revised paper. Our STP has distinct differences from these works in objectives and techniques.
VIP and V-PTR respectively pre-train the value function through contrastive learning and TD learning, focusing on **visual reward functions and value function in RL**. R3M pre-trains image representation using **time-contrastive learning** and **video-language alignment**.
In contrast, our STP performs **self-supervised**, masking-based **generative modeling**, which is more **efficient** (high masking ratio) and **scalable** (self-supervised learning and the simple ViT backbone allow large-scale pre-training). This generative pre-training paradigm differentiates us from these contrastive methods. The superior performance of STP demonstrates the advantage of generative pre-training over them.
**C2: The performance gap is not significant. The slight performance difference may be due to hyperparameter selection and randomness, as the paper did not provide error bars over multiple seeds.**
**R2:** **First, we clarify some confusion about the results in Tab. 1.**
**(1) In a fair comparison, our improvement is significant.**
Our STP (EgoClip) utilizes less pre-training data than VC-1 (full Ego4D+MNI). Under the same pre-training data and number of training epochs, our STP-B outperforms the MAE-B baseline `(63.7 vs. 59.6)`, a significant improvement of `4.1`. Note that VC-1 shares the same technique as MAE, differing only in pre-training data.
**(2) Our initial evaluation scale was enormous, hence the performance improvement is stable.**
The Number of evaluation episodes = Number of tasks × Number of BC seeds × Number of evaluation seeds × Number of camera views. The total number of our evaluation episodes is
`5×3×25×1 + 5×3×50×2 + 5×3×25×1 + 2×3×25×1 + 2×1×25×1 = 2450`.
| | Number of BC seeds | Number of evaluation seeds | Number of camera views |
| :-------: | :------: | :---------: | :---------: |
| Meta-World | 3 | 25 | 1 |
| Franka-Kitchen | 3 | 50 | 2 |
| DMControl | 3 | 25 | 1 |
| Adroit | 3 | 25 | 1 |
| Trifinger | 1 | 25 | 1 |
**Second, we reran the experiments with multiple seeds and report the mean and standard deviation.**
During pre-training and BC, we did not deliberately select hyperparameters. The pre-training hyperparameters of STP follow MAE, and all representations adhere to the same setting during BC. For the same BC seed, our policy training is **fully reproducible**, with the only uncertainty being the **slight differences that still exist in MuJoCo rendering** even with the same policy and evaluation seed. Therefore, we **further reran** our paper's STP-B and MAE-B baselines twice during the rebuttal, obtaining three results and their mean and variance, as shown below. These demonstrate the stability of our STP. Additionally, the rendering in Trifinger is not subject to randomness, hence the results are **fully reproducible**.
| | Number of BC seeds × Number of evaluation seeds × Number of runs | STP-B | MAE-B |
|:-:| :--: |:-:|:-:|
| Meta-World | 25×3×3 | 94.1 93.6 94.1 **93.9±0.2** | 85.1 84.8 84.3 **84.7±0.3** |
| Franka-kitchen | 50×3×3 | 42.5 43.5 43.8 **43.3±0.6** | 36.7 37.9 37.1 **37.2±0.5** |
| DMControl| 25×3×3 |61.6 60.3 60.7 **60.9±0.5** | 59.2 60.3 60.3 **59.9±0.5** |
| Adroit| 25×3×3 |47.3 48.7 48.0 **48.0±0.6** | 43.4 45.3 44.7 **44.5±0.8** |
| WA | 25×3×3 |63.9 63.8 64.1 **63.9±0.3** | 58.3 59.1 58.7 **58.7±0.3** |
**Finally, we add extra evaluation results from RLBench, demonstrating the generality of our improvements.**
In addition, we added a performance comparison between STP-B and VC-1-Base on `20` randomly selected tasks in RLBench. We use the advanced RVT [5] as the policy, training a multi-task policy with only `20` demonstrations per task. Each task is evaluated over `25` episodes, and the results are in the PDF file; our `STP (52.0)` outperforms `VC-1 (43.2)` by `8.8`, further proving the effectiveness of our STP.
**C3: VIP and V-PTR should be included as baselines.**
**R3:** Thanks for your suggestions. Since V-PTR has not released weights, we report results for VIP and LIV [4] as baseline comparisons. It is worth noting that both VIP and LIV use ResNet50 as their backbone, making a direct comparison with ViT-B **unfair**. Additionally, the difference in feature dimensions between ResNet50 and ViT-B (2048 vs. 768) results in a discrepancy in the number of trainable parameters in the MLP policy. Therefore, to enable **a fairer comparison**, we construct a policy with an equivalent number of parameters for STP-B, which we refer to as STP†; the results demonstrate the superior performance of our STP.
| | Meta-World | Franka-Kitchen | DMControl | Adroit | Trifinger | WA |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| VIP | 86.4 | 38.1 | 70.5 | 55.3 | 68.9 | 64.4 |
| LIV| 81.3 | 37.3 | 54.0 | 52.0 | 68.3 | 58.1 |
| STP† | 93.6 | 44.0 | 69.7 | 51.3 | 67.9 | **67.1** |
**C4: What is the evaluation protocol for downstream tasks? Does all evaluation use an expert dataset and perform imitation learning to learn a policy?**
**R4:** We follow a single-task setup since there is no task conditioning. Yes, for each task, we learn a single policy on top of the representation. In the extra RLBench experiments, we use a multi-task setup.
**C5: How did you collect the dataset for real-world experiments?**
**R5:** As shown in A.5, following the approach in [6], we collect robot data using a VR tele-operation setup.
[4] LIV: Language-Image Representations and Rewards for Robotic Control.
[5] Rvt: Robotic view transformer for 3d object manipulation
[6] Openvr: Teleoperation for manipulation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I appreciate your efforts in running experimental results for baselines and for multiple seeds. While STP underperforms compared to VIP in some benchmarks, it excels in others. I have decided to raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thanks for your comments
Comment: Thanks for your comments and the recognition of our responses. | Summary: In this paper, we present a self-supervised pre-trained visual representation in robotic motor control, with spatiotemporal prediction with dual decoders, utilizing large-scale video data. The spatial prediction follows a standard MAE pipeline, and the temporal prediction tries to predict the future based on the current frame. The trained encoder is applied to downstream tasks and real-world robot task for better sample efficiency.
Strengths: 1. This paper adopts actionless human video data for representation learning, which can be easily obtained. The learned representation can be adapted to downstream robotics tasks.
2. The experiments contain several real-world tasks, which could be more valuable for applying a pre-trained visual encoder to real-world domains that lack data.
Weaknesses: 1. The major concern is the novelty over previous methods, considering that several related papers leveraging human data and visual pre-training for downstream tasks have been proposed [1-3].
2. The experiment only contains imitation learning experiments in downstream tasks, while the reinforcement learning framework with sub-optimal data is not considered.
[1] Learning Manipulation by Predicting Interaction. RSS 2024
[2] Large-Scale Actionless Video Pre-Training via Discrete Diffusion for Efficient Policy Learning. https://arxiv.org/html/2402.14407
[3] Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation. https://arxiv.org/abs/2312.13139
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **C1: The major concern is the novelty over previous methods, considering that several related papers leveraging human data and visual pre-training for downstream tasks have been proposed [1-3].**
**R1:** Thanks for your comments. Our STP exhibits some essential differences or advantages with these works as follows.
As for [2] and [3], our STP has different motivation and techniques. Both [2] and [3] perform a **history-aware policy pre-training** through **language-driven video prediction**, where [2] uses VQ-VAE and video diffusion techniques, while [3] employs auto-regressive GPT-style techniques based on frozen ViT representation. In contrast, our STP performs **image representation pre-training** through joint spatiotemporal prediction in a **self-supervised manner**, using **asymmetric masking and decoder architecture design**. Representation pre-training is orthogonal to these two methods.
As for [1], it pre-trains visual representations by predicting the transition frame and detecting the interacted object. It requires language and bounding-box annotations, using only 93K video clips for pre-training. Instead, our STP is a purely self-supervised method without language or object annotations. Our STP is a much simpler design, without the multi-frame causality modeling, multimodal token aggregator, and multiheaded attention pooling of [1], and uses only a ViT backbone. The following fair comparison (both use ViT-B) indicates that our STP achieves stronger performance, thanks to the scalability of our proxy task (masking and self-supervised learning enable larger-scale data), the asymmetric masking strategy, and the specific decoder architecture design for spatiotemporal prediction.
| | Meta-World | Franka-Kitchen | DMControl | Adroit | Trifinger | WA |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| STP-B | 94.1 | 42.5 | 61.6 | 47.3 | 66.7 | **64.2** |
| MPI-B | 82.1 | 38.4 | 55.7 | 49.3 | 67.7 | **58.7** |
In summary, our STP makes **orthogonal contributions** to [2] and [3]. Compared to [1], it has the following advantages: It is more **general** (learning standard ViT representation), **scalable** (self-supervised learning), **efficient** (high masking ratio), and **effective** (asymmetric masking and decoder architecture design for spatiotemporal prediction). Finally, thank you for your reminder, and we will strengthen these distinctions in the revised paper.
**C2: The experiment only contains imitation learning experiments in downstream tasks, while the reinforcement learning framework with sub-optimal data is not considered.**
**R2:** Thanks for your suggestion. In rebuttal, we select the `Panda-Door` and `Panda-TwoArmPegInHole` tasks from the Robosuite [4] simulation environment for reinforcement learning evaluation. We employ DrQ-v2 as our RL algorithm and compare the results of VC-1 (ViT-B) and our STP (ViT-B) as frozen visual representations. Due to the large fluctuations in success rate, we report the maximum reward value under 200,000 steps, and the results preliminarily verify the effectiveness of our STP within the RL framework.
| | Panda-Door | Panda-TwoArmPegInHole |
| :---: | :---: | :---: |
| STP | **95.6** | **130.7** |
| VC-1 | 88.8 | 123.1 |
[4] A modular simulation framework and benchmark for robot learning.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The sub-optimal setting with RL requires further investigation. I keep the original evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments and acknowledgement of our responses. We primarily follow the setting of a series of previous works [1,5,6,7,8], employing a **computation and data-efficient** paradigm of **few-shot behavior cloning by learning from demonstrations (LfD)** to verify the effectiveness of the visual representation. After carefully considering your suggestions, we have reported some preliminary reinforcement learning results in the rebuttal, and we will further explore a more comprehensive evaluation in the future. We hope that the insights gained from our experiments will contribute to further improvements in reinforcement learning for robotics motor control in future work.
[5] The (Un)Surprising Effectiveness of Pre-Trained Vision Models for Control.
[6] Real-World Robot Learning with Masked Visual Pre-training.
[7] Language-driven representation learning for robotics.
[8] An unbiased look at datasets for visuo-motor pre-training. | Rebuttal 1:
Rebuttal: We thank all reviewers' efforts in reviewing our paper and giving insightful comments and valuable suggestions. The reviewers' main concerns are concentrated on two primary issues, which we have addressed individually.
**1. There should be a more in-depth discussion on the difference of STP with other works that perform robotics pre-training using video data (hmK3, UXtm, and Dk2S).**
We further strengthen the analysis of the differences of our STP in terms of contributions, techniques, and results, emphasizing the following aspects:
- **Contributions & Differences:** Our goal is to present a general and scalable representation pre-training method for robotic motor control without any supervision signal. However, several existing methods all need language annotations. In addition, our design is simple and efficient with a plain ViT. It is not a video or multimodal representation (Voltron) and does not have any intricate structures (MPI). Furthermore, we do not focus on certain specific applications (VIP, V-PTR), nor do we design policy pre-training based on a specific policy architecture (GR-1, VPDD).
- **Technique**: Different from contrastive pre-training (VIP, R3M), STP adopts masked generative pre-training. Our STP uses asymmetric masking and decoder architecture design to conduct decoupled spatiotemporal prediction, jointly capturing content and motion features.
- **Experiment and Comparison**: We carry out the largest-scale BC evaluation of PVRs for robotic motor control to demonstrate the effectiveness of STP and yield some insightful observations. In a fair comparison, the average performance of our STP is superior to existing representations.
**2. There may be a perceived stability concern regarding the performance improvement of STP over the MAE baseline (UXtm and Dk2S).**
We detailed the scale of our extensive evaluation (a total of `2450` episodes), and also provided further rerun evaluations, reporting the **mean and standard deviation**, demonstrating that our improvement is **solid and stable**. In addition, we further evaluate the multi-task setting on `20` extra tasks in the RLBench simulation environment, with detailed results in the attached PDF file. Our STP shows an `8.8`-point improvement in success rate compared to VC-1 (`from 43.2 to 52.0`), which further demonstrates the effectiveness of STP.
Pdf: /pdf/4003ff28539b3bb5d7e9ddb37eaf4507e6dbeac9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Post-Hoc Reversal: Are We Selecting Models Prematurely? | Accept (poster) | Summary: This paper shows empirical evidence that common approaches to model selection can be improved upon by less greedy alternatives. In particular, authors highlighted what they referred to as post-hoc reversal, when post-training transformations reverse the trends observed in independent training runs. For instance, in certain overfitting situations, transformations such as temperature scaling, weight averaging, and ensembling, reversed growing test error trends and closed the generalization gap to a greater extent than common model selection approaches. This observation then yielded practical recommendations of incorporating post-training transforms into the model selection pipeline, which resulted in improved performance across a number of settings, as well as on smoother error curves that render model selection less noisy. Authors attributed post-hoc reversal to the high variance involved in the training of neural networks, which can be smoothed out by some of the studied transformations. The paper lays out clear practical recommendations as a result of the reported empirical observations, which I believe would be highly relevant to the community.
Strengths: - The paper is well written and easy to follow, and tackles a highly relevant problem in practice.
- The evaluation is broad and covers a number of model classes and sizes, and data modalities.
- Conclusions lay out very clear and easy to implement recommendations to improve model selection.
Weaknesses: - This requires no action from the authors but just for the record, I have mixed feelings about the presentation choices regarding the use of notation. The text is notation heavy, which makes it precise, but at the cost of readability. The discussion seems simple enough to enable the use of almost plain text only. But I reiterate that I consider this to be mostly matter of writing style, and I expect no effort from authors addressing this comment.
- The bulk of the empirical assessment focuses on a somewhat small scale setting involving variations of the Cifar-10 dataset and relatively small neural networks. While this is obviously due to the high cost involved in replicating these experiments in larger scale settings, it limits the strength of the presented empirical evidence. However, section 6 adds results for other settings including fine-tuning of very large models, which address this concern to an extent.
- The experiments focus on multi-epoch training settings, and it's a bit unclear how those results transfer to now common pre-training situations where very few epochs are used. While one can replace epoch-end checkpoints by checkpoints obtained every k steps, it's unclear how the choice of k affects results, for instance.
- One component that seems to have been left out of the analysis is robustness. Does post-hoc reversal still happen in situations where the test set is somehow shifted relative to the validation set used for post-hoc selection? It might be the case that there's not so pronounced a gap between post-hoc and greedy/naive selection in such a scenario.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Would it be possible to replicate a subset of the results reported in section 4 and 5 for a larger scale dataset (e.g., ImageNet)?
- Does post-hoc reversal still happen in situations where the test set is somehow shifted relative to the validation set used for post-hoc selection?
- How does one leverage post-hoc selection in a non-multi-epoch training situation?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the "Weaknesses" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer GiJ5,
Thank you for your review
and thoughtful suggestions about writing.
We will take them into account
while revising the manuscript.
We address your concerns regarding experiments below.
> Would it be possible to replicate a subset of the results reported in section 4 and 5 for a larger scale dataset (e.g., ImageNet)?
Due to limited time in the rebuttal period,
we are unable to report results on ImageNet
at this time.
However, we appreciate this suggestion
and will look into incorporating larger scale datasets
in the final version of this work.
> Does post-hoc reversal still happen in situations where the test set is somehow shifted relative to the validation set used for post-hoc selection?
Yes.
We report an experimental analysis
on the FMoW dataset below.
## Experiment 1:
We used the distribution-shifted
val and test sets provided by WILDS [1].
Here, the val set is shifted w.r.t. the train set
and the test set is shifted w.r.t. both the train and val sets.
The rest of the setup is the same
as for Table 1 in the paper.
For convenient comparison,
we also reproduce the
in-distribution FMoW numbers
from the paper.
The in-distribution FMoW dataset is denoted with ID,
while the distribution-shifted (out-of-distribution)
version is denoted with OOD.
The 4 tables below
show the results for
the test loss and test error metrics
and for the SWA+TS and SWA+Ens+TS transforms.
_Base_ column has numbers without any post-hoc transform,
while _Naive_ and _Ours_ represent
the application of post-hoc transforms
with naive and post-hoc selection strategies respectively.
__Test Loss, SWA+TS:__
| | Base | Naive | Ours | Diff |
|-----|-------|-------|-----------|-------|
| ID | 1.583 | 1.627 | **1.554** | 0.073 |
| OOD | 1.831 | 1.840 | **1.788** | 0.052 |
__Test Loss, SWA+Ens+TS:__
| | Base | Naive | Ours | Diff |
|-----|-------|-------|-----------|-------|
| ID | 1.583 | 1.494 | **1.305** | 0.189 |
| OOD | 1.831 | 1.700 | **1.571** | 0.129 |
__Test Error (%), SWA+TS:__
| | Base | Naive | Ours | Diff |
|-----|-------|-------|-----------|-------|
| ID | 43.20 | 42.69 | **39.92** | 2.77 |
| OOD | 49.32 | 49.70 | **46.75** | 2.95 |
__Test Error (%), SWA+Ens+TS:__
| | Base | Naive | Ours | Diff |
|-----|-------|-------|-----------|-------|
| ID | 43.20 | 37.95 | **34.93** | 3.02 |
| OOD | 49.32 | 46.74 | **41.56** | 5.18 |
We observe that post-hoc selection is about as effective
in the OOD case as in the ID case.
Interestingly, the improvement
for OOD as compared to ID
is slightly lower for test loss
but higher for test error.
Thank you for suggesting this experiment.
We will add it to the paper.
> The experiments focus on multi-epoch training settings, and it's a bit unclear how those results transfer to now common pre-training situations where very few epochs are used. While one can replace epoch-end checkpoints by checkpoints obtained every k steps, it's unclear how the choice of k affects results, for instance.
Thank you for suggesting this analysis;
we believe it would be a valuable addition to the paper.
We have conducted this experiment
on our LLM instruction tuning setup from Section 6.1,
where best results are obtained within 3-4 epochs.
## Experiment 2:
We vary the checkpointing interval
as a fraction of 1 epoch,
and record the best test accuracy,
as well as the epoch at which it is obtained
for both SWA+TS and SWA+Ens+TS.
Figs. 4 (a) and (b) in the PDF attached to the global rebuttal
shows the results.
We find that a checkpointing interval of 0.7 epochs gives the best results,
with higher and lower intervals performing slightly worse.
This makes sense:
higher intervals include too few checkpoints for SWA,
lower ones include too many weaker checkpoints from earlier in training.
Also, we find that the optimal epoch is shifted
further at smaller checkpointing intervals
(by about 2 epochs when the checkpointing interval is 0.1 epochs),
showing that post-hoc reversal is even more important
in this setting.
This is likely because with more checkpoints being averaged,
even more overfitted checkpoints can be accommodated
while still increasing the overall performance.
Please let us know if any of your concerns are still unaddressed.
Thanks,
Authors
[1] WILDS: A Benchmark of in-the-Wild Distribution Shifts
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and extra results. | Summary: The authors investigate applying three post hoc transforms, namely temperature scaling (TS), ensembling, and stochastic weight averaging (SWA), to trained models after training, separately and jointly. They oppose their approach to the "naive approach" and provide empirical observations of a phenomenon they refer to as "post hoc reversal." This phenomenon corresponds to the performance trends of models that change after these transformations. In particular, they focus on the observation in noisy settings of the following observations.
1- Epoch-wise post hoc reversal. The authors show that post hoc transforms reduce overfitting, double descent, and loss-error mismatch. In particular, SWA and ensembling reduce overfitting and flatten double descent peaks.
2- Model-wise post hoc reversal. The model's size influences the improvement in the performance of the post hoc transforms, i.e., larger models tend to perform better under post hoc transforms.
3- Hyperparameter-wise post hoc reversal. It appears that constant learning rates improve performance more than decaying ones, and that the optimal number of epochs shifts.
The experiments are broad, and metrics show good non-trivial improvement based on the method.
Strengths: 1 - The paper provides an extensive empirical study across various domains (vision, language, tabular, graph), aiming to demonstrate the generality of their approach, i.e., post hoc transforms.
2- The authors use a variety of performance metrics (error rates, loss, MMLU accuracy) to validate the effectiveness of post hoc transforms, leading to a robust evaluation.
3- The authors identify a potentially impactful post hoc reversal phenomenon that could challenge commonly adopted practices.
4- The approach seems to lead to consistent performance gains across different datasets and settings. The focus is on noisy environments, which can be particularly relevant for less curated datasets, which might be what the field needs.
5- The authors suggest that their approach could lead to guiding principles for model development, including early stopping and checkpoint selection.
Weaknesses: The main weakness of the approach, which is recurrent in most current empirical LLM observations, is that understanding the source of the improvement needs to be clarified.
In fact, it is not clear why the benefits of post hoc transforms are more pronounced in high-noise settings.
Making assumptions to verify potentially synthetic and controlled datasets could be helpful. As of one, this corresponds to an observation.
Technical Quality: 3
Clarity: 3
Questions for Authors: If the author can provide a good answer to the question above that would address my concerns.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer NSoG,
Thank you for your review.
We provide a detailed response
regarding the intuitions and explanations for post-hoc reversal (PHR)
in the global rebuttal, along with experimental analyses on CIFAR-10N
and a synthetic dataset to back our claims.
Below we highlight how the global rebuttal answers your questions.
> the source of the improvement needs to be clarified.
Even when performance degrades due to overfitting,
the model continues to learn generalizable patterns
from the clean training points,
but this is outweighed by the spurious patterns
learnt from the noisy training points.
Under post-hoc transforms,
the generalizable patterns reinforce each other,
whereas the spurious patterns cancel out.
This is responsible for the performance improvement
seen under PHR.
We expand on this intuition in the [Intuition 3] section of the global rebuttal,
and further demonstrate it for CIFAR-10N under Experiment 2,
and for a synthetic dataset under Experiment 3.
> it is not clear why the benefits of post hoc transforms are more pronounced in high-noise settings.
In high-noise settings, noisy training points have
a greater adverse effect on the learnt model.
Post-hoc transforms subdue this to a large extent.
This produces a more pronounced benefit.
In contrast, the base models themselves perform well in the low-noise setting,
leaving less room for improvement by post-hoc transforms.
We give a more detailed explanation in the global rebuttal,
separately discussing the mechanisms by which temperature scaling
and ensembling/SWA operate in the presence of noise
([Intuition 2] and [Intuition 3] sections respectively),
thereby elucidating their increased efficacy with higher noise levels.
> Making assumptions to verify potentially synthetic and controlled datasets could be helpful.
Thank you for suggesting this.
In the global rebuttal,
we replicate post-hoc reversal
on a synthetic controlled-noise dataset with 2 input features,
and visualize the learnt decision surfaces to
verify our proposed intuitions
(Experiment 3).
We plan to further improve this analysis
and incorporate it in our paper,
along with the intuitions/explanations above.
Please let us know if any of your concerns are still unaddressed.
Thanks,
Authors | Summary: The paper discusses the phenomenon of *post-hoc reversal*, where the performance trend is reversed after applying post hoc transforms, namely, temperature scaling (TS), stochastic weight averaging (SWA), and ensembling (Ens).
The paper conducts an empirical study to observe the phenomenon across different epochs, model sizes, and hyperparameters.
They propose a *post-hoc selection* strategy where the optimal epoch/model size/hyperparameters are selected considering the performance after applying post-hoc transforms.
The paper focuses on the noisy data setting and shows experimental gains in that setting for some datasets.
Experiments have been conducted for the FMoW dataset, CIFAR-10-N, and CIFAR-100-N, as well as some additional experiments on text, tabular, and graph datasets.
Strengths: 1. The paper demonstrates the phenomenon of post-hoc reversal.
2. They conduct experiments on diverse domains.
3. Performance gains are achieved with the proposed strategy in some of the noisy data settings.
Weaknesses: 1. The paper proposes *post-hoc selection* strategy, simply selecting the optimal epoch (or model size, etc.) after applying already existing (and widely used) post-hoc transforms (SWA, En, TS). They do not propose any *novel* mechanism to tackle *post-hoc reversal*.
2. It is a findings paper; the study is empirical and lacks theoretical insights. Given the empirical nature of the study, the performance gains in some datasets are marginal (e.g., C-100-N Noisy test error, Table 1).
3. For the datasets in Table 5, the post-hoc selection strategy does not consistently provide performance gains, i.e., it performs worse in some cases.
Technical Quality: 2
Clarity: 3
Questions for Authors: **Questions**
1. Why is post-hoc reversal prominent in the noisy data setting? Can the authors provide intuition or reasoning (other than empirical observations)?
2. Can the authors provide intuitions to (at least) some of the observations? For example, why do SWA and Ens handle double descent, but TS causes it?
3. In Figure 7, what is the SWA ensemble? It has not been discussed in Section 5.
4. For the datasets Yelp, Income, and Reddit-12K, the post-hoc reversal is observed in Figure 7, but the post-hoc selection strategy either shows worse performance or marginally better. Does this indicate that the post-hoc selection strategy is ineffective in these settings? Can the authors provide any reasoning/explanations?
**Suggestions**
* The paper title should include "noisy data setting", as the paper focuses on the noisy data setting.
* Include intuitions/reasonings/explanations (wherever possible) for the observations in the experimental results.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer vEtm,
Thank you for your review, including suggestions to improve the paper.
We address your concerns below:
> They do not propose any _novel_ mechanism to tackle _post-hoc reversal_.
First, we would like to clarify that
post-hoc reversal (PHR) is not a problem.
On the contrary,
PHR usually provides
an opportunity to improve performance.
By post-hoc selection (PHS), we show that
a simple change to existing methodology
is sufficient to reap benefits.
We leave smarter checkpoint selection to future work,
as it is beyond the scope of this work,
whose primary aim is to demonstrate and characterize PHR.
> Given the empirical nature of the study, the performance gains in some datasets are marginal (e.g., C-100-N Noisy test error, Table 1).
> For the datasets in Table 5, the post-hoc selection strategy does not consistently provide performance gains, i.e., it performs worse in some cases.
When solely considering the test error metric, this is indeed true.
Please note that test error is capped by the Bayes error and the model's strength.
For example, C-100-N Noisy has \~40% noise and ResNet18 gets \~10% test error with clean data.
Given this lower bound of \~50%, the achieved error of 50.26% could still be considered impressive.
In fact, test error is not the best metric for datasets with high Bayes error.
Hence our equal focus on the test loss metric, which evaluates predicted probabilities.
We humbly point out that across _all_ our datasets and transforms, PHS outperforms naive selection, and in most cases quite significantly so.
Even for test error, PHS is worse in only 5 of our 21 datasets, and except in one of these (Reddit-12k) it's less than 0.2 pts worse. Given that PHS is simple and cheap,
we stand by our recommendation to employ it in practice.
> Why is post-hoc reversal prominent in the noisy data setting? Can the authors provide intuition or reasoning (other than empirical observations)?
Please see our global rebuttal
where we provide detailed intuitions
for PHR and highlight its important connection to noise.
Our explanations cover
epoch-, model-, and hyperparameter-wise PHR;
TS, Ens and SWA transforms;
and important consequences such as
loss-error mismatch, catastrophic overfitting and double descent.
We further validate our proposed intuitions
with experimental analysis on CIFAR-10N,
and visualization of the learnt decision surfaces
on a synthetic dataset.
> Can the authors provide intuitions to (at least) some of the observations? For example, why do SWA and Ens handle double descent, but TS causes it?
We again refer you to our global rebuttal for
comprehensive explanations of various observations.
Here we restate our explanation for the particular observation mentioned,
namely, why SWA and Ens handle double descent but TS causes it.
In all our experiments,
we do not observe double descent in the base test loss curves.
Overfitting occurs too drastically in the test loss
for the second descent to occur.
This is because once a noisy training point is fit,
the model can lower the loss simply by upscaling the logits.
This leads to overconfident predictions
and poor generalization loss around noisy training points.
However, scaling the logits does not affect error.
Indeed test error overfits less,
sometimes even exhibiting a second descent,
where the continued learning overpowers the overfitting.
By rescaling the logits, temperature scaling
does not so much cause double descent
as it removes the loss-error mismatch.
If the test error curve has a double descent,
the post-TS test loss curve does too,
simply because the post-TS loss tracks the error more closely.
As mentioned earlier, we don't observe double descent in loss curves,
so also no instances where TS removes double descent,
as TS is unable to affect the test error metric.
For intuitions on why Ens/SWA can suppress double descent,
please refer to [Intuition 3] in the global rebuttal.
> In Figure 7, what is the SWA ensemble? It has not been discussed in Section 5.
SWA ensemble refers to the ensemble of SWA models,
i.e., first apply SWA to checkpoints from the same training run,
then ensemble the SWA models across different runs.
Thank you for pointing this out.
We will clarify in the manuscript.
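For readers unfamiliar with this two-stage construction, it could be sketched as follows. This is a minimal numpy sketch with toy linear "models"; the function names, shapes, and random data are illustrative assumptions of ours, not the paper's code.

```python
import numpy as np

def swa(checkpoints):
    """Average the weights of checkpoints from one training run (SWA)."""
    return {k: np.mean([c[k] for c in checkpoints], axis=0)
            for k in checkpoints[0]}

def ensemble_probs(prob_list):
    """Ensemble by averaging predicted class probabilities across models."""
    return np.mean(prob_list, axis=0)

def predict(model, x):
    """Softmax predictions of a toy linear model over 3 classes."""
    z = x @ model["w"]
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy setup: 2 training runs, each with 3 checkpoints of a single weight matrix.
rng = np.random.default_rng(0)
runs = [[{"w": rng.normal(size=(4, 3))} for _ in range(3)] for _ in range(2)]

# Step 1: SWA within each run; Step 2: ensemble the SWA models across runs.
swa_models = [swa(ckpts) for ckpts in runs]
x = rng.normal(size=(5, 4))                      # 5 inputs, 4 features
probs = ensemble_probs([predict(m, x) for m in swa_models])
assert np.allclose(probs.sum(axis=1), 1.0)       # valid distributions
```

The order matters: weights are only averaged within a run (where checkpoints lie in the same loss basin), while across runs only the predicted probabilities are averaged.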
> For the datasets Yelp, Income, and Reddit-12K, the post-hoc reversal is observed in Figure 7, but the post-hoc selection strategy either shows worse performance or marginally better. Does this indicate that the post-hoc selection strategy is ineffective in these settings? Can the authors provide any reasoning/explanations?
This is a fair observation.
Looking carefully at Figure 7,
one finds that while PHR occurs quite prominently,
the optima of the post-hoc curves are only marginally better than
the optima of the corresponding base curves.
Another source of error is that while PHR curves
are all drawn for the test set,
for the PHS results,
the epoch is selected based on the val set,
and the reported numbers are evaluated on the test set.
We believe that this does not diminish the usefulness
of PHS, because it is never substantially worse
than naive selection.
Further, depending on the setting, one might improve results
by ensembling more models.
We ensemble 8 models throughout the paper for uniformity,
but for small tabular datasets like Income,
it is feasible to ensemble many more models.
A final factor that may be playing a role here,
is that post-hoc transforms under any selection strategy
cannot be expected to give too much improvement,
if the base models already achieve close to
the Bayes error for the dataset,
but this is hard to evaluate beforehand.
Please let us know if any of your concerns are still unaddressed.
Thanks,
Authors
---
Rebuttal Comment 1.1:
Title: Acknowledgement to Author's Rebuttal
Comment: Thanks to the authors for the detailed response.
Most of my queries have been addressed. I do not have any further questions at this point. | null | null | Rebuttal 1:
Rebuttal: # Overview
We thank all the reviewers for their feedback
and helpful suggestions on improving the work.
We have added extensive explanations and intuitions,
backed by numerous additional experiments and analyses.
We summarize them below:
1. Epoch-, model-, and hyperparameter-wise post-hoc reversal (PHR)
can be viewed under the common lens of effective model complexity (EMC),
allowing us to focus on intuitions for epoch-wise reversal (Intuition 1).
2. Loss-error mismatch results from increasingly overconfident predictions
blowing up the test loss in the presence of noise, but not test error.
Temperature scaling fixes this by downscaling the logits,
resulting in PHR (Intuition 2).
3. Neural networks learn noisy points differently
than clean ones.
To wit,
predictions for noisy points fluctuate more during training
(see Expt. 1 on CIFAR-10N),
indicating unstable decision boundaries around them
(see Expt. 3 on synthetic dataset).
4. Building on the above,
ensembling and SWA act differently
on patterns learnt from clean and noisy points
(see Expt. 2 on CIFAR-10N),
reinforcing the former and
suppressing the latter
(see Expt. 3 on synthetic dataset).
Post-hoc reversal is observed when after transform
continued learning from clean points outshines
overfitting from noisy points (Intuition 3).
5. Post-hoc selection is effective even under distribution shift
(See Expt. 1 in response to reviewer GiJ5).
6. Post-hoc reversal is even more relevant for
checkpointing at fractional epochs in very-low-epoch settings.
(See Expt. 2 in response to reviewer GiJ5).
# [Intuition 1] Effective Model Complexity (EMC)
Epoch-, model-, and hyperparam-wise PHR can be unified via EMC,
introduced in [1] to unify epoch-, and model-wise double descent.
EMC measures memorization capacity,
and is important to us because
memorization of noisy points plays a key role in PHR.
EMC increases with epochs and model size,
and different hyperparams can impact it in different ways.
For example, EMC increases with epochs more rapidly for constant LR
than annealed LR, explaining our observations in Section 4.2.3.
# [Intuition 2] Temperature Scaling (TS) and Loss-Error Mismatch
Once a neural net has fit a training point,
the cross-entropy loss on it
can be lowered simply by upscaling the weights of the linear output layer.
This makes the model overconfident later in training,
as shown in [2].
For a noisy training point,
this leads to worse loss on similar test points.
The test error is not affected
as it depends only on the argmax of the class probabilities.
In high-noise settings,
test loss can worsen due to fitting noisy training points,
even as the test error improves from continued learning on clean points,
leading to loss-error mismatch.
TS fixes this by downscaling the logits.
Indeed, one finds that the temperature
(as obtained with a held-out set)
increases with epochs.
(see Fig. S1 in the Supplementary of [2]).
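The two key claims here, that rescaling logits never changes the argmax (hence the error) but can sharply reduce the loss blow-up on noisy points, are easy to check numerically. Below is a minimal sketch with toy logits of our own construction, not from the paper's experiments.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T=1.0):
    """Mean cross-entropy loss of temperature-scaled logits."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

# Overconfident logits: correct on 2 of 3 points, very confident on all 3.
logits = np.array([[8.0, 0.0], [9.0, 0.0], [0.0, 7.0]])
labels = np.array([0, 0, 0])   # third point acts as a "noisy" point

# Rescaling logits never changes the argmax, hence never the test error ...
assert (softmax(logits).argmax(1) == softmax(logits / 4.0).argmax(1)).all()

# ... but a temperature T > 1 shrinks the loss blow-up on the noisy point.
assert nll(logits, labels, T=4.0) < nll(logits, labels, T=1.0)
```

This is exactly the mechanism by which TS removes the loss-error mismatch without touching the error metric.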
# [Intuition 3] Ens/SWA and Delayed Catastrophic Overfitting
We focus on test error as post-TS loss behaves similarly
and the intuitions transfer.
Flattening double descent is a special case
of delayed catastrophic overfitting,
as applied to the ascent to the peak.
From clean training points,
models learn generalizable patterns
and from noisy points, spurious ones which cause overfitting.
When noise is low,
the former dominates
and overfitting is benign.
Otherwise, overfitting is catastrophic.
The core intuition for Ens/SWA delaying catastrophic overfitting
is that generalizable patterns across checkpoints get reinforced,
while the spurious patterns cancel out.
Further intuition is that this is enabled by
the decision boundary being more "unstable" around noisy points
as compared to clean ones.
The experiments below substantiate these claims.
## Experiment 1
In Fig. 1, we see that
across epochs, the prediction flips
for a much higher fraction of the noisy points
than of the clean ones,
indicating higher instability.
The dataset here is CIFAR-10N Worst (~40% noise),
and training setup is same as in the paper.
## Experiment 2
Here, we measure the average predicted probability
for the clean and noisy subsets of CIFAR-10N Worst,
as proxy for the extent of memorization.
In Fig. 2 (a) and (b),
we find that SWA lowers the memorization
of clean points only a bit (\~0.1 probability),
but of noisy points by a lot (\~0.5 probability),
clearly establishing the differential effect.
## Experiment 3
Here, we replicate PHR on a synthetic dataset
with 2 input features,
with the aim of visualizing learnt decision surfaces
to solidify our intuitions.
We train 4-layer MLPs with 512 ReLU units per hidden layer on a 2-class spirals dataset of 1000 training points, with 20% of the labels flipped at random.
We train 16 MLPs and track the mean test error across epochs, as well as the test error of the ensemble (Fig. 3 (b)).
As per [3,4], Ens/SWA helps when the data has a "multi-view" structure,
or equivalently, when the loss landscape has multiple modes.
This is hard to achieve for a 2D dataset,
so instead we simulate the effect by training each MLP on a random 50% subsample
of the training data.
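A sketch of the synthetic setup (the generator below is our illustration; the exact spiral parameterization used in the experiments may differ):

```python
import numpy as np

def make_spirals(n_per_class=500, noise=0.2, flip_frac=0.2, seed=0):
    """Two-class 2D spirals with a fraction of labels flipped at random."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for cls in (0, 1):
        t = np.sqrt(rng.random(n_per_class)) * 3 * np.pi  # position along the arm
        theta = t + cls * np.pi                           # second arm rotated by pi
        pts = np.stack([t * np.cos(theta), t * np.sin(theta)], axis=1)
        pts += rng.normal(scale=noise, size=pts.shape)    # jitter around the arm
        X.append(pts)
        y.append(np.full(n_per_class, cls))
    X, y = np.concatenate(X), np.concatenate(y)
    flip = rng.random(len(y)) < flip_frac
    return X, np.where(flip, 1 - y, y), y                 # inputs, noisy labels, clean labels

X, y_noisy, y_clean = make_spirals()
```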
Fig. 3 (a) shows decision surfaces at epochs 440 and 1000 for 2 MLPs
and the ensemble.
Decision boundaries are spiky around noisy points
and smoother around clean ones.
While the generalizable parts of the spiral are retained in the ensemble,
the effects of noisy points are diminished.
Between epochs 440 and 1000,
individual models spike around noisy points more prominently
than they learn new parts of the spiral,
but the ensemble surface is relatively unchanged,
except for small improvements to learning the spiral.
We will further polish and incorporate the above in the paper.
# References
[1] Deep Double Descent: Where Bigger Models and More Data Hurt
[2] On Calibration of Modern Neural Networks
[3] Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
[4] Deep Ensembles: A Loss Landscape Perspective
Pdf: /pdf/8f3b7687cb6a1f78cbbf3b8bf29560dc03a3517d.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Flexible Context-Driven Sensory Processing in Dynamical Vision Models | Accept (poster) | Summary: Artificial neural networks are loosely based on biological neural networks. In this study, the authors construct neural networks whose structures are based on general principles of visual signal pathway such as retinotopy. They used this model to ask if the context-dependent feedback mimicking top-down signals in the brain could help artificial neural networks perform cue-delay tasks more efficiently.
Strengths: The authors constructed a simple but effective model (DCNet) that can capture the gist of interplay between low-order sensory areas and high-level cognitive areas. They showed that their model can outperform conventional deep neural networks and large language models, although DCNet has a small number of learnable parameters. Also, they performed simulated lesion experiments and psychophysics experiments, providing intriguing similarities between biological visual systems and DCNet.
The strong influence of top-down (feedback in their study) on sensory signal processing is already well known, but this study provides some evidence that such top-down modulation can be used to build a new type of deep neural networks. The authors tested DCNet on a single task only, but this study may still contribute to advancing bio-inspired computing.
Weaknesses: The paper is well written and easy to follow, but this reader finds it a little strange that the authors did not specify basic information on their model or baseline models. First, they provided mathematical descriptions of the recurrent “neuron” in the model, but it remains unclear how they implement these neurons. Based on the given source code and the appendix, this reader thinks that individual neurons are 2D convolutional filters and that the intermediate states are used to realize recurrent behaviors. Additionally, 2D convolutional filters have multiple parameters, which are not listed in the manuscript. I did find some parameters in the source code, but they are not documented well enough. Second, the authors should provide more details on the baseline models. They mentioned “Traditional 6-Layer convolutional backbone feeding into a gated recurrent unit (GRU) [44] with N = 2048 neurons”, but the meaning of traditional 6-layer convolutional backbone is unclear, and they did not explain how 2048 GRU neurons provide the final answers. Once again, the authors need to provide pertinent details of each component. Third, the modulation signals are computed with two linear projections r1 and r2. It looks like they were estimated from all 4 layers, but it is unclear how they are concatenated to create r1 and r2.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weakness section.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors provided the limitations of their study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their overall positive comments. We have addressed your comments below, including full model and baseline specifications. We hope we have addressed your concerns sufficiently for you to revise your score.
> **Capture the gist of interplay between low-order sensory areas and high-level cognitive areas. Top down modulation can be used to build a new class of DNNs**
We sincerely thank the reviewer for this positive comment. This was indeed one of our primary motivations to pursue this line of work.
> **Did not specify basic information on their model**
First, we would like to thank the reviewer for checking our source code. We also acknowledge that we could have done a better job of explaining our model components. We provide a full model description in the general response (and will include this information as part of our manuscript). Here, we provide more specific details on exact model parameters.
Layer 1 (Input size 3 x 128 x 128):
Excitatory cell types (16), Inhibitory cell types (4).
Kernel (5, 5), Padding (2, 2).
Layer 2 (Input size 16 x 64 x 64):
Excitatory cell types (32), Inhibitory cell types (8).
Kernel (5, 5), Padding (2, 2).
Layer 3 (Input size 32 x 32 x 32):
Excitatory cell types (64), Inhibitory cell types (16).
Kernel (5, 5), Padding (2, 2).
Layer 4 (Input size 128 x 16 x 16):
Excitatory cell types (128), Inhibitory cell types (32).
Kernel (3, 3), Padding (1, 1).
Final Readout: Fully connected (1024 inputs, 6 outputs).
All layers have the following convolutional kernels ($\textbf{W}$s specified in the governing equations): Input to excitation, excitation to excitation, excitation to inhibition, inhibition to excitation, and input to inhibition. The input and output dimensionalities of these convolutions are listed layer-wise above, along with the kernel and padding shapes for all convolutions in that layer.
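As a quick sanity check on these shapes, the standard convolution output-size formula confirms that the listed kernel/padding pairs preserve spatial size at stride 1 (the halving of the spatial map between layers, e.g. 128 to 64, comes from downsampling that is not part of this list):

```python
def conv_out(size, kernel, padding, stride=1):
    # standard output-size formula for a 2D convolution along one spatial axis
    return (size + 2 * padding - kernel) // stride + 1

# 5x5 kernels with padding 2 (layers 1-3) and 3x3 kernels with padding 1 (layer 4)
# leave the spatial map unchanged at stride 1:
for size, (kernel, padding) in {128: (5, 2), 64: (5, 2), 32: (5, 2), 16: (3, 1)}.items():
    assert conv_out(size, kernel, padding) == size
```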
> **More details on the baseline models**
Convolution filters are mentioned as: output channels x input channels x spatial size (height, width). Pooling layers are mentioned as: kernel size (height x width), stride. Inputs to every layer were normalized with LayerNorm.
Layer 1 (Input size 3 x 128 x 128): Conv2d (8 x 3 x 5 x 5), ReLU, AvgPool2d (5, 5, stride 2).
Layer 2 (Input size 8 x 64 x 64): Conv2d (16 x 8 x 5 x 5), ReLU, AvgPool2d (5, 5, stride 2).
Layer 3 (Input size 16 x 32 x 32): Conv2d (32 x 16 x 5 x 5), ReLU, AvgPool2d (3, 3, stride 2).
Layer 4 (Input size 32 x 16 x 16): Conv2d (64 x 32 x 3 x 3), ReLU, AvgPool2d (3, 3, stride 2).
Layer 5 (Input size 64 x 8 x 8): Conv2d (128 x 64 x 3 x 3), ReLU, AvgPool2d (2, 2, stride 2).
Layer 6 (Input size 128 x 4 x 4): Conv2d (128 x 128 x 3 x 3), ReLU, AvgPool2d (1, 1, stride 1).
Project convolutional outputs to GRU: Fully connected (2048 inputs, 2048 outputs), ReLU
Recurrence: GRU (2048 inputs, 2048 outputs)
Final Readout: Fully connected (2048 inputs, 6 outputs)
The convolutional backbone serves as a feature extractor for the GRU network. We train this baseline model by passing in the cue for the first $T$ time steps followed by the scene for the next $T$ timesteps. For our experiments we set $T=3$. A CrossEntropy Loss is used on the readout activities at the final timestep.
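The spatial bookkeeping for this stack can be verified with the usual output-size formula. Note that the pooling paddings below are our assumption (the values under which each pool halves the map as listed above), not something stated in the specification:

```python
def out_size(size, kernel, padding, stride):
    return (size + 2 * padding - kernel) // stride + 1

size = 128
# per layer: (conv kernel, conv pad, pool kernel, pool pad, pool stride);
# conv pads are "same"-style, pool pads assumed so that each pool halves the map
layers = [(5, 2, 5, 2, 2), (5, 2, 5, 2, 2), (5, 2, 3, 1, 2),
          (3, 1, 3, 1, 2), (3, 1, 2, 0, 2), (3, 1, 1, 0, 1)]
trace = []
for ck, cp, pk, pp, ps in layers:
    size = out_size(size, ck, cp, 1)   # convolution, stride 1
    size = out_size(size, pk, pp, ps)  # average pooling
    trace.append(size)

fc_in = 128 * size * size              # 128 channels after layer 6 -> GRU projection input
```

This recovers the stated per-layer input sizes (64, 32, 16, 8, 4) and the 2048-dimensional input to the fully connected projection.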
We will include these details in the Appendix.
> **Unclear how the modulation signal is formed from concatenating the r1s and r2s**
We apologize for this confusion. The modulatory signals are computed in a layer-matched manner and hence there is no issue of concatenation.
However, we include in this rebuttal experimental results (in response to **qySN**) for a variant of the modulation signal that pools from all layers before computing the modulation factors. For this implementation, we linearly projected excitatory neuron activities from every layer into the dimension of interest (based on which layer is being modulated), followed by a sigmoided linear-nonlinear transformation.
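A minimal sketch of this pooled-modulation variant (all shapes and weights below are illustrative stand-ins, not our trained parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
# hypothetical excitatory activity maps per layer: (channels, height, width)
acts = [rng.normal(size=(c, s, s)) for c, s in [(16, 64), (32, 32), (64, 16), (128, 8)]]

pooled = np.concatenate([a.mean(axis=(1, 2)) for a in acts])  # spatially pool, stack all layers
target = 1                                                    # modulating layer 2 in this sketch
C = acts[target].shape[0]
W1 = rng.normal(scale=0.1, size=(C, pooled.size))             # project into the layer's dimension
W2 = rng.normal(scale=0.1, size=(C, C))                       # linear stage of the linear-nonlinear map
gain = sigmoid(W2 @ (W1 @ pooled))                            # one multiplicative factor per channel
modulated = acts[target] * gain[:, None, None]
```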
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. This model stands between biologically realistic models, which focus on simulating realistic brain activity, and functional models, which are engineered to perform specific tasks. This type of model is rare and should be encouraged to foster meaningful interactions between neuroscience and AI research communities. As the authors provided more clarification, I think it could be possible for this model to serve as a reference for those who are interested in building such models. Thus, I would like to increase my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you very much for your feedback and your revised evaluation recommending acceptance. In particular, thank you for highlighting the future promise of such work to serve as a bridge between neuroscientists and AI researchers. | Summary: The authors present a convolutional dynamical systems model if visual processing with separate excitatory and inhibitory populations in each layer. They add a low rank modulation of each layer by a factor that is computed as a linear map of the layer activities. This is meant to represent top down modulations by some higher level area. The authors train their network on visual search tasks, including a new one which requires participants to count the number of objects in a scene that match a target in color, shape, or both. The context modulation is required for good performance of the model and the model reproduces two effects from the visual search literature.
Strengths: The presented model has a relatively high level of biological detail by including a split in excitatory and inhibitory pools, temporal dynamics and a low-rank top down mechanism.
Also the network is image computable, can be trained for reasonably complex tasks and the authors compare to some reaction time data.
Weaknesses: All results used to evaluate the network are qualitative, in contrast to the authors' claims that their network closely replicates known reaction time results.
While higher distractor heterogeneity, which makes humans slower, makes the network more uncertain (6c), target-absent trials usually yield longer reaction times while the entropy of the network is lower here. So even the qualitative result is not as in humans.
The positive slope in 7c seems to rely heavily on the data point at 1 distractor, does this actually keep growing?
For 7d I would love some comparison to human data as the feature difference does not tell me much such that I cannot judge whether this actually matches human behaviour.
Also, it is customary to look at the accuracy of responses somewhere as well.
While the proposed network does perform better than standard DNNs the difference is small, especially to ResNet-18. Also the generalization gap is quite substantial for the proposed network as well. So overall the performance improvement is marginal.
While the authors comment on the dynamics observed in their model in Figure 5 and the accompanying text, I am not really following this part. Proper analyses of the dynamics should contain some quantitative measures of the dynamics and comparisons of these to neural data. Just looking at a set of randomly chosen activity traces is insufficient I think.
Technical Quality: 3
Clarity: 3
Questions for Authors: Did the authors explore variants of the model where the modulation depends on all layers, has a memory or similar ideas? For a higher order representation, the current implementation seems extremely reduced.
Did the authors run any quantitative comparisons between their model and any neural or psychophysical measurements.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I agree with the authors that their work is unlikely to have negative societal impact. Limitations of their work could have been discussed more carefully though.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments. We believe that some of the suggestions raised by the reviewer will increase the quality of the manuscript and we now provide additional analyses and responses to these.
> **Relatively high level of biological detail. The network is image computable and can be trained for reasonably complex tasks**
We thank the reviewer for this note. As we noted in our response to *BT2x*, one of our goals was to tackle the increasing divergence between successful deep learning approaches that model visual computations and biological visual processing.
> **Direct reaction time matches to human data**
We agree with the reviewer that target-absent trials yield higher reaction times in humans while we don’t see this in our model output entropy. We wish to highlight that model entropy is far from the perfect RT metric that one can extract from neural dynamics. In fact, the subject of computing model RTs is an active research area (Spoerer et al. (2020); Goetschalckx et al. (2024); Subramanian et al. (2022); Graves (2016) to name a few), and exploration of different metrics is beyond the scope of this work. Our goal here was to expose the potential for studying model RTs in the context of cued-paradigms (through DCNet). Cued-contextual paradigms constitute a wealth of data in human psychophysical research but to our knowledge, there is no current modeling approach to study this in a naturalistic manner.
As for Fig. 7c: We limit the number of distractors to 6 to account for the canvas size and placement of the objects within it. The reviewer’s point is well taken. Our response is the same as above – entropy isn’t the perfect metric. We genuinely hope to extend on this in future work.
Human data comparison to 7d: While generating our stimuli, we used perceptually uniform color spaces to stay close to the experimental paradigm discussed in Wolfe and Horowitz (2004) Fig. 5a-e. We agree that this isn’t a direct comparison to human RTs and emphasize that we wished to only show human-like trends (which is strongly present in our analysis). Exact comparisons to human RTs is beyond the scope of this work, but it is something we are really interested in exploring in future work.
We will add the limitations of our chosen RT metric as part of the limitations and outlook for the future in our manuscript.
> **Customary to look at the accuracy of responses as well somewhere**
Thanks for the comment. We do refer to the accuracy of model responses in the manuscript.
L222 “DCnet achieved an overall accuracy of 97% on the test trials”.
L238 “When fine tuned on this task, DCnet achieved an overall accuracy of 95% on the test trials”.
> **Overall the performance improvement is marginal**
As the reviewer points out, the proposed network performs only marginally better than ResNet-18 in-distribution (Fig. 2b); however, the difference is much greater when tested out-of-distribution (Fig. 2c) – our model drops by $\sim 0.2\%$ while ResNet-18 drops by $\sim 0.6\%$. We wish to highlight that performance was not our only criterion. Rather, we wanted to build a competitive and biologically-faithful framework that will allow us to study the dynamics of the contextual-cueing phenomenon.
> **Proper analyses of the dynamics...activity traces is insufficient.**
We fully agree with the reviewer and thank them for raising this point. While comparison to neural data is beyond the scope of this rebuttal timeline, we perform several analyses to better understand and quantify the internal dynamics of our model. Fig. 5 missed this mark, and we will update it to include the analyses we detail below (also in Fig. R2).
As originally presented in Fig.5, we drive DCNet activity by uncorrelated, time-varying Gaussian noise inputs. We then compute:
1. The *Dynamic Range* (DR) of excitatory cells in each layer of DCNet and compare that to the DR of corresponding excitatory cells from DCNet (Lesioned Inhibition) (Fig. R2c). DR is computed as the Interquartile range (a measure of statistical dispersion) of a neuron’s activity over a time period of $128 ms$ when driven by noise. We take the mean over 64 trials for each neuron and plot this distribution per-layer in Fig. R2c. We find that excitatory neurons in the DCNet model have a significantly higher DR across layers compared to DCNet (Lesioned Inhibition) suggesting the role of inhibitory interactions in expanding the range of computations carried out by each neuron. A Kolmogorov–Smirnov test confirmed these significant differences (Layer 1 (statistic=0.667, p < .001); Layer 2(statistic=0.99, p < .001); Layer 3(statistic=1.0, p < .001); Layer 4(statistic=0.97, p < .001)).
1. The lag 1 autocorrelation as a measure of *stability* of the excitatory neurons (Fig. R2b). We find that DCNet excitatory neurons are significantly more stable than excitatory neurons in the DCNet (Lesioned Inhibition) model. A Kolmogorov–Smirnov test confirmed the significant difference (statistic=1.0, p < .001).
1. The E-I correlation coefficient as a measure of co-tuning in DCNet. We find that the average (across neurons) E-I correlation is as follows: -0.076 (Layer 1), 0.766 (Layer 2), 0.699 (Layer 3), 0.535 (Layer 4). This confirms the visual intuition provided in Fig. R2a that E-I co-tuning is weaker in early compared to late layers.
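For reference, the dynamic-range and stability metrics above can be sketched as follows (the traces below are synthetic stand-ins: a smooth random walk for DCNet-like activity and white noise for the lesioned variant, not actual model outputs):

```python
import numpy as np
from scipy.stats import iqr, ks_2samp

rng = np.random.default_rng(0)
# (neurons, trials, timesteps): smooth drifting traces vs. white noise
smooth = 0.1 * rng.normal(size=(200, 64, 128)).cumsum(axis=-1)
white = 0.3 * rng.normal(size=(200, 64, 128))

def dynamic_range(traces):
    # IQR over time per trial, then averaged over trials -> one value per neuron
    return iqr(traces, axis=-1).mean(axis=-1)

def lag1_autocorr(traces):
    # lag-1 autocorrelation per trial, averaged over trials -> one value per neuron
    x = traces - traces.mean(axis=-1, keepdims=True)
    return ((x[..., :-1] * x[..., 1:]).sum(axis=-1) / (x ** 2).sum(axis=-1)).mean(axis=-1)

# two-sample Kolmogorov-Smirnov test on the per-neuron dynamic-range distributions
stat, p = ks_2samp(dynamic_range(smooth), dynamic_range(white))
```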
> **Did the authors explore variants of the model...extremely reduced.**
The reviewer makes an excellent suggestion. Inspired by this, we implement and test a new form of top-down modulation that pools information from all cortical layers before computing time-delayed top-down modulating factors as a non-linear transformation (one per layer) of this pooled input. This model achieved an overall accuracy of $92.35\%$ on vis-count, which is better than the metrics we report in the manuscript. However, we note that more work needs to be done to understand this better, as the biological analogues of this process are not apparent.
---
Rebuttal Comment 1.1:
Title: Read Rebuttal and keep rating
Comment: I just read through the authors' responses and am going to keep the rating I gave initially. I thank the authors for their additional details.
In particular, I like to see that the model with attention driven by all layers works better, as I think that fits our knowledge about biology better than feedback computed separately per layer.
However, I still think proper quantitative comparisons are necessary to show that this model does indeed behave like a biological brain. If the authors believe that entropy is a bad predictor for RT, they should compare it to something else, like a confidence score, and extract a sensible measure to predict RTs. They cite papers on how to do this. And for the dynamics I would really like to see some direct comparisons showing in what respects the dynamics are similar or different to some concrete actual measurements of brain data.
The limited evaluation in the rating text matches my impression quite well.
---
Reply to Comment 1.1.1:
Title: further experiments with a better motivated reaction time metric
Comment: Per your comment and the current timeframe of the discussion period, we have implemented a reaction time metric inspired by evidential learning (EDL) theory, as proposed in Goetschalckx et al. (2024). Furthermore, we render a version of the task presented in Fig. 7 with up to 15 distractors (we reduce the radii of the discs to achieve this while maintaining the overall canvas size).
We test our model (that we trained with EDL on the original dataset) **zero-shot** on the new stimuli to study the effect of increasing the number of distractors on the model reaction time. Our model achieves an impressive $\sim 70$% OOD generalization accuracy. We present results from "correct" trials below. We find that the linear RT trend for the low target-distractor difference trials firmly holds as we increase the number of distractors. The slope for low target-distractor trials is positive and significantly higher than that obtained for the high target-distractor trials.
| # distractors | $\xi_{cRNN}$ (Low T-D difference) | $\xi_{cRNN}$ (High T-D difference) |
| ----------- | ----------- | ----------- |
| 1 | 0.11 | 0.15 |
| 3 | 0.21 | 0.16 |
| 5 | 0.37 | 0.20 |
| 7 | 0.51 | 0.21 |
| 9 | 0.58 | 0.24 |
| 11 | 0.69 | 0.24 |
| 13 | 0.76 | 0.28 |
| 15 | 0.77 | 0.31 |
We hope we’ve convinced the reviewer of our framework's usefulness and general flexibility/applicability, regardless of the particular reaction time metric we choose to incorporate (a vast and ongoing area of research).
We would appreciate it if the reviewer could update their score if this response eases their concerns.
Strengths: + The paper presents a noteworthy contribution with the proposed DCNet architecture, which creatively integrates biological anatomical insights with deep neural networks. The authors provide a clear and compelling motivation for their design, effectively addressing the need for contextual modulation of visual responses.
+ The experimental results demonstrate the promise of DCNet as a model of human visual search behavior. Notably, the findings in Figure 2 (vis-count) and Figure 6 (visual search with distractors) showcase DCNet's ability to learn generalizable contextual visual search solutions, outperforming baseline models and aligning with human behavioral patterns.
+ A particularly intriguing aspect of the paper is the emergence of repeated trajectories and attractor states for cued stimuli, as seen in Figure 3. This phenomenon offers valuable insights and warrants further exploration.
+ Figure 4 is a promising visualization validating that task-relevant information is stored in the low-rank higher area modulation. It is interesting to see how the early layer lesions are impacting shape selectivity, which one expects to emerge in higher layers of the perception stream.
Weaknesses: - While the proposed DCNet model is intriguing, it would be beneficial to more explicitly highlight its anatomical constraints in relation to prior work. Conversely, the authors could further emphasize the novelty and significance of the low-rank modulation aspect, which appears to be a unique contribution. Additional signposting would help to clarify its impact on the observed results.
- To strengthen the paper, the authors may consider including ablation studies to dissect the contributions of individual components within the DCNet architecture. This would provide valuable insights into which elements are crucial for the model's high performance and alignment with human behavior.
- The authors cite relevant models in references [18-21], but it is unclear how these models perform on the vis-count task. A more comprehensive comparison would be helpful, including a detailed discussion of the differences between these models and DCNet. This would enable a more nuanced understanding of the proposed model's advantages and limitations.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to my review above.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their largely positive comments. We believe the suggestions raised by the reviewer will increase the quality of the manuscript, and we now provide additional analyses and responses to them.
> **Explicitly highlight DCNet’s anatomical constraints relative to prior work**
Thanks for this comment. Also, in line with comments from the other reviewers, we now provide a full model specification in the general response. We will make sure to highlight and signpost anatomical facets of our model that are unique including: distinct E and I subpopulations, lateral recurrent feedback, learnable neuron time constants per cell-type, and long-range top-down feedback conceptualized as multiplicative low-rank perturbations.
> **Ablation studies to dissect individual components of DCNet architecture**
We agree with the reviewer. In this regard, we now perform two ablation studies.
First, we train a version of the model where we lesion the Inhibitory cell populations in every layer. We analyze the internal input-driven dynamics of this model variant in Fig. R2. We train DCNet (Lesioned Inhibition) on vis-count (color condition, in the interest of time). While DCNet (Lesioned Inhibition) learns the task ($95$% accurate), we find that neurons have a significantly smaller dynamic range and are comparatively less stable (details of these analyses are given in our response to **qySN**) when driven for longer time periods.
Second, we train a version of the model with lesioned top-down feedback. DCNet (Lesioned Top-down) drops significantly in performance to $38$%, further highlighting the importance of top-down feedback.
> **Unclear how relevant models [18 - 21] perform on vis-count task and differences between those models and DCNet**
Thanks for raising this point. We did not evaluate these models on vis-count as none of these models work within a cueing-paradigm. We will, however, make sure to highlight aspects of these models that are similar to our work. [18, 19, 20] are single-layer convolutional recurrent neural network models with lateral feedback and distinct E/I populations. [21] is an extension of [20] to include a feature processing hierarchy. All these models included ML-esque normalization operations (such as BatchNorm or LayerNorm) to impart stability during training and none of these models included cell-type specific learnable integration constants as well as time-delayed top-down modulation.
---
Rebuttal 2:
Title: feedback on our rebuttal
Comment: Thanks again for your positive evaluation of our manuscript. We hope you had a chance to review our rebuttal and the analyses presented there in. If you believe we had addressed your concerns, we would appreciate it if you can increase your score. Many thanks! | Summary: The paper presents a model for how high levels of (presumably cortical) representation modulate lower levels. A multilayer neural network with recurrent connections both within each layer and between layers is trained on a visual cue-delay-search task. The phenomenology of the model is then studied and characterized, showing how low level representations are modulated by task demands, which makes predictions for experiments.
Strengths: - Mechanisms motivated by behavioral and physiological results
- Evaluations based on psychophysical tasks
- Relatively small compared to other deep learning models
- Attempts to understand recurrent interactions in deep networks, currently missing from much of the literature.
Weaknesses: - The framing of the central question of the paper in terms of a “modulatory homunculus” that must decide what features to attend to in lower-levels seems unnatural and odd. It seems exclusively oriented toward modeling a specific laboratory task. But what about natural vision? It would seem these recurrent connections must always be in play for myriad tasks in daily life. It would have been more compelling to see the model trained on a broader range of tasks - e.g., visual scene analysis - rather than this specific laboratory task.
- There are many recurrent interactions in the model, but the results focus more on characterizing the phenomenology of the model as opposed to what is learned by the recurrent weight matrices, which from a mechanistic point of view would seem most interesting. It would also be worthwhile to start with a minimal model that demonstrates the principles of what is being learned, as opposed to throwing it all in the kitchen sink of the model shown in Figure 1.
- Because the model has so many components and it is not well-explained in the main body (including not defining all the variables in the equation - see below), by the time you get to results there is still no intuition for how the model is cued and performs the task, or how it learns.
- Unclear if the comparison to models without time dynamics and explicit cuing is informative.
Other comments:
- homunculus is misspelled both times
- 65: I don’t think there are explicit testable predictions/potential experiments that come from the model anywhere in the paper
- 110: Even if there is more model specification in appendix, one should at least define all the variables and give basic intuition for the equations. What is r_1, r_2 ? W? Where are the time constants? Not clear how the variables correspond to interneurons, lateral connections, etc. What is the role of the spatial pooling operator (not explained even in the appendix)?
- Fig 2b: does not seem to actually outperform ResNet, even though it seems like it might have an advantage over ResNet because of cuing (though this is unclear because they don't explain cuing)
- 120: Besides the Conv RNN model (maybe?), are these comparable baselines if they have no temporal dynamics?
- 144: Don't seem to describe the novel cues and scenes anywhere. How different are they from the training data? Is a 55% accuracy impressive?
- 148: How do they know it included the cyan cylinder and not another object (and therefore conclude it's an illusory conjunction)? Could it also be confused by the occlusion of the blue cube?
- 169: "We believe"... this whole paragraph seems like speculation based on looking at Fig. 2, but no actual proof/argument?
- 184: Unclear what "cortical gradients" means, I'm assuming trajectory in PCA space? Is it that interesting/surprising that trajectories are different for color and shape given the model presumably would have to disentangle in order to get high accuracy?
Technical Quality: 2
Clarity: 2
Questions for Authors: see above
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and insightful comments. We provide a point-by-point response below. Your comments have made the manuscript stronger. We hope we have addressed your concerns sufficiently for you to revise your score.
>**Model mechanisms are motivated by physiology/behavior and an attempt is made to understand recurrent interactions in deep neural networks which is missing from much of the literature.**
We are particularly happy that the reviewer pointed this out. One of our motivations in pursuing this line of work is to tackle the increasing divergence between successful deep learning approaches that model visual computations and biological visual processing. We will make sure to highlight this in our discussion.
> **What about natural vision? Why focus on a “laboratory task”?**
Thanks for raising this point. It is true that feedback processes in the visual system are critical for a variety of functions. Having said that, we wish to highlight that our approach, to the best of our knowledge, is among the first to tackle contextual cueing paradigms in a realistic manner.
Our focus was on the problem of how models can learn sensory representations of “cue”-ed attributes to look for in a “scene”. Please note that these attributes may not directly be present in the visual scene. For example, when cued for a specific color, that color may (or may not) exist in the scene under various lighting conditions, shadows, occlusions, object textures, material properties, and at different scales. To do this, one needs to be able to generate systematic variations on a theme. Although vis-count may not be “naturalistic” in a traditional sense, it contains non-trivial variations that are important to study. We highlight that there is a lack of similar datasets for naturalistic visual search at scale.
We do hope to build towards naturalistic scale, and as a first step, we now try training our sensory backbone on naturalistic object recognition tasks (without standard ML bells and whistles such as model ensembling, test time augmentations, etc.). We report an accuracy of 84.79% on CIFAR-10 (Fig. R4). This is only a preliminary result, but one that strongly demonstrates the potential of our framework to scale up.
> **The focus is on phenomenology as opposed to gaining mechanistic insights.**
Point well taken. We believe phenomenology and mechanism are overlapping, equally interesting things to study. We have now performed a host of quantitative analyses to gain insights into the role of inhibitory influence in determining the dynamic range and stability of the excitatory neurons and co-tuning (please see our response to **qySN**). Additionally, we also highlight a macroscopic gradient in learned time constants in Fig. R1.
> **``Kitchen sink” approach to Figure 1.**
We wish to point out that the anatomical constraints that we include are simplified forms of well-studied elements in biophysics. Our ablation studies make it clear that the design constraints here (separate excitatory and inhibitory populations, top-down feedback, learnable dynamics, etc.) are both necessary for model performance and facilitate comparisons to relevant neuroscience literature.
> **Intuition for how the model is cued and performs the task, or how it learns**
We apologize for the lack of clarity in this aspect. With our revised model description (provided in the general response), we have tried to make this point clearer.
Both cues and scenes are visual inputs ($\in \mathbb{R}^{128 \times 128 \times 3}$) provided to Layer 1 of our model. Cues are persistently provided to the network for the first $T$ discrete time steps, followed by scenes for the next $T$ time steps. The output activities of the last layer at the last time step ($t = 2T$) are transformed into logits, and a supervision signal is provided via a cross-entropy loss (ground-truth labels are counts from 0 to 5).
Intuitively, our model learns to “up-modulate” features in the scene that resemble features of the cue, and ultimately use these up-modulated features to inform its final output. To do so, it needs to learn a disentangled feature basis (shared between cues and scenes as they are processed by the same backbone) because aspects of the same scene can be up- or down- modulated differentially based on the cue. This is what we try to expose in Figure 3.
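To make the cue-then-scene protocol concrete, below is a minimal, hypothetical sketch. The toy sizes, the stand-in tanh RNN, and all names are our own illustration; the actual model uses the E/I dynamics and convolutional layers specified in the general response.

```python
import numpy as np

class TinyRecurrentCounter:
    """Stand-in for the full model: one tanh RNN layer plus a linear
    readout over counts 0..5 (hypothetical sizes, illustration only)."""
    def __init__(self, d_in=16, d_h=32, n_counts=6, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(d_h, d_in))
        self.W_h = rng.normal(scale=0.1, size=(d_h, d_h))
        self.W_out = rng.normal(scale=0.1, size=(n_counts, d_h))

    def init_state(self):
        return np.zeros(self.W_h.shape[0])

    def step(self, x, h):
        return np.tanh(self.W_in @ x + self.W_h @ h)

    def readout(self, h):
        # Logits over counts; trained with cross-entropy vs. the true count.
        return self.W_out @ h

def run_trial(model, cue, scene, T):
    h = model.init_state()
    for t in range(2 * T):
        x = cue if t < T else scene  # cue for steps [0, T), scene for [T, 2T)
        h = model.step(x, h)
    return model.readout(h)          # supervision only at the last step t = 2T

rng = np.random.default_rng(1)
cue, scene = rng.normal(size=16), rng.normal(size=16)
logits = run_trial(TinyRecurrentCounter(), cue, scene, T=4)
```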
> **Unclear if the comparison to models without time dynamics and explicit cuing is informative**
We believe that these comparisons are important particularly because they help establish the upper bounds of expected performance. Our “implicit” cueing condition is one where the cue and scene are provided to the model **at the same time**. In practice, for a model without time dynamics, such as the ResNet and the Transformers, this is realized by stacking these two inputs before passing them into the model. This allows the model to perform direct comparisons between the features of these two inputs, as opposed to the time-delayed comparisons that happen in the explicit-cuing condition. Hence, this provides a potential upper bound on training performance. Interestingly, while some of these implicit models can learn the task well (such as the ResNet in Fig. 2b), they fail to generalize (Fig. 2c).
---
Rebuttal 2:
Title: Rebuttal, Part 2
Comment: > **homunculus is misspelled both times**
Thank you for pointing this out. We have made the correction.
> **No explicit testable predictions/potential experiments that come from the model anywhere in the paper (L65)**
We believe our framework is among the first to link physiology to behavior (through computations) and is a necessary first step towards hypothesis generation. We will clarify this further in the manuscript.
> **Lack of clarity in model specification**
We apologize for this confusion. We take this comment seriously and we now spell out the entirety of our model formulation in the general response. We are happy to answer any further questions the reviewer might have.
> **Fig 2b: does not seem to actually outperform ResNet, even though it seems like it might have an advantage over ResNet because of cuing (though this is unclear because they don't explain cuing)**
We would like to clarify this point. In Fig. 2b what is shown is the performance of the models when the cues are in-distribution. As we point out in our answer above, the implicitly-cued models are supposed to serve as a potential performance **upper-bound**. The implicit-cue condition is computationally easier compared to the explicit-cue condition. We point the reviewer to Fig. 2c which shows that on a generalization condition, when the cues are out-of-distribution the same ResNet model falls to chance.
> **Besides the Conv RNN model (maybe?), are these comparable baselines if they have no temporal dynamics?**
Thanks for raising this point. As we discussed above, the baselines were primarily constructed for two reasons. First, we wanted to understand performance bounds and verify that vis-count is a non-trivial computational challenge. Second, we wanted to study the computational implications (and benefits) of two components of our framework: temporal dynamics, and top-down feedback. The Conv RNN model has temporal dynamics, but no top-down modulation. The feedforward baselines have neither.
> **144: Don't seem to describe the novel cues and scenes anywhere. How different are they from the training data? Is a 55% accuracy impressive?**
Thanks for raising this point. The scenes are split into a training and test set. They are i.i.d. samples from our data generation process. None of the scenes used for evaluation were seen by the model during training. We perform two types of tests with the cues. The weak generalization experiment contained cues (colors, shapes, conjunctions) that the model had seen during training (but for different scenes). The stronger generalization experiment used cues never encountered in training. Specifically, for our stronger generalization experiment, we keep the color cues green and blue; the shape cue cube; and conjunctions of these out of the training set.
Chance performance for all of these tasks is $16.67$%. An accuracy of $55$% is well above chance and significantly higher than the performance of models with orders of magnitude more parameters (Fig. 2c; hatched data).
We will add these clarifications to the manuscript.
> **148: How do they know it included the cyan cylinder and not another object (and therefore conclude it's an illusory conjunction)? Could it also be confused by the occlusion of the blue cube?**
The reviewer is correct in pointing this out. Identifying either the blue cube or the cyan cylinder would constitute a “binding error”. We highlighted this example since the feature dissimilarity between the target object and the distractors (apart from the cyan cylinder and blue cube) is very high and is unlikely to be the cause of the error. However, the reviewer’s general point is well taken and we will perform an in-depth model explainability analysis. In the meantime, we will reword this sentence to reflect that this is a possibility and not a certainty.
“When cued to find “blue cylinders”, the model mistakenly appears to include either the cyan cylinder or the occluded blue cube in its count, which would constitute an illusory conjunction.”
> **184: Unclear what "cortical gradients" means, I'm assuming trajectory in PCA space? Is it that interesting/surprising that trajectories are different for color and shape given the model presumably would have to disentangle in order to get high accuracy?**
We believe there is a misunderstanding here. In Fig. 4 we show the results of a lesion experiment we performed, where we systematically cut-out modulations from each layer of the model. Data presented in Fig. 4 are not trajectories in the PCA space. We find that lesioning later layer modulations in the model significantly impacted accuracy on the “shape” trials but not the “color” trials. Lesioning early layer modulation significantly impacted accuracy on “color” trials. From these we conclude that color selectivity emerges early while shape selectivity emerges late. The selectivity profile across layers of the model is what we refer to as a “cortical gradient”.
---
Rebuttal 3:
Title: Rebuttal, Part 3
Comment: > **169: "We believe"... this whole paragraph seems like speculation based on looking at Fig. 2, but no actual proof/argument?**
We believe that the reviewer is referring to Fig. 3 here, and not Fig. 2. In Fig. 3a, we consider model trajectories of the *same* several hundred scenes (individual trajectories shown in gray) when modulated by four different cues (subpanels). Our point is better illustrated by Fig. R3. When a given scene is cued with different colors, dynamics are driven to context-relevant states though the bottom-up responses from the scene are exactly the same. This is possible only by the inactivation of context-irrelevant subspaces.
---
Rebuttal 4:
Title: Any response to our rebuttal?
Comment: We hope you had a chance to review our rebuttal. Given your detailed feedback and suggestions, we believe our paper has improved. Please let us know if there is anything else that we can clarify. We kindly ask you to consider revising your score if you agree that the primary concerns raised in the review were addressed. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time in reading our manuscript and for their extensive feedback. In this general response, we address some common themes across the reviews. We provide detailed answers to specific reviewers' comments in subsequent responses. To go with this rebuttal, we also provide a PDF with additional figures labeled **Fig. R1-R4**.
First, we sincerely thank the reviewers for their generally positive feedback. All reviewers noted the importance and novelty of our model construction, as well as the utility in understanding recurrent dynamics in deep networks and the promise of building a new class of biologically motivated models with top-down modulation.
One common critique across reviews pointed to the lack of details with respect to the model specification as well as quantitative analyses.
Towards addressing questions raised by the reviewers, we perform and include several new analyses in this rebuttal. Specifically,
1. We perform two kinds of ablation studies on DCNet. We train one version of DCNet with its inhibitory populations lesioned and one version of DCNet with top-down feedback lesioned. (**7BUE**)
1. We perform a series of mechanistic interpretability analyses to better understand the learned cell type specific time constants in our model, quantify the dynamic range and stability of excitatory neurons (in the presence and absence of inhibitory interactions), and quantify the degree of co-tuning between the excitatory and inhibitory populations in the model. (**BT2x**, **qySN**)
1. We train the DCNet backbone model on CIFAR10, an object recognition task, and report a competitive performance of $84.79$%. (**BT2x**)
1. We implement an alternative form of top-down modulation that factors in information from all layers. (**qySN**)
1. We include a detailed model specification (below) as requested by multiple reviewers (**BT2x**, **BdqG**).
We hope you agree that our manuscript has improved through your feedback and that our findings will have a significant impact on computational cognitive neuroscience.
### Model specification
Neurons in our model are either excitatory (exc) or inhibitory (inh). $i$ and $j$ are indices used to describe excitatory and inhibitory cell types respectively. $(x, y)$ describes a specific spatial location. $l$ denotes the network layer. $z^{(l)}$ is the feedforward input to layer $l$. $h$ refers to a neuron's instantaneous state. Specified below are the governing dynamics for the neurons in our model.
$$
\tau\_{\text{exc}\_i}^{(l)} \frac{d h\_{\text{exc}\_i}^{(l, x, y)}}{dt} = -h\_{\text{exc}\_i}^{(l, x, y)} + g\_{\text{exc}} ( z^{(l)}, h\_{\text{exc}}^{(l)}, h\_{\text{inh}}^{(l)}, i, x, y )
$$
$$
g\_{\text{exc}} ( z^{(l)}, h\_{\text{exc}}^{(l)}, h\_{\text{inh}}^{(l)}, i, x, y ) = f( [ W\_{\text{input} \to \text{exc}}^{(l)} z^{(l)} + \lfloor W\_{\text{exc} \to \text{exc}}^{(l)} \rfloor\_{+} h\_{\text{exc}}^{(l)} + \lfloor W\_{\text{inh} \to \text{exc}}^{(l)} \rfloor\_{-} h\_{\text{inh}}^{(l)} ]\_{i, x, y} + b\_{\text{exc}\_i})
$$
$$
\tau\_{\text{inh}\_j}^{(l)} \frac{d h\_{\text{inh}\_j}^{(l, x, y)}}{dt} = -h\_{\text{inh}\_j}^{(l, x, y)} + g\_{\text{inh}} ( z^{(l)}, h\_{\text{exc}}^{(l)}, h\_{\text{inh}}^{(l)}, j, x, y )
$$
$$
g\_{\text{inh}} ( z^{(l)}, h\_{\text{exc}}^{(l)}, h\_{\text{inh}}^{(l)}, j, x, y ) = f( [ W\_{\text{input} \to \text{inh}}^{(l)} z^{(l)} + \lfloor W\_{\text{exc} \to \text{inh}}^{(l)} \rfloor\_{+} h\_{\text{exc}}^{(l)} ]\_{j, x, y} + b\_{\text{inh}\_j})
$$
$\tau\_{\text{exc}\_i}^{(l)}$ and $\tau\_{\text{inh}\_j}^{(l)}$ are cell-type specific learnable neural time constants. Synaptic connections $\boldsymbol{W}$s are sparse matrices on which we impose translational invariance. These are, in practice, realized as convolutions. $b\_{\text{exc}\_i}$ and $b\_{\text{inh}\_j}$ are excitatory and inhibitory cell-type specific thresholds. $f(.)$ is a non-linear activation function. We use the hyperbolic tangent as our activation function $f$. An average pooling operation $\texttt{Pool}$ is applied to layer pyramidal outputs ($h\_{\text{exc}}$) to increase the receptive field size by a factor of two.
$$
z^{(l + 1)}[t] = \begin{cases} \texttt{Pool} ( h\_{\text{exc}}^{(l)} [t]) \odot \Gamma(\xi(h\_{\text{exc}}^{(l)}[t - T])) & \text{if } t \geq T \\ \texttt{Pool} ( h\_{\text{exc}}^{(l)} [t]) & \text{otherwise} \end{cases}
$$
We make discrete-time approximations to train our model. The cue is presented first to the network ($z^{(1)}[0]$) and the dynamics are unrolled for $T$ steps, followed by scene presentation ($z^{(1)}[T]$) for another $T$ steps. While the scene is presented, inputs to each layer are modulated as shown above, where $\xi(.)$ is a pooling operator that computes the average activity per cell type across all $(x,y)$. $\Gamma(\textbf{e})$ is the low-rank modulation function defined as follows:
$$
\Gamma(\textbf{e}) = \textbf{e} \odot \sigma \left( [\textbf{W}\_{l,1} \textbf{e}^{T} + \textbf{b}\_1] \otimes [\textbf{W}\_{l,2} \textbf{e}^{T} + \textbf{b}\_2 ] \right)
$$
Here, $\textbf{W}\_{l,1}, \textbf{W}\_{l,2}$ are learnable linear projections and $\textbf{b}\_1, \textbf{b}\_2$ are learnable biases. $\otimes$ denotes outer product and $\odot$ denotes pointwise scaling. By construction, the output of $[\textbf{W}\_{l,1} \textbf{e}^{T} + \textbf{b}\_1] \otimes [\textbf{W}\_{l,2} \textbf{e}^{T} + \textbf{b}\_2 ]$ is a low-rank matrix.
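As an illustration only, the E/I dynamics above can be simulated with a forward-Euler discretization. The sketch below uses dense weights at a single spatial location and hypothetical toy sizes (the actual model realizes the $\boldsymbol{W}$ matrices as convolutions across space):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: one layer, one spatial location, dense weights.
n_in, n_e, n_i = 8, 6, 4
W_in_e = rng.normal(size=(n_e, n_in))   # W_{input->exc}
W_ee = rng.normal(size=(n_e, n_e))      # W_{exc->exc}
W_ie = rng.normal(size=(n_e, n_i))      # W_{inh->exc}
W_in_i = rng.normal(size=(n_i, n_in))   # W_{input->inh}
W_ei = rng.normal(size=(n_i, n_e))      # W_{exc->inh}
b_e, b_i = np.zeros(n_e), np.zeros(n_i)
tau_e, tau_i = 2.0, 1.5                 # cell-type specific time constants

f = np.tanh
pos = lambda W: np.maximum(W, 0.0)      # floor(W)_+ : excitatory weights >= 0
neg = lambda W: np.minimum(W, 0.0)      # floor(W)_- : inhibitory weights <= 0

def euler_step(h, tau, g, dt=1.0):
    """Forward-Euler update of tau * dh/dt = -h + g."""
    return h + (dt / tau) * (-h + g)

z = rng.normal(size=n_in)               # feedforward input to this layer
h_e, h_i = np.zeros(n_e), np.zeros(n_i)
for _ in range(20):
    g_e = f(W_in_e @ z + pos(W_ee) @ h_e + neg(W_ie) @ h_i + b_e)
    g_i = f(W_in_i @ z + pos(W_ei) @ h_e + b_i)
    h_e, h_i = (euler_step(h_e, tau_e, g_e),
                euler_step(h_i, tau_i, g_i))
```

With `f = tanh` and `dt <= tau`, the states stay bounded, consistent with the stabilizing role of the leak term.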
Pdf: /pdf/fe8d3f574c08702d79d568eb346c6d916b19464f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stylus: Automatic Adapter Selection for Diffusion Models | Accept (oral) | Summary: This work presents a novel system, Stylus, designed to enhance the efficiency and effectiveness of generating high-quality images using diffusion models like Stable Diffusion. The key challenge addressed by this work is the automatic selection and composition of relevant fine-tuned adapters from a vast pool of over 100,000 available adapters, which are often poorly documented and highly customized. This work advances the field by providing a robust, automated solution that leverages the vast number of available adapters.
Strengths: - The creation of StylusDocs, a curated dataset with 75K adapters and pre-computed embeddings, adds substantial value to provide a rich resource for further experimentation and development.
- Comprehensive Evaluation: The paper provides a thorough evaluation of Stylus across multiple metrics (CLIP, FID) and diverse datasets (Microsoft COCO, PartiPrompts). This robust evaluation framework enhances the credibility of the claimed performance improvements.
- Open Source and Reproducibility: By planning to release Stylus and StylusDocs as open-source resources, the authors contribute to the transparency and reproducibility of their research. This aligns well with the community’s values and encourages further developments based on their work.
Weaknesses: - Unclear Motivation for the Method: In the refiner step, the paper does not clearly explain why Gemini Ultra is trusted to generate better adapter descriptions. If other multimodal language models (MLLMs) were used, how would the results differ?
- Incomplete Ablation Study: The ablation study is not comprehensive as it does not include an ablation of the refiner component. Understanding the impact of the refiner step on the overall performance of Stylus is crucial.
- Quality Assurance of Adapter Descriptions: The paper does not provide sufficient details on how the quality of the adapter descriptions generated by the refiner is ensured. It is unclear whether any validation or verification steps were taken to confirm the accuracy and reliability of these descriptions.
- Insufficient Description of Masking Process: The description of the masking process is not detailed enough. Specifically, the meaning of α_j in Equation 2 and the function Mask() are not adequately explained. Additionally, the masking process is not reflected in Figure 2, which outlines the Stylus algorithm.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Motivation of the selected MLLM
- Additional ablation study
- Check the quality of adapter descriptions
- Details on masking process
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Potential bias and fairness issues may be related to this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thoughtful review and the constructive feedback! We hope the following clarifications address all the questions raised.
**Quality Assurance for Adapters.** We have taken careful preemptive measures to filter out problematic adapters and have outlined in Appendix A.7 a continual curation process, where we invite community members to report any adapters missed in our initial curation. Below, we highlight some key preventative measures for quality assurance and refer readers to Appendix A.7 and A.8 for a full discussion on safety and reliability of Stylus.
- **Safety.** Although LLM safety remains an actively evolving research area [1], StylusDocs employs a multi-stage filtering pipeline to identify and exclude problematic adapters. First, all explicit adapters tagged by Civit.ai are excluded from StylusDocs (Sec 3.2). Second, we use filters from Google’s VertexAI API to reject unsafe adapters based on LLM-generated descriptions. This especially catches problematic adapters that have innocuous original descriptions (Sec. A.7).
- **Accuracy/Reliability.** We note that, without access to the original fine-tuning dataset, we cannot guarantee the descriptions are completely error-free. However, we manually inspect commonly selected adapters in StylusDocs, blacklisting adapters that produce low-quality images or cases where the StylusDocs description is inconsistent with observed adapter behavior (Sec A.4). Furthermore, adapter authors may be incentivized to oversell their models’ abilities. As such, Stylus’s refiner takes in example images from Civit.ai’s model card as a grounding mechanism to generate more accurate descriptions (Sec A.1).
Even with the safety measures we’ve taken, we acknowledge Stylus has the same risks of misuse as other public-domain image generation tools, including improper use for misinformation, producing explicit content, and reproducing copyrighted material from the training data. We emphasize the research prototype is not meant to be used in production without further application-informed guardrails.
**Refiner Ablation.** In the following table containing CLIP and FID scores, we ablate Stylus's refiner:
| Baselines (CFG=6) | CLIP: ↑ is better | FID: ↓ is better |
|--------------------------|-----------------------|-----------------------|
| SD v1.5 | 27.22 | 23.96 |
| No-Refiner | 24.91 (-2.31) | 24.26 (+0.3) |
| Gemini-Ultra Refiner | 27.25 (+0.03) | 22.05 (-1.91) |
| **GPT-4o Refiner** | **28.04 (+0.82)** | **21.96 (-2.00)** |
The baselines are:
- SD v1.5 - The base Stable Diffusion model with the RealisticVision checkpoint.
- No-Refiner: Use base author-provided descriptions from Civit.ai or Huggingface.
- Gemini-Ultra Refiner: Use Gemini-Ultra as the Refiner’s VLM to generate better adapter descriptions. This is the version of Stylus presented throughout our paper.
- GPT-4o Refiner: Use GPT-4o as the Refiner’s VLM to generate better adapter descriptions.
These results show that a refiner is indeed important for textual alignment with the prompt. In fact, without a refiner VLM, Stylus performs poorly: it chooses adapters that the composer thinks are aligned with the prompt but are not in practice. As a result, the selected adapters hurt textual alignment (CLIP) and image quality (FID), and Stylus performs worse than base SD.
Furthermore, the GPT-4o baseline performs much better than Gemini-Ultra, showing that better refiner descriptions help the composer select the right adapters. This also suggests that Stylus’s performance is not tied to Gemini-specific capabilities and benefits from further improved vision-language reasoning capabilities.
**Choice of Gemini for Refiner.** We chose the Gemini class of models since it has mature safety guardrailing. Specifically, Google’s VertexAI API provides stringent safety settings to block explicit content for the input prompt. Safety filters helped us filter out around 30% of original adapters that were tagged as non-explicit by Civit.ai.
**Masking.** The composer decomposes a prompt into tasks and assigns highly aligned adapters per task. Next, a subset of candidate adapters is selected via masking for image generation. For each task, a mask either selects A) just one of the task’s adapters, B) all of the task’s adapters, or C) none of the adapters. To get the final selection of adapters, we randomly sample a mask for each task and merge the identified adapters into the original base model.
Regarding merging adapters, each adapter’s weights are first multiplied by the refiner’s recommended adapter weight, α_j (Eqn. 2). For example, [Food Elegant Style LoRA](https://civitai.com/models/127450?modelVersionId=139441) recommends a weight of α=0.7. Finally, to merge adapters into the base model, adapter weights are *averaged* per task and then *summed* across tasks.
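A hypothetical sketch of this selection-and-merging step (function and variable names are our own, for illustration; the output maps each selected adapter to the coefficient applied to its weight delta before adding it onto the base model):

```python
import random

def select_and_merge(task_adapters, alphas, seed=0):
    """task_adapters: {task: [adapter_name, ...]} from the composer;
    alphas: {adapter_name: refiner-recommended weight alpha_j}.
    Returns {adapter_name: merge coefficient}."""
    rng = random.Random(seed)
    coeffs = {}
    for task, adapters in task_adapters.items():
        # Per task, a mask keeps one adapter, all of them, or none.
        choice = rng.choice(["one", "all", "none"])
        if choice == "one":
            kept = [rng.choice(adapters)]
        elif choice == "all":
            kept = list(adapters)
        else:
            kept = []
        # Average within a task (divide by the number kept),
        # then sum across tasks (accumulate into coeffs).
        for a in kept:
            coeffs[a] = coeffs.get(a, 0.0) + alphas[a] / len(kept)
    return coeffs
```

The merged model would then be `W_base + sum(coeffs[a] * delta_W[a])` over the selected adapters.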
We appreciate your feedback and believe these clarifications should address your concerns. We are open to further discussions to improve the paper. Thank you once again for your valuable insights!
[1] Hendrycks, Dan, et al. "Unsolved Problems in ML Safety." arXiv, 29 Sep. 2021, arXiv:2109.13916.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Most of my concerns have been addressed. However, I suggest adding more details about the masking process to the manuscript for clarity. I'm pleased to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thanks! We will definitely revise the description of the masking process in the manuscript to reflect the clarification we provided in the rebuttal. | Summary: The paper addresses the challenge of selecting and composing relevant adapters for generating high-fidelity images with diffusion models. Stylus introduces a three-stage process: refining adapter descriptions, retrieving relevant adapters based on user prompts, and composing them to create the final image. The paper highlights the development of StylusDocs, a dataset featuring 75K adapters with pre-computed embeddings. Evaluation results show that Stylus outperforms baseline models, achieving higher visual fidelity, textual alignment, and image diversity. The system is efficient and suitable for various image-to-image tasks, including translation and inpainting, demonstrating its versatility and effectiveness in improving image generation.
Strengths: - This is a great paper. Original, high quality, clear, and significant.
- The use of adapters and PEFT will/should continue to increase in the future. The authors present a scalable method to improve text-to-image generation.
Weaknesses: None. While adapters have been used in the past to improve image generation, this paper provides a much more coherent strategy to integrate them into VLMs.
Technical Quality: 4
Clarity: 4
Questions for Authors: None.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: None. Great work, Authors!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thoughtful and enthusiastic review!
We agree the incorporation of adapters/PEFTs [1] will continue to increase and that automatic adapter selection will be critical for managing and navigating the growing ecosystem of fine-tuned models. Stylus demonstrates improved diversity and visual quality evaluated quantitatively with automated metrics (CLIP/FID scores) as well as qualitatively via human evaluation and VLM as a judge.
Stylus provides a coherent strategy for composing and routing adapters for VLMs. Our retriever ablation (Sec. 4.3.1, Tab. 1) shows that naively selecting adapters (i.e. RAG) can lead to worse performance than the base SD checkpoint. Strategically composing and routing adapters unlocks a dimension of model performance previously underexplored.
We are excited to see LLMs used for dynamically selecting among models in related domains, including but not limited to:
- Automatic construction of agentic workflows/graphs. This includes using Stylus to decompose the task/prompt into a graph of subtasks and identifying which agent, among an ecosystem of agents, is best suited for each subtask.
- Routing between different base models from different providers to optimize the cost-performance tradeoff.
- Given a user prompt that requires composing multiple tools/functions, Stylus can identify, retrieve, and then compose the right sets of tools and functions for the LLM to invoke.
- Domain-specific fine-tuning is an emerging approach to reduce hallucination [2]. Stylus can select the right domain fine-tuned model to maximize factuality.
We are open to further discussions to improve the paper. Thank you once again for your valuable insights!
[1] Hugging Face Team. "Parameter-Efficient Fine-Tuning (PEFT) with Hugging Face." GitHub, 2023, https://github.com/huggingface/peft.
[2] Tian, Katherine, et al. "Fine-tuning Language Models for Factuality." arXiv, 2023, arxiv.org/abs/2311.08401. | Summary: The paper proposes Stylus, an approach for automatically selecting and combining fine-tuned adapters on particular tasks to improve the quality of image generation given a prompt. To evaluate Stylus, the paper introduces StylusDocs, a curated dataset containing 75K adapters with pre-computed adapter embeddings. Both the qualitative and quantitative results show that the proposed method outperforms Stable Diffusion and other retrieval methods.
Strengths: - The paper explores an interesting topic of automatically selecting and combining fine-tuned adapters on particular tasks to improve the quality of image generation.
- Both the qualitative and quantitative results are promising. The proposed method improves over other methods in both human evaluation and automatic benchmarks.
Weaknesses: - Some details about the proposed method are missing, making it hard to reproduce. In particular, sections 3.3 and 3.4 about the composer and the masking are not very clear. How are the adapters selected? How is masking applied?
Technical Quality: 3
Clarity: 3
Questions for Authors: No questions beyond the one in the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not have any obvious limitations that were not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thoughtful and enthusiastic review!
**Reproducibility.** We plan to release Stylus and StylusDocs as open-source resources to ensure transparency and reproducibility.
**Adapter selection and masking.** The composer decomposes the prompt into tasks and maps highly relevant adapters to each task. For more details on the Chain-of-Thought prompt [1] used for Stylus’s composer, refer to Table 2.
Next, a subset of candidate adapters is selected via masking for image generation. For each task, a mask either selects A) just one of the task’s adapters, B) all of the task’s adapters, or C) none of the adapters. To get the final selection of adapters, we randomly sample a mask for each task and merge the identified adapters with the original base model.
Regarding the merging step, each adapter’s weights are first multiplied by the refiner’s recommended adapter weight, α_j (Eqn. 2). For example, [Food elegant style LoRA]( https://civitai.com/models/127450?modelVersionId=139441) recommends a weight of α=0.7. Finally, to merge adapters into the base model, adapter weights are *averaged* per task and then *summed* across tasks.
We appreciate your feedback and believe these clarifications should address your concerns. We are open to further discussions to improve the paper. Thank you once again for your valuable insights!
[1] Wei, Jason, et al. “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.” arXiv.Org, 28 Jan. 2022, https://arxiv.org/abs/2201.11903v6.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My concerns have been addressed and I will keep my initial score.
---
Reply to Comment 1.1.1:
Comment: Thanks! We appreciate your enthusiastic review and constructive feedback! | Summary: This paper works on post-training optimization for image generation with stable diffusion models. They proposed three stages, refiner, retriever and composer, to personalize a SD model for the prompt and thus to generate the perfect images. The experimental result indicate the potential of the proposed method.
Strengths: 1. The motivation makes sense to me and the idea is interesting. Previously, we usually explored prompt engineering to generate a good image, but this paper investigates adapters to finalize a suitable model that generates a good image given a fixed prompt.
2. The post-training optimization consists of three stages, Refiner, Retriever and Composer, the design of the entire method is reasonable.
3. The experimental results demonstrate the proposed method is promising.
Weaknesses: 1. Efficiency. In the paper, the authors compare Stylus with the typical SD checkpoint, but the efficiency of Stylus is not comparable to SD in terms of memory, CPU, and GPU resources.
2. Fairness. The paper claims advantages over typical SD in terms of diversity and quality, but the comparison is not quite fair: Stylus involves a model personalization process (the post-training optimization pipeline), while SD uses a static model checkpoint.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the description (or the optimized version) adequately represent the adapter?
2. In Eqn. 2, why do you directly use β=0.8? If the second term's value is much bigger than W_base, how do you deal with it?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thoughtful review and the constructive feedback! We hope the following clarifications and experiments address your questions.
**Efficiency.** We note that Stylus's efficiency for image generation in terms of CPU and GPU resources is near identical to the base Stable Diffusion (SD) model. Aside from merging adapters (e.g. LoRAs) into the base model, which is small, the inference computation remains the same. Additional CPU memory is required to store unmerged adapters, as evidenced by efficient LoRA serving systems such as SLoRA [1] and dLoRA [2].
We emphasize that users today rely on manual search to identify a helpful subset of adapters. Stylus automates the process of determining the best set of adapters for a given user prompt, improving user efficiency. Furthermore, with recent releases of fast, high-quality LLMs (e.g., GPT-4o-mini, Gemini 1.5 Flash), the composer's latency will continue to decrease over time. We profiled GPT-4o-mini as the composer, which is over 3x faster than Gemini 1.5. This is a small overhead compared to manual search over adapters.
**Fairness.** Stylus serves as an automatic enhancement to SD, leveraging additional training and data represented by LoRAs. Our retriever ablation (Sec. 4.3.1, Tab. 1) shows that naively selecting adapters (i.e. RAG) can lead to worse performance than the typical SD checkpoint. As such, we include base SD’s CLIP and FID scores as a reference for comparing different approaches to selecting and composing adapters.
**Adapter descriptions.** First, we note that, without access to the original fine-tuning dataset, we cannot guarantee the descriptions are completely error-free. However, we manually inspect commonly selected adapters in StylusDocs, blacklisting adapters that produce low quality images or cases where the StylusDocs description is inconsistent with observed adapter behavior (Sec A.4). Furthermore, we observe that over 80% of adapters on model platforms (Civit.ai/Huggingface) lack sufficient descriptions. As such, Stylus’s refiner takes in example images from Civit.ai’s model card as a grounding mechanism to generate more detailed and accurate descriptions (Sec A.1).
Furthermore, we add an additional refiner ablation that showcases that better adapter descriptions lead to performance gains. In the following table of CLIP/FID scores, we illustrate Stylus without and with refiner.
| Baselines (CFG=6) | CLIP: ↑ is better | FID: ↓ is better |
|--------------------------|-----------------------|-----------------------|
| SD v1.5 | 27.22 | 23.96 |
| No-Refiner | 24.91 (-2.31) | 24.26 (+0.3) |
| Gemini-Ultra Refiner | 27.25 (+0.03) | 22.05 (-1.91) |
| **GPT-4o Refiner** | **28.04 (+0.82)** | **21.96 (-2.00)** |
Here, the baselines are:
- SD v1.5: The base Stable Diffusion model with the RealisticVision checkpoint.
- No-Refiner: Use base author-provided descriptions from Civit.ai or Huggingface.
- Gemini-Ultra Refiner: Use Gemini-Ultra as the Refiner’s VLM to generate better adapter descriptions. This is the version of Stylus presented throughout our paper.
- GPT-4o Refiner: Use GPT-4o as the Refiner’s VLM.
The quality of author-provided descriptions is poor, leading to worse performance than the typical SD checkpoint. Improved refiner descriptions from GPT-4o can significantly boost Stylus's performance, achieving the best textual alignment (CLIP) and image quality (FID) across all baselines, surpassing our original Stylus (with Gemini-Ultra).
**Merging Adapters.** We clarify Eqn. 2 *averages* adapter weights per task and *sums* adapter weights across tasks. We take several measures below to ensure that the second term in Eqn. 2 does not grow too large:
- Our masking scheme reduces the number of adapters in the final composition of LoRAs. (Sec 3.4)
- Empirically, with the COCO dataset, we observed the composer identifies at most seven tasks with associated adapters. We also have the option in the composer’s prompt to limit the number of tasks (Tab. 2).
- β scales down the rate at which the second term grows. We determined β=0.8 prevents highly-weighted adapters from overriding other concepts specified in the prompt, a challenge discussed in Fig. 13(b).
We appreciate your feedback and believe these clarifications should address your concerns. We are open to further discussions to improve the paper. Thank you once again for your valuable insights!
[1] Sheng, Ying, et al. S-LoRA: Serving Thousands of Concurrent LoRA Adapters. arXiv:2311.03285, arXiv, 5 June 2024. arXiv.org, https://doi.org/10.48550/arXiv.2311.03285.
[2] Wu, Bingyang, et al. "dLoRA: Dynamically Orchestrating Requests and Adapters for LoRA LLM Serving." 18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24), USENIX Association, July 2024, pp. 911-927, Santa Clara, CA, www.usenix.org/conference/osdi24/presentation/wu-bingyang.
---
Rebuttal Comment 1.1:
Title: All of my concerns have been addressed
Comment: Thanks for the clarifications. All of my concerns have been addressed. I am raising the rating from weak accept to accept.
---
Reply to Comment 1.1.1:
Comment: We are glad that our rebuttal has addressed all your concerns, and we thank the reviewer for increasing the rating! | Rebuttal 1:
Rebuttal: We are grateful to all the reviewers for their insightful feedback and enthusiastic reviews! To name just a few comments, reviewers acknowledged that Stylus is novel,
> This is a great paper. Original, high quality, clear, and significant. (Reviewer N5AD)
> This work presents a novel system, Stylus (Reviewer wGTi)
> The idea is interesting … this paper investigates the adapters and to finalize a suitable model to generate good image given a fixed prompt. (Reviewer QQJ7)
timely and impactful,
> advances the field by providing a robust and automated solution (Reviewer wGTi)
> The use of adapters and PEFT will/should continue to increase in the future. (Reviewer N5AD)
especially for our open-source StylusDocs dataset,
> The creation of StylusDocs, a curated dataset with 75K adapters and pre-computed embeddings, adds substantial value to provide a rich resource for further experimentation and development. (Reviewer wGTi)
and is comprehensively evaluated.
> Comprehensive Evaluation: The paper provides a thorough evaluation of Stylus across multiple metrics (CLIP, FID) and diverse datasets (Microsoft COCO, PartiPrompts). This robust evaluation framework enhances the credibility of the claimed performance improvements. (Reviewer wGTi)
___
We’d also like to highlight some of the shared feedback across reviewer rebuttals.
**Impact of Refiner (QQJ7, wGTi).** Our additional ablation experiments demonstrate that Stylus’s performance benefits significantly from the high quality of adapter descriptions provided by the VLM-based Refiner.
| Baselines (CFG=6) | CLIP: ↑ is better | FID: ↓ is better |
|--------------------------|-----------------------|-----------------------|
| SD v1.5 | 27.22 | 23.96 |
| No-Refiner | 24.91 (-2.31) | 24.26 (+0.3) |
| Gemini-Ultra Refiner | 27.25 (+0.03) | 22.05 (-1.91) |
| **GPT-4o Refiner** | **28.04 (+0.82)** | **21.96 (-2.00)** |
Here, the baselines are:
- SD v1.5: The base Stable Diffusion model with the RealisticVision checkpoint.
- No-Refiner: Use base author-provided descriptions from Civit.ai or Huggingface.
- Gemini-Ultra Refiner: Use Gemini-Ultra as the Refiner’s VLM to generate better adapter descriptions. This is the version of Stylus presented throughout our paper.
- GPT-4o Refiner: Use GPT-4o as the Refiner’s VLM.
Without the refiner, the poor quality of author-provided descriptions results in Stylus performing worse than SDv1.5. However, the high quality of adapter descriptions from the GPT-4o Refiner results in the best performance, surpassing Gemini-Ultra Refiner, the original refiner VLM.
**Safety and Reliability of Adapter Descriptions (QQJ7, wGTi).** Stylus ensures adapter *safety* through a multi-stage filtering pipeline, initially excluding all explicitly tagged adapters by Civit.ai (Sec 3.2), followed by using Google's VertexAI API filters to reject unsafe adapters based on LLM-generated descriptions (Sec. A.7). For *reliability/accuracy,* Stylus's refiner uses example images from Civit.ai’s model card as a grounding mechanism to generate more accurate descriptions (Sec A.1), and we manually inspect and blacklist low-quality, explicit, or highly-inaccurate adapters (Sec A.4).
**Masking and Merging Clarification (QQJ7, rCTt, wGTi).** Reviewers asked for more clarity on Stylus's masking and merging steps. Recall that the composer decomposes the prompt into tasks and maps highly relevant adapters to each task. For each task, a mask selects either A) just one of the task's adapters, B) all of the task's adapters, or C) none of the adapters. The selected adapters are then merged, with adapter weights *averaged* per task and then *summed* across tasks. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series | Accept (poster) | Summary: This paper presents a lightweight pretrained model for time series (TS) forecasting with strong performance.
Strengths: Overall, this is a strong paper with extensive experimental work.
Weaknesses: Adaptive patching is a well-conceived design, although I am unclear why it is termed "adaptive" when the design appears to be fixed and pre-set. Perhaps "multiscale patching" would be a more accurate descriptor. Additionally, the writing should more clearly distinguish between the designs used for pretraining and those used solely for finetuning.
For the full-shot head probing, it is important to include comparisons with state-of-the-art end-to-end methods, as readers are likely interested in such comparisons.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you reviewer for the constructive feedback. Please find our response below
**Q1: Adaptive patching is a well-conceived design, although I am unclear why it is termed "adaptive" when the design appears to be fixed and pre-set. Perhaps "multiscale patching" would be a more accurate descriptor.**
Yes, we agree with the suggestion to rename it to multiscale patching. We initially used the term “adaptive patching” to indicate that the patch length changes across layers. However, since this change is not runtime adaptation, renaming it to multi-scale patching is indeed more accurate.
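As a rough illustration of such a fixed, pre-set multi-scale scheme (the patch-length schedule below is an assumption for illustration, not the paper's exact configuration):

```python
import numpy as np

def to_patches(x, patch_len):
    """Split a 1-D series into non-overlapping patches of length patch_len."""
    n = len(x) // patch_len
    return x[: n * patch_len].reshape(n, patch_len)

# A fixed, pre-set schedule: the patch length shrinks at successive layers,
# so each layer mixes the series at a different (coarser to finer) scale.
x = np.arange(16.0)
multiscale_views = [to_patches(x, p) for p in (8, 4, 2)]
```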
**Q2: Additionally, the writing should more clearly distinguish between the designs used for pretraining and those used solely for finetuning.**
Sure. All the multi-variate and exogenous related components are part of the fine-tuning flow, while the rest of the components are used in both pretraining and finetuning. We will clearly distinguish between the designs used for pretraining and those used solely for finetuning in the revised manuscript.
**Q3: For the full-shot head probing, it is important to include comparisons with state-of-the-art end-to-end methods, as readers are likely interested in such comparisons.**
Due to space constraints, we couldn't add the full end-to-end methods in the main result section. However, these results are available in the appendix (Table 14), where TTM with head probing (HP) consistently outperforms other HP benchmarks and is also superior to, or very competitive with, full end-to-end training of popular TS architectures. This demonstrates that TTM with simple head probing is both highly effective and extremely lightweight, as it avoids the need to update the backbone weights. We will move this appendix table to the main result section in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. It's an interesting paper, I will maintain my rating. | Summary: This paper introduces Tiny Time Mixers (TTM), a new compact pre-trained model for efficient zero/few-shot multivariate time series forecasting. TTM is based on the lightweight TSMixer [1] architecture and incorporates several innovations, which enable effective pre-training on varied dataset resolutions with minimal model capacity. Besides, TTM also employs multi-level modeling to capture channel correlations and fuses exogenous data into the forecasting process during fine-tuning.
The authors comprehensively evaluate TTM on multiple datasets and compare their performance with other state-of-the-art models. The results highlight TTMs' superior accuracy, computational efficiency, and lightweight nature.
Strengths: 1. **Enhanced Zero/Few-shot Forecasting Performance**: TTMs demonstrate a substantial improvement in zero/few-shot forecasting capabilities, outperforming existing benchmarks by 4-40%. This advancement is particularly notable as it shows that smaller models can achieve high accuracy without the extensive computational resources typically required by larger models.
2. **Specialized Techniques for Pre-training and Fine-tuning**: The paper presents novel techniques such as adaptive patching, diverse resolution sampling, and resolution prefix tuning. These innovations enable robust pre-training on heterogeneous datasets and facilitate effective fine-tuning, allowing TTMs to capture channel correlations and integrate exogenous signals, which is critical for accurate multivariate forecasting.
3. **Innovation in Model Architecture**: The paper introduces Tiny Time Mixers (TTMs), a compact pre-trained model architecture tailored for multivariate time series forecasting. Starting from just 1 million parameters, TTMs offer a lightweight and efficient alternative to larger, more computationally intensive models. This addresses the need for fast and resource-friendly forecasting tools.
Weaknesses: 1. The base model TSMixer used for pre-training is not newly proposed.
2. TTM necessitates training different models for different context and forecast lengths, posing new weaknesses compared to Transformer-based pre-training models. While this paper presents a forecast length adaptation strategy to handle the fixed forecast length problem, performance loss still emerges when there is a disparity between the model's forecast length and the actual forecast length.
3. There are not sufficient experiments to prove the scaling law on pre-training models. The paper's results show that TTM performs best with 5M parameters. Will TTM perform better with a larger number of parameters? Showing the tradeoff between performance and cost as parameter size increases will facilitate understanding of the model and help choose the most suitable model size.
4. Some parts of TTM lack ablation studies to sufficiently prove their effectiveness, such as decomposing the exogenous mixer module and decoder channel-mixing.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you reviewer for the constructive feedback. Please find our response below
**Q1: The base model TSMixer used for pre-training is not newly proposed.**
Yes, you are correct that TTM builds on the TSMixer foundation. However, TSMixer doesn't discuss the techniques required to construct a pre-trained model with effective transfer learning capabilities. To achieve this, we have introduced several innovative components on top of TSMixer, such as adaptive patching, diverse resolution sampling, and resolution prefix tuning. These enhancements are crucial for effectively handling pre-training across datasets with varying resolutions, while maintaining minimal model size. Section 4.7 details the positive impact of these techniques through comprehensive ablation studies. With the help of these novel components and the learned pre-trained weights, we outperform TSMixer by 15% in the few-shot setting, as highlighted in Table 4.
**Q2: TTM necessitates to train different models for different context length and forecast length, posing some new weaknesses compared to Transformer-based pre-training models. While this paper presents forecast length adaptation strategy to handle the fixed forecast length problem, performance loss still emerges when there is a disparity between the model's forecast length and the actual forecast length.**
Recent advancements in language and vision have seen a shift towards adopting small, focused foundation models over large, all-purpose ones (SLMs Vs LLMs) considering the ease and effectiveness for production deployments and high task-specific accuracy of small focused models. We have applied a similar strategy to time-series foundation models by designing TTMs that are small, focused, and tailored for specific forecasting contexts. These models are particularly well-suited for production enterprise applications, which often require minimal GPU resources for deployment and enable rapid finetuning without the risk of overfitting, challenges that are more pronounced with massive transformer models.
In addition, pretraining TTM is computationally inexpensive and can be completed in less than a day, notably faster than existing counterparts, which often take several days to weeks. Hence, pre-training multiple TTMs poses no practical challenges and can easily be achieved. We would like to highlight that, as part of this paper, we also plan to release and open-source a few pre-trained TTMs with different forecasting contexts that widely cover most common enterprise use-cases.
In addition, we also support several forecast length adaptation techniques to adapt a pre-trained model to different forecast lengths with extremely minimal accuracy impact. In particular, the pruning technique has been found to be very effective (e.g., only a 0.8% drop in MSE when more than 50% pruning is applied to reduce the forecast length from 720 to 336). Please see Figure 4 for details.
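To illustrate the pruning idea in the simplest possible form (this sketch only truncates a hypothetical linear forecast head; the paper's actual procedure is in Figure 4):

```python
import numpy as np

def prune_forecast_head(W, b, new_horizon):
    """Keep only the first new_horizon outputs of a linear forecast head.
    W: (horizon, d) weight matrix, b: (horizon,) bias."""
    return W[:new_horizon], b[:new_horizon]

rng = np.random.default_rng(0)
d = 8
W, b = rng.normal(size=(720, d)), rng.normal(size=720)
W_pruned, b_pruned = prune_forecast_head(W, b, 336)
z = rng.normal(size=d)
# The pruned head reproduces the first 336 points of the full forecast exactly.
assert np.allclose(W_pruned @ z + b_pruned, (W @ z + b)[:336])
```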
**Q3: There are not sufficient experiments to prove the scaling law on pre-training models. The paper's results show that the TTM performs best with 5M parameters. Will the TTM perform better with a larger number of parameters? Showing the tradeoff between performance and cost along with parameters size increasing will facilitate understanding of the model and help choose the most suitable model size.**
The scaling laws of TTMs are influenced by the amount of pretraining data used. When training the TTM on Monash data alone (~250M samples), model accuracy saturates beyond 1M parameters (this represents the TTM-Quick model referred in the paper).
However, by expanding the pretraining data to 1B samples by integrating Monash with additional data sources, accuracy improvements continued with increased model size, reaching saturation around 5M parameters (the TTM-Advance Model referred to in the paper). Further increasing the model size beyond 5M did not provide additional benefits for the 1B pretraining dataset. We will include these findings in a new section on scaling laws covering various model sizes. Thank you for bringing up this important aspect.
**Q4: Some parts of TTM lack ablation studies to sufficiently prove their effectiveness, such as decomposing the exogenous mixer module and decoder channel-mixing.**
In general, the Exogenous Mixer and Decoder Channel-Mixing components are designed to be used together. The Exogenous Mixer captures channel correlations in the forecasts, while Decoder Channel-Mixing captures these correlations in the past context. Using both components together allows the model to learn channel correlations from both forecasts and past contexts, providing a comprehensive view; running one without the other would lose channel-correlation information. Therefore, in the ablation study, we reported them as a single component.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response to address my concerns. I decide to raise my score. | Summary: The paper introduces Tiny Time Mixers (TTMs), a series of pre-trained models designed for efficient zero/few-shot multivariate time series forecasting. TTMs are built on the lightweight TSMixer architecture and incorporate innovations such as adaptive patching, diverse resolution sampling, and resolution prefix tuning to handle varied dataset resolutions with minimal model capacity. These models outperform existing benchmarks in accuracy while significantly reducing computational requirements. The empirical studies demonstrate superior performance across multiple tasks and datasets.
Strengths: 1. The paper introduces innovative solutions to overcome the limitations of large pre-trained models in time series forecasting, offering techniques that could potentially be adapted to enhance other time series forecasting models as well.
2. The empirical studies are robust, thoroughly assessing the model's accuracy and efficiency across multiple benchmark datasets.
3. The experimental results are impressive, demonstrating enhanced accuracy and substantially lower computational demands compared to existing methods.
4. The ablation studies are thorough, providing a detailed analysis of the impact of different pre-training datasets and the effectiveness of the proposed training techniques.
Weaknesses: I did not see significant weaknesses that need to be addressed in this paper. As a minor suggestion, given that there is another mixer architecture known as TSMixer[1], it would be beneficial to include a footnote or a mention in the appendix to clarify that the TSMixer referenced in this work differs from the other one. The clarification will help avoid potential confusion and ensure the distinctiveness of the models is clearly understood.
[1] Chen, Si-An, et al. "Tsmixer: An all-mlp architecture for time series forecasting." arXiv preprint arXiv:2303.06053 (2023).
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Given that the pre-training datasets may include time series with varying input lengths, how do TTMs manage these differences during training? Additionally, how do TTMs handle time series with input lengths not encountered before during inference?
2. Considering that resolution prefix embeddings are learned discretely, how do TTMs accommodate resolutions that were not seen during training when encountered during inference?
3. What criteria were used to determine the patch size for lagged exogenous features in the TTMs?
4. The training and evaluation of TTMs primarily focus on long time series with hundreds of input steps. Does this focus impact the performance of TTMs when applied to shorter time series, such as those with fewer than 30 time steps?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The limitations regarding the restricted number of downstream tasks are discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you reviewer for the constructive feedback. Please find our response below
**Q1: Adding clarification note on the TSMixer architecture**
Thank you for the feedback. Sure. We will clarify the TSmixer used in this paper as compared to the other one.
**Q2: Given that the pre-training datasets may include time series with varying input lengths, how do TTMs manage these differences during training?**
During pretraining, we slide a window over every pretraining dataset to convert it into several (context, forecast) windows based on the TTM's context and forecast lengths, and train the model on all the windows. When even a single window cannot be created because the training data is extremely small, the dataset is skipped during pretraining. Appendix Table 8 lists the datasets which were NOT skipped as part of this filtering step.
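A minimal sketch of this sliding-window construction (names are illustrative):

```python
import numpy as np

def make_windows(series, context_len, forecast_len, stride=1):
    """Slide over a series to build (context, target) training pairs.
    Returns empty arrays when the series is too short, in which case the
    dataset would be skipped during pretraining."""
    contexts, targets = [], []
    total = context_len + forecast_len
    for start in range(0, len(series) - total + 1, stride):
        contexts.append(series[start : start + context_len])
        targets.append(series[start + context_len : start + total])
    return np.array(contexts), np.array(targets)
```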
**Q3: How do TTMs handle time series with input lengths not encountered before during inference?**
During inference, if the user-provided input length is greater than the TTM-configured length (K), we use the last K time-points as the input context. On the other hand, if the user-provided length is shorter than the configured TTM length, we have two options: if only minor adjustments are needed, we can prepend zeros to virtually extend the length; for major adjustments, it is preferable to quickly pre-train another TTM with the required shorter length for enhanced accuracy. Note that pre-training a shorter-context TTM is computationally inexpensive and can be completed in a matter of a few hours. Even pretraining TTM on very long context lengths can be achieved in less than a day, notably faster than existing counterparts, which often take several days to weeks. As part of this paper, we will release and open-source a few pre-trained TTMs with different forecasting contexts that widely cover most common enterprise use-cases across industries.
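The last-K truncation and zero-prepending described above can be sketched as one small helper (illustrative only):

```python
import numpy as np

def adapt_context(x, K):
    """Fit an arbitrary-length input to a TTM configured for context length K:
    keep the last K points, or left-pad with zeros when the input is shorter."""
    x = np.asarray(x, dtype=float)
    if len(x) >= K:
        return x[-K:]
    return np.concatenate([np.zeros(K - len(x)), x])
```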
**Q4: Considering that resolution prefix embeddings are learned discretely, how do TTMs accommodate resolutions that were not seen during training when encountered during inference?**
If we encounter unseen resolutions, we recommend that the user either use the Out-of-Vocabulary (OOV) token configured as part of the pre-trained model to accommodate these unseen resolutions, or use the TTM model pre-trained without the Resolution Prefix Tuning (RPT) module.
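The OOV fallback amounts to a vocabulary lookup with a reserved index; a tiny sketch (the vocabulary below is hypothetical, the real one is model-specific):

```python
# Hypothetical resolution vocabulary learned during pretraining.
RESOLUTION_VOCAB = {"hourly": 0, "daily": 1, "weekly": 2}
OOV_ID = len(RESOLUTION_VOCAB)  # reserved out-of-vocabulary token

def resolution_id(name):
    """Map a resolution string to its prefix-embedding index, falling back
    to the OOV token for resolutions unseen during pretraining."""
    return RESOLUTION_VOCAB.get(name, OOV_ID)
```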
**Q5: What criteria were used to determine the patch size for lagged exogenous features in the TTMs?**
The patch size for lagged exogenous features is mostly a hyperparameter and depends on the target dataset's characteristics. Note that the Exogenous Mixer block is introduced only during the finetuning phase (on the target data), where these parameters can easily be configured based on the target data characteristics. In general, a patch length of at least 3 is suggested, so that there is at least one forward and one backward lag across all channels for effective inter-channel correlation modeling of exogenous signals.
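To illustrate the forward/backward lag idea (a simplified sketch, not the paper's exogenous mixer), a patch of length 3 corresponds to stacking one backward lag, the current value, and one forward lag per channel:

```python
import numpy as np

def lag_features(x, lags=(-1, 0, 1)):
    """Stack backward (negative) and forward (positive) shifted copies of an
    exogenous channel; zeros fill the positions shifted out of range."""
    n = len(x)
    cols = []
    for lag in lags:
        shifted = np.zeros(n)
        if lag < 0:
            shifted[-lag:] = x[: n + lag]
        elif lag > 0:
            shifted[: n - lag] = x[lag:]
        else:
            shifted = np.asarray(x, dtype=float)
        cols.append(shifted)
    return np.stack(cols, axis=1)
```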
**Q6: The training and evaluation of TTMs primarily focus on long time series with hundreds of input steps. Does this focus impact the performance of TTMs when applied to shorter time series, such as those with fewer than 30 time steps?**
In general, TTM is pre-trained with a higher context length (512, 1024, or 1536), as a longer context naturally gives the model more information to learn from and enables transfer learning. However, TTM also performs well with shorter context lengths. For example, the APP, SER, and CC data in our benchmarks require a shorter context length of 96, and TTM pretrained with context length 96 works well and outperforms other benchmarks by a good margin (Table 6).
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I'm satisfied with the answers and would like to keep my score as a strong acceptance. | Summary: This paper proposes a novel time-series pre-trained model TTM that instead of trying to over-parameterize the model, TTM tries to under-parameterize the model for better generalization ability. The model architecture is simple, straightforward, and easy to understand, coming along with advanced training strategies, such as adaptive patching, diverse resolution sampling, etc.
Empirical evaluations show the benefit of TTM in both state-of-the-art forecasting performance with other supervised, zero-shot, and pre-trained methods, as well as super-efficiency in training.
Strengths: 1. The idea of using an under-parameterized pre-trained model (1M parameters in TTM compared to millions or billions of parameters in other pre-trained time-series models) for better generalization ability seems to be novel and promising.
2. The training strategies are clearly presented, the evaluations are comprehensive, and the results are impressive.
Weaknesses: 1. While the idea of using much fewer model parameters can work as presented in this paper, it is counter-intuitive given that current popular pre-trained models tend to be much larger than the proposed TTM. Beyond the empirical results, the paper does not go deep into the intuition of why this approach generalizes well, making the idea less convincing.
2. Presentation may be improved, currently the presentation of figures and tables is too crowded.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Following the point about weakness, could the author intuitively explain why such few parameters can work well for time-series forecasting tasks?
2. As the model is under-parameterized, my worry about TTM is that TTM can be a model of garbage in and garbage out (take it easy, I am not saying this is bad, I am just very curious). Here is my concern:
Let us make a comparison here: the proposed TTM is like a linear regression, very simple, while other over-parameterized models are like a polynomial/kernel regression. We all know that linear regression has less capability than polynomial regression for interpolation. However, for extrapolation, if the extrapolation data still follows the distribution of the interpolation data, then we can still expect polynomials to work better than linear models. However, in reality, extrapolation data does not follow the interpolation distribution in most time-series cases (the non-stationarity problem), so both linear and polynomial models can make mistakes. Because the linear model is less 'aggressive' in predicting trend changes, due to its limited capacity, it makes fewer wrong predictions than polynomials in many cases.
That is, given the fact that the datasets used in this work, such as ETT, are highly non-stationary, i.e., data distribution can change heavily as time goes. I might want to see two things:
First, I want to see whether TTM can still capture trend information in long-term forecasting. I wonder if the outperformance is because TTM is making 'conservative' and simple predictions that sacrifice trend information due to its under-parameterization relative to other pre-trained models.
Second, I want to see some evaluations on very easy stationary signals, such as sin/cos waves, to see whether TTM can still work better than other pre-trained models.
3. Does the training process involve using any synthetic data, such as simple sin/cos waves, which can be common practices in training other pre-trained models?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Please refer to the Weaknesses and Questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you reviewer for the constructive feedback. Please find our response below:
**Q1: Could the author intuitively explain why such few parameters can work well for time-series forecasting tasks?**
There are three important design choices of TTM that greatly enhance its forecasting accuracy despite its extremely small model capacity:
1. All existing pre-trained models use a very high volume of pretraining data (TimesFM used ~300B and Moirai used 27B), hence they naturally require massive model sizes. However, as shown in Figure 3, Section 4.7, we observe that “limited” pretraining data with “high resolution diversity” greatly helps in time-series model generalization, as opposed to simply increasing the pretraining data size. This is an important observation and finding for the community that resolution diversity in pretraining data is very crucial for time-series FMs. Based on these findings, we proceed with a well-reduced dataset (1B) with high resolution diversity which naturally reduces our model size compared to counterparts needing to pretrain with several hundred billion time-series. We introduce a high diversity in our data via Diverse Resolution Sampling technique (DRS) which our counterparts fail to do.
2. Secondly, we opted for TSMixer-based models instead of transformer-based models, which further reduced the model size drastically. The TSMixer architecture previously established that interleaving simple gated attentions with mixing components across patches, channels, and features greatly enhances forecasting accuracy with very limited model capacity, as the quadratic time complexity of self-attention can be entirely avoided. After TSMixer, several other mixer architectures have been published, reiterating the power of these simple architectures. Thus, avoiding complex transformer architectures further reduced our model size significantly.
3. Finally, we further increased the modeling power of TSMixer without drastically increasing its size by introducing several innovative components, such as adaptive patching/multi-scale patching, diverse resolution sampling, and resolution prefix tuning. These enhancements are crucial for effectively handling large pre-training across datasets with varying resolutions, all while keeping the model capacity very minimal.
Through these three innovative design choices, we managed to keep TTM as small as possible while outperforming state-of-the-art accuracies.
**Q2: Does TTM predict the trends and seasonality well, or is it just making conservative or simple predictions?**
Yes. Thanks for raising this important concern. Kindly refer to the pdf attached in the common rebuttal section where we have shared several zero-shot forecasting samples of TTM on various datasets, showcasing its ability to model trends and complex seasonal patterns (both real-world and synthetic sin-cos as per the request).
**Q3: Does the training process involve using any synthetic data?**
In the current pre-trained models, we do not use any synthetic data. However, we augment more data via Diverse Resolution Sampling (DRS) technique to create different resolutions of existing datasets, which greatly enhances the model performance (as shown in Figure 3, Section 4.7).
**Q4: Presentation may be improved, currently the presentation of figures and tables is too crowded.**
Thanks for the feedback. We will address this in the final manuscript.
---
Rebuttal 2:
Title: Response to the rebuttal
Comment: I appreciate the authors’ additional results addressing my concern.
The showcase plots generally meet my expectations: while the model accurately predicts the overall trend, it misses most residual details. That is, it makes conservative predictions.
Nevertheless, I am impressed by the model's performance in capturing the overall trend (seasonal trend as said by the authors) given that it uses only a few parameters.
Based on the provided results, I have adjusted the scores accordingly. | Rebuttal 1:
Rebuttal: Thank you reviewers for your time, effort, and valuable feedback on our paper. We have clarified your queries in the respective sections. We also extend our gratitude to the Area Chairs and all the PC members for investing their valuable time throughout the review process.
**Short summary:** In 2024, the landscape of time-series forecasting has been dominated by large pre-trained models. Large Transformer-based pre-trained models like TimesFM, Moirai, and Moment, published at ICML 2024, have garnered significant attention within the time-series community. However, these models are massive, often requiring several hundred million parameters, and they face challenges in supporting faster runtime, quick fine-tuning, and integrating exogenous data. In contrast, our study introduces several innovative design modifications and enhancements to traditional model architectures, resulting in an exceptionally small model, starting from just 1 million parameters, that outperforms the existing state-of-the-art models by a notable margin. Moreover, our model offers several other practical advantages such as faster inference, fine-tuning, explainability, exogenous data infusion, and compatibility with CPU deployments—features that are highly valued in industrial forecasting but often lacking in current SOTA models. TTM seeds the first effort towards building light-weight pre-trained models for time-series forecasting, and we believe that our model will inspire numerous exciting research endeavours in this area.
We have clarified all the reviewer’s queries below in the respective sections. Thank you.
Pdf: /pdf/aa3bfd511f59bca1cbdd2d8c2166fc028f0c5ea7.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Training Data Attribution via Approximate Unrolling | Accept (poster) | Summary: The paper proposes a training data attribution (TDA) method based on implicit differentiation and unrolling. The goal is to estimate the effect of removing (or changing the weight) of a training example on the final (not necessarily optimal) parameters, accounting for training details influencing the trajectory of the parameters during training.
The method approximates the change in the final parameters $\boldsymbol\theta_T$ ($T$ is the number of optimization iterations) based on approximating the effect of re-weighting a training example with weight $1+\epsilon$. This requires computation of $\partial \boldsymbol\theta_T/\partial\epsilon$, which involves products of Jacobians $\partial\boldsymbol\theta_{k}/\partial\boldsymbol\theta_{k-1}$ from all training steps $k$, each computed from a Hessian of the loss. To reduce computational requirements, the authors apply different approximations while accounting for all parts of the training trajectory. The resulting algorithm, SOURCE, requires 2–18× more computation (Hessian estimations) than influence functions.
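As a hypothetical sketch of the quantity being approximated (ours, not the paper's SOURCE algorithm; all names are illustrative): for gradient descent on a quadratic objective, $\partial\boldsymbol\theta_T/\partial\epsilon$ can be accumulated alongside training with the recursion $v_{k+1} = (I - \eta \nabla^2\mathcal{J})\,v_k - \frac{\eta}{N}\nabla\mathcal{L}(\boldsymbol\theta_k, z_m)$ and checked against a finite difference of re-weighted training:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, eta, T, m = 30, 4, 0.05, 200, 2   # m indexes the re-weighted example

X = rng.standard_normal((N, d))
y = rng.standard_normal(N)

def train(eps):
    """Gradient descent on the mean-squared loss, with example m re-weighted by (1 + eps)."""
    theta = np.zeros(d)
    for _ in range(T):
        g = X.T @ (X @ theta - y) / N + (eps / N) * X[m] * (X[m] @ theta - y[m])
        theta = theta - eta * g
    return theta

# Unrolled derivative d(theta_T)/d(eps) at eps = 0, accumulated along the
# unperturbed trajectory: v_{k+1} = (I - eta * H) v_k - (eta / N) * grad_m(theta_k)
H = X.T @ X / N                          # Hessian of the quadratic objective
theta, v = np.zeros(d), np.zeros(d)
for _ in range(T):
    grad_m = X[m] * (X[m] @ theta - y[m])
    v = (np.eye(d) - eta * H) @ v - (eta / N) * grad_m
    theta = theta - eta * (X.T @ (X @ theta - y) / N)

# Finite-difference check against actually re-training with the perturbed weight
eps = 1e-5
fd = (train(eps) - train(-eps)) / (2 * eps)
assert np.allclose(v, fd, atol=1e-7)
```

Per the paper's summary, SOURCE's segmentation replaces the per-step Hessians appearing in such products with segment-wise stationary approximations, avoiding the need to store every intermediate checkpoint.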
Experiments show that SOURCE outperforms other TDA methods in counterfactual evaluations on linear datamodeling score (LDS) (Park et al., 2023) and on standard subset removal counterfactual evaluation.
Strengths: - The paper is very well written and structured.
- The paper is technically sound (as far as I have checked) and presents a valuable analysis and novel insights.
- The proposed training data attribution method, SOURCE, outperforms other TDA methods on counterfactual evaluations on different tasks across different data modalities.
- The SOURCE code will be published (if I interpret L692 correctly).
Weaknesses: There are a few easy-to-fix errors:
- L6: missing comma before but.
- $\mathcal B_k$ is not mentioned before L124.
- L206-207: "approaches to".
- Appendix G is not referenced in the main text.
- L1046: "the SOURCE".
- L1051: $C = \{3, 6\}$ -> $C \in \{3, 6\}$ (likewise for $L$).
- (The curly brackets in Fig. 3 have a slight distortion.)
Technical Quality: 4
Clarity: 4
Questions for Authors: **Questions:**
1. Could you point more precisely to the Implicit Function Theorem and its proof?
2. Eq. (11): Could something more be said about the properties of the approximation?
**Suggestions:**
- L105: "optimal solution" -> "optimal solution when $\epsilon=0$".
- Fig. 2: "Each contour" -> "Each set of concentric contours"?
- FastSOURCE could be explained more clearly, with references to equations.
- An overview of computation time for different TDA and evaluation methods would be interesting.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitations seem to be addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive assessment of our paper, acknowledging that it is well-written, technically sound, and presents novel insights. We are grateful for your attention to detail in identifying errors and typos, which we will address in the revised manuscript.
> **Could you point more precisely to the Implicit Function Theorem and its proof?**
For a detailed derivation of influence functions using the Implicit Function Theorem, we direct the reviewer to [1] (Appendix B.1). With the assumptions outlined in our paper (the parameters are optimized to convergence and the objective has a unique solution), we have:
\begin{align}
\mathbf{0} = \nabla \mathcal{J}(\theta^\star) + \frac{\epsilon_0}{N} \nabla \mathcal{L}(\theta^\star, z_m),
\end{align}
where $\epsilon_0 = 0$. By the IFT, there exists a unique continuously differentiable function $r$ defined in the neighborhood of $\epsilon_0$ such that:
\begin{align}
\mathbf{0} = \nabla \mathcal{J}(r(\epsilon)) + \frac{\epsilon}{N} \nabla \mathcal{L}(r(\epsilon), z_m).
\end{align}
Taking the derivative with respect to $\epsilon$ at 0, we obtain:
\begin{align}
\mathbf{0} = \nabla^2 \mathcal{J}(\theta^\star) \frac{\mathrm{d} r}{\mathrm{d} \epsilon} \Big\vert_{\epsilon=0} + \frac{1}{N} \nabla \mathcal{L}(\theta^\star, z_m).
\end{align}
Note that $r(0) = \theta^\star$. Rearranging the terms yields the expression in Equation (4). We will explicitly mention this in the updated manuscript.
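As an illustrative numerical check (ours, not part of the paper; variable names are hypothetical), the resulting identity $\frac{\mathrm{d} r}{\mathrm{d} \epsilon}\big\vert_{\epsilon=0} = -\frac{1}{N}\, \nabla^2 \mathcal{J}(\theta^\star)^{-1} \nabla \mathcal{L}(\theta^\star, z_m)$ can be verified on a ridge-regression objective, where the re-weighted minimizer is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, lam, m = 50, 5, 0.1, 3            # m indexes the re-weighted example z_m
X = rng.standard_normal((N, d))
y = rng.standard_normal(N)

def solve(eps):
    """Minimizer of J(theta) + (eps/N) * L(theta, z_m), where
    J(theta) = (1/2N)||X theta - y||^2 + (lam/2)||theta||^2."""
    H = X.T @ X / N + lam * np.eye(d) + (eps / N) * np.outer(X[m], X[m])
    b = X.T @ y / N + (eps / N) * y[m] * X[m]
    return np.linalg.solve(H, b)

theta_star = solve(0.0)

# IFT prediction: dr/deps = -(1/N) H^{-1} grad L(theta*, z_m)
H = X.T @ X / N + lam * np.eye(d)
grad_m = X[m] * (X[m] @ theta_star - y[m])
ift = -np.linalg.solve(H, grad_m) / N

# Central finite-difference estimate of the same derivative
eps = 1e-5
fd = (solve(eps) - solve(-eps)) / (2 * eps)
assert np.allclose(ift, fd, atol=1e-7)
```

Here the rebuttal's assumptions (unique optimum, exact convergence) hold by construction, which is exactly the regime where the influence-function expression is exact.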
> **Eq. (11): Could something more be said about the properties of the approximation?**
In Equation (11), we approximate the Jacobians of different segments as statistically independent. There are two sources of randomness (as described in our footnote): (1) mini-batch sampling, which contributes to independence, and (2) autocorrelations in the optimization step, which induce correlations between optimization steps. Our approximation neglects the latter correlation.
> **Re: Suggestions**
We appreciate your suggestions and will incorporate them in the next revision of our manuscript.
[1] Bae, J., Ng, N., Lo, A., Ghassemi, M., & Grosse, R. B. (2022). If influence functions are the answer, then what is the question?. Advances in Neural Information Processing Systems, 35, 17953-17967.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: Thank you for the clarifications!
Taking everything into account, I think that I will keep my positive evaluation of the paper. | Summary: This paper proposes a new TDA method (SOURCE) that combines implicit-differentiation-based methods and unrolling-based approaches. The new method extends SGD-influence, inheriting its advantage of supporting multi-stage training pipelines while reducing the computational cost of storing and computing the Hessian matrix for each optimization step. The paper also carries out several experiments showing that SOURCE outperforms existing TDA methods under several training settings.
Strengths: - The paper is well presented.
○ The math derivation is self-contained: Section 3.1 restates SGD-influence and extends it to all iterations, and Section 3.2 covers segmentation and approximation, which is easy to follow.
○ Some figures (especially figure 3) help me to understand the algorithm in a more intuitive way.
- The contextual information of this paper in the whole TDA community is stated clearly.
○ It's stated clearly about the relationship between IF/SGD-Influence/SOURCE.
- The experiments for different training settings are convincing.
○ This includes 4 normal settings, 2 non-converging settings, and 2 multi-stage training settings.
- The segmentation trick is quite intuitive and reasonable.
Weaknesses: - First of all, I am not quite convinced by, and cannot clearly identify, the difference between the segmentation proposed in this paper and the one in TracIn (and also using different checkpoints from one training run for IF/TRAK, referred to in the paper as implicit-differentiation-based methods).
○ Furthermore on this point, it would be good if the experiments for IF/TRAK could also include a version which ensembles (natively for IF and term-wise for TRAK) over multiple checkpoints for each independently trained model.
○ TRAK also included an experiment showing that it can use non-converged checkpoints to obtain a comparable LDS; this could be mentioned in the paper as well.
- Not quite sure how much performance (accuracy) degradation is introduced by the segmentation, since there is no direct comparison with SGD-influence.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. **I am willing to make the rating higher** if the first weakness is resolved (intuitively or mathematically will be enough) since I understand add new experiment might be time-consuming.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I believe there is no negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly thank the reviewer for their thorough evaluation and insightful comments. We are pleased that our paper's presentation and experimental results were well-received. We will address your concerns and questions below.
> **I am not quite convinced and identify the difference between the segmentation proposed in this paper and the one in TracIn.**
The key distinction lies in the quantities that TracIn and Source (or unrolled differentiation) approximate. As detailed in Section 4, TracIn estimates the importance of a training data point by aggregating the total change in the query's measurable quantity with gradient updates from this data point throughout training:
\begin{align}
\tau_{\text{TracIn}} (z_q, z_m, \mathcal{D}) = \sum_{i \in \mathcal{C}} \eta_i \nabla f(\theta_i, z_q)^\top \nabla \mathcal{L}(\theta_i, z_m),
\end{align}
where $\mathcal{C}$ denotes the set of steps the data point $z_m$ appeared during training, and $\eta_i$ and $\theta_i$ denote the corresponding learning rate and parameters, respectively. Even with a zero Hessian assumption (as done in Hydra, which we compare against in Table 1), the unrolling formulation gives:
\begin{align}
\tau_{\text{Unrolling}} (z_q, z_m, \mathcal{D}) = \sum_{i \in \mathcal{C}} \eta_i \nabla f(\theta^s, z_q)^\top \nabla \mathcal{L}(\theta_i, z_m),
\end{align}
where $\theta^s$ is the final model parameter, and the measurement gradient is always computed on this parameter. In contrast to TracIn, observe that unrolling aims to approximate the change in the final measurable quantity due to the perturbation of the data point's weight. Schioppa et al. [1] noted that using more intermediate checkpoints for TracIn can even hurt the performance in counterfactual estimation. Our work is motivated from the unrolling perspective, and hence, approximates a different quantity from TracIn. If TracInCP's analog of Hydra is considered (which has not been considered before, to the best of our knowledge), it can be viewed as applying our stationary approximation only to the gradient. However, using Hessian is crucial for achieving good LDS performance and better TDA performance in general, as highlighted in our work and recent studies [2, 3].
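To make the contrast concrete, here is a toy sketch (ours, not the paper's implementation; all names are illustrative) of both estimators on full-batch gradient descent for logistic regression. The only difference is the parameter vector at which the query gradient is evaluated: the running iterate $\theta_i$ for TracIn versus the final $\theta^s$ for unrolling under a zero-Hessian assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, eta, T = 20, 3, 0.1, 100
m, z_q = 0, 5                           # training point to attribute; query point

X = rng.standard_normal((N, d))
y = rng.integers(0, 2, N).astype(float)

def grad_point(theta, i):
    # Gradient of the logistic loss on example i
    p = 1.0 / (1.0 + np.exp(-X[i] @ theta))
    return (p - y[i]) * X[i]

# Record the full-batch gradient-descent trajectory theta_0, ..., theta_{T-1}
thetas = [np.zeros(d)]
for _ in range(T):
    g = np.mean([grad_point(thetas[-1], i) for i in range(N)], axis=0)
    thetas.append(thetas[-1] - eta * g)
final = thetas[-1]                      # theta^s

# TracIn: query gradient evaluated at each intermediate theta_i
tracin = sum(eta * grad_point(th, z_q) @ grad_point(th, m) for th in thetas[:-1])
# Unrolling (zero Hessian): query gradient always evaluated at the final parameters
unroll = sum(eta * grad_point(final, z_q) @ grad_point(th, m) for th in thetas[:-1])
```

With full-batch descent every step belongs to $\mathcal{C}$; here the measurement $f$ is taken to be the training loss on the query point.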
> **Using different checkpoints in one training process for IF/TRAK, as stated in the paper as implicit-differentiation-based methods.**
To the best of our knowledge, the ensembling over checkpoints during the training process was primarily an empirical observation [2] rather than being theoretically motivated. From the unrolled differentiation perspective, our derivations recommend averaging the Hessian and gradient within a segment. This approach contrasts with the direct ensemble of multiple checkpoints. More importantly, the previous ensembling scheme does not provide a natural treatment for models trained in multiple stages or those that have not converged. Addressing these limitations of influence functions is the core contribution of our work. We acknowledge some additional caveats. Trak generally benefits from ensembling (even using the same checkpoint) due to the projection's randomness. Moreover, using slightly non-converged parameters can also be helpful in cases where the gradients are too small at the final checkpoint. We appreciate the reviewer's observation and will mention this in our next manuscript revision.
In response to this feedback, we conducted additional experiments for different checkpoints in one training process for IF and TRAK. We considered two scenarios: (1) FashionMNIST and (2) FashionMNIST-C (not converged model). Similar to the findings from Park et al. [2], we observe an increase in the LDS when using multiple checkpoints from a single training run. However, Source still obtains higher LDS in general, and the discrepancy is larger, especially in settings where the model has not fully converged. Note that the relatively smaller improvements in FashionMNIST, where the models are trained with a fixed dataset for a sufficient number of iterations, are expected because of the close connections with influence functions, as described in line 209.
**FashionMNIST:**
| # Model / TDA Method | IF | IF Ensemble | Trak | Trak Ensemble | Source |
|---|---|---|---|---|---|
| 1 | 0.29 | 0.37 | 0.08 | 0.14 | **0.46** |
| 10 | 0.45 | 0.48 | 0.26 | 0.35 | **0.52** |
**FashionMNIST-C:**
| # Model / TDA Method | IF | IF Ensemble | Trak | Trak Ensemble | Source |
|---|---|---|---|---|---|
| 1 | 0.34 | 0.36 | 0.06 | 0.08 | **0.63** |
| 10 | 0.40 | 0.41 | 0.12 | 0.17 | **0.64** |
> **Not quite sure how much performance (accuracy) degradation is introduced by the segmentation, since there is no direct comparison with SGD-influence.**
SGD-Influence poses significant practical challenges: (1) it necessitates storing all intermediate checkpoints throughout the optimization process, (2) the method requires a series of Hessian-vector products (HVPs) and many Monte Carlo samples from multiple training runs, as described in Section 3.1. Unlike other gradient-based approaches, these computations cannot be parallelized efficiently; this process must be repeated for each training data point. (3) It is worth noting that the largest model considered in SGD-Influence was smaller than the MLP (MNIST) models used in our study. These factors make SGD-Influence computationally prohibitive for models and datasets we considered, which is where our method, SOURCE, shows its strengths.
[1] Schioppa, A., …, Zablotskaia, P. (2024). Theoretical and practical perspectives on what influence functions do.
[2] Park, S. …, Madry, A. (2023). Trak: Attributing model behavior at scale.
[3] Deng, J., …, Ma, J. (2024). Efficient Ensembles Improve Training Data Attribution.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. I will keep my positive score. | Summary: The article discusses the limitations of existing training data attribution (TDA) methods, which aim to estimate how a model's behavior would change if specific data points were removed from the training set. The authors propose a new method called Source, which combines the benefits of implicit-differentiation-based and unrolling-based approaches. Source is computationally efficient and suitable for cases where implicit-differentiation-based methods struggle, such as non-converged models and multi-stage training pipelines. Empirical results show that Source outperforms existing TDA techniques, particularly in counterfactual prediction, where implicit-differentiation-based approaches fall short.
Strengths: 1. The writing in this article is very clear and excellent, making it easy to read and follow.
2. The design of this article avoids the assumption made by Koh et al.'s estimator that the loss is a convex function.
Weaknesses: See the question part.
Technical Quality: 3
Clarity: 4
Questions for Authors: Regarding this article, as a practitioner in the field, I would like the authors to answer a series of questions.
1. We all hope that Data-attribution/Influence Analysis can find suitable application areas, such as the work done by Grosse et al. in explaining the outputs of LLMs. However, currently, we have not seen Data-attribution/Influence Analysis truly solve real problems, especially for large models such as Stable Diffusion/LLMs/LVMs.
2. If the time and computational cost required for attribution is much higher than that for training (as it must compute gradients sample-by-sample, and even over many checkpoints), is there any practical application value that justifies such high complexity?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: 1. Error Bound: This article lacks an analysis of the error bound for the estimator.
2. Regarding the contribution: The claim "it allows the attribution of data points at different stages of training" is not a unique contribution of this article, as some previous works [1,2,3] have achieved this without assuming that the checkpoint being attributed is necessarily at a point where the gradient is zero. Therefore, I do not believe that this claim can be considered a significant contribution of this article.
3. Experiments: This article lacks validation on significant datasets (even without experiments on relatively old datasets like ImageNet) and instead tests on very small toy datasets.
[1]. S Hara, et al. Data Cleansing for Models Trained with SGD. NeurIPS.
[2]. G Pruthi, et al. Estimating Training Data Influence by Tracing Gradient Descent. NeurIPS.
[3]. H Tan, et al. Data pruning via moving-one-sample-out. NeurIPS.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough evaluation and insightful comments. Our research addresses limitations of influence functions and proposes a practical and novel algorithm to overcome these challenges. Grosse et al. [8], which the reviewer cited as an example of a TDA application, highlight a limitation of their approach:
*"A second limitation is that we focus on pretrained models. Practical usefulness and safety of conversational AI assistants depend crucially on fine-tuning …. Extending influence functions … to the combination of pretraining and fine-tuning is an important avenue to explore."*
Our work takes a step towards addressing this challenge. While we appreciate the reviewer's critique, we believe some concerns may not fully align with our research's core contributions. The following sections provide detailed responses to address these concerns and clarify our work’s significance. We welcome any further discussion during the reviewer-author period.
> **Have not seen TDA truly solve real problems…**
Training examples undeniably play a significant role in shaping model behavior. [1] and [2] demonstrated that removing less than 1% of the CIFAR-10 and ImageNet training dataset (identified by TDA) can lead to misclassifying a substantial fraction of test images. Several works show the importance of training data for LLMs [3, 4] and diffusion models [5].
TDA techniques have already shown promise in several areas:
1. **Data Curation:** TDA methods have proven helpful in increasing compute efficiency [6] and improving subgroup robustness [7]. Notably, [6] demonstrated that data curation with TDA techniques provides a 2x compute multiplier on standard LM problems compared to baseline techniques.
2. **Understanding Model:** [8] employed influence functions to study LLM generalization patterns. Their analysis uncovered surprising sensitivities to word ordering. Berglund et al. [9] later empirically verified this phenomenon, which they termed the "Reversal Curse."
3. **Economic Models:** Building a royalty model to compensate data providers properly is an important economic and societal question. Using TDA techniques, [10] and [11] developed initial prototypes for such royalty models.
TDA remains an active and valuable area of research, as evidenced by these applications and empirical observations on the importance of training data. While TDA may not yet be fully integrated into practical pipelines (at least publicly), *we believe this does not diminish the importance of researching potential failure modes in existing TDA methods and developing more accurate techniques, such as our work*. Our research contributes to the ongoing effort to make TDA methods more reliable and applicable to real-world scenarios.
> **Is TDA worth it?**
Naive gradient-based TDA methods do require gradient computations for all data points, which is as expensive as one-epoch training. We would like to clarify several points. First, in our experiments, attributing a batch of query data points is much faster than re-training a single model, as our models were trained over multiple epochs. Secondly, the one-by-one gradient computation described by the reviewer can be significantly optimized. Modern libraries such as Functorch or Opacus allow for efficient per-sample gradient computation. Lastly, some approaches, like [10], achieved computation times significantly faster than one-epoch training. Source is designed with flexibility in mind and can readily incorporate these efficiency improvements (Appendix E.3).
On a separate note, we have recently verified that our implementation runs on the Llama-3-8B model (OpenWebText) using academic resources. We will release this implementation.
> **Lacks error bound for the estimator.**
Investigating the error bound under weak/practical assumptions presents an interesting direction for future research. Note that state-of-the-art techniques, such as Trak [12], EKFAC [8], and TracIn do not provide error bounds. We believe that the empirical results presented in our work provide convincing evidence of Source's advantages. Our evaluation uses state-of-the-art metrics [12] and covers multiple data modalities, demonstrating the efficacy of our approach across diverse scenarios.
> **The claim “support multi-stage” is not a unique contribution ….**
Our work does not claim to be the first to perform unrolled differentiation, as explained in the preceding paragraph. Rather, we present this capability as a key advantage of Source, particularly in comparison to influence functions. As other reviewers have noted, our contribution lies in the novel techniques we have developed to formulate and solve this problem. However, our work is the first to construct a set of experiments to verify the effectiveness of TDA techniques in the multi-stage training setup. Here, Source demonstrated large improvements: (1) over 5x improvement in LDS and (2) over 15x improvement in terms of subset removal evaluation, when compared to TracIn.
> **Lacks large datasets**
The primary challenge lies in evaluation. For context, our CIFAR-10 & ResNet-9 pipeline required training the network 2,000 times for LDS computation and up to 1,800 times per TDA technique in the subset removal evaluation. Developing more efficient yet reliable evaluation metrics remains an open problem. Despite these constraints, we conducted comprehensive experiments across a diverse range of tasks, datasets, and architectures (up to 110M parameters): (1) regression (Concrete, Parkinsons), (2) image classification (MNIST, FashionMNIST, CIFAR-10), (3) text classification (QNLI, SST2, RTE), (4) language modeling (WikiText-2), and (5) continual learning (RotatedMNIST and PACS). We also examined linear and logistic regression problems in Appendix G.4. Our quantitative experimental scope surpasses many previous and recent publications from academic labs [10, 11, 13] in terms of variety in data types, tasks, and model architectures.
---
Rebuttal 2:
Title: References
Comment: [1] Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., & Madry, A. (2022). Datamodels: Predicting predictions from training data. arXiv preprint arXiv:2202.00622.
[2] Singla, V., Sandoval-Segura, P., Goldblum, M., Geiping, J., & Goldstein, T. (2023). A Simple and Efficient Baseline for Data Attribution on Images. arXiv preprint arXiv:2311.03386.
[3] Lee, K., Ippolito, D., Nystrom, A., Zhang, C., Eck, D., Callison-Burch, C., & Carlini, N. (2021). Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499.
[4] Longpre, S., Yauney, G., Reif, E., Lee, K., Roberts, A., Zoph, B., ... & Ippolito, D. (2023). A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity. arXiv preprint arXiv:2305.13169.
[5] Carlini, N., Hayes, J., Nasr, M., Jagielski, M., Sehwag, V., Tramer, F., ... & Wallace, E. (2023). Extracting training data from diffusion models. In 32nd USENIX Security Symposium (USENIX Security 23) (pp. 5253-5270).
[6] Engstrom, L., Feldmann, A., & Madry, A. (2024). Dsdm: Model-aware dataset selection with datamodels. arXiv preprint arXiv:2401.12926.
[7] Jain, S., Hamidieh, K., Georgiev, K., Ilyas, A., Ghassemi, M., & Madry, A. (2024). Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection. arXiv preprint arXiv:2406.16846.
[8] Grosse, R., Bae, J., Anil, C., Elhage, N., Tamkin, A., Tajdini, A., ... & Bowman, S. R. (2023). Studying large language model generalization with influence functions. arXiv preprint arXiv:2308.03296.
[9] Berglund, L., Tong, M., Kaufmann, M., Balesni, M., Stickland, A. C., Korbak, T., & Evans, O. (2023). The reversal curse: LLMs trained on" a is b" fail to learn" b is a". arXiv preprint arXiv:2309.12288.
[10] Choe, S. K., Ahn, H., Bae, J., Zhao, K., Kang, M., Chung, Y., ... & Xing, E. (2024). What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions. arXiv preprint arXiv:2405.13954.
[11] Deng, J., & Ma, J. (2023). Computational copyright: Towards a royalty model for ai music generation platforms. arXiv preprint arXiv:2312.06646.
[12] Park, S. M., Georgiev, K., Ilyas, A., Leclerc, G., & Madry, A. (2023). Trak: Attributing model behavior at scale. arXiv preprint arXiv:2303.14186.
[13] Deng, J., Li, T. W., Zhang, S., & Ma, J. (2024). Efficient Ensembles Improve Training Data Attribution. arXiv preprint arXiv:2405.17293.
---
Rebuttal Comment 2.1:
Comment: 1. Why don't you conduct experiments on LLMs/VLMs/SD models, or even practical experiments on ImageNet, instead of so many toy experiments?
2. This work lacks theoretical justification.
3. **First, in our experiments, attributing a batch of query data points is much faster than re-training a single model.** So, the query data batch is a random batch?
---
Rebuttal 3:
Comment: We appreciate the reviewer's follow-up questions. While we addressed questions 1 and 2 in our previous response, we are happy to provide further clarification. Please see our responses to your questions below.
> **Why don't you conduct experiments on LLMs/VLMs/SD models or even practical experiments on ImageNet?**
Replicating a setup similar to Grosse et al. (2023) [8] (e.g., LLMs) requires significant computational resources beyond most academic labs' reach. For instance, an experiment on a 13B parameter model, searching over 1M training sequences, would incur costs in the range of tens of thousands of dollars. Given sufficient resources, running more extensive experiments would be possible. As mentioned in our previous response, we recently confirmed that our implementation runs on the Llama3-8B model using academic resources.
It is also worth noting that the primary issue lies in evaluation rather than performing data attribution on large models. Obtaining ground truth for counterfactual estimation is a computationally expensive process. It requires retraining the model up to 2,000 times for LDS and 10,000 times for the subset removal evaluation presented in Figure 5. While repeating these experiments on larger-scale datasets is possible, doing so would incur substantial costs that are difficult to conduct for most academic labs. Our paper demonstrates the effectiveness of our approach in various settings, including models with up to 110M parameters (e.g., BERT & QNLI dataset), which we believe are not just small toy experiments.
> **This work lacks theoretical justification.**
Section 3 of our paper provides a theoretical foundation, introduces a set of approximations needed to obtain practical algorithms, discusses their implications, and formally derives a tractable and efficient algorithm. To support our theoretical work, we offer empirical evidence demonstrating that our algorithm performs better than previous TDA techniques in classical (linear and logistic regression) and modern (neural networks) settings. Based on the previous review, we believe the reviewer argues that our work lacks theoretical justification solely because the error-bound analysis is not provided. It is important to note that state-of-the-art methods in the field, including EKFAC [8] and Trak [12], do not provide error-bound analyses yet are still theoretically justified. Establishing error-bound analysis under realistic assumptions, which are challenging in neural network settings, is an exciting direction for future research.
> **Is the query data batch a random batch?**
The query batch can consist of data points from the test dataset (e.g., 2,000 test data points of interest), which we use to compute influence scores across all training data points. For generative models, it could also consist of the model's output [10]. We are not entirely clear on what the reviewer meant by "random batch," as it could be just any batch of data points on which one would like to perform TDA. For context, our argument was that the reviewer's statement, “*the time and computational cost required for attribution is much higher than that for training*,” may not necessarily be true. Approximating leave-one-out (LOO) by retraining the model, as the reviewer described, requires multiple retraining for *each* training data point, which is significantly more computationally expensive.
The discussion period ends in a couple of hours, but we would be happy to provide further clarification within the given timeframe if needed. Thank you again for your time and expertise. | Summary: This paper introduces SOURCE, a new data attribution method designed primarily for deep neural networks; more generally, SOURCE is suited for any model class optimized with gradient-based methods. The authors motivate SOURCE as an approximation to gradient unrolling, i.e., differentiating through the entire training process. They introduce a number of assumptions and simplifications that allow them to arrive at a formulation resembling existing influence-function-based data attribution methods.
The main assumption the authors employ is that the model Hessian (of the loss w.r.t. the model parameters) remains constant throughout segments of the training process (they set the number of segments $L$ as a hyperparameter). Additionally, the authors use the EK-FAC heuristic approximation for the Hessian [1] to reduce the computational cost of SOURCE. The same approximation has been employed in a previous data attribution method [2], which the authors use as a baseline.
In the experimental section of the paper, the authors evaluate SOURCE, alongside an extensive list of baselines, on numerous (albeit rather small) datasets. SOURCE outperforms existing methods on the vast majority of benchmarks on standard data attribution evaluation metrics.
[1] T. George, C. Laurent, X. Bouthillier, N. Ballas, and P. Vincent. Fast approximate natural gradient descent in a kronecker factored eigenbasis. Advances in Neural Information Processing Systems, 31, 2018.
[2] R. Grosse, J. Bae, C. Anil, N. Elhage, A. Tamkin, A. Tajdini, B. Steiner, D. Li, E. Durmus, E. Perez, E. Hubinger, K. Lukošiūtė, K. Nguyen, N. Joseph, S. McCandlish, J. Kaplan, and S. R. Bowman. Studying large language model generalization with influence functions, 2023.
Strengths: - on the selected datasets (Concrete, FashionMNIST, CIFAR-10, RTE, PACS), SOURCE performs consistently well, largely outperforming existing methods
- the paper is clearly written: the authors contextualize their method well within the existing literature and succinctly describe the novel ideas resulting in their method SOURCE
- the direction of adapting data attribution methods towards realistic (for deep learning) scenarios like lack of convergence and multi-stage training procedures is very timely and important; the paper does a great job at balancing between addressing these challenges and maintaining computational efficiency
- the resulting method, SOURCE, is easy to understand and is rather "obvious" (in hindsight, of course)
Weaknesses: - compared to [1], SOURCE requires a significantly larger amount of compute (as the number $L$ of training trajectory segments increases). At the same time, I would label the increase in performance from [1] to SOURCE as only "moderate". Given, in addition to the previous point, the conceptual similarity between the two methods, the technical contribution of this paper seems to be on the smaller side. Yet, I still appreciate the connection with unrolling that the authors present.
- if possible, it would be good to see (at least a qualitative) evaluation of SOURCE on a larger scale setting, especially since the mechanically very similar method [1] is employed on larger-scale language models; this said, I understand that the rebuttal period is short, and would not base my score change on this result.
[1] R. Grosse, J. Bae, C. Anil, N. Elhage, A. Tamkin, A. Tajdini, B. Steiner, D. Li, E. Durmus, E. Perez, E. Hubinger, K. Lukošiūtė, K. Nguyen, N. Joseph, S. McCandlish, J. Kaplan, and S. R. Bowman. Studying large language model generalization with influence functions, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - can the authors provide an estimate of the computational cost of each method (e.g. wall-clock time) used in Figure 4? My understanding is that SOURCE uses up to 6x the amount of compute as the reported baselines (line 283: "TRACIN and SOURCE use at most 6 intermediate checkpoints saved throughout training."). More generally, can the authors provide an equivalent to Figure 4 where compute, rather than the number of checkpoints, is equalized?
- how does the performance (e.g., as measured by LDS) of SOURCE scale as 1) the number of segments $L$ increases, and 2) the number of fully-retrained checkpoints increases? In particular, I am curious where LDS asymptotes, and whether the asymptote value for SOURCE is higher than the one for [1, 2]; if that is indeed the case, this would, in my opinion, significantly strengthen the argument that SOURCE has a smaller "modeling" error compared to previous approaches.
- my understanding is that the authors of [1] also propose using intermediate checkpoints along the training trajectory (without motivating this choice as an approximation to gradient unrolling); could the authors comment on the differences between the two estimators (excluding, obviously, the choice of Hessian approximation---EK-FAC and random projections, respectively)? Additionally, can the authors compare the performance of TRAK when intermediate checkpoints are used?
[1] S. M. Park, K. Georgiev, A. Ilyas, G. Leclerc, and A. Madry. TRAK: Attributing model behavior at scale. In International Conference on Machine Learning, pages 27074–27113. PMLR, 2023.
[2] R. Grosse, J. Bae, C. Anil, N. Elhage, A. Tamkin, A. Tajdini, B. Steiner, D. Li, E. Durmus, E. Perez, E. Hubinger, K. Lukošiūtė, K. Nguyen, N. Joseph, S. McCandlish, J. Kaplan, and S. R. Bowman. Studying large language model generalization with influence functions, 2023.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comprehensive and insightful evaluation of our work. We particularly appreciate the recognition of Source’s consistent performance across datasets, the clarity of our paper, and our efforts to balance addressing challenges in realistic scenarios while maintaining computational efficiency.
> **The increase in performance as only moderate.**
The moderate improvements in the first set of experiments, where models are trained to near convergence with a fixed dataset, are expected because of the close connections with influence functions, as described in line 209. The primary advantage of our approach becomes evident when the assumptions of implicit-differentiation-based methods fall short. We demonstrate this in two scenarios: models trained for only a few iterations and multi-stage training processes. In these scenarios, Source shows large improvements over influence functions, both in terms of LDS and subset removal evaluation.
> **If possible, it would be good to see (at least a qualitative) evaluation of SOURCE on a larger scale setting …**
We appreciate the reviewer's suggestion and agree that such an evaluation would provide useful insights. However, replicating a setup similar to [1] requires significant computational resources beyond most academic labs' reach. For instance, an experiment on a 13B parameter model, searching over 1M sequences, would incur costs in the range of tens of thousands of dollars. That said, we have recently verified that our EKFAC implementation scales to Llama-3 8B models. We will release our code to support TDA in larger-scale settings.
> **Computational cost of each method (e.g., wall-clock time)**
We have recently improved the efficiency of our EKFAC implementation and subsequently re-ran a subset of the experiments. While we have implemented additional optimization tricks such as query batching [2] and automatic mixed precision — which can significantly improve efficiency with only a slight decrease in LDS — we will report the wall-clock time without these improvements to maintain consistency with the presented results. It is important to note that (1) the wall-clock time includes model training time, and (2) although we used 2,000 query data points in our experiments, consistent with [1], Trak becomes significantly more computationally efficient as the number of query data points increases, as it does not require multiple iterations through the dataset. While we demonstrated one instantiation of Source with EKFAC, we emphasize that Source can also be implemented with Trak, as detailed in Appendix E.3.
FashionMNIST
| Method | TracIn (10 Models) | Trak (1 Model) | Trak (10 Models) | IF (1 Model) | IF (10 Models) | Source (1 Model) | Source (10 Models) |
|---|---|---|---|---|---|---|---|
| Time (s) | 361.22 | 25.40 | 138.66 | 26.73 | 267.91 | 50.55 | 495.39 |
| LDS | 0.21 | 0.08 | 0.26 | 0.30 | 0.45 | 0.46 | 0.53 |
| Frac. Misclassified Test Examples (300) | 0.23 | 0.10 | 0.36 | 0.30 | 0.44 | 0.54 | 0.65 |
FashionMNIST-C
| Method | TracIn (10 Models) | Trak (1 Model) | Trak (10 Models) | IF (1 Model) | IF (10 Models) | Source (1 Model) | Source (10 Models) |
|---|---|---|---|---|---|---|---|
| Time (s) | 115.26 | 7.58 | 32.72 | 8.88 | 89.68 | 18.29 | 182.93 |
| LDS | 0.48 | 0.07 | 0.12 | 0.34 | 0.40 | 0.63 | 0.64 |
| Frac. Misclassified Test Examples (300) | 0.56 | - | 0.18 | - | 0.41 | 0.90 | 0.91 |
CIFAR-10
| Method | TracIn (10 Models) | Trak (1 Model) | Trak (10 Models) | IF (1 Model) | IF (10 Models) | Source (1 Model) | Source (10 Models) |
|---|---|---|---|---|---|---|---|
| Time (s) | 3843.74 | 247.41 | 1843.11 | 232.15 | 2323.6 | 886.15 | 8896.95 |
| LDS | 0.06 | 0.05 | 0.24 | 0.13 | 0.20 | 0.18 | 0.22 |
| Frac. Misclassified Test Examples (1400) | 0.28 | 0.11 | 0.41 | 0.26 | 0.41 | 0.52 | 0.61 |
> **LDS asymptotes**
We direct the reviewer to Figure 4 for insights on performance with different numbers of segments. In our experiments with the FashionMNIST pipeline, we did not observe an improvement in the LDS as we increased the number of segments to $L=6$. This may be because the segmentation approximation is already sufficiently accurate for these simpler experiments. To address the reviewer's question about scaling performance with the number of fully re-trained checkpoints, we have conducted additional experiments: we trained up to 200 models and ensembled them accordingly for the FashionMNIST and FashionMNIST-C datasets. The uploaded PDF shows the results (we also compare against wall-clock time). On the FashionMNIST dataset, the gap between EKFAC and Source shrinks as the number of checkpoints increases, but we still obtain a higher LDS at the asymptote. On FashionMNIST-C (not converged), Source still obtains a significantly higher LDS even at the asymptote. We are conducting experiments on Trak with up to 300 models and will include them in the next revision of the manuscript.
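For readers less familiar with the metric, the LDS values discussed here can be read through the following minimal sketch (synthetic toy numbers, not our experimental data; the subset masks and Spearman aggregation follow the standard linear datamodeling score recipe):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_train, n_subsets = 100, 50
# Hypothetical attribution scores for one query point, one per training example.
tau = rng.normal(size=n_train)

# Random training subsets, encoded as 0/1 masks.
masks = (rng.random((n_subsets, n_train)) < 0.5).astype(float)

# Measured outputs after retraining on each subset. In practice these come
# from actual retraining; here we simulate them as the additive prediction
# plus a small amount of noise.
f_true = masks @ tau + 0.1 * rng.normal(size=n_subsets)

# Predicted group effect: sum of attribution scores over each subset.
f_pred = masks @ tau

# LDS = Spearman rank correlation between predicted and measured outputs.
lds, _ = spearmanr(f_pred, f_true)
print(f"LDS: {lds:.3f}")
```

The expensive part is producing `f_true`, which is why the asymptote experiments require training hundreds of models.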
> **Re: Intermediate checkpoints**
Our derivations recommend averaging the Hessian and gradient within a segment. This contrasts with the direct ensemble of multiple checkpoints explored in [1]. A core contribution of our work is the treatment of models trained in multiple stages or those that have not fully converged. Regarding the differences, Trak, as used in [1], benefits more from ensembling due to the randomness in the projection. We have observed that Trak can achieve higher performance even when ensembling a single checkpoint, while we generally do not see this improvement with EKFAC. To illustrate the performance differences, we have conducted additional experiments on Trak with intermediate-checkpoint ensembling for the FashionMNIST and FashionMNIST-C datasets (please see our response to Reviewer 3Y11; omitted here due to the character limit). We will include additional datasets with intermediate-checkpoint-ensembled Trak in the next revision of the manuscript.
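To make the contrast concrete, here is a schematic scalar toy (our derivation heavily simplified; the per-checkpoint ensemble line is only a stand-in for the estimator of [1], and all numbers are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-checkpoint curvatures (Hessians) and training-point gradients
# along one trajectory of T checkpoints, split into L segments.
T, L = 12, 3
H = 1.0 + rng.random(T)   # scalar "Hessians", kept positive
g = rng.normal(size=T)    # gradient of one training point at each checkpoint
g_query = 0.7             # query gradient, held fixed for simplicity

# Segment-averaged estimate: average H and g *within* each segment first,
# then accumulate the influence contribution segment by segment.
segments = np.array_split(np.arange(T), L)
score_segment = sum(g_query * g[s].mean() / H[s].mean() for s in segments)

# Direct checkpoint ensemble: compute an influence score at each checkpoint
# independently, then average them.
score_ensemble = np.mean([g_query * g[t] / H[t] for t in range(T)])

print(score_segment, score_ensemble)
```

Because `mean(g)/mean(H)` differs from `mean(g/H)`, the two estimators generally disagree; our derivation motivates the former.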
---
Rebuttal Comment 1.1:
Title: Score update
Comment: I appreciate the response and additional experiments. As a result, I am raising my score. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank all reviewers for their detailed and thoughtful feedback. We are pleased that the reviewers found our work to be well-written, easy to read (**9an4**, **hk4U**, **3Y11**, **5ybi**), addressing an important problem in the field (**9an4**, **3Y11**), technically solid (**9an4**, **5ybi**), and supported by convincing experiments (**3Y11**, **5ybi**).
We will address your concerns and questions in individual responses and are committed to improving our paper based on your valuable insights. Should you have additional comments, we welcome further discussion during the author-reviewer period. Once again, thank you for your time and expertise.
Regards,
Authors of "Training Data Attribution via Approximate Unrolling"
Pdf: /pdf/629b8f36981df057d2daa2f9170de22c6dce4b41.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding | Accept (poster) | Summary: This paper presents OpenGaussian, a method utilizing 3D Gaussian Splatting for open vocabulary comprehension at the 3D point level. It addresses the limitations of current 3DGS methods that are confined to 2D pixel-level analysis and are inadequate for 3D point-level tasks. The method leverages SAM masks to maintain 3D consistency, implements a two-stage codebook for feature discretization, and proposes an instance-level 3D-2D feature association approach, demonstrating effectiveness in various tasks.
Strengths: 1. The motivation behind this paper is well-grounded, and the proposed method is technically sound.
2. The paper is well-written and easy to follow.
3. The method demonstrates good performance across various tasks, including open-vocabulary object selection, click-based 3D object selection, and 3D point cloud understanding.
Weaknesses: 1. The ablation studies are not sufficiently thorough. In Table 3, using xyz information results in significant performance improvement. It would be beneficial to provide results for the coarse-level alone, using xyz information, with k=64 and k=320. This would further illustrate the necessity of using the fine-level.
2. A sensitivity analysis for k1 and k2 could be provided to better understand their impact on the performance of the method.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. More Detailed Ablation of Two-Stage Codebooks
Thank you for your constructive comments. Following your suggestions, we have incorporated two additional experiments (refer to cases #3 and #4 in the table below) to evaluate the performance when considering xyz coordinate information at the coarse level alone. Our analysis yielded the following insights:
+ **Importance of xyz Information**: Comparing cases #1 and #3, we observe a significant improvement in mIoU (3.36%) and mAcc (2.55%) when incorporating xyz information at the coarse level. This highlights the crucial role of spatial context in our approach.
+ **Codebook Size**: Comparing cases #2 and #4, we find that simply increasing the number of codewords (k) does not necessarily lead to performance gains.
+ **Fine-Level Codebook Contribution**: Comparing cases #3 and #6, we see a substantial performance boost of 6.25% in mIoU and 5.37% in mAcc when incorporating the fine-level codebook. This demonstrates the significant contribution of the fine-level codebook in capturing detailed semantic information and enhancing the overall performance.
The additional experiments reinforce our claim that incorporating xyz coordinate information at the coarse level is essential and that the fine-level codebook plays a critical role in boosting performance. We appreciate your suggestions, which have significantly strengthened the persuasiveness of our method.
| Case | Coarse (w/o xyz) | Coarse (w/ xyz) | Fine-level | mIoU | mAcc. |
|:----:|:----------------:|:----------------:|:----------:|:-----:|:------:|
| #1 | ✓ (k=64) | | | 28.68 | 47.27 |
| #2 | ✓ (k=320) | | | 14.61 | 24.34 |
| #3 | | ✓ (k=64) | | 32.04 | 49.82 |
| #4 | | ✓ (k=320) | | 15.20 | 24.91 |
| #5 | ✓ (k=64) | | ✓ | 30.27 | 46.44 |
| #6 | | ✓ (k=64) | ✓ | 38.29 | 55.19 |
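For intuition, the coarse-to-fine discretization ablated above can be sketched as follows (plain k-means via `scipy` stands in for our codebook learning; the toy sizes, the unweighted feature/xyz concatenation, and the cluster counts are illustrative assumptions, not our trained setup):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

n_points, feat_dim = 2000, 6
feats = rng.normal(size=(n_points, feat_dim))  # per-Gaussian instance features
xyz = rng.normal(size=(n_points, 3))           # 3D Gaussian positions

# Coarse level: quantize on [feature, xyz] so that spatially distant
# Gaussians with similar features still land in different coarse clusters.
coarse_in = np.concatenate([feats, xyz], axis=1)
_, coarse_id = kmeans2(coarse_in, 8, minit="++", seed=0)

# Fine level: within each coarse cluster, quantize the features alone.
fine_id = np.zeros(n_points, dtype=int)
for c in np.unique(coarse_id):
    idx = np.where(coarse_id == c)[0]
    k_fine = min(4, len(idx))  # guard against tiny clusters
    _, labels = kmeans2(feats[idx], k_fine, minit="++", seed=0)
    fine_id[idx] = labels

# Every Gaussian ends up with a discrete (coarse, fine) codeword pair.
codeword = coarse_id * 4 + fine_id
print(len(np.unique(codeword)))
```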
---
## 2. Sensitivity Analysis of k1 and k2
Thank you for your question. In the ScanNet dataset, our default settings are k1=64 and k2=5. To address your concerns, we conducted a comprehensive analysis with k1 values of 48, 64, and 80, and k2 values of 3, 5, and 7, as illustrated in the table below. Our findings reveal that the optimal values for k1 and k2 should not be too high. Specifically, performance remains comparable when k1 is set to 64 or 48, and when k2 is set to 5 or 3. However, a performance degradation is observed when k1 and k2 are increased to 80 and 7.
We sincerely appreciate your insightful suggestions, which led to these valuable observations. While our initially chosen values are empirically derived, these experiments highlight that setting fixed values may not always be optimal. In the future, we plan to investigate scene-adaptive k values, which could potentially enhance the performance across diverse scenarios.
We commit to incorporating the aforementioned experiments and this limitation analysis in the revised version.
| Case | k1 | k2 | mIoU | mAcc. |
|:------------:|:--:|:--:|:-----:|:-----:|
| #1 | 48 | 5 | 37.18 | 54.56 |
| #2 (default) | 64 | 5 | 38.29 | 55.19 |
| #3 | 80 | 5 | 32.11 | 45.87 |
| #4 | 64 | 7 | 34.89 | 49.33 |
| #5 | 64 | 3 | 38.3 | 55.62 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My concern has been resolved, and I will maintain the original score.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your response and valuable suggestions, which improved the quality and comprehensiveness of our paper. | Summary: In this paper, the authors propose three techniques to enhance the point-level 3D gaussian-based open vocabulary understanding:
1. Intra-mask smoothing loss to draw features within the same mask closer, and inter-mask contrastive loss to increase discriminativeness of the mean feature of each instance.
2. Two-level codebook for discretization. The proposed codebook discretizes instance features with the 3D coordinates to ensure identical Gaussian features from the same instance, in a coarse-to-fine manner.
3. Instance-level 2D-3D association technique to link CLIP features with 3D instance without loss backpropagation and depth information.
Strengths: 1. Visualizations are clear and strong to show the effectiveness of the proposed OpenGaussian and advantages over previous literature.
2. Quantitative results also demonstrate consistent and remarkable improvements over previous methods.
Weaknesses: 1. Paper writing needs to be improved. I have to admit that I'm not an expert in this field, and this paper is not easy to understand since it requires abundant prior knowledge about the task and previous methods.
2. No limitation discussion is included in this paper.
3. Ablations on inter/intra-mask smoothing loss, since this is also a contribution of this paper.
4. Efficiency comparison between OpenGaussian and previous methods. The authors do include training time in the supplemental material. However, since there is human-computer interaction in this task, it is critical to report the inference time or throughput of the method. Can it achieve real-time performance? Does it lag behind previous methods in terms of efficiency?
5. The three contributions of this paper seem separate. The authors are encouraged to resummarize them in a more integrated story.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. typo: m(m+1) -> m(m-1) in equation (2).
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper doesn't discuss any limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Ablation of Inter/Intra Mask Loss
We truly appreciate your constructive feedback and apologize for any overlook regarding the ablation of the inter/intra mask loss. In response, we have conducted targeted ablation experiments, and the results are presented in the table below. Our analysis is summarized as follows:
+ The inter-mask contrastive loss proves to be more crucial. Employing only this loss achieves respectable performance. Adding the intra-mask loss further enhances results, leading to a 3.05% improvement in mIoU and a 2.76% increase in mAcc.
+ The intra-mask smoothing loss is comparatively less important. This can be attributed to the inherent characteristics of 3DGS, where a single Gaussian point represents multiple pixels. Consequently, features of neighboring pixels tend to be similar, meaning that 3DGS naturally induces a smoothing effect on adjacent pixels. This intrinsic smoothing mechanism partially subsumes the contribution of the intra-mask smoothing loss.
These additional experiments substantiate the contribution of our proposed inter/intra mask loss. We commit to incorporating these results and analysis in the revised version.
| Case | inter-mask loss | intra-mask loss | mIoU | mAcc. |
|:----:|:----------------:|:-----------------:|:-----:|:-----:|
| #1 | ✓ | | 35.24 | 52.43 |
| #2 | | ✓ | 25.89 | 42.76 |
| #3 | ✓ | ✓ | 38.29 | 55.19 |
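For completeness, the two losses being ablated can be sketched as follows (a simplified NumPy illustration; the inverse-distance contrastive form, the `eps` term, and the m(m-1) pair normalization are assumptions for exposition rather than the exact equations of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix, d, m = 500, 8, 5
feats = rng.normal(size=(n_pix, d))       # rendered per-pixel instance features
mask_id = rng.integers(0, m, size=n_pix)  # SAM mask index of each pixel

# Mean feature of each mask.
mu = np.stack([feats[mask_id == k].mean(axis=0) for k in range(m)])

# Intra-mask smoothing: pull every pixel feature toward its mask mean.
intra = np.mean(np.sum((feats - mu[mask_id]) ** 2, axis=1))

# Inter-mask contrast: push distinct mask means apart, averaged over the
# m * (m - 1) ordered pairs of masks.
eps = 1e-6
inter = sum(
    1.0 / (np.linalg.norm(mu[a] - mu[b]) + eps)
    for a in range(m) for b in range(m) if a != b
) / (m * (m - 1))

print(intra, inter)
```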
---
## 2. Efficiency and Real-Time Performance Analysis
We appreciate the reviewer’s question. Our experimental setup aligns with LangSplat and LEGaussian, focusing on semantic understanding of well-reconstructed 3D scenes, which is not a real-time task. However, we believe our method has potential for real-time applications for the following reasons.
+ **Incremental Input Support**: Unlike LangSplat[30] and LEGaussian[34], which necessitate acquiring all scene objects prior to training the autoencoder-decoder or performing feature distillation, our method allows for the incremental input of new images. Each new frame can be processed immediately without prior preprocessing. This aligns well with the incremental needs of real-time tasks such as SLAM and robotics.
+ **Training Efficiency**: Our statistics on the ScanNet indoor dataset show that 3D point feature learning on a 640x480 image takes approximately 50ms per iteration for a scene with around 100,000 points, achieving a 20fps frame rate. On the Waymo outdoor dataset, processing a 960x640 image for a scene with approximately 700,000 points requires about 80ms per iteration, resulting in a 13fps frame rate. Furthermore, techniques like keyframes, sliding windows, and multithreading can be employed to further enhance processing speed in applications such as SLAM.
The analysis suggests our method possesses the potential for real-time implementation. We also hope to inspire the community to utilize the proposed method across various downstream tasks by releasing our code publicly.
---
## 3. Discussion of Limitations
We apologize for not analyzing the limitations of the proposed method. In the revised version, we will incorporate the following discussion:
+ The geometric properties of the Gaussian (position, opacity, scale) are fixed. This may lead to inconsistencies between geometric representation and semantic content. We will consider joint optimization of instance features and geometric properties in future work.
+ The values of k for the two-level codebooks are currently determined empirically. It is necessary to study scenario-specific adaptive values to optimize performance across diverse contexts.
+ Currently, we have not considered dynamic factors, which are common challenges in real-world applications. Integrating the proposed method with 4DGS would be meaningful.
By acknowledging these limitations, we aim to provide a more balanced perspective on our method and suggest areas for future improvements.
---
## 4. Improvements in Paper Writing
We appreciate the reviewer pointing out the shortcomings in our writing. Indeed, our contribution 1 (inter/intra-mask loss for instance feature learning) and contribution 2 (two-level codebooks for discrete feature learning) are closely linked, with contribution 1 essentially serving as the initialization for contribution 2. As for contribution 3, we do realize that this part is somewhat independent, leading to difficulty in understanding. We apologize for any confusion caused. In the revised version, we will enhance the readability of the paper in the following ways:
+ Reorganizing the introduction to better connect the three methods we proposed.
+ Adding contextual transitions between sections.
---
## 5. Other Question
Thank you for pointing out the typo in Eq(2). We will correct this in the revised version.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thanks for the authors' comprehensive rebuttal to my questions, and sorry for the late reply. Their clarifications on the missing ablation studies and the real-time performance analysis are satisfactory. Therefore, I will raise my rating to 6.
However, I have to admit that I'm not an expert in this field, so please give more weight to the opinions of reviewers with higher confidence.
---
Reply to Comment 1.1.1:
Comment: We are very grateful for your response and the positive evaluation of our work. Your suggestions have significantly improved our manuscript, and we will also actively consider the opinions of other reviewers.
---
Rebuttal 2:
Comment: Dear Reviewer sSBZ,
Thank you once again for your insightful review, which has greatly enhanced the quality and clarity of our paper. We sincerely hope that our rebuttal has effectively addressed your questions and concerns. Should you require any additional clarifications or further information, please do not hesitate to reach out. We greatly value your insightful suggestions.
Thank you very much for your time and consideration.
Best regards,
Authors of Submission 1591 | Summary: This paper introduces "OpenGaussian," a novel method for 3D point-level open vocabulary understanding using 3D Gaussian Splatting (3DGS). The authors address the limitations of existing 3DGS-based methods that primarily focus on 2D pixel-level parsing. OpenGaussian aims to enhance 3D point-level understanding by training instance features with 3D consistency and proposing a two-stage codebook for feature discretization. The method also introduces an instance-level 3D-2D feature association to link 3D points to 2D masks and CLIP features. Extensive experiments demonstrate the effectiveness of OpenGaussian in various 3D tasks, and the source code will be released.
Strengths: - **Novelty**: The paper introduces a unique approach to 3D point-level open vocabulary understanding, which is a significant advancement over existing methods that focus on 2D pixel-level parsing.
- **Technical Contributions**: The proposal of a two-stage codebook for feature discretization and the introduction of a 3D-2D feature association method are innovative and well-executed.
- **Experiments**: The extensive experiments, including open vocabulary-based 3D object selection and 3D point cloud understanding, validate the effectiveness of the proposed method.
- **Clarity**: The paper is well-written and clearly explains the methodology, making it easy to follow the proposed approach and its benefits.
Weaknesses: - **Limitations Discussion**: The paper does not discuss the limitations of the proposed method in detail, which could provide a more balanced view of its applicability and potential drawbacks.
- **Comparative Analysis**: While the paper compares OpenGaussian with LangSplat and LEGaussians, additional comparisons with other state-of-the-art methods in OV 3D understanding could strengthen the evaluation, like Open-vocabulary 3D object detection[1, 2]
- **Complexity**: The implementation details, especially the two-stage codebook and feature association, may be complex and could benefit from further simplification or more detailed explanations for reproducibility.
- **Generalization**: The experiments are conducted on specific datasets, and it's unclear how well the method generalizes to other types of 3D scenes or datasets.
[1] Yuheng Lu, Chenfeng Xu, Xiaobao Wei, Xiaodong Xie, Masayoshi Tomizuka, Kurt Keutzer, and Shanghang Zhang. Open-vocabulary point-cloud object detection without 3d annotation. In CVPR, 2023. 1, 3
[2] Yang Cao, Zeng Yihan, Hang Xu, and Dan Xu. Coda: Collaborative novel box discovery and cross-modal alignment for open-vocabulary 3d object detection. In NeurIPS, 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: - **Scalability**: How does the method perform on larger and more complex 3D scenes? Are there any scalability issues?
- **Real-time Performance**: Can the method be applied in real-time applications, especially in robotics and embodied intelligence scenarios?
- **Ablation Studies**: Can you provide more detailed ablation studies to isolate the contributions of each component of the proposed method?
- **Generalization**: Have you tested the method on different types of 3D datasets to evaluate its generalization capabilities?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors did not provide an analysis of the limitations. For suggestions on improvement, please refer to the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Scalability and Generalization
We appreciate the reviewer’s insightful question, which encouraged us to explore the scalability and generalizability of OpenGaussian in other 3D datasets and scenarios.
+ We selected **6 scenes from the Waymo outdoor dataset** captured by vehicle-mounted cameras to demonstrate the effectiveness of the proposed method in large-scale complex scenarios.
+ We captured images of a **real-world office scene** using a mobile phone to demonstrate the method’s generalization.
Please refer to the **attached PDF** in the General Rebuttal section on OpenReview for detailed results. We kindly request the reviewer to check the file:
+ Fig.1, Fig.2: Visualization of 3D point features in 6 scenes from the Waymo dataset and the real-world office scene;
+ Fig.3: Results of rendering 3D point features into 2D feature maps;
+ Fig.4: Comparison of 3D point features after coarse-level and fine-level codebook discretization.
We hope the added results can address your concerns about scalability and generalizability.
---
## 2. More Detailed Ablation
Thank you for your question. Due to the interdependent nature of the three proposed methods, we cannot fully isolate them for ablation; the mask-based instance feature learning is essential for the two-level codebook discretization, and the discretized features are necessary for the 2D-3D association. However, we conducted a detailed ablation on their **subcomponents**, as shown in the table below. Cases #2, #3, and #4 were added during the rebuttal period.
+ Cases #2(w/o intra-mask) & #3(w/o inter-mask): Ablation of inter-mask and intra-mask losses, highlighting the importance of inter-mask loss.
+ Cases #1(two-level) & #4(coarse-level): Ablation of two-level codebooks, emphasizing the significance of fine-grained codebooks.
+ Cases #5(w/o feat. dis.) & #6(w/o IoU): Testing the 2D-3D feature association strategy, demonstrating the superiority of combined strategies.
| Case | Inter-mask loss | Intra-mask loss | Coarse codebook | Fine codebook | IoU assoc. | Feat. dis. assoc. | mIoU | mAcc. |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|#1|✓|✓|✓|✓|✓|✓|38.29|55.19|
|#2|✓||✓|✓|✓|✓|35.24|52.43|
|#3||✓|✓|✓|✓|✓|25.89|42.76|
|#4|✓|✓|✓||✓|✓|32.04|49.82|
|#5|✓|✓|✓|✓|✓||35.28|53.19|
|#6|✓|✓|✓|✓||✓|34.01|51.35|
---
## 3. Efficiency and Real-Time Performance Analysis
We appreciate the reviewer’s question. Our experimental setup aligns with LangSplat[30] and LEGaussians[34], focusing on semantic understanding of well-reconstructed 3D scenes, which is not a real-time task. However, we believe our method has the potential for real-time applications for the following reasons.
+ **Incremental Input Support**: Unlike LangSplat and LEGaussians, which require acquiring all scene objects before training the autoencoder-decoder or performing feature distillation, our method allows for the incremental input of new images. Each new frame can be processed immediately without preprocessing. This aligns well with the incremental needs of real-time tasks such as SLAM and robotics.
+ **Training Efficiency**: Our statistics on the ScanNet indoor dataset show that 3D point feature learning on a 640x480 image takes approximately 50ms per iteration for a scene with around 100,000 points, achieving a 20fps frame rate. On the Waymo outdoor dataset, processing a 960x640 image for a scene with approximately 700,000 points requires about 80ms per iteration, resulting in a 13fps frame rate. Furthermore, techniques like keyframes, sliding windows, and multithreading can be employed to further enhance processing speed in applications such as SLAM.
The analysis suggests our method possesses the potential for real-time implementation. We also hope to inspire the community to utilize the proposed method across various downstream tasks by releasing our code publicly.
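As a quick arithmetic check on the frame rates quoted above (a trivial conversion added for illustration; this is not the authors' code):

```python
# Frame rates implied by the per-iteration training times reported above.
timings_ms = {
    "ScanNet (640x480, ~100k points)": 50,
    "Waymo (960x640, ~700k points)": 80,
}
for scene, ms in timings_ms.items():
    fps = 1000.0 / ms  # one training iteration processes one frame
    print(f"{scene}: {ms} ms/iter -> {fps:.1f} fps")
# 50 ms/iter -> 20.0 fps; 80 ms/iter -> 12.5 fps (reported as ~13 fps)
```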
---
## 4. Comparison with 3D Open Vocabulary Detection Methods
Thank you for raising this interesting point. In fact, these two types of methods differ significantly in their setups.
The two papers mentioned by the reviewer require training a 3D Encoder-Decoder Backbone and a task-specific head, meaning they are data-driven and need to be trained on large-scale point cloud datasets. In contrast, our method follows the 3DGS setup, involving no networks (MLP, convolution, or Transformer) and only training point features. Besides, our method is scene-specific and not data-driven, thus avoiding domain gaps. The differences in model, training approach, and data make fair comparison challenging.
Following your suggestion, we will include a discussion on these data-driven 3D open vocabulary understanding methods in the revised version.
---
## 5. Discussion of Limitations
We apologize for not analyzing the limitations of the proposed method. In the revised version, we will incorporate the following discussions:
+ The geometric properties of the Gaussian (position, opacity, scale) are fixed. This may lead to inconsistencies between geometric representation and semantic content. We will consider joint optimization of instance features and geometric properties in future work.
+ The values of k for the two-level codebooks are currently determined empirically. It is necessary to study scenario-specific adaptive values to optimize performance across diverse contexts.
+ Currently, we have not considered dynamic factors, which are common challenges in real-world applications. Integrating the proposed method with 4DGS would be meaningful.
By acknowledging these limitations, we aim to provide a more balanced perspective on our method and suggest areas for future improvements.
---
## 6. Implementation Complexity
We apologize for any confusion caused. We commit to releasing the code for reproducibility and to contribute to the community. To enhance the clarity of the paper, we will provide more detailed implementation details in the revised version and simplify any non-essential components that may cause confusion.
---
Rebuttal 2:
Comment: Dear Reviewer ANGY,
Thank you once again for your insightful review, which has greatly enhanced the quality and clarity of our paper. We sincerely hope that our rebuttal has effectively addressed your questions and concerns. Should you require any additional clarifications or further information, please do not hesitate to reach out. We greatly value your insightful suggestions.
Thank you very much for your time and consideration.
Best regards,
Authors of Submission 1591
---
Rebuttal Comment 2.1:
Title: Further reply
Comment: Dear authors, I have carefully read your paper. Most of my questions are addressed well. I have one more question. Is the feature learning process jointly conducted with the 3DGS learning? Or is the 3DGS learned first, and then the instance/semantic features?
---
Reply to Comment 2.1.1:
Comment: Thanks for your feedback and positive evaluation of our work. We are glad that our rebuttal addressed most of your concerns.
For this question, we mentioned this training detail in "Implementation Details" of the supplementary material: For a fair comparison, we adhered to the training strategy consistent with LangSplat. We first trained for 30,000 steps using 3DGS, then froze the geometric properties of the Gaussians, and continued training the instance features. The advantage of this strategy is that it allows us to continue training from any model pre-trained on 3DGS (or its variants) without needing to retrain the geometric properties from scratch. However, as we noted in the first point of our limitations analysis (Rebuttal-Q5), this strategy may lead to inconsistencies between geometry and semantics. We appreciate your insightful observation, and we will further explore how to conduct more efficient joint training in the future.
Thank you again for your thorough review. We would appreciate it if you could consider re-evaluating our work in light of these clarifications. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers and AC,
We would like to thank the three anonymous reviewers and the AC for their time and effort in reviewing our paper and providing constructive feedback. We are very grateful for the positive comments from the reviewers, such as “significant advancement over existing methods that focus on 2D pixel-level parsing” (Reviewer ANGY), “method are innovative and well-executed” (Reviewer ANGY), “extensive experiments” (Reviewer ANGY), “well-written and clearly explains” (Reviewer ANGY), “visualizations are clear and strong” (Reviewer sSBZ), “consistant and remarkable improvements” (Reviewer sSBZ), “motivation behind this paper is well-grounded” (Reviewer d9so), “well-written and easy to follow” (Reviewer d9so), and “good performance across various tasks” (Reviewer d9so).
For the insightful questions, constructive suggestions, and additional experiments requested by the reviewers, we have provided detailed responses to each reviewer. **Please refer to our individual responses to each reviewer for more details**. Below is a brief overview of our rebuttal content.
**Attached PDF**: Experiments on the large-scale outdoor dataset Waymo and images of a real-world scene captured with a mobile phone, to verify the scalability and generalization of the proposed method.
For **Reviewer ANGY**:
+ (1) **Scalability and Generalization**: We demonstrate the scalability and generalization of our method by adding experiments on the large-scale outdoor dataset Waymo and a real-world scene captured with a mobile phone. See the attached PDF for details.
+ (2) **More Detailed Ablation**: We conducted a comprehensive ablation study on each sub-component of our method to illustrate the role of each module.
+ (3) **Efficiency and Real-Time Performance Analysis**: We analyzed the real-time performance of the proposed method.
+ (4) **Comparison with 3D Open Vocabulary Detection Methods**: We discussed our method in comparison to data-driven 3D open vocabulary methods.
+ (5) **Discussion of Limitations**: We discussed the limitations of the proposed method.
+ (6) **Implementation Complexity**: We explained the complexity of the implementation and outlined ways to improve it.
For **Reviewer sSBZ**:
+ (1) **Ablation of Inter/Intra Mask Loss**: We added ablation experiments and analysis for both losses.
+ (2) **Efficiency and Real-Time Performance Analysis**: We analyzed the real-time performance of the proposed method.
+ (3) **Discussion of Limitations**: We discussed the limitations of the proposed method.
+ (4) **Improvements in Paper Writing**: We analyzed and improved the writing deficiencies.
For **Reviewer d9so**:
+ (1) **More Detailed Ablation of Two-Stage Codebooks**: We provided relevant ablation experiments and analysis to enhance the comprehensiveness and fairness of the ablation study.
+ (2) **Sensitivity Analysis of k1 and k2**: We conducted experiments and analysis to investigate the sensitivity of the codebook parameter.
We would like to thank the reviewers again for their valuable feedback, which has significantly improved the quality and comprehensiveness of our method. We hope our responses have addressed the reviewers’ concerns. If the reviewers have any further questions, we are more than happy to provide clarification.
Best regards.
Pdf: /pdf/fe3853381ed9805dc16b247bd5b35d29da7f072f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
B-ary Tree Push-Pull Method is Provably Efficient for Distributed Learning on Heterogeneous Data | Accept (poster) | Summary: The paper proposes the B-ary Tree Push-Pull (BTPP) algorithm for distributed learning over heterogeneous data. The BTPP algorithm introduces two spanning directed trees as communication graphs: the Pull Tree $G_R$ and the Push Tree $G_C$. BTPP has only $\Theta(1)$ communication overhead per iteration and $O(n)$ transient time complexity but outperforms the baseline. The paper also theoretically shows that BTPP achieves a sublinear convergence rate. The experimental results shown in the paper also support that BTPP outperforms the SOTA baseline.
Strengths: 1. BTPP combines two directed trees, the Pull Tree and the Push Tree, for distributed learning.
2. Since the degree of the tree is at most $O(n)$ with $n$ nodes, the communication time complexity of BTPP is lower than that of other methods. At the same time, its results outperform those of other methods.
3. The paper provides theoretical guarantees for the BTPP algorithm.
Weaknesses: 1. The BTPP algorithm is derived from the Push-Pull gradient method in [1] by defining the communication graphs as B-ary trees. The mathematical formulation and pseudocode in Algorithm 1 look the same as those in [1]. What is the main difference between the two algorithms?
2. In the paper, the authors state that the Pull graph $G_R$ is the inverse of the Push graph $G_C$. What is the reason that the two graphs must have the same structure?
3. The authors claim that the advantage of BTPP is that it could reach better performance than the SOTA with lower communication costs. However, in section 4, the paper only shows iteration-wise test performance rather than the time-wise performance.
[1] Push-Pull Gradient Methods for Distributed Optimization in Networks
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We provide response to your concerns and questions below.
1. **Comparison between BTPP and Push-Pull method.**
There are three main differences between BTPP and the original Push-Pull method. First, the Push-Pull method assumes a general condition on the communication graphs but does not consider any specific network topology. By contrast, BTPP makes use of two $B$-ary trees to achieve accelerated convergence. Second, BTPP employs network weights which are only 0's and 1's. This does not satisfy the assumption in Push-Pull that requires non-zero weights for diagonal elements of the mixing matrices. The special design of BTPP greatly improves the efficiency of information propagation and aggregation compared to Push-Pull. Finally, Push-Pull considered deterministic gradients while BTPP works with stochastic gradients.
2. **Graph structure.**
We consider $\mathcal{G}_R$ to be the inverse of $\mathcal{G}_C$ for simplicity (since the network topology is fully controllable). If $\mathcal{G}_R$ and the inverse of $\mathcal{G}_C$ have different structures, as long as they are still $B$-ary trees ($B$ can be different for the two graphs), we can show a similar convergence result for BTPP following almost identical analysis (noting that matrices $R$ and $C$ preserve the desired properties).
3. **Real-time performance.** For the problem of training a CNN on the MNIST dataset, we have further compared the real-time performance of BTPP with other representative methods (see the attachment). The experiments are conducted on a server equipped with eight Nvidia RTX 3090 GPUs and Intel Xeon Gold 4310 CPU, where the communication between GPUs follows the topology requirement of each algorithm. We measure the running time including GPU computation and communication for 13,000 iterations. The experimental settings are consistent with those described in Section 4.2 of the original paper. From Figure 1, BTPP outperforms the other algorithms concerning the running time. Additionally, we evaluate BTPP with various branch sizes $B$, concluding that for relatively small values of $n$, a branch size of $B=2$ is most effective.
Furthermore, we consider training VGG13 on the CIFAR10 dataset, with $n=8$ and a batch size of 16. The learning rate and topology configurations are consistent with those described in Section 4.2. Figures 2 and 3 illustrate that BTPP beats competing algorithms in terms of the convergence rate (against iteration number) and running time. Moreover, a branch size of $B=2$ is optimal.
---
Rebuttal Comment 1.1:
Comment: The authors answered most of my questions and I would like to maintain the score. | Summary: This paper reduces transient time in decentralized optimization algorithms by introducing the B-ary Tree Push-Pull (BTPP) algorithm, derived from the Push-Pull method. The authors assume a setup where any type of communication network can be established among the nodes. They employ two distinct communication networks: the network represented by graph $R$ for pulling parameter information, and the network represented by graph $C$ for pushing stochastic gradient information towards node 1. The method achieves linear speedup for smooth nonconvex objective functions with only $O(n)$ transient iterations, significantly outperforming existing state-of-the-art methods.
Strengths: - Reduction of transient time in decentralized optimization
- Novel convergence approach
- Demonstrating the use of B-ary trees to balance between communications per iteration and transient time
Weaknesses: 1. The introduction lacks cohesion and focus, with unnecessary information in the second paragraph. The significance of the transient time problem, a key contribution of the paper, could be better defined to highlight its importance.
2. The paper demonstrates that only node 1 converges to the local solution. However, it is not entirely clear what happens to the variables at other nodes or whether the nodes achieve consensus similar to classical decentralized algorithms.
3. The proposed setup appears to be limited to high-performance data-center clusters where a B-ary spanning tree communication network can be established among nodes. This might limit the applicability of the algorithm in other environments.
4. Additionally, the analysis is focused on smooth nonconvex objective functions, and there is no discussion on strongly convex functions.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Refer to weaknesses.
2. Could you please elaborate on how the BTPP algorithm addresses data heterogeneity problems? Specifically, which term in equation (3) demonstrates that data heterogeneity decreases as the algorithm progresses?
3. I am interested in the fixed point analysis of BTPP. The Push-Pull algorithm has a comprehensive fixed point analysis. In your algorithm, if all nodes converge to a stationary solution, will the algorithm maintain that state? Additionally, to which points do the other nodes converge, apart from node 1?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Refer to Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We provide response to your concerns and questions below.
1. **Consensus properties.**
In BTPP, the nodes achieve consensus similar to classical decentralized algorithms. Specifically, noting the definition of $\Pi_{\mathbf{u}}$ on page 7, Lemma 3.3 bounds the summation of the expected squared distances between $x_i^{(t)}$ and $x_1^{(t)}$. Dividing both sides of the inequality by $(T+1)$, in light of the condition on $\gamma$ and the convergence result given in Theorem 2.1, we can derive that the consensus error (averaged over the history) converges to $0$ in the order of $\mathcal{O}(1/T)$.
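For concreteness, the consequence described above can be displayed as follows (our paraphrase, with constants omitted): dividing the bound of Lemma 3.3 by $T+1$ gives
$$
\frac{1}{T+1}\sum_{t=0}^{T}\sum_{i=1}^{n}\mathbb{E}\left\\| x_i^{(t)} - x_1^{(t)} \right\\|^2 = \mathcal{O}\left(\frac{1}{T}\right),
$$
so the history-averaged consensus error vanishes at the $\mathcal{O}(1/T)$ rate.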
2. **Extension to fixed networks.**
This work focuses on the setting where the network topology is fully controllable. If instead an arbitrary fixed directed network $\mathcal{G}$ is given, inspired by BTPP, we can locate a spanning tree $\mathcal{G}\_{R}$ and a reversed spanning tree $\mathcal{G}\_{C}$ which are subgraphs of $\mathcal{G}$, such that $\mathcal{G}\_{R}$ and $\mathcal{G}_{C^{\top}}$ share a common root. Then the mixing matrices $R$ and $C$ are constructed accordingly with $0$'s and $1$'s, and an algorithm similar to BTPP can be designed following (2). We anticipate that, although such an algorithm would not enjoy $\tilde{O}(n)$ transient time due to potentially longer network diameter in the order of $\mathcal{O}(n)$, linear speedup can be expected for large $T$, and the transient time would be shorter compared to existing distributed stochastic gradient methods over a general fixed network.
3. **Analysis under (strongly) convex objective functions.**
The analysis of BTPP can be extended to $\mu$-strongly convex function $f$, that is, for any $x,y\in \mathbb{R}^p$,
$$
f(y) \ge f(x) + \left\langle \nabla f(x), y-x \right\rangle + \frac{\mu}{2}\left\\| y-x \right\\|^2.
$$
We are able to obtain the following convergence result (a sketch of analysis is provided in the "global" response):
$$
\mathbb{E}\left\\|x_1^{(T)} - x^* \right\\|^2 = \tilde{\mathcal{O}} \left( \frac{1}{nT} + \frac{1}{nT^2} + \exp\left( - T\right) \right).
$$
where $\tilde{\mathcal{O}}$ hides the constants and polylogarithmic factors. The transient time is thus $\tilde{O}(1)$, which also outperforms the state-of-the-art results.
4. **Addressing data heterogeneity.**
In BTPP, $y_1^{(t)}$ effectively tracks the summation of (possibly delayed) stochastic gradients across the network. By properly choosing the stepsize that is decreasing in $n$, node $1$ is approximately performing stochastic gradient descent for minimizing $f(x)$ using the average of $n$ stochastic gradients per iteration. Therefore, the performance of BTPP is similar to a centralized SGD algorithm whose performance does not degrade due to data heterogeneity. In (3), only the last term is affected by the data heterogeneity. Noting that $\\|\nabla F(\mathbf{X}^{(0)})\\|^2 = \sum_{i=1}^n \\|\nabla f_i(x^{(0)})\\|^2$, even if $x^{(0)}$ is a stationary point of $f(x)$ (i.e., $\nabla f(x^{(0)}) = 0$), $\nabla f_i(x^{(0)})$ are non-zero in general and may differ vastly for different $i$ because of different $f_i$. The last term decreases in the order of $\mathcal{O}(1/T)$, faster than the dominant term. This implies that the influence of data heterogeneity vanishes faster compared with the stochastic gradient variances.
5. **Fixed-point analysis.**
Since BTPP applies stochastic gradients with a constant stepsize, convergence in the common sense cannot be achieved. We thus study the fixed points of BTPP assuming deterministic gradients. From (2), assuming $\mathbf{X}^{(t)}$ and $\mathbf{Y}^{(t)}$ converge to $\mathbf{X}^{\infty}$ and $\mathbf{Y}^{\infty}$ respectively, we have
$$
\mathbf{X}^{\infty} = R\mathbf{X}^{\infty} - \gamma\mathbf{Y}^{\infty}, \quad \mathbf{Y}^{\infty} = C\mathbf{Y}^{\infty}.
$$
Therefore,
$$
(I-R)\mathbf{X}^{\infty} + \gamma\mathbf{Y}^{\infty} = 0, \quad (I-C)\mathbf{Y}^{\infty} = 0.
$$
Since $\mathrm{range}(I-R) \cap \mathrm{null}(I-C) = \{0\}$ (from the design of $R$ and $C$), we have $(I-R)\mathbf{X}^{\infty} = 0$ and $\mathbf{Y}^{\infty} = 0$. The former implies consensus of $x_i^{\infty}$, i.e., $x_i^{\infty} = x^{\infty}$ for all $i$. Noting that $\frac{1}{n} \sum_{i=1}^n y_i^{\infty} = \frac{1}{n} \sum_{i=1}^n \nabla f_i(x_i^{\infty})$, the latter leads to $\frac{1}{n} \sum_{i=1}^n \nabla f_i(x^{\infty}) = 0$. Therefore, $x^{\infty}$ is a stationary point of $f(x)$. In conclusion, the fixed points $x_i^{\infty}$ achieve consensus and are equal to a stationary point of $f(x)$.
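This conclusion can be sanity-checked numerically on a toy instance. The sketch below is our own illustration (not the authors' code); it assumes the natural 0/1 construction in which each non-root node pulls its parent's iterate in $\mathcal{G}\_R$, with $C = R^{\top}$:

```python
import numpy as np

def bary_tree_R(n, B):
    """Pull matrix R for a B-ary tree on nodes 0..n-1 rooted at node 0,
    assuming each non-root node copies (pulls) its parent's iterate."""
    R = np.zeros((n, n))
    R[0, 0] = 1.0                 # the root keeps its own iterate
    for i in range(1, n):
        R[i, (i - 1) // B] = 1.0  # parent of node i in a B-ary tree
    return R

def null_basis(A, tol=1e-10):
    _, s, vh = np.linalg.svd(A)
    return vh[int((s > tol).sum()):].T   # orthonormal basis of null(A)

def range_basis(A, tol=1e-10):
    u, s, _ = np.linalg.svd(A)
    return u[:, :int((s > tol).sum())]   # orthonormal basis of range(A)

n, B = 7, 2
R = bary_tree_R(n, B)
C = R.T                                  # G_R is the inverse of G_C
I = np.eye(n)

assert np.allclose(R.sum(axis=1), 1.0)   # R is row-stochastic
assert np.allclose(C.sum(axis=0), 1.0)   # C is column-stochastic

NR = null_basis(I - R)                   # fixed points of x -> Rx
assert NR.shape[1] == 1 and np.allclose(NR, NR[0, 0])  # consensus: span{1}

NC = null_basis(I - C)                   # fixed points of y -> Cy
assert np.allclose(np.abs(NC[:, 0]), np.eye(n)[:, 0])  # mass only at the root

# range(I-R) and null(I-C) intersect only at 0: stacking their bases gives a
# full-rank matrix, so (I-R)X = 0 and Y = 0 at any fixed point.
stacked = np.hstack([range_basis(I - R), NC])
assert np.linalg.matrix_rank(stacked) == stacked.shape[1]
```

The last assertion is exactly the trivial-intersection condition used in the derivation, checked on a binary tree with 7 nodes.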
---
Rebuttal Comment 1.1:
Comment: All my questions were addressed very well. I will adjust my score. | Summary: The paper introduces a method for distributed learning, for the common scenario that various "agents" each hold part of a data set locally, and aim to coordinate to find the minimizer of some criterion. Here, the criterion consists of the average of local cost functions, which are assumed to be smooth but non-convex, and each agent is assumed to able to compute a "gradient estimator". By communicating information concerning these gradient estimators, the agents together minimize global criterion (i.e. averaged local criteria).
The method consists of a specific network architecture/topology combined with an algorithm for what each agent computes and communicates throughout the network. Unbiased estimators of the local gradients are generally assumed to be available. The authors consider possibly heterogeneous data, but this seems to play no role beyond extending generality.
The analysis of the authors focuses mainly on the number of transient iterations required by the algorithm. Communication cost does not take into account the actual size of the gradient estimator, but concerns the number of neighbors that a message is to be passed on to by a node, for every node and at every iteration.
The authors provide a main theorem describing the convergence properties of the algorithm and simulations. Both are used for comparison with existing methods in the literature.
Strengths: * The paper provides a rigorous theoretical analysis, providing clear convergence guarantees
* The method described by the paper has clear advantages in comparison to various existing algorithms in the literature, in terms of the number of transient iterations and in terms of total communication cost.
* The method is practically relevant in distributed settings where one has control over the design of the network nodes.
* The numerical results are well-chosen, and provide convincing support for the theoretical guarantees in small samples / finite iterations, in comparison to many existing methods in the literature.
* The paper is well written (up to some punctuation mistakes), from the introduction through the appendix, and its analysis appears sound to me.
Weaknesses: My main grievance with the setup considered by the authors is the (in my opinion, very strong) assumption that there is no bias in the gradient estimator. Since the authors seem to target high-dimensional setups, this is often not the case, unless one has access to the original gradient. Furthermore, it is difficult to verify whether one has zero bias.
It would be a great improvement if the authors captured the importance of having (very) small bias in their theory. Is 0 bias strictly necessary? How much bias can one get away with here?
Similarly, the uniform bound on the variance of the gradient estimator seems rather strong. Going through the analysis provided by the authors, I also wonder whether uniformity is generally necessary here. Since this appears as a constant in the main theorem (i.e. in (3)), it might in fact significantly impact the convergence of the algorithm, and I think it would be wise for the authors to comment on this further.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The role of $n$ is somewhat surprising in (3), making many of the terms smaller as the data gets distributed across more servers. Do the authors have an interpretation for these terms / an intuition as to why this is? In principle, I would say that the more servers one considers, the more difficult the problem should get in principle. Is the appearance of $n$ in the dominator a consequence of total communication increasing as $n$ increases?
* The authors comment on the choice of B in Remark 2.3 in terms of the upper bound given in (3); ending with the sentence "..the communication cost and convergence rate can be balanced by considering a proper B". How should this be done in practice? Is there a practical guideline that the authors have here? The insight provided by (3) seems rather limited here, as it is only an upper bound.
* If one does not have full control when designing the network topology, what are the take-aways from the current design?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: * The assumptions on the gradient estimator (mean 0, uniformly bounded variance) are very strong.
* The interpretation of the quantities appearing could be somewhat more extensive.
* The attention spent on *why* the network design considered by the authors leads to such an improvement in terms of transient times is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments.
1. **Assumptions on the stochastic gradients.**
The unbiasedness of the stochastic gradients is a result of the sampling strategy.
Note that this work mainly focuses on the scenario where computing the full gradients is expensive due to large datasets. Specifically, in empirical risk minimization,
$$
f\_{i}(x)=\frac{1}{|\mathcal{D}\_{i}|}\sum_{\xi\in\mathcal{D}\_{i}}F_i(x;\xi),
$$
where $\mathcal{D}\_{i}$ denotes the dataset of node $i$.
Thus exactly computing $\nabla f_i(x)$ requires sampling all the data points in $\mathcal{D}\_{i}$, which is costly.
In practice, node $i$ may select a mini-batch of data points $\mathcal{B}\_{i}$ uniformly randomly from $\mathcal{D}\_{i}$ for estimating $\nabla f\_{i}(x)$ at every iteration. Given $x$, the stochastic gradient is given by
$$
\frac{1}{|\mathcal{B}\_{i}|}\sum_{\xi\in\mathcal{B}\_{i}}\nabla F_i(x;\xi),
$$
which is naturally an unbiased estimator of $\nabla f_i(x)$.
The unbiasedness condition is commonly assumed in related works such as [1-2] and the other references of this paper (see Table 1).
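As a self-contained illustration of this sampling argument (using a synthetic least-squares $f_i$ of our own choosing, not the paper's setup), averaging uniformly sampled mini-batch gradients recovers the full gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
D, p = 200, 5                        # |D_i| local data points, dimension p
A = rng.normal(size=(D, p))
b = rng.normal(size=D)
x = rng.normal(size=p)

def full_grad(x):
    # gradient of f_i(x) = (1/|D_i|) * sum_xi 0.5 * (a_xi^T x - b_xi)^2
    return A.T @ (A @ x - b) / D

def minibatch_grad(x, batch=16):
    # uniform mini-batch sampling gives an unbiased estimator of full_grad
    idx = rng.choice(D, size=batch, replace=False)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch

# empirical mean over many mini-batches approaches the true gradient
avg = np.mean([minibatch_grad(x) for _ in range(20000)], axis=0)
assert np.allclose(avg, full_grad(x), atol=0.1)
```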
The assumption of bounded variance can be relaxed. For example, we can consider the relaxed growth condition:
$$
\mathbb{E}\_{\xi\_{i} \sim \mathcal{D}\_{i}} (\\|{g\_{i}(x,\xi\_{i}) - \nabla f\_{i}(x)}\\|^2 |x) \le \sigma^2 + \eta \\|\nabla f\_{i}(x)\\|^2
$$
for some $\eta>0$, or more generally,
$$
\mathbb{E}\_{\xi_i \sim \mathcal{D}\_i} (\\|{g_i(x,\xi_i) - \nabla f_i(x)}\\|^2 |x) \le \sigma^2 + C(f_i(x)-f_i^*)
$$
for some $C>0$, where $f_i^*=\inf_x f_i(x)$. The latter is currently the most general assumption in the distributed optimization literature (see [3]). Under this condition, it can be derived that BTPP maintains a similar convergence result with some additional constants. For instance, the constant $\sigma^2$ in (3) becomes $\sigma^2+C(f^*-\sum_{i=1}^n f_i^*/n)$ with $f^*=\inf_x f(x)$.
In this paper, we let $\eta=0$ for simplicity to focus on the transient time analysis of BTPP. The bounded variance condition was also commonly assumed in related works (see the works in Table 1).
[1] Ying, Bicheng, et al. "Exponential graph is provably efficient for decentralized deep training." Advances in Neural Information Processing Systems 34 (2021): 13975-13987.
[2] Vogels, Thijs, et al. "Relaysum for decentralized deep learning on heterogeneous data." Advances in Neural Information Processing Systems 34 (2021): 28004-28015.
[3] Huang, Kun, Xiao Li, and Shi Pu. "Distributed stochastic optimization under a general variance condition." IEEE Transactions on Automatic Control (2024).
2. **Role of $n$.**
The appearance of $n$ in the denominator in (3) implies the "linear speedup" property of the BTPP method and other related algorithms (see Table 1), that is, increasing the number of workers/machines accelerates the algorithmic convergence with respect to the number of iterations. Intuitively, as $n$ increases, more stochastic gradients are acquired at every iteration, which helps reduce the variances through implicit averaging (network mixing). Particularly for BTPP, the decision variables $x_i^{(t)}$ for different agents are similar (following $x_1^{(t)}$), and $y_1^{(t)}$ effectively tracks the summation of the stochastic gradients across the network. By properly choosing the stepsize that is decreasing in $n$, node $1$ is approximately performing stochastic gradient descent for minimizing $f(x)$ using the average of $n$ stochastic gradients per iteration. Thus the variances of the stochastic gradients have been divided by $n$ because of averaging.
From another perspective, if we measure the performance of BTPP and related distributed stochastic gradient algorithms against the number of stochastic gradients (rather than the iteration number), the dominant term $\mathcal{O}(1/\sqrt{nT})$ is the same as for a centralized algorithm using one stochastic gradient per iteration (noting that $nT$ represents the total number of stochastic gradients for distributed algorithms).
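To make the implicit-averaging intuition concrete, the standard variance calculation (a textbook fact added here for illustration, not quoted from the paper) reads: if $g_1,\dots,g_n$ are independent unbiased estimators of $\nabla f_1(x),\dots,\nabla f_n(x)$, each with variance at most $\sigma^2$, then since $\nabla f(x) = \frac{1}{n}\sum_{i=1}^n \nabla f_i(x)$ and the cross terms vanish in expectation,
$$
\mathbb{E}\left\\| \frac{1}{n}\sum_{i=1}^n g_i - \nabla f(x) \right\\|^2 = \frac{1}{n^2}\sum_{i=1}^n \mathbb{E}\left\\| g_i - \nabla f_i(x) \right\\|^2 \le \frac{\sigma^2}{n}.
$$
This $1/n$ factor is the mechanism behind the dominant $\mathcal{O}(1/\sqrt{nT})$ term in (3).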
3. **Choice of $B$.**
From (3) and Figure 2, increasing $B$ can accelerate the convergence of BTPP with respect to the number of iterations, especially when $B$ is small. At the same time, larger $B$ increases the per-iteration communication cost, and thus there is trade-off.
In practice, the actual running time can be roughly estimated as follows:
Running time = (iteration number)×(per-iteration running time)
Given a required accuracy, the running time is a nonlinear function of $B$, and the optimal value can be determined by experiments. For example, we can start with a small $B$, and increase $B$ until the best performance is achieved. Noting that only when $B$ is small, increasing $B$ has significant impact on the convergence rate (w.r.t the number of iterations), the optimal $B$ is usually small. In the additional experiments (see the attachment in "global" response), it can be seen that $B=2$ achieves the best performance in real-time.
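The selection procedure described above can be sketched as a small sweep. The cost model and all constants below are hypothetical placeholders of our own (not measurements or formulas from the paper); they only illustrate "running time = iterations x per-iteration time" as a function of $B$:

```python
import math

def estimated_runtime(B, iters_for_accuracy, t_compute, t_msg):
    """Total time = (iterations to reach a target accuracy) x (per-iteration
    time), where each node talks to at most B+1 neighbors per iteration."""
    per_iter = t_compute + t_msg * (B + 1)
    return iters_for_accuracy(B) * per_iter

n = 1000
# Toy iteration-count model: smaller B means a deeper tree and more
# iterations, with diminishing returns as B grows (mirroring Figure 2).
iters = lambda B: 5000 * (1 + math.ceil(math.log(n, B)) / 10)

best_B = min(range(2, 17),
             key=lambda B: estimated_runtime(B, iters, t_compute=1.0, t_msg=0.3))
print("best B under this toy model:", best_B)
```

Under such a model the optimum is typically a small $B$, consistent with the $B=2$ finding in the additional experiments.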
4. **Extension to fixed networks.**
Given a fixed directed network $\mathcal{G}$ with arbitrary topology, following the idea of BTPP, we can locate a spanning tree $\mathcal{G}\_R$ and a reversed spanning tree $\mathcal{G}\_C$ which are subgraphs of $\mathcal{G}$, such that $\mathcal{G}\_R$ and $\mathcal{G}\_{C^{\top}}$ share a common root. Then the mixing matrices $R$ and $C$ are constructed accordingly with $0$'s and $1$'s, and an algorithm similar to BTPP can be designed following (2). We anticipate that, although such an algorithm would not enjoy $\tilde{O}(n)$ transient time due to potentially longer network diameter in the order of $\mathcal{O}(n)$, linear speedup can be expected for large $T$, and the transient time would be shorter compared to existing distributed stochastic gradient methods over a general fixed network.
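The tree-location step can be sketched with a plain BFS (our own hypothetical construction; the rebuttal does not prescribe one). A spanning tree $\mathcal{G}\_R$ rooted at a chosen node is obtained by BFS on $\mathcal{G}$, and the reversed spanning tree $\mathcal{G}\_C$ by running the same routine on $\mathcal{G}$ with all edges reversed:

```python
from collections import deque

def spanning_tree_from_root(adj, root):
    """BFS spanning tree of a directed graph given as {node: [out-neighbors]}.
    Returns parent[v]: v's parent along the tree edge used to reach it."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

# toy digraph in which every node is reachable from the root, node 1
adj = {1: [2, 3], 2: [4], 3: [4, 5], 4: [], 5: [1]}
print(spanning_tree_from_root(adj, 1))  # {1: None, 2: 1, 3: 1, 4: 2, 5: 3}
```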
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed rebuttal. They have answered my questions and I wish to maintain my score. | Summary: This paper introduces B-ary Tree Push-Pull (BTPP), an extension of the push-pull framework for distributed learning across a network of agents. The algorithm employs two B-ary tree communication graphs - one for distributing model parameters and one for aggregating gradients. This approach allows each agent to communicate with at most B+1 neighbors per iteration. The main contribution is an analysis demonstrating that BTPP achieves linear speedup with O(n) transient iterations for smooth nonconvex objectives, improving upon previous results for related methods while maintaining lower per-iteration communication.
Strengths: - Novel use of B-ary tree topologies within the push-pull framework
- Improved theoretical results showing O(n) transient time
Weaknesses: - The experimental evaluation could be more comprehensive, particularly given that the analysis extends to smooth non-convex problems. Empirical evaluations on deep learning tasks or other practical non-convex optimization problems would strengthen the paper's claims and demonstrate real-world applicability.
- While the improvement in transient time is theoretically significant, the paper could benefit from a more thorough discussion of how this translates to practical advantages in real-world distributed learning scenarios, especially for large-scale or long-running optimization tasks.
- The approach is primarily applicable to controlled network environments like high-performance data-center clusters, limiting its relevance to scenarios with fixed or constrained network topologies (e.g., wireless sensor networks, internet of vehicles).
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does BTPP compare to other extensions of push-pull in the literature?
- Can this B-ary tree approach be applied to other distributed optimization algorithms? What are the main limitations?
- How sensitive is the performance to the choice of B? Is there an optimal B value?
- Can the analysis be extended to strongly convex or convex objective functions?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors acknowledge a fundamental limitation common in this area of research: the assumption of a controlled network environment typical of high-performance data-center clusters. This assumption, where the network topology can be fully manipulated, is inherent to the proposed BTPP algorithm and many related works in the literature. It is recognized that this limits applicability to scenarios with fixed or constrained network topologies, such as wireless sensor networks or IoT applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We provide responses to your concerns and questions below.
1. **Enhanced experiments.** For the problem of training a CNN on the MNIST dataset, we have further compared the real-time performance of BTPP with other representative methods (see the attachment). The experiments are conducted on a server equipped with eight Nvidia RTX 3090 GPUs and Intel Xeon Gold 4310 CPU, where the communication between GPUs follows the topology requirement of each algorithm. We measure the running time including GPU computation and communication for 13,000 iterations. The experimental settings are consistent with those described in Section 4.2 of the original paper. From Figure 1, BTPP outperforms the other algorithms concerning the running time. Additionally, we evaluate BTPP with various branch sizes $B$, concluding that for relatively small values of $n$, a branch size of $B=2$ is most effective.
Furthermore, we consider training VGG13 on the CIFAR10 dataset, with $n=8$ and a batch size of 16. The learning rate and topology configurations are consistent with those described in Section 4.2. Figures 2 and 3 illustrate that BTPP beats competing algorithms in terms of the convergence rate (against iteration number) and running time. Moreover, a branch size of $B=2$ is optimal.
2. **Comparison to other Push-Pull extensions.** There are several major differences between BTPP and the other extensions of the Push-Pull method. First, all the other extensions consider communication graphs that satisfy certain assumptions, rather than any specific network topologies. Second, BTPP employs network weights that are only 0's and 1's. This does not satisfy the assumptions in previous works that require non-zero weights for the diagonal elements of the mixing matrices. Finally, most extensions of the Push-Pull method assume deterministic gradients. Note that the first two differences contribute to the huge advantage of BTPP compared to the other extensions regarding the convergence guarantees. In particular, linear speedup has not been demonstrated in previous works that consider stochastic gradients; see e.g., [1-2].
[1] Xin, Ran, et al. ``Distributed stochastic optimization with gradient tracking over strongly-connected networks." 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019.
[2] Nguyen, Duong Thuy Anh, Duong Tung Nguyen, and Angelia Nedic. ``Distributed Stochastic Optimization with Gradient Tracking over Time-Varying Directed Networks." 2023 57th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2023.
3. **B-ary tree approach.** To our knowledge, only Push-Pull type methods allow the use of two spanning trees and the corresponding mixing matrices in the algorithm design. Other popular distributed optimization algorithms, such as DSGD, EXTRA and ADMM type methods, require a strongly connected graph and the corresponding mixing matrix for implementation. Thus they are not compatible with the B-ary tree approach.
BTPP has the potential to incorporate other gradient-based methods such as SGD with momentum (SGDM), random-reshuffling-type methods, and proximal stochastic gradient methods. For example, BTPP with SGDM works as follows:
$$
\begin{aligned}
x_i^{(t+1)} & = \sum\_{j \in \mathcal{N}\_{\mathcal{R},i}^{in}} ({x\_j^{(t)} - \gamma y\_{j}^{(t)}}) \\\\
z_i^{(t+1)} & = \beta z\_i^{(t)} + (1-\beta) g\_i(x^{(t+1)}_i;\xi\_i^{(t+1)})\\\\
y\_i^{(t+1)} & = \sum\_{j\in\mathcal{N}\_{\mathcal{C},i}^{in}} y\_j^{(t)} + z\_i^{(t+1)} - z\_i^{(t)}
\end{aligned}
$$
with $\beta\in(0,1)$. We expect that such extensions of BTPP can also enjoy preferable performance.
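The recursion above can be exercised end-to-end in a toy setting (quadratic local objectives, an illustrative complete B-ary tree for $R$ and $C$, and made-up step sizes; this is a sanity-check sketch, not our implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, B = 7, 3, 2
gamma, beta = 0.005, 0.9

# Illustrative 0/1 mixing matrices from a complete B-ary tree rooted at node 0
R = np.zeros((n, n)); C = np.zeros((n, n))
R[0, 0] = C[0, 0] = 1.0
for i in range(1, n):
    parent = (i - 1) // B
    R[i, parent] = 1.0   # pull x from the parent
    C[parent, i] = 1.0   # push y to the parent

a = rng.normal(size=(n, p))  # local objectives f_i(x) = 0.5*||x - a_i||^2

def grad(X):
    return X - a + 0.01 * rng.normal(size=X.shape)  # noisy local gradients

X = np.zeros((n, p))
Z = grad(X)       # momentum buffers z_i
Y = Z.copy()      # gradient trackers y_i

for _ in range(600):
    X = R @ (X - gamma * Y)                  # x-update (pull)
    Z_new = beta * Z + (1 - beta) * grad(X)  # SGDM buffer update
    Y = C @ Y + Z_new - Z                    # y-update (push) with tracking
    Z = Z_new

# Column-stochastic C preserves the tracking invariant sum_i y_i = sum_i z_i
assert np.allclose(Y.sum(axis=0), Z.sum(axis=0))
# The root's iterate approaches the minimizer of the average objective
assert np.linalg.norm(X[0] - a.mean(axis=0)) < 0.5
```

The tracking invariant holds exactly because $C$ is column-stochastic and $Y$ is initialized to $Z$; it is the mechanism by which the root eventually aggregates all (momentum-averaged) local gradients.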
4. **Choice of $B$.** From Figure 2, increasing $B$ can accelerate the convergence of BTPP, especially when $B$ is small. As $B$ becomes larger, the acceleration effect degrades. This can be explained by the bound given in Theorem 2.1, which depends on $d$, the diameter of the graphs. For large $B$, $d$ is close to $1$ and thus increasing $B$ only has marginal effect.
From Figure 2 and Theorem 2.1, the optimal $B$ is equal to $n$ considering the number of required iterations.
If we instead consider the real running time, the optimal $B$ may not be equal to $n$. The running time can be computed as:
Running time = (iteration number)$\times$(per-iteration running time)
Given a required accuracy, the running time is a nonlinear function of $B$, and the optimal $B$ can be determined through experiments. For instance, we can start with a small $B$ and increase it until the best performance is achieved. Since increasing $B$ has a significant impact only when $B$ is small, the optimal $B$ is usually small. In the additional experiments (see the attachment in the "global" response), we find that $B=2$ achieves the best real-time performance.
5. **Analysis under (strongly) convex objective functions.** The analysis of BTPP can be extended to $\mu$-strongly convex function $f$, that is, for any $x,y\in \mathbb{R}^p$,
$$
f(y) \ge f(x) + \langle\nabla f(x), y-x \rangle + \frac{\mu}{2}\\| y-x \\|^2.
$$
We are able to obtain the following convergence result (a sketch of analysis is provided in the "global" response):
$$
\mathbb{E}\left\\|x_1^{(T)} - x^* \right\\|^2 = \tilde{\mathcal{O}} \left( \frac{1}{nT} + \frac{1}{nT^2} + \exp\left( - T\right) \right),
$$
where $\tilde{\mathcal{O}}$ hides the constants and polylogarithmic factors.
The transient time is thus $\tilde{O}(1)$, which also outperforms the state-of-the-art results. | Rebuttal 1:
Rebuttal: We provide some general feedback corresponding to the common concerns/questions raised by the reviewers. These points will be incorporated in the revision.
1. **Enhanced experiments.** For the problem of training a CNN on the MNIST dataset, we have further compared the real-time performance of BTPP with other representative methods (see the attachment). The experiments are conducted on a server equipped with eight Nvidia RTX 3090 GPUs and Intel Xeon Gold 4310 CPU, where the communication between GPUs follows the topology requirement of each algorithm. We measure the running time including GPU computation and communication for 13,000 iterations. The experimental settings are consistent with those described in Section 4.2 of the original paper. From Figure 1, BTPP outperforms the other algorithms concerning the running time. Additionally, we evaluate BTPP with various branch sizes $B$, concluding that for relatively small values of $n$, a branch size of $B=2$ is most effective.
Furthermore, we consider training VGG13 on the CIFAR10 dataset, with $n=8$ and a batch size of 16. The learning rate and topology configurations are consistent with those described in Section 4.2. Figures 2 and 3 illustrate that BTPP beats competing algorithms in terms of the convergence rate (against iteration number) and running time. Moreover, a branch size of $B=2$ is optimal.
2. **Extension to fixed networks.** Given a fixed directed network $\mathcal{G}$ with arbitrary topology, following the idea of BTPP, we can locate a spanning tree $\mathcal{G}\_R$ and a reversed spanning tree $\mathcal{G}_C$ which are subgraphs of $\mathcal{G}$, such that $\mathcal{G}\_R$ and $\mathcal{G}\_{C^{\top}}$ share a common root. Then the mixing matrices $R$ and $C$ are constructed accordingly with $0$'s and $1$'s, and an algorithm similar to BTPP can be designed following (2). We anticipate that, although such an algorithm would not enjoy $\tilde{O}(n)$ transient time due to potentially longer network diameter in the order of $\mathcal{O}(n)$, linear speedup can be expected for large $T$, and the transient time would be shorter compared to existing distributed stochastic gradient methods over a general fixed network.
3. **Analysis under (strongly) convex objective functions.** The analysis of BTPP can be extended to $\mu$-strongly convex function $f$. The transient time is $\tilde{O}(1)$, which also outperforms the state-of-the-art results. We provide a sketch of analysis below.
Let $x^* = \arg\min_x f(x)$. We start with analyzing the behavior of $\\|x_1^{(t)} - x^*\\|^2$ after obtaining **Lemma 3.4** (boldface represents result from the original paper). It holds that
$$
\left\\|x_1^{(t+1)} - x^* \right\\|^2 = \left\\|x_1^{(t)} - x^* \right\\|^2 + 2 \left\langle x_1^{(t)} - x^*, x_1^{(t+1)} - x_1^{(t)} \right\rangle + \left\\|x_1^{(t+1)} - x_1^{(t)} \right\\|^2.
$$
To deal with the critical inner product, similar to the decomposition in **Equation (14)**, we have, by replacing $\nabla f(x_1^{(t)})$ with $x_1^{(t)} - x^*$ in **Equation (15)**, and invoking **Lemma 3.4** as we have done in **Lemma 3.5**,
$$
\begin{aligned}
&-\mathbb{E}\left\langle x\_1^{(t)} - x^*,\gamma \left(\frac{\mathbf{u}^\top}{n} - \mathbf{1}^\top \right)\mathbf{\Pi}\_{\mathbf{v}}\mathbf{Y}^{(t)} \right\rangle \le \frac{n\gamma \mu}{4} \mathbb{E} \left\\|x\_1^{(t)} - x^* \right\\|^2 \\\\
&\quad + \frac{3dL^2}{\mu} \gamma \sum\_{m=1}^{ \min\\{t,d\\} } \mathbb{E} \left( \left\\|\mathbf{\Pi}\_{\mathbf{u}} \mathbf{X}^{t-m+1} \right\\|^2 + \left\\|\mathbf{\Pi}\_{\mathbf{u}} \mathbf{X}^{t-m} \right\\|^2 + \left\\|\bar{\mathbf{X}}^{t-m+1} - \bar{\mathbf{X}}^{t-m} \right\\|^2\right).
\end{aligned}
$$
In light of the strong convexity and $L$-smoothness of $f$,
for $\gamma \le \frac{1}{10 nd^3\kappa L}$ ($\kappa:=L/\mu$), we have
$$
\begin{aligned}
& \mathbb{E}\left\\|x_1^{(t+1)} - x^* \right\\|^2 \le \left(1 - \frac{n\gamma \mu}{4} \right)\mathbb{E}\left\\|x_1^{(t)} - x^* \right\\|^2 -\frac{1}{L} \mathbb{E}\left\\|\nabla f(x_1^{(t)} ) \right\\|^2 + \frac{1}{n}\mathbb{E}\left\\|\bar{\mathbf{X}}^{(t+1)} - \bar{\mathbf{X}}^{(t)} \right\\|^2_F \\\\
&\quad + 3 d\kappa L \gamma \sum_{m=1}^{ \min\\{t,d\\} } \mathbb{E}\left\\|\bar{\mathbf{X}}^{(t-m+1)} - \bar{\mathbf{X}}^{(t-m)} \right\\|^2_F + 10d\kappa L \gamma \sum_{m=0}^{ \min\\{t,d\\}+1 } \mathbb{E} \left\\|\mathbf{\Pi}_{\mathbf{u}} \mathbf{X}^{t-m} \right\\|^2_F.
\end{aligned}
$$
Unwinding the above recursion, and following several standard steps (implementing **Lemma 3.2** and **Lemma 3.3**, and picking $\gamma := \min\\{\frac{1}{10nd^3\kappa L}, \frac{1}{n\mu(T+1)}\\}$), we get
$$
\mathbb{E}\left\\|x_1^{(T)} - x^* \right\\|^2 = \tilde{\mathcal{O}} \left( \frac{1}{nT} + \frac{1}{nT^2} + \exp\left( - T\right) \right),
$$
where $\tilde{\mathcal{O}}$ hides the constants and polylogarithmic factors.
4. **Choice of $B$.** From (3) and Figure 2, increasing $B$ can accelerate the convergence of BTPP with respect to the number of iterations, especially when $B$ is small. At the same time, larger $B$ increases the per-iteration communication cost, and thus there is a trade-off. In practice, the actual running time can be roughly estimated as follows:
Running time = (iteration number)$\times$(per-iteration running time)
Given a required accuracy, the running time is a nonlinear function of $B$, and the optimal value can be determined by experiments. For example, we can start with a small $B$ and increase it until the best performance is achieved. Since increasing $B$ has a significant impact on the convergence rate (w.r.t. the number of iterations) only when $B$ is small, the optimal $B$ is usually small. In the additional experiments (see the attachment), it can be seen that $B=2$ achieves the best real-time performance.
Pdf: /pdf/cc516cc4f083feb4331dc5414a715bca1880a652.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation | Accept (poster) | Summary: This paper examines the prompt learning method used in fair text-to-image (T2I) generation, highlighting its impact on image quality. The authors identify that aligning prompt embeddings with reference image embeddings introduces noise due to unrelated concepts, leading to degraded image quality. They conduct an in-depth analysis of the T2I model's denoising subnetwork, introducing novel prompt switching analyses (I2H and H2I) and new quantitative metrics for cross-attention map characterization. Their findings reveal abnormalities in the early denoising steps, which affect global structure synthesis.
To address these issues, the paper proposes two solutions: Prompt Queuing, which applies base prompts initially and ITI-GEN prompts later in the denoising process, and Attention Amplification, which balances quality and fairness. Extensive experiments demonstrate that these methods improve image quality while maintaining competitive fairness and diversity. The paper's contributions include identifying issues in the current prompt learning approach, providing a detailed analysis of the denoising process, and proposing effective solutions to enhance T2I generation quality.
Strengths: 1. The paper embarks on an in-depth examination of the denoising subnetwork within the T2I model. The goal is to understand how learned prompts influence image generation throughout the denoising steps. This involves dissecting the entire process from noisy input to the final image, which allows for pinpointing where and how the learned prompts might be causing degradation in image quality. This deep dive includes analyzing how noise is progressively reduced and how the model’s intermediate representations evolve at each step. By doing so, the authors can track the influence of different prompt tokens on the generation process, providing a granular view of the internal dynamics of the model.
2. The cross-attention maps are visual tools that show how attention is distributed across different parts of the image in response to various tokens in the prompt. By mapping out these attention distributions, the paper identifies specific patterns and irregularities. The analysis reveals that certain tokens, which are not directly related to the target sensitive attributes (tSAs), exhibit abnormally high levels of activity. For instance, non-tSA tokens like "of" and "a" show higher attention scores, indicating that the model's focus is scattered and not properly aligned with the relevant attributes. This misalignment disrupts the formation of coherent global structures in the early stages of image generation.
3. The paper highlights that abnormalities in cross-attention maps are particularly pronounced during the initial steps of the denoising process. At these stages, the model is supposed to establish the foundational global structure of the image. However, the distorted prompts from ITI-GEN interfere with this process, leading to poorly structured outputs. For example, the analysis shows that when using ITI-GEN prompts, the model often fails to correctly focus on key image regions, resulting in artifacts and unintended features. This is contrasted with the behavior of hard prompts, which show more targeted and consistent attention patterns that help maintain image quality.
4. This paper includes extensive experimental results demonstrating the superiority of FairQueue over ITI-GEN in terms of fairness, image quality, and semantic preservation. These experiments are conducted on multiple datasets and with various tSAs.
Weaknesses: 1. The proposed innovations, Prompt Queuing and Attention Amplification, are relatively incremental and might be seen as a combination of existing techniques rather than a breakthrough innovation. The combination of these two mechanisms does improve performance but lacks a strong theoretical underpinning to justify why this specific combination is superior.
2. The proposed method, FairQueue, while innovative, could benefit from a more detailed explanation of how it addresses the issues identified in the paper. In the current form, the explanation of Prompt Queuing lacks depth regarding why the base prompt T is more effective in the early denoising steps. The authors should delve deeper into the reasons behind this, possibly by providing theoretical analysis and specific experimental results that demonstrate how the base prompt gradually establishes the fundamental elements of the image during the denoising process.
3. Additionally, the transition to ITI-GEN prompts in the later stages should be better explained, particularly in terms of how it enhances the detailed expression of tSAs. Comparative illustrations of image generation results at different stages using different prompts would effectively show the benefits of this transition.
4. The mechanism of Attention Amplification needs a clearer explanation. The authors should detail why scaling the cross-attention maps enhances tSA expression, possibly through mathematical derivations or experimental data showing the impact of different scaling factors on image quality and tSA expression.
5. Although the experimental results are comprehensive, the paper lacks detailed descriptions of the experimental settings and specific parameters used, which could hinder reproducibility.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors provide a more detailed explanation or evidence on why the base prompt T is more effective in forming global structures during the early denoising steps compared to ITI-GEN prompts. Including theoretical justifications or experimental comparisons that illustrate how the base prompt helps establish the foundational elements of the image more effectively would be beneficial. Specific examples or visualizations from your experiments could clarify this point.
2. How does switching to ITI-GEN prompts in the later stages of denoising improve the detailed expression of tSAs without compromising the overall image quality?
3. The author should elaborate on the mechanism by which attention amplification enhances tSA expression. How is the scaling factor chosen and validated?
4. The paper would be strengthened by more detailed experiments and results analysis, especially comparisons between using only the base prompt, only ITI-GEN prompts, and the FairQueue method. A thorough analysis of these results would help demonstrate how FairQueue effectively addresses the issues identified with the original method. By incorporating these detailed explanations and analyses, the paper would more convincingly show how FairQueue resolves the problems it aims to address, enhancing its credibility and scientific rigor.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. Although the authors have identified and analyzed the issues with the current methods, the explanation of how their proposed method, FairQueue, effectively addresses these issues lacks detail. Regarding the effectiveness of Prompt Queuing, the authors should provide a deeper explanation of why the base prompt is more effective in the early denoising steps. This could be supported by theoretical analysis and specific experimental results, such as demonstrating how the base prompt gradually establishes the fundamental elements of the image during the denoising process. Further explanation of how using the base prompt in the early steps helps form a robust global structure, allowing the application of ITI-GEN prompts in later steps to better add details without compromising the overall structure, would be beneficial.
2. For the mechanism of Attention Amplification, the authors need to elaborate on why scaling the cross-attention maps effectively enhances tSA expression. This section could include mathematical derivations or experimental data to show how different scaling factors impact image quality and tSA expression. Providing specific examples of how attention amplification compensates for the reduced tSA expression due to prompt queuing in various scenarios would strengthen the argument.
3. The authors should include more experimental results, particularly comparisons between using only the base prompt, only ITI-GEN prompts, and the FairQueue method. Detailed analysis of these results would help demonstrate how FairQueue addresses the issues identified with the original method, such as image quality degradation and insufficient tSA expression. Providing specific quantitative metrics and visual comparisons can more intuitively show the advantages of the proposed method.
4. The paper lacks a thorough analysis of the computational complexity and the necessary resources for implementing FairQueue. Understanding the computational demands is essential for assessing how feasible and scalable the method is in real-world applications. Without this information, it is difficult to determine whether FairQueue can be efficiently deployed in various environments, particularly those with limited computational resources.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the detailed feedback. Due to lack of space, we consolidate the comments from Weaknesses (W), Questions (Q), and Limitations (L) into 7 responses.
$ $
>R4Q1: “proposed innovations, Prompt Queuing and Attention Amplification, are relatively incremental” (W1)
R4A1: We respectfully remark that **our proposed solution for quality degradation in sophisticated fair generative models is non-trivial**, requiring our deep analysis of attention maps in individual denoising steps via our novel H2I/I2H analysis:
- I2H reveals **learned tokens (via ITI-GEN) affect early denoising steps, degrading global structure**
- H2I reveals **learned tokens work well in enhancing tSA expression in the later denoising steps, if global structure synthesized properly**
*Only from these observations* can we propose FairQueue, which is both simple and effective in mitigating quality degradation.
$ $
>R4Q2: “ why the base prompt $T$ is more effective in the early denoising steps” (W1,W2,Q1,Q4,L1,L3)
R4A2: We apologize if we insufficiently emphasized that the effectiveness of the base prompt $T$ is grounded in our I2H and H2I analysis. Recall that I2H reveals quality degradation is related to using learned tokens (via ITI-GEN) in the early denoising stage, and H2I reveals that using HP (no learned tokens) instead can prevent it. **Using $T$ is in fact grounded in this analysis: both $T$ and HP are free of learned tokens.** This is further seen in Supp B.6, where $T$ and HP are close in the embedding space while the learned prompts are far. (Recall that HP can only be used for a few tSAs with minimal linguistic ambiguity; thus using $T$ is needed as a solution for many tSAs.)
In addition, per Reviewer’s comment, we provide visualizations of $T$’s effectiveness in generating the global structure in early denoising steps. Specifically, we compare the cross-attention maps of FairQueue with ITI-Gen during sample generation, together with quantitative analysis. Results in col 2 vs 3 of Fig. A (rebuttal pdf) illustrate $T$’s effectiveness in synthesizing the global structure in stage 1, and non-abnormal attention (in Fig B), resulting in effective global synthesis than ITI-Gen and better sample quality.
$ $
>R4Q3: “How does switching to ITI-GEN prompts in the later stages of denoising improve the detailed expression of tSAs without compromising the overall image quality” (W1, W3, Q2, Q4, L1, L3)
R4A3: As our I2H/H2I analysis suggests, given good global structures synthesized in stage 1, ITI-Gen tokens enforce the tSA in stage 2 by having good *localized* attention that attends well to *tSA-specific regions*, resulting in the gradual addition of details that enhance tSA expression.
Also, per Reviewer’s comment, we provide visualization for FairQueue vs $T$, where they are the same in stage 1 but are different in stage 2 (only FairQueue contains learned tSA tokens).
Results in col 1 vs 2 of Fig. A (rebuttal pdf) show that with FairQueue, the learned tokens $S_i$ emphasize attending to tSA-related regions (e.g., eyes and mouth for Smiling, or lower half of the face and cheeks for High Cheekbones), to output samples with the tSA features. Furthermore, as the attention is generally localized to the tSA-specific regions, the sample’s quality is well preserved.
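The two-stage schedule itself is simple; below is a toy surrogate (made-up embeddings and a stand-in one-line "denoiser", purely to illustrate the Prompt Queuing switching logic rather than the actual diffusion model):

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_STEPS, TRANSITION = 50, 15   # stage 1: base prompt T; stage 2: ITI-GEN [T;S]

base_emb = rng.normal(size=8)                   # stand-in embedding for T
iti_emb = base_emb + 0.5 * rng.normal(size=8)   # stand-in embedding for [T;S]

def denoise_step(x, emb):
    # Toy surrogate: pull the latent toward the conditioning embedding
    return x + 0.1 * (emb - x)

x = rng.normal(size=8)  # initial noise
for t in range(NUM_STEPS):
    emb = base_emb if t < TRANSITION else iti_emb  # Prompt Queuing switch
    x = denoise_step(x, emb)

# Stage 2 dominates the final details: x ends nearer the [T;S] embedding
assert np.linalg.norm(x - iti_emb) < np.linalg.norm(x - base_emb)
```

In the real pipeline the transition point is a hyperparameter (studied in Supp A.2); the toy value 15 above is arbitrary.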
$ $
>R4Q4: “Elaborate on the mechanism by which attention amplification enhances tSA expression. How is the scaling factor chosen and validated” (W1, W4, Q3, L2)
R4A4: As discussed in R4A3, learned ITI-Gen tokens enforce tSA expressions by attending to tSA-specific regions. Attention Amplification then increases this tSA expression by amplifying the attendance at tSA-specific regions.
Then, an ablation study in Supp A.2 determined the optimal scaling factor $c$. Our results suggest that $c=10$ delivers Pareto optimal performance for fairness and quality.
Additionally, per Reviewer’s comment, we provide some qualitative results (Fig C. rebuttal pdf) to support this analysis. Our results show that at $c=0$, even with prompt queuing, samples may lack tSA expression (Smiling). Increasing $c$ helps to enhance tSA expression.
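A small numerical sketch of this mechanism (our own illustrative formulation: multiply the attention weights of the learned tSA token columns by $c$ and renormalize per query position; the exact operator used in the method may differ):

```python
import numpy as np

def amplify(attn, tsa_cols, c):
    """attn: (pixels, tokens) cross-attention maps whose rows sum to 1."""
    out = attn.copy()
    out[:, tsa_cols] *= c                        # boost attention to tSA tokens
    return out / out.sum(axis=1, keepdims=True)  # renormalize per pixel

rng = np.random.default_rng(0)
attn = rng.random((4, 6))
attn /= attn.sum(axis=1, keepdims=True)          # a valid attention map
boosted = amplify(attn, tsa_cols=[4, 5], c=10.0)

assert np.allclose(boosted.sum(axis=1), 1.0)     # still a valid attention map
# Each pixel now attends more strongly to the tSA tokens
assert (boosted[:, [4, 5]].sum(axis=1) > attn[:, [4, 5]].sum(axis=1)).all()
```

For any $c>1$, renormalization strictly increases the attention share of the tSA columns wherever they had nonzero mass, which is the intended "amplification" effect.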
$ $
>R4Q5: Justify why this specific combination (Prompt Queuing and Attention Amplification) is superior. (W1)
R4A5: To verify the contribution of both Prompt Queuing and Attention Amplification, our ablation study in Supp A.2 varied the amplification scaling factor $c$ and the Prompt Queuing transition point.
Our analysis reveals that both components contribute to FairQueue's superior performance and **removing any one results in worsened performance**, i.e., when $c=0$, FD increases (less fair), and when the transition point=0, TA decreases (poorer quality).
$ $
>R4Q6: “The paper lacks detailed descriptions of the experimental settings and specific parameters used …” (W5)
R4A6: We respectfully clarify that all experimental reproducibility details are already available in Supp:
- **Model + hyperparameter** in Supp B.3
- **Code + reproduction files** in the anon. link (line 488)
- **Evaluation metrics** in Supp B.5
Furthermore, all reproducibility resources are in the Supp.
$ $
>R4Q7: “The paper lacks a thorough analysis of computational complexity.” (L4)
R4A7: We clarify that we have conducted a thorough analysis to verify that FairQueue contributes **negligible complexity**.
For inference: the inference time of generation is ~45s regardless of whether the base prompt, ITI-Gen, or FairQueue is used. This is because Prompt Queuing adds negligible computation to replace $T$ with $P$, and Attention Amplification only introduces a multiplication operation. General computation resource info is provided in Supp B.4, and we will add more details to clarify this.
For prompt learning: prompt learning (ITI-Gen/FairQueue) is generally light-weight due to:
- T2I generator is frozen and only three tSA tokens are trained.
- T2I generator is not in the training loop
On average, learning tokens for a tSA takes ~5min with one RTX 3090.
---
Rebuttal Comment 1.1:
Title: Response for authors
Comment: Thank you for the rebuttal. I still have one concern. The authors conducted ablation studies to validate the contribution of each component, which is well-handled. However, the explanation of why the specific combination of Prompt Queuing and Attention Amplification is superior to other possible combinations could be more detailed. A deeper analysis of why this particular combination works best would strengthen your argument.
---
Rebuttal 2:
Title: Clarifying FairQueue’s as the Best Performing Combination
Comment: >R4Q8: Explanation of why the specific combination of Prompt Queuing and Attention Amplification is superior to other possible combinations could be more detailed. Deeper analysis of why this particular combination works best would strengthen your argument
We thank Reviewer 9buQ for insightful comments. Here we provide analysis of all possible combinations of Prompt Queuing (PQ) and Attention Amplification (AA). We note that all these analyses are already in the paper to derive the best combination, but we fully agree with Reviewer that it is better to have them in one place, to clearly support why our PQ+AA is superior.
Recall:
1. PQ enables proper global structure generation, leading to **improved sample quality**.
2. AA supplements PQ by enhancing exposure to tSA tokens, leading to **state-of-the-art fairness**.
With these, we provide analysis of why PQ+AA works better than all other combinations.
$ $
Tab i: Analysis of **all possible different combinations** for Prompt Queuing (PQ) and tSA Attention Amplification (AA). We summarize our findings from main paper for the tSA “Smiling”. Note $\alpha (S)$ notates AA for tSA tokens and results in **bold** and *italics* are the best and second best. Notice that C6:FairQueue (PQ+AA) provides the best combination: it achieves **both** outstanding sample quality (C6: TA=0.674 & FID=80.02 similar to C1: TA=0.681 & FID=76.9 with the best quality but poor fairness) and fairness (C6: FD=0.069 similar to C4: FD=0.05 with the best fairness but poor quality).
| | Prompt Queueing (PQ) | Attention Amplification (AA) | Stage 1 Prompt | Stage 2 Prompt | FD$\downarrow$ | TA$\uparrow$ | FID$\downarrow$ | DS$\downarrow$ | Remarks |
|---|---|---|---|---|---|---|---|---|---|
| C1: **no PQ, no AA** for Base Prompt | No | No | $T$ | $T$ | 0.211 | **0.681** | **76.9** | - | |
| C2: **no PQ, no AA** for ITI-Gen | No | No | $[T;S]$ | $[T;S]$ | 0.124 | 0.605 | 88.63 | 0.557 | |
| C3: **AA only** for Base Prompt | No | Yes | $T$ | $T$ | N.A | N.A | N.A | N.A | An **unimplementable combination** due to the absence of tSA tokens for AA. |
| C4: **AA only** for ITI-Gen | No | Yes | $[T;S]$ | $[T;\alpha (S)]$ | **0.05** | 0.61 | 89.41 | 0.55 | |
| C5: **PQ only** | Yes | No | $T$ | $[T;S]$ | 0.145 | *0.674* | 80.15 | **0.24** | |
| C6: **PQ + AA** (Our specific combination) | Yes | Yes | $T$ | $[T;\alpha (S)]$ | *0.069* | *0.674* | *80.02* | *0.284* | Both PQ and AA are present i.e., our proposed FairQueue |
$ $
With the results summarized from the main paper (above), we provide deeper analyses below with additional explanations for improved clarity:
- **C1: Base Prompt $T$ Only** (no AA no PQ): It lacks tSA-specific knowledge and results in poor fairness. Additionally, without tSA tokens $S$, AA is not applicable for **C3**.
- **C2: ITI-Gen prompt $P$ Only**, in Tab.1 (no AA no PQ). Our analysis in Sec 3.2. shows it has poor quality due to distortion in global structure during sample generation. Without PQ, the issue of distorted global structure persists for some tSAs.
- **C4: Attention Amplification (AA) Only**, in Supp. A.2 Fig. 22 when the PQ transition point=0. It results in poor quality since only ITI-Gen is used. We remark that utilizing only AA for ITI-Gen may deceptively improve fairness, but the generated samples have poor quality, e.g., Smiling cartoons. The reason is (similar to C2): without PQ, the issue of distorted global structure persists for some tSAs.
- **C5: Prompt Queuing (PQ) Only**, in Supp. A.2 Fig. 22 when $c=0$. By replacing the distorted ITI-Gen prompt with the Base prompt in Stage 1, PQ leads to improved quality, but without AA, the fairness remains poor given the reduced exposure to tSA tokens in the denoising process.
- **C6: FairQueue (PQ+AA)**, in Tab. 1. Our proposed solution with optimal quality and fairness. Specifically, it combines the effects of Prompt Queuing (enabling the global structure to be properly formed, resulting in good-quality samples) and Attention Amplification (enhancing the tSA-specific expression for better fairness).
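For illustration, the two-stage prompt schedule underlying C5 and C6 can be sketched in a few lines. This is a hedged sketch only: the function name `prompt_for_step` and the hyperparameter `transition_frac` are our own illustrative names (the paper's actual implementation is in its released Supp. code).

```python
# Minimal sketch of Prompt Queuing (hypothetical names, not the paper's code).
# Stage 1 conditions denoising on the base prompt T to form the global
# structure; stage 2 switches to the tSA-aware prompt [T; alpha(S)], to which
# Attention Amplification is applied.

def prompt_for_step(step, total_steps, base_prompt, iti_gen_prompt,
                    transition_frac=0.3):
    """Return the conditioning prompt for one denoising step.

    transition_frac is an assumed hyperparameter: the fraction of the
    denoising trajectory run with the base prompt before queuing in the
    tSA-aware prompt.
    """
    transition_step = int(transition_frac * total_steps)
    if step < transition_step:
        return base_prompt      # stage 1: global structure formation
    return iti_gen_prompt       # stage 2: tSA expression (with AA)

# Example schedule over a 50-step denoising trajectory.
schedule = [prompt_for_step(t, 50, "T", "[T; alpha(S)]") for t in range(50)]
```

Setting `transition_frac=0` recovers C2/C4 (ITI-Gen prompt throughout), while `transition_frac=1` recovers C1 (base prompt only).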
$ $
Tab. i above summarizes the related quantitative results:
- Comparing C4, C5, and C6 reveals that both PQ and AA are necessary to obtain high-quality samples with good fairness performance. Specifically, C4 has poor quality, while C5 has poor fairness.
- Comparing C1, C2, and C5 reveals the necessity of Prompt Queuing because utilizing either only ITI-Gen or Base Prompt results in quality and fairness degradation, respectively.
Overall, our quantitative analysis demonstrates that **C6: FairQueue** is the superior combination, balancing both fairness and quality. It achieves TA=0.674 and FID=80.02, the closest to the best quality of C1 with TA=0.681 and FID=76.9 (but poor fairness), while achieving fairness of FD=0.069, the closest to the best fairness of C4 with FD=0.05 (but poor quality).
We hope this further discussion addresses Reviewer’s concerns.
---
Rebuttal Comment 2.1:
Title: Appreciation for the Discussion
Comment: As we will conclude the Rebuttal’s discussion session in 5 hours, we sincerely thank the Reviewer 9buQ for their valuable insights and thoughtful discussion. We hope that our efforts to address the Reviewer’s comments have been satisfactory.
*If the Reviewer finds our responses appropriate, we kindly ask for a re-consideration of our scores based on the new additional understanding of our work.* | Summary: The authors propose FairQueue, a simple and effective solution to solve the quality degradation problem in ITI_GEN through prompt queuing and attention amplification.
Strengths: 1. The paper is well-written and logically structured.
2. The proposed FairQueue effectively solves the quality degradation problem in ITI_GEN.
3. The paper is very rich in experiments.
Weaknesses: Some of the contribution points of the article can be optimized.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Chapter 4, the author proposed a method for attention amplification. Is the amplified area Si related to tSA? If so, please explain the specific impact relationship.
2. Are there specific metrics to measure the relationship between generation quality and fairness?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author can further improve the description of the method part of the paper and describe the innovation more clearly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >R3Q1: Some of the contribution points of the article can be optimized.
R3A1: We appreciate the Reviewer’s comment and would like to assure the Reviewer that we have already optimized our contributions with further discussion in the supplementary (due to space limitations). Specifically,
- In Supp. A.1., to optimize the delivery of our findings in cross-attention analysis, we provide substantial qualitative samples on different tSAs to demonstrate the characteristics of natural language and ITI-Gen prompts. Specifically, ITI-Gen prompts affect the global structure in the early denoising steps, while effectively enforcing the tSA once the global structure has been well formed by natural language prompts.
- In Supp. A.2, we present an ablation study to isolate the effect of each of the two mechanisms in our proposed method: Prompt Queuing and Attention Amplification. Here, we observe that each component contributes to improving either the fairness or the quality of the generated samples.
- In Supp B.6, to advance our understanding of the distorted ITI-Gen prompts, we provide further discussion on the issues with ITI-Gen prompts. Here, we provide additional analysis to show that ITI-Gen’s prompt embeddings deviate significantly from the Base prompt and Hard prompt embedding. These findings further support our understanding of the differences between ITI-Gen prompts and natural language prompts (HP and Base prompt).
We hope that these additional resources address the reviewer’s comment. We will include additional pointers to them.
$ $
>R3Q2: Is the amplified area Si related to tSA? If so, please explain the specific impact relationship.
R3A2: We remark that the Reviewer’s understanding of the amplified area is correct. Specifically, the $S_i$ tokens in ITI-Gen learn to encode the tSA by minimizing the directional loss (as discussed in lines 108-118). Therefore, the attention map corresponding to the $S_i$ tokens is responsible for expressing the tSA at the related regions, and so is the amplified $S_i$.
Particularly, recall that prompt-queuing allows for the global structure to be first generated with base prompt $T$ followed by tSA adaptation with learned prompt $P$. This reduces the number of denoising steps that generated samples are exposed to the $S_i$ tokens, i.e., only in stage 2 and hence affecting tSA expression. To compensate for this, Attention Amplification (discussed in lines 267-270) was introduced, which scales the attention maps correlating to these $S_i$ (the tSA-specific tokens). This results in the intensification of the tSA-specific regions (e.g., attending with more intensity to the mouth region in tSA=’Smiling’), and therefore enhances the tSA expression in the generated samples.
To further understand the impact of attention amplification, we direct the Reviewer to our ablation study in Supp. A.2, where we vary the attention amplification factor $c$. Here, we observe that when $c=0$, the FD is still relatively large due to the lack of tSA expression. Then, by increasing this to $c=10$, we observe that the increased tSA expression lowers the FD. To further support this point, in this rebuttal, we provide some additional qualitative results in Fig. C of the attached pdf. Here, we observe the same effect on ‘Smiling’, where the tSA expression is poor at $c=0$. Then, when amplifying the attention intensity ($c>0$) at the related regions (e.g., the mouth, as seen in rebuttal Fig. A col 2–FairQueue), we observe that the tSA expression also intensifies, i.e., samples begin to smile.
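As a concrete illustration, the amplification step described above can be sketched as follows. This is a minimal sketch under our own assumptions (the attention weights of the $S_i$ columns are multiplied by $1+c$ and the rows renormalized, so that $c=0$ leaves the map unchanged); the function name and the renormalization choice are ours, not the paper's exact implementation.

```python
import numpy as np

def amplify_attention(attn, tsa_token_idx, c):
    """Illustrative Attention Amplification sketch (assumed semantics).

    attn          : (num_queries, num_tokens) softmaxed cross-attention map
    tsa_token_idx : column indices of the tSA-specific tokens S_i
    c             : amplification factor; c=0 is assumed to be the identity
    """
    out = np.array(attn, dtype=float)
    out[:, tsa_token_idx] *= (1.0 + c)          # boost S_i attention weights
    out /= out.sum(axis=1, keepdims=True)       # renormalize each query row
    return out

# Toy example: uniform attention over 4 tokens, amplify the last (S_i) column.
attn = np.full((2, 4), 0.25)
amplified = amplify_attention(attn, [3], c=10.0)
```

In this toy example the $S_i$ column dominates each row after amplification, mirroring the intensified tSA-related regions (e.g., the mouth for ‘Smiling’) discussed above.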
$ $
>R3Q3: Are there specific metrics to measure the relationship between generation quality and fairness?
R3A3: We thank the Reviewer for the insights and agree that a metric evaluating the relationship between fairness and quality would be valuable, since this trade-off can exist. For example, some existing fair generative models in the literature [27, 28, 30] require balanced datasets for the tSA, which are commonly small due to the difficulty of collecting them, and therefore affect sample quality. Unfortunately, to the best of our knowledge, the fair generative modeling community has not studied such a metric yet. Therefore, as mentioned in line 140 and discussed in more detail in Supp. B.4, we follow related work [16, 28, 30, 31] and use the following well-known individual metrics from the literature to assess fairness and quality separately:
- Fairness discrepancy (FD) is a common metric utilized in many popular fair generative modeling works [16,27,28,30,31] to assess fairness in the context of sample generation.
- FID [16,38,28,30,31,35,60] is a very popular quality metric used in generative modeling to compare the feature embedding distribution on the generated sample against an ideal reference dataset.
- Text Alignment [16,37,11,60] is a popular quality metric used in evaluating text-to-image models.
- DreamSIM [39,B,C] extends on the well-known LPIPS [35,60] metric in the literature, to determine semantic similarities but additionally considers the high-level, mid-level, and low-level semantics concurrently.
$ $
>R3Q4: The author can further improve the description of the method part of the paper and describe the innovation more clearly.
R3A4: We appreciate the reviewer's feedback and hope that our responses have clarified the details of the method in our paper. In the final manuscript, we will include this content from the Supp. We remark that all our code and necessary instructions are included in the Supp for full reproducibility of our work.
$ $
**We appreciate the positive feedback and valuable comments of the Reviewer. We sincerely hope that the Reviewer could consider increasing their rating if our responses have addressed all their questions.**
Key innovations include Prompt Queuing, ensuring regular and fair prompt usage, and Attention Amplification, enhancing the impact of prompts on image generation. Experiments show that FairQueue improves both image quality and fairness compared to state-of-the-art methods.
Strengths: - Through detailed analysis, they revealed the limitations of the previous approaches, specifically how suboptimal training objectives and embedding misalignments degrade image quality.
Weaknesses: - The readability of the paper is poor.
- The evaluation of the model is insufficient, as there are only evaluations on the datasets of human faces, which lacks in evaluating the generalizability of the model. More experiments on other datasets are needed.
- The paper emphasizes quantitative metrics but does not give enough attention to qualitative assessments. Including user studies or expert reviews could provide valuable insights into the perceived quality and fairness of the generated images, complementing the quantitative data.
Technical Quality: 3
Clarity: 1
Questions for Authors: - I'm curious why the experiments were conducted on only one dataset.
- Some of the metrics used in the paper are unfamiliar. Are these metrics validated, and are they commonly used in other research papers as well?
- Additionally, I would like to know how you plan to address the limitations mentioned in the limitations section.
Confidence: 1
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: They mentioned limitations and broader impact in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >R2Q1:The readability of the paper is poor.
R2A1: We remark that Reviewer 89En rated our presentation as excellent, and Reviewer iQtH described the paper as “well written and logically structured.” We will nevertheless review the paper once again to improve readability.
$ $
> R2Q2: I'm curious why the experiments were conducted on only one dataset.
R2A2: We respectfully clarify that following ITI-Gen paper [16], our work conducts experiments on **3 datasets** including CelebA [29], FairFace [45] and Fair Benchmark [44] (line 277).
$ $
>R2Q3: The evaluation of the model is insufficient, as there are only evaluations on the datasets of human faces, which lacks in evaluating the generalizability of the model.
R2A3: We remark that for the type of datasets, we followed literature [16, 28, 30] on fairness. Specifically, as the literature is mostly focused on the human-centric tSA (e.g., “Gender”, “SkinTone”, …) expressed by human faces, existing fairness datasets are developed and curated based on this concept.
We remark that our approach is **agnostic to the dataset type** with no restriction regarding the type of content. However, to the best of our knowledge, the fairness community has not developed a curated dataset for this purpose. As an example, the FairFace dataset is curated for fairness study to include 6 skin tone classes and considers the Individual Typology Angle [44] to ensure that race is uncorrelated with skin tone. However, a similar effort has not been carried out for non-human concepts in fairness literature to enable studying fairness in these concepts.
$ $
> R2Q4: The paper emphasizes quantitative metrics but does not give enough attention to qualitative assessments.
R2A4: We thank the Reviewer for the comment. First, we clarify that we have provided extensive qualitative assessment between ITI-Gen and FairQueue samples in [Annon. Supp link line 489] \> more resources \> Tab_1 (due to size limitations), containing 1.8k samples per approach. Nevertheless, we follow the Reviewer’s suggestion and include a user study comparing ITI-Gen and FairQueue. Specifically, we utilize the same seed to generate 100 sample pairs with ITI-Gen and FairQueue for ‘Smiling’, ‘High C.’, ‘Gender’, and ‘Young’. Then, using Amazon Mechanical Turk, we conduct 2 tasks:
- **Quality comparison by A/B testing**: Human labelers select the better quality sample between ITI-Gen and FairQueue (from the same seed). Each sample was given to 3 labelers.
- **Fairness comparison by human-recognized tSA**: Labelers identified the tSA class for each sample. The final label was based on the majority of 3 labelers. Labelers were also given an “unidentifiable” option if the class could not be determined. Finally, the labels were used to measure FD.
Our results in Tab. A reveal that FairQueue generates better-quality samples than ITI-Gen (>62.0% preference), and Tab. B shows that FairQueue achieves competitive fairness with ITI-Gen. Overall, this aligns with our quantitative results in Tab. 1.
We will include these results in the final paper.
$ $
Tab. A: **A/B testing:** Human assessment comparing quality between ITI-Gen vs FairQueue for 200 samples per tSA. Col 2 and 3 indicate the percentage of labelers that prefer the method’s sample quality. A larger value is better.
| | ITI-Gen | FairQueue |
|---|---|---|
| Smiling | 1.3 % | 98.7 % |
| High.C | 2.7 % | 97.3 % |
| Gender | 33.0% | 67.0% |
| Young | 38.0% | 62.0% |
$ $
Tab.B: **Fairness comparison by human-recognized tSA:** Human assessment to compare FD $\downarrow$ for ITI-Gen vs FairQueue for 200 samples per tSA.
| | ITI-Gen FD | FairQueue FD |
|---|---|---|
| Smiling | 0.106 | 0.014 |
| High.C | 0.144 | 0.021 |
| Gender | 0.014 | 0.014 |
| Young | 0.014 | 0.028 |
$ $
>R2Q5: Some of the metrics used in the paper are unfamiliar. Are these metrics validated, and are they commonly used in other research papers as well?
R2A5: We assure the Reviewer that as discussed in line 140, with more details in Supp B.5, these are indeed popular performance metrics. Specifically,
- FD is a common fairness metric utilized in many popular fair generative modeling works [16,27,28,30,31]
- FID [16,38,28,30,31,35,60] is a very popular quality metric used in generative modeling.
- Text Alignment [16,37,11,60] is a popular quality metric used in T2I models.
- DreamSIM [39,B,C] extends on LPIPS [35,60], a popular metric to evaluate semantic similarity of images.
$ $
>R2Q6: Additionally, I would like to know how you plan to address the limitations mentioned in the limitations section.
R2A6: We appreciate the Reviewer's interest in our work. We remark that the limitations section is included with the aim of full transparency. In this paper, as we focus on improving quality of fair generative models, these limitations, which require substantial research effort, are beyond the current scope. Considering these points, in what follows we provide some discussion for potential future works.
We first discuss attribute entanglement, which we believe is largely due to the current setups in fair generative models which lack an ideal dataset that fully disentangles the tSAs. For example, the attribute ‘Bald’ is often associated with ‘Male’ rather than ‘Female’. As a result, the entanglement issue is shared among all fairness approaches and a solution is non-trivial. However, it is indeed a very interesting research direction, which we hope to explore.
Next, we address the need to have a carefully-constructed reference dataset to guide the training of ITI-Gen tokens. In future works, we may take inspiration from the few-shot generation literature like StyleGAN-Nada [35] which utilizes an expert auxiliary model to guide the training in place of reference data.
We will add these details to future work.
$ $
**We appreciate the valuable comments and sincerely hope that the Reviewer could consider increasing their rating if our responses have addressed all their questions.**
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I think the author has addressed all the concerns that I raised. I updated my score to 5.
---
Rebuttal 2:
Title: We appreciate the Reviewer Raising our rating and for the kind words
Comment: We are very thankful for the Reviewer's increased rating and for confirming that we “addressed all the concerns that I (the reviewer) raised”.
We respectfully ask the Reviewer to consider adjusting their presentation and/or confidence score to reflect their new understanding, should the Reviewer find it appropriate. | Summary: In the field of fair text-to-image generation, the ITI-GEN paper demonstrated good performance by learning and embedding tSA. However, this paper argues that the embeddings learned in ITI-GEN can include unrelated attributes, resulting in noisy embeddings and significantly degrading the quality of the generated samples. To address this, the paper proposes using prompt queuing to apply tSA after a certain timestep and employs attention amplification to prevent the weakening of tSA.
Strengths: The paper provides a highly detailed analysis of how the learned tSA embeddings are actually reflected in the images using attention maps. It also demonstrates that the learned embeddings do not properly reflect differences in the images.
Weaknesses: First, the degradation seems severe compared to the ITI-GEN paper. If I understand correctly, ITI-GEN appears to aggregate all learned tSA embeddings by summing them, while this paper seems to concatenate a large number of tokens, as indicated by (p+q)*d. This difference in implementation may be causing a decline in baseline performance and requires an explanation.
Additionally, the proposed method to address this issue seems naive, and there is a lack of ablation experiments. Also, even with the proposed method, there appears to be a degradation in sample performance compared to HP. For example, in Figure 2, an originally female image appears to have been changed to a male image using the proposed method.
Technical Quality: 2
Clarity: 4
Questions for Authors: - Is there a difference between the original ITI-GEN and your re-implementation as I mentioned or did I just misunderstand? (I am not an expert of ITI-GEN)
- Wouldn't this problem be naturally resolved if a large number of reference images were used to learn the embeddings?
Confidence: 3
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: The authors describe the limitations and potential societal impact in their supplemental material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > R1Q1: “ITI-GEN appears to aggregate all learned tSA embeddings…This difference in implementation may be causing a decline in baseline performance”
R1A1: We thank the Reviewer for their comment and apologize if it was unclear. We clarify that our deployment of ITI-Gen as a baseline is **100% identical** to the original ITI-Gen paper, as we utilize the official GitHub source code released by the ITI-Gen authors (line 282). Therefore, **we are certain that the quality problem is sourced from ITI-Gen**.
For multiple tSAs, following ITI-Gen exactly, we used the aggregation mechanism denoted in ITI-Gen’s paper Eqn 3. We will include these details in our paper for further clarification.
We reassure the Reviewer on the above matters by directing them to the submitted source code to observe the existence of this aggregation function: **[Annon. Link line 489] \> iti_gen \> model.py (line 225-230)**.
We hope the provided details address the Reviewer's concerns.
$ $
>R1Q2: the proposed method to address this issue seems naive.
R1A2: We respectfully clarify that proposing a solution for the quality degradation issue is **non-trivial**. Our proposed scheme stems from uncovering this issue in the context of prompt learning for fair T2I generation. **Only through detailed analysis of the issue (highlighted as a core contribution) and examination of attention maps of individual denoising steps, were we able to propose a simple yet effective solution.**
Specifically, first, we uncover the issue of quality degradation in ITI-Gen (based on their code). Then, we attribute this issue to the sub-optimal directional loss used for prompt learning in ITI-Gen leading to encoding unrelated concepts in learned tokens (Fig 1a-col3). We further trace back the effect of these learned tokens in sample generation by analyzing the cross-attention mechanism (Fig. 3), and propose H2I and I2H analysis to analyze the effect of the learned tokens on different stages of the denoising process.
**Only upon this analysis** do we recognize that the distorted ITI-Gen tokens affect the global structure in the early denoising steps, but work well in enforcing tSA expressions if the global structure is formed properly–by natural language prompts like HP and Base Prompt. This inspires us to propose prompt queuing to bypass degrading global structure in the early steps of denoising.
Although Prompt Queuing prevents degradation of the global structure, as there are no tSA-related tokens in the early stage, it results in reduced tSA expression. However, our H2I analysis suggests that with a proper global structure, tSA tokens attend to tSA-related regions to enforce fair generation. Inspired by this analysis, we propose attention amplification to enlarge the effect of the tSA tokens and intensify the tSA expression, achieving fairness without degrading image quality.
$ $
>R1Q3: there is a lack of ablation experiments
R1A3: We respectfully clarify to the Reviewer that ablation studies are included in Supp. A.2 (mentioned in line 297) due to space limitations. In these ablation studies, we experimented with different intervals for prompt queuing and scales for attention amplification. These ablation studies verify the effectiveness of the individual components.
We additionally also compared FairQueue against ITI-Gen in the application of Training-Once-For-All. Our results again demonstrate FairQueue improved quality over ITI-Gen.
$ $
>R1Q4: with the proposed method, there appears to be a degradation in sample performance compared to HP… e.g. in Figure 2, female image appears to have been changed to a male.
R1A4: We clarify that based on the DS values (measured on 1k samples per tSA), our proposed approach on average has better performance in semantic preservation compared to HP, e.g., 0.284 and 0.330 for ‘Smiling’ and ‘High Cheekbones’ compared to 0.323 and 0.332 for HP (DS($\downarrow$) is a metric for measuring semantic preservation, see our paper). We remark that comparing the degradation of two approaches based on a single sample might not be that easy; as in Fig. 2, HP also shows some discrepancies similar to those indicated by the Reviewer, e.g., in col. 4, row 5, HP changes a male to a female.
Importantly, we remark that HP is only applicable to a few tSAs which have minimal linguistic ambiguity (MLA). For most tSA which are non-MLA (see Supp A.4), HP has a poor performance in enforcing fairness (as also discussed in the ITI-Gen paper). In contrast, as shown in Tab. 1, FairQueue delivers a good performance in terms of both quality and fairness across various tSAs.
$ $
> R1Q5: Is there a difference between the original ITI-GEN and your re-implementation
R1A5: As answered in R1A1: Our deployment of ITI-Gen as a baseline is 100% identical to the original work.
$ $
>R1Q6: Wouldn't this problem be naturally resolved if a large number of reference images were used to learn the embeddings?
R1A6: We believe that simply increasing the reference data size may not resolve the problem of unrelated knowledge. Specifically, the existing ITI-Gen reference dataset is already relatively sizable. For example, in CelebA, 200 samples are used for each class of tSA, and yet this problem persists.
To verify this, we conduct the same experiment with tSA Smiling with 2k reference samples per class. Our results measured an FD=$127e^{-3}$, TA=$0.591$, FID=$89.2$, and DS=$0.532$ which is similar to Tab.1, with no improvement by using a large number of reference images. This indicates that the core problem may not be the data size, and may need specific data curation (including **sample pairs** with only semantic differences in tSA, and similar semantics elsewhere), which poses scalability and applicability challenges.
$ $
**We appreciate the positive feedback and valuable comments of the Reviewer. We sincerely hope that the Reviewer could consider increasing their rating if our responses have addressed all their questions.**
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying my questions and comments. I agree with the rebuttal that a simple method could be proposed due to thorough analysis, and I believe this paper is indeed a suitable follow-up to ITI-GEN, while I still think it may be incremental because it heavily depends on the ITI-GEN method.
There is one more thing I’d like to ask. While I also trust that the authors reproduced the exact results since the code is released, I ask for your understanding in confirming the results, because the quality of the results in the paper seems to vary significantly. Could you clarify which experiment in this paper achieved similar scores under the exact same settings as ITI-GEN? In the ITI-GEN paper, most results (Table 1 and 2 in their paper) show better or comparable performance in FID compared to the original stable diffusion, but the reproduced results seem to fall short compared to the original model. To clarify this, could you also compare the scores with the original model and HP in Table 1 in the proposed paper?
Additionally, I have a concern that could be raised not only with this paper but also with the ITI-GEN paper. The poor performance of the HP method might be due to stable diffusion v1.4 not properly reflecting the prompts. Wouldn’t HP perform better with more recent models that use improved text encoders, such as stable diffusion XL or version 3? Is there a possibility to apply this method to these latest models?
---
Rebuttal 2:
Title: Addressing Question on Research Gap and Reproducibility
Comment: Firstly, we really appreciate Reviewer 89En’s prompt and insightful feedback.
>Q7: Thank you for clarifying my questions and comments. I agree with the rebuttal that a simple method could be proposed due to thorough analysis, and I believe this paper is indeed a suitable follow-up to ITI-GEN. while I still think it may be incremental because it heavily depends on ITI-GEN method.
R7: We would like to emphasize that, as one of our main contributions, we also proposed new qualitative and quantitative analyses of the effects of learned tokens in individual denoising steps of diffusion models. The proposed method is thus grounded in our novel analysis.
$ $
>Q8: which experiment in this paper achieved similar scores under the exact same settings
R8: We thank the Reviewer for the question and clarify that our work considers a more fine-grained performance analysis than ITI-Gen, which aligns better with existing SOTA literature [27,28,30,31].
**Quality evaluation**: Reproducibility of ITI-Gen’s quality is indeed a question raised by other researchers in ITI-Gen’s GitHub. To address this, ITI-Gen authors clarified (in GitHub issues #1 “Missing Performance Metric” and #7 “The gap between FID indicators is too large”, currently closed) that the FID results in ITI-Gen’s paper Tab.2 are achievable when evaluation is performed on data of all 40 tSAs, for both real and generated images (with learned ITI-GEN prompt) gathered as one large dataset (200*40), and FID is calculated on this large dataset. We have also verified this.
(We will include the links to the GitHub page to AC per NeurIPS guidelines.)
However, as remarked in the global responses, such high-level analysis is susceptible to missing the quality degradation existing in specific tSAs (e.g., High-Cheekbone). To address this, in a more fine-grained analysis, we instead focus on individual tSAs and evaluate their respective quality, aligning with existing fairness literature [27,28,30,31]. With this more fine-grained analysis, we were able to see the distortion in individual learned prompts. In addition, we considered an additional Text-Alignment (TA) quality metric, which confirms our findings.
Note that, Tab. 1 in the original ITI-GEN paper has only fairness comparison for specific tSAs, while FID or other quality evaluation for these tSAs is missing.
**Fairness evaluation**: As mentioned in our paper, following existing literature, we utilize the Fairness Discrepancy (FD) metric [27,28,30,31]. Specifically, our FD is the same as that implemented in ITI-Gen but uses the L2 distance in place of the KL divergence.
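As a concrete illustration, the FD variant described above can be sketched as follows. This is a minimal sketch under our own assumptions (FD taken as the L2 distance between the empirical tSA class distribution of the generated samples and the uniform, i.e., fair, distribution); the function name is hypothetical and this is not the paper's exact implementation.

```python
import numpy as np

def fairness_discrepancy(class_counts):
    """Illustrative FD sketch: L2 distance between the empirical tSA class
    distribution of generated samples and the uniform (fair) distribution.
    class_counts: number of generated samples per tSA class."""
    p = np.asarray(class_counts, dtype=float)
    p = p / p.sum()                             # empirical class distribution
    uniform = np.full_like(p, 1.0 / p.size)     # ideal fair distribution
    return float(np.linalg.norm(p - uniform))   # L2 in place of KL divergence

# Toy example: a perfectly balanced binary tSA gives FD = 0.
balanced_fd = fairness_discrepancy([100, 100])   # -> 0.0
skewed_fd = fairness_discrepancy([75, 25])       # nonzero for a skewed split
```

Under this convention, a perfectly fair generator over a binary tSA yields FD = 0, and FD grows as the generated class distribution departs from uniform.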
$ $
>Q9: compare the scores with the original model and HP in Table 1 in the proposed paper?
R9: We provide the following results as per reviewer’s request for HP evaluation:
| | FD | TA | FID | DS |
|---|---|---|---|---|
| Gender | $1.4e^{−3}$ | 0.699 | 77.1 | 0.318 |
| Young | 0 | 0.674 | 76.4 | 0.392 |
| Smiling | $8.4e^{−3}$ | 0.672 | 79.8 | 0.323 |
| High C. | $3.68e^{−3}$ | 0.672 | 80.3 | 0.332 |
| Pale Skin | $591e^{−3}$ | 0.660 | 96.2 | 0.397 |
| Eyeglasses | $670e^{−3}$ | 0.670 | 79.1 | 0.468 |
| Mustache | $554e^{−3}$ | 0.674 | 81.4 | 0.372 |
$ $
As mentioned in our paper, e.g., Tab 2, many of the HPs perform poorly w.r.t. fairness due to their tSA having linguistic ambiguity. However, their sample quality is competitive. Overall, FairQueue still achieves the best performance, with fairness competitive with ITI-Gen and sample quality as high as HP.
Due to space limitations, we can provide side-by-side comparison with ITI-Gen and FairQueue in another reply if Reviewer allows us (per NeurIPS rules). (ITI-Gen and FairQueue comparison is already in main paper)
$ $
>Q10: Wouldn’t HP perform better with more recent models that use improved text encoders, such as stable diffusion XL or version 3? Is there a possibility to apply this method to these latest models?
R10: We clarify that we have verified that the problems indicated by the ITI-Gen paper (e.g., in Fig. 1) still persist in more recent versions of Stable Diffusion, e.g., SD 3.0. This issue can be observed by simply inputting the prompt “A headshot of a person without glasses” to the SD 3.0 demo and generating several samples (provided through the AC comments). The generated samples still frequently have the wrong tSA class (“with glasses”), indicating this ambiguity still exists. We provide some results showing the poor fairness of HP with SD 3.0. Here, we utilize the same setup as Tab. 1 (main paper), with 500 samples per tSA class and HP per Tab. 3 (Supp. B.3), to report FD:
$ $
| | FD |
|---|---|
| Eyeglasses | $670\times10^{-3}$ |
| Pale Skin | $580\times10^{-3}$ |
$ $
Regarding applying ITI-Gen or FairQueue to more recent models: while in principle our ideas are model-agnostic, implementing them in the new SDXL or SD3 code bases requires engineering effort beyond the duration of this rebuttal.
**Once again, we thank Reviewer 89En’s prompt and insightful feedback.**
---
Rebuttal Comment 2.1:
Comment: I see. Now I understand that while the ITI-GEN performance is strong across the entire dataset, there are some attributes where its results suffer from significant degradation, which this paper addresses. Thank you for clarifying my comments. I am raising my score to 5. However, I did not raise it further because I still feel this work is too dependent on the ITI-GEN method, even though it provides valuable improvements for the community.
---
Rebuttal 3:
Title: We appreciate the Reviewer raising our rating and the kind words
Comment: We are very thankful for the Reviewer's increased rating and kind words in considering our work “... valuable improvements for the community”.
We respectfully ask the reviewer to consider adjusting their soundness/contribution scores to reflect their new understanding, should the reviewer find it appropriate. | Rebuttal 1:
Rebuttal: Global Response (GR): We thank all the reviewers for their valuable time and effort in reviewing our work. We appreciate the Reviewers' kind comments, such as:
- **Presentation**: Reviewer 89En giving our paper an excellent score (4) for presentation; Reviewer iQth praising our paper as “well-written and logically structured”
- **Analysis**: Reviewer 89En, 8wzu, and 9buQ recognizing our “highly detailed analysis”, “detailed analysis” and “in-depth analysis”, respectively.
- **Experiments**: Reviewer iQth: recognizing our efforts in making our paper “very rich in experiments”
- **Solution**: Reviewer iQth acknowledging “FairQueue effectively solves the quality degradation problem in ITI_GEN” and Reviewer 9buQ acknowledging the “superiority of FairQueue over ITI-Gen”
$ $
We would also like to express our appreciation to all the Reviewers for allowing us to clarify our work, as well as the constructive comments. We have considered all comments seriously.
$ $
To briefly recap our work:
1. Our detailed study first uncovers that ITI-Gen (prompt learning) degrades the quality of the generated samples, an observation missed by recent works [A] that rely on high-level analysis.
2. Upon deeper analysis, it is revealed that the quality degradation stems from ITI-Gen’s sub-optimal directional loss used during training, where unrelated knowledge distorts the tokens in the ITI-Gen prompt that are responsible for encoding the tSA expression.
3. Based on these findings, we dive deeper into the impact of these distorted ITI-Gen tokens during sample generation. Specifically, we inspect the attention maps with our novel H2I/I2H analysis. Our findings reveal that:
- The quality degradation stems from the distorted ITI-Gen tokens affecting the global structure in the early denoising steps.
- Interestingly, once the global structure is well-synthesized, ITI-Gen tokens work well (in the later stage) to enforce the tSA expression.
4. **Only based on this analysis** are we then able to propose our Prompt Queuing mechanism (an evolution of H2I), which instead utilizes the Base Prompt in place of HP in the early denoising steps (to first synthesize good global features), followed by the ITI-Gen prompt (to enforce the tSA expression). Note that HP and the Base Prompt share the property of being natural-language prompts (demonstrated in Supp B.6), undistorted by the directional loss in contrast to ITI-Gen, and hence produce good global structure.
5. Finally, recognizing that Prompt Queuing alone would weaken the tSA expression due to the reduced number of denoising steps with the ITI-Gen prompt, we propose Attention Amplification, which scales up the attention maps for the tSA-specific tokens ($S_i$), resulting in enhanced tSA expression. We coin this combination of Prompt Queuing and Attention Amplification FairQueue.
6. We then perform comprehensive experiments comparing FairQueue with the existing SOTA ITI-Gen to demonstrate our improved quality and semantic preservation while preserving the fairness performance. Our experiments include:
- **3 datasets (fair generative modeling gold-standard benchmarks)**: CelebA [29], FairFace [45], and Fair Benchmark [44]
- **12 different tSA combinations**
- **4 popular performance metrics**: Fairness discrepancy (FD) [16,27,28,30,31], FID [16,38,28,30,31,35,60], Text Alignment [16,37,11,60], DreamSIM [39,B,C].
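As an illustration only, the queuing-and-amplification procedure recapped in points 4–5 above can be sketched as follows; all names (`prompt_schedule`, `switch_step`, `amplify_attention`, the token indices) are hypothetical and do not reflect the actual implementation:

```python
import numpy as np

def prompt_schedule(total_steps, switch_step):
    # Base Prompt conditions the early denoising steps (global structure),
    # the ITI-Gen prompt conditions the later steps (tSA expression).
    return ["base" if t < switch_step else "iti-gen"
            for t in range(total_steps)]

def amplify_attention(attn, tsa_token_ids, scale):
    # Scale up attention mass on the tSA-specific tokens S_i, then
    # renormalize each row so it remains a distribution over tokens.
    attn = attn.copy()
    attn[:, tsa_token_ids] *= scale
    return attn / attn.sum(axis=1, keepdims=True)

schedule = prompt_schedule(total_steps=50, switch_step=10)
attn = np.full((4, 8), 1 / 8)            # toy cross-attention map
boosted = amplify_attention(attn, [3], scale=4.0)
```

After amplification, the attention mass on the tSA token grows at the expense of the other tokens while each row still sums to one.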
$ $
Additionally, due to space limitations, we defer some key material to the supplementary:
1. **Ablation studies** (Supp. A.2): demonstrates the individual contribution of Prompt Queuing and Attention Amplification.
2. **Exploration of ITI-Gen, HP, and Base prompt relationship in their embeddings space** (Supp.B.6): Verified that HP and Base Prompt are similar (close in their embeddings) and ITI-Gen prompt is dissimilar (distant embeddings).
3. **Hyper Parameters** (Supp. B.3): details for reproducibility.
4. **Code and Qualitative Illustrations** ([Anonymous Link in Supp line 489]).
- Code and ReadMe
- Qualitative comparison of ITI-Gen vs FairQueue illustration in \> [more resources] \> [Tab_1_More_illustration]
$ $
In this rebuttal, we have also included a few additional experiments and visualizations, as requested by the reviewers. All of the newly added results help to support the effectiveness of FairQueue:
1. Human assessment (via Amazon Mechanical Turk) of the quality and fairness of samples generated by ITI-Gen vs FairQueue
2. Additional qualitative and quantitative analysis of the attention maps for Base Prompt vs FairQueue vs ITI-Gen
3. Qualitative assessment of the effects of Attention Amplification
4. Evaluating ITI-Gen’s performance with a larger reference dataset (2k)
$ $
In what follows, we provide comprehensive responses to all questions. We could provide more details if there are further questions. We hope that our responses can address the concerns.
$ $
[A] Fernández, Daniel Gallo, et al. "Reproducibility Study of 'ITI-GEN: Inclusive Text-to-Image Generation'." Transactions on Machine Learning Research (2024).
[B] Ghazanfari, Sara, et al. "LipSim: A Provably Robust Perceptual Similarity Metric." International Conference on Learning Representations (2024).
[C] Liu, Yinqiu, et al. "Cross-Modal Generative Semantic Communications for Mobile AIGC: Joint Semantic Encoding and Prompt Engineering." arXiv preprint arXiv:2404.13898 (2024).
Pdf: /pdf/f071e73d2c7467ed7d9cb56bb441ff67c5b568e1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks | Accept (poster) | Summary: This paper studies the impact of feature heterophily, rather than class heterophily, on link prediction tasks with GNN. It introduces the definition of non-homophilic link prediction based on the pairwise feature similarity of connected/non-connected node pairs of the graph. It further shows that the choice of encoders (GCN vs SAGE) and decoders (linear vs MLP) can lead to different link prediction performance on non-homophilic graphs. Various experiments on both synthetic and real graph benchmarks showcase that the feature heterophily can influence the GNNs' performance on link prediction tasks.
Strengths: 1. The paper writing is clear and easy to understand.
2. The research problem of feature heterophily on link prediction tasks is novel.
3. The theoretical analysis part is comprehensive and intuitive.
4. The experiment is interesting, especially the comparisons of encoders/decoders on graphs with different levels of feature heterophily. It supports the claim made in the analysis part.
Weaknesses: 1. While the research problem of feature heterophily is novel, the significance of the study is limited. The authors propose the concept of Heterophilic Link Prediction tasks. However, it is almost impossible to find such graphs in the real world. The most non-homophilic graph in the experiment is E-Commerce, which also has positive feature similarity. It would make the study more significant if the authors provided more real-world graphs with strongly heterophilic properties. Otherwise, the study is less practical and Heterophilic Link Prediction becomes an *artificial* problem.
2. To consolidate the research problem and show that the synthetic graphs are valid in the real world, there can be some discussions about how the Synthetic Graph Generation is related to the other random graph generation methods.
3. While the research is centered around GNNs, Sec 3 and Sec 4.1 have no clear connection to GNNs. Also, Theorem 1 and 2 have no connection to GNNs. The discussion brings nothing beyond classical ML that a classification problem with complex decision boundaries requires a model with higher expressiveness to fit in.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Beyond the graphs with node features, is there any insight from this study on graphs without node features? Non-attributed graphs are a significant set of benchmarks for link prediction tasks.
2. Beyond the feature heterophily in the direct neighbors, are there any discussions or analyses about the feature heterophily of k-hop neighbors? I suspect in the real-world graphs, k-hop heterophily is much more common, which can further improve the significance of this study.
3. Is it possible to expand the scope of this study to any link prediction method rather than GNNs?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: See the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 Significance of the work**
Thank you for your feedback. We want to point out that, going beyond the global characterization of the graphs for the link prediction task, we also provided a zoomed-in analysis of per-method performance on heterophilous edges for all the real-world datasets, which also demonstrates the effectiveness of our identified designs in improving GNN performance across the full spectrum of low-to-high feature similarity (cf. Lines 360-365, 370-374, and Figure 4). This is important since in reality every dataset exhibits variance in feature similarity across its links. That said, we acknowledge that finding real-world, *publicly-available* graphs that align with the heterophilic link prediction task, as discussed in Section 3, is non-trivial. Currently, we have identified e-comm as a high-quality non-homophilic network for link prediction. In Figure 1 of our general response plots, we have demonstrated that many real-world heterophilous node classification benchmarks, such as Pokec [1] and Amazon Ratings [2], are predominantly homophilous for the link prediction task, with most edges connecting nodes with similar features. Nevertheless, as we mentioned above, these datasets still have heterophilous edges where our theory and findings apply.
We believe that the current limited number of high-quality heterophilous link prediction datasets does not imply that the problem itself is artificial. To provide more context, the lack of high-quality heterophilous datasets for *node classification* tasks was also a longstanding challenge before the challenge of heterophily for GNNs became widely recognized in the field. As the problem attracted more interest, various research teams dedicated their efforts to introducing a range of heterophilous datasets, thereby advancing findings and methodologies within the subfield. Additionally, as we mentioned above, heterophilous connections can also arise *locally* even if the graph is homophilous in general, which motivates our localized, more nuanced analysis (cf. Figures 4 and 9). Thus, by casting light on the problem of link prediction beyond homophily, we hope that our work will motivate the introduction of a high-quality set of benchmarks with diverse feature similarity, beyond the real data that we presented.
[1] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods. (NeurIPS 2021).
[2] A critical look at the evaluation of GNNs under heterophily: Are we really making progress? (ICLR 2022).
**W2 Relation of synthetic graph generation with other random graph generation methods**
Our synthetic graph generation process is detailed in Lines 512-520 in Appendix A. Most of the random graph generation methods in the existing literature on heterophilous graphs create synthetic datasets for node classification: [3,4] follow a modified preferential attachment process, where the probability of an edge is determined by both the class compatibility matrix and the node degree, while [5] uses the contextual stochastic block model (CSBM) for synthetic graph generation. However, all of the aforementioned approaches control the homophily/heterophily levels defined on the class labels of the generated synthetic graphs, whereas graphs for link prediction usually do not have class labels. NetInfoF [6] recently generates synthetic graphs as link prediction benchmarks by controlling the correlation between node features and edge existence. However, their generation process only allows node features to be fully correlated, partially correlated, or uncorrelated with edge existence; it lacks the capability to granularly control the feature similarity of the generated graph as in our graph generation process, and it is also not capable of generating graphs where features are *negatively* correlated among connected nodes. We will include these discussions in our final version.
[3] Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. (ICML 2019).
[4] Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs. (NeurIPS 2020).
[5] Adaptive universal generalized pagerank graph neural network. (arXiv 2020).
[6] NetInfoF Framework: Measuring and Exploiting Network Usable Information. (ICLR 2024).
**W3 Theorem connections with GNNs**
Thank you for the feedback. Existing GNN4LP methods are composed of encoders and decoders, both of which are essential components. Our first two theorems primarily consider the effectiveness of link prediction decoders, which, as our experiments show, have a significant impact on GNN performance in link prediction tasks. Furthermore, the choice of decoder in link prediction tasks is not as trivial as it might seem: while the MLP decoder is preferred in academic research, the DOT product decoder is more widely adopted in industry due to its scalability in tasks such as retrieval. Our theoretical analysis reveals the limitation of the DOT product decoder for non-homophilic link prediction tasks and helps identify DistMult as an effective substitute which maintains the scalability of the DOT product with closer-to-MLP link prediction accuracy. We will make these connections clearer in the paper.
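To illustrate the limitation discussed here, a toy sketch of the three decoder families on a heterophilic positive pair (the parameter shapes and random inputs are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
z_u = rng.normal(size=d)

def dot_decoder(zu, zv):
    # DOT: fixed symmetric score; high only when embeddings align
    return float(zu @ zv)

def distmult_decoder(zu, zv, r):
    # DistMult: a learnable diagonal relation vector r can flip signs
    # per dimension, so it can also score *dissimilar* pairs highly
    return float(zu @ (r * zv))

def mlp_decoder(zu, zv, W1, b1, w2):
    # MLP on concatenated embeddings: most expressive, least scalable
    h = np.maximum(W1 @ np.concatenate([zu, zv]) + b1, 0.0)
    return float(w2 @ h)

# A heterophilic positive pair: features point in opposite directions
r = -np.ones(d)  # e.g., a relation vector learned to reward anti-aligned pairs
score_dot = dot_decoder(z_u, -z_u)         # negative: DOT penalizes the pair
score_dm = distmult_decoder(z_u, -z_u, r)  # positive: DistMult can reward it

W1, b1, w2 = rng.normal(size=(16, 2 * d)), np.zeros(16), rng.normal(size=16)
score_mlp = mlp_decoder(z_u, -z_u, W1, b1, w2)
```

The sketch shows why DOT cannot score anti-aligned (heterophilic) pairs highly, while DistMult retains a DOT-like cost yet handles them.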
**Q1 Insights for non-attributed graphs**
Our study primarily focuses on the impact of feature heterophily. As we define heterophily on node features, our definition and analysis is not applicable to graphs without node features. Given the prevalence of attributed graphs in real-world applications (e.g., textual information is often associated with nodes in industrial and other applications), we believe that the contributions of this work are significant for these commonplace settings.
**Q2 Expand the analyses to k-hop neighbors**
We appreciate the reviewer for sharing this valuable suggestion. We believe it would be an interesting direction to explore in future work.
---
Rebuttal 2:
Comment: **Q3 Expand beyond GNN**
Currently, GNN4LP methods are the state-of-the-art methods for the link prediction task; additionally, many heuristics such as common neighbors do not make use of node features. Therefore, we primarily restrict our study to the scope of GNNs. However, the notion of "feature heterophily" can have implications beyond GNNs, such as decoder-only models that only consider pairwise node features. Furthermore, there could be interesting dynamics between feature and structural proximity, which is beyond the scope of this paper and could be an interesting direction for future work. We will mention some of these connections in the paper.
---
Rebuttal Comment 2.1:
Comment: Thanks for addressing my questions in the rebuttal. However, I still have my main concerns as in the original review:
1. **One real-world dataset is not enough**: I appreciate the authors' comments about the high-quality non-homophilic network for link prediction. I agree that finding such datasets was also a challenge when researchers started noticing heterophily issues in node classification tasks. In those studies, however, at least more than one heterophilic real-world dataset was evaluated, indicating that such heterophily issues exist widely in real-world datasets. In contrast, this study finds only one such dataset with heterophilic properties, which greatly undermines the generalization of this study. Researchers will hardly be interested in novel research that is limited to only one real-world dataset. On the other hand, if the paper can present more heterophilic datasets, it can enhance the contribution of this study and have a huge impact on the research community, just like heterophily issues in node classification. In this way, this problem can attract more interest.
2. **Method part is trivial**: If we look into the method part of the paper, the proposed method is trivial. If we look at the heterophilic papers the authors suggested, they presented novel methods even though evaluating heavily on the synthetic datasets. So another way to improve the study is to revise the method part, which can introduce more contribution to the research value.
Due to the main concerns above, I keep my original rating.
---
Rebuttal 3:
Title: Response to Reviewer ufZT (1)
Comment: Thank you for your additional feedback. We have carefully considered your concerns and would like to address them as follows:
1. Following your suggestion, we have conducted a thorough examination of various real-world benchmarks, including those that are typically used for other graph learning tasks (e.g., graph or node classification). We see the challenge of identifying heterophilic link prediction benchmarks as an opportunity to introduce more diverse, real datasets for this task. Specifically, we have identified a range of biological graph datasets from the TUDataset [1] that exhibit feature heterophily (these datasets are typically used for graph classification). In particular, the following datasets comprise **entirely heterophilic graphs** (i.e., every single graph has negative homophily ratios):
- aspirin
- benzene
- malonaldehyde
- naphthalene
- salicylic_acid
- toluene
- uracil
These datasets are substantial in size, as detailed below:
| Dataset (from TUDataset) | Number of Graphs |
|--------------------------|------------------|
| aspirin | 111,763 |
| benzene | 527,984 |
| malonaldehyde | 893,238 |
| naphthalene | 226,256 |
| salicylic_acid | 220,232 |
| toluene | 342,791 |
| uracil | 133,770 |
In addition, some datasets contain instances that are **strongly heterophilic** (with homophily ratio -1.0), including BBBP, NCI1, AIDS, and QM9 [2]. Note that such findings are not limited to biological datasets. For instance, 73% of graphs from PATTERN [3] (mathematical modeling) have negative homophily ratios.
Furthermore, we have identified node classification benchmarks from torch-geometric.datasets that display a wide range of feature homophily ratios, many of which are more heterophilic than e-comm, the dataset we proposed in the paper. Notably, some real-world benchmarks exhibit negative homophily ratios. We summarize our findings in the following table.
| Dataset | Homophily Ratios |
| ----------------------------------------- | ---------------- |
| Ogbl-ppa | 0.74 |
| Ogbl-collab (in the paper) | 0.70 |
| Ogbl-citation2 (in the paper) | 0.40 |
| WikiCS [4] | 0.35 |
| PubMed [5] | 0.22 |
| e-comm (in the paper) | 0.18 |
| DBLP [5] | 0.13 |
| Cora [5] / FacebookPagepage [10] | 0.12 |
| AQSOL [6] / Yelp | 0.12 |
| PPI [7] | 0.11 |
| Facebook [8] | 0.11 |
| Amazon-Photo [9] | 0.10 |
| Amazon-Computers [9] | 0.07 |
| Twitch-DE [10] | 0.07 |
| Twitch-FR [10] | 0.06 |
| BlogCatalog [8] | 0.06 |
| CiteSeer [8] | 0.05 |
| TWeibo [8] | 0.01 |
| Karateclub [11] | -0.03 |
| UPFD [12] | -0.10 |
| BBBP instances [2] | -1.00 |
| NCI1 instances [1] | -1.00 |
| AIDS instances [1] | -1.00 |
| QM9 instances [2] | -1.00 |
In addition to the homophily ratios presented above, we provide the feature similarity distributions for edges and random node pairs across several datasets in this [anonymized GitHub repo](https://anonymous.4open.science/r/FeatureHeteroPlots-D591), following the same convention as Figure 5 in our paper. These plots reveal clear signs of heterophily in the existing benchmark graphs.
Consequently, we believe that our study is grounded in existing benchmarks. We will include these findings in our paper to present a set of diverse link prediction datasets, and we will also encourage the introduction of additional heterophilic datasets, thereby broadening the scope of research in this area. Thank you again for leading us in this direction!
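For concreteness, one plausible way to compute a graph-level feature homophily score consistent with the reported range [-1, 1] is the mean cosine similarity of node features over connected pairs; the paper's exact definition may differ (e.g., in feature centering or normalization):

```python
import numpy as np

def feature_homophily(X, edges):
    # Mean cosine similarity of node features over the given edges.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    u, v = edges[:, 0], edges[:, 1]
    return float(np.mean(np.sum(Xn[u] * Xn[v], axis=1)))

# Toy graph: node 1 opposes node 0, node 2 is nearly aligned with node 0
X = np.array([[1.0, 0.0], [-1.0, 0.0], [0.99, 0.1]])
hetero_score = feature_homophily(X, np.array([[0, 1]]))        # -1: fully heterophilic
mixed_score = feature_homophily(X, np.array([[0, 1], [0, 2]]))  # near 0: mixed
```

Under this definition a score near -1 marks strongly heterophilic graphs (as in the BBBP/NCI1/AIDS/QM9 instances above), while a score near 1 marks homophilic ones.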
---
Rebuttal 4:
Title: Response to Reviewer ufZT (2)
Comment: 2. Thank you for your suggestion. As a pioneering study in this area, our primary objective has been to characterize the problem and identify useful design principles. We believe that our work is significant in setting a new research direction, which is essential for advancing the field. Furthermore, it is noteworthy that many complex architecture designs proposed for handling heterophily in _node classification_ were later found to be less effective than following the simple principle of separating ego- and neighbor-embeddings in the model architecture (e.g., applying it to GAT and Graph Transformer), as highlighted in [13]. Our study takes a comprehensive approach, laying the groundwork for future research. We acknowledge that there is ample room for exploring more effective methods, and we view this as a key avenue for future research.
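A toy numerical illustration of the design principle referenced above (weights set to identity purely for illustration): a GCN-style layer averages the ego node together with its neighbors, so on a heterophilic neighborhood the opposing signals partially cancel, whereas a SAGE-style layer keeps separate transforms for the ego and the aggregated neighbors:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
x_ego = rng.normal(size=d)
x_nbrs = np.stack([-x_ego, -x_ego])  # heterophilic: neighbors oppose the ego

# GCN-style: one shared aggregation over {ego} U neighbors
W = np.eye(d)  # identity weights, for illustration only
h_gcn = W @ np.mean(np.vstack([x_ego[None, :], x_nbrs]), axis=0)

# SAGE-style: separate transforms for ego and aggregated neighbors
W_ego, W_nbr = np.eye(d), np.eye(d)
h_sage = np.concatenate([W_ego @ x_ego, W_nbr @ np.mean(x_nbrs, axis=0)])
```

Here `h_gcn` collapses to `-x_ego / 3` (the opposing signals largely cancel), while `h_sage` preserves both the ego and neighbor components, consistent with the empirical SAGE > GCN trend on heterophilic graphs.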
[1] TUDataset: A collection of benchmark datasets for learning with graphs
[2] MoleculeNet: A Benchmark for Molecular Machine Learning
[3] Benchmarking Graph Neural Networks
[4] Wiki-CS: A Wikipedia-Based Benchmark for Graph Neural Networks
[5] Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking
[6] Benchmarking Graph Neural Networks
[7] Predicting multicellular function through multi-layer tissue networks
[8] PANE: scalable and effective attributed network embedding
[9] Pitfalls of Graph Neural Network Evaluation
[10] Multi-scale Attributed Node Embedding
[11] An Information Flow Model for Conflict and Fission in Small Groups
[12] User Preference-aware Fake News Detection
[13] A critical look at the evaluation of GNNs under heterophily: Are we really making progress?
---
Rebuttal 5:
Title: Response to Reviewer ufZT (3): results on additional heterophilic dataset
Comment: We have additionally conducted experiments on one of the heterophilic datasets we identified (Amazon-Computers, feature homophily ratio = 0.07) and present the results in the table below. We find that the conclusions from our main paper still hold: SAGE outperforms GCN, due to the separation of ego- and neighbor-embeddings, and the DistMult/MLP decoders outperform DOT.
| Encoder | Decoder | MRR | std |
| ------- | -------- | ----- | ---- |
| GCN | DOT | 24.67 | 0.61 |
| GCN | DistMult | 31.59 | 1.04 |
| GCN | MLP | 52.90 | 0.86 |
| SAGE | DOT | 42.26 | 0.54 |
| SAGE | DistMult | 58.42 | 0.12 |
| SAGE | MLP | 58.38 | 0.07 |
| (NoGNN) | MLP | 21.15 | 0.23 |
We will include these new results in the final version of the paper. | Summary: This paper proposes to connect feature similarity/dissimilarity to the probability of links between node pairs. It theoretically demonstrates the necessity of considering the overall feature heterophily of a graph for better link prediction, using a two-dimensional model. In addition, numerical experiments are conducted to show the impact of feature heterophily on link prediction and the influence of the encoder and decoder on final performance.
Strengths: 1. The issue of feature heterophily is valuable to address, as it can contribute to understanding the formation of links among nodes. This work provides a concise theoretical analysis of it.
2. The paper is well-written and easy to follow.
3. Numerical experiments have been done to validate the theoretical results. The results are comprehensive and seem convincing to me.
Weaknesses: 1. Since GNN encode node contents and local structures, it is worthwhile to see the relationship between feature similarity and structural similarity, i.e. whether these two properties are consistent, and its role in affecting the performances of heuristic methods and feature learning methods, e.g., the performance comparison on ogbl-collab for two types of link prediction approaches in Table 1.
2. More benchmark datasets can be considered to cover a diversity of feature similarity distribution.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In eq.(1), \hat{y} is the predicted link probability, so \hat{y}>=0. But in Sec.3.2 in theorem assumption, \hat{y} can be negative. I'm confused.
2. The green line in Fig.1 denotes the score distribution of positive node pairs, why all the scores for positive pairs are identical?
3. Theorem 1 is derived based on the assumption that node features are two-dimensional. For high-dimensional features, is the learning rate still linear in $1/(1-M)$? The generality of the theoretical results needs to be discussed.
4. What is the split for training/validation/test on real-world datasets? How does the size of training data affect the performance of heuristic methods and learning based methods, respectively?
5. I cannot understand why the buckets are divided according to the minimum degree of connected nodes (in the x-axis dimension). Why not consider the degree difference or some function of the degrees of the two nodes? At least on the surface, nodes with similar degrees have a high chance of being linked in assortative graphs, and vice versa in disassortative graphs.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As pointed out in Questions 3, the theoretical results for more general cases need to be addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 Exploring the relationship between feature similarity and structural similarity**
Thank you for raising this valuable suggestion. Exploring the dynamics between feature and structural similarity would indeed be an interesting direction for future research. In this work, our aim is to provide the first essential characterizations of the impact of feature heterophily on link prediction for GNNs. Future work can explore more fine-grained questions, such as the insightful one you raised.
**W2 Benchmark datasets**
Thank you for your feedback. First, we would like to emphasize that we have run experiments on a series of synthetic graphs with different heterophily ratios/feature similarity distributions. Second, going beyond the global characterization of the graphs for the link prediction task, we also provided a zoomed-in analysis of per-method performance on heterophilous edges for each real-world dataset, which also demonstrates the effectiveness of our identified designs in improving GNN performance across the full spectrum of low-to-high feature similarity (cf. Lines 360-365, 370-374, and Figure 4). This is important since in reality every dataset exhibits variance in feature similarity across its links. Finally, by casting light on the problem of link prediction beyond homophily, we hope that our work will motivate the introduction of a high-quality set of benchmarks with diverse feature similarity, beyond the real data that we presented. This trend was also observed in the literature after the challenges of heterophily for GNNs were pointed out in the node classification task [1,2]: as the problem attracted more interest, various research teams dedicated their efforts to introducing a range of heterophilous datasets, thereby advancing findings and methodologies within the subfield. In Figure 1 of our general response plots, we have shown that many real-world heterophilous *node classification* benchmarks, such as Pokec [1] and Amazon Ratings [2], are homophilous for link prediction, with most edges connecting nodes with similar features. Nevertheless, as we mentioned above, these datasets still have heterophilous edges where our theory and findings apply.
[1] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods. (NeurIPS 2021).
[2] A critical look at the evaluation of GNNs under heterophily: Are we really making progress? (ICLR 2022).
**Q1 Clarification on \hat{y}**
Thank you for carefully reading our paper! For our theoretical analysis in Sections 3 & 4, the "predicted link probability" $\hat{y}_{u,v}$ is derived based on the equation in Line 144 ("Theoretical Assumptions" paragraph) in Section 3.2.
We note that we should have used "predicted link score" instead of "predicted link probability" for the discussions in Section 3, as the "link probability" term would imply that our prediction $\hat{y}_{u,v}$ is bounded within $[0,1]$ which is not the case in our analysis. We will revise the usage of "predicted link probability" term and make changes to ensure the notations are consistent and clear in our final version.
**Q2 Identical scores for positive pairs**
Thank you for the question. Figure 1 gives some *example distributions* of positive node pairs (i.e., edges – colored in green) and negative node pairs (non-edges – colored in red) for homophilic and heterophilic link prediction tasks to contextualize the introduction of these concepts. In these examples, the feature similarity scores of positive node pairs are simplified to follow a uniform distribution, hence we show the horizontal green lines in the Probability Density Function (PDF) plots of the similarity score distribution. Of course in real-world datasets the similarity score distributions for both positive and negative node pairs can be more complex: in Figure 5 in the Appendix we show the *actual* feature similarity score distributions for real-world datasets used in our experiments.
**Q3 Generality of theoretical results on learning rate**
The goal of this theorem is to show that the predicted link probability is related to the magnitude of the threshold M that separates the positive and negative samples, which highlights the different optimizations needed for homophilic and heterophilic link prediction tasks that have not been studied in prior literature. While generalizing the theorem to higher dimensions is non-trivial, our experiments in Section 6 support this implication on datasets of significantly higher complexity.
**Q4 Details about the real-world data**
| Dataset | Training Edges | Validation Edges | Test Edges |
|------------------|----------------|------------------|------------|
| ogbl-citation2 | 30,387,995 | 86,956 | 86,956 |
| ogbl-collab | 1,179,052 | 60,084 | 46,329 |
| e-comm | 238,818 | 34,117 | 68,235 |
The above table shows the details of the split of edges for train, validation, and test on real-world datasets. Note that in standard GNN4LP research, there are usually far more training edges than validation and test edges combined (see Table 2 in [14] in paper references). We follow the standard approach of data splitting. It would be an interesting future direction to consider the impact of data splitting on model performance.
**Q5 Reasons for dividing the buckets**
Thank you for the question. We use the minimum degree of connected nodes to divide the buckets in Fig. 4 since it has been shown in previous works that the presence of low-degree nodes (see [44] in paper references) can increase the complexity of heterophily for node classification tasks and negatively impact GNN performance. By using the minimum degree instead of maximum or average degree, we can group edges that are connecting at least one node with low degree and examine how different approaches perform on these edges. We will also explain our choice in the paper.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the response to my concerns. | Summary: The paper analyzes the impact of heterophily in node features on link prediction tasks, and the authors present a theoretical framework that highlights the different optimizations needed for homophilic and heterophilic link prediction tasks.
Strengths: The paper analyzes the impact of heterophily in node features on GNN performance. The authors argue that homophilic and heterophilic link prediction tasks should be defined based on how the distributions of feature similarity scores between connected and unconnected nodes are separated, which is innovative. The paper reveals the fundamental differences in optimizations for homophilic and heterophilic link prediction tasks, as well as insights into effective decoder selection for non-homophilic link prediction tasks.
Weaknesses: 1. Some details in the paper are not very clear. For example, how is feature extraction implemented? How are synthetic graphs that resemble different types of link prediction tasks generated by varying the feature similarity between connected nodes?
2. In addition, the hierarchical structure of the paper is not entirely reasonable, e.g., the placement of the related work in Section 5.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How are synthetic graphs that resemble different types of link prediction tasks generated by varying the feature similarity between connected nodes?
2. The variation of the positive feature similarity scores affects the rate of change of the link prediction scores; the authors present definitions with intuitive examples in §3.1 and theoretical analysis in §3.2, but how is the predicted link probability derived?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, the author has a clear understanding and explanation of the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1, Q1 Clarifications (1) implementation of feature extraction (2) generation of synthetic graphs**
Thank you for your questions.
- For **W1(1)** implementation of feature extraction, we interpret the "feature extraction" part in your comment as how we obtain the node features for our synthetic and real-world datasets---please correct us if our interpretation is not right. For real-world graphs, we use the original node features provided in the datasets that are created by previous works: ogbl-collab and ogbl-citation2 from Open-Graph Benchmark [1], and e-comm from [2]. We give a brief overview of how the node features are created in Lines 532-539 in Appendix A. Our synthetic datasets are created by rewiring the edges among 10,000 sampled nodes from the real-world dataset `ogbl-collab`; we keep the features attached to each node during the sampling process, which become the node features for our synthetic graphs instead of generating new node features ourselves.
- For **W1(2) and Q1** generation of synthetic graphs, we describe our synthetic graph generation process in Lines 512-520 in Appendix A (as we mentioned in Line 280 in Section 6.1): the synthetic graphs are generated by randomly sampling 10,000 nodes with their features in ogbl-collab and connecting 2% of all possible node pairs whose feature similarity falls within specified ranges; all graphs share the same set of nodes and features and only differ in their edges. More specifically, we calculate the pairwise feature similarity between all node pairs and create 50-quantiles of feature similarity scores. We select the 3 smallest quantiles, the 3 largest quantiles, and 4 quantiles in equal intervals in between, resulting in 10 quantiles. We then create 10 synthetic graphs by connecting node pairs whose feature similarity scores fall within the same quantile. Thus, by gradually increasing the range of similarity for connected nodes, we create graphs which resemble different types of link prediction tasks and average feature similarity.
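To make the quantile-based procedure above concrete, here is a minimal sketch of the described generation process. This is a hypothetical helper, not the authors' code; the use of cosine similarity is an assumption, since the rebuttal only specifies "pairwise feature similarity."

```python
import numpy as np

def make_synthetic_graphs(features, n_quantiles=50):
    """Sketch of the quantile-based synthetic-graph generation described above.

    All graphs share the same nodes and features and differ only in edges:
    each graph connects the node pairs whose similarity falls in one chosen
    50-quantile bin (each bin holds ~2% of all possible node pairs).
    Cosine similarity is an illustrative assumption.
    """
    n = features.shape[0]
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    iu, ju = np.triu_indices(n, k=1)          # all possible node pairs (u < v)
    scores = sim[iu, ju]
    # 50-quantile boundaries over all pairwise similarity scores.
    bounds = np.quantile(scores, np.linspace(0.0, 1.0, n_quantiles + 1))
    # 3 smallest, 3 largest, and 4 evenly spaced quantiles in between -> 10 bins.
    chosen = ([0, 1, 2]
              + list(np.linspace(3, n_quantiles - 4, 4).astype(int))
              + [n_quantiles - 3, n_quantiles - 2, n_quantiles - 1])
    graphs = []
    for q in chosen:
        mask = (scores >= bounds[q]) & (scores <= bounds[q + 1])
        graphs.append(list(zip(iu[mask], ju[mask])))  # edge list for one graph
    return graphs
```

Gradually moving from the lowest to the highest similarity bins then yields graphs ranging from heterophilic to homophilic link prediction tasks, as described above.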
[1] Open Graph Benchmark: Datasets for Machine Learning on Graphs. (NeurIPS 2020)
[2] Pitfalls in Link Prediction with Graph Neural Networks: Understanding the Impact of Target-link Inclusion & Better Practices. (WSDM 2024)
**W2 Structure of paper; position of the related work section**
Thank you for your suggestion. We will revise the paper structure and specifically move the related work section to an earlier location (e.g., Section 2) in the final version.
**Q2 How is the predicted link probability derived**
Thank you for your question. For our theoretical analysis in Sections 3 & 4, the "predicted link probability" $\hat{y}_{u,v}$ is derived based on the equation in Line 144 ("Theoretical Assumptions" paragraph) in Section 3.2.
We note that we should have used "predicted link score" instead of "predicted link probability" for the discussions in Section 3, as the "link probability" term implies that our prediction $\hat{y}_{u,v}$ is bounded within $[0,1]$, which is not the case in our analysis. However, our analysis can still be given a probabilistic perspective, as we can derive the predicted link probability by applying the sigmoid function, $\mathrm{sigmoid}(\hat{y})$, to the predicted link score. We will revise the usage of the "predicted link probability" term in our final version. | Summary: The paper examines how heterophily in node features affects the performance of Graph Neural Networks (GNNs) in link prediction tasks, which typically do not utilize node class labels. It introduces formal definitions of homophilic and non-homophilic link predictions, proposes GNN designs optimized for feature heterophily, and demonstrates through synthetic and real-world data that appropriate decoders and the separation of ego- and neighbor-embeddings can significantly enhance performance in non-homophilic settings.
Strengths: originality: good
quality: good
clarity: good
significance: good
Weaknesses: The theoretical analysis is a bit over-simplified.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It's better to denote the mean feature vector of all nodes as $\hat{x}$ instead of $\hat{X}$
2. Have you tried other heterophily-specific techniques, e.g. high-pass filter or negative message passing?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 Theoretical Analysis**
Thank you for your feedback. First, our theorems are derived under reasonable simplifications, which are typical in the heterophily literature (e.g., [1-2]). Second, our empirical analysis extends well beyond these theoretical assumptions, demonstrating that our theory holds more broadly. Finally, we emphasize that our theoretical analysis aims to provide concise characterizations as an initial step in formalizing this problem. We believe that extending beyond our current theoretical framework will be a valuable direction for future research.
[1] Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs. (NeurIPS 2020).
[2] Revisiting Heterophily For Graph Neural Networks. (NeurIPS 2022).
**Q1 Notation of mean feature vector: use $\bar{\mathbf{x}}$ instead of $\bar{\mathbf{X}}$**
Thank you for reading our paper carefully! We will make this modification in our final version.
**Q2 Other Heterophily-specific techniques, e.g. high-pass filter**
Thank you for your suggestion. We agree that considering different heterophily-specific techniques is important and will be an interesting direction for future work. The main contribution of this paper is to propose and characterize feature heterophily, marking a pioneering effort in this area.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply and I will keep my rating. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their thoughtful and constructive feedback. We are pleased that all reviewers find the paper clear, most reviewers recognize the novelty of studying the impact of feature heterophily on link prediction with GNNs, and some reviewers (e.g., YTzD, ufZT) point out that they value our theoretical analysis and find that our empirical analyses are comprehensive and validate our theoretical findings.
We answer each reviewer’s questions in separate rebuttals. In this general response, we provide supplementary figures, to which we refer in the individual reviewer questions, as needed.
Pdf: /pdf/72364e4a41bb28ea29b0d434e52386399063b881.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper analyzes the impact of node features in the link prediction task. Based on the feature similarity, it first categorizes and defines link prediction into homophilic, heterophilic and gated ones then shows the differences among them. Further, it explores the encoder and decoder choices along with the ego-neighbor separation for non-homophilic link prediction by theoretical and empirical analysis.
Strengths: * This paper provides a basics including definitions and preliminary conclusion for future non-homophilic link prediction works.
* The overall writing is logical and easy to read.
Weaknesses: * The core conclusions are not exciting enough. For example, the different optimizations needed for homophilic and heterophilic link prediction tasks and the importance of adapting ego-neighbor separation for link prediction are actually intuitive.
* The novelty is insufficient. The theoretical contributions are limited while the solutions are combinations of off-the-shelf modules.
* Only one heterophilic real-world dataset is not enough, which may introduce bias and affect the generality of conclusions.
Technical Quality: 2
Clarity: 4
Questions for Authors: Some potential suggestions:
* Add some heterophilic GNNs as the encoders for supplement.
* Design new decoder specifically for heterophilic link prediction tasks.
* Analyze the reasons for the appearance of heterophilic links.
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: * Add more heterophilic real-world datasets to demonstrate the generality of conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1&2 Intuitive conclusions & novelty**
We thank the reviewer for the feedback. We note that this is the first work that explicitly characterizes feature heterophily in link prediction with GNNs. Our work pioneers formalizing the problem, providing concise characterizations, and examining effective designs. Our paper highlights the necessity of carefully selecting link prediction designs, including the significant limitation of DOT product decoders, which are widely adopted in industry, for non-homophilic link prediction tasks, and identifies DistMult as a highly scalable alternative for such settings. We believe all these are significant and meaningful contributions to the field that open up interesting avenues for future research.
**W3 Heterophilic real-world datasets**
Thank you for your questions. First, we would like to emphasize that we have run experiments on a series of synthetic graphs of different heterophily ratios/feature similarity distributions. Second, going beyond the global characterization of the graphs for the link prediction task, we also provided a zoom-in analysis on the performance on heterophilous edges per method for each real-world dataset, which also demonstrates the effectiveness of our identified designs in improving GNN performance on the full spectrum of low-to-high feature similarity (c.f. Line 360-365, 370-374, and Figure 4). This is important since in reality every dataset exhibits variance in feature similarity across all the links. Finally, by casting light on the problem of link prediction beyond homophily, we hope that our work will motivate the introduction of a high-quality set of benchmarks with diverse feature similarity, beyond the real data that we presented. This trend was also observed in the literature after the challenges of heterophily for GNNs were pointed out in the node classification task [1,2]: as the problem attracted more interest, various research teams dedicated their efforts to introducing a range of heterophilous datasets, thereby advancing findings and methodologies within the subfield. In Figure 1 of our general response plots, we have shown that many real-world heterophilous node classification benchmarks, such as pokec [1] and Amazon ratings [2], are homophilous for link prediction, with most edges connecting nodes with similar features. Nevertheless, as we mentioned above, these datasets still have heterophilous edges where our theory and findings apply.
[1] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods. (NeurIPS 2021).
[2] A critical look at the evaluation of GNNs under heterophily: Are we really making progress? (ICLR 2022).
**Q: potential suggestions**
Thank you for your valuable suggestions. Following your recommendation, we have added one more heterophilic GNN as the encoder for experiments (highlighted in Table 1 general response pdf) and our conclusions about the GNN decoder and encoder still hold.
---
Rebuttal Comment 1.1:
Comment: I appreciate the responses from authors. However, there are still some concerns remaining:
* I agree broadly with the author's description of the paper's contributions, but they just don't seem exciting to me. An innovative and feasible solution based on these new formalizations and characteristics is the key for people to be convinced and follow this work.
* Synthetic datasets struggle to simulate all the factors that affect performance in real-world datasets (e.g., degree distribution, noise, etc.), which is beyond feature similarity. Therefore, insufficient results on real-world datasets may cast doubt on the validity of the theory and findings in real-world application scenarios.
Given the above concerns, I will keep my original rating.
---
Rebuttal 2:
Title: Additional Rebuttal: Results on additional heterophilic dataset
Comment: Following your and reviewers ufZT and YTzD's insightful suggestions, we have identified a range of real-world datasets that exhibit strong feature heterophily (please refer to our [latest response to reviewer ufZT](https://openreview.net/forum?id=3LZHatxUa9&noteId=2UskaksmAp)). We have additionally conducted experiments on one heterophilic dataset we identified (Amazon-Computers) and present the result in the table below. We find that the conclusions from our main paper still hold (SAGE > GCN, due to the separation of the ego- and neighbor-embeddings, and the use of DistMult/MLP decoders is better than DOT).
| Encoder | Decoder | MRR | std |
| ------- | -------- | ----- | ---- |
| GCN | DOT | 24.67 | 0.61 |
| GCN | DistMult | 31.59 | 1.04 |
| GCN | MLP | 52.90 | 0.86 |
| SAGE | DOT | 42.26 | 0.54 |
| SAGE | DistMult | 58.42 | 0.12 |
| SAGE | MLP | 58.38 | 0.07 |
| (NoGNN) | MLP | 21.15 | 0.23 |
We will include these new results in the final version of the paper. | null | null | null | null | null | null |
Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning | Accept (spotlight) | Summary: This paper proposes an unlearning framework based on Langevin dynamics with two key components:
- The initially trained model satisfies some differential privacy guarantee
- Unlearning relies on DP fine-tuning the model on the rest of the dataset (which quickly reduces the privacy loss on the example to be forgotten).
This framework is formalized using the theory of convergence of Langevin dynamics. Specifically, assuming the Log-Sobolev inequality holds for the stationary distribution, the paper establishes a linear rate of convergence of the unlearning step. Notably, these bounds apply for the nonconvex setting as well; however, the constants are quite pessimistic in this case. The explicit constants are given for the convex and strongly convex settings, leading to implementable algorithms.
The paper also gives extensions to multiple deletions (sequential and simultaneous). Advantages relative to the D2D baseline and retraining from scratch are established as well.
Strengths: This is a very strong paper and generally high quality work.
- While it is grounded in RDP, the theoretical advances are original. I love that the paper also explores 2nd order effects of the proposed framework, such as batch/parallel unlearning, etc.
- This framework gives the first provable approximate unlearning guarantees in the nonconvex setting and improves upon other approaches in the (strongly) convex setting. I believe these results are very significant and can inspire future work in the area.
- The paper is generally well-organized and well-written (although it is not perfect; specific suggestions for improvements are given below).
Weaknesses: - **Clarity**: The paper tackles some advanced technical tools, which may only be known to a niche audience. It would greatly help the broader appeal of the paper to explain the key intuition behind the LSI and why it can be expected to hold. Similarly, it would be nice to consider simple examples in the unlearning setting and give explicit values of the LSI constant. Also, what is its right scaling with respect to other problem parameters such as smoothness, strong convexity, etc.?
- The improvements from Langevin unlearning over training from scratch are purely logarithmic. How significant is this factor in the experiments?
- The paper uses full gradient steps. How do things change with stochastic gradient steps? What would be the computational complexity of the resulting algorithm?
- No experiments in the nonconvex case. Since these are the only provable guarantees in the literature so far, future work in the area would greatly benefit from setting up a benchmark with the results of this work.
Technical Quality: 4
Clarity: 3
Questions for Authors: - What are the various degrees of freedom in the privacy-utility-compute tradeoff? I assume that if we fix $\epsilon$ (privacy) and $K$ (compute), then we can tune $\sigma$ to get the desired privacy bound. Are there other degrees of freedom? If there is an extra degree of freedom, what are some rules of thumb for setting the parameters?
- Continuiung from the above, it would be nice to see some more plots exploring this tradeoff. Examples:
- Fix $K$, plot accuracy vs $\epsilon$ (or vs. $\sigma$) (I believe Fig 3c varies $K$ between the various points)
- Fix $\sigma$, plot accuracy vs. $\epsilon$ (or vs. $K$)
- Fix $\epsilon$, plot accuracy vs. $K$
- How do $C_k$ and $R_k$ scale with $k$ in the various settings of Theorem 3.2? While the authors acknowledge that the rates in the nonconvex case are pessimistic, it would be nice to discuss exactly how the iteration independent lower-bound scales with various problem parameters. Specifically, the dependence on which parameters do you think can be improved?
- Figure 3b: do all of the points have the same utility?
- Theorem 3.2(c): The strongly convex case needs $\sigma^2$ small enough. I wonder about the implications of this requirement. Does it mean that unlearning is not possible using the given framework for certain parameter values? Does it mean that a certain minimum amount of compute is necessary to make unlearning possible? It would be great to explore these quantitatively from the established bounds.
- Can the established unlearning guarantees be compared analytically to existing ones?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that Reviewer Akdu recognizes our work as a very strong and high-quality paper. We also appreciate the thoughtful comments and suggestions. We address the weaknesses and questions below.
**W1: “Question about LSI constant.”**
Thank you for your valuable suggestion. We will add some intuition about the Log-Sobolev Inequality (LSI) in our revision. Roughly speaking, LSI characterizes the tail density as “well-behaved.” For instance, the density of the form $\nu \propto \exp(-V(x))$ satisfies LSI when the tail behaves as $V(x) \approx \|x\|^2$. If the tail exponent is smaller, say $\|x\|^\alpha$ with $\alpha \in (0,2)$, it corresponds to different isoperimetric inequalities. For a more detailed discussion, please refer to our Appendix B, as well as reference [31] and the comprehensive book [26]. See also Sections 3 and 4.2 of [31] for examples of LSI constants for non-convex distributions.
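To make the discussion above self-contained, the standard form of the LSI (see, e.g., [26]) can be stated as follows: a distribution $\nu$ satisfies $C$-LSI if, for all smooth $f$,

```latex
\mathbb{E}_{\nu}\!\left[f^2 \log f^2\right]
  - \mathbb{E}_{\nu}\!\left[f^2\right] \log \mathbb{E}_{\nu}\!\left[f^2\right]
  \;\le\; 2C\, \mathbb{E}_{\nu}\!\left[\|\nabla f\|^2\right].
```

Under this normalization, the Gaussian $N(0, \sigma^2 I_d)$ satisfies $C$-LSI with $C = \sigma^2$, consistent with the Gaussian example in our response to Q2 below.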
**W2: “How significant is the benefit compared to retraining in the experiments?”**
The significance of the benefit compared to retraining can be observed from Figure 3(a) in the experiment section. In our experiments, Langevin unlearning uses only **one** unlearning update per removed data point, achieving a strong privacy guarantee (i.e., $\epsilon=1$) while maintaining similar accuracy (utility) to retraining from scratch, even when retraining is performed without noise (Retraining Optimum). In contrast, retraining would require **at least** tens of iterations even with a good initialization, and more iterations when the initialization is far from the optimum. This clearly demonstrates that the benefit of our approach compared to retraining is significant in terms of both efficiency and computational cost.
**W3: “Extension to mini-batch setting”**
This is a great question! We have discussed how to correctly extend Langevin unlearning to a mini-batch setting in Appendix A. A more thorough study of this extension is left for future work. The extension to mini-batch settings offers potential improvements in both computational efficiency and adaptability to larger datasets, making it a promising area for further investigation.
**Q1: “The privacy-utility-computation trade-off.”**
This is an excellent question! Indeed, for a given $\varepsilon$ (privacy), by fixing $K$ (unlearning steps), we can compute the smallest possible $\sigma$ (noise) to achieve the best possible model utility. Alternatively, another trade-off can be considered: fixing $\sigma$ based on the model utility that one is satisfied with, and then computing the required $K$ that needs to be executed. These approaches allow for a flexible adjustment of parameters based on the desired balance between privacy, utility, and computational cost.
Experiments to examine the privacy-utility-computation trade-off are indeed crucial, and this is the main focus of Section 4 in our manuscript. Our Figure 3(a) illustrates the trade-off between $\epsilon$ (privacy) and model utility (accuracy) for all tested methods, with $K=1$ fixed unless otherwise specified. Figure 4 further demonstrates the trade-off between $K$ and $\epsilon$ for a fixed $\sigma$. Additionally, Figure 3(c) can be interpreted as a trade-off between accuracy (y-axis on the right, inversely related to $\sigma$) and $K$ for a fixed $\epsilon$. We will provide more explicit explanations and highlight these trade-offs in our revision to ensure clarity for readers.
**Q2: “Questions about $C_k$ and $R_k$.”**
To elaborate, $C_k$ scales linearly with $k$ (for the convex case) until it reaches an iteration-independent upper bound $\tilde{C}$, and $R_k$ is roughly the difference $1/C_k^2 - 1/C_{k+1}^2$. For the non-convex case, the dependency on $k$ is worse due to the multiplicative factor $(1+\eta L) > 1$ for $L$-smooth functions and when the step size $\eta > 0$. We believe that the dependencies on all parameters ($L, \eta, k, R$, etc.) can be improved.
Consider a special case where our target distribution is exactly Gaussian $N(0, C I_d)$ with variance $C$. In this scenario, the corresponding LSI constant is $C$, but our bound provides a relatively poor estimate when $C$ is moderate. Intuitively, for sufficiently large $k$, the underlying distribution $\rho_k$ or $\nu_t$ will be close to $N(0, C I_d)$, and the tight bound should be roughly $C$, which is not adequately captured by our Theorem 3.2.
Combined with the discussion in W1, we conjecture that for losses that grow sufficiently fast as the norm of the weight goes to infinity, it may be possible to derive a much tighter estimate of the LSI constant along the (un)learning process.
**Q3: “Do all points in Figure 3(b) have similar utility?”**
Yes. As stated in lines 359-361, all points have similar utility (accuracy) of approximately 0.9 for MNIST and 0.98 for CIFAR-10, respectively.
**Q4: “Question about the strongly convex case of Theorem 3.2.”**
There might be some confusion here. As explained in the simple example provided in W1, it is not necessary to enforce $\sigma$ to be particularly small. Our statement in line 243 pertains purely to the initialization. Note that if a distribution satisfies $C$-LSI, it also satisfies $C^\prime$-LSI for all $C^\prime \geq C$. As a result, given $\sigma^2,m$, we can always choose $C_{LSI}$ to be larger than $\sigma^2/m$, given that the initialization is (roughly) a constant (delta measure) which satisfies $C$-LSI for any $C>0$.
**Q5: “Can the established unlearning guarantees be compared analytically to existing ones?”**
This is a great question. While we can compare our unlearning bounds to existing results in the literature numerically, it is challenging to make an analytical comparison due to the RDP-to-DP conversion process [21]. This complexity is similar in the differential privacy (DP) literature, where researchers typically compare Rényi Differential Privacy (RDP) bounds with DP bounds numerically via such conversions.
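To illustrate the numerical route mentioned above, below is a minimal sketch of the standard RDP-to-$(\epsilon,\delta)$-DP conversion from [21] that such comparisons rely on. The Gaussian-mechanism RDP curve used as input is purely illustrative and is not a claim about our method's actual accounting.

```python
import math

def rdp_to_dp(rdp_eps, alphas, delta=1e-5):
    """Standard conversion from [21] (Mironov, 2017): an (alpha, eps)-RDP
    guarantee implies (eps + log(1/delta) / (alpha - 1), delta)-DP.
    Given an RDP curve sampled at several orders, take the tightest bound."""
    return min(e + math.log(1.0 / delta) / (a - 1.0)
               for a, e in zip(alphas, rdp_eps))

# Illustrative input: RDP curve of a sensitivity-1 Gaussian mechanism with
# noise sigma, for which eps_rdp(alpha) = alpha / (2 * sigma**2).
sigma = 4.0
alphas = [1.5, 2, 4, 8, 16, 32, 64]
rdp_curve = [a / (2 * sigma**2) for a in alphas]
eps_dp = rdp_to_dp(rdp_curve, alphas, delta=1e-5)
```

Because the optimal order $\alpha$ depends on the curve and on $\delta$, the resulting DP bounds are compared numerically rather than in closed form, mirroring standard practice in the DP literature.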
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: Thank you for the response! I will keep my score. All the best to the authors! | Summary: This paper focuses on approximate unlearning under a similar definition of Differential privacy (DP), where privacy is defined as statistical indistinguishability to retraining from scratch. To be specific, they propose Langevin unlearning, which is a new framework based on the noisy gradient descent with privacy guarantees for approximate unlearning. The proposed framework has demonstrated many benefits like approximate certified unlearning for non-convex problems, and sequential and batch unlearning for multiple unlearning requests.
Strengths: 1. This work studies the theoretical unlearning guarantees of projected noisy gradient descent algorithms for convex problems, which provide fruitful theoretical basics for unlearning with several important algorithmic benefits, like approximate certified unlearning for the non-convex problems, and complexity saving, as well as enabling the sequential and batch unlearning.
2. The proposed Langevin unlearning based on the projected noisy gradient descent provides a unified geometric view of learning and unlearning processes, which is good for the understanding of the main idea.
3. Both theoretical results and empirical evaluation of the logistic regression task are provided to make the work comprehensive and insightful.
Weaknesses: Overall, the reviewer appreciates the idea of Langevin unlearning with the rigorous theoretical analysis for privacy guarantees, below are some comments from the weakness part which are hoped to be constructive for consideration.
1. The current presentation of this work can be further improved by clearly stating all notations where they are first introduced and by reorganizing the theoretical results with intuitive explanations and connections to the empirical evaluation or justifications.
2. Before introducing the Langevin unlearning with the main results, it could be better to explain the motivation for the proposed framework.
3. For the geometric interpretation of the relations between learning and unlearning, what is the connection between this interpretation and the later illustration of the sequential and batch unlearning scenarios? Could the authors also explain the gap between the illustration and the practical learning dynamics for the corresponding problems?
4. For the empirical aspects of Langevin unlearning, how can we understand Corollary 3.4 more intuitively in terms of the benefit obtained by the framework?
Technical Quality: 3
Clarity: 2
Questions for Authors: Minor comments and specific questions:
1. It seems there is no conclusion section; it would be better if the authors could add one summarizing the main claims.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: This work has adequately discussed the limitations and there is no significant negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer GDGF for their constructive suggestions and positive assessment. We address the weaknesses and questions below.
**W1: “Suggestions for improving the presentation.”**
We appreciate the helpful suggestions for improving the presentation of our manuscript. Following the suggestion of reviewer GDGF, we plan to add a more intuitive explanation and motivation in Section 3.1, add connections to empirical justification in Sections 3.2-3.3, and improve the clarity of notations.
**W2: “Question about the geometric interpretation for the sequential unlearning and batch unlearning scenarios, and practical learning dynamics.”**
As mentioned in Section 3.1, the learning process can be conceptualized as inducing a regular polyhedron in the space of model weight distribution $\nu_{\mathcal{D}}$. Each vertex represents a model weight distribution $\nu_{\mathcal{D}}$, and adjacent vertices correspond to distributions arising from adjacent datasets $\mathcal{D}$ and $\mathcal{D}^\prime$. The edge length between vertices is indicated by the initial privacy loss $\varepsilon_0$. The unlearning process effectively involves moving from one vertex $\nu_{\mathcal{D}}$ to its adjacent vertex $\nu_{\mathcal{D}^\prime}$ until we are $\varepsilon$-close to the desired distribution.
Within this framework, sequential unlearning is interpreted as moving towards different neighboring vertices in sequence (i.e., $\nu\_{\mathcal{D}} \rightarrow \nu\_{\mathcal{D}\_1} \rightarrow \nu\_{\mathcal{D}\_2}...$), as depicted in Figure 2. It is important to highlight that the unlearning process halts once we are $\varepsilon$-close to the current target distribution. Consequently, the "initial distance" for the subsequent unlearning request is not merely $\varepsilon_0$. Utilizing our geometric interpretation, we apply the weak triangle inequality to provide an upper bound for this initial distance in terms of $\varepsilon_0$ and $\varepsilon$. This allows us to employ Theorem 3.2 to ensure the desired unlearning guarantee.
In the context of batch unlearning, the notion of data adjacency is altered so that datasets $\mathcal{D}$ and $\mathcal{D}^\prime$ are considered adjacent if they differ in $S$ points. This modifies the underlying regular polyhedron, where the edge length (initial distance) now becomes $\varepsilon_0^{(S)}$. Once $\varepsilon_0^{(S)}$ is determined, Theorem 3.2 can again be applied to guarantee successful unlearning.
It is also essential to note that the aforementioned discussion pertains to the worst-case scenario. In practical situations, adjacent (un)learning processes are typically closer in terms of distribution. Recent evidence supporting this claim includes a study [38] demonstrating that models trained using DP-SGD with $\varepsilon \approx 10^{8}$ perform optimally against empirical adversarial attacks, such as membership inference attacks, compared to other popular heuristic defenses. We expect similar behavior to persist for Langevin unlearning, highlighting the practical potential of our approach.
**W3: “Understand the benefit of Corollary 3.4 more intuitively.”**
The primary benefit of Corollary 3.4 is that it facilitates Langevin unlearning for sequential unlearning requests. Sequential unlearning holds significant practical value, particularly in light of regulations like the GDPR that explicitly state, “the data has to be erased without undue delay.” While we do not assert that sequential unlearning is superior to batch unlearning, it is crucial to recognize that these two approaches are orthogonal and can be effectively combined. For instance, one could sequentially unlearn $S\times T$ points by removing $S$ points in each iteration of the process.
**Q1: “No conclusion part”**
We acknowledge reviewer GDGF's suggestion and appreciate the feedback. We will incorporate a conclusion section in the manuscript to summarize our key results and contributions. | Summary: This paper proposes a novel framework for machine unlearning through noisy gradient descent with Langevin dynamics analysis. This framework has privacy guarantees for approximate unlearning problems, and it unifies the differential privacy and machine unlearning processes, yielding benefits that include approximate certified unlearning for non-convex problems, complexity savings compared to retraining, and sequential and batch unlearning for multiple unlearning requests.
Strengths: 1. The authors propose a novel unlearning framework called Langevin unlearning, which is based on noisy gradient descent. This framework provides a new approach to approximate machine unlearning with privacy guarantees.
2. The paper presents a theory that unifies the differential privacy (DP) learning process with the privacy-certified unlearning process, offering algorithmic benefits such as approximate certified unlearning for non-convex problems.
3. This paper provides empirical results showing the practical effectiveness of the proposed framework compared to existing methods.
Weaknesses: 1. The writing is not easy for readers unfamiliar with differential privacy to follow. I think including more details and intuitive explanations in the preliminaries would help.
2. Although the paper discusses the computational benefits over retraining, the empirical results are limited to toy datasets such as MNIST and CIFAR10. Actual scalability and computational complexity of the Langevin unlearning approach for large-scale machine learning models and datasets need further investigation.
3. The effect of the method is sensitive to hyperparameters such as the standard deviation of the noise.
Technical Quality: 3
Clarity: 2
Questions for Authors: How does the unlearning process ensure other learned data are not affected? Are there any empirical results or theoretical guarantees?
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: While the paper claims to theoretically handle non-convex problems, the practical applicability of the framework for such problems might be limited due to the reliance on certain constants that are either undetermined or can only be loosely determined. The applications may still be limited to strongly convex problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer tZ8F for their helpful comments and positive evaluation. We address the weaknesses and questions below.
**W1: “The writing is not easy for readers unfamiliar with differential privacy to follow.”**
We appreciate reviewer tZ8F's insight on making our manuscript more accessible to a broader audience, including those outside the differential privacy (DP) community. To address this concern, we will incorporate more intuitive explanations into Figure 1, Sections 2.1, and 3.1 to make the concepts more approachable. This will help readers unfamiliar with DP better understand our work.
**W2: “The effect of the method is sensitive to hyperparameters such as the standard deviation of the noise.”**
It is important to note that the noise scale (standard deviation $\sigma$) is not a hyperparameter to be tuned arbitrarily. In both DP and theoretical unlearning literature, a privacy constraint $(\epsilon,\delta)$ is specified either by regulatory policies or user agreements. We use our theoretical results (specifically, Theorem 3.2) to determine the smallest possible $\sigma$ that satisfies the given privacy constraint while maintaining the model's utility as much as possible. Therefore, there is no “sensitivity issue” regarding the noise scale, as our theoretical framework provides a privacy-utility trade-off for any given privacy constraint under Definition 2.4.
**Q1: “How does the unlearning process ensure other learned data are not affected? Are there any empirical results or theoretical guarantees?”**
Our Theorem 3.2 provides a theoretical guarantee that the model weight distribution after the unlearning process will be approximately the same as that obtained from retraining the model from scratch without the data that needs to be removed. This essentially means that the other learned data are approximately unaffected by the unlearning process. The gold standard in machine unlearning is retraining from scratch, and our approach closely aligns with this standard by ensuring minimal impact on the remaining data.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed explanations. However, I think more empirical results would make the claims more convincing. I will maintain my scores. | Summary: The paper proposes Langevin unlearning, a new perspective of noisy gradient descent for approximate machine unlearning, which unifies the differential privacy learning process and privacy-certified unlearning process with many algorithmic benefits. The key technique of the proposed method is to carefully track the constant of log-Sobolev inequality. The authors validate the practicality of Langevin unlearning through experiments on MNIST and CIFAR10, and demonstrate its superiority against gradient-descent-plus-output-perturbation based approximate unlearning.
Strengths: 1. The authors innovatively interpret the relations between learning and unlearning via a geometric view.
2. The authors provide a rigorous theoretical analysis of privacy guarantees and a framework for certified non-convex approximate unlearning.
3. The theorems and proofs are well-presented.
4. The proposed method supports insufficient training as well as sequential and batch unlearning, aligning well with practical scenarios.
5. The paper is well-organized and the writing is clear overall.
Weaknesses: 1. The experiments are not extensive enough and do not consider non-convex problems.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Is it possible to provide experiments on (toy) non-convex problems?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: 1. Most experiments are binary classification problems, with multiclass classification experiments on CIFAR-10-multi-class deferred to the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 1Nom for their praise and positive feedback. We address their sole question below.
**Q1: “Is it possible to provide experiments on (toy) non-convex problems?”**
Indeed, as mentioned in Section 1.1, the current theoretical bound might not be sufficiently tight for non-convex problems. However, it is possible to conduct experiments on toy non-convex problems such as logistic regression with non-convex regularization, including the minimax concave penalty (MCP) or the smoothly clipped absolute deviation (SCAD) penalty. A canonical toy example of a non-convex function is provided in [31]: $f(x) = \frac{1}{2}\log(1+\|x\|^2) + \frac{1}{20}\|x\|^2$, which is related to Student’s t-regression with a Gaussian prior.
Literature indicates that the distribution $\nu(x) \propto \exp(-f(x))$ satisfies the Log-Sobolev Inequality (LSI) with a constant roughly around $176$ [31]. Also, it is known that the Langevin diffusion process with $f(x)$ as its potential function and noise variance $1$ has a stationary distribution $\nu(x)$, also known as the Gibbs measure. Furthermore, $f(x)$ is known to be $1.1$-Lipschitz and $1.6$-smooth within the $\ell_2$ ball of radius $R=1$.
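As a purely illustrative aside (an editorial sketch, not part of the original rebuttal), the toy potential quoted above and its gradient can be written down directly, and the stated 1.1-Lipschitz bound on the gradient inside the unit ball can be spot-checked numerically; the dimension and sampling scheme below are arbitrary choices.

```python
import numpy as np

# Editorial sketch: the toy non-convex potential
# f(x) = 1/2 log(1 + ||x||^2) + 1/20 ||x||^2 and its closed-form gradient,
# plus a numerical spot-check that sampled gradient norms inside the
# l2 unit ball stay below the quoted Lipschitz constant 1.1.

def f(x):
    sq = float(np.dot(x, x))
    return 0.5 * np.log1p(sq) + sq / 20.0

def grad_f(x):
    sq = float(np.dot(x, x))
    return x / (1.0 + sq) + x / 10.0

rng = np.random.default_rng(0)
max_grad_norm = 0.0
for _ in range(1000):
    x = rng.normal(size=5)                     # dimension 5 is arbitrary
    x *= rng.uniform() / np.linalg.norm(x)     # random point in the unit ball
    max_grad_norm = max(max_grad_norm, float(np.linalg.norm(grad_f(x))))

assert max_grad_norm <= 1.1  # consistent with the quoted Lipschitz bound
```

On [0, 1] the gradient norm is r/(1+r^2) + r/10, which peaks at 0.6 at r = 1, so the sampled maximum sits comfortably under the quoted constant.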
When the step size $\eta = 1$ and $\sigma = 1$, the iteration-independent LSI constant bound in Theorem 3.2 approximately evaluates to $8 \times 10^5$, which is a significant overestimation and results in impractical privacy bounds. This highlights the reasons we did not conduct non-convex experiments.
Improving the unlearning analysis for non-convex problems, particularly a better estimate of the LSI constant, remains an interesting and significant future direction, as already mentioned in our outlined future work.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response. I will maintain my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models and Time-Dependent Layer Normalization | Accept (poster) | Summary: The paper proposes a multi-resolution network for diffusion models that emphasizes learning features across resolutions. The method further introduces a time-dependent layer norm to boost model performance with fewer parameters compared to AdaLN in DiT. The proposed network demonstrates SoTA performance, with FID scores of 1.70 and 2.89 on ImageNet 256 and 512, respectively.
Strengths: - The paper addresses image distortion caused by varying patch sizes by integrating a multi-resolution structure into the network. The choice of patch sizes involves a trade-off between computational complexity and model performance: larger patch sizes reduce computational complexity but degrade performance.
- The method finds a balance between compute complexity and performance by introducing a multi-branch architecture and multi-resolution loss to decompose the learning process from low to high resolution. The idea is a substantial contribution. Through quantitative and qualitative results, the method can alleviate distortion of image generation and achieve SoTA FID scores.
- The time-dependent layer norm is a simplified form of AdaLN with less intensive parameters by removing the cumbersome MLP layer and rearranging class embedding and time embedding for parameter-efficient time conditioning.
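A purely illustrative sketch of the parameter saving noted above (the linear-in-t blending, shapes, and hidden width are editorial guesses, not the paper's actual TD-LN formulation):

```python
import numpy as np

# Editorial sketch only: contrasting the parameter count of an AdaLN-style
# modulation (a linear map from a d-dim embedding to per-channel scale and
# shift) with a guessed MLP-free layer norm whose affine parameters are
# interpolated by the timestep t. The width d and the blending scheme are
# assumptions, not the paper's design.

d = 1152                                 # assumed DiT-XL-like hidden width

adaln_params = d * (2 * d) + 2 * d       # weight + bias of one linear layer
tdln_params = 2 * (2 * d)                # two learned (gamma, beta) pairs

def tdln(x, t, g0, b0, g1, b1, eps=1e-6):
    """LayerNorm whose affine parameters interpolate linearly in t in [0, 1]."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    g = (1.0 - t) * g0 + t * g1
    b = (1.0 - t) * b0 + t * b1
    return g * (x - mu) / np.sqrt(var + eps) + b
```

Under these assumptions the MLP-free variant is roughly d/2 times smaller per block (about 2.66M vs. 4.6K parameters at d = 1152), consistent in spirit with the "less intensive parameters" observation.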
Weaknesses: - The discussion between the method and cascaded methods like "Cascaded diffusion models" and "Matryoshka diffusion models" should be included to highlight the advantages of the method.
- In Table 1, it needs to be updated with some of the latest SoTA methods, such as MDTv2 with the best FID of 1.58 and PaGoDA with the best FID of 1.56 on ImageNet 256x256; not to compete with them, but to get an overview picture of the latest methods.
- The proposed multi-scale diffusion loss is also introduced in SimpleDiffusion. So, the authors need to mention the difference in the paper.
- Equation 2: have the authors tried concatenation instead of adding the upsampled features directly to the larger-resolution features of the next branch?
- The current method only uses the output features of the previous branch and injects them into the start of the next branch. However, this design lacks interconnections between blocks from the low-resolution and high-resolution branches. Ideally, there should be skip-connections across branches. What is the motivation behind this? If authors have not tested this, it is encouraged to do so.
- In table 2, what is the reason why using AdaLN-Zero with Multi-branch (row 3) causes a bad result?
- The root problem of distortion boils down to the use of patch size, which is the main motivation of the work. However, in the design of the multiscale network, the authors also patchify the input image to the corresponding resolution for each branch. I wonder why the authors do not use different-resolution noisy images as the input to each branch, like Matryoshka Diffusion Models. I think this would be more effective at completely removing the distortion problem. As shown in Figure 5, the method still exhibits a certain level of distortion.
- Sampling speed: it would be helpful to include a comparison of sampling time with baselines.
- Figure 3: What is the red line meaning?
Misc: L37, L130: the figure 7 of DiT paper is incorrectly linked.
Ref:
- Gao, Shanghua, et al. "MDTv2: Masked Diffusion Transformer is a Strong Image Synthesizer." arXiv preprint arXiv:2303.14389 (2023).
- Kim, Dongjun, et al. "PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher." arXiv preprint arXiv:2405.14822 (2024).
- Hoogeboom, Emiel, Jonathan Heek, and Tim Salimans. "simple diffusion: End-to-end diffusion for high resolution images." International Conference on Machine Learning. PMLR, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As pointed out in Weaknesses, the method still has a certain distortion rate which is due to the use of strided convolution (a form of patchification) at the start of each branch. I think the author should include this in the revised manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments, and we carefully address the concerns below.
> W1: The discussion between the method and cascaded methods like "Cascaded diffusion models" and "Matryoshka diffusion models" should be included to highlight the advantages of the method.
Thank you. We will add the following paragraph to the related work section.
***Cascaded diffusion models.** To generate images with high resolution while solving the large computational complexity, it is commonly to use Cascaded diffusion models [a,b,c], where a first diffusion model is used to generate images at lower resolution, and then one or multiple diffusion models are used to gradually generate the super-resolution versions of the initial generation. Recently, Gu et al [d] propose using one diffusion model to generate images at different resolutions at the same time via a Nested U-Net and a progressive training recipe. However, the key insight of their multi-resolution design is that high-resolution images can be generated with smaller and affordable models if the low-resolution generation is used as conditioning. In contrast, we observe that different architectures (e.g., transformer and convolution layers) have different behaviors (such as performance, speeds, etc) at different resolutions. Therefore, we propose a multi-resolution network design in the ***feature space***, and use different architectures to handle features at different resolutions. This enables us to utilize the advantages of different architectures within a single diffusion model, and achieve the best performance while maintaining model efficiency.*
- *[a] Ho, Jonathan, et al. "Cascaded diffusion models for high fidelity image generation."*
- *[b] Ramesh, Aditya, et al. "Hierarchical text-conditional image generation with clip latents."*
- *[c] Saharia, Chitwan, et al. "Photorealistic text-to-image diffusion models with deep language understanding."*
- *[d] Gu, Jiatao, et al. "Matryoshka diffusion models."*
> W2: In table 1, update with some latest SoTa methods.
Thanks for the suggestion. We will add them to the full table. However, it is noteworthy that these methods propose new strategies to train diffusion models, such as using mask latent modeling schemes or progressive training, which are orthogonal to the main focus of this work (a multi-branch network design). We believe that these strategies can also be used to train our model and achieve better performance. Additionally, PaGoDA was on arXiv (May 23, 2024) after our submission to NeurIPS.
> W3: The proposed multi-scale diffusion loss is also introduced in SimpleDiffusion. So, the authors need to mention the difference in the paper.
Thanks for the suggestion. We already cited SimpleDiffusion in L230, and will discuss more in the revision. In short, the multi-scale training loss in SimpleDiffusion is used to balance the loss on different levels of details when training diffusion models for high resolutions. The multi-scale training loss used in our DiMR is to supervise different branches at different resolutions. Another crucial difference is the underlying denoising backbone, where SimpleDiffusion uses U-Net, and DiMR uses a multi-branch network.
> W4: Have the authors used concat instead of adding the upsampling features directly to the larger-resolution features of the next branch?
We actually considered both strategies at the early stage of this project. We found that concatenating and directly adding the upsampled features make no difference experimentally. They achieve very similar performances, as shown in Table J.
***Table J**: Comparison between upsampling and concatenation on ImageNet-256*
| | Epoch | #Params. | Gflops | FID-50K w/o CFG | FID-50K w CFG |
|-----------------------------|-------|----------|--------|-----------------|---------------|
| DiMR-XL/2R w/ upsampling | 400 | 505M | 160 | 4.87 | 1.77 |
| DiMR-XL/2R w/ concatenation | 400 | 506M | 161 | 5.01 | 2.06 |
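For readers skimming this exchange, the two fusion options being compared can be sketched in a few lines (an editorial illustration only; the nearest-neighbor upsampling and the 1x1-style projection are assumptions, not the paper's exact operators):

```python
import numpy as np

# Editorial sketch of the two cross-branch fusion options discussed above:
# (a) upsample the low-resolution features and add them to the next branch,
# (b) upsample, channel-concatenate, then project back to the branch width.
# Nearest-neighbor upsampling and the projection shape are assumptions.

def upsample2x(x):
    """(C, H, W) -> (C, 2H, 2W) by nearest-neighbor repetition."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_add(low, high):
    """Addition-style fusion; channel counts must already match."""
    return high + upsample2x(low)

def fuse_concat(low, high, proj):
    """Concatenation-style fusion with a 1x1-conv-like projection `proj`
    of shape (C_out, C_high + C_low)."""
    z = np.concatenate([high, upsample2x(low)], axis=0)
    return np.tensordot(proj, z, axes=([1], [0]))
```

The two variants differ only in this fusion step (concatenation adds the projection's parameters), so similar FID under both, as Table J reports, is plausible.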
> W5: Ideally, there should be skip-connections across branches. What is the motivation behind this? If authors have not tested this, it is encouraged to do so.
Thanks for the interesting suggestion. We have experimented with skip-connections, as detailed in Table K. Specifically, we connect the input of each layer in the latter branch with the output of the corresponding layer in the former branch using a skip-connection block. This block consists of two layer normalization layers with a pixel shuffle upsampling layer in between. Our results indicate that skip-connections have a marginal, or slightly negative, influence on our DiMR model.
***Table K**: Ablation on skip connections*
| | skip-connections | Epoch | #Params. | Gflops | FID-50K w/o CFG | FID-50K w CFG |
|------------|------------------|-------|----------|--------|-----------------|---------------|
| DiMR-XL/2R | NO | 400 | 505M | 160 | 4.87 | 1.77 |
| DiMR-XL/2R | YES | 400 | 542M | 165 | 4.45 | 1.96 |
> W6: In table 2, what is the reason why using AdaLN-Zero with Multi-branch (row 3) causes a bad result?
Our observation is that AdaLN-Zero may not perform well on convolution layers, probably because the MLP treats data in 1D, which may not handle the 2D structure of the convolution layers very well. Also, our multi-branch design (w/ transformer and convolution layers) may require more careful tuning of AdaLN-Zero, which was originally designed for DiT (single-branch, pure transformer layers).
Additionally, some other papers (e.g., see Sec. 3.1 and Fig. 2b in U-ViT paper [e]) also find that AdaLN does not perform well.
*[e] Bao, Fan, et al. "All are Worth Words: A ViT Backbone for Diffusion Models."*
**We address Weakness 7-10 and Limitation 1 in "Part2 of our rebuttal to Reviewer Wwps"**
---
Rebuttal 2:
Title: Part2 of our rebuttal to Reviewer Wwps: addressing Weakness 7-10 and Limitation 1
Comment: > W7: I wonder why the author do not use different-resolution noisy images as the input to each branch like Matryoshka Diffusion Models. I think this is more effective to completely remove distortion problems.
We thank the reviewer for the suggestion. In DiMR, we have one specific branch that uses the original resolution without patchification, which is designed to capture the finer visual details. Using different-resolution noisy images is a very good idea, but we think that resizing the input images to a smaller resolution may also cause the loss of details. Additionally, the FID score of 6.62 reported by Matryoshka Diffusion Models (vs. DiMR 1.70) may also suggest that this would only provide marginal improvements. Nevertheless, we agree that designing a method without any sort of patchification would further reduce the distortion rate. We leave it for future work.
> W8: Sampling speed: it is valid to include a comparison of sampling time with baselines.
Thanks for the suggestion! We provide the comparison of sampling time in the following table. We test both methods with batch size 1, and compare the sampling time to generate one image on an A100. We can see that our model also has a very high sampling speed.
***Table L**: Comparison of sampling speed. We report Sampling Speed (second/sample) here.*
| | ImageNet-256 | ImageNet-512 |
|----------------|--------------|--------------|
| DiT-XL | 3.3 | 6.3 |
| DiMR-XL (Ours) | 2.4 | 2.6 |
> W9: Figure 3: What is the red line meaning?
Thanks for pointing it out. It means 0.1 threshold. We will make it clear in the revision.
> W10: L37, L130: the figure 7 of DiT paper is incorrectly linked.
Thank you. We actually did intend to refer to the Figure 7 of DiT paper, which qualitatively shows that decreasing patch size reduces distortion. We note that they are not hyper-linked.
> L1: The method still has a certain distortion rate which is due to the use of strided convolution (a form of patchification) at the start of each branch. I think the author should include this in the revised manuscript.
Thanks for the suggestion. We will include this in the limitation section. It is noteworthy that completely solving distortion in image/video generation is an extremely challenging problem. In this work, we have already significantly reduced the distortion rate compared to U-ViT and DiT, which is quite a step towards the goal.
---
Rebuttal 3:
Title: Response
Comment: I appreciate the authors' effort in addressing all of my concerns and I am happy with these answers. I will raise my final score and vote for acceptance, as the contributions are significant for advancing the growth of diffusion models.
---
Rebuttal Comment 3.1:
Title: Thank You for Your Review and Support
Comment: Thank you very much for your insightful feedback and for considering our responses. We're glad we could address all your concerns satisfactorily. Your support in recommending our paper for acceptance is greatly appreciated. If you have any further questions, please feel free to let us know. | Summary: This paper proposes to replace the original pure transformer blocks in DiT with a multi-resolution network. In detail, transformer blocks are employed for the low resolution and Conv blocks are used for the remaining higher resolutions. An additional time-conditioning block is designed for the Conv blocks. The effectiveness of this method is validated with numerical experiments.
Strengths: 1. This paper proposes a multi-resolution network, consisting of both transformer and conv blocks, to boost the performance of image generation. Such a design is rational. To fit the conv blocks, this paper also introduces an additional time-conditioning block.
2. This method achieves better performance than DiT.
Weaknesses: 1. It is unclear where the performance gain comes from. Intuitively, the multi-resolution design alone could already bring better performance, but this paper proposes the combination of transformer and conv blocks. I wonder if the design of the transformer at low resolution is really necessary.
2. The scalability of this network may be limited due to the introduction of conv blocks, for example, for networks with more than 1B parameters.
3. This paper mainly claims to solve the large computational complexity induced by the long token length. But in the real long-token-length case (512x512 images), the performance gain of this method is marginal compared to DiT. This raises concerns about the real performance of this method on larger images, e.g., >512x512.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It is suggested to identify the real core contribution behind the performance gain of this paper. An ablation study replacing the transformer block with a conv block is suggested.
2. The effectiveness of this method on larger image sizes and networks is a concern. The authors are suggested to provide further discussion on these points if they wish to fully surpass the original DiT.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations of this paper well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments, and we carefully address the concerns below.
> W1: It is unclear what the performance gain comes from. Is the design of transformer in low resolution really necessary?
We thank the reviewer for the suggestion. Below, our experiments in Table G demonstrate that ***using transformer blocks at the lowest resolution is critical for achieving good performance while maintaining efficiency.*** Replacing the transformer blocks in the lowest-resolution branch results in a DiMR variant that has a multi-resolution design with ***pure*** convolution blocks. However, it performs worse than combining transformer blocks with convolution blocks at different resolutions.
***Table G**: Ablation of different architectures on ImageNet-256.*
| | 1st branch (lowest resolution) | 2nd branch | Epoch | FID-50K w/o CFG | FID-50K w CFG |
|---------------------------------|-------------------------------|------------|-------|-----------------|---------------|
| DiMR-XL/2R (pure conv) | ConvNeXt | ConvNeXt | 400 | 5.75 | 2.09 |
| DiMR-XL/2R (transformer + conv) | Transformer | ConvNeXt | 400 | 4.87 | 1.77 |
> W2: The scalability of this network may be limited due to the introduction of conv blocks. For example, for network parameters larger than 1B.
We thank the reviewer for the suggestion. To verify it, we trained a 1B variant of DiMR (i.e., DiMR-H/3R) on ImageNet-512 during the rebuttal period. ***As shown in Table H below, performance significantly improves when increasing the model size from 505M to 1.02B, demonstrating the scalability of DiMR.*** We note that DiMR-H/3R has been trained with only 300 epochs due to limited GPUs and the short rebuttal period. However, we still observe improvement as training progresses. We emphasize that this experiment was conducted only once due to time and GPU constraints. Therefore, better performance may be achieved with a more careful network design and optimized training hyperparameters.
***Table H**: Scaling up DiMR on ImageNet-512*
| | Epoch | #Params. | Gflops | FID-50K w/o CFG | FID-50K w CFG |
|-------------|-------|----------|--------|-----------------|---------------|
| DiMR-XL/3R | 400 | 525M | 206.1 | 8.56 | 3.23 |
| DiMR-H/3R | 300 | 1.03B | 399.6 | 7.74 | 2.86 |
> W3: In the real long token length case (image 512x512), the performance gains of this method are marginal compared to DiT, eg. >512x512.
We thank the reviewer for the comment. To clarify, on ImageNet-512, we adopt a three-branch design, DiMR-XL/3R, where the transformer branch has a patch size of 4. This design strikes a better balance between accuracy and speed, following U-ViT-H/4. However, the design may lead to inferior performance to a model using a patch size of 2, like DiT-XL/2. As a result, we think ***U-ViT-H/4 is a more proper baseline for DiMR-XL/3R (both use the same patch size for transformer branch), as the patch size significantly affects generation accuracy.*** It is also evidenced by the Table 4 of DiT paper, where **DiT models with a patch size of 4 are 1.47 to 2.21 times worse than the same models with a patch size of 2** (e.g., DiT-XL/4 43.01 vs. DiT-XL/2 19.47 FID-50k w/o CFG). Therefore, DiT-XL/2 needs to handle heavy computational burdens to achieve good results. Specifically, ***on ImageNet-512, DiT-XL/2 is 2.55 times slower than our DiMR-XL/3R per forward pass (525 Gflops vs. 206 Gflops under similar model parameters).*** Nevertheless, DiMR-XL/3R still outperforms the best and heaviest DiT-XL/2 by 0.15 FID, and surpasses the proper baseline U-ViT-H/4 by 1.16 FID.
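The patch-size arithmetic behind this discussion can be made concrete with a tiny schematic (an editorial illustration, not a measurement of DiMR; the 64x64 latent for 512x512 images assumes a standard f=8 autoencoder, and the quadratic cost is the usual self-attention scaling):

```python
# Editorial illustration of why patch size dominates transformer cost:
# a latent of side H split into p x p patches yields (H/p)^2 tokens, and
# self-attention cost scales with the square of the token count.
# The 64x64 latent for 512x512 images assumes a standard f=8 autoencoder.

def n_tokens(H, p):
    return (H // p) ** 2

def rel_attention_cost(H, p, p_ref=2):
    """Attention cost relative to a patch-size-p_ref model (schematic)."""
    return (n_tokens(H, p) / n_tokens(H, p_ref)) ** 2
```

At H = 64, moving from patch size 2 to patch size 4 cuts the token count from 1024 to 256 and the schematic attention cost by 16x, which is why patch-2 models like DiT-XL/2 carry a much heavier computational burden than patch-4 baselines.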
In addition, our model can also be improved by reducing the patch size and increasing the computational complexity. During the rebuttal period, we also tried 512 generation with 2 branches (which equals a patch size of 2). We report the FID-10K scores during training in Table I. We observe that this further improves performance, but it also increases computational complexity.
***Table I**: ImageNet-512 generation with 2 branches (DiMR-XL/2R) will further improve performance*
| | #Params. | Gflops | FID-10K w/o CFG at 80 epochs | FID-10K w/o CFG at 160 epochs | FID-10K w/o CFG at 240 epochs |
|------------|----------|--------|------------------------------|-------------------------------|-------------------------------|
| DiMR-XL/3R | 525M | 206 | 18.74 | 14.84 | 13.90 |
| DiMR-XL/2R | 515M | 619 | 17.56 | 13.68 | 12.95 |
> Q1: Identify the real core contribution for performance gain of this paper. The ablation study of replacing the transformer block with conv block is suggested.
Please refer to W1 for the detailed discussion.
> Q2: The effectiveness of this method on larger image size and networks are concerned.
Please refer to W3 for the detailed discussion.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I decide to raise the score to 5.
---
Reply to Comment 1.1.1:
Title: Thank You for Your Review and Support
Comment: Thank you once more for your valuable suggestions and for considering our responses! If you need any further information or clarification, please feel free to contact us! | Summary: This paper introduces a diffusion model named DiMR, which incorporates a Multi-Resolution Network and Time-Dependent Layer Normalization to enhance image generation quality. Traditional diffusion models, often limited by a trade-off between computational efficiency and visual fidelity, struggle with image distortion due to the coarse resolution of input data processing in Transformer-based designs. DiMR addresses this by refining image details progressively across multiple resolutions, reducing distortions significantly. The new light-weight Time-Dependent Layer Normalization technique introduced in the model embeds time-based parameters into the normalization process efficiently. Demonstrated through benchmarks on ImageNet 256x256 and ImageNet 512x512, DiMR outperforms competing models.
Strengths: + The paper is well-written and easy to follow, featuring clear diagrams and detailed captions that enhance understanding.
+ The introduction of Time-Dependent Layer Normalization (TD-LN) is particularly interesting. The use of PCA analysis to justify this approach provides a strong motivation and highlights its innovativeness.
+ The experimental section of the paper effectively demonstrates the effectiveness of the proposed method.
Weaknesses: - The model has a higher FLOPs count compared to a DiT of similar parameter size, raising concerns about slower training speeds.
- It is unclear whether this cascaded approach can support multi-resolution training.
Technical Quality: 2
Clarity: 3
Questions for Authors: - This work appears to be a cascade diffusion model based on DiT. How does it perform compared to traditional UNet-based cascade diffusion models?
- What are the training and convergence speeds of this model, particularly in comparison to the baselines?
- The proposed method adopted the class token similar to that used in U-ViT for injecting class conditions. Is it possible to replace this with an approach similar to Time-Dependent Layer Normalization (TD-LN)?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments, and we carefully address the concerns below.
> W1: The model has a higher FLOPs count compared to a DiT of similar parameter size, raising concerns about slower training speeds.
Thanks for the comment. To address it, we provide a training speed analysis in Table D. We note that on ImageNet-256, when comparing with DiT-XL/2, our DiMR-XL/2R only increases the computation by 41 Gflops, which is relatively negligible. Additionally, we also experiment with a much faster DiMR variant (i.e., DiMR-XL/3R) on ImageNet-256. As shown in the following table, DiMR-XL/3R, even with much smaller Gflops, still surpasses DiT-XL/2 by a large margin.
***Table D**: Comparison of training speed on ImageNet-256. For training speed, we test all models with batch size of 256 on 8 A100s.*
| | #Params. | Gflops | FID-50K w/o CFG | FID-50K w CFG | Total training time |
|-------------------|----------|--------|-----------------|---------------|---------------------|
| DiT-XL/2 | 675M | 119 | 9.62 | 2.27 | 24.8 days |
| Large-DiT-3B | 3B | 928 | - | 2.10 | - |
| DiMR-XL/2R (ours) | 505M | 160 | 4.87 | 1.77 | 11.8 days |
| DiMR-XL/3R (ours) | 502M | 48 | 5.58 | 1.98 | 10.0 days |
Furthermore, on ImageNet-512, our DiMR-XL/3R is more compute-efficient than DiT-XL/2. Specifically, our DiMR-XL/3R has ***206*** Gflops while DiT-XL/2 has ***525*** Gflops, indicating that we are more than 2x faster at each forward pass, making ours much more training-efficient for high-resolution image generation.
> W2: Can this cascaded approach support multi-resolution training?
We thank the reviewer for the interesting idea. Our DiMR can indeed perform multi-resolution training and enable multi-resolution generation with a single model. Due to the limited time of the rebuttal period, we only tried multi-resolution training with two resolutions, 256x256 and 512x512, and we report the results in Table E. We observe that the same DiMR-XL/2R model can generate both 256x256 and 512x512 images while achieving good FID scores. For reference, we also report the best DiT model at each resolution. Our model is comparable (even superior) to the best resolution-specific DiT models.
***Table E**: Multi-resolution generation. For DiMR-XL/2R, we use the same model to generate 50K images for each resolution and compute the FID-50K score. For DiT-XL, we report their best model trained on each specific resolution.*
| | #Params. | Gflops | FID-50K on 256x256 | FID-50K on 512x512 |
|-----------------------------------------------|----------|--------|--------------------|--------------------|
| DiMR-XL/2R (multi-resolution generation) | 505M | 160 | 1.79 | 3.18 |
| DiT-XL/2 (single 256-resolution generation) | 675M | 119 | 2.27 | x |
| DiT-XL/2 (single 512-resolution generation) | 675M | 525 | x | 3.04 |
> Q1: How does DiMR perform compared to traditional UNet-based cascade diffusion models?
As discussed in the paper, DiMR adopts a framework of *feature cascade*, instead of *image cascade* (like traditional UNet-based cascade diffusion models). Specifically, the *image cascade* approaches (e.g., Cascaded Diffusion Models) first generate low-resolution images by a base diffusion model, and then improve the details in the subsequent super-resolution diffusion models. By contrast, DiMR generates feature maps progressively from low-resolution to high-resolution, ***all within a single model***.
Below, Table F compares our results to cascade diffusion models, namely the Cascaded Diffusion Model (CDM) and the Matryoshka Diffusion Model (MDM), a special type of cascaded diffusion model in which multi-resolution images are generated simultaneously via a Nested U-Net. Note that they only report results on ImageNet-256 generation. As shown in the table, our model surpasses them by a large margin.
***Table F**: Comparison with cascade diffusion models on ImageNet-256.*
| | FID-50K without CFG | FID-50K with CFG |
|------|---------------------|------------------|
| CDM | - | 4.88 |
| MDM | 8.92 | 6.62 |
| DiMR-XL/2R (ours) | 4.50 | 1.70 |
> Q2: What are the training and convergence speeds of this model, particularly in comparison to the baselines?
Please refer to W1 for the detailed discussion.
> Q3: The proposed method adopted the class token similar to that used in U-ViT for injecting class conditions. Is it possible to replace this with an approach similar to Time-Dependent Layer Normalization (TD-LN)?
We thank the reviewer for the interesting question. We experimented with this idea, but it did not work. In the formulation of TD-LN (Equ (5) in the paper), the time token is considered as a scalar, instead of an embedding vector as in adaLN (Equ (3) in the paper). The diffusion (or denoising) process is a monotonic function of time, allowing us to easily encode time as a scalar. On the other hand, the class token contains more complex information, and therefore simply encoding it as a scalar in our Equ (5) does not work. In the end, we resort to U-ViT's simple yet effective strategy of feeding the class tokens into the transformer branch. A more careful exploration (e.g., adding a lightweight MLP for the class tokens) may make it work, and we leave it for future work. | Summary: This paper works on efficient diffusion model backbone architecture by using transformer and ConvNeXt architectures, respectively, on small and large resolutions of the same inputs, to leverage the strengths of both architectures and alleviate the distortion problem.
Strengths: 1. The writing is easy to read and follow.
2. The idea is simple and effective.
3. They conduct rich experiments to support the idea.
Weaknesses: 1. This work utilizes two standard architectures, incorporating only minor design elements such as the TD-LN. It would be more advantageous to propose a new, general, and elegant architecture that effectively combines the strengths of the attention layer and the convolution layer.
Technical Quality: 3
Clarity: 3
Questions for Authors: Are there any ablation studies on the number of different resolutions used? For instance, what happens if we use one small resolution with DiT and one normal resolution with a larger conv net?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments, and we carefully address the concerns below.
> W1: This work utilizes two standard architectures; more advantageous to propose a new, general, and elegant architecture that combines both.
We thank the reviewer for the suggestion, which we fully agree with. Additionally, we believe that our proposed DiMR is one step towards the goal, by effectively combining the strengths of the attention and convolution layers. Specifically,
1. ***DiMR is clean and elegant:*** As shown in Fig. 2 of the paper, DiMR only uses the standard multi-head attention layers, depthwise convolutions, and MLP layers, along with the proposed TD-LN layers. No other complex designs or operations are involved. As a result, DiMR showcases a straightforward and effective network design.
2. ***DiMR is general:*** Since only standard operations are employed in DiMR, users can design their own DiMR variant by simply changing the types, numbers, or orders of layers. Additionally, as demonstrated in Table A, DiMR also supports the variant of pure convolutions (i.e., the lowest-resolution branch also uses ConvNeXt blocks).
***Table A**: DiMR supports arbitrary combination of different architectures at different branches. Experiments on ImageNet-256. We explain the performance gap between 2R and 3R in Q1 (Table B).*
| | 1st branch (lowest resolution) | 2nd branch | 3rd branch | FID50K w/o CFG | FID50K w/ CFG |
|------------------------|--------------------------------|------------|------------|----------------|---------------|
| DiMR-XL/2R (pure conv) | ConvNeXt | ConvNeXt | - | 5.75 | 2.09 |
| DiMR-XL/2R | Transformer | ConvNeXt | - | 4.87 | 1.77 |
| DiMR-XL/3R | Transformer | ConvNeXt | ConvNeXt | 5.58 | 1.98 |
3. ***DiMR is novel and effective:*** To the best of our knowledge, DiMR is the first work that has successfully combined transformer and convolution architectures into a single **multi-resolution** diffusion model. DiMR demonstrates the effectiveness of combining both architectures, resulting in state-of-the-art performance without any bells and whistles (even surpassing Large-DiT 3B parameters by a large margin: 1.70 vs. 2.10 on ImageNet-256 benchmark, while DiMR only uses 505M parameters).
4. ***TD-LN is crucial, not minor:*** As demonstrated by the systematic analysis in the paper, TD-LN is a parameter-efficient approach that effectively injects time information into the model. We emphasize that other reviewers praise the proposed TD-LN: Reviewer MPTv acknowledges that TD-LN is **particularly interesting, well-motivated, and innovative**, while Reviewer Wwps also thinks that TD-LN is **more efficient and effective than AdaLN-zero, due to the removal of the cumbersome MLP layer**.
> Q1: Are there any ablation studies on the number of different resolutions used?
We thank the reviewer for the suggestion. It is noteworthy that using different numbers of resolutions affects the trade-off between generation performance and model speed (both inference and training time). As a result, we use two resolutions (denoted as 2R in the paper) and three resolutions (i.e., 3R) for the ImageNet-256 and ImageNet-512 benchmarks, respectively.
Below, we present a careful ablation study on the number of resolutions. Note that for ImageNet-512 generation, we could not complete the full experiment within the short rebuttal period. Therefore, we followed U-ViT and report the FID-10K at every 100K iterations (i.e., 80 training epochs). Our findings indicate that one transformer branch combined with one convolution branch (i.e., 2R) usually achieves the best results. However, the Gflops also increase (since the transformer branch operates at a higher resolution than in the 3R counterpart). Conversely, employing more blocks at smaller resolutions (i.e., 3R) reduces the computational burden but slightly degrades the performance. We will include these experiments in the revised version.
***Table B**: Ablation of different numbers of resolutions on ImageNet-256*
| | Epoch | #Params. | Gflops | FID-50K w/o CFG | FID-50K w CFG |
|------------|-------|----------|--------|-----------------|---------------|
| DiMR-XL/2R | 400 | 505M | 160 | 4.87 | 1.77 |
| DiMR-XL/3R | 400 | 502M | 48 | 5.58 | 1.98 |
***Table C**: Ablation of different numbers of resolutions on ImageNet-512*
| | #Params. | Gflops | FID-10K w/o CFG at 80 epochs | FID-10K w/o CFG at 160 epochs | FID-10K w/o CFG at 240 epochs |
|------------|----------|--------|------------------------------|-------------------------------|-------------------------------|
| DiMR-XL/2R | 515M | 619 | 17.56 | 13.68 | 12.95 |
| DiMR-XL/3R | 525M | 206 | 18.74 | 14.84 | 13.90 | | Rebuttal 1:
Rebuttal: Dear reviewers and ACs,
We thank all reviewers for their valuable comments and feedback, mentioning that our method is "**simple and effective**" (Reviewer 1gRs and MPTv), which "**alleviates distortion of image generation and achieves SoTA FID scores**" (Reviewer Wwps). Additionally, we are glad that they find the proposed TD-LN is "**particularly interesting with strong motivation and innovativeness**" (Reviewer MPTv) and "**simple with less intensive parameters**" (Reviewer Wwps).
We address all the comments and questions from the reviewers in the rebuttals below, and provide detailed explanations and experimental results.
For quick reference, we list new experiments provided in this rebuttal below:
- Table A shows that **DiMR supports arbitrary combination of different architectures at different branches.**
- Table B/C ablate different numbers of resolutions on ImageNet-256/512, respectively.
- Table D **demonstrates the advantage in training speed compared to the baselines.**
- Table E shows that **DiMR is capable of multi-resolution generation and is comparable to (or even superior to) the best resolution-specific DiT models.**
- Table F compares DiMR with cascade diffusion models.
- Table G ablates different architectures (pure convolution vs. transformer + convolution)
- Table H **shows the results of the 1B variant of DiMR on ImageNet-512, demonstrating the scalability of DiMR.**
- Table I shows that **increasing Gflops of DiMR will further improve performance on ImageNet-512.**
- Table J compares the difference between upsampling and concatenation.
- Table K ablates the skip connections.
- Table L **demonstrates the advantage in sampling speed compared to the baselines.**
We thank the reviewers for the constructive comments, and we will include all the new results and discussions in the final version.
Best,
Authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimal Classification under Performative Distribution Shift | Accept (poster) | Summary: This paper studies the performative learning problem, where the goal is to minimize some measure of *performative risk*, $PR(\theta) := \mathbb{E}_{Z \sim \mathbb{P}_\theta}[\ell(Z; \theta)]$, where the difficulty is that random variables come from some distribution that depends on the deployed model parameters $\theta$, i.e. $Z \sim \mathbb{P}_\theta$. The paper models the performative effect of some model parameters $\theta$ as a pushforward measure under some differentiable, invertible mapping that depends on $\theta$. This pushforward measure view of performative learning admits a couple of main results:
1. This gives a new expression for the performative gradient (the quantity $\nabla_{\theta} PR(\theta)$).
2. For strategic classification, a specific performative learning scenario, we find that performative risk is convex under linearity assumptions on the performative shift and the classifier model.
3. Under the same assumptions as (2), we can rewrite the performative risk as a min-max problem, connecting the performative risk in this classification scenario to adversarially robust classification.
Strengths: Overall, the paper is well-written, with a couple of clarity suggestions (in "Weaknesses") that may make the presentation smoother. The technical results are sound and, to my limited knowledge of the performative learning literature, the proposal to model performative shifts as a pushforward measure seems novel and interesting. However, I must emphasize that I am not very familiar with the literature on performative learning, so I cannot judge well the impact of such an approach on existing work.
**Originality:** The main original contribution of this work is modeling the performative effect as a pushforward measure, which seems novel to my limited knowledge. The main results of the paper drop out of this modeling assumption, which seems flexible and general. However, I have a couple of questions towards how natural the specific instantiations of this pushforward measure are, particularly the "shift operator" used in many of the results (see "Questions").
**Quality:** I cannot give too informed a judgment of the quality of the results in comparison to other literature on performative learning, but to my reading, the technical results and evaluation seem sound. The authors provide a comparison to the gradient estimator for the performative effect of Izzo et al. (2022) and demonstrate their method's efficacy in comparison. The analysis seems sound, but I have no reference for how important this comparison is or whether there are better baselines in the literature to compare to.
**Clarity:** The paper is well-written overall, but I have a couple of suggestions for presentation in "Weaknesses."
**Significance:** As an outsider to the subfield, I cannot give a fully informed judgment on the significance of this paper, but, taking the comparison to Izzo et al. (2022) into account and viewing the displayed experiments, it seems that this modeling assumption does lead to an effective gradient estimator for performative risk.
Weaknesses: As I am an outsider to the subfield, I cannot comment too much on the relative weaknesses of this approach to others in performative learning. I am also not aware of current results in this subfield, but it seems like optimizing this notion of performative risk is still in nascent stages if theorems like Theorem 2 exist just to show situations in which we can prove that it is convex and apply standard optimization techniques to the problem. As such, I can only give a couple of suggestions that may improve the clarity of the paper's presentation:
1. Throughout the paper, the term "performative effect" is used quite heavily, but I believe it lacks a formal definition. I assume that we are to take the performative effect as, ultimately, the distribution $\mathbb{P}_{\theta}$, but it wasn't completely clear to me on a first reading of the introduction. Explicitly defining this term in the intro may help.
2. On Page 3, when introducing the pushforward measure, I would define the "pound" symbol. I wasn't aware of this notation until I looked it up on Wikipedia.
3. Small nitpick: on page 3, "$\mathbb{P}$ admits a density" should specify the density $p(\cdot)$ that $\mathbb{P}$ admits.
4. In the Experiments section, I didn't find your algorithm "Reparametrization-based Performative Gradient (RPPerfGD)" clearly defined. I assume that the algorithm just uses the gradient in Equation 3 (Definition 1) as an estimator of the gradient and performs gradient descent, but it would be helpful to explicitly write that in the Experiments section next to the baselines.
Technical Quality: 3
Clarity: 2
Questions for Authors: I have a couple of questions that may stem from my unfamiliarity with the literature:
1. How restrictive is the assumption that the performative effect can be modeled by a "shift operator"? I didn't fully understand how natural this assumption is, and some motivation to this assumption would help the presentation. However, I understand that this might just stem from my lack of exposure to the literature.
2. In order to estimate the gradient in Definition 1, you must have access to the form of the operator that defines the pushforward performative model, $\psi$. How realistic is it to assume that one has access to this in a non-synthetic scenario?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have addressed limitations in the NeurIPS paper checklist. They also motivated the problem of performative learning in their Introduction.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her positive review and address his/her questions below. We are also grateful for the four suggestions made in the weakness section to improve the clarity of the manuscript and will make the required changes in the final version of the manuscript.
**Modelling the Performative Effect as a Shift** We consider this to be a key specific contribution of this paper, which was not present in earlier works on performative prediction. As noted in the paper, this assumption is inspired by the field of adversarially robust classification, where the adversary-induced distribution change is also modelled by shifts of the individual data points (and an argument also used in the proof of Theorem 3 shows that the worst-case shift is indeed identical for all data points when using a linear model). In addition to exhibiting novel cases in which the performative risk can be convex in classification tasks, we believe that this assumption is also important in providing a way to interpret the effect of the performative changes. To the best of our knowledge, Theorems 2 and 3 are the first results in the literature showing that, at least in the case of linear-in-parameters shifts, if the performative change tends to bring the two class distributions closer together, then it essentially leads to a regularization effect on the parameter (when compared to a model without performative changes of distribution). We believe that this idea could be used more generically to design new approaches to performative learning.
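To make the shift-operator model concrete, below is a toy sketch (our own illustrative setup, not the paper's algorithm or experiments): data are pushed forward by $z \mapsto z + \Pi\theta$, the loss is quadratic, and gradient descent uses the reparameterization-based performative gradient obtained by differentiating through the shift. The choices of $\Pi$, base distribution, loss, and step size are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
mu0 = np.array([1.0, -2.0])          # mean of the base (unshifted) data
Pi = 0.5 * np.eye(d)                 # linear shift operator: z -> z + Pi @ theta

def sample_shifted(theta, n):
    """Draw data from the pushforward of N(mu0, I) under z -> z + Pi @ theta."""
    z0 = mu0 + rng.normal(size=(n, d))
    return z0 + Pi @ theta

# Performative risk PR(theta) = E ||z - theta||^2 with z from the pushforward.
# Differentiating through the shift (reparameterization) gives the gradient
# estimate 2 (Pi - I)^T (z - theta), averaged over samples.
theta = np.zeros(d)
for _ in range(500):
    z = sample_shifted(theta, 1000)
    grad = 2.0 * (Pi - np.eye(d)).T @ (z - theta).mean(axis=0)
    theta = theta - 0.1 * grad

# Closed-form performative optimum: (Pi - I) theta* = -mu0.
theta_star = np.linalg.solve(Pi - np.eye(d), -mu0)
```

Since the composite map $\theta \mapsto \mu_0 + (\Pi - I)\theta$ is affine, the resulting performative risk is convex here and the iterates approach the closed-form optimum.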
**Knowing $\varphi$ vs Knowing $p_\theta$** While we do agree that knowing $\varphi$ is a restrictive assumption, we still argue (and this is one of the main messages of the paper) that it is very different from knowing $p_\theta$. Knowing $p_\theta$ means that there is no learning problem anymore: whatever the loss function, the performative optimum could be found without training data, at worst using Monte Carlo simulations. On the other hand, even if $\varphi$ is known, one still needs to use training data to find the performative optimum (as $\varphi$ does not fully define the data distributions, but only how they shift). Note also that, as argued in the paper, the shift could be taking place in an embedding space, which adds even more generality. Finally, $\varphi$ could also be known up to some parameters only. In the example of Section 6, for instance, one can indeed estimate the diagonal elements of $\Pi$ using ridge regression from a very limited number of deployments, and we provide in the global response PDF two additional figures showing that only a few iterations are needed to obtain a good estimate of $\Pi$ and that the gradient trajectories obtained by plugging in the value of $\Pi$ estimated this way are very similar to those of RPPerfGD when using the exact value of $\Pi$.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comprehensive responses and willingness to address the weaknesses I pointed out in the full work. Due to my lack of exposure in the area of performative learning, I didn't initially appreciate the novelty in the model, and the response made me understand better the novelty and place in the literature. I'd like to raise my score from 5 to 6 (Weak Accept).
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking the time to carefully assess our rebuttal and for raising his/her score. | Summary: This paper considers a specific performative effect that's characterized as a transformation on the original probability measure on covariate X, which is novel in the literature. The authors propose to restrict the performative effect of deploying model with parameter \theta to such a multiplier function \varphi_\theta, which is mainly a shift operator in the discussion of this paper and can be viewed as a strong restriction on shift pattern. Authors also show the benefit of scalability of performative gradient estimation based on this performative effect, and demonstrate the convexity of performative risk can be achieved through direction of performative effect instead of its magnitude, in the context of binary classification and shift operator. This finding is interesting and novel in the literature. Moreover, authors show the connection with robustness and regularization by a minimax reformulation.
Strengths: 1. This paper clearly presents its assumptions and main theorems with illustrative examples, which makes it easy to follow.
2. This paper generalizes the performative shift patterns of the former literature without restricting to the location-scale family, and reveals another path to convexity of the performative risk, which is innovative.
3. Connection with robustness and regularization is also presented to strengthen the background of this paper.
Weaknesses: 1. The main weakness is the generality of the performative effect, $\Pi \theta$, a shift on the covariate, since it is the factual pattern considered in this paper. For general $\varphi_\theta$, this paper does not discuss how to identify or effectively estimate such a transformation function $\varphi_\theta$. Moreover, the knowledge of the shift matrix $\Pi$ is also questionable, since the specific mechanism of the distribution shift is usually unknown in the context of performative prediction.
2. In the synthetic experiments, I think the authors should explore a wider range of $\varphi_\theta$ and show the benefit of scalability for high-dimensional $\theta$. In fact, the experiments shown in the main paper are still restricted to the 2-dimensional simple setting of Izzo et al. 2022 and neither validate such scalability nor explore general $\varphi_\theta$. In addition, the colors used to denote different methods should be more distinguishable.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can the authors provide some intuitions on the dimension-freeness of $\hat{G}_\theta^{RP}$? I think it's a bit counterintuitive since the shift $\Pi \theta$ is put on the covariate $X$.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her positive review and address his/her questions below.
**Knowledge of $\Pi$** We agree with the reviewer that knowing $\Pi$ entirely is indeed a limitation. In addition to the answers on this point given in response to reviewer PyL2, we point out that, from a theoretical point of view, the convexity arguments obtained in the paper require that $\Pi$ is known: even in the simpler cases, the performative risk would not be convex if it were to be optimized w.r.t. both $\theta$ and $\Pi$. From a more practical point of view, for linearly parameterized shifts, it is very natural to estimate $\Pi$ by ridge regression along the successive deployments of the model: the ridge penalty ensures that the estimate of $\Pi$ is initially close to zero, making the RPPerfGD updates very similar to those of RGD, and it is easy to check that on the order of $d$ deployments are enough to obtain a non-trivial estimate of $\Pi$. Of course, the behavior of the RPPerfGD algorithm that uses the values of $\Pi$ estimated along the trajectory of successive model deployments would need to be studied more precisely (probably requiring more specific choices of stepsizes than in the convex case). From a practical point of view, however, we provide in the global response PDF an additional figure pertaining to the example of Section 6 showing that in this case this approach is very effective. We will add this discussion in the final version of the paper.
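A minimal sketch of this ridge-regression idea under the linear shift model (the data-generating setup, the value of $\Pi$, the noise level, and the penalty strength are our own assumptions for illustration, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
Pi_true = np.array([[0.5, 0.0, 0.0],
                    [0.0, -0.3, 0.1],
                    [0.2, 0.0, 0.4]])

# Each deployment of theta_k shifts the covariate mean by Pi @ theta_k;
# we observe a noisy version of that mean shift after each deployment.
K = 50                                   # number of deployments
thetas = rng.normal(size=(K, d))
shifts = thetas @ Pi_true.T + 0.05 * rng.normal(size=(K, d))

# Ridge regression of observed shifts on deployed parameters:
# Pi_hat = argmin sum_k ||s_k - Pi theta_k||^2 + lam * ||Pi||_F^2.
# The penalty keeps the estimate near zero while deployments are scarce.
lam = 1e-2
A = thetas.T @ thetas + lam * np.eye(d)
Pi_hat = np.linalg.solve(A, thetas.T @ shifts).T
```

With a few dozen deployments the estimate is already close entry-wise to the true shift matrix, consistent with the claim that on the order of $d$ deployments suffice for a non-trivial estimate.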
**Comments on the Experiment Section**
We have changed the colors to more contrasted ones. Note that Figure 2b explores the impact of dimension with 7 dimensions, and the Housing dataset is also 8-dimensional, so our experiments are not limited to the 2-dimensional setting; experiments in higher dimensions exhibit behaviors similar to those in two dimensions. We also refer the reviewer to our answer to Reviewer eHFF and our additional experiments, and would be happy to provide other similar plots during the discussion period if a specific point can add value to the paper.
**Dimension Freeness of the Covariance Estimator** The computations in Appendix A.1 show that estimating the performative gradient is in this case fully equivalent, up to multiplication by $\Pi^T$, to estimating the mean of a multivariate Gaussian with a $\sigma^2 I$ covariance matrix. Hence, the covariance matrix of the estimator is indeed dimension-independent. We agree with the reviewer, however, that in terms of interpreting this finding there is a hidden dependence on the dimension here, as the RMS error in estimating the mean of a multivariate normal scales as the square root of the dimension.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and clarification. I'll keep my evaluation unchanged.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for his/her answer and feedback that contributed to improve our paper. | Summary: The authors address the challenge of performative prediction, a scenario in which the predictor's outcomes influence the underlying data distribution. They introduce a novel formulation for the gradient of the performative risk, thereby enabling the implementation of stochastic optimization methods. This new formulation offers an advantage over existing approaches by producing a gradient estimator with lower variance, particularly in cases where the data distribution shift is linear. Additionally, the authors establish a weaker sufficient condition for the convexity of the performative risk in situations involving linear classifiers and linear distributional shifts. Furthermore, they demonstrate that transform-invariant learning leads to parameter regularization. Empirical evaluations reveal that the proposed method achieves superior stable accuracy compared to existing techniques.
Strengths: This well-written paper addresses a crucial problem in performative prediction that is highly relevant to the conference. The authors' theoretical contributions are clearly presented.
The proposed estimator for the performative gradient is novel and innovative. Its advantages over existing estimators are verified through an analysis of the estimator's variance under the linear shift scenario. The lower variance of this estimator results in faster convergence of the stochastic gradient descent, demonstrating its practical utility.
The authors' finding of a sufficient condition for the convexity of the performative loss under the linear shift setup is also a significant contribution. The benefits of this condition are adequately demonstrated, particularly in practical examples where existing techniques fail to verify the convexity of the performative loss.
The experimental results effectively demonstrate the superiority and stability of the proposed estimator compared to existing methods. These results indicate that the authors' method achieves greater stability while simultaneously improving accuracy, underscoring its practical applicability.
Weaknesses: One potential limitation of the study is that the authors' analyses are primarily limited to linear shift cases. The linear shift assumption is relatively strong and may not hold in many practical scenarios. Moreover, the variance reduction effect is confirmed only under the linear shift setup, leaving the benefits of the proposed estimator in other situations unclear.
The implications of Theorem 4 require further clarification. The claim that the magnitude of $\theta^*$ becomes smaller for a larger $\Pi$ is questionable, as the norm $\|\cdot\|_\Pi$ is also affected by the magnitude of $\Pi$. This relationship warrants a more detailed explanation or additional analysis.
The assertion that knowing $\varphi$ is more practical than knowing $p_\theta$ may be an overstatement. Both assumptions appear to be equally unrealistic in practical applications, and this comparison could benefit from a more nuanced discussion.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How might your approach be extended to non-linear shift scenarios?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately address the limitations and potential impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her positive review and address his/her questions below.
**Restriction to Linear Shifts** Theorem 2 proves convexity under general assumptions thanks to the linearity of the shift, and it is unlikely that convexity holds without this assumption. However, our approach to compute the performative gradient can also be used for non-linear shifts; it will only lack the theoretical guarantees. Thus, we agree with the reviewer that it is a limitation of our work, but we believe that it is an assumption that is more general than those found in previous works on performative learning (not being limited in particular to "small" performative effects). On the variance reduction effect, we also agree that we proved it for the specific combination of Gaussian noise and quadratic loss only. The proof given in appendix A.1, however, suggests that it could also be extended to non-linear drifts in the Gaussian/quadratic case. The comparison between both estimators is an open question that deserves a more systematic comparison, using other distribution/loss function combinations (e.g., Laplace noise could be more favorable for the use of the score function estimator as $\nabla_\theta \log p_\theta(z)$ would then have constant magnitude), and exceeds the scope of this paper.
**Effect of Norm of $\Pi$** We agree with the reviewer that the comment under Theorem 4 is a bit hasty; we will make it clearer in the final version. The effect can be seen as follows: as $\Pi$ gets larger, the r.h.s. decreases while the l.h.s. increases for identical values of $\theta$ (because the norm depends on $\Pi$, as noted by the reviewer); hence the values of $\theta$ have to be "smaller". The example given in lines 282--286 also illustrates this effect.
**Knowledge of $\varphi$** We refer to the responses to reviewers eHFF and uddR on this point.
**Questions on Extension to Non-Linear Shift Scenarios** As discussed above, this case is similar from a practical point of view and the same learning algorithm can be used, but convexity of the performative loss would most likely be lost. Extensions to transformations other than shifts, however, should be more difficult due to the need to consider the derivative of the log-Jacobian term. Note that the shift only needs to be linear in an embedding space, which already covers a large spectrum of situations.
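To make the reparameterization idea concrete, here is a sketch under the illustrative assumption that the shift acts additively in the embedding space as $z = z_0 + \Pi\theta$ with $z_0 \sim p_0$ independent of $\theta$; the paper's exact formulation may differ:

```latex
% Performative risk rewritten via reparameterization (illustrative):
\mathrm{PR}(\theta)
  = \mathbb{E}_{z \sim p_\theta}\bigl[\ell(z;\theta)\bigr]
  = \mathbb{E}_{z_0 \sim p_0}\bigl[\ell(z_0 + \Pi\theta;\theta)\bigr].
% Differentiating through the sample path (instead of the density) gives
\nabla_\theta \mathrm{PR}(\theta)
  = \mathbb{E}_{z_0 \sim p_0}\Bigl[\nabla_\theta \ell(z;\theta)
    + \Pi^\top \nabla_z \ell(z;\theta)\Bigr]_{z = z_0 + \Pi\theta},
% which requires the shift map rather than the density p_\theta and avoids
% the high-variance score term \nabla_\theta \log p_\theta(z).
```

This path-wise form is what breaks down for general transformations, where the change of variables introduces a log-Jacobian term whose derivative must also be handled.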
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and maintain my positive assessment.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for his/her positive feedback confirming that our rebuttal addressed his/her concerns. We also thank the reviewer for his/her feedback that will improve the final version of the paper. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful assessment of the manuscript and their helpful suggestions to improve its clarity. We are happy to see that reviewers recognize that our "proposed estimator for the performative gradient is novel and innovative" (PyL2), that our assumptions are "flexible and general" (eHFF), and that the connection with robustness "strengthen[s] the background of this paper" (uddR). We provide in the attached PDF a new experiment with estimation of $\Pi$ during the successive deployments, reporting its impact on the accuracy and the convergence of $\Pi$. More details and the code of this experiment will be included in the final version of the paper. We respond to each reviewer individually below.
Pdf: /pdf/46bb3ae17b6b36d57b4c97ed0414aa9f50dbcd68.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adaptive $Q$-Aid for Conditional Supervised Learning in Offline Reinforcement Learning | Accept (poster) | Summary: This paper proposes an offline reinforcement learning framework by combining return-conditioned supervised learning with in-sample Q-learning. By demonstrating their relative merits on offline datasets with different behavior policies, an overall loss function is designed to integrate Q assistance into RCSL. Extensive experiments empirically validate its superior performance with compared baselines on several domains and datasets.
Strengths: 1. The studied problem is important. Empowering RCSL with stitching ability is crucial to facilitate its capability for pursuing optimal policies.
2. The analysis and experiments of RCSL and Q-learning on datasets with optimal and suboptimal policies provide valuable insights. Based on that, the proposed QCS method can be naturally optimized on the proposed loss function by considering conditional supervised learning and maximizing the Q-function simultaneously.
3. Empirical experimental results are promising with a bunch of baselines on several datasets and domains.
Weaknesses: 1. The definition of degree of Q-aid as w(R($\tau$)) is not completely reasonable. As described, it should be defined by the degree of optimality of the behavior policy used to generate this sequence. However, a sequence with a higher return does not necessarily indicate it was generated by a superior policy, even from a statistical perspective. This definition only makes sense if the initial states of the compared trajectories are the same.
2. The analysis of the Q-Greedy Policy appears somewhat redundant. The focus of this paper is on integrating Q-learning into RCSL, rather than addressing challenges in Q-learning.
3. Conversely, the experimental section seems overly brief. I suggest moving more experimental details from the appendix to the main paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: All my questions are listed in the weakness part.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: No limitation issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments on the strengths of our work. Your valuable suggestions will help us further improve and clarify our work.
### **W1. The definition of degree of Q-aid.**
Thank you for raising a good point. We have dealt with environments where deviations in the initial state are too marginal to significantly affect the scale of the returns, but in practical scenarios, differences may arise. In such cases, a higher trajectory return may not always indicate optimality. Considering how to set the weights in a more practical setting with significant differences in the initial state could be a good follow-up work.
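As an illustration of how a return-based degree of Q-aid behaves when initial-state variation is small, here is a minimal sketch; the linear-in-return form, normalization by R*, and clipping are assumptions for illustration only, not the exact $w(R(\tau))$ defined in the paper:

```python
def q_aid_weight(traj_return, r_star, lam=0.5):
    """Illustrative degree of Q-aid: grows as the trajectory return
    falls short of R*, so suboptimal data receives more Q guidance.

    The linear form and the clipping to [0, 1] are assumptions for
    illustration; the paper defines its own w(R(tau)).
    """
    shortfall = max(0.0, min(1.0, 1.0 - traj_return / r_star))
    return lam * shortfall

# Near-optimal trajectories get little Q-aid, poor ones get more.
w_expert = q_aid_weight(95.0, r_star=100.0)  # small weight
w_medium = q_aid_weight(50.0, r_star=100.0)  # larger weight
```

This captures the complementarity discussed above: trajectories near R* rely mostly on conditional supervised learning, while low-return trajectories lean more on the Q-function's stitching ability.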
### **W2. Regarding writing.**
As the reviewer mentioned, the focus of our work is combining Q-learning with RCSL. To develop an algorithm for this purpose, we first (1) experimentally demonstrated that the Q-greedy policy and RCSL can be complementary depending on the quality of the dataset, and (2) analyzed the reasons for this through an examination of Q generalization. Based on this analysis, we determined the QCS weight and developed the QCS algorithm. Therefore, we believe that discussing the Q-greedy policy is necessary to explain the need for and development process of our algorithm. However, we will improve the writing to make this point brief and clear. Additionally, we will refine our manuscript to include more experimental details in the main section. | Summary: Authors of this paper introduce a new algorithm called Q-aided Conditional Supervised Learning (QCS) for offline DRL scenario, which combines the stability of return-conditioned supervised learning (RCSL) and the stitching ability of Q-functions. The primary contributions are 1). Identify the strength and weakness of RCSL and Q-learning in different experimental settings and 2) propose a new approach to combine the them to achieve better performance.
Strengths: 1. The paper is well written and logically structured around the problem definition. It first demonstrates the strengths and weaknesses of RCSL and Q-learning in different offline settings through experimental results, then explains the reasons through simple toy examples to aid understanding, and finally proposes the novel approach QCS based on the reasoning above.
2. The paper has rigorous experimental set-up and SOTA performance. The proposed algorithm is tested on various offline D4RL benchmarking datasets and demonstrate SOTA performance across the datasets.
3. Very interesting study on the strengths and weaknesses of RCSL and Q-learning in different experimental settings, especially the discovery of the Q-learning over-generalization problem.
Weaknesses: 1. If I understand correctly, QCS requires a Q-function pre-trained using IQL. While the experimental results on the D4RL benchmarks seem very impressive, there appears to be a lack of detailed discussion concerning the computational workload.
2. The weight term (which is important and essential) in the algorithm is dynamic and depends on R*, the optimal return of the task. Calculating R* across environments and dataset settings can be a bit tricky, and the discussion on how this calculation is performed is also somewhat limited and unclear.
3. Different benchmark experiments seem to require different hyper-parameter $\lambda$ values to achieve SOTA performance. Not sure how sensitive QCS is to $\lambda$.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. For calculating R*, for each update using equation (3), do you need to calculate R* for each trajectory? Could you provide more detailed information on how it is calculated?
2. I think the hyper-parameter K has very limited discussion on its impact on the performance. Do you perform any ablation studies on how K will affect the model performance?
3. How sensitive is the QCS algorithm to the scaling factor $\lambda$? I noticed that, for one environment with different offline settings (for example, halfcheetah-medium-replay and halfcheetah-medium-expert), QCS uses different $\lambda$ values. Does one fixed $\lambda$ work well within the same environment? I think this is important for understanding the adaptability and robustness of the method across different settings.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper briefly mentions the potential limitation of using a simple linear function to combine RCSL and Q-function.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed comments that help clarify our algorithm and for the positive feedback on our work. As suggested, we have included further explanations and experimental results to enhance the significance of our study. Due to the rebuttal word limit, **we have addressed the question regarding context length in the global response G5.**
### **W1. Regarding complexity.**
We compare the QCS training time with IQL, CQL, and QDT [Ref.8]. QCS requires a Q-function pre-trained using the IQL method, while QDT requires a Q-function pre-trained using the CQL method. Therefore, the time was measured by including the Q pretraining time for both algorithms.
The training times for IQL, CQL, QDT, and QCS are as follows:
IQL - 80 min, CQL - 220 min, QDT - 400 min, and QCS - 215 min.
The results show that QCS takes longer than IQL but has a total time similar to CQL. Notably, compared to QDT, which requires CQL pretraining, QCS can be trained in nearly half the time but demonstrates superior performance to QDT as shown in our main results in Table 2.
[Ref.8] Yamagata, Taku, et al. "Q-learning decision transformer: Leveraging dynamic programming for conditional sequence modeling in offline rl." ICML 2023.
### **W2/Q1. Regarding calculating R***
The detailed explanation of how to obtain R$^*$ is as follows. We propose two ways of calculating R$^*$, and we will add this explanation to our manuscript.
* (1) Set R$^*$ with the optimal return for the environment.
For our main experiments, we set R$^*$ for the environments with the optimal return for the environment as follows: Hopper - 3500, Walker2d - 5000, Halfcheetah - 11000, and Antmaze - 1. Note that prior RCSL algorithms, such as Decision Transformer [Ref.8] or RvS [Ref.7], used predefined R$^*$ for target RTG conditioning at the inference stage. Therefore, setting R$^*$ with optimal return requires no further assumptions compared to prior RCSL works. As mentioned in Appendix J.2 of our manuscript, for target RTG conditioning, QCS does not use R$^*$ but instead uses the maximum trajectory return, only requiring one R$^*$ per algorithm.
* (2) Set R$^*$ to the maximum trajectory return within the dataset.
Another way of setting R$^*$ is to use the maximum trajectory return within the dataset. In situations where obtaining the optimal environment return is difficult, we can infer the optimal return using the maximum trajectory return. Therefore, in Table R4 we provide additional results using the maximum trajectory return as R$^*$.
**[Table R4. QCS performance when R\* is the maximum dataset return and the optimal environment return.]**
| Dataset | QCS (optimal env return) | QCS (max dataset return) |
|---------------------------|--------------------------|--------------------------|
| halfcheetah-medium | 59.0±0.4 | 55.2±0.5 |
| hopper-medium | 96.4±3.7 | 97.1±3.0 |
| walker2d-medium | 88.2±1.1 | 87.4±2.1 |
| halfcheetah-medium-replay | 54.1±0.8 | 52.1±0.7 |
| hopper-medium-replay | 100.4±1.1 | 99.8±1.2 |
| walker2d-medium-replay | 94.1±2.0 | 90.6±3.2 |
As shown in Table R4, setting R$^*$ with the optimal environment return is slightly better than setting it with the maximum dataset return, but setting it with the maximum dataset return still outperforms the baselines. Therefore, we propose using the optimal environment return for R$^*$; however, when it is hard to determine, using the maximum dataset return can be a good alternative.
[Ref.7] Emmons, Scott, et al. "RvS: What is Essential for Offline RL via Supervised Learning?." ICLR 2022.
[Ref.8] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." NeurIPS 2021.
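The second option above can be computed directly from the data; this minimal sketch assumes a simple list-of-reward-lists trajectory format for illustration, not the actual D4RL API:

```python
def max_dataset_return(trajectories):
    """R* proxy: the maximum undiscounted return over all trajectories.

    `trajectories` is assumed to be a list of per-step reward lists,
    one list per trajectory (illustrative format only).
    """
    return max(sum(rewards) for rewards in trajectories)

# Example: three trajectories with returns 3.5, 0.0, and 4.0.
dataset = [[1.0, 0.5, 2.0], [0.0, 0.0], [3.0, 1.0]]
r_star = max_dataset_return(dataset)  # 4.0
```

When the optimal environment return is unknown, this maximum dataset return serves as the fallback choice for R$^*$ discussed above.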
### **W3/Q3. Impact of the QCS weight $\lambda$.**
We wish to direct the reviewer's attention to Table 9 in Appendix H.2 of our original manuscript, which investigates the effect of $\lambda$ by varying it from 0.2 to 1.5. We additionally test QCS with $\lambda$ set to 0.5 and 1 on the MuJoCo medium-expert datasets, and the results can be seen in Table R5.
As shown in Tables 9 and R5, except for walker2d-medium and halfcheetah-medium-expert, even the lowest scores obtained when varying $\lambda$ either matched or surpassed the performance of existing value-based methods and representative RCSL methods, including IQL, CQL, DT, DC, and RvS. This demonstrates QCS's relative robustness to this hyperparameter. For walker2d-medium and halfcheetah-medium-expert, performance begins to decrease when $\lambda$ exceeds the initial setting of 0.5. We anticipate that further research will address these issues.
Overall, setting $\lambda$ to 0.5 consistently produces good performance, even though it results in performance degradation of around 2 points in some environments. **Compared to many offline RL algorithms that require tuning more than 10 sets of hyperparameters depending on the environment, achieving stable performance with only one or two hyperparameter adjustments is a major strength of QCS.**
**[Table R5. Impact of the QCS weight $\lambda$ in MuJoCo medium-expert datasets.]**
| Dataset | $\lambda$=0.5 | $\lambda$=1 |
|---------------------------|-------------------------|-------------------------|
| halfcheetah-medium-expert | 93.3±1.8 | 84.6±4.2 |
| hopper-medium-expert | 110.2±2.4 | 110.8±2.8 |
| walker2d-medium-expert | 117.4±2.0 | 116.6±2.4 | | Summary: Offline reinforcement learning (RL) has advanced with return-conditioned supervised learning (RCSL) but still lacks stitching ability. Q-Aided Conditional Supervised Learning (QCS) combines RCSL's stability with the stitching capability of Q-functions, addressing Q-function over-generalization. QCS adapts Q-aid integration into RCSL's loss function based on trajectory returns, significantly outperforming RCSL and value-based methods in offline RL benchmarks. This innovation pushes the limits of offline RL and fosters further advancements.
Strengths: - it is well-written and fluent.
- They brought clear definitions which make it easy to follow the context.
- They compared with different baselines.
Weaknesses: The innovation of the work is limited.
Technical Quality: 2
Clarity: 2
Questions for Authors: - How are the subgoals chosen? Isn't it complicated to choose them in a more complex environment?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The conditioning part seems to be controlled by the algorithm, which makes it hard to use in any environment.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing comments and positive feedback on our work. We hope that our response below will clarify our algorithm and innovations.
### **W1. The innovation of QCS.**
We summarize the innovations of QCS as follows:
* We first analyze the strengths and weaknesses of RCSL and Q-learning in various dataset qualities and discover that RCSL and Q-learning can be complementary depending on the quality of the dataset.
* Especially, we identify the Q-function over-generalization problem when performing Q-learning with an optimal quality dataset.
* Based on the analysis of Q-function over-generalization, we propose a novel algorithm that combines RCSL and Q-learning, called QCS. QCS demonstrates superior performance compared to various recent and SOTA baselines.
We want to emphasize that our analysis of the complementary conditions of RCSL and Q-learning, as well as the Q-function over-generalization problem, is novel and has not been addressed in previous works. The superiority of the QCS algorithm, which is based on these analyses, further confirms that our analyses are necessary and can lead to a deeper understanding within the offline RL community.
### **Q1. Regarding the subgoal selection.**
For QCS-G, we follow the subgoal selection proposed by RvS [Ref.7]. As per RvS, during the training phase, we randomly select a subgoal from among the states between the next state and the final state. For the evaluation phase, we fix the subgoal to the goal state. We select subgoals based on the states present in the dataset, and there is no need for a special algorithm; we simply choose randomly among the future states, incurring no additional computational cost or difficulty.
Moreover, as can be seen in Table 3 of our manuscript, the QCS-R score, which only utilizes reward information, shows performance comparable to QCS-G. This demonstrates that the QCS algorithm can achieve good performance even without subgoal selection.
[Ref.7] Emmons, Scott, et al. "RvS: What is Essential for Offline RL via Supervised Learning?." ICLR 2022.
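The RvS-style subgoal relabeling described above can be sketched as follows; the trajectory format (a list of states) is assumed for illustration:

```python
import random

def sample_subgoal(states, t, rng=random):
    """RvS-style subgoal relabeling during training: pick a state
    uniformly at random between step t+1 and the end of the trajectory.

    `states` is an illustrative list of per-step states; no extra
    machinery beyond random selection from the dataset is needed.
    """
    return rng.choice(states[t + 1:])

traj = ["s0", "s1", "s2", "s3"]
g = sample_subgoal(traj, t=0)  # one of "s1", "s2", "s3"
```

At evaluation time no sampling occurs: the subgoal is simply fixed to the task's goal state.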
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
How can randomly selected subgoals always achieve good performance?
---
Rebuttal 2:
Comment: Thank you for bringing up a good point. As we mentioned in our previous response, subgoal conditioning follows the prior work [Ref.7], and our explanation of the effect of the subgoal is summarized below.
Let's first reconsider return-conditioned supervised learning. In the case of return conditioning, the agent basically learns $a = f_\theta(s, \hat{R})$, where $\hat{R}$ is the return-to-go, i.e., the sum of rewards received from the current timestep until the end of each episode. During training, the joint function $f_\theta$ of $(s, \hat{R})$ is learned with various $\hat{R}$ values from good or bad trajectories in the dataset. Basically, we learn the function $f_\theta$ on the $(s, \hat{R})$ plane, hoping for generalization over a wide region of $(s, \hat{R})$ with various $(s,\hat{R})$ data points. Later, during the test phase, we set a high target return-to-go as $\hat{R}$ so that the model will output an action
$a$ for each $s$ with high return-to-go. That is, we use $a=f_\theta(s, \hat{R}\_{target})$, a function from $s$ to $a$, where $\hat{R}\_{target}$ is a fixed high target value.
Similarly, the subgoal conditioning case can be considered as learning a function $a = f_\psi(s, \hat{s}\_{goal})$, where the subgoal $\hat{s}\_{goal}$ is a randomly-picked state between the next step and the episode end in each episode.
If $\hat{s}\_{goal}$ is from a bad episode, the produced action will be bad. If $\hat{s}\_{goal}$ is from a good episode, the produced action will be good.
But, again, our goal is to learn an entire function $a = f_\psi(s, \hat{s}\_{goal})$ generalized over a wide joint region of $(s, \hat{s}\_{goal})$, where $\hat{s}\_{goal}$ ranges from good to bad (hoping for interpolation and extrapolation) so that in the later test phase, we use this function $a = f_\psi(s, \hat{s}\_{goal})$ generalized over a wide joint region of $(s, \hat{s}\_{goal})$ by fixing $\hat{s}\_{goal}$ to be the ultimate goal of the task, which is known for each task.
We hope our additional response has addressed all your concerns. If your concerns have been resolved or if you have any further questions, please feel free to let us know.
[Ref.7] Emmons, Scott, et al. "RvS: What is Essential for Offline RL via Supervised Learning?." ICLR 2022.
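The return-to-go conditioning variable described above can be computed per trajectory as follows (a minimal sketch; the plain list-of-rewards format is assumed for illustration):

```python
def returns_to_go(rewards):
    """Per-step return-to-go: at step t, the sum of rewards from t to
    the end of the episode. This is the conditioning variable R-hat
    used when training a = f_theta(s, R-hat)."""
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return rtg[::-1]

# Training pairs each state s_t with rtg[t]; at test time the model is
# instead conditioned on a single fixed high target return-to-go.
assert returns_to_go([1.0, 2.0, 3.0]) == [6.0, 5.0, 3.0]
```

This makes the train/test asymmetry explicit: training sees the full range of return-to-go values from good and bad episodes, while evaluation fixes the conditioning input to a high target.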
---
Rebuttal Comment 2.1:
Title: Further clarification on previous response
Comment: Adding on to our previous response, random selection of subgoals occurs during training to learn a generalized function for various optimal and non-optimal subgoals. However, during evaluation, instead of using random subgoals, the agent always uses the fixed ultimate goal of the task, which is known for each task. We hope this helps to further clarify your concerns regarding our subgoal conditioning method. | Summary: This submission proposes an algorithm that combines the stability of return-conditioned supervised learning (RCSL) with the stitching capability of Q-functions. The submission tests their algorithm in the MuJoCo domain with medium, medium-replay, and medium-expert datasets. The performance of the submission’s proposed method is higher on average in the MuJoCo domain.
Strengths: The paper tests their proposal in the main benchmarks.
Weaknesses: The HIQL algorithm [1] outperforms the proposed method of the submission. However, this is not mentioned, and furthermore there is no comparison to HIQL.
Table 1 might need some standard deviations.
To be able to interpret Table 2 correctly, the table needs to have standard deviations of the previous methods as well.
The POR method can also perform above 70 on the antmaze-large-diverse dataset.
Other studies also report results for antmaze-medium-replay dataset. Why did the authors omit this baseline when reporting results?
Table 8 again needs standard deviations.
The submission only compares its algorithm to old baselines. However, there are currently algorithms that outperform the proposal of the submission.
Section 3.2 argues about something that is already expected/known to anyone familiar with basic reinforcement learning, i.e., how the $Q$-learning update works. Should this section really occupy two pages in the main body of the paper?
*“QCS represents a breakthrough in offline RL, pushing the limits of what can be achieved and fostering further innovations.”*
This statement might be too strong to describe the submission's contributions
[1] HIQL: Offline Goal-Conditioned RL with Latent States as Actions, NeurIPS 2023.
Technical Quality: 2
Clarity: 1
Questions for Authors: Why did the authors bold their algorithm while SQL is the highest performing algorithm for hopper-m-e in Table 2? (111.8>110.2)
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Please see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's efforts in providing constructive feedback to improve our work. **Our detailed response to the reviewer's comments is posted below and also in the global response G1-G4.**
### **W1. Comparison with HIQL.**
Thank you for introducing the highly effective algorithm HIQL [Ref.2] for the goal conditioning task. RL tasks can be divided into two categories: return-maximizing tasks, which aim to earn the maximum return, like MuJoCo, and goal-reaching tasks, which aim to reach the goal with a higher success ratio, like Antmaze. HIQL specializes in goal-reaching tasks as it uses a hierarchical approach that generates high-valued subgoals with a higher-level policy and generates actions with a lower-level policy conditioned on those generated subgoals. HIQL achieves superior performance on Antmaze tasks, but due to the nature of the algorithm, which involves generating and conditioning on subgoals, it has limitations when applied to tasks like MuJoCo.
In Table R2, we compare QCS-R and QCS-G with HIQL in the Antmaze medium and large datasets. QCS previously used a hidden dimension of 256 for training the Q-function, and we observed that HIQL uses a hidden dimension of 512. Therefore, we re-trained the Q-function with an increased hidden dimension of 512 and confirmed that this setting is beneficial for QCS. In Table R2, we mark the original QCS-R/QCS-G score as QCS-R(256)/QCS-G(256) and the new score as QCS-R(512)/QCS-G(512).
As shown in Table R2, our new QCS score is comparable to HIQL, which is specialized for goal conditioning tasks, and particularly, QCS-R performs even better than HIQL. This new result reaffirms that our algorithm, despite being general, is effective even when compared to specialized algorithms.
**[Table R2. Performance comparison between QCS and HIQL.**]
| Dataset | HIQL | QCS-R(256) | QCS-R(512) | QCS-G(256) | QCS-G(512) |
|-------------|---------------|----------------|----------------|----------------|----------------|
| antmaze-m-p | 84.1±10.8 | 81.6±6.9 | **93.1±7.0** | 84.8±11.5 | 88.5±6.3 |
| antmaze-m-d | 86.8±4.6 | 79.5±5.8 | **88.7±5.4** | 75.2±11.9 | 84.9±8.4 |
| antmaze-l-p | 88.2±5.3 | 68.7±7.8 | **89.2±6.3** | 70.0±9.6 | 84.2±10.5 |
| antmaze-l-d | **86.1±7.5** | 70.6±5.6 | 84.2±10.6 | 77.3±11.2 | 77.1±7.0 |
| **average** | **86.3** | 75.1 | **88.8** | 76.8 | 83.7 |
[Ref.2] Park, Seohong, et al. "Hiql: Offline goal-conditioned rl with latent states as actions." NeurIPS 2023.
### **W2. Recentness of Baselines.**
We want to note that we compared baselines including very recent works such as ACT (AAAI 2024), CGDT (AAAI 2024), DC (ICLR 2024), FamO2O (NeurIPS 2023), EDT (NeurIPS 2023), QDT (ICML 2023), SQL (ICLR 2023). Especially since the NeurIPS 2024 submission deadline was May 22, considering this date, the works for AAAI 2024 and ICLR 2024 can be regarded as very recent, having been published less than three months ago.
We continuously track the latest works, and after the NeurIPS submission deadline, we found a new work, Q-value Regularized Transformer (QT) [Ref.6], which proposes a new way of combining RCSL and Q-learning, published at ICML 2024. Note that according to NeurIPS regulations, QT is considered 'Contemporaneous Work' because it appeared online after the submission. Since the work appeared after our submission, we are not obligated to compare against it, but to help the reviewer better appreciate the quality of our work, we additionally compare QCS with QT in Table R3 in response W3, using the same evaluation metrics as QCS.
[Ref.6] Hu, Shengchao, et al. "Q-value Regularized Transformer for Offline Reinforcement Learning." ICML 2024.
### **W3. Comparison with QT and POR with the same evaluation metric of QCS.**
As mentioned in our manuscript Section 6.1, we report QCS as the **last running average score**. However, QT selects the best score throughout the whole training process (as confirmed through email with the QT author), which is not suitable for offline RL settings. Offline RL assumes a situation with limited online interaction, but finding the best score relies heavily on a significant amount of online interaction. Therefore, we re-ran the QT through the official QT code with author-recommended hyperparameters and verified the last running average score.
Regarding POR, mentioned by the reviewer for the antmaze large score, we previously reported POR scores from the POR paper. We were unable to confirm the exact evaluation metric, but the authors shared the training curve of POR on the POR github. From this, we verified the last running average score of POR.
To summarize, in Table R3 and Table R9 in the global response PDF, we compare QCS with QT and POR using the last running average score. As can be seen in Tables, QCS outperforms both POR and QT in MuJoCo and Antmaze domains, with POR's Antmaze large scores being below 70, demonstrating that QCS is a robust and high-performing algorithm.
**[Table R3. Performance comparison between QCS (ours), POR, and QT. We evaluate each algorithm based on the last running average score.]**
| Dataset | POR (last) | QT (last) | QCS (ours, last) |
|----------------|-----------------|----------------|-----------------|
| antmaze-u | 87.9±4.0 | 51.3±15.7 | **92.5**±4.6 |
| antmaze-u-d | 65.5±6.0 | 57.9±9.6 | **82.5**±8.2 |
| antmaze-m-p | **84.6**±5.4 | 32.7±11.3 | **84.8**±11.5 |
| antmaze-m-d | **75.6**±4.8 | 22.7±23.6 | **75.2**±11.9 |
| antmaze-l-p | 56.5±5.2 | 0±0.0 | **70.0**±9.6 |
| antmaze-l-d | 63.0±4.0 | 0±0.0 | **77.3**±11.2 |
| **average** | 72.2 | 27.4 | **80.4** |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and providing the standard deviation results.
The results reported in the original paper [1] for POR are different from what the submission reports. For instance, for the hopper-m dataset, the original paper reports 98.2 ± 1.6 for POR; however, the submission reports POR results as 78.6 ± 7.2. Given that the performance of the submission’s proposed algorithm QCS-R is 96.4 ± 3.72, this means the submission does not outperform the prior algorithm POR. The way the results are reported in the submission is quite misleading and incorrect.
[1] A Policy-Guided Imitation Approach for Offline Reinforcement Learning, NeurIPS 2022.
Looking at the results reported in Table 9 in the attached pdf, in the hopper-m-e dataset the performance of QT is 108.2± 3.6, and the performance of QCS is 110.2± 2.41. For the halfcheetah-m-e dataset the performance of QT is 94.0±0.2 the performance of QCS 93.3±1.78. These results are within one standard deviation.
*“When learning $Q_\theta$ from such limited information, where the values at the narrow action points are almost identical for each given state, it is observed that the learned $Q_\theta$ tends to be over-generalized to the OOD action region.”*
Where was this observed? It would be good to have a reference here if this observation comes from prior work.
How would the results look for POR and QT if you were reporting results as the prior work did, i.e., selecting the best score throughout the whole training process (as you confirmed with the QT author through email), instead of the last running average score? Before changing the metric used to report results, perhaps results could be provided in both metrics to enable robust comparisons with prior work.
While the question of “Why Does Q-Greedy Policy Struggle with Optimal Datasets?” is one of the main questions the submission focuses on, taking up more than two pages in the main body of the paper, in reality this question has only been investigated in one game in the MuJoCo environment, presented in the Appendix. I do not think this is sufficient evidence that the submission's experiments support the claim being made here.
Looking at the results reported in Appendix E (the MuJoCo Hopper results), it seems that as the dimension of the action space increases, the results reported in the main paper about the toy example become less relevant.
I thank the authors for their rebuttal. The submission needs substantial re-writing. I will keep my original score.
---
Rebuttal 2:
Comment: We appreciate the reviewer's additional comments aimed at further clarifying our work.
### **POR changed their official score after their camera-ready version.**
In the POR GitHub repository, around Nov 2022, questions were raised regarding the reproduction of POR results. Since the POR authors could not access their previous code due to a copyright issue, they re-implemented the work and reported the results anew. While most of the results matched the previous ones, those for the hopper-medium-v2 dataset did not. As a result, the authors replaced the score with the new result and updated the arXiv version accordingly. In a comment on GitHub, the author recommended using the new results instead of those in the conference version, which the reviewer had checked. The author's exact comment is **"We have updated our paper in the arxiv to reflect that and the mismatch results in the hopper-medium-v2 dataset (lower it from 98.2 to 78.6). If you want to compare POR with your work, you can refer to this paper version."**. Due to NeurIPS rebuttal policies, we cannot include external links, but we provide the address of this comment: github.com/ryanxhr/POR/issues/2. Therefore, our POR score reporting is accurate and aligns with the author's intent.
### **Comparison with QT in Table R9.**
Surpassing QT on 13 of the 15 datasets we compared, with the remaining 2 scores within one standard deviation of QT, cannot be a reason for rejection but rather demonstrates the superiority of QCS (QCS mean 86.3 > QT mean 60.9).
We want to emphasize that the offline RL benchmark features a variety of state and action dimensions, as well as different goals, making it a diverse and challenging benchmark where it is difficult for a single algorithm to outperform all baselines across all datasets. For example, both POR and QT have instances where their performance is similar to or even lower than that of previous studies when each dataset is examined closely: POR's performance of 76.6 on walker2d-medium-replay is lower than TD3+BC's 81.8, and QT's reported performance of 59.3 on antmaze-medium-diverse is lower than IQL's 70.0. However, POR and QT are recognized for their sufficiently good results due to their generally strong performance. **Slightly lower or similar scores to baselines on a few datasets cannot be grounds for rejection. We believe that what truly matters is how robustly the performance has improved compared to previous works.**
### **Comparison of QCS to the Best Score**
As mentioned in the previous response, we were unable to confirm the exact evaluation metric for POR. Since we have already compared against the scores from the POR paper, as provided by the POR author, in Tables 2 and 3 of our original manuscript, we conduct an additional comparison with QT using the best score. Table R10 shows the results of this comparison. As can be seen from the table, the QCS score has generally increased compared to the last running-average score that we previously reported. Compared to QT, QCS also shows superior results on average in MuJoCo (six datasets are better, two are similar, and one is worse), and it especially demonstrates better performance with a large gap in Antmaze. While we knew that reporting the best score would naturally result in better performance, we want to emphasize again, as mentioned in our previous response, that we believe the last running-average score is a more appropriate metric for offline settings, which is why we reported using this metric. We have demonstrated better performance than QT, but we would also like to clarify once again that QT should not be a reason for rejection, as it qualifies as concurrent work under NeurIPS policies.
**Table R10. Performance comparison between QCS (ours) and QT. We evaluate each algorithm based on the best score.**
| Dataset | QT (best) | QCS (best) |
|--------------------|---------------------|--------------------|
| halfcheetah-m | 51.4±0.4 | **60.6**±0.3 |
| hopper-m | 96.9±3.1 | **99.7**±0.3 |
| walker2d-m | 88.8±0.5 | **92.3**±0.4 |
| halfcheetah-m-r | 48.9±0.3 | **55.5**±0.5 |
| hopper-m-r | 102.0±0.2 | **103.4**±0.5 |
| walker2d-m-r | **98.5**±1.1 | **98.6**±0.3 |
| halfcheetah-m-e | **96.1**±0.2 | 95.6±0.2 |
| hopper-m-e | **113.4**±0.4 | **113.1**±0.6 |
| walker2d-m-e | 112.6±0.6 | **118.3**±1.2 |
| **average** | **89.8** | **93.0** |
| antmaze-u | 96.7±4.7 | **100.0**±0.0 |
| antmaze-u-d | 96.7±4.7 | **98.0**±4.0 |
| antmaze-m-d | 59.3±0.9 | **100.0**±0.0 |
| antmaze-l-d | 53.3±4.7 | **92.0**±7.5 |
| **average** | 76.5 | **97.5** |
---
Rebuttal 3:
Comment: ### **Observation of Q Overgeneralization is one of our contributions.**
Starting from line 164 of our manuscript (slightly below the sentence mentioned by the reviewer)—"We present a simple experiment to verify that learning $Q_\theta$ indeed induces over-generalization when trained on optimal trajectories."—this phenomenon is illustrated throughout Fig. 4 and Fig. 5 in Section 3.2.
### **The topic 'Why Does the Q-Greedy Policy Struggle with Optimal Datasets?' is addressed in Section 3.2 by demonstrating the phenomenon of Q generalization and validating it through various methods across different domains.**
Section 3.2 explains that the Q function tends to over-generalize when trained on an optimal dataset, which is why the Q-greedy policy struggles with optimal datasets. Throughout the section, we demonstrate that this over-generalization phenomenon can occur particularly when training on an optimal dataset, using various domains including a toy environment, the Gym Inverted Double Pendulum, and MuJoCo Hopper, as well as various methods such as Q-value analysis and NTK analysis (please see Figures 4 and 5 in our manuscript). The appendix provides an in-depth analysis that could not be fully covered in the main text due to space constraints, but the question itself is already addressed in the main body.
### **The results reported in Appendix E (the MuJoCo Hopper results) are additional analyses that connect to the Hopper results shown in Figure 5 of the main paper.**
In Section 3.2, Figure 4, we present the results of a toy experiment, and immediately afterward, in Figure 5, we extend this experiment to demonstrate that a similar phenomenon occurs in both the Gym Inverted Double Pendulum and MuJoCo Hopper environments. We would like to note that the main paper includes not only the results of the toy experiment but also the results for Hopper. Therefore, the results in Appendix E are not a sudden comparison that changes the domain and expands the dimensionality relative to the toy example, but rather a more detailed analysis of the Hopper results presented in Figure 5.
---
Rebuttal 1:
Rebuttal: We express our deepest gratitude to all the reviewers for their time and effort in evaluating our work and providing valuable advice. Our responses to the reviewers' comments have been left as replies to each review. Moreover, due to the rebuttal word limit, responses that could not be included in the individual replies are posted here, with the intended reviewer specified for each. Additionally, the content regarding the standard deviation pointed out by reviewer WnjH can be found in the PDF attachment.
### **G1. Regarding Antmaze-medium-replay dataset. - reply to reviewer WnjH**
According to [Ref.1], there are six datasets (antmaze-umaze, antmaze-umaze-diverse, antmaze-medium-play, antmaze-medium-diverse, antmaze-large-play, antmaze-large-diverse) for the Antmaze task, and we could not find an antmaze-medium-replay dataset. Neither the baselines we compared nor the HIQL [Ref.2] mentioned by the reviewer referred to the antmaze-medium-replay dataset. However, we recognize that new benchmarks are continuously being updated and our knowledge may be incomplete. If the reviewer can provide information on where we can find the antmaze-medium-replay dataset or any prior work that evaluates using that dataset, we will conduct additional tests.
[Ref.1] Fu, Justin, et al. "D4rl: Datasets for deep data-driven reinforcement learning." arXiv 2020.
[Ref.2] Park, Seohong, et al. "Hiql: Offline goal-conditioned rl with latent states as actions." NeurIPS 2023.
### **G2. Regarding Section 3.2. - reply to reviewer WnjH**
We'd like to kindly ask the reviewer for specific parts in section 3.2 that seem to be unnecessary. We would be happy to revise and improve our paper accordingly.
In Section 3.2, what we are trying to convey is not 'how the Q-learning update works,' but rather **analyzing why a Q function trained with an optimal dataset tends to over-generalize, and why this is not the case when trained with a medium-quality dataset.** This analysis is novel and has not been addressed in previous works. Since we are comparing and analyzing the Q function based on dataset quality, the analysis naturally begins with how the Q function is updated. However, this is merely the starting point of the analysis; the crucial point is how in-sample Q-learning progresses according to dataset quality. We believe that properly analyzing offline Q-learning in various settings helps establish a logical structure, provides insight into the problem definition, and naturally leads to the development of the QCS algorithm. We will update the manuscript to clearly convey our key points.
### **G3. Regarding Standard deviation - reply to reviewer WnjH**
Thank you for pointing this out and allowing us to make our score reporting more complete. The standard deviations for Tables 1, 2, and 8 in our manuscript have been added to Tables R6, R7, and R8 in the PDF attachment within the global response. Since some baselines do not report standard deviations for their own algorithms [Ref.3, Ref.4, etc.], we add standard deviations for all baselines except those works.
[Ref.3] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline Reinforcement Learning with Implicit Q-Learning." ICLR 2022.
[Ref.4] Emmons, Scott, et al. "RvS: What is Essential for Offline RL via Supervised Learning?." ICLR 2022.
### **G4. Regarding wrong bolding. - reply to reviewer WnjH**
Thank you for pointing this out. We made a mistake with the bolding in the hopper-medium-expert dataset and will correct it. We also noticed that for the hopper-medium-replay dataset, the correct SQL score is 99.7, but we mistakenly reported it as 101.7. Since 101.7 was the maximum score and was previously marked in bold, we will correct the score and remove the boldface.
### **G5. Impact of the context length K. - reply to reviewer fTMz**
We conducted additional experiments varying the context length of QCS. As seen in the results, QCS is highly robust to changes in context length. QCS is based on DC [Ref.5], which emphasizes local context in reinforcement learning, allowing it to achieve good results even with a short context length.
**[Table R1. QCS with varying context length $K$.]**
| Dataset | K=2 | K=8 | K=20 |
|---------|-----|-----|------|
| halfcheetah-medium | 60.2±0.6 | 59.0±0.4 | 59.2±0.3 |
| hopper-medium | 97.7±2.5 | 96.4±3.7 | 95.5±5.1 |
| walker2d-medium | 88.4±0.9 | 88.2±1.1 | 87.8±2.0 |
| halfcheetah-medium-replay | 54.0±0.4 | 54.1±0.8 | 52.6±0.8 |
| hopper-medium-replay | 100.1±1.5 | 100.4±1.1 | 99.0±3.4 |
| walker2d-medium-replay | 86.4±5.1 | 94.1±2.0 | 88.6±4.1 |
[Ref.5] Kim, Jeonghye, et al. "Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making." ICLR 2024.
Pdf: /pdf/e0d0f60b8a1a34e9017968f227e0af83ebafcdc3.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Geometric-Averaged Preference Optimization for Soft Preference Labels | Accept (poster) | Summary: Pretrained large language models know information about a wide range of topics, but since pre-training is often done on internet scale data, these models are not aligned with human values. Offline preference learning methods such as DPO are getting increasingly popular for this task.
A key assumption of DPO is that it sees a binary preference dataset of $(x, y_w, y_l)$ tuples, where $x$ is the prompt, $y_w$ is the preferred and $y_l$ is the dispreferred response. However, this binary labeling is often too hard and does not account for the difference between two responses: while responses $y_1$ and $y_2$ may have a very clear preferred-dispreferred relationship, another pair $y_3$ and $y_4$ might have a much smaller difference.
This paper proposes an algorithm to take into account the relative preference between two responses. The relative preference is not binary and can take any value between 0.5 and 1.0: closer to 1.0 means the preference is almost binary, whereas closer to 0.5 means it is a tossup between two responses. Prior work such as cDPO has used a linear interpolation between two different DPO losses with reversed preference relationships. In comparison, this paper assumes geometric averaging between two LLM policies, and this results in a simple modification to the existing DPO algorithm and its derivatives. This paper’s method, GDPO, outperforms DPO in the case of different soft preference labels.
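The linear interpolation that cDPO performs, as described above, can be sketched numerically (a minimal illustration with a made-up reward margin; `beta` is folded into the margin, and the function names are ours, not from the papers):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(margin):
    """DPO loss for one pair, given margin = beta * (log-ratio(y_w) - log-ratio(y_l))."""
    return -math.log(sigmoid(margin))

def cdpo_loss(margin, p_hat):
    """cDPO: linear interpolation of two DPO losses with reversed preference order.

    p_hat in [0.5, 1.0] is the soft preference label: p_hat = 1.0 recovers
    plain DPO, while p_hat = 0.5 treats the pair as a tossup.
    """
    return p_hat * dpo_loss(margin) + (1.0 - p_hat) * dpo_loss(-margin)

# A hard label reduces to DPO; a tossup label averages both orderings.
print(cdpo_loss(2.0, 1.0))  # == dpo_loss(2.0)
print(cdpo_loss(2.0, 0.5))  # == 0.5 * (dpo_loss(2.0) + dpo_loss(-2.0))
```

GDPO replaces this linear interpolation with geometric averaging of the policies, which (per the paper) reduces to a simple modification of the same loss family.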
Strengths: 1. The algorithm presented comes from a simple assumption. The final form of the algorithm’s loss functions is simple and intuitive.
2. The paper is nicely written.
In short, this paper comes from a long line of recent papers that try to fix one or more of DPO’s algorithmic issues. The authors focus on the strict binary nature of the preference dataset utilized by DPO, and try to improve it when additional information, i.e., how strong the preference relationship is, is known. This is an interesting question and nicely answered.
Weaknesses: I will list the weaknesses from most important to less important, in my opinion.
**(What about DPO pushing down probabilities of both positive and negative examples)**
The biggest problem with DPO, in my opinion, is that it pushes down the log probabilities of both positives and negatives, though it pushes down those of negatives much more, increasing the reward margin. Recent work such as [1] discusses this issue. This makes DPO extrapolate to an OOD region, which might be either good or bad. Discussing the results of [1] in the context of this paper's algorithm would be necessary: we would not want an algorithm that pushes down the probabilities of winning responses. On the other hand, GDPO's gradient weighting term, $w_{\theta}$, can actually improve this situation too, and it would be good to know.
Prior work such as RPO [2] has attempted to fix this issue by adding a SFT loss to the DPO loss, whereas SimPO [3] shows that a lot of the mismatches happen because of the reference model’s wrong reward attribution, and removing the reference model from the loss calculation can help. While comparing all of these might be out-of-scope for this paper, at least documenting the basic results from [1], i.e., what happens to the log probabilities of winning vs losing responses under GDPO, DPO and cDPO, would strengthen the paper. Also how does this vary depending on the soft preference distribution?
**(Online DPO)**
Multiple works have shown since DPO that offline preference tuning methods are sub-optimal, and the DPO objective, coupled with samples generated from the model itself, is generally better [1, 4, 5, 6, 7]. These methods can either use the reward from the language model itself to train it [6], or a separate reward model [1]. Could the authors expand this paper’s method to the online variant of DPO? Two problems I can see from the start: in order to use a self-rewarding scheme, the probability/reward from the policy needs to somehow inform us of the soft preference label. Would model miscalibration hurt this? Also for a separate reward model, if not sufficiently strong enough, obtaining the soft preference label can be hard. Also obtaining it during training, for every training sample, can become computationally expensive.
**(More architectures/models tried out)**
Trying this algorithm with more recent models, such as LLama3, or other more commonly used models such as Pythia or Mistral would strengthen the paper. **Note that I do not consider this a major weakness, just something that would make the paper more comprehensive.**
Technical Quality: 3
Clarity: 3
Questions for Authors: **Possibility of extension to token level labels**
Could this paper’s method be extended to token level labels? Eg, assume a math problem that has 5 steps. Assume responses $y_1$ and $y_2$ that are both wrong, but wrong in different steps, i.e., $y_1$ gets it correct up till 3rd step, whereas $y_2$ gets it correct till 2nd step. Can the soft preference labels reflect that/can we still make the model learn something here?
# References
[1] Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data, https://arxiv.org/abs/2404.14367
[2] Iterative Reasoning Preference Optimization, https://arxiv.org/abs/2404.19733
[3] SimPO: Simple Preference Optimization with a Reference-Free Reward, https://arxiv.org/abs/2405.14734
[4] Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study, https://arxiv.org/abs/2404.10719
[5] Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint, https://arxiv.org/abs/2312.11456
[6] Self-Rewarding Language Models, https://arxiv.org/abs/2401.10020
[7] Direct Language Model Alignment from Online AI Feedback, https://arxiv.org/abs/2402.04792
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for careful reading and thoughtful feedback.
**> What about DPO pushing down probabilities of both positive and negative examples**
As suggested by the reviewer and following https://arxiv.org/abs/2404.14367, **Figure 1 (f)** in the additional PDF measures the log ratio $\log \frac{\pi_{\theta}}{\pi_{\text{ref}}}$ of winner/loser responses and the estimated reward gap $\log \frac{\pi_{\theta}(x,y_w)\pi_{\text{ref}}(x,y_l)}{\pi_{\text{ref}}(x,y_w)\pi_{\theta}(x,y_l)}$ on Plasma Plan and Anthropic Harmless. DPO aggressively pushes down the log ratio and increases the reward gap, since the DPO objective forces the model to achieve $r_{\theta}(x,y_w)-r_{\theta}(x,y_l) \rightarrow \infty$, causing an over-optimization issue. cDPO is more conservative in pushing down the log ratio but yields worse alignment quality due to its objective mismatch. GDPO avoids both the objective mismatch and the over-optimization by suppressing the reward-gap increase modestly. The log ratio and estimated reward gap on Anthropic Harmless, which has a different soft preference label distribution from Plasma Plan (see Figure 2 in the main text), show the same trends as on Plasma Plan.
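Concretely, these two quantities can be computed from per-response sequence log-probabilities as follows (a sketch with illustrative numbers; the variable names are ours):

```python
def log_ratio(logp_policy, logp_ref):
    """log(pi_theta(x, y) / pi_ref(x, y)) for a single response y."""
    return logp_policy - logp_ref

def estimated_reward_gap(logp_w, logp_l, ref_w, ref_l):
    """log[(pi_theta(x,y_w) * pi_ref(x,y_l)) / (pi_ref(x,y_w) * pi_theta(x,y_l))],
    i.e. the winner's log ratio minus the loser's log ratio."""
    return log_ratio(logp_w, ref_w) - log_ratio(logp_l, ref_l)

# Illustrative sequence log-probabilities (policy vs. reference).
gap = estimated_reward_gap(logp_w=-10.0, logp_l=-12.0, ref_w=-8.0, ref_l=-9.0)
print(gap)  # (-10 + 8) - (-12 + 9) = 1.0
```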
**> Online DPO**
As proposed by the reviewer, we conduct the comparison among GDPO, DPO, and cDPO in online (on-policy) settings with the Plasma Plan dataset.
We prepare 2 variants: (1) incorporating an extra reward model $r_{\psi}(x,y)$ and (2) leveraging estimated self-preference $\rho_{\theta} = \sigma( \text{SG}[\beta\log \frac{\pi_{\theta}(x,y_w)\pi_{\text{ref}}(x,y_l)}{\pi_{\text{ref}}(x,y_w)\pi_{\theta}(x,y_l)}])$, where $\text{SG}$ stands for stop gradient operation. For the extra reward model, we use PaLM 2-XS, the same as a policy LLM.
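The self-preference estimate of variant (2) can be sketched as follows (a plain-float sketch; in an autodiff framework the margin would be wrapped in a stop-gradient, e.g. `.detach()` in PyTorch, and the `beta` default here is an arbitrary illustrative value, not the paper's setting):

```python
import math

def self_preference(logp_w, logp_l, ref_w, ref_l, beta=0.1):
    """rho_theta = sigma(SG[beta * log((pi(y_w) pi_ref(y_l)) / (pi_ref(y_w) pi(y_l)))]).

    With plain floats there is no gradient to stop; the SG operation only
    matters when this is computed inside a differentiable training loop.
    """
    margin = (logp_w - ref_w) - (logp_l - ref_l)
    return 1.0 / (1.0 + math.exp(-beta * margin))

# Identical responses yield a tossup label of 0.5; a clear winner pushes it up.
print(self_preference(-10.0, -10.0, -8.0, -8.0))  # 0.5
print(self_preference(-8.0, -12.0, -9.0, -9.0))   # > 0.5
```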
**Figure 1 (h)** in the rebuttal PDF provides the results of online alignment methods, which shows that GDPO performs the best in both settings.
This is because GDPO can cancel the effects of competitive/confusing preferences around lower soft labels such as $\hat{p}=0.5$, which often helps when (i) the sampled responses are equally good or (ii) the preference estimates are not well calibrated.
In particular, GDPO demonstrates a significant gain in the (2) self-preference setting; in contrast, because binarizing the preference widens the gap from the true preference, DPO degrades performance more.
Please note that, because we adopt PaLM 2-XS as external reward models but offline soft preferences are provided from PaLM 2-L (the number of model parameters are quite different), the online performances do not reach the offline performance. Moreover, we train DPO/GDPO/cDPO in a pure on-policy setting (without any reuse of generated data) and sample only 2 responses per prompt. It would be an interesting future direction to optimize the number of gradient steps to reuse the generated samples (something like “batched iteration” methods) and the number of responses sampled per prompt (e.g. Best-of-$N$ strategy). We will include these results in the revision.
**> More architectures/models tried out**
We provide the additional results with Gemma-2B model (https://arxiv.org/abs/2403.08295), which is an open LLM model with an architecture and pre-training different from PaLM 2-XS; an LLM we mainly used in this paper. **Figure 1 (i)** in additional PDF shows the winning rate on Plasma Plan using Gemma-2B as a base language model. The results show that geometric averaging (GDPO) still outperforms DPO and cDPO on Plasma Plan, Plasma Plan Skewed, and Plasma Plan Stairs datasets. These trends are consistent with those of PaLM 2-XS. We will include these results in the revision.
**> Possibility of extension to token level labels**
As long as the problem can be formulated in a token-level DPO manner, we think GDPO can be extended to such settings. However, a token-level formulation might pose some technical challenges. For instance, handling different token lengths between a part of $y_1$ and a part of $y_2$ with token-level binary/soft labels might be an issue for both DPO and GDPO. Another problem is that obtaining such a dense token-level preference signal is more costly than in the current setting.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing a nice rebuttal!
A few followup questions:
> Please note that, because we adopt PaLM 2-XS as external reward models but offline soft preferences are provided from PaLM 2-L (the number of model parameters are quite different...
What are the parameter counts of these two models? What is their agreement rate?
> More architectures/models tried out
This is a bit unfortunate, as it would be interesting to see if the same performances scale to at least 7 or 8B parameter models, as using them has become quite common for small-scale evaluations. However, if the authors cannot produce these results due to compute constraints, that is understandable as well.
> What about DPO pushing down probabilities of both positive and negative examples
It seems the GDPO still pushes down probabilities of positives, albeit less compared to regular DPO. It is not clear to me, from the results, how the soft preference label/variation within it actually affects the results. Also it seems it does not mitigate the problem that RPO [1] does.
> the online performances do not reach the offline performance.
Based on both [2] and [3], online DPO > offline DPO. Are the authors claiming that this is not true (which would be an important finding)? What are the differences in setups that leads to this finding? Or are the authors claiming online DPO > offline DPO, but offline GDPO > online GDPO, possibly because of using a significantly smaller reward model? Clarifying this point would be important.
# Questions
Any possible extension to a reference model free version, similar to [4]? Conceptual explanation might suffice here. This is not an important question related to the paper, just for my personal curiosity.
# References
[1] Iterative Reasoning Preference Optimization, https://arxiv.org/abs/2404.19733
[2] Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study, https://arxiv.org/abs/2404.10719
[3] Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data, https://arxiv.org/abs/2404.14367
[4] SimPO: Simple Preference Optimization with a Reference-Free Reward, https://arxiv.org/abs/2405.14734
---
Rebuttal 2:
Title: Response to Reviewer 98AM (1/3)
Comment: We appreciate the quick and detailed response. Please let us know if our follow-up responses address your concerns.
**> What are the parameter counts of these two models? What is their agreement rate?**
The number of parameters in PaLM 2 models is not publicly available in the paper (https://arxiv.org/abs/2305.10403). However, the former version of PaLM models opens their parameter counts (https://arxiv.org/abs/2204.02311). The smallest PaLM has 8B parameters, and the largest PaLM has 540B. PaLM 2 might have similar configurations to PaLM.
For the agreement rate between the trained reward models (PaLM 2-XS) and the AI rater (PaLM 2-L), we can refer to the preference classification accuracy after convergence: PaLM 2-XS achieves 90.1% train accuracy and 93.8% validation accuracy. For the agreement rate on **out-of-distribution** data, we can also refer to Figure 4 (right) in the main text. While a preference model analytically recovered from the DPO policy (based on PaLM 2-XS) achieves 94.0% accuracy on PaLM 2-L vs. human samples (the same distribution as the training data), its accuracy drops to 61.6% (PaLM 2-L vs. GPT-4 samples) and 66.3% (PaLM 2-L vs. GPT-3.5 samples) when the response pairs are far from the training data distribution. This implies that trained external reward models may not be very accurate when DPO/cDPO/GDPO outputs out-of-distribution responses.
**> 2B LLMs are not sufficient. Results with 7-8B LLMs?**
Because similarly sized LLMs -- Pythia-1.4B/2.8B (https://arxiv.org/abs/2304.01373) -- are often used as defaults in RLHF/offline alignment papers (e.g. DPO: https://arxiv.org/abs/2305.18290; Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data: https://arxiv.org/abs/2404.14367), we still believe that our additional results from Gemma-2B are enough to claim the scalability of geometric-averaging methods to different LLMs.
However, following the reviewer's request, we can also provide the results from Gemma-7B (https://arxiv.org/abs/2403.08295). We are running the experiments now and please let us follow this up in a few days.
**> What about DPO pushing down probabilities of both positive and negative examples**
First, we -- the authors and the reviewer -- can agree that **the log probabilities of preferable/less-preferable samples from the dataset (or their gap) can characterize the training dynamics and algorithm behaviors, but cannot serve as surrogate metrics proportional to alignment performance**.
Iterative RPO (https://arxiv.org/abs/2404.19733) paper and our experiments provide the evidence to support this:
- (1) Figure 2 (a) in the Iterative RPO paper has shown that SFT achieves a larger log probability of winner samples than RPO (DPO+SFT) with monotonic improvement (i.e. SFT > RPO). However, in terms of performance (test accuracy of GSM8K in Table 1), RPO surpasses SFT (i.e. RPO > SFT). This suggests that it is unclear whether a larger log probability of winner samples correlates to the improvement of target metrics or not.
- (2) Our results (Figure 1 (f) in the rebuttal PDF) also reveal that while the order of log probability of winner samples is cDPO > GDPO > DPO, the order of the winning rate (Table 1 in the main text) is GDPO > DPO > cDPO.
- (3) The analysis paper (https://arxiv.org/abs/2404.14367) initiating this log probability discussion has only observed the fact that DPO pushes down both preferable log probability and less-preferable log probability, but has not claimed that the increasing trend of the preferable log probability improves the performance.
Moreover, we would like to point out that **it depends on the experimental settings and tasks whether the improvement of preferable log probability happens/is necessary or not**.
- (4) The log probability of winner samples in the preference dataset can achieve its maximum value after optimizing the maximum likelihood objective (i.e. SFT with winner responses). As done in DPO, if the policy is initialized with the SFT checkpoint with preferable responses, further improvement hardly happens because the initial checkpoint is already an almost optimal parameter in terms of winner response likelihood. As explained in Section 4 (L188) in the main text, our experiments have followed the procedure in the DPO paper, starting from SFT checkpoints with preferable responses.
- (5) Iterative RPO has started the experiments from LLaMA-2-70B-Chat checkpoint (Section 3 in the Iterative RPO paper). This is not finetuned with the winner's responses in the GSM8K dataset yet, which means that the policy has a sufficient margin to improve the preferable log-likelihood in the dataset. The maximum likelihood term in iterative RPO methods contributes to increasing the preferable log likelihood.
**(continues to the next thread)**
---
Rebuttal 3:
Title: Response to Reviewer 98AM (2/3)
Comment: **(Continuing from the last thread)**
- (6) The target tasks or metrics can be related to the necessity of the increasing trends of the preferable log-likelihood. For instance, mathematical reasoning or code generation tasks can measure the performance with exact match/unit tests and need to maintain reasonable responses, because the best responses do not significantly differ from the preferable responses in the dataset. In these tasks, the degradation of preferable outputs in DPO may cause significant issues. DPO v.s. PPO paper (https://arxiv.org/abs/2404.10719) has stated `However, we will demonstrate in Sec. 6 that even with a nearly perfect annotator, the performance of DPO remains unsatisfactory in challenging tasks such as code generation.` (from Section 4.3: Practical Remark). In contrast, our paper has focused on "open-ended generation" tasks such as summarization or conversation, where we cannot measure the performance with an exact match, rather requiring a human or AI rater to evaluate the quality. In these tasks, the exploration into out-of-distribution regions makes more sense, and the policy needs to push down the likelihood of the (both winner and loser) responses to further improve the response quality.
In summary, we believe that our additional experiments (Figure 1 (f) in the rebuttal PDF) characterize the behavior of geometric averaging well. cDPO suffers from an objective mismatch due to its conservative update (neither log probability is pushed down much, and the winning rate is the worst). DPO faces an over-optimization issue induced by the maximization objective $r_{\theta}(x,y_w)-r_{\theta}(x,y_l) \rightarrow \infty$; this is aligned with our experimental observations (both log probabilities are pushed down the most and the reward gap is the largest, while the performance is second best). GDPO resolves these two issues by adjusting the scaling factor of the gradient with soft labels (the decrease in log probability is suppressed and the reward gap stays in a modest range, while the performance is the best). The trend of the log probability/reward gap is not proportional to the alignment quality; they only need to stay in a reasonable range. Because the policy has been initialized from SFT models finetuned with (50%) winner responses in the dataset, there is no margin for any algorithm to increase the preferable log-likelihood further. RPO increases the log probability, but this comes from its experimental settings: the RPO paper starts its experiments from checkpoints that are not finetuned on the target tasks and thus have enough margin for improvement.
Lastly, we provide a table comparing the relationship among log-likelihood, reward gap, and the binary winning rate (the results are the same as Figure 1 (f) in the rebuttal PDF).
|Metric (step)|400|800|1200|1600|2000|
|--|--|--|--|--|--|
|DPO (Pref ratio)|-3.97|-5.23|-7.77|-9.96|-11.9|
|cDPO (Pref ratio)|-1.14|-1.46|-1.79|-2.00|-2.00|
|GDPO (Pref ratio)|-3.24|-3.38|-4.49|-5.83|-9.18|
|DPO (R gap)|33.7|38.4|43.3|49.0|54.5|
|cDPO (R gap)|13.6|14.7|15.3|16.0|16.1|
|GDPO (R gap)|23.1|26.6|29.4|33.5|40.8|
|DPO (WR)|68.18|73.29|78.75|81.53|83.16|
|cDPO (WR)|51.57|61.41|70.62|75.96|72.13|
|GDPO (WR)|71.66|75.49|80.95|84.55|85.48|
We appreciate the reviewer's deepening of the discussion. We are happy to address any remaining concerns.
**> The online performances do not reach the offline performance.**
We'd like to clarify that our additional experiments (Figure 1 (h) in the rebuttal PDF) are intended to verify the scalability of our geometric-averaging methods to online (on-policy) settings, i.e. online GDPO > online DPO or online cDPO; they are not intended to compare the performance of offline methods against online methods. The results reveal that, as in the offline settings, online GDPO consistently outperforms online DPO and cDPO in both the extra-reward-model and self-rewarding settings.
Regarding online DPO versus offline DPO, we would also like to clarify that their performance depends significantly on the scale of the reward models (i.e. the preference label quality). In the Online AI Feedback paper (https://arxiv.org/abs/2402.04792), Figure 3 shows that, when the preference labels are annotated by PaLM 2-L, online DPO achieves a 95% winning rate against the SFT model on the Reddit TL;DR dataset, while offline DPO achieves a 90% winning rate. In contrast, Figure 5 also shows that even online DPO achieves a lower winning rate when the reward model is small: online DPO with PaLM 2-S achieves an 86% winning rate against the SFT model (blue bars), and online DPO with PaLM 2-XS achieves 82%.
Because our online experiments use PaLM 2-XS for the extra reward model and offline experiments use PaLM 2-L, our performance gap between online and offline methods is consistent with the previous literature. Our experiments emphasize the scalability of geometric averaging under the same conditions.
We are glad to provide further clarification upon request.
---
Rebuttal 4:
Title: Response to Reviewer 98AM (3/3)
Comment: **> Reference Model Free Version (Geometric SimPO)**
We think we can apply geometric averaging to the reference-model-free SimPO. Starting from the DPO objective without a reference model, we replace each likelihood with its weighted geometric average $\bar{\pi}_{\theta}(y_w) := \pi_{\theta}(y_w)^{\hat{p}}\pi_{\theta}(y_l)^{1-\hat{p}}/Z_{w}$ (and analogously $\bar{\pi}_{\theta}(y_l)$ with normalizer $Z_l$); we omit the input $x$ for readability. The derivation of the objective is as follows:
$E[\beta \log\bar{\pi}_{\theta}(y_w) - \beta \log\bar{\pi}_{\theta}(y_l)]$
$= E[\beta \log \frac{\pi_{\theta}(y_w)^{\hat{p}}\pi_{\theta}(y_l)^{1-\hat{p}}/Z_{w}}{\pi_{\theta}(y_l)^{\hat{p}}\pi_{\theta}(y_w)^{1-\hat{p}}/Z_{l}}]$
$= E[\beta (2\hat{p}-1) \log\pi_{\theta}(y_w) - \beta (2\hat{p}-1) \log\pi_{\theta}(y_l) - \beta\log\frac{Z_{w}}{Z_{l}}]$
where $Z_{w}$ and $Z_{l}$ are partition functions. Because they are hard to estimate accurately, we may treat the term $\beta\log\frac{Z_{w}}{Z_{l}}$ as a constant value $\gamma \geq 0$. By adding length-normalization coefficients, we obtain an objective for Geometric SimPO:
$L_{\text{GSimPO}} := E[\frac{\beta(2\hat{p}-1)}{|y_w|} \log\pi_{\theta}(y_w) - \frac{\beta(2\hat{p}-1)}{|y_l|} \log\pi_{\theta}(y_l) - \gamma]$.
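As a rough implementation sketch of this objective (the function name and the SimPO-style $-\log\sigma$ wrapper are our assumptions, not code from the paper):

```python
import math

def gsimpo_loss(logp_w, logp_l, len_w, len_l, p_hat, beta=2.0, gamma=0.5):
    """Per-pair Geometric SimPO loss (a sketch, in SimPO's -log sigmoid form).

    logp_w, logp_l: summed log-likelihoods log pi_theta(y_w), log pi_theta(y_l)
    len_w, len_l:   response lengths |y_w|, |y_l| for length normalization
    p_hat:          soft preference label in [0.5, 1]
    """
    # The soft label only enters through the (2*p_hat - 1) coefficient, so an
    # equally-good pair (p_hat = 0.5) yields a constant loss and no gradient.
    margin = beta * (2.0 * p_hat - 1.0) * (logp_w / len_w - logp_l / len_l) - gamma
    return math.log(1.0 + math.exp(-margin))  # = -log sigmoid(margin)
```

With $\hat{p}=0.5$ the margin reduces to $-\gamma$ regardless of the model's likelihoods, so such pairs provide no learning signal.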
---
Rebuttal 5:
Title: Followup from Reviewer 98AM
Comment: I thank the authors for the prompt response!
I agree with the points the authors present. As long as the authors present the log-probability argument in this paper in the final version of the paper, along with discussions of the following papers:
[1] Iterative Reasoning Preference Optimization, https://arxiv.org/abs/2404.19733
[2] Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data, https://arxiv.org/abs/2404.14367
[3] From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function, https://arxiv.org/abs/2404.12358
I am satisfied with the results. The reason I insist on this is: it is now certain that DPO over-extrapolates to an OOD region. Whether this OOD region is good or bad depends on the task: for math problems, as [1] shows, it is probably bad, because only a narrow distribution of responses is good, whereas for the RLHF tasks that this paper considers, this might be okay. **It is important to note the effect of new variations of DPO in this regard, especially their learning dynamics.**
Section 5.3 of [3] seems to show a theoretical understanding of why this happens, and possibly including a discussion of this in the paper should improve the quality of this work.
I am willing to increase my scores once the authors produce results on models of around 7B parameter count range.
Kudos to the authors for leading a nice and thorough rebuttal!
---
Rebuttal 6:
Comment: **> Discussion about Log Ratio and Training Dynamics**
We thank the reviewer for discussing this thoroughly. We also agree with your final recap and will include all the experimental results, discussions, and references you provided during the rebuttal in the revised paper.
**> Winning Rate on Plasma Plan, Skewed and Stairs with Gemma-7B**
In the following table, we compare the winning rate of Gemma-7B among DPO, cDPO, and GDPO (against PaLM 2-L). Consistent with the results on PaLM 2-XS and Gemma-2B, applying soft labels and weighted geometric averaging improves the alignment quality compared to the binary method (DPO) and the conservative soft-label method (cDPO). Geometric averaging can be effective across model sizes and architectures.
| Method | Plasma Plan (Binary) | (Percentage) | Skewed (Binary) | (Percentage) | Stairs (Binary) | (Percentage) |
|--|--|--|--|--|--|--|
|SFT| 42.62% | 44.45% | 42.62% | 44.45% | 42.62% | 44.45% |
|DPO| 79.56% | 61.53% | 78.63% | 61.47% | 75.49% | 60.12% |
|cDPO| 74.33%| 59.91% | 73.52% | 56.79% | 71.89%| 59.91% |
|GDPO| 82.58% | 64.11% | 82.23% | 63.73% | 80.37% | 62.61% |
|$\Delta(+\text{Geom.})$| +5.64% | +3.39% | +6.16% | +4.60% | +6.68% | +2.60%|
Lastly, we thank the reviewer again for the active engagement with the rebuttal and discussion despite the limited time. Your thoughtful feedback helped improve our paper.
---
Rebuttal Comment 6.1:
Comment: Dear Authors,
Thanks a lot for the updated results! This is satisfactory to me, I am increasing the score from 5 to 6, since **the authors have updated the rebuttals with results of 7B parameter range LLMs, as discussed before.**
Kudos on driving a successful rebuttal. | Summary: The paper proposes a variation to Direct Preference Optimization which takes into account "soft labels," which aim to reflect that not every evaluator might agree on the relative ordering of a pair of model outputs, or that there may otherwise be a lack of confidence in the ordering of outputs. They model this as the likelihood that one output is better than another and use this likelihood in selecting the winner from two prompts using geometric averaging of the confidence in each output.
This approach is straightforwardly extended to several algorithms in the DPO family. This is then used in fine-tuning a model. Experiments compare their variations of DPO with other DPO algorithms and show that an LLM evaluation agent prefers their model outputs over those of PaLM-2L and GPT4 more often than the evaluator prefers other DPO outputs.
Strengths: The central idea of the paper is well motivated and approaches the problem in what seems a quite reasonable manner. I found the authors generally seemed to do a good job of explaining why they were doing what they did in the design of GDPO.
Sections 2 and 3, the primary technical sections, are fairly understandable and clearly written.
I am not entirely convinced that the experiments are optimal but they do suggest that this method provides significant improvements over existing approaches.
I am not well positioned to evaluate the originality of this work. That said, I am not aware of other existing work that approaches this problem in the same way.
Weaknesses: I am only loosely familiar with the current research in this area but I am aware of criticism of the Bradley-Terry model. While this might help to learn preferences under the BT model, it might aid the paper to include a comment on whether this model is worth continued analysis.
Some aspects of the paper could be written more clearly to explain what is happening or why. In particular, I find the paper likely assumes that readers are extremely familiar with the most recent approaches to LLM evaluation and the corresponding norms for presenting these results. Not being familiar with these things, I feel there are several aspects of the paper that could use more explanation; some are listed below.
The definition of p^ is vital to a thorough understanding of the paper. Right now, I find its meaning somewhat difficult to parse when there are no subscripts. The paper would be improved by a clearer explanation of what it means in context. This would also make Figure 2 and some of Figure 1 clearer.
Similarly, in Eq. 10, y_w and y_l seem disconnected from y_1 and y_2. I generally understand what's going on but a more clear explanation would aid many readers.
The explanation of exactly what models you applied your techniques to may be clear to someone actively engaged in very similar research but was unclear to me.
Eq 17 needs clarification. With study its meaning can be interpreted, but seeing as both y's come from LLMs (as far as I understand) and are involved in testing, it is not immediately clear which meaning to assign to y_llm and y_test.
Some of the minor editing issues:
Sentence on line 113 is not correct.
line 129 - "Bradly-Terry"
line 158/159 - maybe "500,000 training instances are ..."?
Technical Quality: 3
Clarity: 2
Questions for Authors: Can you expand on the reasonableness of using an LLM to judge LLM performance? Can this lead to potential damage to model training in the future?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See question above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for careful reading and detailed feedback.
**> W1 (Problem from Bradley-Terry model)**
As the reviewer mentioned, objective functions stemming from the Bradley-Terry model cause "over-optimization" issues, which are inherent in DPO and its derivation from the Bradley-Terry model. Recall the objective of DPO and of reward models: $\max \log p_{\theta}(y_1 \succ y_2 | x) = \max \log \sigma(r_{\theta}(y_1) - r_{\theta}(y_2))$ (as explained in Section 2, L58 & L83). Since $p_{\theta}$ is a probability, the maximization objective forces $p_{\theta}(y_1 \succ y_2 | x) \rightarrow 1$.
As discussed in Section 3.2, because geometric averaging cancels the gradient from equally good samples (e.g. $\hat{p}=0.55$), our method successfully mitigates over-optimization issues. Please see **Figure 1 (f)** in the additional PDF; it visualizes that GDPO keeps the increase of the reward gap modest. Deriving better preference models beyond the Bradley-Terry model is an orthogonal but important future direction.
**> W2 (Clarity)**
We will follow your suggestions to improve the clarity of the paper.
**> W2.1**
First, following another reviewer's feedback, we will update the definition of soft preference labels as follows in the revision:
---
We assume that the binary preference labels ($l(y_1 \succ y_2 | x) = 1$) are sampled from the Bradley-Terry preference distribution with parameter $p^{\star}(y_1 \succ y_2 | x) = \sigma(r^{\star}(x,y_1) - r^{\star}(x,y_2))$. We define soft preference labels as estimates of the true preference probability: $\hat{p}_{x,y_1,y_2} := \hat{p}(y_1 \succ y_2 | x) \approx p^{\star}(y_1 \succ y_2 | x)$.
For instance, we can estimate this via Monte Carlo sampling:
$\hat{p} = \frac{1}{M}\sum_{i=1}^{M} l_i$, where $l_i \in \{0,1\}$,
which in practice corresponds to majority voting among $M$ people. The sampled binary preference may sometimes flip with probability $\epsilon$ (i.e. label noise). If the label noise is known, we may take the expectation over the noise: $\hat{p} = (1-\frac{1}{M}\sum_{i=1}^{M} \epsilon_i) \frac{1}{M}\sum_{i=1}^{M} l_i + \frac{1}{M}\sum_{i=1}^{M} \epsilon_i \frac{1}{M}\sum_{i=1}^{M} (1 - l_i)$, or we may ignore the noise if $\epsilon_i$ is small and $M$ is sufficiently large.
Alternatively, we can also estimate the soft preference directly via Bradley-Terry models with some reward function. The direct estimation is often adopted in AI feedback with scoring.
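A minimal sketch of these two estimators (the helper name is ours; the formulas mirror the definitions above):

```python
def soft_label_mc(labels, noise=None):
    """Soft preference label from M binary annotations l_i in {0, 1}.

    labels: list of 0/1 votes, 1 meaning y_1 was preferred over y_2
    noise:  optional per-annotator flip probabilities eps_i; if given,
            take the expectation over the label noise as in the text.
    """
    m = len(labels)
    p = sum(labels) / m                    # plain majority-voting estimate
    if noise is None:
        return p
    eps = sum(noise) / len(noise)          # average flip probability
    return (1 - eps) * p + eps * (1 - p)   # expectation over flip noise
```

For example, four votes of which three prefer $y_1$ give $\hat{p}=0.75$; a known 10% flip rate shrinks a unanimous $\hat{p}=1$ toward 0.9.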
---
In this paper, $\hat{p}$ always means $\hat{p}(y_1 \succ y_2 | x)$; we drop the arguments to reduce redundancy and to emphasize that $\hat{p}$ is a given label from the dataset. We will clarify this and include subscripts where the context requires them.
**> W2.2**
$y_w$ represents the winner response, and $y_l$ represents the loser response. As stated in Section 2, we assume $y_1 \succ y_2$ always holds in this paper unless otherwise mentioned (i.e. $y_w=y_1$, $y_l=y_2$). In Equation 10, to emphasize that the weighted geometric average is taken for the distribution, we employ $y_w$ and $y_l$ instead of $y_1$ and $y_2$. We will clarify this in the revision.
**> W2.3**
For the base LLM we used, we state clearly at the beginning of Section 4: "In the experiments, we use PaLM 2-XS for the base LLM". Furthermore, we provide additional results with the open Gemma-2B model; please see **Figure 1 (i)** in the additional PDF for the results.
**> W2.4**
Thank you for pointing out the notation. $y_{llm}$ stands for the response from the models trained with DPO/cDPO/GDPO, etc., and $y_{test}$ stands for the reference response from PaLM 2-L, GPT-4, or humans, which is used for the evaluation. We will change the notation from $y_{llm}$ to $y_{gen}$ (i.e. generated response) and from $y_{test}$ to $y_{ref}$ (i.e. reference response) for clarity.
**> W3**
We also thank you for pointing out the minor editing issues. We will fix them appropriately in the revised paper.
**> Q1**
First, because our formulation of soft preference labels and geometric averaging is not limited to AI feedback (it also covers, e.g., majority voting from humans), we believe the discussion of whether it is reasonable for an LLM to judge LLM performance is out of the scope of this paper.
However, we can justify LLM-as-a-judge with some evidence. **Figure 1 (c)** in the rebuttal PDF provides an agreement evaluation between human and LLM judges on Plasma Plan, comparing responses from PaLM 2-L and GPT-3.5; the agreement accuracy reaches 81.3%. This observation is consistent with previous literature (https://arxiv.org/abs/2306.05685, https://arxiv.org/abs/2309.00267). In addition, as mentioned in Section 4.2 (L200), LLM rating has a position bias (https://arxiv.org/abs/2305.17926); to mitigate this, we take the average of $\hat{p}_{\text{AI}}$ over the flipped orderings of ($y_1$, $y_2$) in the evaluation prompt.
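The position-bias mitigation amounts to averaging the judge's soft label over both presentation orders; as a sketch (the `judge` callable is a hypothetical interface, not the paper's API):

```python
def debiased_soft_label(judge, x, y1, y2):
    """Average an LLM judge's soft preference over both presentation orders.

    judge(x, first, second) is assumed to return the probability that the
    response shown first is preferred (a hypothetical interface).
    """
    p_forward = judge(x, y1, y2)         # P(y1 > y2) with y1 shown first
    p_backward = 1.0 - judge(x, y2, y1)  # P(y1 > y2) with y2 shown first
    return 0.5 * (p_forward + p_backward)
```

A constant additive bias toward the first-shown response cancels exactly under this averaging.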
Since this paper focuses on the RLHF/alignment phase, rather than pre-training, we think that the catastrophic degradation by the synthetic data would not happen. We will add these related discussions to Appendix in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. My question was not so much asking "does it currently work to use LLMs as judges" but, rather, what the impact of this can be in the future. It is, of course, expected that papers should consider what effects they may have on the world as a result of being published.
---
Rebuttal 2:
Title: Response to Reviewer EtLq
Comment: We appreciate the quick response and clarification. The potential social impacts have been discussed in Appendix A, "Broader Impacts" section (in the original submission), and following your clarification, we can further extend it considering the effect of AI feedback or training data synthesized by LLMs in the future.
The use of LLMs for AI feedback and synthetic data generation has significantly reduced the costs of manual annotation and data curation, enabling scalable learning. However, it remains unclear whether LLMs can accurately identify biases such as prejudice and discrimination when providing AI feedback; LLMs may wrongly assign preference labels and thereby introduce undesirable biases. Additionally, while agreement between human and LLM preferences is generally high (around 80-85%), the remaining disagreements could accumulate errors through iterative feedback processes, amplifying less preferred behaviors. Continuous human monitoring is therefore crucial to ensure safety and mitigate potential risks. Furthermore, learning with synthetic data, particularly in pre-training, has shown the potential for catastrophic performance degradation due to data distribution shifts. It is also important to be mindful of potential performance deterioration during post-training phases, including alignment, when using synthetic data.
We will include these discussions in the revised version. | Summary: This paper introduces a novel approach to aligning Large Language Models (LLMs) with human preferences by incorporating distributional soft preference labels. The authors argue that existing methods like Direct Preference Optimization (DPO) assume binary, deterministic preferences, which may not accurately reflect the nuanced nature of human judgments. To address this, they propose a modification to DPO that uses a weighted geometric average of LLM output likelihood in the loss function, allowing for a more nuanced representation of preferences. The method can be easily applied to any DPO-based algorithm and shows consistent improvements on standard alignment benchmarks, particularly for data with modestly-confident labels. The authors simulate soft preference labels using AI feedback from LLMs in their experiments.
Strengths: The approach of adapting different scaling factors in preference optimization is a handy and intuitive method.
Weaknesses: The paper lacks sufficient theoretical analysis to justify the use of "Geometric Averaging" for soft labels in preference optimization. While the authors compare their method to others in terms of scaling factors, they fail to provide an in-depth analysis of its effectiveness. Key questions remain unanswered: What are the core weaknesses of previous scaling factors? Why are scaling factors crucial in preference optimization? How does the proposed method fundamentally differ from prior work? The paper would benefit from a more rigorous examination of the method's impact on training dynamics, such as reward gaps and convergence stability. Without this analysis, it's challenging to fully understand and evaluate the method's contributions to the field.
Technical Quality: 3
Clarity: 3
Questions for Authors: The soft label addresses preference variability, which relates to label noise. While the authors discuss robustness to noise, it's unclear how this approach actually performs when faced with inconsistent or noisy preference labels. Can the authors provide a more detailed analysis of the method's robustness to label inconsistencies? This would help clarify the practical advantages of soft labels in real-world scenarios where human preferences may be inconsistent or contradictory.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately discussed the limitations in a separate section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thoughtful feedback. Please let us know if our responses in the following address your concerns.
**> Main Weaknesses of Previous Papers & How GDPO Resolves them**
As discussed in Section 3.2, GDPO can avoid (1) "over-optimization" issues (compared to DPO) and (2) objective mismatch between text generation and preference modeling (compared to cDPO), which are fundamental issues of previous works.
(1) Binary preference methods such as DPO suffer from "over-optimization" issues, where the maximization objective forces $p_{\theta}(y_1 \succ y_2 | x) \rightarrow 1$ (even if the soft label is around 0.5) and induces $r_{\theta}(y_1) - r_{\theta}(y_2) \rightarrow \infty$. This is also raised in the IPO paper (https://arxiv.org/abs/2310.12036). In contrast, GDPO can ignore the gradient from less-confident samples (as shown in Figure 1 (left)), which mitigates over-optimization.
(2) Soft preference methods with linear interpolation, such as cDPO, suffer from an objective mismatch between text generation and preference modeling. We have shown this in the synthetic experiments in Figure 1 (right) and demonstrated that the same phenomenon occurs in LLM experiments (Figure 4 (right)). By maintaining the scale of the gradient when soft labels are large and ignoring the update when the paired responses are equally good, GDPO avoids the objective-mismatch issue.
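To make the contrast concrete, the factor scaling the pairwise gradient $\nabla\log\pi_{\theta}(y_w)-\nabla\log\pi_{\theta}(y_l)$ can be sketched as follows (the loss forms are our reading of DPO, label-smoothed cDPO, and a $(2\hat{p}-1)$-scaled GDPO margin; a sketch, not the paper's exact implementation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_scale(method, margin, p_hat, beta=0.1):
    """Factor multiplying (grad log pi(y_w) - grad log pi(y_l)) for one pair.

    margin: implicit reward margin r_theta(x, y_w) - r_theta(x, y_l).
    """
    if method == "dpo":    # binary label: always pushes the margin up
        return beta * sigmoid(-beta * margin)
    if method == "cdpo":   # linear interpolation of the two label directions
        return beta * (p_hat * sigmoid(-beta * margin)
                       - (1.0 - p_hat) * sigmoid(beta * margin))
    if method == "gdpo":   # geometric averaging rescales the margin by 2*p_hat - 1
        c = 2.0 * p_hat - 1.0
        return beta * c * sigmoid(-c * beta * margin)
    raise ValueError(method)
```

At $\hat{p}=0.5$ the GDPO scale is exactly zero (equally good pairs contribute no update), while DPO keeps pushing the margin up no matter how large it already is.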
**> Analysis on Training Dynamics**
**Figure 1 (f)** in the additional PDF shows the log ratio $\log \frac{\pi_{\theta}}{\pi_{\text{ref}}}$ of winner/loser responses and the estimated reward gap $\log \frac{\pi_{\theta}(x,y_w)\pi_{\text{ref}}(x,y_l)}{\pi_{\text{ref}}(x,y_w)\pi_{\theta}(x,y_l)}$ on Plasma Plan and Anthropic Harmless (please refer to https://arxiv.org/abs/2404.14367 for a discussion of these metrics). We can see that DPO aggressively pushes down the log ratio and increases the reward gap, since the DPO objective forces the model to achieve $r_{\theta}(x,y_w)-r_{\theta}(x,y_l) \rightarrow \infty$, causing an over-optimization issue. cDPO is more conservative in pushing down the log ratio but leads to worse alignment quality due to objective mismatch. GDPO avoids both objective mismatch and over-optimization by keeping the increase of the reward gap modest.
**> Theoretical Analysis on the Optimality Bound**
We here provide a corollary derived from Theorem 4.1 of https://arxiv.org/abs/2406.01462, which shows that the bound on the optimality gap is improved by GDPO: from $O(C\sqrt{\epsilon_{\text{dpo}}})$ (DPO) to $O(C\sqrt{\epsilon_{\text{dpo}} - \epsilon_{\bar{p}}})$ (GDPO). Due to the word limit, we omit some assumptions presented in the previous paper, but we are happy to follow up during the discussion period upon request. We will also include these descriptions in the revision.
First of all, we make the following two assumptions:
**Assumption 1** (Overestimation of the learned reward):
For all $x,y^{1},y^{2} \sim \rho \circ \pi_{\text{ref}} \text{ s.t. } y^{1} \succ y^{2}$ and the learned reward function $\hat{r}$, we have
$r^\star(x,y^{1}) - r^\star(x,y^{2}) \leq p^\star(y^{1} \succ y^{2} | x)(\hat{r}(x,y^{1}) - \hat{r}(x,y^{2})).$
**Assumption 2** (Relation between GDPO and DPO):
For the learned reward from GDPO $\widehat{r_{\text{gdpo}}}$ and DPO $\widehat{r_{\text{dpo}}}$, we assume that
$\Delta \widehat{r_{\text{gdpo}}}=(2p^\star(y^1\succ y^2|x)-1) \Delta \widehat{r_{\text{dpo}}}$.
where $y^1\succ y^2$ and $\Delta\widehat{r} = \widehat{r}(x,y^1)-\widehat{r}(x,y^2)>0$.
**Corollary 1**: Let $\pi_{\text{ref}}$ be any reference policy such that the global coverage assumption (from the previous paper) holds. For any policy $\pi_{\text{dpo}}$ for which the event in the in-distribution reward-learning assumption (from the previous paper) and Assumptions 1 and 2 hold, we have
$E_{x\sim\rho} [E_{y^1,y^2 \sim \pi_{\text{ref}}( \cdot | x) \text{ s.t. } y^1 \succ y^2} [( r^\star(x,y^1) - \widehat{r_{\text{gdpo}}}(x,y^1) - r^\star(x,y^2) + \widehat{r_{\text{gdpo}}}(x,y^2))^2] ] \leq \epsilon_{\text{dpo}} - \epsilon_{\bar{p}}$,
and therefore
$ J(\pi^\star)-J(\pi_{\text{gdpo}})\leq O(C\sqrt{\epsilon_{\text{dpo}} - \epsilon_{\bar{p}}}).$
Since $ J(\pi^\star)-J(\pi_{\text{dpo}}) \leq O(C\sqrt{\epsilon_{\text{dpo}}})$, the bound is improved by GDPO.
**Proof of Corollary 1**
$E_{x\sim\rho}[E_{y^1,y^2\sim\pi_{\text{ref}}(\cdot|x)}[(r^\star(x,y^1)-\widehat{r_{\text{gdpo}}}(x,y^1)-r^\star(x,y^2)+\widehat{r_{\text{gdpo}}}(x,y^2))^2-(r^\star(x,y^1)- \widehat{r_{\text{dpo}}}(x,y^1)-r^\star(x,y^2)+\widehat{r_{\text{dpo}}}(x,y^2))^2]]$
$=E_{x\sim\rho}[E_{y^1,y^2\sim\pi_{\text{ref}}(\cdot|x)}[\Delta\widehat{r_{\text{gdpo}}}^2+2\Delta r^\star(\Delta\widehat{r_{\text{dpo}}}-\Delta\widehat{r_{\text{gdpo}}})-\Delta\widehat{r_{\text{dpo}}}^2]]$
$=E_{x\sim\rho}[E_{y^1,y^2\sim\pi_{\text{ref}}(\cdot|x)}[4(1-p^\star(y^1\succ y^2|x))\Delta\widehat{r_{\text{dpo}}}(\Delta r^\star - p^\star(y^1\succ y^2|x) \Delta\widehat{r_{\text{dpo}}})]]$
$\leq0$,
then some small $\epsilon_{\bar{p}}\geq 0$ exists such that
$E_{x\sim\rho}[E_{y^1,y^2 \sim \pi_{\text{ref}}(\cdot|x) \text{ s.t. } y^1\succ y^2} [(r^\star(x,y^1) - \widehat{r_{\text{gdpo}}}(x,y^1) - r^\star(x,y^2)+\widehat{r_{\text{gdpo}}}(x,y^2))^2]]+\epsilon_{\bar{p}}\leq\epsilon_{\text{dpo}}.$
**> Performance under Label Noise**
In **Figure 1 (g)** in the additional PDF, we provide the winning rate on Plasma Plan under different label-noise levels $\epsilon$ (please see the response to Reviewer **GdY5** for the definition of label noise). We consider flipping the binary label (B-Flip) or the soft label (S-Flip) with probability $\epsilon$, and taking the expectation of the soft labels over the flip probability $\epsilon$ (S-Ave.). While DPO is often affected, GDPO mitigates the noise and performs the best in all cases.
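For reference, the three noise models can be sketched as follows (this is our reading of the short description above; the precise definitions are in the response to Reviewer GdY5):

```python
import random

def perturb_label(p_hat, eps, mode, rng=None):
    """Apply one of three label-noise models to a clean soft label p_hat.

    p_hat is the clean soft label for (y_w, y_l); eps is the noise level.
    """
    rng = rng or random.Random(0)
    if mode == "b-flip":   # flip the binary winner/loser label w.p. eps
        return 0.0 if rng.random() < eps else 1.0
    if mode == "s-flip":   # flip the soft label w.p. eps
        return 1.0 - p_hat if rng.random() < eps else p_hat
    if mode == "s-ave":    # deterministic expectation over the flip noise
        return (1.0 - eps) * p_hat + eps * (1.0 - p_hat)
    raise ValueError(mode)
```

Note that S-Ave. is deterministic (an expectation), while B-Flip and S-Flip inject stochastic corruption per sample.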
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: Thank you for your response.
After reviewing the authors' replies to the other reviewers and following the ongoing discussions, I find the concept of "soft preference labels with AI feedback" quite interesting, and I have raised my score accordingly.
I acknowledge that the proposed method effectively addresses some algorithmic challenges in DPO, and the analysis conducted on it appears promising. My main concern now is whether the TLDR and Anthropic HH datasets are sufficient to fully validate the method's efficacy. However, I understand that this limitation is unavoidable given the scarcity of available public data, so I did not consider this part when deciding my score.
---
Rebuttal 2:
Comment: We thank the reviewer for reading the rebuttal and the thoughtful consideration.
For the benchmarks, Reddit TL;DR and the Anthropic Helpfulness and Harmlessness datasets are widely adopted in previous RLHF/alignment works (e.g. https://arxiv.org/abs/2305.10425, https://arxiv.org/abs/2309.06657, https://arxiv.org/abs/2309.16240, and more). As the reviewer mentioned, because soft labels have rarely been used and the community lacks a suitable public dataset, we needed to build our experiments on top of popular configurations. We hope our work inspires the community to leverage soft labels and to release open datasets with soft-label annotations in the future.
We will include all the discussion with the reviewer (analysis on training dynamics & theoretical analysis) in the revision. Thank you again for taking your time. | Summary: This paper introduces the concept of soft preference labels and proposes leveraging this distributional information, alongside deterministic binary preferences, to enhance Direct Preference Optimization (DPO) losses. By incorporating a weighted geometric average of the LLM output likelihood in the loss function, the authors demonstrate that geometric averaging consistently improves performance on standard benchmarks.
Strengths: 1. The paper proposes a novel method to incorporate additional distributional information that is sometimes available in vote pooling data curation processes (assuming this definition is correct).
2. The demonstration of the scaling factor $w_\theta$ in relation to the gradient provides a more intuitive understanding of why this method improves performance.
3. The paper includes fairly extensive evaluation studies.
Weaknesses: The writing in general is very hard to follow and at times very unclear. Some examples are as follows:
1. The soft Preference Labels $\hat{p}$ is poorly defined. What exactly does the perturbation model refer to? Is it the personalized preference model relative to the population-averaged model? If so, over what is the expectation taken? Why is this $\hat{p}$ readily available data? Should it be $\widehat{p}_{x,y_1,y_2} = 1 - \frac{1}{M}\sum_{i=1}^M \epsilon_i(x,y_1,y_2) \approx 1 - \mathbb{E}[\epsilon(x,y_1,y_2)] = p^\star(y_1>y_2 | x)$ ?
2. The introduction is not easy to parse. Providing a concrete example of the problem being studied would help. For instance, the statement "Nevertheless, many existing RLHF algorithms and Direct Preference Optimization (DPO) [38] variants assume binary deterministic preferences" is confusing, as the usual assumption for DPO is a Bradley-Terry preference model, which is not deterministic. This isn't clarified until the next section.
3. There is no justification for why weighted geometric averaging over the LLM output makes sense. If this approach makes mathematical sense after reparametrization, it would be very helpful to explain this clearly.
4. The explanation in chapter 3.3 is very hard to follow.
5. Are soft preference labels readily available with the current data curation process, or are you advocating for their utilization and thus encouraging the community to propose methods for obtaining them?
Technical Quality: 2
Clarity: 2
Questions for Authors: As previously mentioned in the weaknesses section, I am mainly confused about the soft preference labels: are you proposing to obtain these soft preference labels instead of the binary deterministic ones currently used, or are you suggesting that we have been under-exploiting our data, which already contains this information?
Additionally, it would be helpful to explain how this distributional information is philosophically different from the Bradley-Terry model, where preference labels are random draws from a fixed distribution representing population-level preferences.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors address some limitation in their writing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback.
**> W1 & Q2 (Definition of Soft Preference Labels)**
Thank you for pointing this out. We will update the definition of soft preference labels as follows in the revision:
We assume that the binary preference labels ($l(y_1 \succ y_2 | x) = 1$) are sampled from the Bradley-Terry preference distribution with parameter $p^{\star}(y_1 \succ y_2 | x) = \sigma(r^{\star}(x,y_1)-r^{\star}(x,y_2))$. We define soft preference labels as estimates of the true preference probability: $\hat{p}_{x,y_1,y_2}:=\hat{p}(y_1 \succ y_2 | x) \approx p^{\star}(y_1 \succ y_2 | x)$. For instance, we can estimate this via Monte Carlo sampling:
$\hat{p} = \frac{1}{M}\sum_{i=1}^{M} l_i$, where $l_i \in \{0,1\}$,
which in practice corresponds to majority voting among $M$ people. The sampled binary preference may sometimes flip with probability $\epsilon$ (i.e. label noise). If the label noise is known, we may take the expectation over the noise: $\hat{p} = (1-\frac{1}{M}\sum_{i=1}^{M} \epsilon_i) \frac{1}{M}\sum_{i=1}^{M} l_i+\frac{1}{M}\sum_{i=1}^{M}\epsilon_i \frac{1}{M}\sum_{i=1}^{M} (1-l_i)$, or we may ignore the noise if $\epsilon_i$ is small and $M$ is sufficiently large. Alternatively, we can estimate the soft preference directly via Bradley-Terry models with some reward function; direct estimation is often adopted in AI feedback with scoring.
**> W2 (Does DPO assume binary deterministic preference?)**
We initially use the term "binary deterministic preferences" because the objective of DPO and of reward models for RLHF is $\max\log p_{\theta}(y_1\succ y_2|x)=\max\log \sigma(r_{\theta}(y_1)-r_{\theta}(y_2))$ (as explained in Section 2, L58 & L83), and since $p_{\theta}$ is a probability, the maximization objective forces $p_{\theta}(y_1\succ y_2|x) \rightarrow 1$. The IPO paper (https://arxiv.org/abs/2310.12036) points out that such a deterministic formulation of DPO induces $r_{\theta}(y_1)-r_{\theta}(y_2)\rightarrow\infty$ and causes over-optimization issues. However, as the reviewer pointed out, we agree that the term "deterministic" may confuse readers. In the revision, we will avoid "deterministic" in this context and simply say "binary labels" or "binary preferences" for better readability.
**> W3 (Clarification of Geometric Averaging)**
Weighted geometric averaging of the likelihood is one of the design choices for regularizing the objective of DPO variants. For instance, an RLHF algorithm based on Nash equilibrium (https://arxiv.org/abs/2312.00886) examined geometric averaging of the current policy and the reference policy as a regularizer (the other option being an EMA).
Geometric averaging shrinks overly large probabilities when the soft preference is far from 1. Figure 1 (b) in the rebuttal PDF provides illustrative examples of geometric-averaged Gaussian distributions. The geometric-averaged winner distribution $\bar{q}_w(x)=q_w(x)^{\hat{p}}q_l(x)^{1-\hat{p}}/Z(x)$ is smoothly regularized when the soft preference $\hat{p}$ is small. After transforming the objective, geometric averaging mitigates the over-optimization issue in DPO and avoids the objective mismatch in cDPO by reducing the gradient scaling factor for samples with small soft preferences (Sections 3.2 & 3.3).
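As a toy illustration of this smoothing (using discrete distributions instead of the Gaussians in the rebuttal PDF; purely illustrative, not the paper's implementation):

```python
def geometric_average(q_w, q_l, p_hat):
    """Pointwise weighted geometric mean q_w^p * q_l^(1-p), renormalized by Z.

    q_w, q_l: discrete distributions over the same support; p_hat in [0, 1].
    """
    unnorm = [(w ** p_hat) * (l ** (1 - p_hat)) for w, l in zip(q_w, q_l)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

q_w, q_l = [0.7, 0.2, 0.1], [0.1, 0.2, 0.7]
# p_hat = 1 recovers q_w exactly; p_hat = 0.5 pulls the winner's large
# probability mass down toward the loser's, yielding a smoother target.
smoothed = geometric_average(q_w, q_l, 0.5)
```

With $\hat{p}=0.5$ the averaged distribution becomes symmetric in the two responses, matching the intuition that equally preferred responses should contribute (almost) no preference-learning signal.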
**> W4 (Clarification of Section 3.3)**
In Section 3.3, we present a toy example of the objective mismatch between text generation and preference modeling that arises when the objective function is linearly interpolated, as done in cDPO. In Figure 1 of the main text, we compare the distributions of policies learned with DPO objectives. Note that we optimize a parameterized reward function, and the learned policy is recovered analytically. The true reward is a linear function of the index. The figure shows that cDPO accurately fits the data distribution, which has a larger probability mass on the smaller index (smaller reward), while DPO and GDPO assign a larger probability mass to the larger index (larger reward). The distribution learned by cDPO is desirable from the preference-modeling perspective, but not from the text-generation perspective, because greedy decoding only considers the largest probability mass, which here carries the lower reward. We have also demonstrated that the same phenomenon occurs in our LLM experiments (Figure 4). We can provide further explanation if needed.
**> W5 & Q1 (Do soft labels help improve alignment?)**
In the paper, we advocate using soft labels to improve alignment quality, as they contain more information than binary labels. Beyond the AI feedback used in this paper, if existing data include point-wise scores, soft labels can easily be obtained through direct estimation with a Bradley-Terry model. If multiple human raters are available, we can estimate more accurate soft preferences through majority voting by increasing the pool of raters.
In Table 2 of the main text, we have shown that the dataset with richer soft preferences (Plasma Plan) achieves a larger performance gain than the one with sparser soft labels (Anthropic Helpful & Harmless). To compare the trend directly, we additionally construct a new preference-label dataset on top of Anthropic Helpful and Harmless. In Figure 1 (d) of the rebuttal PDF, we show a histogram of the soft labels $\hat{p}$ in the new preference datasets simulated with AI feedback. We collect competitive paired samples, combining the winner responses of the original dataset with responses from PaLM 2-L, to realize richer and more diverse preference distributions with sufficient mass around modest confidence (e.g. $\hat{p}\in[0.7, 0.9)$).
Figure 1 (e) in the rebuttal PDF shows the winning rate learned from the new preference distribution in Figure 1 (d). The results highlight that rich soft labels help align LLMs better than the original dataset with sparse soft labels (especially notable on Anthropic Harmless; e.g., a +12.93% absolute improvement from geometric averaging on the binary winning rate).
---
Rebuttal 2:
Comment: Dear Reviewer,
Although we are at the last minute before the discussion period closes, we would appreciate it if you could check our updates, and please feel free to raise further questions if you have any. We are happy to address them. Thank you so much for your time.
Sincerely,
Authors
---
Rebuttal Comment 2.1:
Comment: Dear Authors,
Thank you for the detailed response. I have read the rebuttals that the authors provided as well as other concurrent discussions, and I am willing to raise my score accordingly since I think the idea is interesting enough. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for the detailed and thoughtful feedback. To address the concerns and questions, we provide a one-page PDF for the figures and tables of additional experiments in addition to our response to each reviewer. Here is a brief overview of the PDF contents:
- **(a)** Finetuning with multiple preferences from both Anthropic Helpful and Harmless datasets (Reviewer **SyyB**)
- **(b)** Illustrative examples of Geometric-averaged Gaussian distribution (Reviewer **GdY5**)
- **(c)** Agreement of human-LLM judge (Reviewer **EtLq**)
- **(d)** Comparison between original and new soft preference distribution to simulate diverse soft labels (Reviewer **GdY5**)
- **(e)** Winning rate with new soft preference distribution presented in (d) (Reviewer **GdY5**)
- **(f)** Log ratio and Estimated reward gap on Plasma Plan and Anthropic Harmlessness among DPO/cDPO/GDPO (Reviewer **msmA & EtLq & 98AM**)
- **(g)** Winning rate under preference noise $\epsilon \in \{0.1,0.2,0.3\}$ (Reviewer **msmA**)
- **(h)** Online alignment with extra reward model/self-preference (Reviewer **98AM**)
- **(i)** Performance with Gemma-2B, another open-source LLM, instead of PaLM 2-XS (Reviewer **98AM**)
In addition, we provide a theoretical analysis of GDPO in response to Reviewer **msmA**.
Please let us know if there are remaining concerns or questions, which we would be happy to address again.
Pdf: /pdf/5f4b461c0102e049fc25b5c8f3a6d2535e03b5b4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Many existing algorithms for aligning large language models (LLMs) with human preferences assume these preferences are binary and deterministic. However, preferences can vary among individuals and should be represented distributionally to capture subtle relationships between responses. This work introduces distributional soft preference labels and enhances Direct Preference Optimization (DPO) by incorporating a weighted geometric average of the LLM output likelihood in the loss function. This adjustment scales the learning loss based on soft labels, minimizing loss for equally preferred responses. This modification is applicable to any DPO family, addressing the objective mismatch seen in prior work. Experiments using simulated soft preference labels with AI feedback from LLMs show that geometric averaging consistently improves performance on standard alignment benchmarks, particularly with data containing a majority of modestly-confident labels.
Strengths: 1. This paper is well written. The notations are clear and the literature review is sufficient.
2. The proposed distributional soft labels combined with weighted geometric averaging of the output likelihood in the loss function can mitigate the mismatch between the text generation and preference modeling.
3. Simulation of soft preference labels verifies the proposed method across diverse preference-label distributions. This provides insightful evidence and implications for performance in practical scenarios.
Weaknesses: Given the multiple aspects of human preference, the proposed method only uses a scalar soft-label score, which rests on the basic assumption of transitivity derived from the BT model and its formulation. Therefore, the trade-off between diverse aspects of preference, especially when preferences are contradictory, is not explicitly addressed.
Technical Quality: 3
Clarity: 3
Questions for Authors: It would be interesting to understand how a model derived from the transitivity assumption, or from soft-label-based scalar ratings, could reflect multiple aspects of real-world preferences, e.g., the balance between helpfulness and harmlessness in the Anthropic dataset.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s thoughtful feedback.
**> W1 & Q1 (Can GDPO handle multiple preference labels at once?)**
Following the reviewer’s suggestion, we finetune the LLM on the Anthropic Helpfulness and Harmlessness preference datasets simultaneously to study the balance between different real-world preferences. For example, the Harmlessness dataset instructs the LLM to provide concise refusals (e.g., "I don't know") when content is inappropriate, while the Helpfulness dataset encourages detailed responses; the two can conflict with each other.
The experimental results are shown in **Figure 1 (a)** of the rebuttal PDF. Soft preference methods appear to outperform vanilla DPO, presumably because they avoid the over-optimization problem. GDPO consistently outperforms all baseline methods, consistent with our other experiments.
We will add these results in the Appendix in the revision. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and thank the authors for their detailed and candid responses.
It is straightforward to understand that the proposed method outperforms all baseline methods, although the structural gap in handling contradictory real-world preferences remains unresolved.
I maintain my positive opinion on this paper.
---
Rebuttal 2:
Comment: We thank the reviewer for reading the responses and your thoughtful consideration.
As the reviewer pointed out, it is an important direction in practical scenarios to align LLMs to multiple preferences conflicting with each other. In addition to our experimental results, we will include this point in the revision. Thank you again for taking your time. | null | null | null | null | null | null |
Enhancing Large Language Models through Adaptive Tokenizers | Accept (poster) | Summary: The presented paper proposes an adaptive tokenization scheme that is learned jointly with an LLM and assesses its performance on downstream tasks for smaller model sizes and token budgets.
Strengths: - Interesting idea to include both the loss of the current iteration as well as the loss of the previous iteration via a momentum term to stabilize the training procedure
- Interesting results on Cross-Model Adaptability of vocabularies for different model sizes
Weaknesses: ### Weaknesses
---
- **[medium]**: both the model size (up to 410M parameters) and the corpus size (16B tokens) seem to be on the smaller side and it is unclear if these findings would generalize to the billion level parameter level and trillion level token budget which is more representative of the scales for current state-of-the-art LLMs.
- **[medium]**: The fertility of a tokenizer is essential for comparing the average sequence length as well as performance across all supported languages i.e. a fertility of 1 would indicate that every single word is contained in the vocabulary **[1]**. I believe those values should be compared across all methods to give a better understanding.
- **[medium]**: For ARC-C, LogiQA, PIQA, and Winogrande (4/9) the scores are actually lower than the Unigram baseline or within the standard deviation range which calls the significance of the results into question. Additionally, the model size scaling experiments indicate a reduction in gains +2.29 (70M), +2.01 (160M), +1.92 (410M) over the baselines and the gap will likely continue to narrow as we scale up the model size, potentially losing its effectiveness for production-scale models. Ideally, we'd want some sort of significance tests to verify the impact.
- **[medium]**: the loss calculation in Equation 2 seems expensive for a large vocabulary $V$ as well as a large corpus. How big is the impact on the runtime and/or is this done on mini-batches? How does this scale with a bigger corpus i.e. within + beyond what is considered in Table 5? Naturally, from the ablation we would just scale both, when does this become too expensive or flatten out in terms of performance gains? Some scaling plots for both variables would be helpful for practitioners.
### Minor Comments & Typos
---
- please sort references such that [7,2] is ordered as [2,7]
- the fraction balance method in Table 5 needs brackets to reflect the correct fraction from l. 349
### References
---
- **[1]**: [How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models](https://aclanthology.org/2021.acl-long.243) (Rust et al., ACL-IJCNLP 2021)
Technical Quality: 2
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Might need a separate section to address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive feedback on our work and your thoughtful review.
**Q1. Both the model size (up to 410M) and the corpus size (16B) seem to be on the smaller side and it is unclear if these findings would generalize to the billion level parameter level and trillion level token budget.**
To demonstrate the scalability of our proposed ADAT method, we expanded our experiments to larger models and a larger corpus. Specifically, we add results (table below) from training a **1B model** on a **60B-token corpus**. The results demonstrate that, with larger models and more data, ADAT continues to show substantial improvements over the baseline, indicating strong scalability.
| | Unigram | ADAT |
| ---- | ------- | ------------- |
| Avg. | 49.11 | 51.20 (+2.09) |
**Q2. The fertility of a tokenizer is essential for comparing the average sequence length as well as performance across all supported languages[1]. I believe those values should be compared across all methods to give a better understanding.**
Regarding the fertility of tokenizers, we have included a comparative analysis of the fertility rates for all methods. We apply the four English treebanks used in Reference [1] and present the average values in the table below.
| | BPE | BytePiece | BytePiece+ADAT | Unigram | Unigram+ADAT |
| --------- | ---- | --------- | -------------- | ------- | ------------ |
| Fertility | 1.25 | 1.07 | 1.08 | 1.32 | 1.49 |
It's crucial to recognize that the fertility metric does not directly correlate with a model's final performance. For example, a fertility value of 1 indicates a vocabulary that covers all corpus words, similar to a bag of words model, which fails to capture intrinsic semantic connections like word roots or affixes. At the other extreme, high fertility could lead to character-level or byte-level tokenization, resulting in over-segmentation and a loss of semantic priors.
[1] How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models (Rust et al., ACL-IJCNLP 2021)
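For reference, fertility as used in [1] is simply the average number of subword tokens produced per word; a minimal sketch (the toy tokenizer is hypothetical, standing in for a real BPE/Unigram model):

```python
def fertility(tokenize, words):
    """Average subword tokens per word; 1.0 means every word is one token."""
    return sum(len(tokenize(w)) for w in words) / len(words)

# Hypothetical tokenizer: words longer than 4 characters split in half.
toy = lambda w: [w] if len(w) <= 4 else [w[:4], w[4:]]
print(fertility(toy, ["the", "tokenizer", "is", "fast"]))  # → 1.25
```

A word-level tokenizer (one token per word) would score exactly 1.0, which, as noted above, is not automatically better for downstream performance.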
**Q3.** **The significance of the results and losing its effectiveness for production-scale models. we'd want some sort of significance tests to verify the impact.**
While ADAT scores are lower than Unigram on very few tasks, it outperforms the baseline on 7 out of 9 tasks for the 70M model and on 8 out of 9 tasks for the 410M model, with an overall improvement of approximately 2 points. This may be due to different datasets requiring different sub-optimal tokenization strategies. ADAT maintains accuracy on these datasets while improving performance on others.
We used **t-tests** to compare the results of the ADAT method with the Unigram baseline to statistically validate the performance differences. For both the 70M and 410M models, the p-values comparing the ADAT method to the baseline were 0.002 and 0.0009, respectively, indicating that **ADAT achieved statistically significant performance gains over the baseline**. (p-value < 0.05).
To address the concern regarding the impact of model size scaling and the potential reduction in gains, we conducted statistical tests and additional experiments as follows,
- To assess the effect of the ADAT algorithm across different model sizes, we performed **t-tests** on **the performance gains provided by ADAT relative to the baseline** for the 70M and 410M models. The resulting **p-value of 0.48** indicates that the performance improvement of the ADAT method **does not significantly differ between the 70M and 410M** models, showing **consistent gains across various model sizes**.
- Furthermore, as shown in the table in the response to Q1, the 1B model's performance gain (2.09) is higher than that of the 410M model (1.92) and similar to that of the 160M model (2.01), demonstrating that ADAT's effectiveness scales with model size.
These results collectively confirm that the ADAT method is effective across various model sizes.
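For transparency, the statistic underlying such paired comparisons is easy to reproduce; a minimal sketch (the reported p-values would then come from Student's t distribution with n-1 degrees of freedom, e.g. via scipy.stats.ttest_rel):

```python
import math

def paired_t_statistic(scores_a, scores_b):
    """Paired t statistic over per-task scores: t = mean(d) / (sd(d) / sqrt(n))."""
    d = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of the differences
    return mean / math.sqrt(var / n)
```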
**Q4. Equation 2 seems expensive. How big is the impact on the runtime and/or is this done on mini-batches? How does this scale with a bigger corpus i.e. within + beyond what is considered in Table 5?**
Equation 2 is employed for generating the initial vocabulary, which is expensive for large datasets. For this, **a 5GB subset of** the corpus was used. Furthermore, during the actual calculations, an approximation of the loss is applied. The empirical runtime to produce an initial vocabulary of 150k is 914 seconds.
Furthermore, we have expanded the analysis of the inference data volume (tokens) and the initial vocabulary size; both variables are explored within and beyond the typical settings. The expanded results are displayed in the table below. We observe that an inference data volume of 100M tokens is sufficient, with larger volumes yielding only marginal improvements. Regarding the initial vocabulary size, increasing it to 150K is important for performance. However, increasing it further to 200K yields almost no improvement, indicating that the 150K vocabulary likely already includes most of the final candidate tokens, so further increases bring no additional benefit.
| Infer tokens / Init. vocab | 75K | 100K | 150K | 200K |
| ----- | ----- | ----- | ----- | ----- |
| 1M | 42.89 | 43.07 | 43.13 | 43.20 |
| 10M | 43.19 | 43.39 | 43.74 | 43.77 |
| 100M | 43.42 | 43.78 | 44.51 | 44.56 |
| 1000M | 43.45 | 43.83 | 44.53 | 44.57 |
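Abstractly, the step that shrinks this initial vocabulary down to the final size can be caricatured as scoring tokens and keeping the best ones (a schematic sketch only; the combined loss of Equation 2 is abstracted into a per-token utility, and all names are hypothetical):

```python
def prune_vocabulary(utilities, vocab_size):
    """Keep the vocab_size tokens with the highest utility scores.

    utilities: dict token -> utility, e.g. how much the combined
    compression/LM loss would increase if that token were removed.
    """
    ranked = sorted(utilities, key=utilities.get, reverse=True)
    return set(ranked[:vocab_size])

utils = {"the": 9.0, "token": 4.5, "##izer": 4.0, "zq": 0.1}
kept = prune_vocabulary(utils, 3)  # drops the low-utility token "zq"
```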
**Q5. Typos.**
Thanks for the corrections regarding the typos in sort references and Table 5. We will carefully examine the manuscript and rectify typos.
**Q6. Discussion of Limitations.**
Please refer to General Response.
We hope we have adequately addressed your concerns. If there is still anything unclear, Please feel free to let us know during the rebuttal window.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the additional details and experiments. I've raised my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your recognition and for raising your score! Your support is greatly appreciated. | Summary: This study introduces an adaptive tokenizer whose development is integrated with the performance of the LLM. The tokenizer has the particularity that it is fine-tuned based on the model’s perplexity during training. Empirical results show that this approach improves accuracy compared to “traditional” tokenization methods.
Strengths: - Well motivated and easy to understand
- Experiments are very comprehensive
- Insightful ablation study
- Answers most of the questions one may have about a tokenizer: impact on perplexity, accuracy on downstream tasks, and performance as a function of vocabulary size
Weaknesses: - There are only comparisons with very common tokenizers: BPE, BytePiece, and Unigram. This is enough to guide engineers but for scientific work, I would expect comparisons with related work, even if they are not widely adopted. For instance, how does it compare to other adaptive tokenizers such as the one proposed by “Task-Adaptive Tokenization: Enhancing Long-Form Text Generation Efficacy in Mental Health and Beyond” (not cited)?
- The limitations of this work are not very well discussed. When does ADAT completely fail? For which scenario shouldn’t we use it?
Technical Quality: 2
Clarity: 3
Questions for Authors: The main suggestion that I have for this work would be to compare it with other tokenizers that might not be mainstream but that have been shown to perform better. Comparing it with other Task-Adaptive Tokenizers would be a start.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts to enhance the quality of our manuscript. We appreciate the issues you identified, and we believe we have thoroughly clarified and addressed them as follows.
**Q1. There are only comparisons with very common tokenizers: BPE, BytePiece, and Unigram. This is enough to guide engineers but for scientific work, I would expect comparisons with related work, even if they are not widely adopted. For instance, how does it compare to other adaptive tokenizers such as the one proposed by “Task-Adaptive Tokenization: Enhancing Long-Form Text Generation Efficacy in Mental Health and Beyond” (not cited)?**
Thanks for your feedback regarding the comparison methods in our experiments. Task-adaptive tokenizers represent a promising direction, yet they **serve a different purpose from the methodology outlined in our paper**. Like BPE and Unigram tokenizers, our proposed method ADAT is designed to learn a general tokenizer. In contrast, the task-adaptive tokenizer [1] you mentioned focuses on integrating tokenizers derived from varying data distributions.
In the current experimental setup, we incorporated the task-adaptive tokenizer [1] by utilizing data from 'the Pile' as Distribution A, while employing the training sets of the evaluation tasks ARC-E, PIQA, and SciQ as Distribution B. This approach facilitated the generation of a vocabulary through task-adaptive tokenization. The outcomes, presented in the table below, illustrate that the task-adaptive method exhibits robust performance on ARC-E, PIQA, and SciQ. However, on the other 6 tasks its performance is inferior to ADAT, and on 5 of those tasks it falls below the baseline, attributable to its merge strategy favouring task-specific tokens and consequently neglecting universal tokens.
Given that the task-adaptive methodology accesses an expanded dataset from Distribution B (the training sets of ARC-E, PIQA, and SciQ), the findings suggest its limitations as a universally applicable tokenization strategy. Detailed discussions and comparisons with Reference [1] are planned for the final version of our paper to thoroughly investigate these observations.
These findings are significant as they demonstrate the robustness of our method, even when compared to more specialized, task-adaptive tokenization strategies. We hope this additional analysis addresses your concerns and further validates the effectiveness of our proposed method.
| | ARC-C | ARC-E | Boolq | Lambda | LogiQA | PIQA | SciQ | SST-2 | Winogrande | Avg. |
| ----------------------- | ----- | ----- | ----- | ------ | ------ | ----- | ----- | ----- | ---------- | ----- |
| Unigram | 19.54 | 37.04 | 53.06 | 17.27 | 23.20 | 60.50 | 68.10 | 49.77 | 51.46 | 42.22 |
| ADAT(Ours) | 18.46 | 40.57 | 61.19 | 17.97 | 24.22 | 59.93 | 72.40 | 54.24 | 51.62 | 44.51 |
| Task-specific tokenizer | 17.15 | 37.82 | 55.63 | 14.36 | 22.73 | 59.58 | 69.20 | 49.08 | 50.36 | 41.77 |
[1] Task-Adaptive Tokenization: Enhancing Long-Form Text Generation Efficacy in Mental Health and Beyond. EMNLP2023
**Q2. Discussion of Limitations.**
We apologize for not clearly and separately discussing the limitations of our method in the manuscript. We will add the following discussion of limitations to the beginning of the Conclusion section as a separate part in the final version.
The adaptive nature of our proposed tokenizer method introduces variations in tokenizers across different model sizes, leading to inconsistent vocabularies. This inconsistency complicates tasks such as knowledge distillation and speculative decoding, which rely on the assumption of a uniform vocabulary across both smaller and larger models.
We hope that our answer has addressed your concerns. Please feel free to let us know during the rebuttal window if there is still anything unclear. We appreciate your suggestions and comments! Thank you!
---
Rebuttal 2:
Comment: Dear Reviewer GBKH,
Thank you once again for the time you spent reviewing our paper and for your efforts to enhance the quality of our manuscript. We hope that our response can fully address your concerns.
Given that the discussion period concludes on August 13th, we would appreciate any further questions you might have about our paper, and we are glad to have a discussion with you over the coming days. If all the concerns you have raised have been addressed by our response, would you mind considering re-evaluating our work based on our response?
---
Rebuttal Comment 2.1:
Comment: Thank you for the additional experiments.
I increased my score but also significantly decreased my confidence score. I was quite sure that your work was not the first to propose an adaptive tokenizer for this use case.
Especially in neural machine translation, adaptive tokenization has been studied a lot. But I have to admit that I can't find the papers that I'm thinking of, hence my late reply.
---
Reply to Comment 2.1.1:
Comment: We truly appreciate your revised evaluation and the increase in score, your support is valuable to our work. Thank you very much.
We have conducted further research on the existing work of adaptive tokenizers, including in the field of neural machine translation (NMT). For example,
ONE-Piece[1] proposed a subword tokenization using morphological segmentation and vocabulary communication to address OOV problem. This method leverages the correspondence between two languages to create the tokenizer, specifically tailored for translation tasks. Moreover, it differs from our approach by not adapting to the model. AT[2] adapts the tokenizer of the pretrained model to transfer the pretrained language model to new domains. It defines adaptive tokenization as augmenting the existing tokenizer and fixed subword embeddings with new entries from a novel corpus. But it does not create a general tokenizer. Our proposed method, which aims to build a general tokenizer, differs in purpose from the mentioned approaches. Considering the differences in setting—such as ONE-Piece[1] being designed for NMT tasks that use two corresponding languages to generate a tokenizer, and AT[2] focusing on expanding an existing vocabulary to new domains—these methods are not directly comparable to ours.
We will further discuss the mentioned references in the final version.
Thank you once again for your time and efforts in reviewing our paper and for raising your score.
[1] Should we find another model?: Improving Neural Machine Translation Performance with ONE-Piece Tokenization Method without Model Modification. (Park et al., NAACL 2021)
[2] Efficient domain adaptation of language models via adaptive tokenization[J]. arXiv preprint arXiv:2109.07460, 2021. | Summary: This paper proposes a method to learn the tokenizer of a language model as part of language model training. The method works by combining a compression loss (which is also used by classical tokenization algorithms) with a language modeling loss, and iteratively removing tokens that contribute the least to the combined loss. The authors show that this method leads to improved performance on several downstream tasks compared to commonly-used tokenizers and also conduct an extensive quantitative analysis of various design choices (e.g., vocabulary size, corpus size, initial vocabulary size).
Strengths: The authors take a fresh look at tokenization and come up with a novel method that is simpler than many alternative approaches while still resulting in clear performance improvements. The presentation of the method is clear and easy to follow. The experiments are quite extensive and include analyses into the different components of the method. The results show systematic performance gains compared to the standard tokenizers. Overall, I think that this is a valuable contribution to the emerging field of tokenization research.
Weaknesses: There are a few places where decisions of the authors seemed questionable to me and/or where I would have liked to see more details:
- The authors make the assumption that an individual token $x$ contributes to the overall language modeling loss only via the cross-entropy loss in places where $x$ is the to-be-predicted token. However, this ignores the impact that $x$ has on the language modeling loss as part of the left context (i.e., when $x$ is among the tokens processed before predicting the next token). I understand that this might be a necessary simplification to make the method feasible, but I would have expected a more in-depth discussion of this limitation and its potential repercussions.
- Based on the details provided in the paper, the setup of evaluating performance in terms of perplexity seems to be flawed: if I understand correctly, the adaptive tokenizer is trained using the token losses on the Pile, and the Pile is then again used to evaluate the different methods in terms of perplexity. I think this gives an unfair advantage to the adaptive method. The authors should either (i) evaluate perplexity on a different dataset not used for training or (ii) refrain from using perplexity as an evaluation measure.
- I would have liked to see a more in-depth analysis of the compute costs of the adaptive tokenizer compared to the standard tokenizers.
Technical Quality: 3
Clarity: 4
Questions for Authors: These are mostly comments and smaller points:
- Citation [8] should be NeurIPS 2023, not 2024.
- An important paper not cited is [Nawrot et al. (2023)](https://aclanthology.org/2023.acl-long.353/).
- 84: "Neurall" -> "Neural"
- 86: "Stringent" -> "stringent"
- Figure 1: "Troditional" -> "Traditional"
- 194: "We" -> "we"
- 226: How did you test for significance? Please mention the statistical test and exact results or else rephrase (e.g., "substantial").
- 195: How exactly do you conduct these evaluations? Are they zero-shot? Please provide more details.
- 235: "examines" -> "examine"
- 244: "trianing" -> "training"
- 4.6.2: Why did you not test above 150K tokens?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors say that limitations are discussed in the "Experimental Setup" subsection, but I could not find any discussion of the limitations there.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive feedback on our work and your insightful comments.
**Q1. This ignores the impact that token $x$ has on the language modeling loss as part of the left context. I would have expected a more in-depth discussion.**
Thanks for your insightful feedback. In response to your comments, we conducted additional experiments to examine the impact of token $x$ when it appears as part of the left context of a prediction. We allocated each predicted token's loss to the tokens on its left based on attention weights. To manage computational costs, we implemented a lookback window that only considers the immediately preceding token for loss updates.
These experiments were carried out on a model with 70M parameters, and the results are detailed in the table below. The findings indicate that incorporating this "lookback loss" has a minimal impact on model performance. This suggests that while the contextual influence of $x$ is indeed present, its effect is sufficiently captured under our current modeling framework without necessitating significant computational overhead.
| | Unigram | ADAT | ADAT+Win |
| ----- | ---- | ------ | ------ |
| Avg. | 42.22 | 44.51 | 44.38 |
**Q2. PPL evaluation method may unfairly advantage adaptive tokenizer.**
Thank you for your valuable comments regarding our evaluation methodology. We would like to provide clarifications on two key points you raised:
Firstly, as stated in line 193 of our manuscript, the perplexity (ppl) evaluation metric was conducted on the **PG19 dataset, not the Pile dataset**. This distinction ensures that our training and testing were performed on separate datasets, effectively mitigating any potential issues of data leakage.
Secondly, to comprehensively assess the effectiveness of our method, we reported the accuracy (ACC) of our model across nine different datasets. These results robustly validate the performance of our approach.
In the final version of our paper, we will clarify these evaluation details more explicitly to prevent any possible misunderstandings.
**Q3. I would have liked to see a more in-depth analysis of the compute costs.**
In the table below, we outline the computational costs for standard tokenizers, the proposed ADAT, and the training phases of an LLM. Although ADAT introduces additional computational expenses, these costs are marginal compared to the significant resources required for LLM training.
We analyzed the empirical runtime introduced by ADAT. The tokenizer optimization involves 5 epochs, where each epoch consists of training the LLM on a 0.3B corpus, followed by inference on a 0.1B corpus, and concludes with a vocabulary pruning step (90 seconds for a 100K-token vocabulary). Therefore, the total computational cost of the tokenizer optimization process is calculated as,
$$5 \times (0.3B \text{ training} + 0.1B \text{ inference}+\text{ pruning time }) = 1.5B \text{ training} + 0.5B \text{ inference} + 450s$$
ADAT introduces an additional training cost of 1.5B tokens and an inference cost of 0.5B tokens, along with minimal vocabulary pruning time. Compared to the hundreds of billions or even trillions of tokens required for LLM training, these computational costs are negligible.
To measure runtime, we used 8 NVIDIA A100 GPUs, an Intel 8378A CPU, and PyTorch 2.1.2 with CUDA 12.1. As illustrated in the table below, the Unigram tokenizer consumes 0.33 CPU hours, whereas ADAT requires around 2 GPU hours. The LLM training phase demands significantly more resources, exceeding 500 GPU hours [1]. This comparison underscores that the additional computational expense introduced by ADAT is relatively insignificant, further justifying its use given its potential benefits.
| |Unigram| ADAT-70M|Pythia-70M|
| ------------ | ------ | -------- | ---------- |
| Runtime | 0.33 CPUh | 2 GPUh | 510 GPUh |
[1] Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling. S Biderman.
**Q4. References.**
We will include and discuss the reference [2] in the final version.
[2] Nawrot P, Chorowski J, Lancucki A, et al. Efficient Transformers with Dynamic Token Pooling
**Q5. Lin226. How did you test for significance? Please mention the statistical test and exact results or else rephrase (e.g., "substantial").**
Sorry for the misleading wording. We use the word "significant" merely to describe the degree of improvement. We will replace it with "substantial" or "considerable."
**Q6. How exactly do you conduct these evaluations?**
We apply five-shot evaluations in our experiments; we will explicitly clarify this in the final version.
**Q7. 4.6.2 Why did you not test above 150K tokens?**
As shown in Table 2 of the manuscript, increasing the initial vocabulary size up to 150K markedly enhances performance. However, when it increases to **200K**, the score is **44.56**, which is comparable to the score of 44.51 for 150K, indicating that the 150K vocabulary likely already includes most of the potential final candidate tokens. Therefore, further increasing the initial vocabulary size will not bring additional benefits.
**Q8. Typos.**
Thanks for your suggestions and corrections. We will correct all typos and revise the year information regarding citation [8] in the manuscript.
**Q9. Discussion of Limitations.**
Please refer to General Response.
If there is still anything unclear, please feel free to let us know during the rebuttal window.
---
Rebuttal Comment 1.1:
Title: Response to the Authors' Rebuttal
Comment: I thank the authors for these helpful explanations. I have raised my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are grateful for your recognition and for increasing your score. Thank you so much! | Summary: Language model vocabularies are typically learned using frequency-based objectives. This objective does not entirely align with the language modeling objective (the task for which these vocabularies are ultimately used) and may therefore cause a bottleneck in language model performance. This paper proposes a new tokenization method that addresses this issue. Specifically, the vocabulary is optimized using an objective that incorporates both frequency-based and language-modeling losses. The method is relatively simple: use the Unigram LM tokenization method albeit with an altered loss function (i.e., not just unigram negative log-likelihood but also the cross entropy under a language model trained using the vocabulary under consideration). The two loss terms can be balanced using different functional forms, and loss terms from previous iterations can be incorporated. Empirically, the authors find that their approach leads to better model performance across a variety of tasks. They perform ablations on various design choices.
Strengths: * This work takes a step towards more end-to-end learning of language models, exploring perhaps one of the last remaining components in language modeling that has not been well optimized yet.
* The proposed method is a simple extension of a widely used algorithm, which could make its adoption easier.
* Empirical results show how sensitive models are to tokenization, which is something that is perhaps underestimated and is an important point that the community should be aware of.
* Even though the method is rather computationally expensive, it appears that using smaller models in the computation of the CE component of the tokenization loss is also effective. Training these smaller LMs can be considered a drop in the bucket compared to the total amount of computation used for training larger language models, although it's unclear from the experiments how much smaller the tokenization LM can be than the final LM while still leading to good performance.
Weaknesses: * Ultimately, the algorithm may not be practically feasible, given that it requires training an LM at each iteration of vocabulary pruning. This is a very important aspect that the authors do not discuss, either in terms of runtime analysis or empirical runtimes.
* The writing is fairly poor and imprecise. Comments such as "maintaining the stability of the vocabulary is crucial to ensure consistent learning outcomes" (line 159) are very vague. What is the "stability of the vocabulary"? What does it mean to have "consistent learning outcomes"? There are numerous other examples such as this and collectively, they will leave readers confused or perhaps even misinformed.
* Stylistic points: There is a lot of redundancy between the introduction and section 2.1 that can be eliminated.
* There are a huge number of spelling errors. Please run the manuscript through a spell checker!
Technical Quality: 3
Clarity: 2
Questions for Authors: * In equation 1, the notation suggests that you’re optimizing the dataset. I’m guessing this is not the case though. Could you clarify?
* Was switching between V and D to denote the vocabulary and D and T to denote the dataset an intentional choice?
* How much more computation does ADAT take? Unclear… computational complexity should be discussed
* Do you have insights about why the 410M vocab doesn't lead to big improvements for the smaller model?
* The jump from 70M to 410M parameters isn’t huge. Do you think a 70M model could be used in ADAT for finding the vocabulary of a much larger model? This is perhaps the only way I see the algorithm being scalable.
* Smaller points: The relationship between inference speed and sequence length (discussed first in section 3) should be made explicit; I believe lines 121-22 are a misstatement. Should either be least -> most or increasing -> decreasing
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are generally not discussed. The most important points that the authors need to address are the changes to the computational complexity of the tokenization step and the fact that only English is explored empirically.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading of our paper and valuable comments.
**Q1. The algorithm's repeated LM training and unaddressed runtime issues may limit practicality.**
Our algorithm is feasible in practice, because the optimization phase of the tokenizer only requires a minimal amount of data (0.3B), compared to the full training of the Large Language Model (LLM), resulting in only slight computational overhead.
We analyzed the empirical runtime introduced by ADAT. To measure runtime, we used 8 NVIDIA A100 GPUs, an Intel 8378A CPU, and PyTorch 2.1.2 with CUDA 12.1. The tokenizer optimization involves 5 epochs, where each epoch consists of training the LLM on a 0.3B corpus, followed by inference on a 0.1B corpus, and concludes with a vocabulary pruning step (90 seconds for a 100K-token vocabulary). Therefore, the total computational cost of the tokenizer optimization process is calculated as,
$$5 \times (0.3B \text{ training} + 0.1B \text{ inference}+\text{ pruning time }) = 1.5B \text{ training} + 0.5B \text{ inference} + 450s$$
In contrast, the full-scale training of the LLM incurs significantly higher computational costs. For instance, when training models with a 16B and a 60B corpus, the tokenizer optimization accounts for only 4.17% and 1.04% of the total training time, respectively. Training the Pythia-70M model on the full Pile dataset takes 510 GPU hours [1], which exceeds the tokenizer optimization's computational cost by over **255 times**. Therefore, the additional computational cost introduced by our method is minimal, making it feasible in practice.
| | Tokenizer Optimization | Training on 16B | Training on 60B | Pythia Report |
| ------- | ---------------------- | --------------- | --------------- | ------------- |
| RunTime | 2 GPUh | 48 GPUh | 192 GPUh | 510 GPUh |
[1] Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling. S Biderman.
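The percentages and the 255-fold figure quoted above follow directly from the runtimes in the table; a trivial arithmetic check:

```python
tokenizer_gpuh = 2.0       # ADAT tokenizer optimization
train_16b_gpuh = 48.0      # LLM training on a 16B corpus
train_60b_gpuh = 192.0     # LLM training on a 60B corpus
pythia_gpuh = 510.0        # Pythia-70M full-Pile training [1]

share_16b = 100 * tokenizer_gpuh / train_16b_gpuh   # ~4.17 % of total training
share_60b = 100 * tokenizer_gpuh / train_60b_gpuh   # ~1.04 % of total training
ratio_pythia = pythia_gpuh / tokenizer_gpuh         # 255x the tokenizer cost
```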
**Q2. Writing lacks clarity.**
We appreciate your feedback regarding the readability of our paper. In the revised manuscript, we will ensure more thorough proofreading and polishing to improve the overall quality and precision of the writing. Specifically, we clarify as follows,
\- **Vocabulary stability** refers to the consistency of each token's score relative to the previous round during each iteration of vocabulary pruning. To mitigate significant fluctuations, we introduced a momentum mechanism to smooth these changes and ensure score stability.
\- **Consistent Learning Outcomes** refers to achieving a stable and reliable resulting vocabulary given the same data and strategies, ensuring that the outcomes do not significantly vary due to different conditions or random factors.
**Q3. Redundancy between the introduction and section 2.1.**
Thank you for your suggestions. We will revise Section 2.1 of the related work to eliminate redundancy.
**Q4. Why doesn't the 410M vocab lead to big improvements for the smaller model?**
This phenomenon may occur because larger models with more parameters are better equipped to capture complex token relationships. Thus, vocabularies from larger models depend on higher model capability, leading to smaller performance improvements when applied to smaller models. In contrast, vocabularies generated by smaller models are less dependent on extensive model capabilities and thus, when applied to larger models, are able to sustain better performance gains.
**Q5. Do you think a 70M model could be used in ADAT for finding the vocabulary of a much larger model?**
Thank you for this insightful question. Yes, I believe it is a feasible way to enhance scalability. We add experimental results of 70M-ADAT on a larger model (1B) in the table below. As the table shows, the 70M-ADAT also improves performance compared to Unigram. This demonstrates that the vocabulary found by 70M-ADAT exhibits good generalizability across different model sizes.
Based on our analysis of compute costs in response to Q1, directly scaling ADAT to a large model is within an acceptable cost range compared to training a large model.
| 1B model with Unigram | 1B model with 70M-ADAT |
| --------------------- | ---------------------- |
| 48.63 | 49.68 |
**Q6. Only English is explored empirically.**
To verify the effectiveness of ADAT in other languages, we further assess our method using a **mixed corpus of Chinese and English**. We evaluate performance on the same English benchmarks as in the manuscript and, for Chinese, on the FewCLUE benchmark [2] via the OpenCompass evaluation suite. The results shown in the table below indicate that ADAT yields a 2.13 increase on the Chinese benchmark and a 2.28 increase on the English benchmarks, demonstrating its effectiveness in other languages.
| benchmark | English | Chinese |
| --------- | ------- | ------- |
| Unigram | 37.75 | 44.75 |
| ADAT | 40.03 | 46.88 |
[2] FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark
**Q7. Smaller points and misstatements.**
We appreciate your attention to detail, and we will carefully examine the manuscript and correct all spelling errors and misstatements.
- Equation 1: The vocabulary $V$ will be optimized using Equation 1, which will be rewritten as $ \min_V \textbf{Length}(D_o,V)- \lambda \textbf{Acc}(D,M,V),\ \text{s.t.}\ \vert V\vert \leq N $ in the final version.
- Notation of vocabulary and dataset: We will use consistent notation.
- Inference Speed and Sequence Length: For a given text, longer token sequences extend inference time and reduce inference speed due to increased computational demands. We will include clearer expressions in the final version.
- Line 121: We will change "least" to "most".
**Q8. Discussion of Limitations.**
Please refer to General Response.
We hope we have adequately addressed your concerns. If there is still anything unclear, please feel free to let us know during the rebuttal window.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I am raising my score slightly.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We sincerely appreciate your recognition and thank you very much for raising your score! | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' time and the valuable feedback they have provided for our paper. These constructive comments have been instrumental in enhancing the quality of our work. Here is one common concern we would like to address:
**General Response: Discussion of Limitations.**
We apologize for not clearly and separately discussing the limitations of our method in the manuscript. We will add the following discussion of limitations to the beginning of the Conclusion section as a separate part in the final version.
The adaptive nature of our proposed tokenizer method introduces variations in tokenizers across different model sizes, leading to inconsistent vocabularies. This inconsistency complicates tasks such as knowledge distillation and speculative decoding, which rely on the assumption of a uniform vocabulary across both smaller and larger models. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multidimensional Fractional Programming for Normalized Cuts | Accept (poster) | Summary: The paper presents a new fractional programming based algorithm for the multi-cluster normalized cut objective which exploits a multidimensional quadratic transform.
The paper starts by introducing the Ncut problem. Then it discusses previous fractional programming methods such as Dinkelbach's method for the single-ratio case, and the quadratic transform that can be used to deal with the sum-of-ratios problem. It is then shown that the quadratic transform carries over to the matrix ratio case. The proposed method FPC works by applying the multidimensional quadratic form to the Ncut objective. They then show that the objective as well as the original Ncut objective are nondecreasing after each iteration of their method FPC.
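As background for Dinkelbach's method mentioned above, its single-ratio iteration can be sketched in a few lines; the objective below is a made-up toy example (not from the paper), solved on a dense grid so each subproblem is exact:

```python
import numpy as np

# Toy single-ratio problem: maximize g(x)/f(x) over [-3, 3], with f(x) > 0.
x = np.linspace(-3.0, 3.0, 10001)
g = x + 2.0
f = x**2 + 1.0

lam = 0.0
for _ in range(30):
    i_best = np.argmax(g - lam * f)   # Dinkelbach subproblem: argmax g(x) - lam*f(x)
    lam = g[i_best] / f[i_best]       # update the ratio estimate

# lam converges to max_x g(x)/f(x), which for this toy objective
# is attained at x = sqrt(5) - 2 with value (sqrt(5) + 2)/2.
```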
In the experiments the approach is compared to standard spectral clustering as well as the recently proposed methods FINC and FCD. The proposed method achieves the lowest Ncut value for all datasets, when doing random initializations or refining an existing spectral clustering solution, as well as competitive results when evaluating with other performance metrics. In terms of time consumption, the method runs in the same order of magnitude as standard spectral clustering and faster than the FINC method, while being slower than the FCD method.
Strengths: The paper is well-written and technically sound. It proposes a new method for the Ncut objective which has been shown to yield competitive results on several small- to medium-size benchmark datasets. The main contribution of the paper is the application of the multi-dimensional quadratic form to the Ncut objective.
Weaknesses: The used datasets are relatively small, the paper could be made stronger by showing that the method also scales to some larger datasets. Moreover, the connection to the MM theory is mentioned in the last section as a conclusion without being mentioned before. It only gets discussed in the appendix for the first time. The main results should be at least mentioned before in the main body of the text.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is Prop. 3 novel? Could you please comment on the differences to Theorem 2 in [5]?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors mention in the conclusion that the analysis of the convergence speed of the method could still be extended.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for appreciating the presentation and the technical contributions of this work. We would like to address your concerns and questions in the following.
1. **Weakness:** Thanks for the constructive suggestion. We have added two larger datasets: letter recognition (with 20,000 samples) and MNIST subset (with 30,000 samples). The new experimental results are summarized in Table 1 and Table 2 in the attached one-page PDF. Observe that the proposed FPC algorithm still outperforms the benchmarks significantly on these new datasets. Also thanks for the advice on paper organization. We would move the connection to the MM theory to the main body of the text after the rebuttal session finishes.
2. **Questions:** Yes, Prop. 3 is novel. Prop. 3 deals with the matrix ratio
$$
\boldsymbol B^{-1}\boldsymbol A\in\mathbb R^{m\times m},
$$
while Theorem 2 in [5] deals with the scalar ratio
$$
\boldsymbol a^\top\boldsymbol B^{-1} \boldsymbol a \in\mathbb R.
$$
Furthermore, when the ratio is nested in $\mathrm{Tr}(\cdot)$, we rewrite the scalar ratio problem as the matrix ratio case:
$$
\mathrm{Tr}(\boldsymbol a^\top\boldsymbol B^{-1} \boldsymbol a)=\mathrm{Tr}(\boldsymbol B^{-1} \boldsymbol a\boldsymbol a^\top)=\mathrm{Tr}(\boldsymbol B^{-1} \boldsymbol A),\quad\text{where}\quad \boldsymbol A = \boldsymbol a\boldsymbol a^\top.
$$
However, the reverse is not true since $\boldsymbol A$ may not be factored as $\boldsymbol a\boldsymbol a^\top$. Thus, Prop. 3 encompasses Theorem 2 in [5] as a special case.
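The trace rewriting used above is a standard identity; a quick NumPy sanity check with random data (purely illustrative) confirms both the identity and that a generic $\boldsymbol A$ need not factor as $\boldsymbol a\boldsymbol a^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
a = rng.standard_normal((n, 1))        # column vector a
B = rng.standard_normal((n, n))
B = B @ B.T + n * np.eye(n)            # symmetric positive definite B
B_inv = np.linalg.inv(B)

scalar_ratio = np.trace(a.T @ B_inv @ a)    # Tr(a^T B^{-1} a), a 1x1 "matrix"
matrix_ratio = np.trace(B_inv @ (a @ a.T))  # Tr(B^{-1} A) with A = a a^T

# The reverse direction fails: a generic PSD matrix A has rank > 1,
# so it cannot be written as a a^T for any single vector a.
A = rng.standard_normal((n, n))
A = A @ A.T
rank_A = int(np.linalg.matrix_rank(A))
```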
3. **Limitations:** We have completed the convergence rate analysis of the proposed FPC algorithm. It can be shown that
\begin{align}
|f(x^*)-f(x^{(1)})| \le \frac{LR^3}{6},
\end{align}
\begin{align}
|f(x^*)-f(x^{(k)})| \le \frac{2\Lambda R^2+2LR^3/3}{k+3},\quad\text{for}\quad k\ge 2,
\end{align}
where $f(\cdot)$ is the optimization objective value, $x^*$ is the converged solution,
$k$ is the iterate index, $R$ is the Euclidean distance from the starting point to $x^*$, $L$ is the Lipschitz constant of $\nabla^2 f(x)$, $\Lambda$ is the maximum eigenvalue of $\nabla^2 f(x)$, and $x^{(k)}$ is the solution after $k$ iterates. We remark that the convergence rate analysis is highly nontrivial here because the NCut problem is nonconvex and incurs discrete constraints. The above analysis will be added to the paper as a major theoretic contribution.
---
Rebuttal Comment 1.1:
Title: Reponse to Rebuttal
Comment: In the rebuttal, the authors address my concerns by adding results on two larger datasets. Moreover, they discuss the difference between Prop.3 and Theorem 2 in [5]. Finally, they add a discussion of the convergence rate analysis of the proposed method.
Thank you for providing your response and additional clarifications. | Summary: This manuscript deals with the **Normalized cut (NCut)** problem by proposing a new algorithm called **fractional programming-based clustering** (FPC). The main idea of FPC is rewriting the original NCut problem into an equivalent one by using a so-called **Multidimensional quadratic transform**. Then, the clustering result is obtained by an iterative step that is guaranteed to monotonically increase the objective of the NCut problem until it converges. Experiments demonstrate that the proposed algorithm outperforms the baselines with a higher objective value on multiple datasets.
Strengths: I think the strength of this paper mainly falls into the following aspects:
**Novelty**:
- Rewriting the NCut problem by the Multidimensional quadratic transform
- A critical step (Eq. (23) in the manuscript) is proposed so that the clustering result $\boldsymbol{X}$ can be efficiently solved
- The designed FPC finds the clustering iteratively in a way that monotonically increases the objective of NCut
**Experimental result**
- The proposed FPC outperforms other baselines with a higher objective value of NCut obtained on multiple real datasets
- It also gives a better clustering result in terms of metrics such as accuracy and normalized mutual information
- The runtime of FPC seems to be similar to the baselines
**Clarity**
- The writing of this manuscript is clear and easy to follow
- The proofs are complete and support the proposed arguments
Overall, I like the idea of this manuscript and I think the novelty is sufficient for NeurIPS. The proposed algorithm should be a good supplement to the field of graph cut that may inspire further research in the same direction, as long as several concerns (see the weaknesses below) can be adequately addressed.
Weaknesses: **(1) Lack of analysis and experiments on the convergence of FPC**
Since it is guaranteed that the objective value of NCut is monotonically increasing during the iterations, it is very important to understand what the convergence of FPC looks like. For example, I would like to know **if the objective value could get stuck at some sub-optimal point?** And if yes, **when does this happen in practice?** Without any study (either theoretical or empirical) on the convergence, it is hard to tell if the algorithm has reached its limit or not. So I would suggest that the authors at least show some curves of the objective values during the iterations.
**(2) How far to the global optimum?**
Even though the objective values obtained from FPC are higher than the other baselines' in the experiments, it is still unknown how important such improvements are relative to the global optimum. So I would be curious about some cases (could be small and artificial) where the global optimum is known, to see how far the result of FPC is from the global optimum. So far, the objective value by itself is not that meaningful to me and may not be able to fully showcase the strength of the algorithm.
Technical Quality: 3
Clarity: 3
Questions for Authors: My questions are mainly described in the Weaknesses section.
Besides, I wonder if the authors could conduct some experiments that compare the performance of the algorithm (especially on the aspect of convergence) on clustering scenarios with **difficulties from easy to hard**, e.g., simply construct a Gaussian mixture with two clusters and gradually adjust their center distance from distant to close, and see, during the transition, whether the convergence becomes more challenging. This could be helpful to understand the limits of the algorithm.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for the very positive comments on this work. Also thank you so much for providing many constructive suggestions. We have added the convergence rate analysis and many new experiments according to your suggestions, as specified in what follows.
1. **Weakness One:** We would edit both the theoretical and the experimental aspects of this paper significantly to relieve your concern. On the theory side, we accomplish the convergence rate analysis of the proposed FPC algorithm. It can be shown that
\begin{align}
|f(x^*)-f(x^{(1)})| \le \frac{LR^3}{6},
\end{align}
\begin{align}
|f(x^*)-f(x^{(k)})| \le \frac{2\Lambda R^2+2LR^3/3}{k+3},\quad\text{for}\quad k\ge 2
\end{align}
where $f(\cdot)$ is the optimization objective value, $x^*$ is the converged solution,
$k$ is the iterate index, $R$ is the Euclidean distance from the starting point to $x^*$, $L$ is the Lipschitz constant of $\nabla^2 f(x)$, $\Lambda$ is the maximum eigenvalue of $\nabla^2 f(x)$, and $x^{(k)}$ is the solution after $k$ iterates. We remark that the convergence rate analysis is highly nontrivial here because the NCut problem is nonconvex and incurs discrete constraints. On the experiment side, we add new numerical results on the convergence behavior of the proposed algorithm. In the one-page PDF, Figure 1 shows how fast the FPC algorithm converges; observe that FPC attains convergence after merely 3 iterates. Moreover, Figure 2 shows the local optimum issue of FPC. When two cluster centers are far apart, FPC can always achieve the global optimum; but when the cluster centers get closer so that the NCut problem becomes more difficult (as advised by the reviewer), then FPC may get stuck at a local optimum. In practice, we may reduce the risk of local optimum trapping by trying out various starting points.
2. **Weakness Two:** We have incorporated this excellent advice into our work. Specifically, we find the global optimum for those small-size dataset via exhaustive search, and use it as the benchmark to compare with the proposed FPC algorithm. The new results are shown in Figure 1 in the attached one-page PDF. The figure shows that the FPC attains the global optimum.
3. **Questions:** Thank you for the nice advice, which has been fully implemented. As the reviewer suggests, we try out the FPC algorithm on the different clustering scenarios with the distance between two cluster centers being gradually reduced, and consequently the clustering problem becomes increasingly difficult. This new experiment is shown in Figure 2 in the attached one-page PDF, which will be incorporated into the paper after the rebuttal session finishes.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank the authors for the detailed explanations, and most of my concerns and questions have been addressed in the reponse. Specifically, the toy example on a two-Gaussian Mixture clearly demonstrates the performance of the proposed algorithm, and also reveals the potential issue of being trapped at local optimums. It would be a good complement to the current analysis of this manuscript.
Overall, I would keep my rating and thanks again for the response. | Summary: The paper addresses the challenge of the Normalized Cut (NCut) problem in unsupervised clustering.
Conventional fractional programming (FP) techniques, especially Dinkelbach’s transform, are inadequate as they only handle single ratios and are limited to two-class clustering.
This paper extends the quadratic transform to multidimensional ratios, converting the fractional 0-1 NCut problem into a solvable problem.
The authors also show the convergence of their proposed multidimensional FP method using minorization-maximization theory.
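For reference, the scalar quadratic transform that the paper generalizes states that for $g, f > 0$, $\frac{g}{f}=\max_{\beta}\left(2\beta\sqrt{g}-\beta^{2}f\right)$ with maximizer $\beta^{*}=\sqrt{g}/f$; a brute-force numerical check with arbitrary positive values is sketched below:

```python
import numpy as np

g, f = 3.0, 2.0                                # arbitrary positive values
beta = np.linspace(0.0, 5.0, 200001)           # dense grid over beta
vals = 2.0 * beta * np.sqrt(g) - beta**2 * f   # the quadratic-transform surrogate

beta_star = beta[np.argmax(vals)]
# The maximizer should be sqrt(g)/f and the maximum should recover g/f.
```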
Strengths: 1. The proposed method extends the quadratic transform to handle multiple ratios, enabling it to address multi-class classification problems.
2. The proposed method converts the complex NCut problem into a more tractable form, solving it iteratively as a manageable subproblem.
3. The algorithm's performance is validated on multiple datasets, demonstrating superior results compared to existing methods.
Weaknesses: 1. **Application Reformulation**: Reformulating the NCut problem into a multiple ratio problem is not new and has been previously demonstrated in the literature.
2. **Multiple Ratio Fractional Optimization**: Solving multiple ratio fractional optimization using the Quadratic Transform is a standard practice. The convergence of the proposed method follows directly from existing minorization-maximization techniques. The proposed algorithm and its associated theoretical analysis are relatively incremental.
Overall, the problem formulation and optimization algorithm in this paper have limited novelties.
Technical Quality: 2
Clarity: 2
Questions for Authors: What are the main novelties of this paper?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the strengths of this paper. We would like to focus on the "Weakness" and "Questions" parts in the following.
1. **Weakness One:** Actually, the paper never claims that the reformulation of the NCut problem as a multiple-ratio problem is a contribution or novelty. The real contribution lies in how to address the 0-1 NCut problem and the corresponding performance analysis, as opposed to the previous works that simply drop the discrete constraint in a heuristic fashion.
2. **Weakness Two:** It is true that our method is more or less connected to the existing fractional programming theory. But we wish to clarify that (almost) all the optimization algorithms in the AI field are based on the existing optimization methods/theories, e.g., the well-known Adam optimizer is in essence a momentum-aided gradient method---which has been extensively studied in the optimization community before Adam was proposed. Moreover, we wish to highlight the fundamental difference between our work and the existing literature. There are several highly nontrivial new results/insights that are by no means incremental improvements, e.g., the comparison between the scalar-ratio method and the matrix-ratio method, the new quadratic transform method tailored to the NCut problem, and the convergence rate analysis.
3. **Conclusion of Weakness:** Again, first of all, we never ever claim the problem formulation as a novelty. This work is aimed at solving a notoriously difficult long-standing problem from a novel perspective. Secondly, this work is much more than just applying the standard quadratic transform to the NCut problem; we have provided many highly nontrivial new results/insights.
4. **Questions:** The main novelties of this paper can be recognized in the following three respects:
- *New Method:* Our main theoretic contribution is stated in Proposition 3. We clarify that it is fundamentally different from the existing quadratic transform in Theorem 2 in [5]. Actually, the conventional quadratic transform in the literature does not even work for the NCut problem. The new fractional programming method considerably generalizes the existing one to account for a wider range of multiple-ratio problems.
- *New Insight:* There are two types of quadratic transform: the scalar case and the matrix case. In the literature, the choice of quadratic transform simply depends on the original form of the ratios contained in the problem, i.e., the scalar (resp. matrix) quadratic transform is employed if the ratios are scalars (resp. matrices). However, this paper shows that, even though the ratios in the NCut problem are scalar, it is better to rewrite the scalar ratios in the matrix form and thereby apply the matrix quadratic transform, otherwise the discrete constraint is difficult to tackle. This is an interesting nontrivial insight.
- *New Analysis:* Although the convergence behavior of the quadratic transform has been studied extensively in the literature, most of the prior works only discuss the conditions under which the quadratic transform based iterative algorithm can guarantee convergence. To the best of our knowledge, it remains a mystery as to how fast the quadratic transform method converges. As a newly added result, we show that
\begin{align}
|f(x^*)-f(x^{(1)})| \le \frac{LR^3}{6},
\end{align}
\begin{align}
|f(x^*)-f(x^{(k)})| \le \frac{2\Lambda R^2+2LR^3/3}{k+3},\quad\text{for}\quad k\ge 2,
\end{align}
where $f(\cdot)$ is the optimization objective value, $x^*$ is the converged solution,
$k$ is the iterate index, $R$ is the Euclidean distance from the starting point to $x^*$, $L$ is the Lipschitz constant of $\nabla^2 f(x)$, $\Lambda$ is the maximum eigenvalue of $\nabla^2 f(x)$,
and $x^{(k)}$ is the solution after $k$ iterates. We emphasize that the convergence rate analysis is highly nontrivial due to the nonconvexity and the discrete constraint of the NCut problem.
---
Rebuttal Comment 1.1:
Comment: 1. First, as the authors acknowledge, this paper does not claim the problem formulation as a novel contribution. Now, let's discuss the multiple ratio fractional problem, specifically Problem (7) in the paper.
2. Using the quadratic transform $\frac{g(X)}{f(X)} = \max_{\beta} \left(2 \beta \sqrt{g(X)} - \beta^2 f(X)\right)$, the authors transform the original multiple-ratio problem $\max_{X} \sum_{i} \frac{g_i(X)}{f_i(X)}$ into an optimization problem involving two blocks:
$~~~~~~\max_{X} \max_{\beta} \sum_{i} \left(2 \beta_i \sqrt{g_i(X)} - \beta_i^2 f_i(X)\right), X \in \Omega.$
$\quad$ Here, $\Omega$ is the constraint set in (7b) and (7c) in the paper.
$\quad$ This is a very standard practice in solving multiple ratio fractional optimization.
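For concreteness, the scalar quadratic transform quoted above can be verified numerically. The sketch below is our own sanity check, not part of the paper or the review; the closed-form maximizer $\beta^* = \sqrt{g}/f$ is the standard one from the FP literature.

```python
import numpy as np

# Numerical check of the scalar quadratic transform identity:
#   g/f = max_beta (2*beta*sqrt(g) - beta^2 * f),  attained at beta* = sqrt(g)/f.
rng = np.random.default_rng(0)
g, f = rng.uniform(0.1, 5.0, size=2)

beta_star = np.sqrt(g) / f                       # closed-form maximizer
value_at_star = 2 * beta_star * np.sqrt(g) - beta_star**2 * f

assert np.isclose(value_at_star, g / f)

# Any other beta gives a value no larger (the objective is concave quadratic in beta).
for beta in np.linspace(0.0, 2 * beta_star, 11):
    assert 2 * beta * np.sqrt(g) - beta**2 * f <= value_at_star + 1e-12
```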
3. For the variable $\beta$, the authors use the standard quadratic transform update rule. For the variable $X$, since the constraints are non-convex and have a discrete structure, an additional term $-||X||_F^2$ can be added to the objective function to make it strongly concave with respect to $X$ (which is equivalent to adding a constant). Consequently, methods like the conditional gradient method or power method can be employed to maximize over $X$. By alternating between maximization over $X$ and $\beta$, the objective value can be shown to be monotonically increasing. The alternating maximization strategy is also a very standard approach [R2,R3].
4. Although the authors extend this to matrix methods in "Section 3.2 Multidimensional FP method," they do not discuss the specific motivation, practical applications, or experimental results. Moreover, matrix methods have already been explored in the literature [R3].
5. The authors reply that: "The new fractional programming method considerably generalizes the existing one to account for a wider range of multiple-ratio problems." This is incorrect. For problem (7), where both the numerator and denominator are quadratic and the constraints are straightforward, there are already established algorithms [R1,R2,R3]. The quadratic transform was originally proposed to handle the optimization of **multiple-ratio problems**, and naturally, problem (7) falls within its scope.
6. The theoretical contributions of this paper are quite limited. The authors only establish a sufficient descent property for the algorithm, which can be directly derived using the classical Majorization Minimization technique. Although the authors claim to have added new analysis to the paper, I do not find such results.
7. The proposed algorithm is merely another heuristic approach. While it converges to a fixed point, it lacks any theoretical guarantee of optimality. Furthermore, there is no intuitive or theoretical justification for why it leads to improved experimental results over existing methods.
[R1] XueGang Zhou and JiHui Yang. Global optimization for the sum of concave-convex ratios problem. Journal of Applied Mathematics, 2014.
[R2] Radu Ioan Bot, Minh N. Dao, Guoyin Li. Inertial Proximal Block Coordinate Method for a Class of Nonsmooth Sum-of-Ratios Optimization Problems. SIOPT 2023.
[R3] K. Shen and W. Yu, Fractional programming for communication systems–Part I: Power control and beamforming, IEEE Trans. Signal Process 2018.
---
Reply to Comment 1.1.1:
Title: Our response to new comments of Reviewer mC6e
Comment: Thanks so much for the further comments and for giving us this opportunity to clarify. To avoid any possible confusion, we now refer to our proposed method in Prop. 3 as MQT (matrix quadratic transform), the existing method in Theorem 1 in [R3] as SQT (scalar quadratic transform), and the existing method in Theorem 2 in [R3] as VQT (vector quadratic transform). We would like to answer your detailed comments in the following.
1. Thanks for accepting our previous argument.
2. Sorry but the method you describe here is NOT our method. The method you refer to is SQT, while our method is MQT. We will show that SQT does NOT work for the NCut problem later.
3. Again, we are not applying SQT to the NCut problem as the reviewer thinks; we are using MQT. The gradient method suggested by the reviewer is typically limited to the continuous optimization, which does not account for the discrete constraint directly, so it has to relax the discrete variables to be continuous and consequently its performance becomes unpredictable.
4. This is a big misunderstanding! We beg to differ. The new matrix method in Prop. 3, namely MQT, is the building block of this work. Now let us show why the existing standard methods, SQT and VQT, do not work for the NCut problem. Recall that the NCut problem is
\begin{equation}
\begin{aligned}
\underset{X}{\text{maximize}}&\quad \sum_{k=1}^K\frac{x_k^\top W x_k}{x_k^\top Dx_k}\\\\
\text{subject to}&\quad \sum_{k=1}^K X_{ik} = 1,\quad\forall i\\\\
&\quad X_{ik}\in\\{0,1\\}, \quad \forall i,\forall k,
\end{aligned}
\end{equation}
where $x_k=[X_{1k},X_{2k},\ldots,X_{Nk}]^\top$. If we apply SQT then the new problem is
\begin{equation}
\begin{aligned}
\underset{X,y_k\in\mathbb R}{\text{maximize}}&\quad \sum_{k=1}^K\left(2y_k\sqrt{x_k^\top Wx_k}-y^2_k{x_k^\top Dx_k}\right)\\\\
\text{subject to}&\quad \sum_{k=1}^K X_{ik} = 1,\quad\forall i\\\\
&\quad X_{ik}\in\\{0,1\\}, \quad \forall i,\forall k,
\end{aligned}
\end{equation}
However, the new problem is still a nonlinear integer program, so the optimization for $X_{ik}\in\\{0,1\\}$ remains quite difficult.
We further show that VQT in Theorem 2 in [R3] does not work for the NCut problem either. Recall that by the VQT, the vector ratio problem
$$\underset{X\in\mathcal X}{\text{maximize}}\quad\sum_{k=1}^K a_k^\top(X) B^{-1}_k(X) a_k(X)$$
with $a_k(X)\in\mathbb R^d$ and $B_k(X)\in\mathbb S^{d\times d}$
is recast to
$$\underset{X\in\mathcal X,y_k\in\mathbb R^d}{\text{maximize}}\quad\sum_{k=1}^K y_k^\top a_k(X)-y_k^\top B^{-1}_k(X)y_k.$$
For the NCut problem, each $x_k^\top Dx_k \in \mathbb{R}$ is treated as $B_k(X)$, so we end up with $d=1$ and $a_k(X) = \sqrt{x_k^\top W x_k}$. As a result, VQT leads to the same reformulation as SQT in the NCut problem case, so it also cannot render the integer variable $X_{ik}\in\\{0,1\\}$ easier to tackle.
In contrast, MQT aims at a more general matrix ratio problem
$$\underset{X\in\mathcal X}{\text{maximize}}\quad \sum_{k=1}^n\mathrm{tr}\left(B_k^{-1}(X)A_k(X)\right),$$
where $B_k(X)\in\mathbb S_{++}^{d\times d}$ and $A_k(X)\in\mathbb S_{+}^{d\times d}$, and recasts it to
$$\underset{X\in\mathcal X,Y_k\in\mathbb{R}^{\ell\times d}}{\text{maximize}}\quad\sum_{k=1}^n \mathrm{tr}\left(2Y_k[Z_k(X)]^\top-Y_k B_k(X)Y_k^\top\right),$$
where
$$A_k(X) = [Z_k(X)]^\top[Z_k(X)]\quad\text{for some}\quad Z_k(X)\in\mathbb R^{\ell\times d}$$
for the given positive integer $\ell\ge1$. Observe that MQT reduces to VQT when $\ell=1$.
In light of MQT, the NCut problem (1) is converted to
\begin{equation}
\begin{aligned}
\underset{X,y_k\in\mathbb{R}^{N}}{\text{maximize}}&\quad \sum_{k=1}^K \left(2y_k^\top{W}^{\frac12}x_k-y_k^\top y_k\delta^\top x_k\right)\\\\
\text{subject to}&\quad \sum_{k=1}^K X_{ik} = 1,\quad\forall i\\\\
&\quad X_{ik}\in\\{0,1\\}, \quad \forall i,\forall k,
\end{aligned}
\end{equation}
where $\delta=1^\top D$.
Now we arrive at a linear integer problem which can be immediately solved by the standard matching method.
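To illustrate why the reformulation above becomes easy: for fixed $y_k$, the objective is linear in $X$, and with only the row-sum constraints shown, the problem decouples across nodes, so each node simply picks the cluster with the largest linear coefficient (the matching method the authors mention would also handle additional constraints). The sketch below is our illustration, not the authors' code; the random matrix `C` is a placeholder for the coefficients $2(W^{1/2}y_k)_i-(y_k^\top y_k)\delta_i$.

```python
import numpy as np

# Placeholder linear coefficients for the integer program over X (hypothetical values).
rng = np.random.default_rng(1)
N, K = 6, 3
C = rng.normal(size=(N, K))          # C[i, k]: linear coefficient of X[i, k]

# Each row of X must contain exactly one 1, so per-row argmax is optimal.
assign = C.argmax(axis=1)
X = np.zeros((N, K), dtype=int)
X[np.arange(N), assign] = 1

assert (X.sum(axis=1) == 1).all()                    # row-sum constraints hold
assert np.isclose((X * C).sum(), C.max(axis=1).sum())  # objective equals sum of row maxima
```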
Due to the length limit, we would like to answer your questions 5 to 7 in a separate response that follows the current one.
---
Rebuttal 2:
Title: Our response to new comments of Reviewer mC6e (Cont.)
Comment: Sorry about the break. Now please allow us to continue to answer your questions 5 to 7.
5. Let us reiterate the distinctions between the various methods. SQT and VQT can be found in the existing literature [R1, R2, R3], while MQT is what we newly propose. We remark that:
- SQT considers the sum-of-scalar-ratios problem:
$$\text{maximize}\quad\sum_{i=1}^K \frac{A_i}{B_i},$$
where each $A_i\in \mathbb R$, each $B_i\in\mathbb R$, and each ratio
$$
\frac{A_i}{B_i}\in\mathbb R.
$$
- VQT considers a generalized problem:
$$\text{maximize}\quad\sum_{i=1}^K a^\top_i B^{-1}_i a_i$$
where each $a_i\in\mathbb R^{d}$, each $B_i\in\mathbb S^{d\times d}$, and each ratio
$$
a_i^\top B_i^{-1} a_i \in\mathbb R.
$$
- MQT considers a further generalized problem:
$$\text{maximize}\quad\sum_{i=1}^K \text{tr}(B^{-1}_i A_i),$$
where each $A_i\in\mathbb S^{d\times d}$, each $B_i\in\mathbb S^{d\times d}$, and each ratio
$$B_i^{-1}A_i \in\mathbb R^{d\times d}.$$
In particular, we assume that each $A_i$ can be factorized as
$$A_{i} = Z_i^\top Z_i \text{ for some } Z_i\in\mathbb R^{\ell \times d}.$$
Note that when the ratio is nested in $\mathrm{Tr}(\cdot)$, we can rewrite the VQT problem as the MQT problem:
$$\mathrm{Tr}( a_i^\top B_i^{-1} a_i)=\mathrm{Tr}( B_i^{-1} a_i a_i^\top)=\mathrm{Tr}( B_i^{-1} A_i),\quad\text{where}\quad A_i = a_i a_i^\top.$$
However, the converse is false since not every matrix $A_i$ can be factored as an outer product $a_i a^\top_i$. Thus, MQT is strictly more general than VQT, while VQT is strictly more general than SQT.
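A small numerical illustration (ours, with a generic positive semidefinite matrix) of the factorization assumption and of why MQT strictly generalizes VQT: any $A_i \succeq 0$ admits a factorization $A_i = Z_i^\top Z_i$, e.g. via its eigendecomposition, while only rank-one matrices can be written as a single outer product $a_i a_i^\top$.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = M.T @ M                           # PSD and, generically, full rank

# Factor A = Z^T Z from the eigendecomposition A = V diag(w) V^T.
w, V = np.linalg.eigh(A)
Z = np.sqrt(np.clip(w, 0, None))[:, None] * V.T

assert np.allclose(Z.T @ Z, A)        # the MQT factorization always exists
assert np.linalg.matrix_rank(A) > 1   # so A cannot be a rank-1 outer product a a^T
```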
6. The theoretical contributions of this paper are twofold, aside from the MM interpretation. First, as we repeatedly emphasize, the proposed MQT in Prop. 3 is a brand-new FP method, which strictly generalizes the existing SQT and VQT. Second, we analyze the convergence rate of MQT. Due to the length limit, we only provide a sketched proof of this new result in the following.
Denote by $x$ the vectorization of $X$. To ease analysis, we now write the NCut objective function as a function of $x$. Conditioned on $x'\in\mathcal X$, write the difference between the original NCut objective function $f(x)$ and the new objective function $h(x,y)$ obtained by MQT as a function of $x\in\mathcal X$:
$$\delta(x|x') = f(x) - h(x,\mathcal Y(x')),$$
where $\mathcal Y(x')$ refers to the optimal update of each $Y_k$ based on the current $x'$. Moreover, define the following quantity
$$\Lambda = \sup_{x\in\mathcal X} \lambda_{\max}\big(\nabla^2\delta(x|x')\big)=\sup_{x\in\mathcal X} \lambda_{\max}\big(\nabla^2f(x)\big),$$
where $\lambda_{\max}(\cdot)$ is the largest eigenvalue of the given matrix. With the iteration index denoted by $t$, we can bound the cubic Euclidean norm as
\begin{align*}
&\frac{L}{6}\|x- x^{t-1}\|^3_2\notag\\\\
&\ge \delta(x|x^{t-1})-\frac{\Lambda}{2}\|x-x^{t-1}\|^2_2\notag\\\\
&= f(x)-h(x,y^t)-\frac{\Lambda}{2}\|x-x^{t-1}\|^2_2\notag\\\\
&\overset{(a)}{\ge} f(x)-h(x^t,y^t)-\frac{\Lambda}{2}\| x- x^{t-1}\|^2_2\notag\\\\
&\overset{(b)}{\ge} f(x)-h( x^t, y^{t+1})-\frac{\Lambda}{2}\|x-x^{t-1}\|^2_2\notag\\\\
&= f(x)-f(x^t)-\frac{\Lambda}{2}\|x-x^{t-1}\|^2_2,
\end{align*}
where step $(a)$ follows since $x^t$ maximizes $h(x, y)$ for the current $y=y^t$, and step $(b)$ follows since $ y^{t+1}$ maximizes $h(x,y)$ for the current $x=x^t$. Furthermore, denote the gap in the objective value as
$$ v_t = f(x^*)-f(x^t),$$
which can be bounded from above as
$$v_t
\le (1-\pi) v_{t-1}+\frac{\pi^2\Lambda }{2}\| x^*- x^{t-1}\|^2_2+\frac{\pi^3L}{6}\| x^*- x^{t-1}\|^3_2$$
$$\quad\le (1-\pi) v_{t-1}+\pi^2\bigg(\frac{\Lambda R^2}{2}+\frac{LR^3}{6}\bigg),\tag{1}$$
where $R$ is the Euclidean distance from the starting point to $x^*$. When $t=1$, we let $\pi=1$ in (1) and obtain
$$v_1\le \frac{\Lambda R^2}{2}+\frac{LR^3}{6}.\tag{2}$$
When $t\ge2$, we let
$$\pi = \frac{v_{t-1}}{\Lambda R^2+LR^3/3}.$$
Then after some algebra, we ultimately obtain
\begin{align*}
\frac{1}{v_t}
&\ge \frac{1}{v_1} + \frac{t-1}{2\Lambda R^2+2LR^3/3}\notag\\\\
&\ge \frac{t+3}{2\Lambda R^2+2LR^3/3},
\end{align*}
where the second inequality follows by (2). The proof is then completed.
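For readers wishing to fill in the "after some algebra" step, here is one reconstruction (our derivation, not the authors'; write $S=\Lambda R^2+LR^3/3$ and note $\frac{\Lambda R^2}{2}+\frac{LR^3}{6}=\frac{S}{2}$). Substituting $\pi=v_{t-1}/S$ into (1) gives
\begin{align*}
v_t \le v_{t-1}-\frac{v_{t-1}^2}{S}+\frac{v_{t-1}^2}{S^2}\cdot\frac{S}{2}
= v_{t-1}-\frac{v_{t-1}^2}{2S},
\end{align*}
and hence, using $v_t\le v_{t-1}$,
\begin{align*}
\frac{1}{v_t}-\frac{1}{v_{t-1}}
= \frac{v_{t-1}-v_t}{v_t\, v_{t-1}}
\ge \frac{v_{t-1}^2/(2S)}{v_{t-1}^2}
= \frac{1}{2S}.
\end{align*}
Telescoping from $t=2$ yields $\frac{1}{v_t}\ge\frac{1}{v_1}+\frac{t-1}{2S}$, while (2) gives $\frac{1}{v_1}\ge\frac{2}{S}=\frac{4}{2S}$, so $\frac{1}{v_t}\ge\frac{t+3}{2S}$ with $2S=2\Lambda R^2+2LR^3/3$.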
7. Since the NCut problem is NP-complete as shown in Ref. [4], one has to resort to the branch-and-bound algorithm as in [R1] to guarantee optimality, but it has exponential complexity and thus is not suited for large datasets. The other reference [R2] recommended by the reviewer does not provide any optimality guarantee. Our method is much more than a heuristic for three reasons: (i) we connect it to the MM theory and thus all the desirable properties of MM carry over to it; (ii) we generalize the existing SQT and VQT to MQT, which plays a crucial role in solving the nonlinear integer program of NCut; (iii) we provide a convergence rate analysis.
Rebuttal: First of all, we wish to thank the TPC members for organizing reviews for our paper. The comments from Reviewer MUdC and Reviewer fiXH are quite positive; they both think that the paper is well written and contains sufficient novelty and technical contributions in terms of the NeurIPS criterion. The two reviewers provide some constructive suggestions (e.g., add convergence analysis and new experiments, and use larger datasets), all of which have been accomplished and can be readily incorporated into the paper.
In contrast, Reviewer mC6e has expressed deep concerns about the novelty of this work. But we believe that this is due to a misunderstanding. Reviewer mC6e criticizes that formulating the NCut problem as a multiple-ratio problem should not count as a novelty. However, this paper never claims this problem formulation as any sort of novelty/contribution. Our real contribution lies in solving this notoriously difficult long-standing problem from a novel fractional programming perspective, and also in the highly nontrivial performance analysis.
The other criticism from Reviewer mC6e concerns the novelty of the proposed method. He or she thinks that the proposed FPC algorithm just follows the existing quadratic transform method in [5]. But the technical contributions of this paper are much more than that:
- As we have confirmed with Reviewer mC6e and Reviewer fiXH, our main theoretic contribution stated in Proposition 3 is fundamentally different from the existing quadratic transform in Theorem 2 in [5]. Actually, the conventional quadratic transform in the literature does not even work for the NCut problem. The new fractional programming method considerably generalizes the existing one to account for a wider range of multiple-ratio problems.
- We tailor quadratic transform to the NCut problem and bring new insight. There are two types of quadratic transform: the scalar case and the matrix case. In the literature, the choice of quadratic transform simply depends on the original form of the ratios contained in the problem, i.e., the scalar (resp. matrix) quadratic transform is employed if the ratios are scalars (resp. matrices). However, this paper shows that, even though the ratios in the NCut problem are scalar, it is better to rewrite the scalar ratios in the matrix form and thereby apply the matrix quadratic transform, otherwise the discrete constraint is difficult to tackle.
- The convergence rate analysis is by no means an incremental improvement upon the previous works. Although the convergence behavior of the quadratic transform has been studied extensively in the literature, most of the prior works only discuss the conditions under which the quadratic transform based iterative algorithm can guarantee convergence. To the best of our knowledge, it remains a mystery as to how fast the quadratic transform method converges. As a newly added result, we show that
\begin{align}
|f(x^*)-f(x^{(1)})| \le \frac{LR^3}{6},
\end{align}
\begin{align}
|f(x^*)-f(x^{(k)})| \le \frac{2\Lambda R^2+2LR^3/3}{k+3},\quad\text{for}\quad k\ge 2,
\end{align}
where $f(\cdot)$ is the optimization objective value, $x^*$ is the converged solution, $k$ is the iterate index, $R$ is the Euclidean distance from the starting point to $x^*$, $L$ is the Lipschitz constant of $\nabla^2 f(x)$, $\Lambda$ is the maximum eigenvalue of $\nabla^2 f(x)$,
and $x^{(k)}$ is the solution after $k$ iterates. We emphasize that the convergence rate analysis is highly nontrivial due to the nonconvexity and the discrete constraint of the NCut problem.
The above key contributions and novelties may have been overlooked in the last round of review. We sincerely hope that our responses can help highlight them.
Pdf: /pdf/676844d4b352d530c7814b25753c8788ef26fb21.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multiclass Transductive Online Learning | Accept (spotlight) | Summary: This paper studies transductive online learning in the multiclass setting, where the label space can be unbounded. In the transductive setting, the adversary commits to a sequence of examples and can only adaptively choose labels for a sequence of instances, unlike the pure online setting where the adversary can adaptively choose both the sequence of examples and the labels, or the pure offline where the adversary commits to both a sequence of examples and labels.
The main result of this paper extends the results of Hanneke et al. [2023] by characterizing the optimal mistake bound in the unbounded label case. The proof techniques involve defining and leveraging two new combinatorial parameters termed Level-constrained Branching and Littlestone dimensions. The authors prove that the level-constrained Littlestone dimension characterizes this specific variant of online learning.
Hanneke et al. [2023]: A Trichotomy for Transductive Online Learning.
Strengths: 1. This paper characterizes the optimal mistake bound of transductive online learners in the unbounded label space setting in both realizable and agnostic cases.
2. The paper introduces two new combinatorial parameters: the Level-constrained Littlestone dimension $D(C)$ and the Level-constrained Branching dimension $B(C)$, and show that $D(C)$ characterizes transductive online learning in the unbounded label setting. These new parameters are also compared to existing combinatorial parameters like the Natarajan Dimension, DS dimension, and the Graph dimension.
3. The technical parts of the paper involve modifying the Halving technique to work with new notion of shattering using the above newly introduced combinatorial parameters, and a modified version of Littlestone's standard optimal algorithm. Combined with the definition of $D(C)$ and $B(C)$, I feel that these new definitions and techniques could be of independent interest to the learning theory community.
4. This work almost rounds out the literature on regret and mistake bounds in the original transductive online learning setting. Removing the logarithmic factor in the upper bound of Theorem 3 seems to be the final milestone in this setting.
Weaknesses: 1. The definitions of expected regret and expected cumulative mistakes in section 2.2 seem to be incomplete. What is the probability distribution that the expectations are taken with respect to? The only clear choice to me is some probability distribution over the label space $\mathcal{Y}$. Since the label space is unbounded (a critical point of this work), the notion of a probability distribution over this unbounded label space should be rigorously defined somewhere in the paper (maybe in the Appendix if there are space constraints). This seems to be an inexplicable oversight in an otherwise highly technical and detailed paper.
2. Even though the regret in the agnostic setting is stated in terms of the level-constrained Littlestone dimension, the proof technique heavily borrows from previous works. The authors state this clearly in the text as well. While the result serves a pedagogical purpose and also lends to completeness of the results, Theorem 4 cannot be counted as a significant contribution.
The following weaknesses do not directly impact my scores but should improve the paper's readability.
3. In reference [Brukhim 2022], the DS dimension characterizes multiclass learnability for unbounded label space in the offline setting. It is not immediately clear from the main text why the DS dimension cannot similarly characterize the transductive online setting and why $D(C)$ and $B(C)$ are required in the first place. The answer is possibly buried in Proposition 2, given in Appendix F.2, and should clearly be stated in the main text (even informal statements suffice), given the fact that the new combinatorial parameters are a core contribution of this paper.
4. The proofs in section 3.1 and 3.2 use the standard Halving technique, but make use of the newly defined combinatorial parameters. Both upper bounds share similar overarching ideas with each other, and also with results in previous works for the constrained label space. These two sections can be rewritten in a manner that highlights the power of the Halving technique. While the proofs themselves appear to be correct, if we take the above weakness into consideration, its also not immediately apparent from the main text why the Halving technique cannot be applied using more standard combinatorial parameters such as the DS dimension. Also, I feel that some parts of the proof can be deferred to the appendix (up to the discretion of the authors).
Technical Quality: 3
Clarity: 3
Questions for Authors: Minor issues
------------------
1. It is unclear from the definitions in section 2.2 how optimal regret and optimal mistake bound would be defined differently in the realizable setting. Could the authors expand on this?
2. Equations on lines 210, 285, and 354 should be formatted properly.
Future Work Discussions
-----------------------------------
Please feel free not to answer these questions if the authors are cramped for time. Not answering these questions will not affect my score.
1. Do the lower bounds of Theorem 3 hold directly for the list transductive online setting?
2. In the online real-valued regression setting, can one reduce a discretized version of the problem to the unbounded label case setting?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Adequately discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for pointing out that our techniques could be of independent interest to the learning theory community and that our result almost rounds out the literature on regret and mistake bounds in the original transductive online learning setting. All minor issues and suggestions will be incorporated in the final version. We address each weakness and question below.
- The expectation is taken with respect to only the randomness of the learner. The learner makes predictions by sampling from distributions over $Y$. Thus, the reviewer is correct in the sense that the expectation is taken with respect to distributions over the label space $Y$. Regarding issues of measurability, following the work of [1], we only require that the singleton sets $\lbrace y \rbrace$ be measurable. We will make the assumption needed on $Y$ more clear in the camera-ready version.
- We agree with the reviewer that our agnostic upper bound uses a pretty standard technique. Our main contribution is in the realizable setting.
- Hanneke et al. [2023] in Claim 5.3 show that the DS dimension cannot characterize multiclass transductive online learnability. Namely, they give a class $C$ such that $DS(C) = 1$, but $C$ is not transductive online learnable. This necessitates the need for new dimensions since the Littlestone dimension is clearly not necessary. We will make this explicit in the main text of the camera ready version.
- We provided a brief explanation of the relevance of the Halving algorithm in transductive online learning in lines 110-120. In particular, we noted that a naive adaptation of the Halving algorithm for multiclass learning would not work and that we defined a new notion of shattering that would allow us to apply an analog of the Halving algorithm. Nevertheless, we will make sure to point out that both algorithms in Section 3.1 and 3.2 share similar overarching ideas in the sense that they both use the Halving technique with a particular combinatorial dimension. We note that the Halving technique using the DS dimension cannot work because the finiteness of the DS dimension is not sufficient for online learnability.
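To make the Halving discussion concrete, here is a minimal sketch of the textbook binary Halving technique that both algorithms adapt; this is our own illustration with hypothetical threshold concepts, NOT the paper's multiclass variant with its new notion of shattering. On every mistake, at least half of the consistent concepts are eliminated, so the mistake count is at most $\lfloor\log_2 |C|\rfloor$.

```python
# Textbook binary Halving over a finite concept class (illustrative sketch).
def halving_predict(version_space, x):
    """Predict the majority label of the surviving concepts on x."""
    votes = [c(x) for c in version_space]
    return max(set(votes), key=votes.count)

def run_halving(concepts, xs, target):
    version_space = list(concepts)
    mistakes = 0
    for x in xs:
        y_hat = halving_predict(version_space, x)
        y = target(x)
        if y_hat != y:
            mistakes += 1
        # Keep only consistent concepts; a mistake removes at least half of them.
        version_space = [c for c in version_space if c(x) == y]
    return mistakes

# Hypothetical example: 9 threshold concepts on {0,...,7}; bound floor(log2 9) = 3.
concepts = [(lambda x, t=t: int(x >= t)) for t in range(9)]
xs = list(range(8))
m = run_halving(concepts, xs, target=lambda x: int(x >= 5))
assert m <= 3
```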
- In the realizable setting, we evaluate the learner through its mistake bound. On the other hand, in the agnostic setting, we evaluate the learner through its regret bound. In the realizable setting, the regret bound and mistake bound are the same quantities. However, this is not the case in the agnostic setting, where one usually only cares about the regret bound.
- Regarding the list setting, our lower bound applies to the list transductive online setting. However, it is possible to establish tighter lower bounds by adapting our definitions for $(L+1)$-array trees when the list size is $L$. Regarding the regression setting, one could approach this problem using binary results. We are uncertain whether our results will be directly applicable.
[1] S. Hanneke, S. Moran, V. Raman, U. Subedi, A. Tewari. Multiclass Online Learning and Uniform Convergence. 36th Conference on Learning Theory, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and for an excellent paper! I am thoroughly satisfied with the responses and I will be raising my score further in order to stress the contribution of this work. | Summary: The paper studies the problem of multiclass transductive online learning where the number of classes can be unbounded.
In the transductive setting, the learner receives in advance a sequence of instances $(x_1,…,x_T)$ from an adversary. Then, sequentially at each time step $t$, it needs to decide a label $\hat{y}_t$, and then the adversary reveals the true label $y_t$. Given a concept class $C$, in the realizable setting, the adversary must choose the labels according to a concept $c \in C$.
The goal is to design an online algorithm that minimizes the regret, i.e. the total number of mistakes done by the learner compared to the best concept chosen from C in hindsight.
The paper fully characterizes the regret for the realizable and the agnostic setting for this problem in the case of an infinite number of classes, extending previous results of Hanneke et al. [2024] limited to a finite number of classes.
Strengths: The paper characterizes the multiclass transductive online learning problem in the case of an infinite number of classes. This is a nice contribution to a fundamental problem. The paper is well written.
Weaknesses: It is not clear whether the tools presented in this paper can be applied to other settings. Specifically, the definitions of the level-constrained branching dimension and level-constrained Littlestone dimension seem specific to this (still important) problem. (See also question 2 below.)
In Lines 102-103, it is claimed that finiteness of D(C) and B(C) coincides for |Y| = 2. However, if that’s the case, it seems that Theorem 1 cannot show the Trichotomy of Hanneke et al 2024 for the binary case.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does the label set Y need to be countable?
2. The DS dimension characterizes the learnability of multiclass learning in the pac setting (Brukhim et al 2022) . Would it be possible to express Theorem 1 using DS and D (i.e., can DS replace the branching dimension, to have a nice parallel between VC and DS?)
Typos
207 upper=
211 upper round
359 lowerbounds
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There is no specific section for limitations. This theory paper is self-contained so I do not believe it is needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our work to be well written and a nice contribution to a fundamental problem. All minor typos and suggestions will be incorporated in the final version. We address each weakness and question below.
- The algorithm achieving the $\log{T}$ upper bound in the realizable setting is the most significant contribution of our work, as mentioned in Section 1.2. We believe that the adaptation of this algorithm is applicable to various other settings, including list transductive online learning and transductive online learning with bandit feedback.
- In lines 102-103, we first stated that $D(C) = VC(C)$ for binary classification. Then, we stated that $B(C) < \infty$ if and only if $L(C) < \infty$ for binary classification. We did not make the claim that $B(C) < \infty$ if and only if $D(C) < \infty$.
- The label space $Y$ does not need to be countable. Following the work of [1], we only require that the singleton sets $\lbrace y \rbrace$ be measurable. We will make the assumption needed on $Y$ more clear in the camera-ready version.
- No, it is not possible to express Theorem 1 using DS and D because $DS(C) \leq D(C)$ for every $C$. In addition, Claim 5.4 in Hanneke et al. [2023] shows that the DS dimension does not characterize multiclass transductive online learning. Namely, there is a class where $DS(C) = 1$, but $C$ is not transductive online learnable. However, if $|Y| < \infty$, then the DS dimension does characterize learnability. We also note that we have comparisons to other existing dimensions in Appendices B and F.
[1] S. Hanneke, S. Moran, V. Raman, U. Subedi, A. Tewari. Multiclass Online Learning and Uniform Convergence. 36th Conference on Learning Theory, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification and your response, and for pointing out to the appendix sections relevant to my questions.
(I am sorry for misreading Lines 102-103).
After reading the other review and the rebuttal, I still believe this is a good theory paper (and I increased my score accordingly). | Summary: This work continues the study of transductive online classification (a learning setting from the 90s recently reviewed by Hanneke et al. [NeurIPS 2023]. The main result is a trichotomy of possible rates for the general multi-class case (even for the infinite label case) in the realizable setting; answering an open question by Hanneke et al. The three cases are characterized by novel combinatorial dimensions (variant of Littlestone dimension called level-wise Littlestone and ).
Additionally they achieve optimal $\tilde{\Theta}(\sqrt{TD(\mathcal{C})})$ (up to log factors) rates in the agnostic setting again determined by the level-wise Littlestone dim. $D(\mathcal{C})$.
Strengths: Timely and interesting paper continuing the recent interest in transductive online classification and related models. Almost tightly characterizes the possible rates for agnostic and realizable learning.
--- rebuttal ---
changed from 6 to 7
Weaknesses: The discussion of previous related work is not in-depth.
The gap between $D(C)$ and $B(C)$ in Theorem 3 (in the $B(C)<\infty$ case) is somewhat unsatisfactory.
See also questions and limitations below.
Minor:
* You might want to fix "Hanneke et al. [2024]" to [2023], otherwise you cite a NeurIPS paper in the future.
* typo in line 37: should be $c:X\to Y$ not $c:X\mapsto Y$.
* typo in line 207: "upper="
* Perhaps hint the additional results (Prop 4, comparison to DS, graph-dim) in the main paper, at least with some short sentences.
Technical Quality: 4
Clarity: 3
Questions for Authors: In the agnostic setting for $|Y|=k$ one would expect something like $O(\sqrt{T\mathrm{Ndim}(C)k})$ up to log factors, similar to the bound in the realizable setting in Hanneke et al. 2023 (Theorem B.3). Please relate this to your agnostic bound.
Can the gap between $D(C)$ and $B(C)$ be arbitrary? More generally, how far can $D(C)$, $B(C)$, and $L(C)$ be from each other? I agree that stating the rate as $O(1)$ makes sense, but it would be nice to have some quantity explicitly giving the rate in this case, i.e., some Xdim s.t. the rate is $O(\mathrm{Xdim}(H))$ (as Ldim does for standard online classification). Note that the tree ranks by Ben-David et al. [1997] (see limitations, as well) achieve this specifically also for the worst-sequence/transductive setting (at least for the binary case).
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Please discuss previous work more thoroughly. The tree ranks from the "Online learning versus offline learning" (self-directed, worst-case sequence, etc.) [Ben-David et al. 1997] and similar papers are very much related to the proposed dimensions here (which is only very briefly acknowledged in a short sentence here). E.g., the "level-constrained" variant of the trees in these papers also become the VC-dim for $|Y|=2$. Also similarities to Devulapalli and Hanneke could be discussed more (e.g., the lower bounds there probably apply here too).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our work timely and interesting. All minor typos and suggestions will be incorporated in the final version. We address each weakness and question below.
- For small $k$ (i.e., $k \ll 2^{(\log T)^2}$), the Natarajan bound can be smaller than the upper bound in terms of the Level-constrained Littlestone dimension. However, for large $k$ (i.e., $k > 2^{(\log T)^2}$), our upper bound in terms of $D(C)$ can be better. We will make sure to point this out in the camera-ready version.
- There can indeed be an arbitrary gap between $B(C)$ and $D(C)$. For example, for the class of thresholds $C = \{x \mapsto \mathbb{1}\{x \geq a\} : a \in \mathbb{R}\}$, we have that $D(C) = VC(C) = 1$, however $B(C) = \infty$ using the lower bound from Hanneke et al. [2023]. We will make sure to point this out in the main text. With regards to $B(C)$ and $L(C)$, there can also be an arbitrary gap: Proposition 4 in Appendix F gives a class where $B(C) \leq 2$ but $L(C) = \infty$. We will also make this explicit in the main text.
- Regarding the case when the rate is $O(1)$, Theorem 3 shows that when $B(C) < \infty$, the minimax rate is at most $B(C)$ in the realizable setting. With regards to lower bounds, we can also show that when $T$ is large enough (namely $T \gg 2^{B(C)}$), the lower bound in the realizable setting is $B(C)/2$. We will include this lower bound in the camera-ready version.
- We thank the reviewer for pointing out these related previous works. We note that we do have a more in-depth discussion of prior work in Appendix A. In the final version, we will relocate this section to the main text. Additionally, we will incorporate a sentence that draws a precise comparison between our $B(C)$ and the rank notion from the paper by Ben-David et al. [1997]. It is also important to note that while the dimension introduced by Devulapalli and Hanneke provides a lower bound, it is not feasible to establish an upper bound based on it. Specifically, their paper includes a theorem demonstrating a gap between transductive and self-directed online learning, even in the binary case.
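As a short supplementary note on the threshold example above (our own argument, not part of the rebuttal), the claim $VC(C) = 1$ for thresholds can be checked directly:

```latex
% Thresholds: C = { x -> 1{x >= a} : a in R }.
% A single point x_1 is shattered: a <= x_1 realizes label 1, a > x_1 label 0.
% Two points x_1 < x_2 cannot be shattered, since the labeling (1, 0) is
% unrealizable:
\[
  \mathbb{1}\{x_1 \geq a\} = 1
  \;\Longrightarrow\; a \leq x_1 < x_2
  \;\Longrightarrow\; \mathbb{1}\{x_2 \geq a\} = 1 .
\]
% Hence VC(C) = 1, and in the binary case D(C) = VC(C) = 1.
```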
---
Rebuttal Comment 1.1:
Comment: Thanks for these comments and the remark that in the realizable case the rate is $\Theta(B(C))$ if $T$ is large enough. I raised my score. | Summary: This paper addresses the problem of multiclass transductive online learning with unbounded label spaces. The paper extends previous work on binary and finite label spaces to the more general case of unbounded label spaces. The authors introduce two new combinatorial dimensions - the Level-constrained Littlestone dimension and the Level-constrained Branching dimension - to characterize the optimal mistake bounds in this setting. They establish a trichotomy of possible minimax rates in the realizable setting, showing that the expected number of mistakes can only grow like $\Theta(T)$, $\Theta(\log T)$, or $\Theta(1)$.
Strengths: - The paper solves an open problem in online learning theory by characterizing optimal mistake bounds for unbounded label spaces.
- The paper is very well written and easy to understand.
Weaknesses: - This paper extends the results of multiclass transductive learning to infinite label spaces, but it is not clear how important this setting is.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the authors provide a more intuitive explanation of the Level-constrained Littlestone and Branching dimensions?
- Can the authors discuss the computational complexity of their algorithm for this setting?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting that our work resolves an open problem in online learning theory and that our paper is very well written and easy to understand. We address each weakness and question below.
- Multiclass learning with unbounded label spaces is a fundamental setting that has been under study for nearly 40 years, starting with [1,2] and more recently in [3,4,5,6]. Studying infinite label spaces is important as guarantees for multiclass learning should not inherently depend on the number of labels, even when it is finite. This is quite a practical concern as many modern machine learning paradigms have massive label spaces, such as in face recognition, next word prediction, and protein structure prediction, where dependence on label-space size in learning bounds would be undesirable. Beyond being of practical interest, multiclass learning with infinite labels might also advance the understanding of real-valued regression problems [7]. Finally, in mathematics, concepts involving infinities often provide clearer insights into the true hardness of a problem. These motivations have been highlighted in lines 78-85.
- To define the Level-constrained Littlestone dimension, we first need to define the Level-constrained Littlestone tree. A Level-constrained Littlestone tree is a Littlestone tree with the additional requirement that the same instance has to label all the internal nodes across a given level. The Level-constrained Littlestone dimension is just the largest natural number $d \in \mathbb{N}$, such that there exists a shattered Level-constrained Littlestone tree of depth $d$. We will add a more intuitive explanation of this dimension in the final version.
- To define the Level-constrained Branching dimension, we first need to define the Level-constrained Branching tree. The Level-constrained Branching tree is a Level-constrained Littlestone tree without the restriction that the labels on the two outgoing edges are distinct. The Level-constrained Branching dimension is then the smallest natural number $d \in \mathbb{N}$ that satisfies the following condition: for every shattered Level-constrained Branching tree $\mathcal{T}$, there exists a path down $\mathcal{T}$ such that the number of nodes on this path whose outgoing edges are labeled by different elements of $Y$ is at most $d$. We will add a more intuitive explanation of this dimension in Section 1.2 of the final version.
- Our algorithms are not computationally efficient. Indeed, our algorithms require calculating the level-constrained Littlestone dimension or level-constrained branching dimension for concept classes. In the binary setting, the level-constrained Littlestone dimension equals the VC dimension, which is computationally hard to compute for general concept classes. That said, most algorithms in online learning theory are not computationally efficient. For instance, in the case of adversarial online learning, SOA involves computing the Littlestone dimension of concept classes defined by the online learner in the course of its interaction with the adversary, which are challenging computations, even when the concept class and the set of features are finite [8]. Notably, no efficient algorithm can achieve finite mistake bounds for general Littlestone classes [9].
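To supplement the prose definitions above, here is one way to state the Level-constrained Littlestone dimension formally (our notation, which may differ from the paper's; $\sigma_{\le i}$ denotes the length-$i$ prefix of a branch $\sigma$):

```latex
\[
D(C) = \sup\Big\{ d \in \mathbb{N} :
  \exists\, x_1,\dots,x_d \in X,\;
  \exists\, y_\sigma \in Y \text{ for } \sigma \in \{0,1\}^{\le d}\setminus\{\varnothing\},
\]
\[
  \text{with } y_{\sigma 0} \neq y_{\sigma 1} \text{ for all } \sigma \in \{0,1\}^{< d},
  \text{ s.t. } \forall\, \sigma \in \{0,1\}^{d}\;
  \exists\, c \in C : c(x_i) = y_{\sigma_{\le i}} \text{ for all } i \le d \Big\}.
\]
% Level constraint: a single instance x_i labels every internal node at level i.
% The Branching variant drops the requirement y_{sigma 0} != y_{sigma 1}.
```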
[1] Balas K. Natarajan. Some results on learning. 1988.
[2] B. K. Natarajan. On learning sets and functions. Machine Learning, 4:67–97, 1989.
[3] A. Daniely, S. Sabato, S. Ben-David, and S. Shalev-Shwartz. Multiclass learnability and the ERM principle. 24th Conference on Learning Theory, 2011.
[4] A. Daniely and S. Shalev-Shwartz. Optimal learners for multiclass problems. 27th Conference on Learning Theory, 2014.
[5] N. Brukhim, D. Carmon, I. Dinur, S. Moran, and A. Yehudayoff. A characterization of multiclass learnability. 63rd Annual IEEE Symposium on Foundations of Computer Science, 2022.
[6] S. Hanneke, S. Moran, V. Raman, U. Subedi, A. Tewari. Multiclass Online Learning and Uniform Convergence. 36th Conference on Learning Theory, 2023.
[7] I. Attias, S. Hanneke, A. Kalavasis, A. Karbasi, G. Velegkas. Optimal learners for realizable regression: PAC learning and online learning. Advances in Neural Information Processing Systems 36 (2023).
[8] P. Manurangsi, A. Rubinstein. Inapproximability of VC dimension and Littlestone's dimension. 30th Conference on Learning Theory, 2017.
[9] A. Assos, I. Attias, Y. Dagan, C. Daskalakis, M. K. Fishelson. Online learning and solving infinite games with an ERM oracle. 36th Conference on Learning Theory, 2023.
LLM Circuit Analyses Are Consistent Across Training and Scale | Accept (poster) | Summary: This paper examines how a few common circuits (IOI, greater than, verb agreement, gendered pronoun prediction) develop at different model scales and timings in training. The main findings are that these circuits develop at the same time across different model scales, that once they develop they do not disappear, and that although individual components of the circuits can change once developing the overall circuit structure mostly remains the same once it develops.
Strengths: - The "sharing" behavior, where one attention head stops being suited for a particular purpose and another one starts being used for that purpose during training, is extremely interesting and to my knowledge a novel finding.
- The authors are the first to my knowledge to study how the structures of circuits change over training.
- The authors develop novel metrics that allow them to test if a given model implements a certain type of circuit.
- The authors contribute to universality by showing that previously studied circuits occurs in Pythia models of many scales.
Weaknesses: - Only a few simple, well known circuits are analyzed. Significantly, the methods introduced are not scalable to large datasets of auto-discovered circuits.
- The motivation of fine-tuning/continual training is not quite applicable, since the work in this paper studies only pretraining checkpoints (which is fundamentally different from something like RLHF or fine-tuning on a narrow distribution).
- As the authors acknowledge, some results are not novel and merely reproduce earlier work (e.g. when induction heads arise).
Technical Quality: 4
Clarity: 3
Questions for Authors: - Maybe I missed it, but what components are the circuits over? Are they MLPs + attention heads? Stating this very clearly could help, as well as pictures of the circuits in the appendix.
- It would be good to also have a graph showing Jaccard similarity of the current circuit vs. the final circuit, as it is hard to tell if the circuit is slowly drifting across nodes or if it stays mostly the same with a few heads switching back and forth.
- I'm confused about section 4.2. Are you rerunning the circuit discovery algorithm at each checkpoint to get candidate heads? Or are you just running on all possible heads? How do you know the circuit uses the head if it's the latter? I would also like to see Figure 4 at all tokens, including before the heads develop; since the lines are just horizontal, it's not clear to me that such heads don't always exist. It'd be great to see a steep increase when the circuit develops.
- Line 162 says we're going to learn why bigger models do worse (an interesting finding in and of itself!) but I didn't see where this was touched on.
- One large claim (line 194) is that good performance on the circuits emerges at the same time as the heads emerge, but as these are on two different plots (figure 1 and figure 2) with different axes, this is hard to tell. Perhaps they could both be plotted on the same axes in the appendix with different line patterns to tell the different colors apart?
- Nit: in Figure 1, is there a reason some of it is logit diff and some of it is prob diff?
- The authors should ideally cite The Quantization Model of Neural Scaling (the hypothesis that language models learn skills ("quanta") in order of importance), since their work supports the Quanta hypothesis in a very interesting way (the circuits arise at the same time across models).
- For weighted Jaccard similarity, it would be good to "see" attention heads switching. Maybe you can use the weight of edges in the DAG to assign nodes importance and do some sort of importance-weighted Jaccard? As is, it is hard to tell what's changing in the circuits; much of the variance in the graph could just be from the size of the circuit changing, as opposed to the more interesting "sharing".
I would be happy to raise my score if some of these questions/concerns were addressed.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your thorough review! We respond below, abbreviating your comments for space reasons.
## Weaknesses
> the methods introduced are not scalable to large datasets of auto-discovered circuits
We disagree in part about the scalability of our methods. EAP-IG is efficient, and can now be used with models around 7B parameters in size (unfortunately, this was not the case when writing the paper). The fact that small models are increasingly capable means that studying e.g. OLMo-1B or 7B could show us how models learn even rather complex tasks. However, we do agree that studying 156 checkpoints for each model was compute-intensive, and some parts (copy suppression evaluation) scale poorly.
> the motivation of fine-tuning/continual training is not quite applicable
We agree that narrow fine-tuning doesn't quite fit as a motivation, and will revise this. However, continual training is often done on a wider distribution of text (e.g., a large variety of user queries), and there may also be cases where researchers would want to look at circuits at intermediate checkpoints of a model before it has finished training.
> some results are not novel and merely reproduce earlier work
Although our induction head findings do reproduce findings in Olsson et al., we primarily mention them in passing; the bulk of our paper consists of novel findings: other component types, performance emergence, algorithmic stability. We will try to focus on the more novel parts of our work.
## Questions
> what components are the circuits over?
The circuits are composed of attention heads and MLPs (plus the inputs and logits). We will clarify this!
> It would be good to also have a graph showing Jaccard similarity of the current circuit vs. the final circuit
Thanks for the suggestion, implemented in Figures 2/3 of the response PDF. We see that in Figures 2/3 there are still many fluctuations compared with the final circuits, which indicates that component swapping occurs during training. We can also observe a rising trend, indicating that the circuit grows increasingly similar to the final circuit during training. Therefore, the circuit slowly drifts towards its final form even as components swap during training.
> I'm confused about section 4.2. Are you rerunning the circuit discover algorithm at each checkpoint to get candidate heads? Or are you just running on all possible heads? How do you know the circuit uses the head if it’s the latter? I would also like to see Figure 4 at all tokens, including before the heads develop.
Apologies for the confusion in Section 4.2! To clarify, there are two criteria for including a head in a group: whether it meets a component score threshold, and whether it is important to model performance as measured by the path patching causal intervention. The latter intervention is roughly what EAP-IG approximates, so it’s actually a stronger test than checking circuit membership; see our response to dE5s for more details. Given this procedure, we start Figure 4 at the 30B token mark (10% of training) because this is where name-movers (NMHs) first appear as part of the IOI circuits. S2-inhibition and other heads are found via their impact on NMHs, so without NMHs, there’s nothing to measure / plot. Before the 30B token mark, most IOI circuits in these models consist simply of a single copy suppression head. Gradual emergence of relevant component types is better seen in Figures 2 and 3 of the main text.
> Line 162 says we're going to learn why bigger models do worse
Our apologies for the confusion! Large models do not learn any faster than smaller models do (above a certain threshold), because such models do not develop the responsible heads any faster than smaller models do. We don’t say that explicitly right now, but we will add that to the paper.
> Perhaps they could both be plotted on the same axes in the appendix with different line patterns to tell the different colors apart?
We agree that plotting things in one plot, with the same axis, would help, but we didn't have the space in our response PDF. However, we plotted the sum of component effects for in-circuit heads using the same x-axis as the behavioral plot. In this new plot, you can see clearly how greater-than components arise at or just before task behavior emerges, at $2\times10^9$ tokens. For IOI, copy suppression, the initial part of the circuit, arises at $4\times10^9$ tokens, while the name movers form a bit later, at $7$ to $8\times10^9$ tokens. We will include the plot that you suggest in the appendix.
> in Figure 1, is there a reason some of it is logit diff and some of it is prob diff?
For tasks with one in/correct answer, we use logit diff; for tasks with multiple, we use prob diff, aligning with the original metrics for these tasks. Logit diff doesn’t quite work for multi-answer tasks; we can’t sum over the logits of all in/correct answers as with prob diff. We could use prob diff for all tasks, but some argue against this (Heimersheim and Nanda, 2024).
> The authors should ideally cite The Quantization Model of Neural Scaling
Thank you for pointing out that connection! We’ll add that citation.
> Maybe you can use the weight of edges in the DAG to assign nodes importance and do some sort of importance-weighted jaccard?
We have included related plots in the response PDF (Figures 2 and 3). For now, instead of plotting importance-weighted Jaccard similarity (JS) for node sets, we do the same for edge sets, as EAP-IG gives us only edge importance; we think computing node importance from edge importance would give similar results. We compute the Jaccard similarity between intermediate checkpoints and the final circuit. We can see that the JS slowly climbs and remains high at the end. This aligns with the conclusion that the circuit slowly drifts towards its final form while components swap during training. To see specific attention head switching, we refer to Figure 3 (main text).
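A minimal sketch of the importance-weighted (Ruzicka-style) Jaccard similarity over edge sets described above; the edge names and scores are illustrative, and `edge_scores` dicts stand in for absolute EAP-IG edge attributions:

```python
def weighted_jaccard(scores_a, scores_b):
    """Importance-weighted Jaccard (Ruzicka) similarity between two dicts
    mapping circuit edges to non-negative importance scores.
    Edges absent from a dict are treated as having score 0."""
    edges = set(scores_a) | set(scores_b)
    num = sum(min(scores_a.get(e, 0.0), scores_b.get(e, 0.0)) for e in edges)
    den = sum(max(scores_a.get(e, 0.0), scores_b.get(e, 0.0)) for e in edges)
    return num / den if den > 0 else 1.0

# Illustrative edge-importance dicts: intermediate checkpoint vs. final circuit
ckpt = {("a2.h1", "a5.h3"): 0.9, ("a5.h3", "logits"): 0.5}
final = {("a2.h1", "a5.h3"): 0.9, ("a4.h7", "logits"): 0.5}
print(weighted_jaccard(ckpt, final))  # shared importance mass 0.9 over total 1.9
```

Unweighted Jaccard is the special case where every present edge has score 1.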
---
Rebuttal Comment 1.1:
Comment: Re scalability, I meant that to run the same analysis on a new circuit, you need to figure out a custom test that will identify each head of that circuit, which is very non trivial (correct me if I'm wrong). This to me feels like a large weakness in being able to scale up this analysis to a large database of circuits and reproduce the results there.
Thank you for plotting the additional Jaccard similarity graphs! I think the results of this experiment were quite nice. I am curious how you think the new graphs affect the discussion of load balancing. It seems there are now fewer spikes; does this mean that the fluctuations might mostly be due to noise in the interventions in the less important edges?
To make sure I understand 4.2, you're trying to find out if the circuit "works" in a different way in different points in time? I am a little uncomfortable still with this experiment, it seems a bit too much to me like you are assuming the conclusion (that the circuit works in a certain way) and then showing that when you assume that conclusion and ablate those edges, then model performance decreases. The mere fact that you can find the heads at all seems to me that you are assuming the circuit works in a certain way, and then the ablation doesn't seem necessary. But please correct me if I am wrong.
For now, I will keep my score, thank you for responding to my comments!
---
Reply to Comment 1.1.1:
Comment: Thanks for your response; it’s much appreciated! As for the points you make:
> Re scalability, I meant that to run the same analysis on a new circuit, you need to figure out a custom test that will identify each head of that circuit, which is very non trivial (correct me if I'm wrong). This to me feels like a large weakness in being able to scale up this analysis to a large database of circuits and reproduce the results there.
While EAP-IG doesn't require custom tests (as it is agnostic regarding the nature of the sub-functions in the circuit), we do agree that the fact that there exist so few well-characterized attention heads makes testing head identity difficult. However, this isn't unique to our method: automatically assigning / verifying the semantics of model components is one of the big challenges of mechanistic interpretability. The difficulty of this challenge is why we don't yet have a large database of well-understood circuits to scale to. However, we hope that as that nascent line of research progresses, more heads (and tests for them) will emerge! Finding circuit structure was also very slow just two years ago; ideally, the rate at which we understand circuit semantics will also accelerate. Once these tests emerge, it would be relatively straightforward to repeat our experiments with them.
> Thank you for plotting the additional Jaccard similarity graphs! I think the results of this experiment were quite nice. I am curious how you think the new graphs affect the discussion of load balancing. It seems there are now fewer spikes; does this mean that the fluctuations might mostly be due to noise in the interventions in the less important edges?
We’re glad our new graphs proved useful! In spot-checking the data points individually, we do indeed see what you suggested: that fluctuations occur more frequently in less-important edges. (Though, we do emphasize that even highly-important nodes also drop out of the circuit at times, as per Figure 2 in the main paper).
> To make sure I understand 4.2, you're trying to find out if the circuit "works" in a different way in different points in time? I am a little uncomfortable still with this experiment, it seems a bit too much to me like you are assuming the conclusion (that the circuit works in a certain way) and then showing that when you assume that conclusion and ablate those edges, then model performance decreases. The mere fact that you can find the heads at all seems to me that you are assuming the circuit works in a certain way, and then the ablation doesn't seem necessary. But please correct me if I am wrong.
We understand your concern about 4.2, but want to point out here that our approach is slightly different than you suggest. Essentially, we are trying to test whether our assumptions about the circuit are correct. For example, for Figure 4B, we first identify all CS/NM heads in the model that are contributing positively to performance (without assuming the overall algorithm), and our metric is the ratio between “performance reduction if *only* CS/NM heads are ablated” (which is our hypothesis) and “performance reduction if *all* heads are ablated,” finding that this ratio is quite high, confirming our hypothesis that they are important.
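The ratio metric described above can be sketched as follows; the function and variable names are our own illustrative stand-ins (not the paper's actual implementation), and the numbers are made up:

```python
def ablation_ratio(perf_clean, perf_ablate_hypothesis, perf_ablate_all):
    """Fraction of the total ablation-induced performance drop explained by
    ablating only the hypothesized heads (e.g. CS/NM heads), relative to
    ablating all heads. A ratio near 1 supports the hypothesis that the
    chosen heads account for most of the effect."""
    total_drop = perf_clean - perf_ablate_all
    hypothesis_drop = perf_clean - perf_ablate_hypothesis
    return hypothesis_drop / total_drop

# Illustrative logit-diff performance under each intervention
print(ablation_ratio(perf_clean=3.0, perf_ablate_hypothesis=0.4,
                     perf_ablate_all=0.2))  # most of the drop is explained
```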
Having found that NM heads exist at all checkpoints in these models, we then identify S2 inhibition heads through their effect on those NM heads, and our next metric in Figure 4C is the ratio between “performance reduction if only S2I heads are ablated, as intermediated through the NM heads” and “performance reduction if all heads upstream of NM heads are ablated, as intermediated through the NM heads” (and then so on for Figure 4D). There are things that these tests don’t cover, like the relative importance of CS vs. NM heads, or induction vs. duplicate token heads, or what the non-S2I heads do to affect the NM heads, but our tests do provide evidence that specific key algorithmic steps are occurring consistently in these models. | Summary: This paper studies the stability of circuits in language models during pre-training and across different model sizes. Specifically, it examines the Pythia model suite and a selection of four simple tasks with known circuits: indirect object identification (IOI), subject-verb agreement, greater-than, and gendered pronouns. The analysis involves multiple steps, each repeated across different model sizes (70m - 2.8b) and pre-training checkpoints:
1. Evaluate the ability of the language model to solve each of these tasks.
2. Evaluate the emergence of specific attention heads, which previous work has established as important for each of these tasks.
3. Narrow down on the IOI task to evaluate whether the three core algorithmic steps, again known from previous work, are consistent across training and scale. To this end, the authors use path patching to ablate the connection between the components involved in each of these steps.
4. Evaluate the consistency of the circuits during training by looking at component and edge overlap. To identify these circuits, the authors use edge attribution patching.
Overall, the results reveal a significant level of consistency of circuits during training and across scale: important attention heads tend to start emerging after roughly the same number of tokens, although at different paces; the effect of important heads in the IOI circuit is somewhat consistent once these components emerge; and there is significant circuit component overlap across checkpoints.
Strengths: - The paper investigates an interesting question: to what extent might existing circuit analysis results generalise across training and scale? Understanding the training dynamics of circuits is important for the field of (mechanistic) interpretability.
- The authors employ a variety of analysis techniques, including path patching and edge attribution patching, to systematically and causally test their hypotheses.
- Despite focusing on a narrow set of tasks, the authors identify potentially general motifs, such as “load balancing”. These insights open up avenues for future work to develop a more fundamental understanding of the stability of circuits across training and scale.
Weaknesses: - While the results demonstrate a significant degree of consistency and stability of circuits across training, the focus on a small number of simple tasks provides limited insight into whether circuits for other tasks are consistent as well.
- The results reveal several training dynamics that are left unexplored. For example, load balancing or the observation that many attention heads emerge after roughly the same number of tokens could have been explored in more detail. I believe a detailed investigation of one of these phenomena could have significantly improved the paper.
Minor issues:
- The title of the paper (“Stability and Generalizability of Language Model Mechanisms Across Training and Scale”) does not match the title in OpenReview (“LLM Circuit Analyses Are Consistent Across Training and Scale”).
- Typo in L67: “… we can verify that they are …”
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In Figure 5, you focused on the Jaccard similarity with *all* previous checkpoints. This makes it hard to evaluate whether circuits had phases of high consistency with only the previous checkpoint. Did you observe any periods of significant circuit stability during training? This would be interesting to see as the authors of latent state models of training dynamics [1] suggest phase transitions between different algorithmic solutions.
2. You mention that “Circuits in larger models require more components, with circuit sizes positively correlating with model scale” in your contributions, but I don’t see this discussed in the following sections. Have you been able to study how these circuits differ across scale? Do we find duplications of the same components? How does this relate to the higher stability of circuits in larger models?
[1] M. Y. Hu, A. Chen, N. Saphra, and K. Cho, ‘Latent State Models of Training Dynamics’, arXiv [cs.LG]. 2024.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The selection of tasks limits the generalisation of their findings, as previous studies suggested that various behaviours of language models emerge at scale [2] or qualitatively change across scale [3]. Both of these suggest that circuits still fundamentally change for more complex tasks. However, I believe that this is mostly addressed in the limitations section of the paper.
[2] J. Wei et al., ‘Emergent Abilities of Large Language Models’, arXiv [cs.CL]. 2022.
[3] J. Wei et al., ‘Larger Language Models Do In-Context Learning Differently’, arXiv [cs.CL]. 2023.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful review! We agree that many of the issues you bring up are important, and have attempted to address them below.
## Weaknesses:
> While the results demonstrate a significant degree of consistency and stability of circuits across training, the focus on a small number of simple tasks provides limited insight into whether circuits for other tasks are consistent as well.
We agree that it remains to be seen whether more complex circuits, or a broader set of circuits in general, show the same stability over training and scale. Currently, however, only a very small set of circuits has yet been identified, and finding new circuits was somewhat beyond the scope of our paper. We hope that our work will be useful to follow-on investigations of future circuits and their potential for consistency across these dimensions. We do not claim that all circuits will retain this consistency, and it remains to be seen what happens in much, much larger models; however, we do suggest that this provides some evidence that circuits found at specific points and model sizes can provide information that holds to at least some degree across these dimensions.
> The results reveal several training dynamics that are left unexplored. For example, load balancing or the observation that many attention heads emerge after roughly the same number of tokens could have been explored in more detail. I believe a detailed investigation of one of these phenomena could have significantly improved the paper.
We also agree that investigating load balancing or token-dependent formation of components would have added to the paper; however, we felt such investigations likely would deserve their own dedicated projects (likely involving significant amounts of model training to get more granular checkpoints) and research output, and as such decided the scope of this paper would be best limited to an initial investigation of a wide set of phenomena. Those two topics are certainly worth investigation, however, and we hope follow-up work will address them!
>The title of the paper (“Stability and Generalizability of Language Model Mechanisms Across Training and Scale”) does not match the title in OpenReview (“LLM Circuit Analyses Are Consistent Across Training and Scale”).
> Typo in L67: “… we can verify that they are …”
Thanks for catching these! We will fix them.
## Questions:
> 1. In Figure 5, you focused on the Jaccard similarity with all previous checkpoints. This makes it hard to evaluate whether circuits had phases of high consistency with only the previous checkpoint. Did you observe any periods of significant circuit stability during training? This would be interesting to see as the authors of latent state models of training dynamics [1] suggest phase transitions between different algorithmic solutions.
To provide more detail on the question of Jaccard similarity and circuit stability, we have added a set of graphs (Figures 2 and 3) that show circuit Jaccard similarity at each checkpoint to the final circuit. We do observe periods of stability in some models and tasks, but in other cases we see gradual or spiky transitions towards the final circuit. We also note that we conducted EAP-IG on a number of seed variations of the Pythia models, finding a variety of patterns; in some models, the circuits were quite stable for long periods, while others showed sharp, oscillating changes between checkpoints. Unfortunately, we didn’t have room to include these plots, but we could include them in the appendix.
> 2. You mention that “Circuits in larger models require more components, with circuit sizes positively correlating with model scale” in your contributions, but I don’t see this discussed in the following sections. Have you been able to study how these circuits differ across scale? Do we find duplications of the same components? How does this relate to the higher stability of circuits in larger models?
In Section 5, we compute the Pearson correlation between circuit sizes and model sizes. The Pearson correlation is r = 0.72 for IOI and SVA, 0.9 for Greater-Than, and 0.6 for Gender Pronoun.
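As a concrete aside, this kind of correlation is straightforward to reproduce; in the sketch below the model sizes are real Pythia scales, but the circuit sizes are invented placeholders for illustration, not the paper's measurements.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Circuit sizes below are hypothetical placeholders, not reported data.
model_params = [70e6, 160e6, 410e6, 1.4e9, 2.8e9]
circuit_sizes = [95, 140, 180, 320, 410]

# Correlate circuit size with log model size, since parameter counts
# span two orders of magnitude.
r = pearson([math.log(p) for p in model_params], circuit_sizes)
```

With monotonically increasing toy data like this, `r` comes out strongly positive, mirroring the positive correlations reported above.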
We do find some differences in circuits across model scales. As we cover in Section 4, the algorithmic structure remains similar, but as shown in Section 5, the circuits are larger in larger models. In the case of the IOI task, some components demonstrate more replication than others; for example, copy suppression heads and S-inhibition heads remain few in number across circuits in different model scales, while induction heads and name-mover heads increase significantly. On balance, less-volatile head types seem to increase the most in number, which does indeed support your suggestion that this is related to circuit stability in larger models. Regardless, though, all identified components seem to be present in all models, and seemed to be similarly important to their circuits for IOI.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional information regarding differences in circuits across model scales and the additional Figures 2 & 3. I appreciate the effort and believe that these results add significant nuance to your findings. Thus, I strongly suggest including them in the final version of the paper to enhance the clarity of your analysis. I think the other reviewers have brought up some valid concerns (e.g. generality of findings), but I believe the authors have done an adequate job at trying to address them. Overall, I believe this paper would be an interesting contribution to the conference and will increase my score to accept. | Summary: This study explores the emergence and evolution of internal mechanisms in language models of varied sizes during the training process. Specifically, it examines simple tasks such as IOI, Greater-than, Gender Pronoun, and Subject-verb agreement using Pythia models. The findings indicate that models of different sizes tend to learn these tasks after a similar number of tokens during training. Moreover, while individual components of the models may change in functionality, the overall algorithm implemented by the models remains consistent throughout the training process. Lastly, the study identifies that once a circuit emerges, it generally remains stable thereafter.
Strengths: 1. This work is highly relevant for two key reasons:
- Most mechanistic interpretability studies do not examine the internal mechanisms of models throughout the training and fine-tuning processes. As a result, they fail to offer a comprehensive understanding of how and when a model learns its mechanisms and how these mechanisms evolve after their creation.
- Unlike many existing studies, this work analyzes multiple tasks across various models of different sizes.
2. The results from Section 4 indicate that, although some components of the model change their functionality, the overall algorithm used by the model to solve simple tasks, such as IOI, remains consistent. This finding aligns with previous results from [1], which state that even though fine-tuned models have larger circuits, their mechanisms remain consistent, even for more complex tasks like entity tracking.
3. The presentation is clear, but it does require the reader to have some background in mechanistic interpretability. Additionally, there are a few other pertinent works on mechanistically understanding the impact of fine-tuning that the authors should cite [2, 3].
[1] Prakash et al, "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking", 2024.
[2] Jain et al, “Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks”, 2023.
[3] Lee et al, “A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity”, 2024.
Weaknesses: 1. I’m unsure about the claim in section 3.2 that, because circuit components involved in the internal computation of the tasks emerge at the same time as the model’s behavioral performance, the former is responsible for the emergence of the latter, for the following reasons:
- All the model heads are evaluated, rather than only those involved in the circuit performing the task. It is possible that some heads exhibit certain behaviors without actually being part of the circuit. Thus, concluding that their occurrence is responsible for the model's performance could be misleading.
- Furthermore, the analysis is primarily conducted for a single task, IOI, rather than all four tasks mentioned earlier. Although some of the heads studied are involved in the Greater-than task, MLP neurons, which have been shown to be part of the circuit, are not analyzed.
2. While it is interesting to note that individual circuit components emerge simultaneously with the model's behavior, it still does not explain why these components emerge after similar token counts in models of varied scales.
3. Section 4 investigates only the IOI task, which raises concerns regarding the generalizability of its results.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The finding that models of varied sizes learn a task after a similar number of tokens is intriguing and somewhat counterintuitive. This makes me wonder if analyzing how gradient updates modify model weights and comparing these changes across models could provide a better understanding of how language models learn (potentially in future works).
2. Section 3.2 states that to validate the importance of four mentioned types of attention heads, a circuit is discovered for each model at each checkpoint. I’m unsure of what a circuit discovery algorithm will discover for early checkpoints where the model does not even have behavioral ability to perform the task. So, it’s surprising to me that authors were still able to identify circuits and functionality of their components which is consistent with existing discovered circuits in the literature. I would like to see evaluation results of these circuits.
3. Section 2.2 mentions that there is no definitive method for verifying the entirety of the identified circuit. This needs more explanation, particularly regarding why metrics like completeness proposed in [4] are considered inadequate for this purpose.
[4] Wang et al, “Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small”, 2022.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The primary limitation of this work is the use of simple tasks such as IOI, Greater-than, Subject-verb, and Gender Pronoun for analysis. Some of the results may not be applicable to more complex tasks. Additionally, relying solely on the Pythia model suite trained using the same training data poses a risk to the generalizability of the findings. However, despite these limitations, the results are insightful and should be valuable for the mechanistic interpretability community and beyond.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful review! These are good points, which we answer below.
## Weaknesses
> 1.1 All the model heads are evaluated, rather than only those involved in the circuit performing the task. It is possible that some heads exhibit certain behaviors without actually being part of the circuit. Thus, concluding that their occurrence is responsible for the model's performance could be misleading.
Indeed, this could be a concern. In our PDF response, we plotted the sum of all component effects from all heads in the circuit (Figure 1). We see that the emergence of these components comes right as (or just before) the emergence of task behavior as well, indicating that our previous results are not due to the inclusion of non-circuit heads.
> 1.2 Furthermore, the analysis is primarily conducted for a single task, IOI, rather than all four tasks mentioned earlier. Although some of the heads studied are involved in the Greater-than tasks, MLP neurons, which have been shown to be part of the circuit, are not analyzed.
It’s true that Hanna et al. (2023) study MLPs and their neurons. However, unlike the induction and successor heads on which we focus, these aren’t known to perform a generalizable function that is reused across tasks. We could track the MLPs that connect to the logits in the circuit, and which neurons therein matter most, but we wouldn’t gain any insights about how higher-level model abilities develop. For these reasons, we excluded them.
> 2. While it is interesting to note that individual circuit components emerge simultaneously with the model's behavior, it still does not explain why these components emerge after similar token counts in models of varied scales.
We agree that the phenomenon of components emerging at similar token counts across model scales deserves more exploration. A deeper look into this phenomenon seemed beyond the scope of our paper, but we hope it will be the subject of future work. We hypothesize that the formation of the other head types may occur in a phase change similar to the induction head phase change observed in Olsson et al. (2022), though model retraining for the purpose of obtaining more granular checkpoints would be needed to look in further detail at what is happening as these heads develop. This by itself would not explain why this happens, however, and we hope to see further investigations with additional approaches.
> 3. Section 4 investigates only the IOI task, which raises concerns regarding the generalizability of its results.
It is true that running a similar analysis on a broader set of circuits would strengthen our claim of algorithmic consistency. Unfortunately, few circuits with a clear and quantifiable algorithm beyond IOI have been identified to-date; even quantifying the relatively well-characterized Greater-Than is challenging. For this reason, we used IOI as the subject of our analysis. We do not claim that the property of generalizability over time and scale will apply to all circuits, especially more complex ones, but we hope this is investigated further.
## Questions
>1. The finding that models of varied sizes learn a task after a similar number of tokens is intriguing and somewhat counterintuitive. This makes me wonder if analyzing how gradient updates modify model weights and comparing these changes across models could provide a better understanding of how language models learn (potentially in future works).
We have also hypothesized that there may well be a connection between the number of gradient updates and the magnitude of changes to model weights required for models to learn a task. This isn’t quite in the scope of the current work, but we agree this is an interesting question.
>2. Section 3.2 states that to validate the importance of four mentioned types of attention heads, a circuit is discovered for each model at each checkpoint. I’m unsure of what a circuit discovery algorithm will discover for early checkpoints where the model does not even have behavioral ability to perform the task. So, it’s surprising to me that authors were still able to identify circuits and functionality of their components which is consistent with existing discovered circuits in the literature. I would like to see evaluation results of these circuits.
This is true—we run circuit discovery at all checkpoints, but at early checkpoints, there’s little model behavior to localize. The circuits discovered before model behavior emerges are not very meaningful; though they attain 80% of the model’s original performance, that performance is near 0. These pre-performance “circuits” consist of fairly stochastically-changing, hard-to-interpret sets of components that contribute only to the extremely small positive and negative variances in the task performance metrics. Due to this, we don’t base our conclusions on these early circuits; we will stress this in the text.
> 3. Section 2.2 mentions that there is no definitive method for verifying the entirety of the identified circuit. This needs more explanation, particularly regarding why metrics like completeness proposed in [4] are considered inadequate for this purpose.
Wang et al. (2023) indeed propose a completeness metric, which involves comparing model and circuit performance while ablating a set of components from both. However, this has flaws. For one, it’s challenging to choose components; Wang et al. propose three methods for doing this, which yield different results, and are not all compatible with our needs. More importantly, these ablations are computationally expensive, and not feasible to perform over many checkpoints. As a result, this metric has not been widely adopted, and we find most of the proposed alternatives (e.g. faithfulness of the circuit's complement) inadequate. We can add some discussion of this issue.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I believe including some of these points in the manuscript will improve its clarity. I will be keeping my original score.
---
Reply to Comment 1.1.1:
Comment: Our pleasure. What would we need to do over the next two weeks to strengthen the paper enough to raise your score? | Summary: This paper presents a set of analyses on the dynamics with which internal language models’ mechanisms emerge and change during training. The four mechanisms studied are internal circuits that the model implements to carry out four simple tasks: indirect object identification (IOI), gendered pronoun, greater-than, and subject-verb agreement. These circuits have been identified by previous work and shown to be implemented by a set of specialized attention heads that display an interpretable behavior (e.g., induction heads, which attend to a previous occurrence of a substring that is repeated in the input).
The objects of the study are 7 models of the Pythia family, with sizes ranging from 70M to 12B parameters. The authors identify the components that implement each circuit using a technique called edge attribution patching with integrated gradients. In the first analysis, the authors show that the ability of the models to carry out the four tasks emerges roughly at the same point during training (in terms of # of tokens seen by the model). At the same point during training, a subset of the models’ attention heads are observed to start implementing the specialized behaviors used to implement the tasks studied, leading to the conclusion that these specialized heads are responsible for the emergence of the model performance.
In a second analysis, the authors study the dynamics of how different sets of heads in Pythia 160M display the specialized behaviors relevant to the four tasks considered. One observation is that the functional behavior of some heads decreases over time. This is surprising, as this decrease is not reflected in the model’s performance. Copy suppression might be a possible explanation for this phenomenon.
Additionally, the paper includes an analysis of the stability of the algorithm implemented by the Pythia models on IOI, suggesting that the algorithm does not undergo significant changes after emerging during training.
Finally, the paper analyzes the changes in the actual components constituting the four circuits in Pythia 70M-2.8B.
Strengths: - The analysis of circuit dynamics during language model pre-training is novel.
- The results about the emergence (and disappearance) of specialized heads, and about the models’ algorithmic stability during training are informative and are likely to be appreciated by the (mechanistic) interpretability community.
Weaknesses: - Parts of the paper might benefit from additional details and clarification (see questions 1.1-1.3).
- The conclusions to be drawn from some of the results are not completely clear (see question 2).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Section 4.2, the authors measure, through path patching, the effect of a node in the computational graph in Figure 4A, on the model’s accuracy when performing IOI.
1. As a reader, I would appreciate some additional details about the experimental procedure here: what are the exact heads that are being ablated? How are they selected for each model (e.g., the 5 heads with the highest CPSA score)?
1. It is not clear to me why this effect is termed “direct.” As illustrated in Appendix B, path patching involves intervening on a node H (e.g., S-Inhibition heads), measuring the change produced by the intervention on a second node R that depends on H (e.g., NMH), and finally measuring how the change in R affects an outcome variable Y (in this case, the model’s output). Through this procedure, what is being measured is the *indirect* effect of H on Y that is mediated by R.
1. “For each step, our metric measures this direct effect, divided by the sum of the direct effects of ablating each edge with the same endpoint” (lines 254-255). I am confused about the denominator in this operation: what are the effects that are being summed here? Also, do the results look similar when unnormalized (i.e., when measuring the absolute effect)?
1. I am not sure whether there’s any clear conclusion that we can draw from the results presented in Section 5. It is true that the EWMA-JS for Pythia 70M seems to have a higher variance, but besides this, I struggle to see a clear trend in the measurements for the other models.
1. If you quantified the cumulative score for each type of head (e.g., induction score) over time for the top k heads, would it be constant after the point during training at which the model learns the task? This would confirm your hypothesis that specialized heads are replaced by other heads as they lose their functional behavior during training (although my guess would be that such cumulative scores might be decreasing over time).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations seem to be adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and helpful suggestions! We respond to your questions (which contain your stated weaknesses), below.
> 1.1 As a reader, I would appreciate some additional details about the experimental procedure here: what are the exact heads that are being ablated? How are they selected for each model (e.g., the 5 heads with the highest CPSA score)?
Here are additional details about our algorithmic consistency experiment, which we will add to the appendix. At each checkpoint, heads were selected for ablation as follows:
1. We ablated all heads upstream of a target or set of targets (e.g., the final logits or the set of name-mover heads) one at a time via path patching to determine their effect on the logit difference metric (either directly or through an intermediate node); this mirrors Wang et al. (2023).
2. We tested each of these heads for their target function (e.g., copy score for name-mover heads). We tested the set of heads that both A. had a component function score over a specific threshold (10%) and B. had a negative effect on logit difference when patched. For S2-inhibition heads, we tested whether ablating positional signal while keeping the token signal A. reduced logit difference, B. reduced NMH attention to the IO, and C. increased NMH attention to S1. See appendix D for the component tests for induction heads and duplicate token heads. All of these tests also come from Wang et al. (2023).
> 1.2 It is not clear to me why this effect is termed “direct.” As illustrated in Appendix B, path patching involves intervening on a node H (e.g., S-Inhibition heads), measuring the change produced by the intervention on a second node R that depends on H (e.g., NMH), and finally how the change in R an outcome variable Y (in this case, the model’s output). Through this procedure, what is being measured is the indirect effect of H on Y that is mediated by R.
You are right to question our use of the term direct in this paragraph. Upon re-reading this, we see that we inadvertently overloaded the term and should not have used “direct” here. The procedure is as you described.
> 1.3 “For each step, our metric measures this direct effect, divided by the sum of the direct effects of ablating each edge with the same endpoint” (lines 254-255). I am confused about the denominator in this operation: what are the effects that are being summed here? Also, do the results look similar when unnormalized (i.e., when measuring the absolute effect)?
Taking the notation you used in the previous question, if we take the numerator as the effect of ablating heads H (e.g., all S-Inhibition heads) on Y through intermediate heads R (e.g., NMHs), the denominator is the effect on Y of ablating heads G (all heads upstream of R, which include H as a subset) through intermediate heads R.
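A minimal numeric sketch may make this normalization concrete; the head names and effect values here are invented for illustration, not measured quantities.

```python
# Hypothetical patching effects on Y, each mediated through the
# intermediate heads R; keys and values are invented for illustration.
effect_through_R = {
    "s_inhibition_1": -0.30,  # in H, the heads being measured
    "s_inhibition_2": -0.25,  # in H
    "other_head_a": -0.05,    # upstream of R but outside H
    "other_head_b": -0.02,    # upstream of R but outside H
}
H = {"s_inhibition_1", "s_inhibition_2"}

# Numerator: effect of ablating only the heads H, through R.
numerator = sum(v for h, v in effect_through_R.items() if h in H)
# Denominator: effect of ablating all upstream heads G (H included), through R.
denominator = sum(effect_through_R.values())

normalized_effect = numerator / denominator
```

Here H accounts for most of the mediated effect, so the normalized score is close to 1; the normalization expresses H's contribution as a fraction of everything flowing through R.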
> 2. I am not sure whether there’s any clear conclusion that we can draw from the results presented in Section 5. It is true that the EWMA-JS for Pythia 70M seems to have a higher variance, but besides this, I struggle to see a clear trend in the measurements for the other models.
We apologize for the lack of clarity here and agree that this section might have been better combined with Section 4. We aimed to quantify the shifting set of nodes that comprised each task circuit, and to illustrate that these shifts are not model-dependent. In most cases, shifts are gradual—Jaccard similarity is often between 0.6 and 0.9—but nevertheless real. We have added plots in our response PDF (Figures 1 and 2) that illustrate similarity to the final circuit. These show that gradual changes result in a significant shift between the circuits at the beginning and end of training, despite our algorithmic consistency results.
> 3. If you quantified the cumulative score for each type of head (e.g., induction score) over time for the top k heads, would it be constant after the point during training at which the model learns the task? This would confirm your hypothesis for which specialized heads are replaced by other heads as they lose their functional behavior during training (although my guess would be that such cumulative scores might be decreasing over time).
This is a good suggestion, and one that we incorporated into Figure 3 of our response PDF. The figure now shows the sum of head effects across all in-circuit heads at a given checkpoint. Some trends remain the same as previously: the timestep at which components emerge is still tightly coupled to the emergence of task behavior. However, the degree to which the sum is constant over time varies across components; we believe that this reflects not only variation in the total component effects, but how much sense it makes to sum each component’s score.
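A toy sketch of this summed-score computation follows; the score values and circuit membership below are hypothetical, not the data in the response PDF.

```python
# scores[c][h]: component score of head h at checkpoint c (hypothetical).
scores = [
    [0.0, 0.1, 0.0, 0.0],
    [0.2, 0.6, 0.1, 0.0],
    [0.3, 0.8, 0.4, 0.1],
]
# in_circuit[c][h]: whether head h is in the discovered circuit at c.
in_circuit = [
    [False, False, False, False],
    [True, True, False, False],
    [True, True, True, False],
]

# Zero out non-circuit heads, sum over heads per checkpoint, then
# normalize by the maximum summed score across checkpoints.
sums = [
    sum(s for s, keep in zip(row, mask) if keep)
    for row, mask in zip(scores, in_circuit)
]
peak = max(sums)
normalized = [s / peak for s in sums]
```

Summing only over in-circuit heads is what keeps a low-but-nonzero score on an irrelevant head from masking the emergence point of the component.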
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response. I appreciate the details provided and the additional analyses.
> However, the degree to which the sum is constant over time varies across components; we believe that this reflects not only variation in the total component effects, but how much sense it makes to sum each component’s score.
What can we conclude from these observations? Do you believe that copy suppression is the explanation for the decrease in the cumulative functional score not followed by a decrease in the model's performance on the tasks (Fig. 1), or are there other possible explanations for this phenomenon?
---
Reply to Comment 1.1.1:
Comment: This seems like a potentially interesting question, but we’re not sure we’ve understood it correctly! Just to clarify: the copy suppression and name mover heads are (in Pythia models) the two families of components with direct effects on IOI task performance. We see in Rebuttal Fig. 1 that while summed copy suppression scores exhibit a slight downward trend toward the end of training, and Pythia-160m’s summed name mover head score decreases near the end of training, IOI performance does not decrease (Main paper Fig. 1).
Why might this occur? While in GPT-2 small (studied by Wang et al. (2023)), the name-mover heads were the most important part of the IOI circuit, in Pythia models, the copy suppression heads are much more significant. As only small changes occurred in the more important component in the circuit (the CS heads) in Pythia, model behavior doesn’t change very much.
Perhaps more importantly, while our copy suppression head metric, for example, verifies that a head performs copy suppression in general, this metric does not try to quantify the strength of copy suppression on the given task. It's plausible that these heads could perform differently on different data distributions. Note that this is by design—our metrics measure behaviors regardless of task relevance. If we want to measure task relevance, it might be better to combine head-level importance scores with the component metrics to ask the question, e.g. “What is the total effect of copy suppression heads in my circuit, weighted by how much they behave like copy suppression heads?” We already ensure some task relevance by summing scores only on in-circuit nodes, but this new weighted metric might better capture the quantity you’re discussing, and would be valuable.
The big conclusion for us was how much clearer the emergence points of these heads became; it’s now much more visible that their emergence directly precedes task performance. In our earlier plots, we showed the emergence of one individual successor / induction / etc. head per model, but this was flawed; in the early stages of component emergence, several competing heads often developed at once. As a result, it was hard to see component emergence when plotting just one head. By considering the emergence across all heads in the circuit, we were able to see the emergence of these heads overall, even before any particular head emerged. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their thoughtful responses! We are glad to hear that reviewers feel that our paper:
- Presents novel and interesting insights (dE5s, v6cM, wt3i)
- Extends beyond prior work by studying models across scales (865w, wt3i, v6cM)
- Is valuable to the mechanistic interpretability community (dE5s, 865w)
However, the reviewers also shared some critiques in common, which we would like to address here:
- **This work studies only a small number of simple tasks (865w, wt3i, v6cM)**: This is true, and due in part to our desire to study models of varying sizes: small models often fail on complex tasks, so studying their behavior on such tasks is not insightful. Moreover, there are few complex tasks for which circuits have been found. Finally, because we already study models across two axes (scale and time), a third axis (tasks) would have been too computationally expensive. However, we agree that this aspect is important, and hope to investigate it in future work.
- **Potential flaws with our analysis of model components in section 3.2** (dE5s, 865w, v6cM): Reviewer dE5s notes that our analysis is limited by the fact that we consider only individual components, rather than the sum of their effects, while reviewer 865w notes that we analyze components without regard to whether they are in the circuit. These criticisms have merit, so we re-analyzed our data via the following procedure: Instead of plotting individual heads, we sum scores across heads for each checkpoint, and normalize, dividing by the maximum sum across checkpoints. We set the score of any head not in the circuit to 0. We plot this in Figure 1, using the same x-axis as our behavioral plots as requested by reviewer v6cM. We find that the point at which components develop in this plot more closely tracks the emergence of task behavior than it did in our previous plot. However, trends in the sum across epochs are variable across tasks: while most tasks’ sums rise over training, induction score changes wildly. We attribute this to not only actual variation in the number of heads acting as a component, but also measurement issues: by taking the sum across all in-circuit heads, we might include many heads that have a low but non-zero component score, obscuring trends in heads that actually perform that component ability.
- **Requested clarifications about our algorithmic stability analysis in section 4.2 (dE5s, v6cM)**: Reviewers dE5s and v6cM requested details about our algorithmic stability experiments in Section 4.2; specifically, they asked how we selected heads for ablation, and whether these were part of the circuit (and thus relevant to model behavior). We would like to clarify that we selected heads based on two criteria: first, we selected heads based on their component scores as in Section 3.2. Second, we selected heads that had a large effect when targeted using the path-patching causal intervention; this path-patching effect is roughly the quantity estimated by EAP-IG when we find circuits. Thus, we actually use a more accurate method of finding important components in Section 4.2. For more details on this procedure, see our responses to dE5s and v6cM.
- **Questions about circuit similarity in section 5 (dE5s, wt3i, v6cM)**: Reviewer dE5s asked about the interpretation of Figure 5, and reviewer wt3i remarked that it was difficult to see if circuits changed significantly versus only the previous checkpoint, as we measured weighted similarity with all previous checkpoints. Reviewer v6cM also suggested measuring the Jaccard similarity of circuit components weighted by importance.
In our PDF response, we provide two plots: one with the given checkpoint’s circuit’s unweighted node similarity to the final circuit (Figure 2), and one with its weighted edge similarity to the final circuit; note that we cannot perform weighted node similarity, as EAP-IG only yields edge weights (Figure 3). Both plots indicate that while inter-checkpoint similarity to the final circuit varies, circuits eventually become more and more similar to the final checkpoint. We also add that our implementation of EWMA does result in scores that are mostly influenced by nearby checkpoints, as the weighting of JS to distant checkpoints drops off rapidly. As for the intention of these plots: our objective was to quantify the level of change that occurs in the constituents of these circuits even as performance largely remains stable beyond a certain point.
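As a minimal sketch of the weighted similarity idea (a generic illustration with hypothetical edge weights, not our actual EAP-IG pipeline): weighted Jaccard similarity between two circuits, each represented as an {edge: importance} dictionary, is the sum of element-wise minima over the sum of element-wise maxima.

```python
def weighted_jaccard(a, b):
    """Weighted Jaccard similarity of two circuits given as
    {edge: importance} dictionaries (importances assumed non-negative)."""
    edges = set(a) | set(b)
    num = sum(min(a.get(e, 0.0), b.get(e, 0.0)) for e in edges)
    den = sum(max(a.get(e, 0.0), b.get(e, 0.0)) for e in edges)
    return num / den if den > 0 else 1.0

# Two hypothetical checkpoint circuits with edge-importance weights.
ckpt = {("h0", "h1"): 0.8, ("h1", "out"): 0.5}
final = {("h0", "h1"): 0.6, ("h2", "out"): 0.4}
sim = weighted_jaccard(ckpt, final)
```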
Pdf: /pdf/a36bb2ead2fa86bcf6d5a7fe31bc35dded8b2e78.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Expressive Gaussian Human Avatars from Monocular RGB Video | Accept (poster) | Summary: The paper focuses on improving the expressiveness of digital human avatars, particularly through detailed hand and facial expressions, learned from monocular RGB video. The main contributions are:
- SMPL-X Alignment improves the alignment of the SMPL-X model with RGB frames and aids in accurately recovering avatars.
- Context-Aware Adaptive Density Control Strategy adjusts gradient thresholds to handle the varied granularity across different body parts, enhancing the expressiveness of the avatars.
- Feedback Mechanism that predicts per-pixel confidence to better guide the learning and optimization of 3D Gaussians.
Overall, the paper presents a comprehensive framework that enhances the realism and expressiveness of digital human representations, validated by substantial quantitative and qualitative experiments.
Strengths: The paper is written in a clear and understandable manner. Comprehensive ablation studies and clear visualizations help to demonstrate the effectiveness of different components of the method. The paper also compares the new method with previous methods and shows that it achieves better performance. It is commendable that the authors added SMPL-X to existing methods for fair and thorough comparisons.
Weaknesses: - One of the contributions, using and optimizing SMPL-X for expressive avatars, is not novel. This has been done in previous works [1].
- The paper lacks a comparison to recent methods such as Splatting Avatar [2]. Including this comparison in Table 1 can help to distinguish the improvement made by the proposed approach.
- There is an inconsistent ablation study across the datasets. Table 2 should illustrate how the alignment affects the baseline method by comparing scenarios with and without CADC, CL, and alignment. Similarly, Table 3 should also show the method with CADC and CL included.
- PSNR, SSIM, and LPIPS should be calculated from four novel views to ensure the robustness of the method across different perspectives. Additionally, animation and novel pose metrics can be included to display differences between methods.
- From Table 2, it seems that CL does not have a significant impact. This should be addressed or clarified to understand its role in the framework. It would be good to provide more qualitative visualization to demonstrate its effectiveness.
[1] (Liu et al., 2024) GVA: Reconstructing Vivid 3D Gaussian Avatars from Monocular Videos.
[2] (Shao et al., 2024) SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting.
Technical Quality: 3
Clarity: 3
Questions for Authors: - **Project Page for Visualizations:** Is there a project page available that shows visualizations of the rendered method, such as a 360-degree rotating video of the generated person and animation to better illustrate the quality? Specifically, could visualizations zoom into detailed areas such as the hands and face?
- Are e and λt in Equation 14 learned or fixed? If learned, it would be advantageous to list the values to demonstrate that the learned parameters effectively handle different resolution details for different body parts.
- Discussing failure cases and outlining potential future work would provide a more comprehensive view of the method's limitations and areas for improvement.
- Information on the optimization time compared to previous methods and the resource requirements for running the proposed method would be valuable for assessing its efficiency and scalability.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While the use of CADC is interesting, the other two contributions are of limited novelty. SMPL-X usage and alignment have been incorporated by previous works. The improvement of CL is not significant. The addition of project page for visualizations would be helpful to demonstrate the improvements made by the methods. In addition, more quantitative results should be added to Tables 1, 2 and 3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: One of the contributions of using and optimizing SMPL-X for expressive avatars is not novel. This has been done in previous works GVA.**
A1: GVA is a concurrent work and has not been accepted. Compared with GVA, our fitting design is quite different, since ours explicitly focuses on the fine-grained areas (especially the hands) to meet the requirement of expressiveness. Since GVA is not open-sourced, we are not able to compare with it. However, we have shown visually that our fitting method outperforms the current SOTA SMPL-X estimation method in both Figure 4 (main paper) and Figure 1 (rebuttal pdf).
**Q2: More comparison with recent methods such as SplattingAvatar.**
A2: Thanks for this suggestion. The results of SplattingAvatar are shown in A1 of General Response. We will add these experiment results in Table 1.
**Q3: Clarification on inconsistent ablation study across the datasets.**
A3: Table 2 demonstrates the ablation results on the XHumans dataset. For XHumans, we directly utilize its provided accurate SMPL-X annotations; that is to say, the SMPL-X fitting is not needed there. For Table 3, we further update the ablation results below.
| **Method** | **PSNR (Full)** | **SSIM (Full)** | **LPIPS (Full)** | **PSNR (Hand)** | **SSIM (Hand)** | **LPIPS (Hand)** | **PSNR (Face)** | **SSIM (Face)** | **LPIPS (Face)** |
|-------------------|--------------------|--------------------|-----------------------|--------------------|--------------------|-----------------------|--------------------|--------------------|-----------------------|
| w/o Align | 25.02 | 0.9435 | 73.82 | 24.64 | 0.9396 | 63.64 | 24.27 | 0.9009 | 93.56 |
| w/o CADC | 26.53 | 0.9504 | 68.76 | 26.43 | 0.9494 | 50.59 | 26.45 | 0.9265 | 71.17 |
| w/o CL | 26.74 | 0.9507 | 67.90 | 26.74 | 0.9507 | 48.54 | 26.69 | 0.9293 | 69.60 |
| EVA | 26.72 | 0.9519 | 65.37 | 26.90 | 0.9523 | 46.11 | 26.75 | 0.9298 | 66.41 |
**Q4: PSNR, SSIM, and LPIPS should be calculated from four novel views to ensure the robustness of the method across different perspectives. Additionally, animation and novel pose metrics can be included to display differences between methods.**
A4: We calculate PSNR, SSIM and LPIPS from different viewpoints. To the best of our knowledge, no animation or novel-pose metrics are reported in relevant works (GART, GauHuman, SplattingAvatar, etc.).
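For reference, the per-view PSNR follows the standard definition and is simply averaged over views; a minimal sketch, assuming images normalized to [0, 1] (the view list below is a hypothetical placeholder):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Standard peak signal-to-noise ratio for images scaled to [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical (prediction, ground-truth) pairs rendered from different viewpoints.
views = [(np.full((4, 4), 0.5), np.full((4, 4), 0.6))]
mean_psnr = np.mean([psnr(p, g) for p, g in views])
```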
**Q5: Clarification on CL role in the framework.**
A5: We observed that CL helped with perceptually improving the results, which is also reflected in the consistent performance improvement on the LPIPS metrics. Moreover, another benefit of CL is that it led to a more compact representation (e.g., reducing the number of Gaussians from 21,038 to 19,993). We further present some qualitative results (w/o and w confidence-aware loss) in Figure 3 of the rebuttal pdf.
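As a rough illustration of how such a confidence-aware reconstruction loss can work in general (a generic sketch of the idea, not the paper's exact formulation), the per-pixel error can be reweighted by the predicted confidence, with a log penalty that keeps the confidence from collapsing to zero:

```python
import torch

def confidence_weighted_l1(pred, gt, conf, reg_weight=0.1):
    """Generic confidence-weighted L1: down-weight unreliable pixels,
    penalize low confidence so the trivial solution conf -> 0 is avoided.
    This is an illustrative formulation, not the paper's exact loss."""
    conf = conf.clamp(1e-6, 1.0)
    data_term = (conf * (pred - gt).abs()).mean()
    reg_term = -reg_weight * torch.log(conf).mean()
    return data_term + reg_term
```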
**Q6: Project page for visualization including a 360-degree rotating video, animation, and zooming.**
A6: Due to the NeurIPS policy ("all the texts you post (rebuttal, discussion and PDF) should not contain any links to external pages"), we are unable to provide the external link to our built project page. In fact, we have included 360-degree rotating video (Supp video) and animation (Supp video) and zooming results (both Supp video and figures in main paper). We promise to release the project page afterward.
**Q7: Are e and λt in Equation 14 learned or fixed?**
A7: Both e and λt are fixed.
**Q8: Discussion on failure cases and outlining potential future work**.
A8: Thanks for pointing out this issue. For the fine-grained expressive areas, failure cases occur when hand interactions exist, mainly caused by an incorrect driving SMPL-X signal (e.g. the driving SMPL-X has implausible interpenetration between the hands). From a holistic view, some 'floaters' may sometimes appear, which is a common issue in general 3DGS modeling.
We outline potential future work as follows:
1) Modeling capability for non-rigid elements, such as loose clothing (e.g. dresses). Modeling clothing on the avatar has been a challenging topic, even studied separately in several works. A potential solution could be parameterizing the cloth deformation or adding more priors on cloth type, so as to provide more driving signals.
2) Generalizable human avatars from monocular RGB video. Current methods in this area mostly need per-subject optimization and must be re-trained for any new subject. It is worth exploring whether we could obtain an avatar from a monocular RGB video of any given subject with a single feed-forward pass.
3) Robustness. It is worth exploring whether we could build a high-quality human avatar from more limited inputs, e.g. a few images (even with mutual occlusion) or a single image.
**Q9: Information on the optimization time compared to previous methods and the resource requirements.**
A9: For the optimization time, we generally divide it into two stages. Taking a real-world 1080p video as an example, the first stage (data preprocessing) needs 10.2 minutes, while the second stage (avatar modeling) needs 7.5 minutes on an RTX A5000. Compared with the avatar modeling stage of EVA, 3DGS+SMPLX needs 7.5 min, GART+SMPLX needs 12.9 min, GauHuman+SMPLX needs 4.5 min, and SplattingAvatar needs 45.2 min. The resource requirement could be further lowered to consumer GPUs such as an RTX 3090, at the expense of speed.
---
Rebuttal 2:
Comment: Thank you for the detailed responses. I also recommend including the fixed values of e and λt in the paper, along with a justification for how these values were determined. Given the overall quality of this submission, I will recommend the acceptance of this paper. Please include the additional experimental results in your revision.
---
Rebuttal Comment 2.1:
Comment: Thank you for recognizing our efforts in addressing your concerns. We are pleased that our responses have been helpful and will incorporate the additional experimental results and responses into the main paper, as per your suggestions.
We are always open to further discussion that could help enhance our paper. If there are no further concerns, we would greatly appreciate it if you could kindly consider raising the rating.
Thank you again for your valuable input. | Summary: The paper introduces a 3DGS-based human avatar generation method from a monocular RGB video. Specifically, the authors first optimize the SMPL-X model to better align with the RGB frames. Then, they propose an adaptive adjustment method for different body parts and use per-pixel confidence to guide 3DGS. It is reported that the proposed method achieves state-of-the-art performance. Visual results show fine-grained hand and facial details.
Strengths: 1. Generating the human upper body videos is a critical task. It is more expressive and more challenging than facial generation. This method overall demonstrates fine-grained generation details.
2. The existing SMPL-X model has alignment issues when estimating the human body. The optimized SMPL-X model in this method significantly improves the alignment between SMPL-X and the RGB frames.
3. The paper is overall easy to follow, and the experimental results demonstrate the method's superiority in numerical metrics.
Weaknesses: 1. In the paper, the confidence scores for CL are all learned from the rendered results. How can the effectiveness of this confidence be ensured? In Table 2, the face area without CL also shows better results.
2. [11] also demonstrates photorealistic 3D human body generation, and the authors need to include more discussions with it.
3. The paper has many writing issues, such as a too-brief related work section, inconsistent formula definitions (e.g., $\Pi_K$ in Eq. 11), and some missing punctuation in formulas.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the specific requirements for videos in terms of the method, such as the length of the video and the range of camera angles?
2. How does the method perform when the magnitude of the action being driven is out-of-distribution?
3. What is the inference speed of the method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: This paper has involved discussions about limitations and broader impact.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: How can the effectiveness of this confidence be ensured?**
A1: The predictor could leverage the information contained in rendered RGB and depth to adaptively learn the confidence in an end-to-end data-driven manner. The experimental results have validated the effectiveness of CL and we also present some visualization in Figure 3 of the rebuttal pdf. Another benefit of CL is to reduce the number of Gaussians from 21,038 to 19,993.
**Q2: GaussianAvatar also demonstrates photorealistic 3D human body generation, and the authors need to include more discussions with it.**
A2: GaussianAvatar is one of the pioneering animatable 3D Gaussian models which is learned from a monocular RGB video. Its representation is further enhanced via two key components for final photorealistic quality. Dynamic properties are designed to support pose-dependent appearance modeling, while joint optimization of motion and appearance helps tackle inaccurate motion estimation. We will add this discussion in related work.
**Q3: Suggestion on revising some writing issues, such as a too-brief related work section, inconsistent formula definitions (e.g., in Eq. 11), and some missing punctuation in formulas.**
A3: Thanks for pointing out these issues. We will discuss more works in the related work section, e.g. GaussianAvatar, fix the inconsistent formula definitions, and add the missing commas in Eq. 7, 8, 9, 10 and 13. We will incorporate these modifications in the revised manuscript.
**Q4: Specific requirements for videos in terms of the method.**
A4: Since our method is reconstruction-based, our main requirement is that the areas of interest should be captured in the video. For example, for the full-body X_Human dataset, the monocular RGB video includes observations around the person of interest. On the other hand, for the upper-body UPB dataset, since the area of interest for sign language is only on the upper body, we only need to capture the corresponding appearance of the person. We note that we keep the input RGB frames of all comparison methods consistent for fair comparison.
**Q5: How does the method perform when the magnitude of the action being driven is out-of-distribution?**
A5: It is hard to quantify the magnitude of out-of-distribution actions in a principled way (although if the reviewer would like to indicate a specific metric/evaluation, we would be happy to explore it). That being said, on the UPB benchmark for instance, our evaluation is conducted on novel poses that are not used during training, so it indicates performance in the novel-pose setting. Moreover, qualitatively, we demonstrate the results of our avatar driven by an in-the-wild SMPL-X sequence with unseen poses and unseen identities. Inspecting the results, the expressive details are quite faithful. However, if the motion used at test time is completely different, we will likely observe artifacts in the reconstruction.
**Q6: Inference speed of the method.**
A6: Our method could render the 1080p image with the inference speed of 361.02 fps on one RTX A5000. | Summary: The paper presents EVA, a model that can recover expressive human avatars from monocular RGB videos. The primary focus is enhancing fine-grained hand and facial expressions using 3D Gaussian Splatting and the SMPL-X model. EVA introduces three key contributions: a plug-and-play module for better SMPL-X alignment, a context-aware adaptive density control strategy, and a feedback mechanism for optimizing 3D Gaussian learning. Extensive experiments on two benchmarks demonstrate EVA's superiority in capturing detailed expressiveness compared to previous methods.
Strengths: The paper introduces three significant innovations: a plug-and-play alignment module, a context-aware adaptive density control strategy, and a feedback mechanism for 3D Gaussian optimization.
2. Extensive quantitative and qualitative evaluations on two benchmarks show EVA's effectiveness in capturing fine-grained details, particularly in hand and facial expressions.
3. The proposed method addresses the challenging task of creating expressive avatars from monocular RGB video, which has practical applications in VR/AR, movie production, and video games.
4. The paper comprehensively explains the technical approach, including SMPL-X alignment, Gaussian optimization, and adaptive density control.
Weaknesses: 1. The proposed method's complexity might pose implementation challenges for those not well-versed in the field. Simplifying some aspects or providing more intuitive explanations could help.
2. While the method shows superior performance on the provided datasets, the generalizability to other types of monocular videos or environments is not thoroughly discussed.
3. Although the paper compares EVA with previous SOTA methods, it would be beneficial to include more diverse baselines or variants to demonstrate the robustness of the proposed approach.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How does EVA perform with videos that have significant occlusions or low lighting conditions?
2. Are there any specific preprocessing steps required for the input monocular RGB videos?
3. Can the proposed method be extended to capture other expressive elements, such as cloth or hair movements?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: 1. The performance heavily relies on the quality of the datasets used. Real-world applications might present challenges not covered by the current benchmarks.
2. The method requires significant computational power, which might limit its applicability in real-time scenarios or on devices with limited resources.
3. While the method focuses on hand and facial details, other expressive elements like body language, clothing, and hair dynamics are not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Suggestion on simplifying some aspects or providing more intuitive explanations.**
A1: Thanks for this suggestion. We commit to releasing our code upon acceptance. Our work makes a step towards building expressive avatars from real-world videos. In short, our method aims to answer the two key questions below:
1) How to generate a well-aligned SMPLX mesh to provide accurate correspondence for expressive avatar modeling? We propose a fitting-based alignment method explicitly focusing on fine-grained details (e.g. hand).
2) How to enhance the expressiveness during avatar modeling? We propose context-aware adaptive density control, in coordination with the feedback mechanism.
**Q2: More discussion on the generalizability to other types of monocular videos or environments.**
A2: Currently, our work includes various types of monocular RGB videos, from controlled capture settings to videos collected from the Internet. The involved human poses contain large body motions (e.g., playing basketball, weight lifting, kicking, dancing), along with fine-grained motions (e.g. finger counting, using tools, sign language). Among them, considering the complexity and practicability of sign language, we include it separately to evaluate expressiveness and collect a new benchmark (UPB) from the Internet. We acknowledge that more types of videos and environments could be further explored (such as people occluded by other objects, or changing lighting conditions) and will add this discussion as promising future work.
**Q3: Suggestion on including more diverse baselines or variants to demonstrate the robustness of the proposed approach.**
A3: Thanks for your suggestion! We have used the Splatting Avatar method from CVPR 24 and evaluated it on the datasets we used. The results are shown in A1 of General Response. We will add these experiment results in Table 1.
**Q4: How does EVA perform with videos that have significant occlusions or low lighting conditions?**
A4: EVA can deal well with self-occlusions, which usually exist in each frame, especially on the hand areas. This can be attributed to our proposed SMPL-X fitting method which could provide reliable correspondence to map the pixels to the canonical space. Both qualitative and quantitative results demonstrate the effectiveness of our method. Besides, we expect that EVA could also handle low lighting conditions (as long as the lighting is not changing significantly), since our SMPL-X fitting method should be robust to low lighting conditions and with good SMPL-X estimates, we expect the Gaussian Splatting stage to perform well.
**Q5: Are there any specific preprocessing steps required for the input monocular RGB videos?**
A5: Since our method is reconstruction-based, our main requirement is that the areas of interest should be captured in the video. For example, for the full-body X_Human dataset, the monocular RGB video includes observations around the person of interest. On the other hand, for the upper-body UPB dataset, since the area of interest for sign language is only on the upper body, we only need to capture the corresponding appearance of the person. We note that we keep the input RGB frames of all comparison methods consistent for fair comparison.
**Q6: Can the proposed method be extended to capture other expressive elements, such as cloth or hair movements?**
A6: We believe our work could be extended to capture other expressive elements. One possible solution is to embed the driving signal with more modeling capabilities considering the other expressive elements that we need to incorporate. In this way, our method could benefit from explicit signals to perform modeling. However, this is beyond the scope of our current work.
**Q7: The performance heavily relies on the quality of the datasets used. Real-world applications might present challenges not covered by the current benchmarks.**
A7: The high-quality dataset is needed to ensure sufficient resolution on fine-grained expressive areas for training and evaluation. With the popularization of low-cost consumer cameras, high-quality data (e.g. 1080p) becomes much easier to capture, even via personal phones. With that being said, our proposed SMPLX fitting simplifies and robustifies the whole avatar generation pipeline, such that it works with sufficient robustness in Internet videos (e.g. see the UPB videos that come from YouTube).
**Q8: The method requires significant computational power, which might limit its applicability in real-time scenarios or on devices with limited resources.**
A8: We only need one GPU for both training and testing. Once our model is trained, it is efficient to run, which could render the 1080p image at 361.02 fps on one RTX A5000.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: Thanks for the detailed responses. Given the overall quality of this submission, I will recommend the acceptance of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for recognizing our efforts in addressing your concerns. We feel very grateful that you will recommend our work for acceptance. We will revise the manuscript as you suggested. | Summary: This paper proposes a solution to generate expressive human avatars from monocular RGB videos. The main focus is to improve the expressiveness. To this end, a few ideas such as combining 3d Gaussian splatting and SMPL-X (SMPL+parametric hands), minimizing 2d reprojection error, finegrained density control, etc.
There are a few similar papers in citation. To better understand the novelty of this paper, here's a high level comparison:
+ SMPLer-X [1]: Representation is SMPL-X only. Input is monocular RGB videos. Losses: SMPL-X keypoint losses.
+ GART [23]: Representation is 3D Gaussians + SMPL. Input is monocular RGB videos. Losses: photometric, perceptual.
+ 3DGS-Avatar [33]: Representation is 3D Gaussians + SMPL. Input is monocular RGB videos. Losses: photometric, perceptual, mask, etc.
+ This paper: Representation is 3D Gaussians + LBS + SMPL-X. Input is monocular RGB videos. Losses: SMPL-X keypoint losses, perceptual.
Strengths: From the high level comparison, the paper does find a unique and timely combination of 3d Gaussians and SMPL-X, which is well-motivated by improving the expressiveness. Some of the ideas such as adaptive density control are therefore new as it leverages the fine-grained details of SMPL-X.
Weaknesses: The concept of model-image alignment by minimizing reprojection losses is not new. For example, the keypoint losses (Sec. 3.2) are very common in the literature [1, 30].
Similarly mask loss and perceptual loss can be found in [33].
The adaptive density control part seems novel, as it's initialized from body parts and leveraging the fine-grained topology of SMPL-X.
The confidence-aware loss seems new but only provides marginal improvement as in Table 2.
Lack of hyperparameter ablation. Not even in supplementary.
Minor issue: `video` is countable. Please explicitly use `videos` or `a video` in the paper. Actually it's not clear how many videos of one subject are used to train one avatar.
Technical Quality: 3
Clarity: 2
Questions for Authors: What's 'E' in Eq (16)? It looks like the so-called feedback module. But what's the design? Is it a learning-based predictor?
Please also address the novelty questions in the weakness part.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations are discussed. It'd be good to discuss the failure cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Clarification on reprojection losses.**
A1: We want to clarify that the reprojection losses (keypoint, mask, perceptual loss) are not claimed as our contributions. Instead, we include them in our paper to describe the necessary components of our framework and make sure our paper is self-contained.
**Q2: Clarification on confidence-aware losses.**
A2: We observed that the confidence-aware loss helped with perceptually improving the results, which is also reflected in the consistent performance improvement on the LPIPS metrics. Moreover, another benefit of using the confidence-aware loss is that it led to a more compact representation (e.g., reducing the number of Gaussians from 21,038 to 19,993). We also present some qualitative results (w/o and w/ the confidence-aware loss) in Figure 3 of the rebuttal pdf.
**Q3: Ablation on hyperparameter.**
A3: We observed that one of the most crucial hyperparameters was the value $\lambda_{t}$ in the context-aware adaptive density control. We perform an ablation on this hyperparameter.
| **$\lambda_{t}$** | **PSNR (Full)** | **SSIM (Full)** | **LPIPS (Full)** | **PSNR (Hand)** | **SSIM (Hand)** | **LPIPS (Hand)** | **PSNR (Face)** | **SSIM (Face)** | **LPIPS (Face)** |
|-------------------|--------------------|--------------------|-----------------------|--------------------|--------------------|-----------------------|--------------------|--------------------|-----------------------|
| 0.0 | 28.92 | 0.9611 | 35.14 | 25.23 | 0.9175 | 84.00 | 26.08 | 0.9080 | 80.76 |
| -6.0 | 29.28 | 0.9619 | 33.71 | 25.69 | 0.9221 | 77.81 | 26.33 | 0.9114 | 74.21 |
| **-9.0** | **29.66** | **0.9632** | **33.05** | **26.27** | **0.9279** | **72.95** | **26.56** | **0.9156** | **72.30** |
| -12.0 | 29.29 | 0.9619 | 33.85 | 25.67 | 0.9219 | 78.75 | 26.34 | 0.9113 | 74.29 |
**Q4: Typo on 'video'.**
A4: Thanks for pointing out this issue. We will revise the usage of 'video' in the manuscript. We clarify that only one monocular RGB video of one subject is used to train the corresponding avatar.
**Q5: What's 'E' in Eq (16)? It looks like the so-called feedback module. But what's the design? Is it a learning-based predictor?**
A5: 'E' is the feedback module. It mainly consists of two convolutional layers and is a learning-based predictor. We show simplified code for this module below.
```
import torch
import torch.nn as nn

class E_feedback(nn.Module):
    def __init__(self, num_feat=4):
        super(E_feedback, self).__init__()
        # 1x1 convolutions over the rendered per-pixel features
        self.conv1 = nn.Conv2d(num_feat, 3, 1, 1, 0)
        self.conv2 = nn.Conv2d(num_feat + 3, 1, 1, 1, 0)
        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

    def forward(self, x):
        x1 = self.lrelu(self.conv1(x))
        x2 = self.conv2(torch.cat((x, x1), 1))
        return x2  # per-pixel confidence map
```
**Q6: Suggestion on discussing the failure cases.**
A6: Thanks for pointing out this issue. For the fine-grained expressive areas, failure cases occur when hand interactions exist, mainly caused by incorrect SMPL-X reconstruction (e.g. the estimated SMPL-X model that drives the reconstruction has implausible interpenetration between the hands). Another failure case is the existence of floaters in the reconstruction, which is a common issue of 3DGS modeling. We show some representative examples in Figure 2 of the rebuttal pdf.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. Please include these in the revision.
---
Reply to Comment 1.1.1:
Comment: Thank you for recognizing our efforts in addressing your concerns. We are glad that our responses have been helpful and will incorporate these responses into the revised manuscript as you suggested. We are always open to discussions that could further improve our paper. If there are no additional concerns, we would greatly appreciate it if you could kindly consider raising the rating.
Thank you again for your valuable comments. | Rebuttal 1:
Rebuttal: **General Response**
We sincerely appreciate the reviewers for their insightful and constructive comments. We are encouraged by their positive comments and the recognition of the merits of our work. More specifically, the reviewers have appreciated the great importance and challenges introduced by expressiveness in human avatar modeling (R-1sDc, R-fFxo, and R-g7vJ). Moreover, they have highlighted the novelty (R-fFxo, R-4SGk, and R-g7vJ) and strong performance (R-fFxo, R-1sDc, R-g7vJ, and R-ZKQ6) of the proposed EVA framework for capturing expressiveness. Finally, the reviewers have positively commented on our comprehensive experiments (R-fFxo, R-1sDc and R-ZKQ6), the clear visualizations (R-fFxo, R-ZKQ6, and R-g7vJ), and the well-written and easy-to-follow paper (R-1sDc, R-fFxo, R-g7vJ and R-ZKQ6).
In the following, we first address the common concern, and then the concerns of each reviewer. We will revise the manuscript accordingly.
**Q1: Suggestion on including more comparison methods.**
A1: We have used the Splatting Avatar method from CVPR 24 and evaluated it on the datasets we used. The results are shown below. Our approach outperforms Splatting Avatar.
| **Method** | **Dataset** | **N-GS** | **PSNR (Full)** | **SSIM (Full)** | **LPIPS (Full)** | **PSNR (Hand)** | **SSIM (Hand)** | **LPIPS (Hand)** | **PSNR (Face)** | **SSIM (Face)** | **LPIPS (Face)** |
|------------|-------------|------------|-----------------|-----------------|------------------|-----------------|-----------------|------------------|-----------------|-----------------|------------------|
| SplattingAvatar | UPB | 257,811 | 25.13 | 0.9355 | 96.16 | 24.20 | 0.9298 | 70.91 | 24.48 | 0.8962 | 127.63 |
| EVA | UPB | 20,829 | 26.78 | 0.9519 | 65.07 | 27.00 | 0.9524 | 45.90 | 26.85 | 0.9298 | 65.90 |
| SplattingAvatar | XHumans | 103,193 | 29.33 | 0.9606 | 44.39 | 26.19 | 0.9264 | 78.53 | 26.47 | 0.9103 | 92.51 |
| EVA | XHumans | 19,993 | 29.67 | 0.9632 | 33.05 | 26.27 | 0.9279 | 72.95 | 26.56 | 0.9157 | 72.30 |
We will add the discussion on this paper in related work and the experiment results in Table 1.
Pdf: /pdf/8d06f2e1abfa15706746e9abcbb174cc1c279845.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: - Given an RGB human video, the paper focuses on photorealistic reconstruction of the human in 3D using SMPL-X and Gaussian rendering.
- The main idea is to align the expressive SMPL-X with the video evidence followed by Gaussian Avatar modeling with better gaussian density control and loss.
- Technical contributions: Context-aware adaptive density control, confidence-aware loss, SMPL-X alignment.
- Baselines: 3DGS, GART, GauHuman.
- Evaluations are done on two datasets, XHumans and UPB. Metrics: PSNR, SSIM, LPIPS.
- The results show that the proposed method EVA consistently outperforms the baselines.
Strengths: - The paper is well written, organized and easy to follow.
- The focus problem is of great importance for the field of human digitization.
- The ablative studies are informative, giving us insights into the importance of the technical components.
Weaknesses: - Weak technical contributions: In comparison to GauHuman (CVPR 2024), the proposed method EVA makes three changes: 1) Context-aware adaptive density control (CADC), 2) Confidence-aware loss (CL), 3) SMPL-X alignment. CADC is a nice idea for human modeling and is inspired by the original 3DGS paper; however, it is a slight change to the originally proposed heuristic. CL is frequently used in the field of pose estimation and 3D modeling (e.g., DUSt3R, CVPR 2024). SMPL-X alignment is simply mesh fitting to 2D keypoints and is well explored in SMPLify-X (CVPR 2019) and its related works. Overall, the main technical ideas proposed lack technical novelty.
- XHuman over ZJU-MoCap for evaluations: Is there a reason why metrics were reported on XHumans and not the ZJU-MoCap dataset? The missing metrics on the popular ZJU-MoCap benchmark make it hard to fairly compare EVA with existing methods. I am guessing this is because ZJU-MoCap does not have SMPL-X annotations. In this context, can we use the same setup as UPB and still evaluate using predicted SMPL-X parameters?
Technical Quality: 3
Clarity: 3
Questions for Authors: Given the lack of technical novelty, it is important to establish the empirical usefulness of the proposed method.
In this context, I am asking for a fair evaluation with prior methods (mostly using SMPL) on popular benchmarks.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Clarification on technical contributions**.
A1: Our proposed three technical contributions are well-motivated and well-formulated to meet the requirement of expressiveness in human avatar modeling, which are validated by extensive experiments and have been acknowledged by Reviewer 4SGk ('the paper introduces three significant innovations').
Our proposed fitting aims at mitigating the misalignment issues, especially on the fine-grained hands, by designing effective loss terms that leverage multiple pseudo ground truths (meshes and 2D keypoints predicted by off-the-shelf methods). As shown in Figure 1 of the rebuttal pdf, our fitting results are clearly better than both the SOTA regression-based method SMPLer-X and optimization-based fitting (SMPLify-X). Its effectiveness has also been acknowledged by Reviewer g7vJ ('significantly improves the alignment between SMPL-X and the RGB frames').
**Q2: Reasons for not reporting results on ZJU-Mocap.**
A2: The main reason for not choosing ZJU-MoCap is that it is not suitable for evaluating expressiveness. It has limited diversity in hand gestures and facial expressions (only a clenched fist or an open palm) and, following the practice of previous methods, insufficient hand resolution.
Meanwhile, we also prioritized experiments on ZJU-MoCap and spent over 24 hours trying to make them work.
1) The first option (proposed by the reviewer) is to use the same setup as UPB and evaluate using predicted SMPL-X parameters. However, the input SMPL-X quality would then be worse than the SMPL annotations provided in ZJU-MoCap, which would lead to an unfair comparison.
2) Another option is to convert the given SMPL annotations to SMPL-X form. However, we find that a projection discrepancy remains, which would also lead to an unfair comparison.
We have conducted the experiments on two benchmarks to perform a fair comparison with previous methods, which is acknowledged by Reviewer ZKQ6 ('It is commendable that the authors added SMPL-X to existing methods for fair and thorough comparisons').
**Q3: Clarification on the empirical usefulness of the proposed method.**
A3: The main empirical usefulness is that our method could build the expressive avatar from monocular RGB video while getting rid of time-consuming SMPLX annotation. From the application perspective, it could be utilized in sign language production. We have demonstrated this application by conducting experiments on the UPB benchmark (videos collected from the Web).
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. Appreciate your efforts on incorporating ZJU-Mocap. Overall, my concerns are addressed.
I am recommending the work for acceptance.
---
Reply to Comment 1.1.1:
Comment: We are glad that we could address your concerns and feel very grateful that you raised the score and recommended our work for acceptance. We will incorporate our responses in the revised manuscript. | null | null | null | null | null | null |
On Tractable $\Phi$-Equilibria in Non-Concave Games | Accept (poster) | Summary: The paper presents new results about no-$\Phi$-regret learning when the utilities are not concave. When $\Phi$ is either finite or in a precise sense "local", it is shown that no-$\Phi$-regret learning is possible, and hence that the corresponding notions of $\Phi$-equilibria can be efficiently approximated at rate $\text{poly}(1/\epsilon)$.
Strengths: This presents some interesting and novel results in a regime (non-concave games) where positive results seem to be relatively difficult to come by. The paper was also very well written and easy to read. I vote to accept and only have some minor comments.
Weaknesses: It was useful to me to keep in mind that the regime $\delta = O(\epsilon)$ (where $O$ hides game/instance-dependent constants) is trivial, because no $\delta$-local change can affect a Lipschitz utility function by more than $O(\delta)$. It would be nice to explicitly state this somewhere.
In Table 1, it would be nice to distinguish the contributions of this paper from past contributions. Perhaps for each cell, include either the past citation or the theorem number in the current paper.
Minor things and typos (not affecting score):
* Table 1: effifient -> efficient
* Missing close-parens: Algorithm 1, line 5; two in the bottom half of page 18
* Algorithm 2, line 3: form -> from
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Could you elaborate on Corollary 1? It is not obvious to me how that bound follows from Theorem 2.
1. I find it interesting that the algorithms in this paper are presented as *randomized* algorithms outputting a *single* $x^t$ rather than as *deterministic* algorithms outputting a distribution $\mu^t \in \Delta(\mathcal X)$ (as done by Peng and Rubinstein [2024], Dagan et al. [2024], and Zhang et al. [2024]). At least with your techniques, this difference seems to be an unavoidable consequence of the fact that the utilities $u^t$ are nonlinear, and therefore one cannot say $\mathbb E_{\phi \sim p} u^t(\phi(\cdot)) = u^t(\mathbb E_{\phi \sim p} \phi(\cdot))$. Do I understand this correctly?
1. If I understood the previous point correctly: in Theorem 5, the nonlinearity of $u^t$ is a nonissue by assumption. So, is it possible to de-randomize that result by instead deterministically outputting the uniform distribution $\mu^t := \text{unif} \\{ x_1, \dots, x_K \\}$?
1. Is there an efficient algorithm for $\Phi^\mathcal{X}_\text{Beam}(\delta)$-regret minimization?
1. Do the results of this paper have any novel implications for interesting special cases, such as nonconvex-concave zero-sum games, or simply the classical setting of concave no-regret learning? For example the results of Section 4 seem to have interesting (new?) implications for local (mixed) Nash equilibria in nonconvex-nonconcave zero-sum games.
Papers cited here but not in the submission:
* Dagan, Y., Daskalakis, C., Fishelson, M., & Golowich, N. (*STOC* 2024). From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces.
* Peng, B., & Rubinstein, A. (*STOC* 2024). Fast swap regret minimization and applications to approximate correlated equilibria.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive review and constructive comments! We will add a note on the trivial regime where $\delta = O(\epsilon)$. We will update Table 1 to distinguish our contributions better. Below, we address your questions.
Q: *Could you elaborate on Corollary 1? It is not obvious to me how that bound follows from Theorem 2.*
A: We will add the proof of Corollary 1 in the revised version of the paper. In Corollary 1, we consider the case where $\Phi$ is the class of $M$-Lipschitz functions from $[0,1]^d$ to $[0,1]^d$ and the utility is $G$-Lipschitz. The $\alpha$-covering number of $\Phi$ satisfies $\log N(\alpha) = \Theta((1/\alpha)^d)$ (this fact is also used in [1, Example 23]). Thus, we can run Algorithm 1 over the finite $\alpha$-cover of $\Phi$. This leads to a regret bound of $\sqrt{T(M/\alpha)^d} + \alpha G T$, where the second term is due to the covering error. By choosing $\alpha = M T^{-1/(d+2)}$, we get a regret bound of $O(c \cdot T^{(d+1)/(d+2)})$, where $c$ is a constant that depends only on $G$ and $M$.
[1] Stoltz, Gilles, and Gábor Lugosi. "Learning correlated equilibria in games with compact sets of strategies." Games and Economic Behavior, 2007
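The balancing step at the end of the answer can be checked term by term (a sketch of the calculation, reconstructed from the quantities above):

```latex
% Regret \le \sqrt{T (M/\alpha)^d} + \alpha G T.
% Plugging in \alpha = M T^{-1/(d+2)}, so that M/\alpha = T^{1/(d+2)}:
\[
  \sqrt{T\,(M/\alpha)^d} = \sqrt{T \cdot T^{d/(d+2)}} = T^{(d+1)/(d+2)},
  \qquad
  \alpha G T = G M\, T^{1 - 1/(d+2)} = G M\, T^{(d+1)/(d+2)},
\]
% hence both terms are O(c \cdot T^{(d+1)/(d+2)}) with c depending only
% on G and M.
```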
Q: *I find it interesting that the algorithms in this paper are presented as randomized algorithm outputting a single $x^t$ rather than a deterministic algorithm outputting a distribution $\mu_t \in \Delta(\mathcal{X})$ (as done by Peng and Rubinstein [2024], Dagan et al [2024], and Zhang et al [2024]). At least with your techniques, this difference seems to be an unavoidable consequence of the fact that the utilities ut are nonlinear, and therefore one cannot say $\mathbb{E}_{\phi \sim p}[u^t(\phi(\cdot))] = u^t( \mathbb{E}\_{\phi \sim p} \phi(\cdot))$. Do I understand this correctly?*
A: Two issues prevent us from outputting a distribution. The first one, as you pointed out, is the nonlinearity of the utility function. The second is that the distribution we constructed has an exponentially large support of size $|\Phi|^{\sqrt{T}}$. To achieve an $\varepsilon$-approximate $\Phi$-equilibrium, we need $T$ to be $\textnormal{poly}(1/\varepsilon)$, so outputting the entire distribution requires time exponential in $1/\varepsilon$. Instead, using a sampling procedure allows us to significantly improve the dependence on $1/\varepsilon$ and obtain an algorithm whose running time is only polynomial in $1/\varepsilon$.
Q: *If I understood the previous point correctly: in Theorem 5, the nonlinearity of ut is a nonissue by assumption. So, is it possible to de-randomize that result by instead deterministically outputting the uniform distribution $\mu^t = \\{x_1, \ldots, x_K\\}$?*
A: We can derandomize this result by directly outputting the distribution $\mu^t$. In this case, we allow the algorithm to output a mixed strategy and consider regret over the expected utility in each iteration. Thank you for this insightful observation.
Q: *Is there an efficient algorithm for $\Phi^{\mathcal{X}}\_{\textnormal{Beam}}(\delta)$-regret minimization?*
A: We currently don’t have an efficient algorithm and believe this is an interesting open question. We introduce $\Phi^{\mathcal{X}}\_{\textnormal{Beam}}(\delta)$-regret to show that even for simple local strategy modification sets $\Phi(\delta)$, the landscape of efficient local $\Phi(\delta)$-regret minimization is already quite rich, and many basic and interesting questions remain open.
Q: *Do the results of this paper have any novel implications for interesting special cases, such as nonconvex-concave zero-sum games, or simply the classical setting of concave no-regret learning? For example the results of Section 4 seem to have interesting (new?) implications for local (mixed) Nash equilibria in nonconvex-nonconcave zero-sum games*.
A: To the best of our knowledge, the notion of $\Phi_{\textnormal{proj}}$-regret is new, and its efficient minimization is unknown even in the classical setting of no-regret learning with concave utilities. We think our upper and lower bound results for the $\Phi_{\textnormal{proj}}$-regret are, therefore, interesting even in this setting. We do not see any immediate implications for local mixed Nash equilibria in nonconvex-nonconcave zero-sum games, but this is a very interesting open question.
---
Rebuttal Comment 1.1:
Comment: Thank you. My opinion of the paper was and remains positive, and I will keep my score. | Summary: This paper describes a concept of $\Phi$-equilibrium in non-concave games, which is a rarely discussed notion in the recent literature. This notion of equilibrium is defined over a set of strategy modifications, with specific definitions of $\Phi$, $\Phi$-equilibrium can recover commonly known notions such as CCEs, though this paper mostly focuses on more limiting choices of $\Phi$. The paper shows that there exists efficient algorithms to find an $\epsilon$-approximate $\Phi$-equilibrium in tractable time for some specific families of set $\Phi$.
Strengths: Although some assumptions on families of set $\Phi$ seem restrictive, it is known that without any assumptions, the task of finding e.g. CCE, even approximately and locally, is intractable. This paper provides a solid direction towards better understanding of algorithms in non-convex games.
This paper is quite self-contained, and Section 1 explains the problem formulation in a way the reviewer found easy to follow.
For the infinitely large $\Phi$, the three families offered good insight and can potentially cover many existing strategy modification schemes in application.
Weaknesses: The structure of this paper is questionable. Section 1 alone takes 4 pages, and the remainder of this paper seems rushed and/or crammed. The authors did address the space constraint of this paper, but I believe the paper can be greatly improved in terms of better readability.
For instance, I believe it would be beneficial to present Algorithm 1 in the main paper, and more space could be assigned to the finite $\Phi$ setting as a warm-up. The reviewer suggests that some paragraphs or even theorems in Section 4 could be moved to the appendix.
Some statements throughout the paper are slightly repetitive, reiterating previous sections; many statements could be shortened and made more concise. For example, Section 1.1 interweaves between introduction and contributions.
Although this paper is quite high-level and technical, it would be reasonable to consider some empirical results to validate and strengthen the theoretical results such as the realistic complexity for the sampling procedure. This can perhaps be added in the appendix.
Technical Quality: 3
Clarity: 3
Questions for Authors: No
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately address the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive review and constructive comments on the structure and presentation of the paper!
We will incorporate your suggestions and improve the presentation in the revised version of the paper. Since one more page is allowed in the camera-ready version, we will assign more space to the finite $\Phi$ setting and include Algorithms 1 and 2 and relevant discussions in the main paper. We will also adjust the structure of section 4 and make all statements in the main paper more concise.
**Complexity of our algorithms**: The algorithms we study (except for Algorithms 1 and 3) are gradient-based algorithms like gradient descent and optimistic gradient descent, which have efficient practical implementations. Regarding the complexity of the sampling procedure (Algorithm 2), the per-iteration complexity is exactly $\sqrt{T} \cdot |\Phi|$, where we need to sample from a distribution that supports on $|\Phi|$ points, evaluate the strategy modification function $\phi$, and repeat the above procedure for $\sqrt{T}$ steps.
The above complexity analysis for Algorithm 2 is tight. There is an equivalent implementation that improves the expected running time by a factor of 2: we could first sample a number N uniformly between $[1, \sqrt{T}]$ and then only run the sampling procedure for $N$ steps rather than $\sqrt{T}$ steps, and finally return the last point. This improves the expected per-iteration complexity to $(\sqrt{T} \cdot |\Phi|) / 2$.
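The uniform-horizon trick described above can be sketched in a few lines (a hedged illustration, not the authors' exact Algorithm 2; `one_step` is a placeholder for one step of their sampling procedure):

```python
import random

def run_sampling(K, one_step, init=None):
    """Draw a horizon N uniformly from {1, ..., K}, where K = sqrt(T),
    run only N sampling steps, and return the last point.  The expected
    number of steps is (K + 1) / 2, roughly half of the worst-case K,
    matching the factor-of-2 improvement mentioned above."""
    n = random.randint(1, K)  # uniform horizon in [1, K], inclusive
    x = init
    for _ in range(n):
        x = one_step(x)  # placeholder for one sampling/modification step
    return x, n
```

Since each step costs on the order of $|\Phi|$, the expected per-iteration work becomes roughly $(\sqrt{T} \cdot |\Phi|)/2$.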
We will add the discussion in the revised version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comments, my concerns have been addressed. Given that the issues will be addressed in the revised manuscript, while also referencing the discussion between the author and other reviewers, I have decided to increase my score. | Summary: This work studies the $\Phi$-equlibrium in non-concave games and discuss when the $\Phi$-equlibrium of a non-concave game can be learned in polynomial time. The theoretical results presented in this paper indicate that if the set $\Phi$ is finite, then there exists an efficient uncoupled algorithm that converges to the corresponding $\Phi$-equilibria. Moreover, this work also considers the case when $\Phi$ is not finite but consist of local modifications.
Strengths: Significance: This paper gives non-trivial results on the $\Phi$-equilibrium of non-concave games. It provides a simple condition to tell whether $\Phi$-equilibria can be learned in a polynomial number of iterations.
Originality: There are no existing literature that have resolved the $\Phi$-equilibrium problem. So, the result of this paper is new. This paper extends existing framework to consider infinite local strategy modifications, which also leads to new technical novelty.
Clarity: This work includes a sufficient background introduction and puts adequate material in the appendix for reference. I found the main contribution of this work easy to understand.
Weaknesses: I believe this is a high quality paper and I give a clear accept. But I do have some confused points. I put them in the next sections.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I found some papers showing that for Markov games, the CE or CCE can be learned in polynomial time. For example: *Jin, Chi, et al. "V-Learning--A Simple, Efficient, Decentralized Algorithm for Multiagent RL." arXiv preprint arXiv:2110.14555 (2021).* From my understanding, when setting their horizon $H=1$, the Markov game seems to become a classical game. Is there any difference between this setting and yours?
2. Is there any applications of $\Phi$-equilibrium? For example, when do we need to efficiently solve some $\Phi$-equilibrium?
3. The definition of $\Phi$-equilibrium looks really similar to the Correlated Equilibrium (CE) in your Definition 6. Is CE the same as $\Phi$-equilibrium if $\Phi$ is chosen as the whole space?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is a purely theoretical work so there is no any negative impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your very positive review. Below, we address your questions.
Q: *I found some papers showing that for Markov games, the CE or CCE can be learned in polynomial time. For example Jin, Chi, et al. "V-Learning--A Simple, Efficient, Decentralized Algorithm for Multiagent RL." arXiv preprint arXiv:2110.14555 (2021). From my understanding, when setting their horizon H = 1, the Markov game seems to be a classical game. Is there any difference between this setting and yours?*
A: If the horizon $H = 1$, then a general-sum Markov game becomes a general-sum normal-form game where each player’s utility function is **linear** in their own strategy. This is a special case of the non-concave games we consider in this paper since, in a non-concave game, each player’s utility could be **non-concave** in their own strategy. Additionally, Markov games with $H \ge 1$ are **special cases** of non-concave games that include additional structure, such as the Markovian property. Since our focus is on the more general class of non-concave games, these specific results do not apply.
Q: *Is there any applications of $\Phi$-equilibrium? For example, when do we need to efficiently solve some $\Phi$-equilibrium?*
A: The notion of $\Phi$-equilibrium is very general. If $\Phi$ is all constant strategy modifications, the corresponding $\Phi$-equilibrium coincides with the notion of coarse correlated equilibrium (CCE); If $\Phi$ is all strategy modifications, then the corresponding $\Phi$-equilibrium coincides with the notion of correlated equilibrium (CE). CE and CCE are both central and classical notions extensively studied in the literature and used in practice. Generally, a $\Phi$-equilibrium guarantees that no single agent has an incentive to unilaterally deviate by using strategy modifications from their set $\Phi_i$. This provides a stability guarantee that we believe will be relevant for many economics and machine learning applications, including game AI and multi-agent reinforcement learning.
Q: *The definition of $\Phi$-equilibrium looks really similar to the Correlated Equilibrium (CE) in your Definition 6. Is CE as the same as $\Phi$-equilibrium if $\Phi$ is chosen as the whole space?*
A: Yes. If we choose $\Phi$ as all possible strategy modifications, the corresponding $\Phi$-equilibrium is exactly CE.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I have read the rebuttal and I will keep my current positive score. | Summary: The paper studies nonconcave games and $\Phi$-equilibrium. When $\Phi$ is finite, the paper showed that there exists an uncoupled learning algorithm that can efficiently find the equilibria. Under certain classes of strategy modification, online gradient descent can also approximate $\Phi$-equilibria when $\Phi$ is infinite.
Strengths: I am not an expert in non-concave games and $\Phi$-equilibria, so I suggest referring to the other reviewers for the strengths and weaknesses.
Weaknesses: I am not an expert in non-concave games and $\Phi$-equilibria, so I suggest referring to the other reviewers for the strengths and weaknesses.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am not an expert in non-concave games and $\Phi$-equilibria, so I suggest referring to the other reviewers for the strengths and weaknesses.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Better by default: Strong pre-tuned MLPs and boosted trees on tabular data | Accept (poster) | Summary: The authors introduce RealMLP, an improved multilayer perceptron (MLP), alongside improved default parameters for GBDTs and RealMLP. The authors tune RealMLP on a meta-train benchmark with 71 classification and 47 regression datasets and compare them to hyperparameter-optimized versions on a disjoint meta-test benchmark with 48 classification and 42 regression datasets, as
well as the GBDT-friendly benchmark by Grinsztajn et al. (2022), claiming that RealMLP offers a better time-accuracy tradeoff than other neural nets and is competitive with GBDTs.
Strengths: 1. The inclusion of the standard recent benchmark of Grinsztajn et al. (2022) is welcome, and gives more credence to the authors' claims.
2. The description of the authors' improvements in Sec. 3 is unusually good, containing sufficient detail that, if needed, someone could most likely recreate the method from scratch.
3. The goal of improving the performance of MLPs on tabular data is both useful and interesting.
Weaknesses: MAJOR
1. The support for a key claim ("RealMLP offers a better time-accuracy tradeoff than other neural nets and is competitive with GBDTs") is, at best, inconsistent. In Fig. 2, utilizing the authors' own benchmark suite, RealMLP is consistently less performant than the best GBDT. The claim that it outperforms other neural nets is not strongly supported by Fig. 2, since only a handful of the baseline NN methods from [1] are included. Particularly notable is the exclusion of [2], despite the fact that the authors reference it in their prior work. [2] requires less than a second to train and there exist methods in the literature to extend it to datasets of arbitrary size. As for Fig. 3, it seemingly contains no confidence intervals. However, the authors' method appears to perform comparably to FT-Transformer and Tab-R for classification tasks; on regression tasks, it is faster, but less performant, than FT-Transformer.
2. Sec 2.1 explains how the new benchmark was assembled, but not why it was necessary to do so. [1], [2] and Grinsztajn et al. have already introduced robust tabular benchmark suites into the literature. The authors also mention, but utilize only parts of, the AutoML benchmark and the OpenML-CTR23 regression benchmark.
3. While the concerns raised in Sec. 2.2 about some previously used metrics are reasonable, the choice of shifted geometric mean error, which adds noise to extremely low error scores, is also problematic. A better approach would have been to report multiple metrics.
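For concreteness, here is one common form of a shifted geometric mean (an assumed illustration only — the paper's exact shift constant and normalization may differ, and `shift=0.01` is a made-up value):

```python
import math

def shifted_geom_mean(errors, shift=0.01):
    """Geometric mean computed after adding a constant shift to each
    error, then shifted back: exp(mean(log(e + shift))) - shift.
    The shift keeps log() finite at zero error, but it also dampens
    differences between extremely small error scores, which is the
    behavior the reviewer's critique concerns."""
    logs = [math.log(e + shift) for e in errors]
    return math.exp(sum(logs) / len(logs)) - shift
```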
MINOR
1. The 95% confidence intervals in the figures are very difficult to see, and should be made clearer.
[1] When do neural nets outperform boosted trees on tabular data?
[2] TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second
Technical Quality: 2
Clarity: 3
Questions for Authors: QUESTIONS
* Why was it necessary to create a new benchmark suite? Why is this the fairest choice of benchmark for these methods?
* Why report only one non-standard metric?
* Why are so few deep tabular methods included in the core experiments?
SUMMARY
While the detailed description of the authors' method is helpful and the ideas contained therein could be of interest to the NeurIPS audience, the experimental results are not yet robust enough to warrant publication at NeurIPS.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback.
Major weaknesses:
1.
- We meant to write “is competitive with GBDTs in terms of benchmark scores”; we will update this sentence. Of course, GBDTs are faster on most datasets with our settings. But note that lower is better for the benchmark scores in Figure 2, and RealMLP achieves better scores than GBDTs on all benchmarks there.
- The simple “TabPFN+subsampling” version from [1] is outperformed by TabR and FT-Transformer in the recent large benchmark in https://arxiv.org/pdf/2407.00956, both of which are included in our Figure 3 and partially in Figure 2. Regarding versions of TabPFN that scale to large datasets, many recent scaling works are concurrent work. There is TuneTables from February, but it (1) will be substantially slower than 1 second when scaled to larger datasets, and (2) is limited to classification. TabPFN-based results will probably be outdated very soon anyway, since TabPFNV2 is supposed to be out soon.
- Figure 3 contains no confidence intervals because we used the codebase of Grinsztajn et al. to compute results, and it does not provide confidence intervals with the same meaning as those in Figure 2. (In general, producing such confidence intervals for normalized benchmark scores as done in Figure 3 would be statistically questionable, since the normalization of the metric depends on the benchmark results themselves.)
- Your last claim is incorrect: RealMLP outperforms FT-Transformer in terms of speed and benchmark scores for regression on the Grinsztajn et al. benchmark (and performs slightly better for classification as well).
2.
- Regarding the meta-train benchmark: The benchmark consists of datasets that were available to us when we started experimenting. While it is not established, we cannot change it post-hoc since we already used it to design our methods.
- Regarding the meta-test benchmark: The benchmark consists of the AutoML benchmark and the CTR23 benchmarks, which are well-curated tabular benchmarks. The AutoML benchmark is well-established in AutoML, and [1, 2] also use it for their dataset collection. The contained datasets span a large range of sizes and dimensionalities, allowing us to test meta-generalization of tuned defaults. We removed some datasets according to well-defined criteria in Appendix C.3.2, mainly datasets that are already included in the meta-train benchmark or are too small.
- The benchmark suite of [1] is interesting but was released concurrently with the start of our experiments on the meta-test benchmark, and it mainly covers classification. As far as we can see, [1] does not provide clear criteria by which they selected their datasets, and their TabZilla subset has selection criteria preferring datasets that are hard for GBDTs, which would be unfairly favorable to neural networks such as our RealMLP.
- [2] contains many datasets but mostly small ones, which we do not design our methods for, and which result in noisier evaluation due to small test sets. They also use some datasets from the AutoML benchmark, which we already use.
- We use the Grinsztajn et al. benchmark datasets separately in Figure 3, to have a common benchmark where many baselines can run. We chose not to use these datasets for the meta-test benchmark since there are less of them (less than half of what we have now after excluding the meta-train datasets), and they are more limited in terms of number of features etc.
3. We **do report alternative aggregation metrics** in Figure B.7 – B.15 and reference some of them from Section 5.2. We also report normalized accuracies on the Grinsztajn et al. (2022) benchmark (Figure 3), to conform with the original benchmark. We agree that all aggregation strategies have some upsides and downsides.
Minor weaknesses:
- We increased the linewidth of the error bars, see the rebuttal PDF.
Questions:
- See above.
- See above, we do report multiple aggregation metrics.
- Because the meta-test benchmark is so expensive to run, see the global response. We already have the Grinsztajn et al benchmark for this purpose. Another issue is the lack of easily usable interfaces for most other published deep learning methods, including details such as how to pass a separate validation set, how to change the early stopping metric, how to pass categorical data, how to select the correct GPU device, etc. In contrast, we make scikit-learn interfaces available that allow configuring all of this easily.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response; I will not change my score at this time.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our response, but are surprised to see no change of score given that we addressed most concerns:
- Weakness 3 was incorrect.
- Weakness 2 is unjustified in our opinion since we do include a standard benchmark and additional benchmarks *on top of this* with well-defined dataset inclusion criteria. Again, we used all datasets from the AutoML and CTR23 benchmarks except those that were excluded for a good reason based on exclusion criteria defined in Appendix C.3.2.
- RealMLP performed better in terms of benchmark scores than claimed by the reviewer in Weakness 1.
- We did add more baselines in the rebuttal, and we already have most of the well-performing baselines from [1] in Figure 3 except TabPFN, and in addition to [1] we have TabR and MLP-PLR, which we believe to be more relevant since we are improving standard NNs and not PFNs, which are currently hard to compare against (How do you fairly compare to a method that doesn’t use a validation set? What do you do about limitations in #features/#classes? Which of the recent methods do you use to scale it to large datasets?) | Summary: In the paper "Better by default: Strong pre-tuned MLPs and boosted trees on tabular data", the authors make two major contributions: (i) they propose a multi-layer perceptron configuration that is tuned on a set of training datasets and (ii) they investigate in a large-scale empirical study how different default parameterizations (library vs. tuned on a set of training datasets) compare to optimized hyperparameters on a set of test datasets. The study reveals interesting results regarding the performance of default parameterizations set in standard ML libraries: better default parameterizations exist (at least for the scope of the benchmark), and the proposed MLP often performs competitively with gradient-boosted decision trees when it comes to a Pareto-optimal tradeoff between accuracy and training time.
Strengths: - The paper is very well written and easy to follow. Sufficient details on the architecture of RealMLP and the implementation of tuning the default parameterization or optimizing the hyperparameters are given.
- Generally speaking, the authors are very eager to provide all the details to make their work reproducible, which is of high importance for such a paper.
- In principle, I like the 2D comparisons, but a summary plot of how D vs. TD vs. HPO behave for the different methods would be very much appreciated.
Weaknesses: - Intuitively, one would expect the performance of TD to lie between D and HPO, but I could also imagine that TD is oftentimes very close to HPO or might even exceed HPO's performance due to overtuning. However, this comparison is not presented as an overview.
- It is not clear why random search has been used as HPO technique in the paper, especially, since the authors even mention more sophisticated techniques such as SMAC that are based on Bayesian optimization.
- The style of writing in Section 3 could be improved as there are various content-wise enumerations in the form of sentences starting with "We".
Technical Quality: 4
Clarity: 4
Questions for Authors: - How would a CASH approach compare here if we were to first select the learning algorithm according to the (tuned) default parameterization and then optimize its hyperparameters? As it is done in Mohr, F., Wever, M. Naive automated machine learning. Mach Learn 112, 1131–1170 (2023). https://doi.org/10.1007/s10994-022-06200-0
- Why was random search used for hyperparameter optimization?
- Is there any intuition why using the last of tied epochs as the final model as stated on line 161?
- Why are hyperparameter values rounded as stated on line 170?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations have properly been described in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback.
Weaknesses:
- Thank you for the suggestion. We have something like this in Figures B.12-B.15, D.2, D.6, D.7, or do you have a specific suggestion for an overview plot?
- Random search 1) is used in the Grinsztajn et al. benchmark, 2) is more convenient to run since all steps can be run in parallel and resource constraints (RAM usage) can be adjusted per parameter configuration, and 3) allows changing the optimization metric post hoc, as we do for AUROC in Figure B.4, since the sampled configurations do not depend on the optimization metric. Random search has also been used in the benchmark of McElfresh et al. (2023), for example. We have a comparison to TPE (in hyperopt) for GBDTs in Figure B.6, and the differences are rather small.
- Thank you for the comment, we will try to improve this in the next version.
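The post-hoc metric change enabled by random search (point 3 above) amounts to re-scoring cached validation predictions under a new metric without retraining anything; a minimal sketch, with hypothetical configurations, labels, and predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=200)          # hypothetical validation labels

# With random search, the sampled configurations (and hence the cached
# validation predictions) do not depend on the metric being optimized.
configs = [{"lr": float(lr)} for lr in rng.uniform(1e-4, 1e-1, size=10)]
cached_preds = {i: rng.integers(0, 2, size=200) for i in range(len(configs))}

def accuracy(y, p):
    return float(np.mean(y == p))

def balanced_accuracy(y, p):
    # Mean of per-class recalls.
    return float(np.mean([np.mean(p[y == c] == c) for c in np.unique(y)]))

def best_config(metric):
    # Post-hoc selection: only re-scores cached predictions,
    # no model is retrained.
    return max(cached_preds, key=lambda i: metric(y_val, cached_preds[i]))

print("best by accuracy:", best_config(accuracy))
print("best by balanced accuracy:", best_config(balanced_accuracy))
```

With a metric-guided optimizer like TPE, the sampled configurations themselves would depend on the metric, so this re-selection would not be valid.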
Questions:
- Thanks for the question, we also looked at this. It seems that the results are about halfway between Best-TD and Best-HPO (whose difference is quite small already). We were not sure if including this would complicate the message too much.
- See above.
- Not a lot, but a reason could be that when the MLP continues to improve slowly but the last few validation accuracies are identical, the last one is still more promising for the test set.
- To make them look nicer and more memorable in paper/code. (We checked that it doesn’t significantly hurt the performance.)
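The best-epoch rule described above (among epochs tied for the best validation accuracy, keep the last one) can be sketched as:

```python
def pick_epoch(val_accuracies):
    """Return the index of the last epoch achieving the best validation
    accuracy; among ties, a later epoch may be more promising if the
    model is still improving slowly."""
    best = max(val_accuracies)
    return max(i for i, acc in enumerate(val_accuracies) if acc == best)

# Epochs 2, 3, and 4 are tied at 0.91 -> epoch 4 is selected.
print(pick_epoch([0.85, 0.90, 0.91, 0.91, 0.91, 0.88]))  # -> 4
```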
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you very much for the on-point responses and the clarification.
Regarding the use of random search: I am well aware of the benefits of random search in empirical studies, but it should be mentioned in the paper to justify the choice and explain the benefits to the reader.
Regarding the overview plot, I was rather thinking of some spider chart where the axes are given by the learner. TD, D, and HPO are then data rows for that kind of chart so that one can see the area spanned by the respective types for each base learner or something in that direction. Does that make sense?
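For reference, the data layout for such a spider chart could look as follows (all scores hypothetical; the drawing itself is assumed to use matplotlib's polar axes):

```python
import numpy as np

learners = ["MLP", "RealMLP", "XGB", "LGBM", "CatBoost"]  # axes of the chart
scores = {                     # hypothetical normalized benchmark scores
    "D":   [0.55, 0.75, 0.70, 0.69, 0.72],
    "TD":  [0.60, 0.82, 0.74, 0.73, 0.76],
    "HPO": [0.63, 0.84, 0.77, 0.75, 0.78],
}

# One angle per learner; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(learners), endpoint=False)
closed = {k: v + v[:1] for k, v in scores.items()}
closed_angles = np.concatenate([angles, angles[:1]])

# With matplotlib, each data row would then be drawn via e.g.
#   ax = plt.subplot(polar=True); ax.plot(closed_angles, closed[k])
print(closed_angles.round(2), closed["TD"])
```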
---
Reply to Comment 1.1.1:
Comment: Thank you for your answer and your suggestions!
> Regarding the use of random search: I am well aware of the benefits of random search in empirical studies, but it should be mentioned in the paper to justify the choice and explain the benefits to the reader.
We will discuss this choice in more detail in the revised manuscript.
>Regarding the overview plot, I was rather thinking of some spider chart where the axes are given by the learner. TD, D, and HPO are then data rows for that kind of chart so that one can see the area spanned by the respective types for each base learner or something in that direction. Does that make sense?
Thank you for this insightful suggestion, we will experiment with spider charts and add it to the revised version if it gives a clear picture of the situation. We are not sure the D versions of different algorithms are directly comparable, but for TD and HPO at least it should be interesting. | Summary: The work investigates/provides better default hyperparameter configurations for MLPs and gradient-boosted decision trees. Additionally, it proposes an augmented neural network with a series of enhancements. Experimental results are provided showing that the neural network is on par with gradient-boosted decision trees. Moreover, the authors argue that the neural network and the gradient-boosted decision trees with the new defaults are competitive with their HPO-tuned per-dataset counterparts and motivate practitioners to use the different methods with the new defaults and perform ensembling.
Strengths: - The work is written well.
- The work uses a wide collection of benchmarks that are well-known in the community.
- The work includes results with default, tuned-per-benchmark, and tuned-per-dataset hyperparameter configurations for the main methods investigated.
Weaknesses: - The novelty of the paper is limited in my opinion. The majority of the components that are proposed in Figure 1 (c) are not novel and are used in AutoML methods. Moreover, it is not clear what subset of the proposed components could actually benefit the other methods and not only the MLP.
- The baselines used are not consistent, for example, FT-Transformer and SAINT are included for the Grinsztajn et al. benchmark, but not for the other benchmarks.
- The protocol is not fully consistent, as for example, for the results of the meta-train and meta-test benchmark, HPO is not applied for the ResNet method or Tab-R.
- Figure 2 shows results for accuracy, where RealMLP-TD strongly outperforms gradient-boosted decision tree methods for the meta-test classification and regression benchmark. However, AUROC is a metric that considers class imbalance and provides a better view of the method performances, Figure B.3 which reports results for AUROC shows a different picture, where, gradient-boosted decision trees outperform the RealMLP-TD. As such, I would propose to place Figure B.3 in the main paper and Figure 2 in the Appendix.
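The gap the reviewer points at can be made concrete with a toy imbalanced example (hypothetical data; a dependency-free pairwise-comparison AUROC is used here): a majority-class predictor looks strong under accuracy but is at chance level under AUROC.

```python
import numpy as np

# 95/5 class imbalance; a classifier that always predicts the majority class.
y = np.array([0] * 95 + [1] * 5)
scores = np.zeros_like(y, dtype=float)   # constant score for every sample
preds = np.zeros_like(y)                 # always class 0

accuracy = float(np.mean(preds == y))    # 0.95, looks strong

def auroc(y_true, s):
    # Fraction of (positive, negative) pairs ranked correctly; ties count 0.5.
    pos, neg = s[y_true == 1], s[y_true == 0]
    pairs = pos[:, None] - neg[None, :]
    return float((np.sum(pairs > 0) + 0.5 * np.sum(pairs == 0)) / pairs.size)

print(accuracy, auroc(y, scores))   # 0.95 vs. 0.5
```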
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors provide results for the missing baselines and for the baselines that do not feature HPO results?
- How does this work fair in comparison to using a portfolio of configurations using dataset meta-features and selecting the default configuration per dataset based on a similarity distance between the portfolio of configurations and the context dataset?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - I believe the authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback.
Weaknesses:
1. What is the source for your claim? Which AutoML tool would these be in? Are you referring to some of the color-coded improvements or only to the gray ones? There are some non-novel components in Figure 1 (c) since we wanted to start from a “vanilla MLP”, but as we color-coded in the figure, there are many new and unusual components. We also tried out the two NNs in AutoGluon and their default settings performed worse than MLP-D on the meta-train benchmark. Regarding your second point, please see the global response.
2. While we understand the desire to have every method on every benchmark, unfortunately the meta-test benchmark is much more expensive to run than the other two benchmarks since it contains larger and higher-dimensional datasets. For example, FT-Transformer runs into out-of-memory issues on some datasets, trying to allocate, e.g., 185 GB of RAM. We did not try SAINT since it’s even more expensive.
3. Continuing on the point above, TabR-HPO would potentially take 5-20 GPU-months (based on extrapolating the runtime of TabR-S-D) to run on meta-test on RTX 3090 GPUs. We are now running it on the Grinsztajn et al. benchmark to have it on at least one benchmark. We did not run ResNet-HPO before since, in most papers and Fig. 3, it is quite similar to MLP-HPO while being slower (it takes around two GPU-weeks on our benchmarks in Fig. 2). We are currently running it and provide preliminary results in the uploaded PDF, which confirm our expectations.
4. We included the AUROC experiments after the main experiments were finished to include an imbalance-sensitive metric. However, since the whole meta-learning and model development process was performed with accuracy as the target metric, and the best-epoch selection was also performed with accuracy as the target metric, we think that the results in Figure 2 are more suited for the main paper as they are more appropriate to demonstrate the effect of meta-learning default parameters. We do refer to the AUROC experiments in the main paper, though, and will try to emphasize these experiments more in the next version.
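The out-of-memory behavior mentioned in point 2 follows from the quadratic memory of inter-feature attention; a rough back-of-the-envelope sketch (batch size, head count, and dtype are hypothetical, not the actual FT-Transformer configuration):

```python
def attention_matrix_gib(batch, heads, n_features, bytes_per_val=4):
    """Rough memory for one inter-feature attention matrix of shape
    (batch, heads, n_features, n_features); real usage is higher
    (activations, gradients, multiple layers)."""
    return batch * heads * n_features ** 2 * bytes_per_val / 2 ** 30

# Memory grows quadratically with the number of features:
print(attention_matrix_gib(256, 8, 1_000))    # ~7.6 GiB
print(attention_matrix_gib(256, 8, 10_000))   # ~763 GiB
```

Going from 1K to 10K features multiplies this single term by 100, which is why high-dimensional meta-test datasets are out of reach for such architectures.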
Questions:
- See above.
- This is an interesting question, but would deserve its own paper. This would also be challenging to do without too much overfitting, given the limited number of meta-train datasets. TabRepo, perhaps the most large-scale study on portfolio learning, still only considers static portfolios.
---
Rebuttal 2:
Title: Author Rebuttal Response
Comment: I would like to thank the authors for the reply. I have carefully read the other reviews and all the responses from the authors.
- Regarding: **"What is the source for your claim? Which AutoML tool would these be in? Are you referring to some of the color-coded improvements or only to the gray ones? There are some non-novel components in Figure 1 (c) since we wanted to start from a “vanilla MLP”, but as we color-coded in the figure, there are many new and unusual components. We also tried out the two NNs in AutoGluon and their default settings performed worse than MLP-D on the meta-train benchmark. Regarding your second point, please see the global response."**
The gray components would be an example, but additionally, the highlighted label smoothing is not something surprising. As mentioned by another reviewer, the paper provides a set of results that give intuition on the different method performances. As such, it would be beneficial for techniques that can be applied to the MLP to be added to the other baselines. It might be that the components hurt performance, it might be that the components improve performance.
One simple test would be to reuse the defaults for the MLP for another deep baseline (for whatever components can be applied); while this might not be conclusive, as the hyperparameters probably need to be tuned per method, it would provide some insights.
- Regarding: **"While we understand the desire to have every method on every benchmark, unfortunately, the meta-test benchmark is much more expensive to run than the other two benchmarks since it contains larger and higher-dimensional datasets. For example, FT-Transformer runs into out-of-memory issues on some datasets, trying to allocate, e.g., 185 GB of RAM. We did not try SAINT since it’s even more expensive."**
In my perspective, the already considered list of methods is extensive. For example, running TabNet from my perspective is not interesting, the method is outperformed significantly from many methods. However, the already included baselines in the comparisons should be present in all results, since they leave "holes" in the results.
One might worry that some methods might achieve more competitive performance and that is the reason why the authors do not include them. I personally do not think the authors are hiding results, however, the concerns should be addressed, especially considering the work lies towards the benchmarking and experimental side.
If a method fails on certain datasets, the authors can point it out in the work.
- Regarding: **"We included the AUROC experiments after the main experiments were finished to include an imbalance-sensitive metric. However, since the whole meta-learning and model development process was performed with accuracy as the target metric, and the best-epoch selection was also performed with accuracy as the target metric, we think that the results in Figure 2 are more suited for the main paper as they are more appropriate to demonstrate the effect of meta-learning default parameters. We do refer to the AUROC experiments in the main paper, though, and will try to emphasize these experiments more in the next version."**
While that is true, the main attention is invested in the main manuscript; as such, the results can potentially be misleading. AUROC is a better metric that considers class imbalance. Personally, I find the work interesting, whether it beats CatBoost or not.
- As a last question, was TabR run with embeddings or without? My understanding is the latter. If that is the case, it is questionable, since the main version of TabR contains embeddings. Additionally, the MLP is run with embeddings.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for the detailed response.
- The label smoothing is highlighted as “unusual”, and we would argue that it is indeed unusual in the tabular context. We tried TabR-S-D with our preprocessing + beta_2=0.95 + scaling layer + parametric Mish + label smoothing on meta-train-class and it was slightly better than with just the preprocessing, which is in turn better than with the original preprocessing (as we showed in the Appendix). At this point the message of this investigation is unclear and would require more ablations, tuning, etc., opening a big rabbit hole.
- We understand that some readers might be concerned by the non-matching baselines, and we will explicitly mention the reasons for this in the next version of the paper. For baselines that fail on some datasets, we have the additional problem that not all aggregation metrics support missing results.
- In case of acceptance, we will try to use the extra page to add Figure B.3 or B.4, which contain the AUC results, to the main paper.
- TabR-S-D was run without numerical embeddings, since this is the version that was proposed in the paper for default hyperparameters. While this might deteriorate performance, it also improves the training time, which we also report. The new TabR-HPO results accidentally didn’t use numerical embeddings, we will rerun them.
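For readers unfamiliar with it, the label smoothing discussed above can be sketched generically as follows (a standard cross-entropy formulation, not the authors' exact implementation; the epsilon value is hypothetical):

```python
import numpy as np

def smoothed_cross_entropy(logits, labels, eps=0.1):
    # Targets become (1 - eps) * one_hot + eps / K, pulling the model
    # away from fully confident predictions.
    n, k = logits.shape
    log_probs = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    targets = np.full((n, k), eps / k)
    targets[np.arange(n), labels] += 1.0 - eps
    return float(-np.mean(np.sum(targets * log_probs, axis=1)))

logits = np.array([[2.0, 0.0, -1.0], [0.5, 1.5, 0.0]])
labels = np.array([0, 1])
print(smoothed_cross_entropy(logits, labels, eps=0.0))   # plain cross-entropy
print(smoothed_cross_entropy(logits, labels, eps=0.1))   # smoothed variant
```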
---
Rebuttal 3:
Title: Response to Authors
Comment: I thank the authors for the response.
- Regarding: **"The label smoothing is highlighted as “unusual”, and we would argue that it is indeed unusual in the tabular context. We tried TabR-S-D with our preprocessing + beta_2=0.95 + scaling layer + parametric Mish + label smoothing on meta-train-class and it was slightly better than with just the preprocessing, which is in turn better than with the original preprocessing (as we showed in the Appendix). At this point the message of this investigation is unclear and would require more ablations, tuning, etc., opening a big rabbit hole."**
I am not sure about the practitioners, but personally, I have used label smoothing in my experiments. Nevertheless, this is a paper for the community and I would need to review the other papers in the literature to confirm this very minor point. A recipe from the authors could be beneficial for the community. I agree that it is more complicated in the scenario the authors are pointing out for other baselines. One could maybe frame it as a general recipe or set of defaults for deep learning methods but it is not trivial and I am not sure if that is the message the authors want to convey.
- Regarding: **"TabR-S-D was run without numerical embeddings, since this is the version that was proposed in the paper for default hyperparameters. While this might deteriorate performance, it also improves the training time, which we also report. The new TabR-HPO results accidentally didn’t use numerical embeddings, we will rerun them."**
I understand, in that case, I would agree with the authors on using TabR-S-D for the default case and the one with numerical embeddings for the HPO case.
In my perspective, the work is in a good position, it needs a bit more work regarding the inconsistent baseline usage, but overall it has extensive results. Based on this, I will increase my score from 3 to 4.
---
Rebuttal Comment 3.1:
Comment: We thank the reviewer for the response and the score increase.
We now ran TabR-S-D with the above modifications (except label smoothing) on our meta-train regression benchmark as well (took ~5 GPU-hours) and see the exact same picture as for classification. We are wondering whether we should try a version of TabR-HPO in Figure 3 that includes some of our tricks into the search space. If this helps, then of course it is unclear which tricks help and what this means for defaults.
Regarding the inconsistent baseline usage, most other papers evaluate their methods on a single benchmark. Therefore, we would argue that the fact that we have an additional very diverse meta-test benchmark, even though we cannot run all methods due to large and high-dimensional datasets, should not be seen as a weakness but a strength of our paper. | Summary: The paper introduces RealMLP, an enhanced Multilayer Perceptron (MLP) designed for classification and regression tasks on tabular data. It also proposes optimized default parameters for both RealMLP and Gradient Boosted Decision Trees (GBDTs). Through benchmarking on diverse datasets, the authors demonstrate that RealMLP achieves a superior balance between time efficiency and accuracy compared to other neural networks, while remaining competitive with GBDTs. The integration of RealMLP and GBDTs using optimized defaults shows promising performance on medium-sized tabular datasets, alleviating the need for extensive hyperparameter tuning.
Strengths: 1. The study includes comprehensive benchmarking on a meta-train dataset comprising 71 classification and 47 regression datasets, and a meta-test dataset comprising 48 classification and 42 regression datasets.
2. Detailed experimental settings and hyperparameters are explicitly specified in the paper and appendices, ensuring reproducibility.
3. The experiments establish statistical significance, presenting error bars and critical-difference diagrams.
4. RealMLP demonstrates superior efficiency by offering a better time-accuracy tradeoff compared to other neural networks.
5. The optimized combination of RealMLP and GBDTs with improved default parameters achieves outstanding performance without the need for hyperparameter tuning.
Weaknesses: 1. The finding presented in this work is not entirely novel. Similar conclusions have been reached by Tree-hybrid MLPs `[1]`, although the methods are not quite the same. Moreover, ensembling different methods, including combinations like MLP + GBDTs, has long been recognized as an effective approach in Kaggle competitions and does not necessitate rediscovery.
2. While mentioning good performance by default, it's worth noting that Excelformer achieves one of the best performances under default settings. However, a direct comparison was not included, which would enhance the clarity of the findings. Please include the comparison in the rebuttal.
3. Despite emphasizing the applicability to medium-sized datasets, practical applications often involve diverse tabular data with varying feature counts and training set sizes. The small application scope could potentially limit the impact of this study. Besides, most tabular datasets are small.
4. The choice of compared methods is evasive. Since the authors want to claim a speed-performance balance, the effective approaches Net-DNF `[2]`, TabNet `[3]`, and TabCaps `[4]` are not compared. The compared models FT-Transformer and SAINT, especially SAINT, are heavy.
**REF**
`[1]` Team up GBDTs and DNNs: Advancing Efficient and Effective Tabular Prediction with Tree-hybrid MLPs
`[2]` Net-DNF: Effective Deep Modeling of Tabular Data
`[3]` Tabnet: Attentive interpretable tabular learning
`[4]` Tabcaps: A capsule neural network for tabular data classification with bow routing
Technical Quality: 3
Clarity: 3
Questions for Authors: The effective default performance achieved by MLP + GBDTs may not be suitable for all scenarios, such as those requiring tabular pre-training or zero-shot scenarios, where pure tabular models might be more appropriate. How does the proposed method address applications in tabular pre-training scenarios?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback.
Weaknesses:
1. **[1] was uploaded on 13th of July 2024**, so this paper cannot be used to question our novelty. (Besides, it also uses a mix of architectures, while we consider algorithm selection / ensembling.) Regarding ensembles, we do not claim to rediscover ensembles. Rather, we try to see how meta-learning better defaults affects the trade-off between ensembling / algorithm selection and HPO.
While the ExcelFormer paper claimed to have very good defaults, ExcelFormer performed poorly in the experiments at https://github.com/pyg-team/pytorch-frame. So either the original results were problematic or the method is difficult to use correctly, neither of which makes it attractive for us to compare to ExcelFormer. Moreover, as a transformer with inter-feature attention, ExcelFormer would likely result in out-of-memory errors on some high-dimensional datasets of the meta-test benchmark (as FT-Transformer does). We compared to TabR-S-D since their paper also claimed to have better defaults than GBDTs. We can, however, acknowledge the missing ExcelFormer comparison as a limitation.
2. We consider dataset sizes of 1K to 500K samples and up to 10K features, which is quite a large range, larger than the Grinsztajn et al. benchmark, for example. Smaller datasets are also interesting but hard to evaluate due to noisy test scores, and might be much more affected by details in early stopping etc. In addition, they are easier to handle for TabPFN or LLM-based methods, necessitating different baselines and making MLPs less attractive.
3. We agree that additional efficient baselines can help to substantiate our claim, but we think that the mentioned methods are not particularly promising based on recent results from the literature (see below). Instead, we evaluate MLP-PLR [5], an MLP with numerical embeddings that was presented as better than FT-Transformer from the group that created FT-Transformer, and performed well in recent benchmarks [6, 7]. MLP-PLR is faster than RealMLP but does not match RealMLP’s benchmark scores. Here are some reasons why we think that the mentioned NNs are not very promising:
- TabNet performs poorly both in recent benchmarks as well as in our preliminary experiments. In McElfresh et al. (2023), TabNet performs much worse than MLP or FT-T while also being slower than both of them. In the FT-Transformer and SAINT papers, TabNet is also worse than MLP. The same holds for https://arxiv.org/abs/2407.09790 and https://arxiv.org/abs/2407.00956
- Net-DNF performed slightly worse than TabNet in Shwartz-Ziv et al. (2022). It did perform a bit better than MLP on 4 datasets in Borisov et al. (2022). It was also outperformed by TabCaps in the TabCaps paper.
- TabCaps performs worse than MLP in a recent large benchmark (https://arxiv.org/abs/2407.00956), see Tables 19 and 20 therein. The TabCaps paper also doesn’t contain a lot of datasets, so their own claims are probably not too reliable.
Questions:
- We do not aim to address tabular pre-training or zero-shot scenarios. In cases where one can pre-train on a dataset with the same columns, RealMLP might be a good option.
[5] https://proceedings.neurips.cc/paper_files/paper/2022/hash/9e9f0ffc3d836836ca96cbf8fe14b105-Abstract-Conference.html
[6] https://arxiv.org/abs/2407.02112
[7] https://arxiv.org/abs/2406.19380
---
Rebuttal 2:
Comment: Thank you for your reply.
I’m aware that [1] was uploaded on July 13th, 2024, and that it is completely different from your paper. As I mentioned in my initial comment, it was only an example to illustrate that this paper revisits an old conclusion. I’m not questioning the novelty of your work based on this example.
Regarding the performance of Excelformer, I believe you should use the official code. I’ve tried using the code from PYG, but it seems to have many bugs, and the hyperparameters were clearly misused.
Additionally, in my experience, Net-DNF often outperforms both TabNet and NODE.
But, the performance of previous works and the similarity to [1] are not the main points! We all know that a paper should aim to present new scientific findings.
You said: "we try to see how meta-learning better defaults affects the trade-off between ensembling / algorithm selection and HPO". Essentially, integrating different methods (such as MLP + GBDT) has long been recognized as an effective approach in Kaggle tabular prediction competitions. Besides, achieving good results in these competitions typically does not require extremely rigorous HPO (which may lead to overfitting), something that is well understood by tabular data prediction experts.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for the insights on various NN models. Does the reviewer have insights on the performance of Net-DNF vs. more modern / well-studied methods like MLP/ResNet from RTDL that are used in many more recent benchmarks?
Regarding the main point “a paper should aim to present new scientific findings”:
- The general benefits of combining GBDTs and NNs are of course widely known both on Kaggle as well as established scientifically through AutoML tools as well as Shwartz-Ziv et al. (2022). However, we specifically study whether (1) ensembling is preferable to HPO under time constraints and whether (2) meta-learning defaults shifts this balance. While some Kagglers may have intuitions about this, we doubt that there would be a consensus, for example, by looking at the posts of Kaggle Grandmaster Bojan Tunguz (“XGBoost is all you need”). We invite the reviewer to provide concrete evidence to the contrary if they disagree.
- Even though practitioners and Kaggle experts have some important intuitive knowledge not found in the literature, we believe that verifying this knowledge on large scale benchmarks and making it accessible to the scientific community is important.
- Even if the reviewer does not find our results on this aspect particularly interesting, there are other contributions of our work: we also provide an easily accessible improved MLP which performs very well both with default HP and after HPO, find better hyperparameters for standard GBDT libraries, and provide numerous insights in our ablations. | Rebuttal 1:
Rebuttal: Dear Reviewers,
thank you for the constructive feedback. We identified two main points raised by multiple reviewers:
- 4/5 reviewers asked for more baselines. In total, ten baselines were mentioned (MLP-PLR, ExcelFormer, Net-DNF, TabNet, TabCaps, versions of TabPFN, TabR-HPO, ResNet-HPO, SAINT, FT-Transformer). In the **attached PDF**, we **add MLP-PLR** as a related baseline with a good speed/accuracy trade-off, as well as **TabR-HPO** on the Grinsztajn et al. benchmark and **ResNet-HPO** on the meta-benchmarks. However, we hope that the reviewers understand that we cannot run all possible baselines, for reasons of computational cost, implementation/verification effort, and cluttering of the paper/figures. In particular, we should have emphasized more that the meta-test benchmark is very expensive to run (ca. 4 GPU-weeks for RealMLP-HPO, ca. 2 GPU-weeks for ResNet-HPO, probably >5 GPU-months for TabR-HPO given that the runtime of TabR-S-D is >4 GPU-days). In addition, some methods (Transformer-based/TabPFN-based) fail due to the high numbers of features (up to 10K). The diversity in size and dimensionality of the meta-test datasets allows for a broad study of meta-generalization. We have already included the Grinsztajn et al. benchmark in Figure 3 in order to have a common and less expensive benchmark with more deep learning baselines. Details on our decisions can be found in our individual replies.
- 2/5 reviewers asked whether our tricks would help other architectures as well. While we agree that this is an interesting question, we would argue that it is not relevant to our central question, namely how well MLPs and GBDTs can perform with tuned default settings. Moreover, due to the much higher training cost of models like TabR or FT-Transformer, such experiments would require a lot of extra effort. We therefore do not want to explore this question further in this paper beyond the preprocessing experiments that we already have in Table B.5.
We kindly ask the reviewers to consider raising their score, given that our benchmarking efforts are already more extensive than what can be found in most other method-papers.
Pdf: /pdf/b415aa4d35a80f8ad615d195fc99a8da048f29f7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes an enhanced version of the tabular MLP model -- RealMLP. By using multiple tricks over simple MLP, the proposed method becomes much more competitive with GBDTs than a simple MLP. Moreover, the authors provide strong "tuned default" configurations for GBDTs and RealMLP. These configs considerably reduce training time in exchange for arguably small test error increase compared to hyperparameter optimization procedures.
Strengths: * The paper addresses two important problems in tabular DL: 1) lack of an efficient strong DL baseline 2) high cost of extensive hyperparameter tuning for a good performance
* Extensive evaluation of the proposed method. Appendices A and B also provide important ablations on RealMLP.
* Great experimental setup in terms of meta-train and meta-test splits that allow finding tuned-defaults.
* RealMLP is considerably better than MLP.
* Tuned-defaults for GBDTs.
* Everything necessary to reproduce the results is provided.
Weaknesses: 1. One of the main contributions of the paper is the different DL techniques from Figure 1 that have been added to MLP. Although there are enough useful experiments and ablations on these techniques for RealMLP (Figure 1 and Appendix B), these tricks can be applied to any DL model. From Figure 3 we observe that TabR-S-D and FT-Transformer-D are much better than MLP-D and slightly worse than RealMLP-TD. So, I think it is essential to ablate the mentioned techniques to improve these models and compare the results with RealMLP. I expect that the techniques improve stronger models much less compared to MLP, which would be a beneficial result. It will show that RealMLP-TD is a good baseline while TabR-S-D-with-tricks is only slightly better than TabR-S-D. Otherwise, if improved TabR or FT-T considerably outperform RealMLP, the results will be even closer to GBDTs', which is also a positive outcome. I think that the results in Table B.5, which show an advantage of RC+SF preprocessing for TabR-S-D, also support my idea.
2. Table B.1. from Appendix: changing Adam's $\beta_2$ from 0.95 to 0.999 significantly increases the error rate while changing Adam's $\beta_2$ from 0.999 to 0.95 on Figure 1 has almost no effect. I would make a conclusion that the resulting RealMLP-TD architecture with all tricks is very sensitive to hyperparameters.
3. Also Table B.1: Strangely, the PLR embedding is much worse than the PL one. The only difference is the ReLU, if I am not mistaken.
4. No non-parametric DL baselines in Figure 2. Only TabR-S-D is presented. Also, it would be interesting to compare tuned MLP-PLR [1]. Authors from [1] use extensive HPO and do not fix embedding size, so a comparison with RealMLP-TD would be interesting to see if a properly tuned MLP-PLR is better/worse than RealMLP with tricks and fixed embedding size.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Please, see weaknesses.
2. Probably I missed it in the text but how many HPO iterations have you used to obtain -TD configs?
3. Have you measured inference efficiency? I think MLP is usually a bit faster than GBDTs but I wonder if parametric activations or proposed numerical embeddings make MLP slower.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback.
Weaknesses:
1. While we agree that such experiments would be interesting, they would be very costly and are not central to the research question we want to answer in the paper (how good can we make MLPs and GBDTs with default parameters?). We would argue that the existing results should already be interesting to readers and that our paper's ability to pose many interesting questions and potential follow-up studies should be viewed as a strength rather than a weakness.
2. We can mention this in the main paper. We already mentioned in Appendix B that this is likely due to the very high weight decay value (we did an earlier ablation without weight decay for regression and the differences were much smaller). So we would argue that the sensitivity to some hyperparameters is not a big issue as it can be fixed with lower weight decay or by just using our provided defaults.
3. We double-checked our implementation and we also checked MLP-PLR and MLP-PL with the original libraries and observed the same effect for defaults on meta-train (we didn’t run MLP-PL-HPO). So it appears that PLR is better than PL on the 11 datasets of its own paper with HPO, but is worse with fixed hyperparameters on our 118 meta-train datasets. The PLR library documentation also mentions that PL can allow for a lower embedding dimension (perhaps because there is no information loss due to the ReLU). So this seems to be a weakness of PLR embeddings and not of our paper.
4. TabR-S-D is a non-parametric baseline. Unfortunately, TabR-HPO would be way too expensive to run on the meta-test benchmark (easily 5-20 GPU-months on RTX 3090 GPUs extrapolated from the runtime of TabR-S-D). We did also include TabR-HPO on the Grinsztajn et al. benchmark in the rebuttal. Since other non-parametric methods like SAINT are even more expensive, we cannot afford to run them in Figure 2 (but SAINT is in Figure 3). We did include MLP-PLR now. The defaults are on the level of MLP-D (the paper does not propose defaults, but the library does). MLP-PLR-HPO comes close to RealMLP-HPO on meta-test-reg but underperforms RealMLP-TD on the other three meta-benchmarks. Given the results, optimizing the embedding dimension could be interesting for RealMLP-HPO as well, although the larger embedding dimensions appear to considerably increase CPU runtimes. (MLP-PLR-HPO has about equal runtime to RealMLP-HPO, despite using early stopping.)
Questions:
1. See weaknesses.
2. We did not mention this in the paper since it is a one-time cost, but: Around 100-300 steps for GBDTs (could have probably matched this with 20-30 manual tuning steps). For RealMLP, we did not use automated tuning, since we repeatedly implemented new components and tried them, so it is hard to say but several hundred configurations were tried.
3. We have measured inference efficiency but did not want to report it since it could be quite sensitive to implementation details (inference batch size, data conversions, etc.). For example, we heard from AutoGluon developers that they could greatly improve AutoGluon’s inference times just by running its preprocessing in numpy instead of pandas. Nonetheless, here are some of our measurements (inference time per 1K samples):
- CatBoost-D_CPU: 0.00273057 s
- XGB-D_CPU: 0.00297026 s
- RF-SKL-D_CPU: 0.016169 s
- RealMLP-TD_CPU: 0.00501239 s
- RealMLP-TD-S_CPU: 0.00377973 s
- MLP-D_CPU: 0.0187927 s
- TabR-S-D_CPU: 0.125827 s
As you can see, RealMLP-TD is a bit slower than GBDTs but still fast, while for MLP-D there seems to be a suboptimal implementation (we don’t know why it would be so slow otherwise) and TabR is very slow.
---
Rebuttal 2:
Title: Response to Authors
Comment: I carefully read the rebuttal and would like to thank the authors! Also, I apologise for the late reply.
1. In my opinion, the idea of making MLP as strong as possible is really valuable for the field but the lack of "fair" comparison is a big weakness of the paper. ("fair comparison" = other models with the proposed techniques). Though I understand the high cost of the tuning, I think even results for TabR-S-D+some_tricks would be beneficial for the paper.
2. Thanks for the answer. It should be added to the main text since stability to hyperparameters is important.
3. I think this "issue" could also be related to optimization stability (or sensitivity to hyperparameters).
4. Sorry, I made a typo, TabR is the non-parametric baseline. I meant that the paper lacks a strong parametric baseline since FT-T is quite expensive, and ResNet is just bad. MLP-PLR is a good baseline; thanks for the provided results.
5. Inference time seems reasonable, thanks for the results.
Overall, in my opinion, the main weakness (the first one) remains and, thus, I keep my score unchanged.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for taking the time to consider our response.
Regarding the fairness of our comparison between our MLP and other neural networks, we do not strive to be fair between different architectures, and state this in the paragraph on limitations. This paper is not about comparing architectures, but rather offering a good model, and measuring the importance of meta-learning good default hyperparameters.
We tried some of our tricks on top of TabR-S-D on our meta-train-class benchmark. The result is slightly better than the TabR-S-D version with only our preprocessing, which is already better than TabR-S-D. We are not sure how to turn this into a fair comparison with a reasonable amount of effort. | null | null | null | null | null | null |
Graph Structure Inference with BAM: Neural Dependency Processing via Bilinear Attention | Accept (poster) | Summary: The paper presents a novel neural network model designed for supervised graph structure inference. The core contribution is the introduction of a Bilinear Attention Mechanism (BAM) which processes dependency information at the level of covariance matrices of transformed data. This approach respects the geometry of the manifold of symmetric positive definite matrices, leading to robust and accurate inference of both undirected graphs and completed partially directed acyclic graphs (CPDAGs). The model is trained on variably shaped and coupled simulated input data and requires only a single forward pass for inference. Empirical evaluations demonstrate the model's robustness in detecting a wide range of dependencies, outperforming several state-of-the-art methods.
Strengths: - The introduction of the Bilinear Attention Mechanism (BAM) is novel and aims to address key challenges in graph structure inference.
- The empirical evaluations are comprehensive, demonstrating the robustness and effectiveness of the proposed method.
- The paper is well-written. The figures depicted contribute to clearly demonstrating the pipeline proposed.
- The method provides a significant improvement over existing approaches.
Weaknesses: - The method's complexity, particularly in terms of the bilinear attention mechanism and its application, may limit its accessibility to a broader audience.
- The training process, requiring extensive computational resources, may pose a challenge for researchers with limited access to high-performance computing facilities.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the performance of the model scale with larger and more complex graph structures?
2. Please summarize the research motivation and major contributions of this work.
3. Can the authors provide more details on the computational requirements and potential optimizations for the training process?
4. Considering the high computational cost and significant memory resources required during the training phase (e.g., approximately 80 GB of GPU memory), what strategies or architectural modifications do you plan to implement to mitigate these computational demands without compromising the model's performance in the future?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed the limitations and potential negative societal impact of their work. However, more detailed discussions on the scalability and computational requirements could be beneficial. Constructive suggestions for improvement include exploring optimizations for the training process and providing additional real-world case studies to validate the method's applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer wDfW,
Thank you for the insightful and constructive feedback. We sincerely appreciate your recognition of the novelty and effectiveness of our Bilinear Attention Mechanism (BAM) for graph structure inference.
**Research motivation:** We agree that the paper will benefit from a clearer explanation of the research motivation, particularly regarding why we introduce the BAM layer in the SPD space for graph structure inference. We have addressed this in the overall author rebuttal.
**Major contributions:** Our primary contribution lies in the introduction of the novel Bilinear Attention Mechanism (BAM) layer. Additionally, we propose the innovative application of geometric deep learning methods, inspired by the computer vision field, in the SPD space for supervised causal learning. Moreover, we introduce a novel learning task that respects identifiability and distinguishes between moralized and skeleton edges, enabling more precise causal structure inference. In the final version of our paper, we will enhance our presentation to ensure that these essential contributions are more readily accessible to the reader.
**Computational complexity and resource requirements**
We acknowledge the potential limitations of our method in terms of its computational complexity and the requirement for extensive computational resources during training, which is a common challenge in deep learning, particularly when compared to the vast computational architectures employed by large companies in fields such as computer vision and GPT models.
Nevertheless, we would like to highlight that our approach can be effectively scaled down to accommodate more modest computational resources. By reducing model and training generator parameters such as the maximum number of samples ($\overline{M}=500$), maximum data dimension ($\overline{d}=50$), and number of channels ($C=50$), we have successfully conducted training on a standard MacBook equipped with an M1 chip and 16 GB memory. We utilized Apple Metal to facilitate GPU-like operations on the M1 chip. This approach is also feasible on computers equipped with a suitable NVIDIA graphics card, offering an alternative for efficient training without the need for a dedicated computing server.
Although this scaled-down version may not achieve the same level of performance as the high-resource setup for large dataset regimes with large $M$ and $d$ values, it showcases the flexibility and adaptability of our approach to various computational constraints.
**Memory complexity:**
The overall memory complexity of our proposed method is $O(CMd + Cd^2 + Md^2 + M^2d+C^2)$, where $M$ is the number of datapoints, $d$ is the data dimension, and $C$ is the number of channels. This complexity arises from the following components: attention-between-attributes $O(CMd+Md^2)$, attention-between-datapoints $O(CMd+M^2 d)$, covariance matrix calculation $O(C(Md+d^2))$, bilinear attention $O(Cd^2)$, and matrix logarithm $O(Cd^2)$. These complexities are derived from the matrix shapes used in Figure 3 of the paper. Additionally, $C\times C$ weight matrices are used, which require $O(C^2)$ memory. We thank the reviewer for drawing our attention to the incorrect memory complexity given in line 311, which will be corrected in the final version of the paper.
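As a rough, illustrative back-of-the-envelope, one can compare the element counts of these memory terms for the scaled-down generator settings mentioned elsewhere in this rebuttal ($M=500$, $d=50$, $C=50$); this is only a sketch of the asymptotic terms, not a measurement of the actual implementation's memory use:

```python
# Compare element counts of the memory-complexity terms above, using
# the illustrative scaled-down settings M=500, d=50, C=50.
M, d, C = 500, 50, 50
terms = {
    "C*M*d": C * M * d,    # shared activations of the attention layers
    "C*d^2": C * d ** 2,   # covariance matrices / bilinear attention
    "M*d^2": M * d ** 2,   # attention-between-attributes
    "M^2*d": M ** 2 * d,   # attention-between-datapoints
    "C^2":   C ** 2,       # channel-mixing weight matrices
}
for name, n in sorted(terms.items(), key=lambda kv: -kv[1]):
    print(f"{name:5s} ~ {n:>12,} elements")
```

At this toy scale the $M^2 d$ term dominates, which is consistent with attention-between-datapoints being identified as the bottleneck under "Handling larger datasets".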
**Computational time complexity:**
We thank the reviewer for the suggestion to also add an analysis of the computational complexity. The runtime complexity of our approach is $O(C^2 M d+CMd^2+CM^2 d +C^2 d^2 + Cd^3)$. We present a detailed breakdown of the single components:
The complexity of attention-between-attributes is $O(C^2 Md+CMd^2)$. By switching axes, attention-between-datapoints has a time complexity of $O(C^2 Md+CM^2d)$.
In a BAM layer, multiplying the $d\times d\times C$ tensor with a $C\times C$ weight matrix has a complexity of $O(C^2 d^2)$. The bilinear operation in line 174 (for calculating the attention matrix and the output of the BAM layer) is defined as $C$-parallel computation of two $d\times d$ matrix multiplications, resulting in a complexity of $O(Cd^3)$. Computing the custom softmax can be expressed as $d\times d$ matrix multiplications in $C$ channels, so its complexity is $O(Cd^3)$. Calculating the matrix logarithm is equivalent to computing an eigendecomposition, which has a complexity of $O(Cd^3)$ when performed over $C$ channels.
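Two of the building blocks listed above can be illustrated in isolation: mixing the $C$ channels of a $d\times d\times C$ tensor with a $C\times C$ weight matrix, and the matrix logarithm via eigendecomposition. The following is a hedged NumPy sketch with made-up shapes, not the actual BAM layer:

```python
import numpy as np

def spd_matrix_log(S):
    """Matrix logarithm of one SPD matrix via eigendecomposition (O(d^3))."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

C, d = 4, 5                                     # toy channel count / dimension
rng = np.random.default_rng(0)
A = rng.normal(size=(C, d, d))
S = A @ A.transpose(0, 2, 1) + d * np.eye(d)    # C random SPD matrices

W = rng.normal(size=(C, C))                     # C x C channel-mixing weights
mixed = np.einsum("ck,kij->cij", W, S)          # mix channels of the tensor

logS = np.stack([spd_matrix_log(S[c]) for c in range(C)])  # per-channel logs
```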
**Handling larger datasets:**
We acknowledge the importance of reducing computational complexity for large, high-dimensional datasets. We propose using local attention instead of global attention to address those cases.
Consider a dataset with a large sample size $M$. In such cases, the attention-between-datapoints layer becomes a bottleneck due to its $O(M^2 d)$ memory complexity and $O(CM^2 d)$ computational complexity. To alleviate this, we propose dividing the sample axis $M = ms$ randomly into $s$ subsamples, each containing $m$ samples. Applying local attention to these smaller segments reduces the runtime complexity to $O(Cm^2 ds)=O(\frac{CM^2d}{s})$ and the memory complexity to $O(dm^2s)$ for calculating the attention matrix between samples. By maintaining a fixed subsample size $m$, our approach achieves linear scalability with respect to $M$ by increasing $s$ linearly. This technique can be similarly applied to the attention-between-attributes and our bilinear attention methods, enhancing overall efficiency.
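The subsampling scheme described above can be sketched as follows. This is an illustrative toy version using plain dot-product attention and a hypothetical function name, not the paper's actual attention-between-datapoints layer:

```python
import numpy as np

def local_attention(X, s, rng=None):
    """Restrict attention between datapoints to s random subsamples.

    X: (M, d) data matrix. Full attention costs O(M^2 d) for the attention
    matrix; splitting the M samples into s groups of m = M // s reduces this
    to s matrices of size m x m, i.e. O(M^2 d / s) overall.
    """
    rng = np.random.default_rng(rng)
    M, d = X.shape
    assert M % s == 0, "for simplicity, require M divisible by s"
    perm = rng.permutation(M)
    out = np.empty_like(X)
    for group in perm.reshape(s, -1):          # s random groups of m samples
        Xi = X[group]                          # (m, d)
        scores = Xi @ Xi.T / np.sqrt(d)        # (m, m) local attention matrix
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        out[group] = weights @ Xi              # attend only within the group
    return out
```

Keeping the group size $m$ fixed and growing $s$ with $M$ gives the linear scaling described above.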
In the revised manuscript, we will incorporate the computational complexity analysis and provide a detailed discussion on efficiency in the appendix. Furthermore, we will address the limitations related to computational complexity, including the potential of local attention as a mitigating approach.
Best,
Authors of Submission15606 | Summary: This paper studies the problem of graph structure inference using a neural network with bilinear attention mechanism.
Strengths: The proposed framework is novel and can operate in Euclidean and positive semi-definite matrix spaces. Attention mechanism is utilized in an innovative manner to reveal interdependencies. Experiments demonstrate that the proposed framework achieves competent and robust performance.
Weaknesses: 1. Could you clarify whether BAM has any theoretical guarantees on the recovery of correct structure? In the absence of such guarantees, it is not clear how BAM might perform in settings beyond those explored in the paper.
2. The advantages of BAM over the state-of-the-art approaches could be explained better from a conceptual perspective.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations have been adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 2mp1,
Thank you for recognizing the novelty and innovation of our framework. We appreciate the constructive and insightful feedback, on clarifying the advantages of our BAM layer in the SPD space and the theoretical guarantees of our approach.
**Significance and advantages of SPD layers:**
We agree that the paper will benefit from a clearer justification of the integration of SPD layers and the Bilinear Attention Mechanism (BAM) in our neural network architecture, highlighting their advantages. We have addressed this in the overall author rebuttal.
**Theoretical guarantees:**
Thank you for raising the important question of whether our BAM framework provides theoretical guarantees on the recovery of the correct graph structure. We acknowledge that the current paper does not include formal proofs of identifiability or consistency. We have addressed the challenges in theoretically proving identifiability and consistency in the Limitations section. The universal approximation theorem for neural networks provides some assurance that, in principle, a sufficiently expressive neural network can learn any continuous mapping from the data distribution to graph structures. However, several assumptions need to be fulfilled to ensure convergence to such a solution.
Moreover, the theorem pertains to performance on the training data, and generalization to out-of-sample data is a different challenge. Rigorously extending these results to our specific architecture with observational attention and bilinear attention layers is non-trivial.
That said, we believe our extensive empirical evaluation provides compelling evidence that BAM is able to reliably recover meaningful graph structures in practice. Across a range of settings, BAM achieved strong performance competitive with or exceeding state-of-the-art baselines. This situation, where empirical performance outpaces theoretical understanding, has been a common phenomenon in the deep learning field. Many deep learning approaches have achieved state-of-the-art results in various domains despite limited theoretical foundations. While we absolutely agree that developing a rigorous theoretical understanding is crucial and the lack thereof is a clear limitation, we also believe that research in deep learning and its applications to new domains should not entirely be avoided due to the absence of a complete theoretical framework.
We will incorporate a more detailed discussion on the advantages of SPD layers and the BAM architecture, as well as an extended analysis of the limitations regarding theoretical guarantees, in the revised manuscript and appendix, following your suggestions.
Best,
Authors of Submission15606
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. After considering it and other reviews, I have raised my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for considering our rebuttal and for raising your score. | Summary: This paper proposes to use supervised causal learning approach to learn causal structure. It takes a dataset as input, and outputs a moral graph. The moral graph is an undirected graph with two types of edges: skeleton edge and moralized edge.
In the technical space:
- Architecture: the approach adopts alternating node-wise and sample-wise attention + covariance in embedded space + bilinear attention via an SPD net.
- Training data: ER sampling to generate graphs, Chebyshev polynomial functions as a basis to generate mechanisms, with Gaussian noise terms.
- Learning task: the main task is a three-class edge classification to output a moral graph.
Experiments show good performance compared with other SOTA competitors, for both moral graph prediction and CPDAG prediction.
Strengths: 1. Supervised causal learning is an interesting area, and it has potentially more impact to the causal discovery community.
2. The learning task respects the identifiability, and the main task is to predict a moral graph, which is new to the best of my knowledge.
3. A novel NN architecture is interesting and could make potential impact to the community.
Weaknesses: 1. Superiority of model architecture: What is the significance and necessity of using the symmetric positive definite (SPD) matrix space? How it "enhances graph structure inference" should be explained in more detail. Compared with only using alternating attention, what advantages does BAM have, and in what scenarios is it more suitable? The theoretical discussion and experiments in the current paper are not enough to answer the above questions.
2. The necessity of introducing moralized edges: Overall I like the idea of representing the undirected graph by the moral graph, which is new and sound. However, given that the paper also has a second prediction model that orients the moral graph to yield the final CPDAG, what are the advantages compared to learning the skeleton and v-structures separately? I would like to see more discussion and some empirical evidence to demonstrate the advantages of choosing the moralized graph instead of the skeleton as the key learning task.
3. The motivation for choosing Chebyshev polynomials as the mechanism of the training data: For example, in "E.1 Chebyshev polynomials for training", this Chebyshev polynomial form does not seem able to describe complex multivariate coupling effects. In addition, there should be corresponding experiments on the Chebyshev hyperparameter settings to show whether training is robust or sensitive to them.
4. In the experimental part, a large amount of synthetic data is used to test performance, but the functions (mechanisms) of these synthetic data are relatively simple. More complex or more representative mechanisms should be used for testing to enhance persuasiveness, such as MLP, Random Fourier features, etc.
5. In addition, the experimental part is all tested on synthetic data, and should be tested on real benchmarks, such as Sachs, etc.
Technical Quality: 3
Clarity: 4
Questions for Authors: See my comments on the weakness part above.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer rbW5,
Thank you for the insightful and constructive feedback. We sincerely appreciate your recognition of the potential impact of our approach.
**Significance of SPD layers:**
We agree that the paper will benefit from a clearer justification of the integration of SPD layers and the Bilinear Attention Mechanism (BAM) in our neural network architecture. We have addressed this in the overall author rebuttal.
**Evidence for the effectiveness of moralized edge estimation:**
Thank you for acknowledging our contribution in distinguishing between skeleton and moralized edges in the undirected edge estimation task. We agree that empirical evaluation would further demonstrate its effectiveness. To address this, we ran additional experiments where we maintained the undirected edge estimation network but modified the second step for estimating immoralities. We trained another neural network to test each possible immorality in the graph, not just those identified by the first neural network, using the same hyperparameters for training as before. The results, shown in Table 1 of the rebuttal PDF (at the bottom of the author rebuttal), indicate that our approach of only testing immoralities detected by the first neural network consistently outperformed the case where all possible moralizations were tested.
We observed that the neural network testing all possible moralizations had a strong bias towards estimating no moralization, likely because this occurs more frequently in sparse graphs used for training. While this bias could potentially be addressed using a weighted loss function, it highlights the efficiency of our approach in avoiding these issues. Information about conditional dependencies might be easier to detect "globally" (i.e., having information about the whole graph, not only the Markov blanket estimated beforehand), akin to the Gaussian case, where this information is contained in the precision matrix.
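The Gaussian analogy above can be made concrete: in a multivariate Gaussian, conditional independences appear as zeros in the precision (inverse covariance) matrix. A small illustrative sketch, not part of the paper's method, for a linear-Gaussian chain $X_1 \to X_2 \to X_3$:

```python
import numpy as np

# For a chain X1 -> X2 -> X3 with Gaussian noise, X1 and X3 are marginally
# dependent but conditionally independent given X2, so the (1, 3) entry of
# the precision matrix is (approximately) zero.
rng = np.random.default_rng(0)
M = 200_000
x1 = rng.normal(size=M)
x2 = 0.8 * x1 + rng.normal(size=M)
x3 = 0.8 * x2 + rng.normal(size=M)
X = np.stack([x1, x2, x3], axis=1)

prec = np.linalg.inv(np.cov(X, rowvar=False))
print(np.round(prec, 2))  # near-zero in positions (0, 2) and (2, 0)
```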
**Multidimensional couplings:**
We appreciate the reviewer's suggestion to test our approach on more complex and representative mechanisms. We agree that this would enhance the persuasiveness of our experimental results. To address this point, we conducted additional experiments, as shown in Figure 1 of the rebuttal PDF. In these experiments, we tested our approach for detecting Random Fourier feature dependencies.
For the undirected case, shown in (A, B, C) of Figure 1, our algorithm outperforms the baselines for all sample sizes except the two highest, $M=500, 1000$, where our method still produces very good results. We believe that for this sample size, our approach might overfit the training data, becoming more confident in not estimating an edge when it behaves differently from what was seen during training.
Regarding the directed estimation results shown in (D, E, F), our algorithm still exhibits reasonable performance, although it is not the best when compared to the baselines. This highlights opportunities for improvement for multidimensional dependencies, which we will discuss in the limitations section.
Additionally, we have already tested our approach on MLP dependencies, as shown in Appendix G.2 of our paper, where our approach demonstrates competitive performance.
**Rationale behind using Chebyshev polynomials:**
Chebyshev polynomials exhibit factorially decreasing coefficients when approximating functions with bounded derivatives of any order. By sampling the coefficients from a uniform distribution with factorial decay, we can generate coupling functions that are smooth and well-behaved with a large degree of variation. This ensures that the generated coupling functions are diverse, without being dominated by high-frequency oscillations or other unstable behavior.
The orthogonality of Chebyshev polynomials is a desirable property for our application, as it allows us to easily evaluate the magnitude of individual effects and ensures that the effects do not interfere with each other in an unstable manner. This is particularly important for controlling the magnitude of the causal effects, ensuring that an edge in the adjacency matrix corresponds to a meaningful, controllable effect in the generated data. Training on disrupted data, where an edge must be predicted without being predictable, causes problems for learning.
In contrast, using e.g. a randomly initialized MLP as the coupling may not provide the same level of control and interpretability. While the universal approximation theorem suggests that MLPs can represent a wide range of functions, the expressive power of a randomly initialized MLP is limited. The weights are typically drawn from simple distributions like Gaussian or uniform, which can lead to similar patterns in the network's behavior across different initializations. As a result, the simulated relationships may be opaque and of limited expressiveness.
Our experiments demonstrate that Chebyshev polynomials can approximate a wide range of simple and complex dependencies, including random MLP and RFF dependencies. However, we acknowledge our algorithm's limitations in detecting multidimensional dependencies, particularly in the RFF dependency case for directed edge estimation.
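A minimal sketch of the coefficient-sampling idea above (uniform coefficients scaled by factorial decay, evaluated as a Chebyshev series), shown for a single one-dimensional coupling; the actual training generator may differ in its exact scheme and hyperparameters:

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as cheb

def sample_coupling(max_degree, rng=None):
    """Sample a smooth random coupling f(x) = sum_k c_k T_k(x), where the
    coefficients c_k are uniform on [-1/k!, 1/k!] (factorial decay)."""
    rng = np.random.default_rng(rng)
    decay = np.array([1.0 / math.factorial(k) for k in range(max_degree + 1)])
    coeffs = rng.uniform(-1.0, 1.0, size=max_degree + 1) * decay
    return lambda x: cheb.chebval(x, coeffs)

f = sample_coupling(6, rng=0)
x = np.linspace(-1.0, 1.0, 5)
y = f(x)  # smooth, bounded values: |f(x)| <= sum_k 1/k! < e on [-1, 1]
```

Since $|T_k(x)| \le 1$ on $[-1, 1]$, the factorial decay bounds the magnitude of the sampled coupling, keeping individual causal effects controllable.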
**Evaluation on real benchmarks:**
We acknowledge the suggestion to test on real-world benchmark datasets. While we recognize the importance of testing on real data, due to time constraints and the scope of this submission, we focused on synthetic data to validate our method.
For real-world data, the true graph structure is often uncertain, and there are many random effects, making principled evaluation challenging. Synthetic data allows for a more controlled evaluation of detection accuracy. Nevertheless, we agree that real-world data are crucial and will be the next step in our future studies.
We will incorporate all points discussed above in the final version of our paper.
Best,
Authors of Submission15606 | Summary: This paper proposes a novel neural network model for supervised graph structure inference. The model aims to learn the mapping between observational data and their underlying dependence structure using a bilinear attention mechanism (BAM). The BAM operates on the level of covariance matrices of transformed data and respects the geometry of the manifold of symmetric positive definite (SPD) matrices. The paper demonstrates the robustness of the proposed method in detecting dependencies and inferring graph structures.
Strengths: 1. The introduction of the bilinear attention mechanism for processing dependency information in the SPD space is novel and well-motivated. This approach leverages the geometric properties of SPD matrices, offering a fresh perspective on graph structure inference.
2. The paper provides extensive experimental results, showing that the proposed method outperforms state-of-the-art algorithms in various scenarios.
Weaknesses: Although the paper claims that the proposed method is computationally efficient compared to some unsupervised approaches, the actual computational cost, especially during training, is unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Provide a more detailed analysis of the theoretical computational complexity and practical scaling ability of the proposed method.
2. Discuss potential strategies for optimizing the computational efficiency and handling larger datasets effectively.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer wqVr,
Thank you for the insightful and constructive feedback. We greatly appreciate your recognition of the potential of our approach.
**Efficiency:**
We acknowledge that the level of computational resources required for our method (approximately 82 GiB, as stated in lines 309-312) may be a limiting factor in cases where such resources are not readily available. This is indeed an advantage of traditional causal learning algorithms. Deep learning models, particularly attention-based architectures, which exhibit quadratic complexity with respect to sequence length, require significant computational resources to achieve optimal performance. However, when coupled with large computational resources, deep learning has demonstrated superior performance across various domains, such as natural language processing (e.g., GPT-4) and computer vision (e.g., state-of-the-art object detection and segmentation models).
**Memory complexity:**
The overall memory complexity of our proposed method is $O(CMd + Cd^2 + Md^2 + M^2d+C^2)$, where $M$ is the number of datapoints, $d$ is the data dimension, and $C$ is the number of channels. This complexity arises from the following components: attention-between-attributes $O(CMd+Md^2)$, attention-between-datapoints $O(CMd+M^2 d)$, covariance matrix calculation $O(C(Md+d^2))$, bilinear attention $O(Cd^2)$, and matrix logarithm $O(Cd^2)$. These complexities are derived from the matrix shapes used in Figure 3 of the paper. Additionally, $C\times C$ weight matrices are used, which require $O(C^2)$ memory. We thank the reviewer for drawing our attention to the incorrect memory complexity given in line 311, which will be corrected in the final version of the paper.
**Computational time complexity:**
We thank the reviewer for the suggestion to also add an analysis of the computational complexity. The runtime complexity of our approach is $O(C^2 M d+CMd^2+CM^2 d +C^2 d^2 + Cd^3)$. We present a detailed breakdown of the individual components:
We first consider the computational complexity of the tensor multiplication defined in lines 127-128: a tensor $\boldsymbol{A}\in\mathbb{R}^{M\times d \times C}$ is multiplied by a matrix $\boldsymbol{B}\in \mathbb{R}^{C\times C}$. This operation involves multiplying each of the $M$ slices $\in\mathbb{R}^{d\times C}$ of $\boldsymbol{A}$ by $\boldsymbol{B}$. The multiplication of a $d\times C$ matrix by a $C\times C$ matrix has a time complexity of $O(C^2 d)$. Since this multiplication is performed in parallel over the M axis, the total complexity of this tensor multiplication is $O(C^2 M d)$.
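As a toy check of this parallel slice multiplication (our own numpy sketch; the names `A`, `B` and the toy sizes are illustrative, not taken from the paper):

```python
import numpy as np

np.random.seed(0)
M, d, C = 8, 5, 4                 # toy sizes; the real M, d, C are much larger

A = np.random.randn(M, d, C)      # tensor of M slices, each of shape d x C
B = np.random.randn(C, C)         # channel-mixing weight matrix

# Multiply each of the M slices by B, in parallel over the M axis.
# Each slice costs O(C^2 d) scalar multiplications, so the total is O(C^2 M d).
out = np.einsum('mdc,ck->mdk', A, B)

assert out.shape == (M, d, C)
```

The single `einsum` call performs all $M$ slice multiplications in one vectorized operation, matching the stated $O(C^2 M d)$ count.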
Similarly, the parallel computation of the attention matrix $\boldsymbol{K}\odot\boldsymbol{Q}$ for attention-between-attributes (line 498, matrix $\boldsymbol{A}$ in Figure 3 left) consists of $M$ parallel matrix multiplications of $d\times C$ with $C\times d$ matrices, resulting in a complexity of $O(CMd^2)$. Multiplying the attention matrix with the values (line 503) is an M-parallel computation of a $M\times d\times d$ tensor with a $M\times d \times C$ tensor, having complexity $O(CMd^2)$. Thus, the overall complexity of attention-between-attributes is $O(C^2 Md+CMd^2)$. By switching axes, attention-between-datapoints has a time complexity of $O(C^2 Md+CM^2d)$.
Calculating $C$ covariance matrices has a complexity of $O(CMd\cdot\min(M,d))$. In a BAM layer, multiplying the $d\times d\times C$ tensor with a $C\times C$ weight matrix has a complexity of $O(C^2 d^2)$. The bilinear operation in line 174 (for calculating the attention matrix and the output of the BAM layer) is defined as a $C$-parallel computation of two $d\times d$ matrix multiplications, resulting in a complexity of $O(Cd^3)$. Computing the custom softmax can be expressed as $d\times d$ matrix multiplications in $C$ channels, so its complexity is $O(Cd^3)$. Calculating the matrix logarithm is equivalent to computing an eigendecomposition, which has a complexity of $O(Cd^3)$ when performed over $C$ channels.
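The eigendecomposition route to the matrix logarithm mentioned above can be sketched in a few lines (a hedged numpy illustration of ours with toy sizes, not the paper's implementation):

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition, O(d^3)."""
    w, V = np.linalg.eigh(S)       # S = V diag(w) V^T, with w > 0 for SPD S
    return (V * np.log(w)) @ V.T   # log(S) = V diag(log w) V^T

d = 4
np.random.seed(0)
X = np.random.randn(10, d)
S = X.T @ X / 10 + np.eye(d)       # a safely positive definite matrix

L = spd_logm(S)
# Sanity check: the matrix exponential of log(S) recovers S.
w, V = np.linalg.eigh(L)
assert np.allclose((V * np.exp(w)) @ V.T, S)
```

Running this over $C$ channels repeats the $O(d^3)$ eigendecomposition $C$ times, giving the $O(Cd^3)$ term above.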
In summary, the overall time complexity of our approach is $O(C^2 M d+CMd^2+CM^2 d +C^2 d^2 + Cd^3)$.
**Handling larger datasets:**
We acknowledge the importance of reducing computational complexity for large, high-dimensional datasets. We propose using local attention instead of global attention to address those cases.
Consider a dataset with a large sample size $M$. In such cases, the attention-between-datapoints layer becomes a bottleneck due to its $O(M^2 d)$ memory complexity and $O(CM^2 d)$ computational complexity. To alleviate this, we propose dividing the sample axis $M = ms$ randomly into $s$ subsamples, each containing $m$ samples. Applying local attention to these smaller segments reduces the runtime complexity to $O(Cm^2 ds)=O(\frac{CM^2d}{s})$ and the memory complexity to $O(dm^2s)$ for calculating the attention matrix between samples. By maintaining a fixed subsample size $m$, our approach achieves linear scalability with respect to $M$, since $s$ then grows linearly with $M$. This technique can be similarly applied to the attention-between-attributes layer and our bilinear attention, enhancing overall efficiency.
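The local-attention idea above can be sketched as follows (our own minimal single-head numpy version with toy shapes; the actual layers operate on multi-channel tensors):

```python
import numpy as np

def local_attention(Q, K, V, s):
    """Attention over the sample axis, restricted to s random subsamples.

    Q, K, V: (M, d) arrays. Full attention builds an M x M score matrix,
    O(M^2 d) time; splitting M into s blocks of m = M // s samples reduces
    this to s blocks of m x m scores, i.e. O(M^2 d / s).
    """
    M, d = Q.shape
    m = M // s
    perm = np.random.permutation(M)               # random partition of samples
    out = np.empty_like(V)
    for i in range(s):
        idx = perm[i * m:(i + 1) * m]
        scores = Q[idx] @ K[idx].T / np.sqrt(d)   # m x m instead of M x M
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)         # row-wise softmax
        out[idx] = w @ V[idx]
    return out
```

With `s=1` this reduces exactly to full attention; larger `s` trades some cross-block interaction for the linear scaling in $M$ described above.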
In the revised manuscript, we will incorporate the computational complexity analysis and provide a detailed discussion on efficiency in the appendix. Furthermore, we will address the limitations related to computational complexity, including the potential of local attention as a mitigating approach.
Best,
Authors of Submission15606
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. It effectively addresses my questions. I will maintain my acceptance rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and your positive assessment. | Rebuttal 1:
Rebuttal: We appreciate the time and effort the reviewers have invested in evaluating our work. We are grateful for your constructive feedback and insightful suggestions, which have helped us identify areas for improvement.
We have added a PDF at the end of the author rebuttal, which includes a comparison of the identification of moralizations and evaluations on data using Random Fourier feature dependencies.
In response to the reviewers' requests for further elaboration, we provide a more detailed explanation of the motivation, rationale, and advantages of our approach.
**Motivation for Optimizing in the SPD Space for Graph Estimation: GLASSO optimization:**
We acknowledge the importance of clearly justifying the integration of SPD-layers in the neural network architecture.
The use of SPD layers is inspired by the success of traditional GLASSO-based methods for undirected graph estimation, such as the influential works of Banerjee et al. (2008), Friedman et al. (2008), and Yuan and Lin (2007). These methods optimize a penalized likelihood over the SPD space using techniques like block coordinate descent, ensuring that the estimate remains positive definite at each iteration. These approaches have proven to be efficient for learning graph structures when the input SPD matrix encodes sufficient information, particularly in the Gaussian case where the covariance matrix is a sufficient statistic for the distribution (neglecting a shift by the mean). The main reason optimization is performed solely in the SPD space is that inverse covariance matrices naturally represent graphs. Zero entries indicate missing edges and non-zero entries correspond to the strength of conditional dependencies. This characteristic makes them a powerful tool for estimating well-conditioned and interpretable structures within the SPD space, particularly when combined with $l_1$ regularization to promote sparsity.
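As a toy illustration of this zero-pattern correspondence (a self-contained numpy example of ours, not taken from the submission): a tridiagonal precision matrix encodes a chain graph 0-1-2, so nodes 0 and 2 are marginally correlated yet share no edge.

```python
import numpy as np

# Precision (inverse covariance) matrix of a chain graph 0-1-2:
# the zero entry (0, 2) means there is no edge between nodes 0 and 2,
# i.e. they are conditionally independent given node 1.
Theta = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

Sigma = np.linalg.inv(Theta)     # covariance matrix: fully dense

assert abs(Sigma[0, 2]) > 1e-6                   # marginally correlated
assert abs(np.linalg.inv(Sigma)[0, 2]) < 1e-9    # but no edge 0-2
```

This is exactly why GLASSO-style methods regularize the precision matrix rather than the covariance: sparsity in the former is directly readable as graph structure.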
However, for general SEMs with non-linear dependencies and non-Gaussian error terms, pure SPD models cannot capture the full information of the data. Neural networks, on the other hand, excel at learning and decoding complex non-linear relationships. The observational attention layers in our model are trained to nonlinearly transform the data into a batch of $C$ covariance matrices, learning an end-to-end observational transformation that filters the information useful for graph prediction and provides sufficient information for efficiently solving the undirected graph estimation task.
Thus, our BAM-network learns data-driven matrix transformations akin to those used in solving the GLASSO optimization problem. Our network implicitly encodes the solution to a parametric optimization problem. For some objective function $F$ (e.g. penalized negative log-likelihood for GLASSO), we optimize $\min_{\Sigma} F(\Sigma, \mathcal{D})$ with respect to $\Sigma$, where the solution is parametric in the data $\mathcal{D}$, or any sufficient statistics thereof. The solution describes a mapping $\mathcal{D} \rightarrow \hat{\Sigma}$, with $\hat{\Sigma} = \arg\min_{\Sigma} F(\Sigma, \mathcal{D})$ ranging over the optimization solutions for different data $\mathcal{D}$. Hence, instead of solving the SPD GLASSO optimization for each new dataset $\mathcal{D}$ to find the corresponding $\hat{\Sigma}$, the network learns the entire mapping $\hat{\Sigma}(\mathcal{D})$, requiring only a single cheap forward pass through a neural network for evaluation on a specific $\mathcal{D}$. Additionally, a transformation in the observational data space to handle non-Gaussian data is learned at the beginning. It is worth noting that the operations for solving GLASSO problems by a traditional algorithm can be applied to input data of any dimension $d$ and any sample size $M$. Our network also achieves this by applying only attention-based, shape-invariant operations.
Another advantage of our end-to-end learning approach over traditional likelihood-based methods is that it does not require a sensitive sparsity hyperparameter.
**Advantages of Optimizing in the SPD Space for Graph Estimation: Enhanced expressiveness and natural representation:**
In traditional supervised causal discovery, the adjacency matrix values are directly calculated from learned $M \times d \times C$ transformed observational data tensors. This requires decoding the $d \times d$ pairwise relationships within each of the $M$ samples to the point that probabilities can be obtained after a few simple operations. Before the output is calculated, the $M$ axis is max-pooled out (Lorch et al., 2022) or cut out (Ke et al., 2022), resulting in a $C\times d$ data representation in both cases. However, extracting $d(d-1)/2$ distinct pairwise interactions from a $C \times d$ representation by a single operation on the $C$-axis lacks expressiveness, especially when the data dimension $d$ varies. Also, using max-pooling over the sample axis might be problematic for varying sample sizes $M$, because the magnitude of the output values scales with $M$ alone.
In contrast, using a $d \times d$ representation naturally accommodates the varying dimensionality and allows for a more direct and efficient extraction of the pairwise dependencies. We argue that using $C \times d$ representations for inferring $d \times d$ pairwise interactions is structurally misaligned, leading to inefficiencies compared to directly operating on $d \times d$ matrices.
**References**:
Banerjee et al. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. JMLR
Friedman et al. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics
Ke et al. (2022). Learning to Induce Causal Structure. In ICLR.
Lorch et al. (2022). Amortized inference for causal structure learning. In NeurIPS
Yuan \& Lin (2007). Model selection and estimation in the Gaussian graphical model. Biometrika
Pdf: /pdf/820eeba36ca9901cd6869d2b3bb7cd0840ba1675.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation | Accept (poster) | Summary: This paper explores the potential of the Latent Diffusion Model for Few-shot Semantic Segmentation. The authors study four crucial elements of applying the Diffusion Model to Few-shot Semantic Segmentation and propose several reasonable solutions for each aspect.
Based on their observations, the authors establish the DiffewS framework, which maximally retains the generative framework and effectively utilizes the pre-training prior.
Experimental results demonstrate that the proposed method outperforms the previous SOTA models in multiple settings.
Strengths: - The topic is interesting and the proposed method is novel. How to better utilize the diffusion model for perception tasks is currently a promising direction.
And this paper takes a first step to apply diffusion model to few-shot semantic segmentation.
- The writing is clear and easy to follow. The author presents clear and logical thinking, starting from four key elements of implementing FSS, proposing a series of reasonable solutions, and validating the effectiveness of these solutions through experiments. The KV-fusion strategy, the supervisory form of the query mask, or the single-step generation strategy, all of these provide certain insights for subsequent research on FSS and visual perception.
- The Diffews method has demonstrated strong performance in in-context settings, especially in in-domain scenarios. It also shows excellent performance in strict few-shot settings and converges faster compared to traditional methods.
Weaknesses: - Some places lack proper references. For example, in L110, "v-prediction," we typically cite Tim Salimans and Jonathan Ho's work: "Progressive distillation for fast sampling of diffusion models" (ICLR, 2022).
In L241, I noticed that the appendix contains an analysis of the training objectives for different generation processes, and a reference should be added here.
- In Section 4.4, L257, the variances $\beta_1$ and $\beta_2$ are not clearly defined. Since $\beta_t$ in L106 represents the variance at time t, it would be better to use $\beta^1$ and $\beta^2$ to denote these two different settings here. Additionally, the meaning of the two values within the tuple is not explained. In my view, $\beta^1 = (\beta^{start},\beta^{end})$ should represent the initial and final values of the DDIM scheduler's $\beta$ .
Furthermore, it is necessary to explain the intuitive differences between these two settings.
- Other typos:
L273, "we tested" -> "we test"
L151, "linear projection layer. ." ->"linear projection layer"
L279, "function as"-> "functions as"
Technical Quality: 4
Clarity: 4
Questions for Authors: I've noticed that the current method doesn't perform well enough on 5-shot, can this be mitigated by making adjustments during training?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: NaN
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your acknowledgment of our motivation and approach. We will try our best to address the questions and weaknesses you have raised.
**W1: Lack of proper references**
Thank you for your feedback. We will add appropriate references in the next version.
**W2: Definition of $\beta_{1}$ and $\beta_{2}$**
Yes, you are correct. We will clearly define $\beta^{1}$ and $\beta^{2}$ in Section 4.4 and explain the intuitive meanings of the two values. $\beta^{1}=(\beta_{start},\beta_{end})$, where $\beta_{start}$ and $\beta_{end}$ respectively represent the initial and final values of $\beta$ in the DDIM scheduler. We also mentioned this in our discussion with Reviewer uveh. Intuitively, $\beta^{2}$ adds more noise at smaller time steps compared to $\beta^{1}$. Experimental results show that adding more noise at smaller time steps improves the model's performance, which is why we ultimately chose OI2M.
**W3: Other typos**
Thank you once again. We will fix these errors in the next version.
**Questions: 5-shot performance**
We are grateful for your question. We have also mentioned this issue in our response to Reviewer zc26. We have tried two approaches to address the N-shot setting issue.
**Inference Time Improvement**
The first approach is an improvement in the inference phase. Considering the transition from 1-shot to n-shot, where additional support samples' Keys and Values are concatenated (see Appendix A.6), this leads to a significantly larger number of Keys and Values for the self-attention layer to process during inference compared to the training phase. An intuitive solution is to randomly sample the Keys and Values of the support samples at each layer during inference, ensuring the number of samples matches the number of Keys and Values during training. We found that this approach indeed improves the model's performance in 5-shot and 10-shot settings. The experimental results are shown in the table below.
Table: Performance of DiffewS with Inference Time Improvement
| | 1-shot | 5-shot | 10-shot |
|---------------------|--------|--------|---------|
| **COCO** | | | |
| Diffews (ori) | 71.3 | 72.2 | 70.1 |
| Diffews (sample) | 71.3 | 74.7 | 73.4 |
| **PASCAL** | | | |
| Diffews (ori) | 88.3 | 87.8 | 87.2 |
| Diffews (sample) | 88.3 | 89.4 | 89.6 |
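The inference-time subsampling described above could be sketched roughly as follows (the function name, shapes, and uniform sampling scheme are our own illustrative assumptions, not the released implementation):

```python
import numpy as np

def subsample_support_kv(K_list, V_list, m):
    """Randomly subsample concatenated support keys/values down to m rows.

    K_list, V_list: per-support-image key/value arrays, each of shape (L, d).
    In n-shot inference the concatenation has n*L rows; sampling m of them
    (with matching indices for keys and values) matches the number of
    keys/values the self-attention layers saw during 1-shot training.
    """
    K = np.concatenate(K_list, axis=0)
    V = np.concatenate(V_list, axis=0)
    idx = np.random.choice(K.shape[0], size=m, replace=False)
    return K[idx], V[idx]
```

Keeping the same `idx` for keys and values preserves valid key-value pairs, which is essential for the attention computation to stay meaningful.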
Another more straightforward idea is to introduce multiple support samples during the training phase. This way, the model can learn how to select among multiple support images during training. To save computational resources, we conducted experiments under the ablation study setting on the COCO fold0. The experimental results are shown in the table below.
Table: Performance of DiffewS with Training Time Improvement
| | 1-shot | 5-shot | 10-shot |
|---------------------------|--------|--------|---------|
| Diffews (ori, train 1 shot)| 47.7 | 52.0 | 49.1 |
| Diffews (train 1-5 shot) | 46.4 | 57.6 | 55.9 |
| Diffews (train 1-7 shot) | 47.1 | 57.3 | 58.7 |
(train 1-5 shot) indicates that we randomly select 1 to 5 support samples as input during a single training iteration. From the table, we can see that when multiple support samples are introduced during the training phase, the model's performance significantly improves in both 5-shot and 10-shot scenarios. Additionally, as the number of support samples increases during training, the model's performance in the 10-shot scenario gradually improves. We also found that when multiple support samples are introduced during the training phase, the 1-shot performance may decrease. This could be due to the inconsistency between training and inference.
In summary, both approaches effectively alleviate the issues associated with the N-shot setting, demonstrating the potential of DiffewS in Few-shot Semantic Segmentation. We also hope that our work can provide some inspiration for researchers in the fields of diffusion-based models and Few-shot Semantic Segmentation.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed responses. All my concerns were thoroughly addressed. Overall, this work provides clear insights and significant contributions to the field. Therefore, I will maintain my original rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful review and for acknowledging our work. We are delighted that our responses addressed your concerns, and we greatly appreciate your recognition of the insights and contributions our work offers to the field. Your positive feedback is invaluable to us.
We also want to assure you that we will incorporate your feedback and any potential improvements into future versions of our paper. | Summary: This paper explores the potential of diffusion model in Few-Shot Segmentation (FSS) tasks. For achieve better performance, it examines four critical elements of applying the diffusion model to FSS. Building on this research, the paper introduces the DiffewS framework, with experimental results validating its effectiveness. The authors assert that this is the first diffusion-based model specifically designed for Few-shot Semantic Segmentation.
Strengths: This article comprehensively explores four key aspects of adapting the Diffusion Model to FSS tasks: the interaction between query and support images, injection of support mask information, supervision from query masks, and exploration of the generation process. Multiple implementation approaches regarding the above four aspects are studied and compared through experiments. And the relevant experimental results, presented in figures 2, 3, and 4, are convincing.
Weaknesses: The weakness of this article is as follows:
1. Why should the diffusion model be applied to FSS tasks? What advantages does DM offer over Multi-modal fundamental models (like SAM or LLaVA) for FSS tasks? This article's research motivation requires stronger support through theoretical explanations or experimental evidence.
2. The default generation process (OI2M) transforms "gradually approaching ground truth in denoising" to "single-step prediction in segmentation", yielding optimal performance. Does this suggest that DM's multi-step denoising process is unsuitable for FSS? Does this paper not unleash the potential of DM, instead transforming it into a segmentation model? Section 4.4 should include more theoretical explanations.
3. The comparison of inference speed and training cost is important and should be supplemented to illustrate the advantages of DiffewS.
4. Other studies that apply the Diffusion Model to Few-Shot Segmentation (e.g., DifFSS[1], Ref LDM-Seg[2]) need to be compared.
[1] DifFSS: Diffusion Model for Few-Shot Semantic Segmentation. Github: https://github.com/donglongzi/DifFSS-BAM/tree/main
[2] Explore In-Context Segmentation via Latent Diffusion Models. Project Page: https://wang-chaoyang.github.io/project/refldmseg/
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Did the pretrained Stable Diffusion (SD) weights utilized in the model training? How much parameters were updated?
2. What are the experimental settings of the comparing results in Fig2/3/4?
3. In Section 4.4, this paper chose OI2M as the generation pipeline. Since it is a one-step process, what is the timestep setting in this pipeline during training? And did any accelerated generation schedule (such as DDIM) is utilized?
4. Did the few-shot support images being processed as the same as one-shot support image in support image encoding branch? What is the difference of model settings between using few-shot support images and one-shot support image? How about the comparison of computational cost?
5. In Section 5.2, the “In-context” setting means “these specialist models are also trained on the test categories from COCO and PASCAL VOC”. Is it a general experimental dataset setting? And is it a fair comparison for the methods in Table 1?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your very careful and detailed review. We are delighted that you found our work to be comprehensive and convincing. The questions and weaknesses you've pointed out are incredibly helpful to us, and we will do our utmost to address them below.
**W1: Why should the diffusion model be applied to FSS tasks?**
In L43-L53, we briefly introduce the motivation for applying the diffusion model to FSS tasks. We will make this clearer in the revised version. The pretrained diffusion model exhibits significant potential in fine-grained pixel prediction [1,2] and semantic correspondence [3-5]. Both of these capabilities are essential for Few-Shot Segmentation (FSS) tasks, making diffusion models particularly suitable for this application.
For example, SAM has strong fine-grained pixel prediction capabilities but lacks semantic abilities, leading to a series of works attempting to address this issue[6,7]. Therefore, it is difficult to use SAM in FSS. For example, PerSAM[8] is based solely on SAM's capabilities but does not perform well in FSS.
On the other hand, LLaVA has strong semantic abilities but lacks fine-grained pixel prediction capabilities due to its reliance on image-text pairs alone for pretraining, which lack fine-grained supervision. Therefore, LISA [7] needs to rely on SAM to achieve segmentation prediction.
**W2: Multi-step denoising process**
We believe that the supervised objective of the multi-step denoising process is not suitable for FSS, but this does not mean that single-step prediction fails to unleash the potential of the diffusion model (DM). The difference between OI2M and MI2M/MN2M lies only in their training objectives (see Appendix A.2). Compared to OI2M, MI2M and MN2M use noisy target masks as input, which could potentially lead to information leakage in segmentation tasks, especially when the noise level is low, thus hindering the model's learning. The experimental results in Figure 3 also confirm this, showing that setting $\beta_{2}$ is more effective than $\beta_{1}$. Intuitively, $\beta_{2}$ adds more noise at smaller time steps compared to $\beta_{1}$, and OI2M can be understood as an extreme case where the input contains no information about the target mask at all (or the target mask has become pure noise). We will provide more theoretical explanations in Section 4.4.
**W3: Inference speed and training cost**
Thank you for your suggestions. We can compare the advantages of DiffewS in two aspects:
1. Compared to traditional FSS methods, DiffewS converges faster and has lower training overhead. Taking DCAMA [9] as an example, both our model and DCAMA were trained on 4 V100 GPUs. The comparison of training time is as follows:
Table: Training Time Comparison
| Method | Training Time |
|---------|---------------|
| DCAMA | 36h |
| DiffewS | 6h |
2. Compared to other generative methods based on multi-step denoising, DiffewS has a faster inference speed. For instance, considering MI2M and MN2M mentioned in this paper, the inference speed comparison is as follows:
Table: Inference Speed Comparison
| Method | Forward times |
|---------|-----------------|
| OI2M | x1 |
| MI2M | x10 |
| MN2M | x10 |
In our implementation, MI2M and MN2M use DDIM sampling, requiring the model to forward 10 times, while OI2M only needs to forward once. Thus, OI2M should be 10 times faster than MI2M/MN2M.
**W4: Other studies**
Thank you for your valuable suggestions. We will include discussions of DifFSS and Ref LDM-Seg in the Related Work section, as well as comparisons with our work. DifFSS uses generated images as auxiliary support images, which can improve the performance of existing FSS methods to some extent. This approach enhances FSS performance from the data level, making it orthogonal to our method. Ref LDM-Seg is a contemporaneous work to ours. Unlike DiffewS, which utilizes self-attention to achieve interaction between the query and support images, Ref LDM-Seg uses cross-attention for this purpose. Additionally, Ref LDM-Seg introduces multiple linear projection layers, increasing the number of parameters in the diffusion model, which might also disrupt the original priors of the diffusion model. Our work provides a more comprehensive and systematic analysis of applying diffusion models to Few-shot Semantic Segmentation tasks. Considering that Ref LDM-Seg has not been open-sourced, and the training data and experimental setups cannot be strictly aligned, we currently cannot conduct a direct and fair experimental comparison.
**References**
[1]Ke B, Obukhov A, Huang S, et al. Repurposing diffusion-based image generators for monocular depth estimation. CVPR2024
[2]Lee H Y, Tseng H Y, Yang M H. Exploiting Diffusion Prior for Generalizable Dense Prediction. CVPR2024
[3]Tang, Luming, et al. Emergent correspondence from image diffusion. NeurIPS2023
[4]Luo, Grace, et al. Diffusion hyperfeatures: Searching through time and space for semantic correspondence. NeurIPS2023
[5]Zhang, Junyi, et al. A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence. NeurIPS2023
[6]Li F, Zhang H, Sun P, et al. Semantic-sam: Segment and recognize anything at any granularity. ECCV 2024
[7]Lai X, Tian Z, Chen Y, et al. Lisa: Reasoning segmentation via large language model. CVPR 2024
[8]Zhang R, Jiang Z, Guo Z, et al. Personalize segment anything model with one shot. ICLR2024
[9]Shi, Xinyu, et al. Dense cross-query-and-support attention weighted mask aggregation for few-shot segmentation. European Conference on Computer Vision. ECCV2022
**Due to space constraints, we will respond to your valuable questions in the global response.**
---
Rebuttal Comment 1.1:
Comment: Thanks to the author's thorough response, most of my concerns have been addressed. Therefore, I am upholding the original rating.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your thoughtful review and for your recognition of our efforts to address your concerns. We deeply appreciate your professional insights and the constructive feedback that has helped us improve our work. | Summary: This paper introduces DiffewS, a novel Diffusion-based generalist model designed for few-shot segmentation. It systematically examines four key components involved in applying Diffusion Models to Few-shot Semantic Segmentation. For each component, the work proposes several viable solutions, which are validated through extensive experiments. Notably, the work introduces Key-Value Fusion Self-Attention (FSA) to facilitate the interaction between the query and support images.
Strengths: 1. The idea is clear and readily comprehensible
2. The writing is of high quality and the structure is coherent.
3. The experiments are adequate and demonstrate the effectiveness of the framework.
Weaknesses: The method of incorporating support masks, adapted from Marigold, may have limitations in a few-shot setting. The experiments detailed in Table 1 indicate that DiffewS achieves only marginal improvements in the COCO few-shot scenario compared to the one-shot scenario and even exhibits a decline in performance on the PASCAL dataset. Further evaluation of DiffewS’s capabilities with additional support samples, such as through experiments in a ten-shot setting, would be beneficial.
Technical Quality: 4
Clarity: 4
Questions for Authors: I am highly intrigued by the possibility of whether DiffewS can perform open-vocabulary segmentation by substituting the support mask with text.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitation is discussed in the paper supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for acknowledging our motivation and approach. The questions and weaknesses you've pointed out are incredibly helpful to us, and we will do our utmost to address them below.
**Weaknesses: N-shot setting**
Thanks again for pointing out this issue. As we mentioned in the Experiment and Limitation sections of our paper, our current work primarily focuses on unleashing the potential of Diffusion Models in Few-shot Semantic Segmentation. Consequently, our experiments and optimizations are mainly concentrated on the One-shot setting. Since the model receives only one support image at a time during the training phase, using 5-shot or 10-shot during the inference phase can indeed cause inconsistencies. The model might not learn how to effectively utilize multiple support images during training, which could result in interference from lower-quality samples. However, we fully agree with your point that we should further explore the performance of DiffewS with additional support samples. Reviewer zc26 raised a similar request, asking, "can this be mitigated by making adjustments during training?"
Therefore, we have tried two approaches to address the N-shot setting issue.
**Inference Time Improvement**
The first approach is an improvement at the inference phase. In the transition from 1-shot to n-shot, the additional support samples' Keys and Values are concatenated (see Appendix A.6), so the self-attention layers must process a significantly larger number of Keys and Values during inference than during training. An intuitive solution is to randomly sample the Keys and Values of the support samples at each layer during inference, ensuring the number of sampled Keys and Values matches that seen during training. We found that this approach indeed improves the model's performance in the 5-shot and 10-shot settings. The experimental results are shown in the table below.
Table: Performance of DiffewS with Inference Time Improvement
| | 1-shot | 5-shot | 10-shot |
|---------------------|--------|--------|---------|
| **COCO** | | | |
| Diffews (ori) | 71.3 | 72.2 | 70.1 |
| Diffews (sample) | 71.3 | 74.7 | 73.4 |
| **PASCAL** | | | |
| Diffews (ori) | 88.3 | 87.8 | 87.2 |
| Diffews (sample) | 88.3 | 89.4 | 89.6 |
From the table, we can see that with the original DiffewS inference method, the 10-shot performance is even lower than the 1-shot result, possibly for the reasons we analyzed earlier. After adopting the random sampling strategy during inference, however, the model's performance improves significantly in both the 5-shot and 10-shot scenarios.
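The inference-time sampling strategy described above can be sketched as follows; this is a minimal NumPy illustration with hypothetical names (`sample_support_kv`, `n_train_tokens`), not the actual DiffewS implementation:

```python
import numpy as np

def sample_support_kv(keys, values, n_train_tokens, rng=None):
    """Randomly subsample concatenated support Keys/Values so a
    self-attention layer sees the same token count at n-shot inference
    as it did during 1-shot training.

    keys, values: (n_tokens, d) arrays concatenated over all support shots.
    n_train_tokens: support token count seen per iteration at training time.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_tokens = keys.shape[0]
    if n_tokens <= n_train_tokens:
        return keys, values  # nothing to subsample
    idx = rng.choice(n_tokens, size=n_train_tokens, replace=False)
    return keys[idx], values[idx]
```

In this sketch the same index set is applied to Keys and Values so that each kept Key stays paired with its Value; the sampling would be redrawn independently at every attention layer.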
**Training Time Improvement**
Another more straightforward idea is to introduce multiple support samples during the training phase. This way, the model can learn how to select among multiple support images during training. To save computational resources, we conducted experiments under the ablation study setting on the COCO fold0. The experimental results are shown in the table below.
Table: Performance of DiffewS with Training Time Improvement
| | 1-shot | 5-shot | 10-shot |
|---------------------------|--------|--------|---------|
| Diffews (ori, train 1 shot)| 47.7 | 52.0 | 49.1 |
| Diffews (train 1-5 shot) | 46.4 | 57.6 | 55.9 |
| Diffews (train 1-7 shot) | 47.1 | 57.3 | 58.7 |
(train 1-5 shot) indicates that we randomly select 1 to 5 support samples as input during a single training iteration. From the table, we can see that when multiple support samples are introduced during the training phase, the model's performance significantly improves in both 5-shot and 10-shot scenarios. Additionally, as the number of support samples increases during training, the model's performance in the 10-shot scenario gradually improves. We also found that when multiple support samples are introduced during the training phase, the 1-shot performance may decrease. This could be due to the inconsistency between training and inference.
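The "train 1-5 shot" sampling described above might look like the following sketch (the function name and signature are ours, for illustration only):

```python
import random

def pick_support_shots(support_pool, max_shots=5, rng=None):
    """Draw a random number of support samples (1..max_shots) for one
    training iteration, mirroring the 'train 1-5 shot' setting."""
    rng = rng or random.Random()
    n = rng.randint(1, min(max_shots, len(support_pool)))
    return rng.sample(support_pool, n)
```

Varying the shot count during training exposes the model to multi-support inputs, which is consistent with the improved 5-shot and 10-shot numbers reported in the table.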
In summary, both approaches effectively alleviate the issues associated with the N-shot setting, demonstrating the potential of DiffewS in Few-shot Semantic Segmentation. We also hope that our work can provide more inspiration for researchers in the fields of diffusion-based models and Few-shot Semantic Segmentation.
**Questions: open-vocabulary segmentation**
We believe it is feasible; however, it would be more appropriate to inject text information through cross-attention, as discussed in Sec. 4.1 (Tokenized Interaction Cross-Attention). A similar idea has already been validated in the Referring Segmentation task (UniGS [1]). We will include a discussion on this in the Related Work section. Our work focuses more on how to effectively achieve interaction between the features of two images, which is why we ultimately chose the KV Fusion Self-Attention approach. However, these two methods are not mutually exclusive and can coexist. Nevertheless, to achieve open-vocabulary segmentation, more segmentation datasets with richly annotated text are needed for training, as merely using a few-shot dataset is insufficient. Therefore, we have not conducted experimental verification.
[1] Qi L, Yang L, Guo W, et al. Unigs: Unified representation for image generation and segmentation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 6305-6315.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed rebuttal, which has satisfactorily addressed my concerns. I believe that the paper has merit and should be accepted. I will be keeping my original rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback and are grateful for your recognition of the merits of our work. Your constructive comments have been invaluable in enhancing the quality of our paper.
Thank you again for your time and expertise. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to extend our heartfelt thanks to all of you for taking the time and demonstrating professionalism in thoroughly evaluating our work. Your constructive feedback has been immensely helpful in further improving the quality of our work and refining our research. We are also greatly encouraged by your recognition of our efforts.
We will carefully address the issues and suggestions raised by the reviewers and will make further revisions and improvements to our paper. Below each Official Review, we have provided responses to the questions and suggestions made by the reviewers, and we hope these responses adequately address the issues and concerns raised.
Due to the character limit for each review's rebuttal, we have moved some of the responses to Reviewer **uveh**'s questions here.
**Q1: Pretrained Stable Diffusion (SD) weights**
Yes, we utilized the pretrained Stable Diffusion (SD) weights in our model training. We keep the VAE fixed and only fine-tune the Unet.
**Q2: Experimental settings of Fig2/3/4**
The experimental settings for the results in Figs. 2/3/4 are our ablation-study settings; we will provide more detailed experimental settings in the revised version. Specifically, we use one 4090 GPU to train the model with a batch size of 4, and all other settings are strictly aligned with those of Table 2. There is a small difference between Fig. 3 and Fig. 4 (46.64 vs. 47.7) because we did not use an optimized threshold in Fig. 3: there we used different forms of mask supervision, each possibly corresponding to a different optimal threshold. See Appendix A.4 for more details.
**Q3: OI2M timestep setting**
We set the timestep to 1 in the OI2M pipeline during training. We don't use any accelerated generation schedule such as DDIM, because it is a one-step process. We also add an additional ablation study comparing different timestep settings (the input time embedding of the UNet) during training; the results are shown in the table below.
Table: Performance of DiffewS with different timestep settings
| Timestep | mIoU |
|----------|-------|
| 1 | 47.7 |
| 10 | 47.8 |
| 50 | 47.2 |
| 100 | 43.14 |
It can be observed that performance is better with smaller timesteps. This is because, at small timesteps, the UNet in the original diffusion model receives nearly noiseless real images as input, and our current model likewise receives noiseless real images as input.
**Q4: Few-shot support images**
Yes, the few-shot support images are processed in the same way as the one-shot support image in the support image encoding branch. We use the same model weights for both one-shot and few-shot settings. The only difference is that we concatenate the Keys and Values of the support images during self-attention (see Appendix A.6).
Since our model can process multiple support images in batches, the inference time doesn't increase significantly. Specifically, when evaluating on COCO fold0, the inference time for 5-shot is approximately twice that of 1-shot, and the GPU memory usage is also about twice that of 1-shot.
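The Key/Value concatenation described above can be illustrated with a simplified single-head attention sketch; this is our NumPy approximation of the KV Fusion mechanism (names and shapes are illustrative, not the paper's exact code):

```python
import numpy as np

def kv_fusion_attention(q, query_kv, support_kvs):
    """Scaled dot-product attention where query tokens attend over their
    own Keys/Values concatenated with those of N support images.

    q: (n_q, d) query tokens; query_kv: (K, V) pair, each (n_q, d);
    support_kvs: list of (K, V) pairs, each (n_s, d), one per support image.
    """
    k = np.concatenate([query_kv[0]] + [kv[0] for kv in support_kvs], axis=0)
    v = np.concatenate([query_kv[1]] + [kv[1] for kv in support_kvs], axis=0)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # numerically stable softmax over the fused token axis
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```

Because the support Keys/Values simply extend the attention token axis, adding shots grows compute roughly linearly, consistent with the reported 5-shot inference time being about twice that of 1-shot.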
**Q5: In-context setting**
Yes, the "In-context" setting is a general experimental dataset setting.
This setting was first proposed by SegGPT[A] and has been adopted by works such as Matcher[B] and PerSAM[C]. The comparison is fair, as these specialist models were also trained on COCO and PASCAL VOC test categories. The specific results can be found in Table 1 of the SegGPT paper. We will provide a more detailed description in the experimental section.
[A] SegGPT: Segmenting Everything In Context ICCV2023
[B] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching ICLR2024
[C] Personalize Segment Anything Model with One Shot ICLR2024 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Multi-dimensional Explanation Alignment for Medical Classification | Accept (poster) | Summary: This work introduces a novel end-to-end concept-based framework called Med-MICN, which is quite inspiring and important for the next XAI era due to its multi-dimensional, powerful interpretability, and efficient performance. Furthermore, they propose an automated process for efficiently and accurately obtaining concept labels, which are costly to acquire in reality.
Strengths: 1. This paper introduces a novel attempt that creatively integrates medical image classification, neural symbolic solving, and concept semantics. This integration enables multidimensional interpretability, making it more comprehensive and powerful compared to previous interpretable models.
2. Med-MICN demonstrates superior accuracy and interpretability across datasets from different modalities.
3. The ample experiments are convincing, as the authors conducted experiments on multiple datasets and baselines to validate the model's interpretability and accuracy, while also providing rich and impressive visualization results.
Weaknesses: 1. The caption of the picture is not detailed enough, as shown in figure 3.
2. The author considers interpretability on the basis of concept embedding, neural-symbolic reasoning and saliency map. The author lacks the reason why it is necessary to analyze the interpretation from these aspects and whether there are other more dimensions.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewers' recognition of our work and your constructive comments. Please find below our detailed responses to the queries you have raised. We hope you could consider increasing our overall score if our response addresses your concerns :)
> W1: "The caption of the picture is not detailed enough, as shown in figure 3."
**Response**: Thank you for your suggestions. We have provided additional details for each figure caption as follows:
- Figure 2: (a) The multimodal model outputs rich, multi-dimensional interpretable concept information for the specified disease, and the text embedding module converts this concept information into text vectors. (b) The image is passed to the image embedder to obtain image features, which are matched with the concept text information to locate the relevant attention regions. We then pool these regions to obtain influence scores for the concepts and finally send them to a filter that discards concepts weakly related to the disease, yielding the disease concepts for the image.
- Figure 3: Overview of the Med-MICN Framework. The Med-MICN framework consists of four primary modules: (1) **Feature Extraction Module**: In the initial step, image features are extracted using a backbone network to obtain pixel-level features. (2) **Concept Embedding Module**: The extracted features are fed into the concept embedding module. This module outputs concept embeddings while passing through a category classification linkage layer to obtain predicted category information. (3) **Concept Semantic Alignment**: Concurrently, a Vision-Language Model (VLM) is used to annotate the image features, generating concept category annotations aligned with the predicted categories. (4) **Neural Symbolic Layer**: After obtaining the concept embeddings, they are input into the Neural Symbolic layer to derive conceptual rules. Finally, the concept embeddings obtained from module (2) are concatenated with the original image embeddings and fed into the final category prediction layer to produce the ultimate prediction results.
- Figure 4: A full comparison of our method with current saliency-map and CBM methods in terms of interpretable dimensions and effectiveness. CBM-type methods lack the saliency-map and rule dimensions, while saliency-map methods lack the concept and rule dimensions. For this medical classification task, a single dimension such as a saliency map does not accurately capture the features of disease images, and the concept extraction of CBM-type methods can carry false predictions; both lack explanatory information about concept rules. Our method predicts interpretable information in three dimensions (saliency map, concepts, and rules), providing more accurate multi-dimensional information: disease features in the image are captured more precisely, the concept output is clearer, and logical rules are generated from the predicted concepts.
> W2: "The author considers interpretability on the basis of concept embedding, neural-symbolic reasoning and saliency map. The author lacks the reason why it is necessary to analyze the interpretation from these aspects and whether there are other more dimensions."
**Response**:
**Concept**: This approach focuses on understanding the internal representation of high-level concepts in a model. By analyzing the learned embeddings, researchers can gain insight into how the model associates different input features and concepts. It helps in understanding the semantic meaning of the model's hidden layers and can reveal whether the model has learned meaningful abstractions.
**Neural-symbolic**: This combines the power of neural networks with symbolic logic, aiming to make models more transparent by grounding their decisions in symbolic rules over concepts. It allows for a more explicit concept-reasoning process, which can be more interpretable than purely connectionist models.
**Saliency map**: These highlight the parts of the input that are most influential in the model's decision. By visualizing these maps, one can see which features or pixels drive the prediction, providing a more intuitive understanding of the model's behavior.
There are also indeed other dimensions:
(1) LIME (Local Interpretable Model-agnostic Explanations)
(2) SHAP (SHapley Additive exPlanations)
These are post-hoc explainability methods, and these techniques could be less scalable, less suitable for large models, and may struggle to align with specific tasks or samples.
Our framework, however, goes beyond these limitations. By integrating concept-based and attention map explanations, we offer a more intrinsic understanding of the model's decision-making process. Furthermore, the introduction of neural symbolic methods allows us to delve into clearer concept reasoning and decision-making, thus providing a third dimension of interpretability.
Specifically, we have successfully aligned the attention-based image features with concept-based semantics, ensuring a direct correspondence between the model's perception and the underlying meaning. Moreover, the neural-symbolic approach enables us to elucidate the interactions among concepts, which is crucial in understanding complex medical scenarios.
---
Rebuttal Comment 1.1:
Title: Thanks for your responses
Comment: I appreciate the author's effort to answer my questions. My concerns are well addressed and I will update my score. Btw, I personally have two minor additional questions:
1. What is the concept prediction accuracy in this model?
2. What is the model's robustness against perturbation? This point is also crucial for practical applications in the medical field.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your prompt and positive response to our rebuttal, along with the improved rating. We are currently conducting experiments to address your additional questions. Thank you for your time!
---
Reply to Comment 1.1.2:
Comment: 1. What is the concept prediction accuracy in this model?
The questions you raised have been very insightful. We used our previous work to perform concept labeling on the DDI dataset. We compared our predicted concepts with the ground truth to determine the accuracy of different backbones, as shown in the table below.
| Backbone | RN50 | VGG | DenseNet |
|----------|-------|-------|----------|
| ACC | 87.12 | 92.13 | 92.83 |
As can be seen, the concept accuracy predicted by Med-MICN is relatively high, indicating better reliability for the concepts it explains.
2. What is the model's robustness against perturbation? This point is also crucial for practical applications in the medical field.
We conducted stability experiments by applying Gaussian noise with δ=0.1 to the images. We compared the saliency maps generated by our method to those produced by the baseline backbone. Our results show that our method maintains greater stability in focusing on the regions of interest compared to the baseline. Due to the rebuttal policy, we cannot present these results, but we will present images in a later version. | Summary: This paper focuses on the explainable classification of medical images with multi-dimensional explanation. The proposed Med-MICN framework contains four modules including feature extraction, auto-annotation, concept embedding, and a neural-symbolic layer. The incorporation of fuzzy logic rules is novel in the interpretation study of deep neural networks. The experimental results show that Med-MICN surpasses baseline networks such as VGG and ResNet, and also outperforms concept bottleneck models (CBM). The generated attention maps and predicted concepts match better than traditional CBM.
Strengths: See summary
Weaknesses: 1. The whole presentation of the paper should be further improved including mathematical symbols, figures 2 and 3, and equations. The symbols should be better matched with Figures 2 and 3 to improve readability. It is difficult to understand these figures without reading the contents. Some symbols like Cosine should be given in the figures. Detailed notations are needed in the figures or in the captions.
2. It is better to give pseudo codes of training and inferences to help understand the whole workflow.
3. At the bottom of Page 3, "In instances ... judgment", why do the authors claim alternative explanations can aid judgment? Any references? More explanations may confuse the physicians.
4. In line 131, does $y_{i}$ the one-hot labels?
5. In the Concept labeling alignment module, it is not clear why the authors use "average pooling to heatmaps". I think this will lose some spatial localizations of lesions, for example, one heatmap with multiple lesions and another with one lesion, both of them may have similar average pooling results.
6. Under the line 183, the $V$ is three-dimensional, and why here it has two subscripts.
7. In line 194, does the representation be $c \in \{C_{1}, ..., C_{M}\}$ ?
8. In line 142, the logic symbols should be defined may be in supplementary.
9. Lack of training details. In line 277, how the learning rate decay? Any data augmentations were used?
Technical Quality: 2
Clarity: 2
Questions for Authors: See weakness.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The whole framework of Med-MICN is complex, and the so-called multi-dimensional explanations are attention maps and predicted concepts which have been proposed in traditional CBM models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewers' recognition of our work and your constructive comments. Please find below our detailed responses to the queries you have raised. We hope you could consider increasing our score if our response addresses your concerns :)
> W1: "captions and notations..."
**Response**: We have added notations to Figure 2, 3 and also enhanced the captions for Figures 2, 3, and 4 to improve readability. We have placed the modified table with the notation changes in the PDF. The corresponding original equations have the following modifications:
- On line 131: N -> M
- On line 133,135,136: k -> $d_c$
- On line 130: $c = ${$p_1, \cdots, p_k$} -> $\mathcal{C} = ${$C_1, C_2, \dots, C_N $}
- On line 173: $\mathcal{C} = ${$c_1, c_2, \dots, c_N $} -> $\mathcal{C} = ${$C_1, C_2, \dots, C_N $}
- Throughout the text, replace all heatmap H with P.
- On line 184: $H_{p,k,i} = \frac{t_i^T V_{p,k}}{||t_i||\cdot ||V_{p,k}||},\quad p = 1, \ldots , P, \quad k = 1, \ldots , K $ -> $P_{h,w,i} = \frac{t_i^T V_{h,w}}{||t_i||\cdot ||V_{h,w}||},\quad h = 1, \ldots , H, \quad w = 1, \ldots , W $
- On line 189: $s_i = \frac{1}{H \cdot W} \sum_{h=1}^{H} \sum_{w=1}^{W} P_{h,w,i}$
- On line 195: $c =$ {$C_1, \ldots, C_M$} -> $c = ${$c_1, \ldots, c_M$}
- On line 212: $D$ -> $X$ (Here, the dataset and the dimensional representation in the previous text conflict)
- On line 215: $f_c(x_{m}), \hat{c_m}, f(x_m) = \Theta_c(\Theta_b(x_m))$ for $i \in [M]$ -> $f(x_m) = \Theta_b(x_m), f_c(x_{m}),\hat{c_m} = \Theta_c(f(x_m))$ for $i \in [M]$
- Added more detail on $I_{o,j}$ and $I_{r,j}$.
> W2: "It is better to give pseudo codes of training and inferences..."
**Response**: The pseudocode has been supplemented, as seen in Figure 2 of the PDF.
> W3: "At the bottom of Page 3..."
**Response**: It has been demonstrated that single-dimension interpretive aids such as saliency maps [1,2] and concepts [3,4] contribute to medical decision making. To the best of our knowledge, our work is the first to propose a deeper aiding strategy for medical decisions using multi-dimensional interpretable means. The advantage of this multi-dimensional aid lies not only in the cumulative nature of the multiple strategies but also in the fact that when one interpretability dimension is in error, information from the other dimensions can correct it, thus providing more accurately interpreted information.
In addition, we combine neural-symbolic reasoning with fuzzy logic to further model the connections between concepts, which gives the concept-level interpretation additional auxiliary value.
> W4: "In line 131, does $y_i$ the one-hot labels?"
**Response**: Yes, $y_i$ is the one-hot label.
> W5: "It is not clear why the authors use "average pooling to heatmaps"..."
**Response**:
|Method|COVID-CT|DDI|
|--|:---:|:--:|
|Convolution|**89.81**|80.95|
|Linear|69.84|73.92|
|Average pooling|89.79|**91.89**|
Thank you for the valuable suggestions. Our approach is based on experimental results, which show that average pooling is comparable to or even better than other methods.
> W6: "Under the line 183, the $V$ is three-dimensional, and why here it has two subscripts."
**Response**: Yes, it is a 3-dimensional tensor. Here, we use $V_{h,w}$ to represent $V_{h,w,\cdot}$, which is a vector whose length is $D$. The term $ V_{h,w} $ is dot-multiplied with $ t_i^T $ to obtain an intermediate result of size $ \mathbb{R}^{D \times D} $. We then apply pooling to obtain the result for the heatmap at the position $(h, w)$.
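The cosine-similarity heatmap and average pooling discussed in W5 and W6 (matching the paper's $P_{h,w,i}$ and $s_i$ formulas) can be sketched as follows; the function name and the thresholding note are illustrative, not the paper's code:

```python
import numpy as np

def concept_scores(V, T, eps=1e-8):
    """Cosine-similarity heatmaps and pooled concept scores.

    V: (H, W, D) image feature map; T: (N, D) concept text embeddings.
    Returns P of shape (H, W, N) following
      P_{h,w,i} = t_i^T V_{h,w} / (||t_i|| ||V_{h,w}||)
    and s of shape (N,) following s_i = mean_{h,w} P_{h,w,i}.
    """
    Vn = V / (np.linalg.norm(V, axis=-1, keepdims=True) + eps)
    Tn = T / (np.linalg.norm(T, axis=-1, keepdims=True) + eps)
    P = np.einsum('hwd,nd->hwn', Vn, Tn)  # per-position cosine similarity
    s = P.mean(axis=(0, 1))               # average pooling over H x W
    return P, s
```

In a usage step, concepts whose pooled score falls below a chosen threshold (the response to Reviewer zc26's Q2 mentions 0.45) would be discarded before labeling.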
> W7: "In line 194, does the representation be ...."
**Response**: The meaning of $c$ is the set of all image concept labels, where $c = ${$c_1, \ldots, c_M$}. This notation is correct, with each $c_i \in ${$0,1$}$^N$.
> W8: "In line 142, the logic symbols should be defined may be in supplementary."
**Response**: Thank you. We will define them in the supplementary material. For example:
- Negation ($\neg $): This symbol represents the inverse of a truth value. In fuzzy logic, the negation of a degree of truth $x$ is computed as $\neg x = 1 - x$. For example, if $x$ represents a 0.7 probability of a statement being true, then $\neg x$ would be 0.3, indicating a 0.3 probability of the statement being false.
- T-norm (Conjunction, $\land$): The t-norm, or triangular norm, is a continuous version of the AND operation. It takes two truth values $x$ and $y$ and combines them to produce a new truth value in the range $[0, 1]$. Common t-norms include the minimum function ($\min(x, y)$) and the product function ($x \cdot y$). In the example given, $c_{GO} \land \neg c_{LC}$ would represent the conjunction of "GO" being present and "LC" not being present.
- T-conorm (Disjunction, $\lor$): The t-conorm, or triangular conorm, is a continuous version of the OR operation. It also takes two truth values $x$ and $y$ and combines them, typically using functions like the maximum function ($\max(x, y)$) or the sum function ($x + y - xy$), ensuring the result is within $[0, 1]$.
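The three operators above translate directly into code. This is a generic sketch using the product t-norm and probabilistic-sum t-conorm (one of the common choices listed above), not necessarily the exact variant used in the paper's neural-symbolic layer:

```python
def f_not(x):
    """Fuzzy negation: 1 - x."""
    return 1.0 - x

def t_norm(x, y):
    """Product t-norm (fuzzy AND): x * y."""
    return x * y

def t_conorm(x, y):
    """Probabilistic-sum t-conorm (fuzzy OR): x + y - x*y."""
    return x + y - x * y

# Example rule from the text: c_GO AND (NOT c_LC)
c_go, c_lc = 0.9, 0.2
rule_truth = t_norm(c_go, f_not(c_lc))  # 0.9 * 0.8 = 0.72
```

All three operators keep their outputs in $[0, 1]$, so rules built from them can be composed arbitrarily while remaining valid degrees of truth.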
> W9: "Lack of training details..."
**Response**: To highlight the significant performance of our method, we did not use learning rate decay; the learning rate was kept constant at 5e-5 throughout the entire training process. For data augmentation, we only used the most basic operations: resize (256), center crop (224), and normalization.
*Reference*
[1] Van der Velden et al. "Explainable artificial intelligence (XAI) in deep learning-based medical image analysis." Medical Image Analysis 79 (2022): 102470.
[2] Patrício et al. "Explainable deep learning methods in medical image classification: A survey." ACM Computing Surveys 56.4 (2023): 1-41.
[3] Patrício et al. "Coherent concept-based explanations in medical image and its application to skin lesion diagnosis." CVPR 2023.
[4] Eminaga et al. "PlexusNet: A neural network architectural concept for medical image classification." Computers in biology and medicine 154 (2023): 106594.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer #WJ93,
Many thanks for taking the time to review our paper. We have provided our responses to your comments. Can we know whether our responses have addressed your concerns? We are more than happy to hear from you :)
Best wishes,
Authors of the paper #2244
---
Rebuttal Comment 1.2:
Title: Response to Rebuttal
Comment: Thanks for your detailed rebuttal. All of my concerns have been addressed, and I will increase my score.
---
Reply to Comment 1.2.1:
Comment: We sincerely appreciate your prompt and positive feedback on our rebuttal, as well as the increase in your rating. Your valuable suggestions have greatly improved the clarity and overall readability of our work. | Summary: This work introduces Med-MICN, an explainable framework for medical image classification. This framework leverages a concept bottleneck framework combined with a neural symbolic reasoning framework to generate simple explanations for its predictions. In general, strong performance is demonstrated.
Strengths: The overall framework is interesting, and the idea of “multi-dimensional explainability” seems widely applicable. The ability to see some explanation for each concept’s logic seems useful, and the decision rules generated are quite simple. The performance of Med-MICN is quite compelling, with superior accuracy across multiple medical datasets.
Weaknesses: In general, the notation used in the paper is inconsistent and somewhat ill defined. For example, in section 3, N is used to denote the number of sample in a dataset, and k the number of concepts. In section 4, N becomes the number of concepts, and M the number of samples. Lower case k is then used to index along the height dimension of a feature map. Several objects ($I_{o, j}$ and $I_{r,j}$, for example) are not clearly described upon their introduction.
There are a few seemingly key hyper parameters ($\lambda_1$, $\lambda_2$, the threshold used for binarizing concept vectors), but the experimental details do not indicate that there was a validation/development set used to select these values.
If these two points can be addressed, I believe this paper will warrant acceptance.
Technical Quality: 2
Clarity: 2
Questions for Authors: There appear to be some notational issues in section 4.2 — initially H and W are used as the height and width of the feature map, but in the definition of $H_{p,k,i}$, it seems that P and K are used to fill the same role.
"To ensure the truthfulness of concepts, we discard all concepts for which the similarity across all images is below 0.45" — does this refer to all training images?
The equation following line 214 is quite confusing — what is the dimensionality of each of the three values returned by $\theta_c$? How does it return $f(x_m)$?
During training, which components are trained jointly? It seems like $L_{neural}$ only applies to the neural symbolic component, while $L_c$ and $L_{task}$ apply only to the concept extraction component and some additional classification fully connected layer.
Figure 4 is somewhat unclear — which cells in the figure should be compared to which?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 4
Limitations: Discussion of limitations is appropriate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewers' recognition of our work and your constructive comments. Please find below our detailed responses to the queries you have raised. We hope you could consider increasing our overall score if our response addresses your concerns :)
> W1,Q1: "Notations.."
**Response**: We have placed the modified table with the notation changes in the PDF. The corresponding original equations have the following modifications:
- On line 131: N -> M
- On line 133,135,136: k -> $d_c$
- On line 130: $c = ${$p_1, \cdots, p_k$} -> $\mathcal{C} = ${$C_1, C_2, \dots, C_N $}
- On line 173: $\mathcal{C} = ${$c_1, c_2, \dots, c_N $} -> $\mathcal{C} = ${$C_1, C_2, \dots, C_N $}
- Throughout the text, replace all heatmap H with P.
- On line 184: $H_{p,k,i} = \frac{t_i^T V_{p,k}}{||t_i||\cdot ||V_{p,k}||},\quad p = 1, \ldots , P, \quad k = 1, \ldots , K $ -> $P_{h,w,i} = \frac{t_i^T V_{h,w}}{||t_i||\cdot ||V_{h,w}||},\quad h = 1, \ldots , H, \quad w = 1, \ldots , W $
- On line 189: $s_i = \frac{1}{H \cdot W} \sum_{h=1}^{H} \sum_{w=1}^{W} P_{h,w,i}$
- On line 195: $c =$ {$C_1, \ldots, C_M$} -> $c = ${$c_1, \ldots, c_M$}
- On line 212: $D$ -> $X$ (Here, the dataset and the dimensional representation in the previous text conflict)
- On line 215: $f_c(x_{m}), \hat{c_m}, f(x_m) = \Theta_c(\Theta_b(x_m))$ for $i \in [M]$ -> $f(x_m) = \Theta_b(x_m), f_c(x_{m}),\hat{c_m} = \Theta_c(f(x_m))$ for $i \in [M]$
- Added more detail on $I_{o,j}$ and $I_{r,j}$.
> W2: "hyperparameters.."
**Response**: Thank you for your valuable feedback. We conducted experiments on reasonable parameter ranges for λ1 and λ2, as detailed in the table below. The table shows the accuracy of various backbones with different combinations of λ1 and λ2. Notably, the combination of λ1=0.1 and λ2=0.1 (used in the paper) performs well overall. When λ1 and λ2 are too small, accuracy generally decreases, indicating that λ1 and λ2 play a positive role in model optimization. Conversely, as λ1 and λ2 increase, different backbones exhibit varying performance. Overall, λ1 and λ2 values between 0.1 and 1 yield good model performance and show significant improvement compared to the baseline.
| Dataset | Model | Baseline | λ1=0.1, λ2=0.1 | λ1=0.1, λ2=0.01 | λ1=0.1, λ2=0.05 | λ1=0.1, λ2=0.5 | λ1=0.1, λ2=1.0 | λ1=0.01, λ2=0.1 | λ1=0.05, λ2=0.1 | λ1=0.5, λ2=0.1 | λ1=1.0, λ2=0.1 |
|-----|---|------|:-------:|:-----:|:-----:|:-----:|:----:|:----:|:---:|:----:|:----:|
| | RN50 | 81.36 | **84.75** | 83.90 | 82.20 | 82.25 | 83.05 | 81.33 | 83.90 | 83.92 | 82.53 |
| COVID-CT | DenseNet | 85.59 | 86.44 | 84.75 | 84.74 | **87.29** | 85.59 | 84.09 | 84.75 | 84.12 | 84.28 |
| | VGG | 79.60 | 83.05 | 79.66 | 80.52 | 80.51 | 85.60 | 80.85 | 79.66 | 86.44 | **87.01** |
We also conducted experiments to determine an appropriate threshold value. We sampled 100 examples; the resulting accuracy for each threshold is as follows:
| Dataset | 0.1 | 0.2 | 0.3 | 0.4 | 0.45 | 0.5 | 0.6 |
|----------|-------|-------|-------|-------|-----------|-------|-------|
| COVID-CT | 59.13 | 63.25 | 76.25 | 84.00 | **90.13** | 88.50 | 86.50 |
> Q2: "all training images.."
**Response**: We annotated 20% of the training data for fine-tuning the model, and generated concept labels for all images, including both training and testing images, because the automatically generated concept labels are needed during inference.
> Q3: "equation.."
**Response**: Thank you for pointing this out. The expression in our paper was somewhat unclear. We have revised it as follows:
For each image $x_m$, after passing it through the backbone, we obtain $f(x_m) = \theta_b(x_m)$, where $f(x_m) \in \mathbb{R}^{d}$. Next, we input the image feature $f(x_m)$ into the concept encoder $\theta_c$. It is important to note that $\theta_c$ returns two values: the predicted concept vector $\hat{c}_m$ and the concept feature $f_c(x_m)$, with dimensions $\hat{c}_m \in \{0, 1\}^N$ and $f_c(x_m) \in \mathbb{R}^{d_c}$, respectively. In the paper, $d$ and $d_c$ are both set to 1000, and $N$ depends on the number of concepts.
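As a sanity check on the shapes in this two-stage pipeline, here is a toy sketch with randomly initialized stand-in weights; `theta_b` and `theta_c` are illustrative placeholders, not the paper's trained modules:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_c, N = 1000, 1000, 5                     # d = d_c = 1000 as in the rebuttal; N concepts (toy value)
W_b = rng.standard_normal((d, d)) * 0.01      # stand-in backbone weights
W_c = rng.standard_normal((d_c, d)) * 0.01    # stand-in concept-encoder weights
W_p = rng.standard_normal((N, d_c)) * 0.01    # stand-in concept-prediction head

def theta_b(x):
    """Backbone: (flattened) image -> feature f(x) in R^d."""
    return np.tanh(W_b @ x)

def theta_c(f):
    """Concept encoder: returns (predicted concepts c_hat in {0,1}^N,
    concept feature f_c in R^{d_c})."""
    f_c = np.tanh(W_c @ f)
    c_hat = (1.0 / (1.0 + np.exp(-(W_p @ f_c))) > 0.5).astype(int)
    return c_hat, f_c

x_m = rng.standard_normal(d)                  # toy "image"
f = theta_b(x_m)
c_hat, f_c = theta_c(f)
```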
> Q4: "during training.."
**Response**: During training, all modules are trained together. As shown in Table 1, the result of using only the neural-symbolic layer (DCR) is very poor, indicating that joint training is necessary. It is also worth noting that while $\mathcal{L}_{neural}$ primarily affects the output of the neural-symbolic layer, $\mathcal{L}_c$ impacts the concept prediction results, and $\mathcal{L}_{task}$ affects the classification results. Due to the nature of backpropagation, these losses also optimize the concept encoder and backbone. As evident from the ablation studies in Table 2, removing any of these losses leads to a decline in overall model performance.
> Q5: "Figure 4.."
**Response**: Figure 4 gives a full comparison of our method with current saliency and CBM methods in terms of interpretable dimensions and effectiveness. CBM-type methods lack interpretation along the saliency-map and rule dimensions, while saliency-map methods lack interpretation along the concept and rule dimensions. For this medical classification task, single-dimension approaches such as saliency-map methods do not accurately capture the features of disease images, and CBM-type methods can extract erroneous concepts from the image information; both lack explanatory information about concept rules. Our method predicts interpretable information along all three dimensions (saliency map, concepts, and rules), providing more accurate multi-dimensional information: disease features are captured more accurately in the saliency maps, the concept outputs are clearer, and logical rules are generated from the predicted concepts.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response. You have addressed the majority of my concerns, and particularly those around notation. My primary concern with the selection of $\lambda_1$ and $\lambda_2$ was that, since they seem to have been selected using test set performance, the models may have been overfit to each test set. However, since they were held constant across datasets after converging on one reasonable set of values, I am somewhat less concerned about this.
I think I understand Figure 4 a bit better now, but think the layout could still be improved for clarity. As is, it's a bit confusing how the "saliency map" heading bleeds into the section of the figure from CBM. I think clearer visual separation between areas of the figure corresponding to different methods would improve readability.
That said, this is a fairly minor point, and my primary concerns have been addressed. I've increased my score accordingly, and appreciate the authors' response.
---
Reply to Comment 1.1.1:
Comment: We are profoundly grateful for your timely and positive response to our rebuttal and for the increase in your rating. Your valuable suggestions have improved the readability of our work. We will make a clearer visual separation between the areas of the figure corresponding to different methods. | Summary: The authors propose a novel interpretable model called Med-MICN in this work. The method handles medical image classification tasks along multiple interpretability dimensions, combining neural-symbolic reasoning with concept semantics. With the help of LLMs, the method performs superiorly on four medical benchmark datasets. In the ablation study, the authors show that the proposed method performs better when the complementary multi-dimensional loss functions are used for the classification objective.
Strengths: - The paper proposed a novel method that combines extra information (text and logic rules) to enhance the classification performance and the interpretability of the model outputs. This enables the interpretability of the deep learning classification model with multi-dimensional information.
- Instead of post-hoc interpretable methods, the authors integrate fuzzy logic rules into the proposed method. This gives a clear decision rule for image classification and interpretability.
- The method description is relatively clear and easy to follow. The ablation study gave a good overview of the usage of multi-dimensional information.
Weaknesses: - The authors show that the proposed method performs superiorly to the baseline methods in image classification (Tab. 1). However, an evaluation of the interpretability is missing. How correct are the heatmaps generated by the proposed model? Do they align with the labels or with clinicians' assessments?
- A comparison of the interpretability between the proposed method and the baseline methods is also missing. In Tab. 1, the authors mention that those methods are interpretable, but they do not evaluate the interpretability. This could be crucial for medical applications.
- For section 4.3, the description of Neural-Symbolic Layer is not easy to follow. What is the difference between the functions of "Concept Polarity" and "Concept Relevance"? It seems like their outputs are the same.
Technical Quality: 3
Clarity: 2
Questions for Authors: - The authors used LLM (GPT-4V) in their proposed method. Would it be better or make a difference, if the LLM is a more medical-specific model?
- How do authors deal with different numbers of concept sets from the LLM? If I understood correctly, the LLM could output several different concepts and does not necessarily fit into the input size of the text encoder. (Figure 2.a)
- Did authors try with different $\lambda_1$ and $\lambda_2$?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - The proposed method provides an opportunity to interpret medical image classification results while also increasing classification performance. However, the heatmaps look somewhat blurry and may not be trustworthy (Fig. 1).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of our work and your constructive comments. Please find below our detailed responses to the queries you have raised. We hope you will consider increasing our overall score if our response addresses your concerns :)
> W1, W2: "evaluation of the interpretability"
**Response**: The questions you raised are very insightful. Although quantifying interpretability remains a challenging task with no well-established metrics, our extensive examples demonstrate that the interpretability information provided by our model is meaningful. This is supported by numerous examples presented in our paper, particularly in Figures 1 and 5-12. While aligning saliency maps with features has been a longstanding issue in interpretability, we have compared Med-MICN with baseline saliency maps. As illustrated in Figure 1 of the attached PDF, we used the most distinctive skin modality for clarity. The results show that our method better highlights pathological regions and achieves a significant improvement in alignment compared to the baseline.
> W3: "For section 4.3, the description of Neural-Symbolic Layer is not easy to follow..."
**Response**: *Concept Polarity* focuses on determining the role of a concept as either positive or negative for a given prediction class. The output of the neural network $\Phi_j(\cdot)$ is a soft indicator value between 0 and 1, which signifies the degree to which the concept contributes positively (closer to 1) or negatively (closer to 0) to the prediction. For example, if the output for the concept is 0.8 for class $j$, it suggests that this concept has a strong positive influence on predicting class $j$.
*Concept Relevance* assesses how useful or significant a concept is within the context of the sample feature for a prediction class. The neural network $\Psi_j(\cdot)$ also outputs a soft indicator value between 0 and 1, but this value represents the concept's relevance score, indicating how much it impacts the prediction. If the output for the concept is 0.2 for class $j$, it means that this concept has relatively low relevance to predicting class $j$.
While both functions output scalar values between 0 and 1, Concept Polarity indicates the concept's positive or negative influence, and Concept Relevance measures the concept's overall importance or significance in the prediction. These two outputs are combined in Equation (2) to generate the logical reasoning rules for the concept.
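The interplay between the two outputs can be illustrated with a small fuzzy-logic sketch. This is one plausible realization (a min t-norm AND over relevance-gated literals) written by us for illustration; the paper's actual Equation (2) may differ:

```python
import numpy as np

def rule_truth(c, polarity, relevance):
    """Evaluate a fuzzy rule for one class from soft concept values.
    c, polarity, relevance: arrays in [0, 1], one entry per concept.
    A concept contributes the literal c_i when its polarity is ~1 and
    (1 - c_i) when its polarity is ~0; concepts with relevance ~0 are
    neutralized (their literal is pushed to 1) before the fuzzy AND (min)."""
    c, p, r = (np.asarray(a, dtype=float) for a in (c, polarity, relevance))
    literal = p * c + (1.0 - p) * (1.0 - c)
    gated = 1.0 - r * (1.0 - literal)   # relevance 0 -> contributes 1 (neutral for AND)
    return float(gated.min())
```

For example, a relevant concept that is active but has negative polarity drives the rule truth to 0, while the same concept with relevance 0 leaves the rule unaffected.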
> Q1: "The authors used LLM (GPT-4V) in their proposed method. Would it be better..."
**Response**: Thank you for the suggestion; we have considered the generalization and accuracy across different diseases when using multimodal models for concept querying. We have added two medical multimodal models for comparison experiments (due to word count limitations, the results for each model will be presented in the appendix of the paper). The first model is XrayGPT. XrayGPT is limited by the form of its training data: for some kinds of disease images, such as plain-view lung CT scans (the COVID dataset), it cannot output accurate medical concept information. Moreover, for chest CT images, its predictions about different virus classes such as COVID are also affected, and it lacks generalization to different disease types.
Besides, we also used the more broadly generalizing LLaVA-Med model for comparison. For the COVID dataset, it is capable of obtaining some medical category information; however, the disease categories it outputs are not as comprehensive as those produced by the multimodal model used in our framework.
> Q2: "How do authors deal with different numbers of concept sets from the LLM? ...."
**Response**: It is important to note that each concept within the concept set is processed individually by the text encoder to obtain a corresponding text embedding, rather than inputting the entire concept set at once. This handles concepts of varying lengths, such as "Peripheral ground-glass opacities" and "Bilateral involvement", and means that the size of the concept set does not need to fit a fixed input size. By obtaining the text embedding for each individual concept, we can derive a concept heatmap for each image and subsequently assign the appropriate concept labels.
> Q3: "Did authors try with different λ1 and λ2?"
**Response**: Thank you for your valuable feedback. We conducted experiments within reasonable parameter ranges for λ1 and λ2, as detailed in the table below. The table shows the accuracy of various backbones with different combinations of λ1 and λ2. When λ1 is fixed, a larger λ2 is preferred, and similarly, when λ2 is fixed, a larger λ1 is preferred. This indicates that each loss function contributes to the model’s predictive accuracy. Additionally, we need to adjust these coefficients for different backbones to achieve optimal performance. For example, with VGG, setting λ1 = 1 and λ2 = 0.1 resulted in a 7.41% increase in accuracy.
| Dataset | Model | Baseline | λ1=0.1, λ2=0.1 | λ1=0.1, λ2=0.01 | λ1=0.1, λ2=0.05 | λ1=0.1, λ2=0.5 | λ1=0.1, λ2=1.0 | λ1=0.01, λ2=0.1 | λ1=0.05, λ2=0.1 | λ1=0.5, λ2=0.1 | λ1=1.0, λ2=0.1 |
|----------|----------|----------|:---------------:|:----------------:|:----------------:|:---------------:|:---------------:|:----------------:|:----------------:|:---------------:|:---------------:|
| | RN50 | 81.36 | **84.75** | 83.90 | 82.20 | 82.25 | 83.05 | 81.33 | 83.90 | 83.92 | 82.53 |
| COVID-CT | DenseNet | 85.59 | 86.44 | 84.75 | 84.74 | **87.29** | 85.59 | 84.09 | 84.75 | 84.12 | 84.28 |
| | VGG | 79.60 | 83.05 | 79.66 | 80.52 | 80.51 | 85.60 | 80.85 | 79.66 | 86.44 | **87.01** |
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' effort in answering my concerns. My concerns are majorly well addressed. I have increased my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback on our rebuttal, as well as the increase in your rating. Your valuable suggestions have greatly improved the experimental completeness and overall readability of our work. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful feedback. We have revised and supplemented our work in the following two aspects.
1. **Notations and Captions**: We have added notations to Figure 2, 3 and also enhanced the captions for Figures 2, 3, and 4 to improve readability. We have placed the modified table with the notation changes in the PDF. The corresponding original equations have the following modifications:
- On line 131: $N$ -> $M$
- On lines 133, 135, 136: $k$ -> $d_c$
- On line 130: $c = \{p_1, \cdots, p_k\}$ -> $\mathcal{C} = \{C_1, C_2, \dots, C_N\}$
- On line 173: $\mathcal{C} = \{c_1, c_2, \dots, c_N\}$ -> $\mathcal{C} = \{C_1, C_2, \dots, C_N\}$
- Throughout the text, replace the heatmap notation $H$ with $P$.
- On line 184: $H_{p,k,i} = \frac{t_i^T V_{p,k}}{\|t_i\| \cdot \|V_{p,k}\|}, \quad p = 1, \ldots, P, \quad k = 1, \ldots, K$ -> $P_{h,w,i} = \frac{t_i^T V_{h,w}}{\|t_i\| \cdot \|V_{h,w}\|}, \quad h = 1, \ldots, H, \quad w = 1, \ldots, W$
- On line 189: $s_i = \frac{1}{H \cdot W} \sum_{h=1}^{H} \sum_{w=1}^{W} P_{h,w,i}$
- On line 195: $c = \{C_1, \ldots, C_M\}$ -> $c = \{c_1, \ldots, c_M\}$
- On line 212: $D$ -> $X$ (here, the dataset notation conflicted with the dimensional representation used earlier in the text)
- On line 215: $f_c(x_{m}), \hat{c}_m, f(x_m) = \Theta_c(\Theta_b(x_m))$ for $i \in [M]$ -> $f(x_m) = \Theta_b(x_m), \; f_c(x_{m}), \hat{c}_m = \Theta_c(f(x_m))$ for $m \in [M]$
- More detail on $I_{o,j}$ and $I_{r,j}$ has been added.
2. **Hyperparameters and Experimental Details**: We conducted experiments on reasonable parameter ranges for λ1 and λ2, as detailed in the table below. The table shows the accuracy of various backbones with different combinations of λ1 and λ2. Notably, the combination of λ1=0.1 and λ2=0.1 (used in the paper) performs well overall. When λ1 and λ2 are too small, accuracy generally decreases, indicating that λ1 and λ2 play a positive role in model optimization. Conversely, as λ1 and λ2 increase, different backbones exhibit varying performance. Overall, λ1 and λ2 values between 0.1 and 1 yield good model performance and show significant improvement compared to the baseline.
| Dataset | Model | Baseline | λ1=0.1, λ2=0.1 | λ1=0.1, λ2=0.01 | λ1=0.1, λ2=0.05 | λ1=0.1, λ2=0.5 | λ1=0.1, λ2=1.0 | λ1=0.01, λ2=0.1 | λ1=0.05, λ2=0.1 | λ1=0.5, λ2=0.1 | λ1=1.0, λ2=0.1 |
|----------|----------|----------|:---------------:|:----------------:|:----------------:|:---------------:|:---------------:|:----------------:|:----------------:|:---------------:|:---------------:|
| | RN50 | 81.36 | **84.75** | 83.90 | 82.20 | 82.25 | 83.05 | 81.33 | 83.90 | 83.92 | 82.53 |
| COVID-CT | DenseNet | 85.59 | 86.44 | 84.75 | 84.74 | **87.29** | 85.59 | 84.09 | 84.75 | 84.12 | 84.28 |
| | VGG | 79.60 | 83.05 | 79.66 | 80.52 | 80.51 | 85.60 | 80.85 | 79.66 | 86.44 | **87.01** |
We also conducted experiments to determine an appropriate threshold value. We sampled 100 examples; the resulting accuracy for each threshold is as follows:
| Dataset | 0.1 | 0.2 | 0.3 | 0.4 | 0.45 | 0.5 | 0.6 |
|----------|-------|-------|-------|-------|-----------|-------|-------|
| COVID-CT | 59.13 | 63.25 | 76.25 | 84.00 | **90.13** | 88.50 | 86.50 |
Pdf: /pdf/830ca5cb3c67516bd8a8d3eccd8acb491262425d.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion | Accept (poster) | Summary: This paper proposes a more effective dynamic-k expert selection rule that adjusts the number of executed experts on a per-token basis to reduce the high computational cost of the transformer models. They claim their D2DMoE model outperforms existing approaches.
Strengths: 1. The motivation of this paper is good. They try to save up the inference cost of the transformer models.
2. They perform experiments on several tasks.
Weaknesses: 1. The most important contribution is the dynamic-k routing algorithm. However, the main work is a small modification to MoEfication. The difference between the proposed method and existing MoE-based methods is limited.
2. The experiments are not conducted well. This paper lacks a comprehensive comparison with other methods. There exist many MoE-based ViT methods, such as [1, 2]. Furthermore, why do the authors not provide any performance numbers in the table? This paper claims to outperform existing approaches on common NLP and vision tasks; however, only two methods are compared.
3. The major advantage of D2DMoE compared with MoEfication is saving inference cost. However, the cost saving observed from Fig. 4 is quite limited.
[1] Mod-squad: Designing mixtures of experts as modular multi-task learners
[2] M3ViT: Mixture-of-experts vision transformer for efficient multi-task learning with model-accelerator co-design
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Training Overhead: Can you provide more details on the additional training time required for sparsity enforcement and router training? How does this overhead compare to the overall training time of the original dense models?
2. Real-world Deployment: What steps or considerations are necessary for deploying D2DMoE models in real-world applications, particularly in terms of latency, hardware compatibility, and integration with existing pipelines?
3. Adversarial Robustness: Have you evaluated the robustness of D2DMoE models against adversarial attacks? If not, do you have any plans to explore this aspect in future work?
4. Comparison with Pruning Techniques: How does D2DMoE compare with other model pruning or compression techniques in terms of performance, computational savings, and implementation complexity?
5. Scalability: Can the proposed method be effectively scaled to extremely large models like GPT-3? Are there any specific challenges or considerations when applying D2DMoE to such models?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The paper does not sufficiently address potential challenges in real-world deployment, such as latency variations, hardware compatibility, and integration with existing systems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time spent on our work. We are glad that the reviewer recognizes the proper motivation of our method and the thoroughness of our experiments. Below, we address the concerns raised by the reviewer.
### Weaknesses
> The most important contribution is the dynamic-k routing algorithms. However, the main work is a small modification to MoEfication. The difference of the proposed method with MoE-based methods is limited.
The contributions of our work are not only dynamic-k gating, but also the sparsification phase, expert contribution routing, and extension to MHA projections. All these contributions improve upon the baseline method (Figure 6a). Figures 2a and 6b show the impact of the sparsification phase on the model performance. Regression routing significantly improves the performance on Gemma2B, even when applied in isolation to MoEfication (see Appendix C), while the baseline method performs very badly in this setting. This indicates our method is more robust when scaling to larger models (note how our routing significantly differs from the MoEfication objective, as explained in Appendix A).
> The major advantage of D2DMoE compared with MoEfication is saving inference cost. However, the saved cost is quite limited observed from fig. 4.
We respectfully disagree with the reviewer’s claim that the cost savings with our method are insignificant. Note how our method matches the performance of the dense model using around 60% (ViT), 30% (BERT), 80% (GPT-2) or 70% (Gemma2B) of its computation, significantly reducing the cost when compared to the MoEfication baseline.
> The experiments are not conducted well. This paper lacks a comprehensive comparison with other methods. There exist many MoE-based VIT methods, such as [1, 2].
> This paper claims they outperforms existing approaches on common NLP and vision tasks, however, only two methods are compared.
The references provided by the reviewer are MoE-based methods for multi-task learning, and therefore are inapplicable in our setting. Our setting assumes a pre-trained dense model checkpoint, and MoEfication is the only MoE-based method fulfilling this condition that we are aware of.
To strengthen the evaluation of our method, we implement dynamic inference method A-ViT [1] and include it as an additional baseline. A-ViT is spatially-adaptive in terms of its computational effort spent on an input image, similarly to our method (Figure 5a). Moreover, it also starts with a pre-trained dense model checkpoint, and thus is applicable in our setting. We provide the results of the ViT experiment updated with A-ViT baseline in Figure 2 in the rebuttal PDF. Our method outperforms the A-ViT baseline.
> Furthermore, why do the authors not provide any performance numbers in the table?
We had considered plots a more informative way to show the performance of our method. However, following the reviewer's suggestion, we have added the corresponding tables to the appendix of our current revision.
### Questions
> Training Overhead: (...)
We provide those numbers in multiple places in the paper (e.g. L101-102; L193-194; Appendix H). Overall, this overhead is less than 1% of the overall training cost.
> Adversarial Robustness: (...)
The adversarial robustness of dynamic inference methods is an issue that warrants an in-depth investigation in the form of a separate paper, as in Hong et al. [2]. We consider it too complex to fit within the scope of our work, but it’s an interesting and understudied future work direction.
> Comparison with Pruning Techniques: (...)
D2DMoE is not a pruning or compression method, and therefore we do not consider such methods as suitable for direct comparison. We rather see model compression techniques as a complementary approach that can be applied alongside our method (see Section 5.6).
> Scalability: (...)
We do not see any specific issues that would prevent the application of our method to larger models, aside from the limited computational resources that prevent us from running such experiments.
> The paper does not sufficiently address potential challenges in real-world deployment, such as latency variations, hardware compatibility, and integration with existing systems.
> Real-world Deployment: (...)
We did not list deployment limitations as they are not specific to our method but are general across dynamic inference methods. However, we followed the reviewer's suggestion and listed them in our paper to make sure that the reader understands them:
1. In contrast to static models, latency depends on the sample. In our case, it is upper-bounded by the latency of executing the forward pass with every token being routed to every expert.
2. The software has to support dynamic computational graphs.
3. Hardware compatibility is dependent on the method. We provide the wall-clock time measurement results for our implementation in the rebuttal PDF.
We would like to thank the reviewer again for their time spent reviewing our work. We hope that our answers resolve the reviewer's concerns, and we are open to further discussion in case of any further questions. We also kindly ask the reviewer to reassess our work and adjust the score accordingly after taking into consideration our answers and the additional experiments from the response PDF.
##### References
[1] Yin, Hongxu, et al. "A-vit: Adaptive tokens for efficient vision transformer." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Hong, Sanghyun, et al. "A panda? no, it's a sloth: Slowdown attacks on adaptive multi-exit neural network inference." arXiv preprint arXiv:2010.02432 (2020).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their time and effort in giving me a detailed response. However, I still think the dynamic-k routing is a slight modification of MoE's top-k routing. Furthermore, from Fig. 4, we cannot observe the performance directly. Using the loss value to represent performance is not a good choice. Although [1, 2] are multi-task methods, they all provide pretrained models. The authors should attempt to compare with them because the proposed method is quite similar to [1, 2].
---
Rebuttal 2:
Comment: We thank the reviewer for the response. While the brevity of their comment suggests that they may have already formed a definitive opinion about our work, we believe it is important to clarify our view to other participants. Apart from addressing the raised concerns, we also want to highlight how they have been presented. In particular, we find it challenging to understand why the **reviewer's response took so long, given that it is brief, somewhat vague, and largely reiterates previously stated points**. We point out other issues in our responses below. **We respectfully ask other participants to carefully consider both the reviewer's comments and our discussion to arrive at their own conclusions regarding the validity of the raised concerns.**
> I still think the dynamic-k routing is a slight modification of MOE's top-k routering.
**The reviewer seems to disregard our discussion, where we clearly state that dynamic-k is not our only contribution.** We also emphasize that reviewer uQnr praised the novelty of our router training scheme.
We have provided a clear justification for the introduction of dynamic-k. This modification is evidently not that obvious given that dynamic-k was not adopted in earlier works. We demonstrate that static top-k gating is inappropriate for MoEs converted from dense models (Figure 2b) and that dynamic-k **enables consistent improvements** (Figure 4, Figure 5a).
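To make the distinction concrete: with static top-k gating every token executes exactly k experts, whereas dynamic-k keeps, per token, however many experts clear a threshold on the router's predicted expert-output norms. The sketch below is our reading of the thread's description; the exact thresholding rule and names are illustrative, not the paper's formulation:

```python
import numpy as np

def dynamic_k_select(predicted_norms, tau):
    """Per-token expert selection: keep every expert whose router-predicted
    output norm is at least a fraction tau of the largest predicted norm,
    so the number of executed experts varies from token to token."""
    norms = np.asarray(predicted_norms, dtype=float)
    return np.flatnonzero(norms >= tau * norms.max())

# Two tokens, four experts: the first token executes two experts, the second only one.
token_a = dynamic_k_select([0.9, 0.05, 0.4, 0.01], tau=0.3)   # -> experts 0 and 2
token_b = dynamic_k_select([0.1, 0.02, 0.95, 0.03], tau=0.3)  # -> expert 2 only
```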
We also highlight that many influential ideas, such as the GELU activation function [1], mixup augmentation [2], dropout [3], or expert choice routing for MoEs [4] could be seen as __slight modifications__. Straightforward, yet well-motivated methods that provide consistent improvements are crucial for advancing the field, and simplicity often facilitates broader adoption.
> Furthermore, from Fig. 4, we cannot observe the performance directly. Using the loss value to represent performance is not a good choice.
**The reviewer initially described the cost savings shown in Fig. 4 as minimal. However, following our rebuttal, the reviewer apparently no longer seems to contest the improvement shown in the figure.** Instead, they raise a new concern shortly before the conclusion of the discussion period.
We emphasize that the **reviewer’s statement is inaccurate, as we do report accuracy for classification tasks**. We use test loss only for LLM evaluation, as done in multiple other works. In particular, a significant portion of the **industry and research community heavily invests in research on scaling laws [5, 6]. These studies focus on improving model loss**, which indicates that it is widely considered as an important indicator of model performance.
To further support our claim, **we provide a downstream evaluation of our Gemma models on the BoolQ dataset**. We take the base model, which achieves 68.40% accuracy, and convert it to MoE with D2DMoE and MoEfication and perform zero-shot evaluation. In the table below, we report **relative accuracy** of the models at different compute budgets. Similarly to the loss in Figure 4d, **our method largely retains the performance** across multiple compute budgets, while the **performance of MoEfication decreases significantly**.
| Compute budget | 100% | 90% | 80% | 70% | 60% | 50% | 25% | 10% |
|----------------|---------|--------|--------|--------|--------|--------|--------|--------|
| D2DMoE | 100.00% | 99.68% | 99.37% | 98.69% | 97.60% | 94.34% | 92.75% | 90.89% |
| MoEfication | 100.00% | 92.24% | 92.19% | 92.15% | 88.79% | 75.40% | 86.70% | 77.53% |
> Although [1,2] are multi-task methods, they all provide pretrained models. The authors should attempt to compare them because the proposed method is quite similar to [1,2].
We still consider the fact that the referenced works are multi-task learning solutions a fundamental issue that renders any fair comparison impossible. The reviewer's response and their original review **fail to clarify how the referenced works are similar or relevant to our work**, with the only superficial connection being that they also use the MoE architecture. [7] targets the Taskonomy and PASCAL-Context datasets, and [8] targets the NYUD-v2 and PASCAL-Context datasets (all of them exclusively multi-task datasets). Neither of these works includes evaluations on baselines that are even remotely similar to ours, nor does our method target multi-task learning. Even though [7, 8] provide pre-trained models, a comparison of models trained on different datasets **would mostly reflect performance differences stemming from the use of different training data** rather than those inherent to the method.
We include the references in a separate response.
---
Rebuttal 3:
Comment: #### References
[1] Hendrycks, Dan, and Kevin Gimpel. "Gaussian error linear units (gelus)." arXiv preprint arXiv:1606.08415 (2016).
[2] Zhang, Hongyi, et al. "mixup: Beyond empirical risk minimization." arXiv preprint arXiv:1710.09412 (2017).
[3] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." The journal of machine learning research 15.1 (2014): 1929-1958.
[4] Zhou, Yanqi, et al. "Mixture-of-experts with expert choice routing." Advances in Neural Information Processing Systems 35 (2022): 7103-7114.
[5] Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv preprint arXiv:2001.08361 (2020).
[6] Clark, Aidan, et al. "Unified scaling laws for routed language models." International conference on machine learning. PMLR, 2022.
[7] Chen, Zitian, et al. "Mod-squad: Designing mixtures of experts as modular multi-task learners." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[8] Fan, Zhiwen, et al. "M^3vit: Mixture-of-experts vision transformer for efficient multi-task learning with model-accelerator co-design." Advances in Neural Information Processing Systems 35 (2022): 28441-28457.
---
Rebuttal 4:
Title: Further clarification on differences between D2DMoE and multi-task methods provided by the reviewer
Comment: To further emphasize the key differences between multi-task methods referenced in [7,8] and our work, we outline them below:
* Our goal is to reduce inference cost by skipping redundant computation during model inference through the MoE framework. In contrast, the referenced works employ MoE primarily to solve the multi-task problem, with **efficiency being only a secondary concern**.
* Both referenced methods use a classic formulation of MoE adapted for the multi-task setting (e.g. [7] optimizes mutual information between experts and tasks). **None of these works mentions activation sparsity or anything similar to our router training scheme**.
* The cited works always spend the **same amount of compute for each sample** (the classic MoE formulation), with differences present only between tasks for [7]. **In contrast, our dynamic-k gating adjusts the computational effort of the model on a per-sample and per-token basis**.
* Our method allows for the **adjustment of computational cost after training** and during deployment through a change of the $\tau$ hyperparameter. In contrast, [7,8] must retrain the model each time to target a different budget. While this may not be immediately obvious from the FLOPs vs accuracy plots in our work and the referenced works, we consider this a vital property of dynamic inference methods.
* Finally, **we target scenarios with limited computational resources** for the adaptation of a pre-trained model, while the references provided by the reviewer require a **significant compute budget to train models from scratch** (e.g. [7] reports training for 80 hours with 240 NVIDIA V100 GPUs). | Summary: The paper presents Dense to Dynamic-k Mixture-of-Experts (D2DMoE), a method to convert a dense model into an MoE model that exploits the fact that activations in Transformer models are typically very sparse. The sparsity can be further improved by adding square Hoyer regularization during a light fine-tuning phase. Since the sparsity pattern changes across examples, a dynamic selection rule is preferred. The strategy used to convert the dense model into an MoE is almost identical to that of Zhang et al. from 2022, but the loss function used to train the router is different. The router in this paper is trained to predict the norm of the output of the corresponding expert (and the experts with an expected l2 norm over a certain dynamic threshold are selected). Finally, the present work MoEfies not only the FFN layers, as usual, but also all the linear layers in the MHA (for that, each linear layer is converted into an MLP of the same cost, minimizing the l2 error, similar to distillation).
The proposed approach is evaluated on different architectures (ViT, BERT, GPT2, Gemma) and benchmarks (including both vision and language). In all cases the proposed approach offers a better FLOP-quality trade-off than that of Zhang et al. (2022), and achieves quality similar to that of the original dense baseline at significantly fewer FLOPs.
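To make the dynamic selection rule from the summary concrete, here is a minimal numpy sketch. The relative-to-max threshold rule (select experts whose router-predicted norm exceeds `tau` times the largest predicted norm) is an illustrative assumption, not necessarily the paper's exact formulation:

```python
import numpy as np

def dynamic_k_select(predicted_norms: np.ndarray, tau: float) -> np.ndarray:
    """Select experts whose router-predicted output norm exceeds a
    dynamic threshold (assumed rule: tau * largest predicted norm)."""
    threshold = tau * predicted_norms.max()
    return np.flatnonzero(predicted_norms >= threshold)

# A token where two experts dominate: only those two are executed,
# so k varies per token instead of being a fixed hyperparameter.
norms = np.array([0.9, 0.05, 0.4, 0.01])
print(dynamic_k_select(norms, tau=0.3))  # [0 2]
```

Note how a token with a flatter norm profile would select more experts under the same `tau`, which is the per-token compute adjustment the summary describes.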
Strengths: - The paper is very well structured and written. All the steps involved in the proposed method are clearly explained in the main paper, or briefly explained first and then thoroughly documented in the appendix.
- Figure 5a shows that the step done to improve sparsification is clearly useful. Although this doesn't show how the quality of the resulting model is affected, figure 4 shows that the quality of the original dense model is matched, or very nearly matched, with significantly fewer FLOPs (and better FLOP-quality trade-off than the MoEfication proposed by Zhang et al. in 2022).
- Figure 5b shows that more FLOPs (i.e. experts) are allocated to image regions with higher semantic meaning, as one would intuitively expect. This might not be a strength if one only cares about FLOP vs quality, but it certainly makes the approach more interpretable.
Weaknesses: - The use of MoEs trained from scratch is criticized because of the training difficulties they entail. However, the training recipes for these have improved significantly in the past two or three years, and MoEs are now a component of many state-of-the-art models (e.g. [Mixtral](https://mistral.ai/news/mixtral-of-experts/), [DeepSeekMoE](https://arxiv.org/abs/2401.06066), [Gemini 1.5](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf)). Thus, I believe that these should definitely be considered as a baseline; otherwise the potential impact of this work is more limited.
- The proposed method involves many steps. They are well documented (see strengths), but they make the whole approach a bit cumbersome, in my opinion. Alternatively, I wonder how the dynamic routing, the sparsity-inducing regularization, and the other components used would perform when training a model from scratch.
- As usual, one must be careful when doing only FLOP vs quality comparisons, since the proposed method, especially the dynamic selection of experts, may be hard to implement efficiently on modern hardware (GPUs, TPUs). I couldn't find this discussed anywhere in the paper, so I would appreciate more clarity on this (perhaps even additional plots showing Runtime vs quality, even if only in the appendix).
Technical Quality: 2
Clarity: 4
Questions for Authors: See the comments mentioned in the weaknesses.
Confidence: 5
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: No particular ethical considerations affecting this work, in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for assessing our work and recognizing that our paper is well-written. Below, we address the reviewer's concerns.
> The use of MoEs trained from scratch is criticized because of the training difficulties they entail. However, the training recipes for these have improved significantly in the past two or three years, and MoEs are now a component of many state-of-the-art models (e.g. Mixtral, DeepSeekMoE, Gemini 1.5). Thus, I believe that these should definitely be considered as a baseline; otherwise the potential impact of this work is more limited.
MoE models trained from scratch may at first seem like an appropriate baseline due to their similar architecture. However, note that the primary goal of training an MoE from scratch is to scale the parameter count of the model without affecting its *training* cost. In contrast, our aim is to reduce the *inference* cost of existing pre-trained models by skipping redundant computations while preserving the model size. All our baselines start from the same pre-trained model checkpoint, and we assume only a small budget for model adaptation, so we cannot conduct a fair comparison between our method and the models suggested by the reviewer.
> The proposed method involves many steps. They are well documented (see strengths), but they make the whole approach a bit cumbersome, in my opinion. Alternatively, I wonder how the dynamic routing, the sparsity-inducing regularization, and the other components used would perform when training a model from scratch.
While our method is multi-step, we show its robustness to hyperparameter changes in Figures 6b, 6d, and 11. We agree that applying our method to training MoE models from scratch is an interesting direction for future work. Expert contribution routing could be useful in augmenting the training of gating networks, which often perform similarly to fixed mappings to experts [1]. Moreover, as activation sparsity levels change as training progresses [2], one might investigate the application of dynamic-k gating for faster training. To the best of our knowledge, nothing similar to our contributions has appeared in the MoE literature before.
> As usual, one must be careful when doing only FLOP vs quality comparisons, since the proposed method, especially the dynamic selection of experts, may be hard to implement efficiently on modern hardware (GPUs, TPUs). I couldn't find this discussed anywhere in the paper, so I would appreciate more clarity on this (perhaps even additional plots showing Runtime vs quality, even if only in the appendix).
We address this concern in the joint response to all reviewers and in the rebuttal PDF in Figures 1 and 2, where we show our wall-clock time measurement results.
We would like to thank the reviewer again for their time spent reviewing our work. We hope that our answers resolve the reviewer’s concerns, and we remain open to further discussion should any questions arise. We also kindly ask the reviewer to reassess our work and adjust the score accordingly after taking into consideration our answers and the additional experiments from the response PDF.
##### References
[1] Roller, Stephen, Sainbayar Sukhbaatar, and Jason Weston. "Hash layers for large sparse models." Advances in Neural Information Processing Systems 34 (2021): 17555-17566.
[2] Wild, Cody, and Jesper Anderson. "Uncovering Layer-Dependent Activation Sparsity Patterns in ReLU Transformers." arXiv preprint arXiv:2407.07848 (2024).
---
Rebuttal Comment 1.1:
Comment: Thank you very much for addressing my concern, as well as those from other reviewers.
I am still reluctant to increase my score, since I think that the method is quite cumbersome (as I mentioned) given the goal and what it achieves. For instance, the authors have mentioned during the rebuttal that the comparison with MoEs trained from scratch is not fair because the goal of this work is "to reduce the inference cost of existing pre-trained models". If so, a really important and very popular strategy is missing as a baseline: distillation to a smaller model (dense or MoE).
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response. We understand the reviewer's concerns with distillation as a baseline, but - similarly to pruning - we see it more as a complementary model acceleration technique [1]. However, due to time constraints, we cannot provide any experiments that could demonstrate it to the reviewer and will only be able to include them in the camera-ready version.
Additionally, we would like to point out that the CoFi method [2], evaluated in combination with our method in Section 5.6, does use distillation as a part of its training scheme. Our method still allows for computational savings when applied on top of CoFi. Input-dependent activation sparsity is an inherent property of almost all models, and therefore our method should be universally applicable.
Finally, while we appreciate the thoroughness of the reviewer, to the best of our knowledge so far no papers have examined the compatibility of dynamic inference with all three major model compression methods—pruning, quantization, and distillation. In this context, our comparisons with pruning and quantization are already particularly comprehensive.
We thank the reviewer for the discussion and their suggestions that helped us improve the paper.
#### References
[1] Han, Yizeng, et al. "Dynamic neural networks: A survey." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.11 (2021): 7436-7456.
[2] Xia, Mengzhou, Zexuan Zhong, and Danqi Chen. "Structured pruning learns compact and accurate models." arXiv preprint arXiv:2204.00408 (2022). | Summary: This paper proposes a method to convert a dense pre-trained transformer network into its sparse counterpart. The approach begins by fine-tuning a pre-trained network to sparsify its activation values. Subsequently, the method clusters the neurons in the MLP layers to form distinct experts and introduces a router network. Finally, the router network is fine-tuned to predict the norm of the output of each expert. Additionally, this work explores the possibility of modifying attention layers and clustering MLP layers that use the GLU activation function. The results demonstrate that the proposed method is robust to high sparsity and outperforms the baseline models.
Strengths: 1. The computational cost of large models is a significant research problem. This work proposes a promising method that can reduce FLOPs by up to 60% without compromising performance.
2. This study conducts extensive experiments across different modalities, models, and datasets, making the results convincing.
Weaknesses: 1. This method introduces several hyperparameters during training and inference, which could potentially make it difficult to reproduce and deploy.
2. I find that Figure 2 does not justify dynamic-k very well, as most layers have a similar percentage of non-zero activations. However, this concern is alleviated after reviewing Figures 12 through 16.
3. Although this work sparsifies at a coarse granularity, which is beneficial for reducing latency, reporting wall-time would be more convincing than solely reporting FLOPs.
Technical Quality: 3
Clarity: 3
Questions for Authors: After converting the attention weight matrices, is any router network introduced to determine which head is activated? In the main paper, I only read that the attention matrices are replaced with two-layer networks with comparable parameters and cost. I didn't find any rationale provided for this conversion.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. Currently, this method is only evaluated on small networks, so it is unknown whether the proposed method can be effectively applied to larger networks.
2. Although this method leverages sparsity in activations to accelerate inference, it does not reduce the number of parameters, and may even increase them, unlike other methods such as pruning that simultaneously reduce the parameter count.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our paper. We are pleased to see that the reviewer recognizes the significance of our work and the robustness of our method across different experimental settings. Below, we address the issues raised by the reviewer:
### Weaknesses
> This method introduces several hyperparameters during training and inference, which could potentially make it difficult to reproduce and deploy.
While our method introduces several hyperparameters, we show that its performance is quite robust to hyperparameter selection (please see Figures 6b, 6d, and 11 for an analysis of the impact of different hyperparameters). We found that the staged nature of the conversion process significantly simplifies the tuning for both D2DMoE and MoEfication, as it allows us to tune and verify the performance of a stage before proceeding to the next one.
> I find that Figure 2 does not justify dynamic-k very well, as most layers have a similar percentage of non-zero activations. However, this concern is alleviated after reviewing Figures 12 through 16.
We want to clarify that the *average* (aggregated over all tokens from the entire test set) percentage of non-zero activations does not need to be different between layers for the use of dynamic-k gating to be justified. For a single FFN layer, we are interested in the variance of the number of non-zero activations. We show that this variance is significant (error bars in Figure 2b, top; we also plot the coefficient of variation in Figure 2b, bottom) for every FFN layer, hence executing the same number of experts for each input would be suboptimal.
Note that some layers do exhibit different average numbers of non-zero activations (Figure 2b, top). While one could tune $k$ for each layer separately, this would be a cumbersome process, and to the best of our knowledge, there are no works in that direction. Our dynamic-k gating allows for an uneven spread of computation throughout the depth of the model without the need for tuning any hyperparameters (Figure 5a).
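The variance argument above can be reproduced on synthetic data. The sketch below draws per-token post-ReLU activations whose sparsity level varies across tokens (the shifted-Gaussian model is purely an illustrative assumption; the real statistics are those plotted in Figure 2b) and computes the coefficient of variation of the non-zero counts:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic post-ReLU activations for 2k tokens of one FFN layer.
# Each token gets its own shift, so some tokens are much sparser
# than others -- an assumed model, for illustration only.
shifts = rng.uniform(0.5, 2.0, size=(2_000, 1))
acts = np.maximum(rng.standard_normal((2_000, 3072)) - shifts, 0.0)

nonzero = (acts > 0).sum(axis=1)          # non-zero activations per token
mean, std = nonzero.mean(), nonzero.std()
cv = std / mean                            # coefficient of variation
print(f"mean={mean:.0f}, std={std:.0f}, cv={cv:.3f}")
```

A large coefficient of variation means a fixed per-token expert count `k` would over-provision easy (sparse) tokens and under-provision hard (dense) ones, which is the case for dynamic-k gating.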
> Although this work sparsifies at a coarse granularity, which is beneficial for reducing latency, reporting wall-time would be more convincing than solely reporting FLOPs.
We address this concern in the joint response to all reviewers and in the rebuttal PDF in Figures 1 and 2, where we show our wall-clock time measurement results.
### Questions
> After converting the attention weight matrices, is any router network introduced to determine which head is activated? In the main paper, I only read that the attention matrices are replaced with two-layer networks with comparable parameters and cost. I didn't find any rationale provided for this conversion.
Please note that we do not introduce routing across the attention heads, nor do we change how attention is calculated. We only replace the projection matrices (W_q, W_k, W_v, W_o). We introduce this modification to extend the computational savings from our method to attention layers without changing the self-attention mechanism itself.
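As a rough illustration of replacing a projection matrix with a two-layer network fitted to minimize the l2 error, the sketch below uses a two-layer *linear* factorization via truncated SVD, which is optimal under the Frobenius norm. This is a stand-in assumption: the actual conversion described in the paper trains a nonlinear two-layer network of comparable cost to the original layer.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768))   # stand-in for W_q/W_k/W_v/W_o

# Best rank-r two-layer linear factorization W ~= W1 @ W2 under the
# Frobenius (l2) norm is the truncated SVD (Eckart-Young theorem).
r = 96
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W1 = U[:, :r] * S[:r]   # first layer weights:  (768, 96)
W2 = Vt[:r]             # second layer weights: (96, 768)

rel_err = np.linalg.norm(W - W1 @ W2) / np.linalg.norm(W)
print(f"relative l2 error at rank {r}: {rel_err:.3f}")
```

Once a projection is expressed as a two-layer network, its hidden neurons can be grouped into experts and gated exactly like an FFN layer, which is what extends the savings to attention.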
### Limitations
> Currently, this method is only evaluated on small networks, so it is unknown whether the proposed method can be effectively applied to larger networks.
While our evaluation is done mostly on small-scale models, please note that we also demonstrate that our method works well with the 2B-parameter Gemma model. Our computational budget does not allow for larger models, and as such we leave the exploration of this question for future work. However, since bigger models are more overparameterized and exhibit higher activation sparsity, we do not see any reason why our method would not scale further. Finally, we see more improvement upon the baseline when D2DMoE is applied to the larger model (Gemma 2B) than when applied to smaller models; this suggests that our method scales better than the baseline (see Appendix C for a discussion of why).
> Although this method leverages sparsity in activations to accelerate inference, it does not reduce the number of parameters, and may even increase them, unlike other methods such as pruning that simultaneously reduce the parameter count.
While this is true, the parameter overhead from our method is very low. We consider model compression methods such as pruning to be complementary to D2DMoE (see Section 5.6).
We would like to thank the reviewer again for their time spent reviewing our work. We hope that our answers resolve the reviewer’s concerns, and we remain open to further discussion should any questions arise. We also kindly ask the reviewer to reassess our work and adjust the score accordingly after taking into consideration our answers and the additional experiments from the response PDF.
---
Rebuttal Comment 1.1:
Title: Response by Reviewer
Comment: Thank the authors for providing further clarification.
I have read the rebuttal and other reviews. I decide to maintain my original score of 6.
---
Rebuttal 2:
Comment: We thank the reviewer for the feedback and the discussion. | Summary: This paper improves over MoEfication with four innovations: (i) enforcing higher activation sparsity; (ii) directly predict the norm of the output of each expert; (iii) dynamic-k expert selection scheme; and (iv) generalization to any standalone linear layer. The resulting method achieves significant improvements in terms of cost-vs-performance trade-offs in text classification, image classification, and language modeling.
Strengths: - The proposed method is a nice addition of MoEfication, in terms of both performance and generalization (to any standalone linear layer). It makes this kind of method more usable in practice.
- I like the idea of directly predicting the norm so as to achieve dynamic-k expert selection. This dynamic scheme is novel and bridges the fields of dynamic inference and MoE.
- The experiments illustrate that the proposed method is applicable in various settings, including text classification, image classification, and language modeling.
Weaknesses: - The baseline seems weak. ZTW only receives a few citations and is not published at a top machine learning conference/journal.
- Only FLOPs are reported, and they are a poor indicator of practical latency/throughput, especially in the LLM era.
- It is not clear if the proposed method could be combined with other methods to show its broader applicability. In particular, quantization is a very general method and is almost adopted as a default in many practical applications.
- The writing could be improved. For example, more technical details of MoEfication should be included.
- In all figures, the lines are continuous. However, these lines must be the result of interpolation; the original point values are missing.
Technical Quality: 4
Clarity: 3
Questions for Authors: NA
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Please see weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our paper. We are pleased that the reviewer recognizes the novelty, applicability, and performance of our method. Below, we address the issues raised by the reviewer.
> The baseline seems weak. ZTW only receives a few citations and is not published at a top machine learning conference/journal.
ZTW [1] was published at NeurIPS 2021, and its extended version [2] was published in Neural Networks in 2023. We apologize for the confusion. We cited the extension believing it to be more relevant due to being more recent, but in our new revision we cite both versions of that paper.
> Only FLOPs are reported and it is a poor indicator of practical latency/throughput, especially in the LLM era.
We address this concern in the joint response to all reviewers and in the rebuttal PDF in Figures 1 and 2, where we show our wall-clock time measurement results.
> It is not clear if the proposed method could be combined with other methods to show its broader applicability. In particular, quantization is a very general method and is almost adopted as a default in many practical applications.
Multiple studies such as [4,5,6,7,8] show that dynamic computation methods combine well with quantization and pruning. In Section 5.6, we demonstrate that our method integrates effectively with the existing structured pruning method CoFi [3]. For the reviewer's convenience, we perform a similar experiment with 8-bit and 16-bit dynamic quantization and provide the results in the rebuttal PDF. Our method integrates seamlessly with quantization.
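For intuition on why quantization composes cleanly with expert-level sparsity, here is a minimal sketch of symmetric int8 weight quantization. The absmax scaling scheme is an assumption for illustration; the exact dynamic quantization scheme used in the rebuttal experiments may differ.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization with absmax scaling
    (an illustrative assumption, not the paper's exact scheme)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # one expert's weights
q, scale = quantize_int8(w)
max_err = np.abs(w - q.astype(np.float32) * scale).max()
print(f"scale={scale:.4f}, max abs error={max_err:.4f}")
```

Because quantization acts on each expert's weights independently, it is orthogonal to the gating decisions, which is why the two techniques stack in practice.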
> The writing could be improved. For example, more technical details of MoEfication should be included.
Following the reviewer's suggestion, we have included a more detailed description of MoEfication in the paper.
> In all figures, the lines are continuous. However, these lines should have been a result of interpolation. The original point values are missing.
We changed the plots to add points as suggested by the reviewer.
We would like to thank the reviewer again for their time spent reviewing our work. We hope that our answers resolve the reviewer’s concerns, and we remain open to further discussion should any questions arise. We also kindly ask the reviewer to reassess our work and adjust the score accordingly after taking into consideration our answers and the additional experiments from the response PDF.
##### References
[1] "Zero time waste: Recycling predictions in early exit neural networks.", Wołczyk et al., NeurIPS2021.
[2] "Zero time waste in pre-trained early exit neural networks.", Wójcik et al, Neural Networks 168 (2023).
[3] "Structured Pruning Learns Compact and Accurate Models.", Xia et al., ACL2022.
[4] "Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness.", Young, arXiv 2023.
[5] "The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models.", Kurtic et al., EMNLP 2022.
[6] “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding”, Han et al., ICLR2016.
[7] “McQueen: Mixed Precision Quantization of Early Exit Networks”, Saxena et al., BMVC2023.
[8] “Two sparsities are better than one: unlocking the performance benefits of sparse–sparse networks.”, Hunter et al., Neuromorphic Computing and Engineering 2022 Volume 2 Number 3.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply and the additional experiments. Most of my concerns are addressed except two:
- Although the baseline is from a top conference, it seems too old. Additional experiments with newer baselines may not be necessary during this rebuttal period, but I highly recommend trying out more recent ones.
- If your method does not hurt accuracy, and since quantization usually does not hurt accuracy too much, I'm not surprised that accuracy is maintained when combining your method with quantization. However, this may not be true for latency. Therefore, it's better to report both accuracy and latency when combining your method with quantization.
---
Rebuttal 2:
Comment: We appreciate the reviewer’s involvement and are grateful for a very swift response.
> Although the baseline is from a top conference, it seems too old. (...)
The extension paper from November 2023 demonstrates that ZTW remains a SOTA early exit method and outperforms other baselines such as GPF [1] (ACL2021) and L2W [2] (ECCV2022).
In the response to reviewer 85BP we have also added A-ViT [3] (CVPR2022), with the results available in Figure 3 of the rebuttal PDF.
Overall, we compare our method against four baselines (ZTW, MoEfication[4], CoFi[5], A-ViT) and conduct experiments on four datasets from different modalities. This surpasses the evaluation scope of other established works on dynamic architectures published at top conferences, which often evaluate on fewer datasets and baselines or compare against static models instead ([3], CVPR2022; [4], ACL 2022; [5], CVPR2023; [6], EMNLP2023; [7], NeurIPS2021; [8], NeurIPS2022; [9] ICLR2023).
> (...) it's better to report both accuracy and latency, when combining your method with quantization.
To show that quantization reduces the latency of D2DMoE, we modify our kernels to handle `float16` and `int8` data types. We perform an experiment similar to the one from Figure 1 of the rebuttal PDF: 1) we sample gating decisions from the Bernoulli distribution with probability $p$; 2) we measure the execution time of our experts for the three data type variants. We present the latency (ms) in the tables below:
### RTX 4090
| p | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|---------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| float32 | 0.005 | 0.009 | 0.013 | 0.018 | 0.023 | 0.028 | 0.033 | 0.038 | 0.042 | 0.047 | 0.052 |
| float16 | 0.004 | 0.005 | 0.007 | 0.009 | 0.011 | 0.014 | 0.016 | 0.018 | 0.021 | 0.024 | 0.027 |
| int8 | 0.004 | 0.004 | 0.005 | 0.007 | 0.008 | 0.009 | 0.010 | 0.011 | 0.012 | 0.013 | 0.014 |
### A100
| p | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|---------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| float32 | 0.006 | 0.009 | 0.012 | 0.015 | 0.019 | 0.022 | 0.025 | 0.028 | 0.031 | 0.035 | 0.038 |
| float16 | 0.006 | 0.007 | 0.008 | 0.010 | 0.011 | 0.013 | 0.014 | 0.016 | 0.017 | 0.019 | 0.021 |
| int8 | 0.007 | 0.008 | 0.009 | 0.010 | 0.011 | 0.012 | 0.014 | 0.015 | 0.016 | 0.017 | 0.019 |
The results show that both the higher activation sparsity (lower $p$) of our method and lower-precision data types are complementary in terms of wall-clock time reduction. While we see a smaller improvement from using `int8` over `float16` on A100, we attribute this to differences between GPU architectures and software support for low-precision arithmetic.
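The measurement procedure described above can be sketched in plain Python. This is a CPU/numpy stand-in for the Triton GPU kernels, and the matrix sizes, expert counts, and repetition counts are illustrative assumptions:

```python
import time
import numpy as np

def bench_experts(p, n_experts=16, d=256, d_exp=64, n_tokens=128,
                  dtype=np.float32, reps=3, seed=0):
    """Sample Bernoulli(p) gating decisions, then time only the
    matmuls of the selected (token, expert) pairs."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_tokens, d)).astype(dtype)
    ws = [rng.standard_normal((d, d_exp)).astype(dtype)
          for _ in range(n_experts)]
    gates = rng.random((n_tokens, n_experts)) < p  # Bernoulli(p) gating
    t0 = time.perf_counter()
    for _ in range(reps):
        for e, w in enumerate(ws):
            sel = gates[:, e]
            if sel.any():
                _ = x[sel] @ w   # run expert e on its gated tokens only
    return (time.perf_counter() - t0) / reps

for p in (0.0, 0.5, 1.0):
    print(f"p={p}: {bench_experts(p) * 1e3:.3f} ms")
```

As in the tables above, lower gating probability (higher activation sparsity) directly reduces wall-clock time, since skipped experts cost nothing beyond the gating check.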
##### References
[1] Liao, Kaiyuan, et al. "A global past-future early exit method for accelerating inference of pre-trained language models." Proceedings of the 2021 conference of the north american chapter of the association for computational linguistics: Human language technologies. 2021.
[2] Han, Yizeng, et al. "Learning to weight samples for dynamic early-exiting networks." European conference on computer vision. Cham: Springer Nature Switzerland, 2022.
[3] Yin, Hongxu, et al. "A-vit: Adaptive tokens for efficient vision transformer." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[4] Zhang, Zhengyan, et al. "MoEfication: Transformer Feed-forward Layers are Mixtures of Experts." Findings of the Association for Computational Linguistics: ACL 2022. 2022.
[5] Chen, Xuanyao, et al. "Sparsevit: Revisiting activation sparsity for efficient high-resolution vision transformer." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[6] Tan, Shawn, et al. "Sparse Universal Transformer." Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.
[7] Rao, Yongming, et al. "Dynamicvit: Efficient vision transformers with dynamic token sparsification." Advances in neural information processing systems 34 (2021): 13937-13949.
[8] Schuster, Tal, et al. "Confident adaptive language modeling." Advances in Neural Information Processing Systems 35 (2022): 17456-17472.
[9] Chataoui, Joud, and Mark Coates. "Jointly-Learned Exit and Inference for a Dynamic Neural Network." The Twelfth International Conference on Learning Representations. 2023.
---
Rebuttal Comment 2.1:
Comment: Thank you for conducting the additional experiments. I don’t see any major concerns that would result in a rejection of this paper. However, after reading the comments from the other reviewers, I think there is still room for improvement in experiments. As a result, I decide to maintain my current score.
---
Rebuttal 3:
Comment: We appreciate the reviewer's response, active participation in the discussion, and support for our work. Their feedback has been valuable in helping us improve our paper. | Rebuttal 1:
Rebuttal: We thank all the reviewers for the time and effort spent on our work and their valuable comments that helped us improve our work. We appreciate that our work has been praised by the reviewers for its thorough empirical evaluation (reviewers VwY8, R1eJ), novelty (reviewer uQnr), the generality of our method (reviewer uQnr), good empirical results (reviewers VwY8, uQnr, and R1eJ), and the paper being well-written (v3M2).
## Rebuttal summary
We highlight the main points that changed in our new revision thanks to the valuable feedback that we received from the reviewers:
- Reviewers VwY8, uQnr, R1eJ, and v3M2 requested wall-clock time measurements of the performance of our method. To strengthen our work we provide an efficient implementation of D2DMoE and show its performance in the rebuttal PDF. In Figure 1, we show that FLOPs for D2DMoE correspond to actual latency and that our method can significantly speed up the inference (63% reduction of wall-clock time with a negligible performance drop). Moreover, in Figure 2 we show end-to-end accuracy vs latency plots that show D2DMoE can reduce the end-to-end computation time by around 25% without any performance drop. We consider our efficient implementation of D2DMoE to be a strong technical contribution of our work.
- Reviewer 85BP requested additional baselines. As a result, we add A-ViT [1] into our comparison. A-ViT is a dynamic inference method that saves compute by dropping tokens. The results are presented in Figure 3 of the rebuttal PDF. D2DMoE outperforms this baseline as well.
- Reviewer uQnr asked about the integration of D2DMoE with quantization. Therefore, in Figure 4 of the rebuttal PDF, we provide an additional experiment with quantization. We show that our method integrates seamlessly with 16-bit and 8-bit quantization, further proving the robustness of our method.
Below, we elaborate on wall-clock measurements and our rationale for using FLOPs in more detail.
## Wall-clock time measurement results
We implement the forward pass of our MoE module using GPU kernels written in Triton [2] and employ several optimizations for our implementation, including an efficient memory access pattern, kernel fusion, and configuration auto-tuning. As suggested by Tan et al. [4], our implementation also avoids unnecessary copies when grouping tokens.
We verify the performance of our implementation for a single FFN MoE layer in isolation by feeding it with synthetic random data. We measure the execution time of our layer for a range of different numbers of experts executed on average and compare it to the corresponding MLP module. The results, presented in Figure 1 of the rebuttal PDF, show that our implementation has almost no overhead, and that FLOPs for D2DMoE perfectly correlate with the wall-clock time of execution.
Furthermore, we want to ensure that these gains translate to real-world usage speedups. We therefore use our new implementation in a ViT-B-converted D2DMoE model and measure the average sample processing time on the entire ImageNet-1k test set. As suggested by reviewer v3M2, in Figure 2 of the rebuttal PDF we plot the accuracy vs wall-clock time trade-off of our method. While a small overhead when compared to the ViT-B baseline is visible, our method still allows for significant savings. This shows the speed-up potential of D2DMoE in practice.
We have added the above wall-clock measurement experiments to our paper, along with a detailed description of our Triton implementation and its source code.
## FLOPs rationale
Finally, we provide our reasoning for using FLOPs as the main metric in our work instead of using wall-clock time measurements:
1. Wall-clock measurements are heavily affected by the choice of device hardware. Even if the device model (i.e. GPU model) is the same, the environment (cooling efficiency, temperature, package versions, I/O load by other users) may be different and can significantly affect the result. As such, wall-clock time does not allow for reliable comparisons to results presented in other works.
2. Implementing efficient GPU kernels is often non-trivial, and entire papers have been published with their sole contribution being an efficient implementation of MoE [2, 3]. Latencies obtained with poorly implemented and slow kernels could suggest to the reader that the method is GPU-unfriendly, even if it is a problem of only that specific implementation. We emphasize the rapid progress in research on efficient MoE implementations, exemplified by the recent work of Tan et al. [4], which demonstrates significant advancements compared to Gale et al. [3].
3. Finally, as discussed in "The hardware lottery" paper by Hooker, S. [5], compatibility with the current hardware should not overshadow the ideas proposed in research papers. FLOPs, as a metric for computational complexity, are independent of hardware choice and can be used to compare algorithms rather than their specific implementations.
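To illustrate why FLOPs serve as a hardware-independent proxy here: for an MoE-converted FFN, FLOPs scale directly with the average number of executed experts. The sketch below is a back-of-the-envelope illustration only; the ViT-B-like dimensions, the 32-expert split, and the 2-multiply-adds-per-weight convention are our illustrative assumptions, not numbers taken from the paper.

```python
def ffn_flops(d_model, d_hidden):
    # Two matmuls per token: d_model -> d_hidden -> d_model,
    # counting ~2 FLOPs (multiply + add) per weight.
    return 2 * (d_model * d_hidden + d_hidden * d_model)

def moe_ffn_flops(d_model, d_expert, avg_experts):
    # Each executed expert is a small FFN with hidden width d_expert;
    # a dynamic router executes avg_experts of them on average.
    return avg_experts * ffn_flops(d_model, d_expert)

# ViT-B-like FFN (d_model=768, d_hidden=3072) split into 32 experts of width 96.
dense = ffn_flops(768, 3072)
moe = moe_ffn_flops(768, 96, 12)  # assume 12 of 32 experts run on average
```

Under these assumptions the MoE layer costs 12/32 = 0.375 of the dense layer's FLOPs, which is why an efficient kernel implementation (as measured in Figures 1 and 2) can translate the FLOP reduction into wall-clock savings.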
We thank the reviewers for their valuable insights. We hope that our response and additional experiments fully address the concerns of the reviewers, and we are open to further discussion should reviewers have any additional inquiries.
##### References
[1] Yin, Hongxu, et al. "A-vit: Adaptive tokens for efficient vision transformer." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Tillet, Philippe, Hsiang-Tsung Kung, and David Cox. "Triton: an intermediate language and compiler for tiled neural network computations." Proceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages. 2019.
[3] Gale, Trevor, et al. "Megablocks: Efficient sparse training with mixture-of-experts." Proceedings of Machine Learning and Systems 5 (2023).
[4] Tan, Shawn, et al. "Scattered Mixture-of-Experts Implementation." arXiv preprint arXiv:2403.08245 (2024).
[5] Hooker, Sara. "The hardware lottery." Communications of the ACM 64.12 (2021): 58-65.
Pdf: /pdf/6c0ee369a996f66edec54b480d71929f3627ac7f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces a method called D2DMoE aimed at enhancing the efficiency of transformer models. D2DMoE implements a dynamic-k routing mechanism that allows the model to select a variable number of experts based on the input. The method leverages the inherent activation sparsity in transformer models to reduce the number of active parameters during inference, leading to significant computational savings—up to 60%—without compromising performance. The approach demonstrates that by converting dense layers into Mixture-of-Experts (MoE) layers, transformer models can achieve better accuracy-sparsity trade-offs, making them more efficient for various NLP and vision tasks.
Strengths: The paper presents a thorough empirical evaluation across multiple tasks (image classification, text classification, language modeling) and model architectures (ViT, BERT, GPT-2, Gemma). The experiments compare against relevant baselines and demonstrate consistent improvements.
Weaknesses: In my view, this paper appears to be primarily a repackaged sparse-activated pruning technique stemming from the MoE concept. Several concerns arise:
1. Limited comparison with alternative sparsification methods: The paper predominantly contrasts with MoEfication, neglecting a thorough analysis against other compression or sparsification strategies beyond early-exit techniques. Consequently, the comparative experiments presented lack depth and fail to be fully convincing.
2. Lack of practical acceleration results: The paper only presents theoretical reductions in FLOPs and parameter counts. Without actual inference acceleration results on real hardware (e.g., V100, H100, GTX-4090Ti), it's impossible to assess the practical benefits of the method. The additional overhead from the gating mechanism could potentially negate some of the theoretical gains.
3. Questionable novelty of Dynamic-k gating: The proposed expert selection based on ℓ2-norm of output is indeed more reminiscent of pruning techniques than traditional MoE approaches. This calls into question the novelty of the method when viewed in the context of pruning literature, and makes the comparisons with MoE methods potentially unfair or irrelevant.
4. Limited novelty and inadequate discussion of related work: Many of the proposed operations bear similarities to existing sparse activation pruning methods. The paper fails to adequately discuss these connections, instead focusing on less relevant work. This omission of crucial related work in pruning literature significantly undermines the claimed novelty of the approach.
5. Lack of discussion of some SOTA sparse methods: such as Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models. ICML2024.
6. Theoretical foundation: The paper lacks a strong theoretical justification for why the proposed methods work better, which becomes even more critical given the concerns about its relationship to existing pruning techniques.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time spent reviewing our work. We are grateful that the reviewer recognizes that our paper presents a thorough empirical evaluation and compares the proposed method against relevant baselines. Below, we present the responses to the issues listed by the reviewer.
> In my view, this paper appears to be primarily a repackaged sparse-activated pruning technique stemming from the MoE concept.
> Limited comparison with alternative sparsification methods: (...)
> Lack of discussion of some SOTA sparse methods: (...)
We would like to highlight that our method is not a pruning technique, but a dynamic inference method. Dynamic inference methods (also known as conditional computation or adaptive inference) [8] are a distinct area of research fundamentally different from pruning approaches. We list the exact differences below:
1) Pruning focuses on model **weights**, while dynamic inference results in sparsity in model **activations**.
2) During inference, the pruned model **statically** uses the same subset of weights for all inputs. In contrast, dynamic inference methods **dynamically** select an appropriate subset of weights on a per-input basis instead of removing them.
3) Pruning is usually concerned with **model compression**, while dynamic inference mostly focuses on **inference acceleration**. The two objectives are not always correlated; in particular, unstructured pruning fails to provide any speedups on GPUs in practice due to the lack of proper hardware support for sparse matrix multiplications [1, 2].
It has already been shown that combining weight sparsity and activation sparsity can lead to higher speedups than with weight sparsity alone [2]. Since conditional computation methods such as MoE can be seen as a form of structured activation sparsity [1], we consider our method and weight pruning as complementary.
Crucially, we highlight that we do explore the relationship of our method to pruning in our paper. In Section 5.6 we perform an experiment that tests our method for compatibility with pruning, where we demonstrate that our method indeed improves the performance of a network pruned by the CoFi structured pruning method [9].
> Questionable novelty of Dynamic-k gating: (...)
> Limited novelty and inadequate discussion of related work: (...)
We would like to gently disagree with the reviewer with regard to the lack of novelty of our method and point out that other reviewers did not raise similar concerns and even praised the novelty of our work (reviewer uQnr). To the best of our knowledge, we were the first to propose MoE routing based on the norms of the outputs of the experts. Our expert contribution routing is conceptually consistent, significantly improves performance (Figure 6a), and is not dependent on ReLU (Figure 6c). The use of the ℓ2-norm (or any norm) is widespread in the machine learning literature (e.g., adversarial examples [3], knowledge distillation [4], normalization [5]), and we do not see any connection to pruning in particular.
Please also note that expert contribution routing and dynamic-k gating are separate contributions (Figure 6a). We show that static top-k gating is inappropriate for MoEs converted from dense models (Figure 2b), a setup first proposed by Zhang et al. [6]. Similarly, we are the first to show the existence of this problem, and we propose dynamic-k gating as a remedy - that in our view also has no connection to the pruning literature.
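To make the two contributions concrete, here is a schematic NumPy sketch of how they compose in a forward pass. The function names, the router interface, and the relative-threshold selection rule are our illustrative assumptions for exposition, not the paper's exact implementation.

```python
import numpy as np

def dynamic_k_forward(x, experts, router, tau=0.1):
    """Illustrative dynamic-k MoE forward pass for one token.

    `router` predicts the l2-norm of each expert's output ("expert
    contribution routing").  Only experts whose predicted contribution
    exceeds a fraction `tau` of the largest one are executed
    ("dynamic-k gating"), so the number of executed experts varies
    per input instead of being a fixed top-k.
    """
    predicted_norms = router(x)                  # shape: (num_experts,)
    keep = predicted_norms >= tau * predicted_norms.max()
    out = np.zeros_like(x)
    for i in np.flatnonzero(keep):               # run only selected experts
        out += experts[i](x)
    return out, int(keep.sum())
```

The key point is that the selection depends on the input (activation sparsity), not on a fixed subset of weights as in pruning.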
Following the discussion with the reviewer, in addition to Section 5.6, we add the discussion on pruning and its relation to our method to the paper to better highlight the differences for the readers.
> Theoretical foundation: (...)
While we acknowledge the importance of strong theoretical foundations in machine learning, we focus on empirical evaluation as done in works similar to ours [2,6,7,8].
> Lack of practical acceleration results: (...)
We address this concern in the joint response to all reviewers and in the rebuttal PDF in Figures 1 and 2, where we show our wall-clock time measurement results.
We would like to thank the reviewer again for the time spent reviewing our work. We hope that our answers resolve the reviewer’s concerns, and we are open to further discussion in case of any further questions. We also kindly ask the reviewer to reassess our work and adjust the score accordingly, considering our answers and the additional rebuttal experiments.
##### References
[1] Hoefler, Torsten, et al. "Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks." Journal of Machine Learning Research 22.241 (2021): 1-124.
[2] Hunter, Kevin, Lawrence Spracklen, and Subutai Ahmad. "Two sparsities are better than one: unlocking the performance benefits of sparse–sparse networks." Neuromorphic Computing and Engineering 2.3 (2022): 034004.
[3] Costa, Joana C., et al. "How deep learning sees the world: A survey on adversarial attacks & defenses." IEEE Access (2024).
[4] Gou, Jianping, et al. "Knowledge distillation: A survey." International Journal of Computer Vision 129.6 (2021): 1789-1819.
[5] Hoffer, Elad, et al. "Norm matters: efficient and accurate normalization schemes in deep networks." Advances in Neural Information Processing Systems 31 (2018).
[6] Zhang, Zhengyan, et al. "Moefication: Transformer feed-forward layers are mixtures of experts." arXiv preprint arXiv:2110.01786 (2021).
[7] Shazeer, Noam, et al. "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer." arXiv preprint arXiv:1701.06538 (2017).
[8] Han, Yizeng, et al. "Dynamic neural networks: A survey." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.11 (2021): 7436-7456.
[9] Xia, Mengzhou, Zexuan Zhong, and Danqi Chen. "Structured Pruning Learns Compact and Accurate Models." Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022.
---
Rebuttal Comment 1.1:
Title: About dynamic inference method
Comment: The authors highlight that their work is a dynamic inference approach, so are there any comparisons with previous works on dynamic pruning?
There are also more advanced approaches to dynamic inference, such as dynamic token sparsity, etc. This work seems to have no advantage against these approaches
---
Rebuttal 2:
Comment: We thank the reviewer for the comments. We understand that we have addressed all of the other concerns.
>There are also more advanced approaches to dynamic inference, such as dynamic token sparsity, etc. This work seems to have no advantage against these approaches
We refer the reviewer to Figure 3 of the rebuttal PDF, where we add A-ViT [1], a dynamic token sparsity method, as a baseline for the response to reviewer 85BP. The results show that we outperform A-ViT.
>The authors highlight that their work is a dynamic inference approach, so are there any comparisons with previous works on dynamic pruning?
According to a recent survey [2], at least two groups of works use the term “dynamic pruning”.
Dynamic pruning approaches such as [3, 4, 5, 6] differ from static pruning because they allow the network structure to change by pruning or reactivating connections during training. The resulting model is static and therefore should complement dynamic inference methods such as ours (as shown in Section 5.6).
A small subset of dynamic inference works do use the “dynamic pruning” term for methods that dynamically select channels for execution in convolutional layers [7, 8, 9]. However, they all target exclusively CNNs, while our work focuses on Transformer models.
##### References
[1] Yin, Hongxu, et al. "A-vit: Adaptive tokens for efficient vision transformer." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] He, Yang, and Lingao Xiao. "Structured pruning for deep convolutional neural networks: A survey." IEEE transactions on pattern analysis and machine intelligence (2023).
[3] Lin, Tao, et al. "Dynamic Model Pruning with Feedback." ICLR-International Conference on Learning Representations. 2020.
[4] Ruan, Xiaofeng, et al. "DPFPS: Dynamic and progressive filter pruning for compressing convolutional neural networks from scratch." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 3. 2021.
[5] Lin, Shaohui, et al. "Accelerating Convolutional Networks via Global & Dynamic Filter Pruning." IJCAI. Vol. 2. No. 7. 2018.
[6] Chen, Zhiqiang, et al. "Dynamical channel pruning by conditional accuracy change for deep neural networks." IEEE transactions on neural networks and learning systems 32.2 (2020): 799-813.
[7] Elkerdawy, Sara, et al. "Fire together wire together: A dynamic pruning approach with self-supervised mask prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[8] Gao, Xitong, et al. "Dynamic Channel Pruning: Feature Boosting and Suppression." International Conference on Learning Representations.
[9] Lin, Ji, et al. "Runtime neural pruning." Advances in neural information processing systems 30 (2017). | null | null | null | null | null | null |
Aggregating Quantitative Relative Judgments: From Social Choice to Ranking Prediction | Accept (poster) | Summary: NOTE: I have reviewed a previous version of this paper submitted to AAAI 2024. The review here is an updated version of that review reflecting the changes in the paper.
==============================
This paper explores the problem of comparing numerical judgments against each other in order to arrive at a scoring of candidates, e.g., comparing the scores of competitors against one another, possibly when some scores are missing. The model of Quantitative Relative Judgment Aggregation accepts a set of tuples (a, b, y) which indicate that a is better than b by y units. The goal of the framework is a score for each candidate such that the difference between the scores of any two candidates is close to the reported difference between those candidates.
The paper narrows the framework to l_p QRJA, which uses a more restricted loss function in order to leverage known complexity results. They show various complexity results as p changes, notably that l_p QRJA can be solved in near-linear time with bounded error for p >= 1 and that the problem is NP-hard when p < 1. Experiments on several datasets show that using p = 1 and p = 2 leads to higher accuracy and lower loss than a number of other comparable methods.
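In symbols (notation ours, reconstructed from the description above), l_p QRJA seeks a score vector x minimizing the total discrepancy between score differences and the reported judgments:

```latex
\min_{x \in \mathbb{R}^n} \; \sum_{(a,\,b,\,y)} \bigl|\,(x_a - x_b) - y\,\bigr|^{p}
```

For p = 2 this is an ordinary least-squares problem, while p = 1 gives a robust, outlier-tolerant variant; the hardness result concerns the non-convex regime p < 1.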
Strengths: Novelty
The paper appears to extend an existing framework. The results expressed in the paper and the evaluation of their framework are novel while the methodology used is standard.
Quality
The paper presents several theoretical results which appear quite plausible. I am not well equipped to thoroughly check the proof of Theorem 1 but it appears to me to be quite reasonable.
The experimental results compare the developed framework across multiple datasets with several other methodologies that can be used to solve the same problem. Some of the other methodologies are more well-suited to the task than others but overall this comparison is substantial and provides a useful overview of the performance of l_p QRJA.
Clarity
The paper is well written in general. The introduction and motivating examples provide excellent intuition as to the problem domain. The framework and theory are fairly clearly described; I do feel as though the significance of the results could be highlighted more strongly but overall the paper is quite understandable.
Significance
The main result of the paper is theory showing the complexity of their l_p QRJA framework under varying conditions, backed by simulations verifying the performance of the framework. The paper is a meaningful improvement to existing work which appears roughly as impactful as most papers in top-tier conferences. There is certainly room for future work to develop based on this or for this work to be adapted into an algorithm for deployed use.
Weaknesses: See previously written and lightly edited section above.
Technical Quality: 3
Clarity: 3
Questions for Authors: Have you re-run the experiments since the previous submission that I reviewed? I ask because the numbers in Figure 4 are quite similar but do not match exactly the previous version. I cannot figure out why 2 weeks of server time would have been spent re-running the same (?) experiments previously reported.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have done a good job of filling out the checklist and have addressed the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition and thoughtful comments.
We are grateful that you updated your review reflecting the changes in our paper. Compared with the AAAI 2024 version, we significantly improved our work. Notably, we strengthened our theoretical results: our algorithm was improved from polynomial time to almost linear time. In addition, we reorganized the paper structure, re-ran some experiments, and addressed other reviewer comments.
**On your question**
We have re-run the experiments. This is because one of the AAAI 2024 reviewers suggested that we should “consider the average of the square of the error of the predictions”, i.e., an $\ell_2$ version of the quantitative loss we used. Therefore, in the subsequent revision of our work, we re-ran the experiments to incorporate this metric. See Appendix C.1 for the details.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! | Summary: This work defines and studies the "Quantitative Relative Judgment Aggregation" problem, which involves asking a set of judge agents to predict the performance of a set of "competitor agents" in some kind of content (e.g., a race). This task is related to recsys style collaborative filtering, ranking systems in games, and other models of preference-over-items, but the work argues this particular task is distinct and should be theoretically analyzed as a distinct setting. The paper provides several theorems about the computational complexity of QRJA and experiments using real competition data with a backtesting-style evaluation.
Strengths: This paper has a number of strengths. Below, they are briefly listed. I'll also note that while the overall length of the "Weaknesses" section in this review is longer than "Strengths", overall the potential contribution here is very strong.
- Theoretical analysis of computational complexity for a new problem (variant of previous problem) + experiments to back this up.
- The paper includes several novel aspects (updating task and framing within an existing literature on social choice, the actual results)
- Code is shared, documented, and useful in providing additional clarity about experiments.
- Relatively clear paper overall. The actual presentation of model, code (supplementary materials) + experiments are all extremely clear. The explanation and presentation of Theorems 1 and 2 were reasonably clear (familiar with some of the cited social choice work, but had to refer to the provided previous work to understand the motivation for different loss functions f -- this could be a bit clearer).
In terms of significance, this work has the potential to be broadly relevant to many areas where social choice theory is relevant (i.e., many domains, as the current draft points out) and also will be of interest to readers who are familiar with this specific literature, the computational complexity results of similar methods, the implementation, etc.
Weaknesses: One minor point regarding the interpretation of experimental results: the specific claim that QRJA is inherently more interpretable than MF was made a bit briefly and might benefit from additional remarks. My initial interpretation of the experimental results was: it seems like the new method introduced in the paper is about as good as MF, and has potential explainability benefits because we can walk through / "print out" the aggregation steps that took place and explain how a QRJA-using system came to the conclusion that Alice is faster than Bob. However, it seems possible that in the realm of MF one can apply various results from explainability in recommender systems (including post-hoc methods). Concretely, I think just a brief expansion of this point would be helpful, especially regarding the extent to which there's a core modelling limitation with the latent factors approach vs. a practical problem of 'needing a large embedding space to make MF good for these datasets'.
From the perspective of someone working on social choice-related theory or experimental work, I think this paper will be a strong contribution. However, one potential high-level weakness with this paper for a general audience is that readers may have some trouble understanding the general motivation for this problem, because of the mixed examples (trying to use "judgments" which are actually sensor readings from e.g. a relatively objective "race" to estimate a physical parameter of something like a car vs. trying to aggregate human judgments). While I think after spending some time with the paper, I get what the current draft is going for (something along the lines of "social choice theory is useful in the context of these subjective aggregation problems in which some people will be weighted more or less, and is ALSO useful in estimating real 'physical' models"), I think this may not be totally clear.
Section 2 is helpful in this regard, but I think it could be useful to just generally clarify if the first-order goal here is to use this method to model "true rankings".
Technical Quality: 4
Clarity: 3
Questions for Authors: Summarizing some of the points above into questions:
- Can you further clarify the explainability benefits of your approach vs MF, perhaps drawing on the provide "Examples" section
- Is it possible to provide additional clarity about the primary motivation for the work (estimating "true" / "physical" values vs. contributing to work on aggregating subjective values)
- Minor: tying the choice of loss functions back to the provided examples and motivation could make the results more exciting for a broad audience / general ML audience.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The current draft is reasonable in its claims: the theoretical analyses and experiments provided are commensurate to the claimed contributions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition and thoughtful comments.
We are glad that you find our contribution novel and strong, and our presentation extremely clear. We are particularly delighted to see you noted and recognized our documented code for the experiments. Below, we respond to the questions raised in your review.
**On the first question**
QRJA and Matrix Factorization (MF) are both optimization-based methods for contest ranking prediction. QRJA can have explainability benefits over MF because its variables have clear intuitive meanings: they can be interpreted as the strength of each contestant. In contrast, the variables in MF are latent features of contests and contestants, which can be harder to interpret. We will expand the discussion of this in our paper.
**On the second question**
The general formulation of QRJA is motivated by the common need to aggregate quantitative relative “judgments”. These judgments are often subjective opinions from human judges, e.g., “using 1 unit of gasoline is as bad as creating 3 units of landfill trash”. Our modeling and theoretical study of QRJA is motivated by the need to aggregate such judgments.
Moreover, we observe that these relative “judgments” can also be produced by an objective process, like a contest, rather than human judges. This gives us the opportunity to explore the interplay between social choice theory and the learning-to-rank community. Therefore, we apply the QRJA rules to the problem of contest ranking prediction for further study.
We will reorganize the introduction to provide additional clarity about the motivation.
**On the third question**
Thank you for the constructive comments.
---
Rebuttal Comment 1.1:
Comment: Thanks for these responses. This helped to clarify my minor questions about the paper, and I stand by my original positive recommendation. | Summary: The paper studies Quantitative Relative Judgment Aggregation, in which the goal is to learn a quantitative score for a group of alternatives that aligns with a series of pairwise quantitative differences between alternatives as closely as possible. They extend the result in [Conitzer et al, 2016] from a linear loss function to higher-order loss functions and show the NP-hardness of finding an optimal score when the order $p < 1$. They also test their algorithms on real-world racing data and show their superiority over previous methods.
Strengths: + The QRJA problem is indeed interesting and important.
+ The theoretical results are sound and non-trivial.
+ A clear example demonstrates why simple methods do not work.
Weaknesses: - The relationship to previous work. I don't quite see how this paper conceptually distinguishes itself from [Conitzer et al, 2016].
The two papers study similar problems under similar settings. In this case, I would rather grant the conceptual contribution of "social-choice-motivated solution concepts to the problem of ranking prediction" to their paper. Your paper indeed has its own conceptual contribution compared to the previous work, but the current claim is not precise enough.
- The motivation of the race. I feel that the whole introduction emphasizes that the paper focuses on race scenarios, yet multiple questions are unanswered: For example, why is a race a reasonable representation of judgments? What are the differences from traditional scenarios that warrant a separate study? What are the real-world applications?
- The proofs and the sketches are hard to follow with too many skips and magic numbers.
Technical Quality: 4
Clarity: 2
Questions for Authors: 1. Please distinguish your paper from previous work, especially how your work is different from [Conitzer et al, 2016] and [Conitzer et al, 2015] conceptually and technically.
2. Please answer my question related to the motivation of the race in the weakness section.
3. Please specify what simple algorithm you apply for $l_2$ norm in the experiment.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments.
We are glad that you find our problem interesting and important, and our theoretical results sound and non-trivial. Below, we respond to the questions raised in your review.
**On the first question**
Conitzer et al., 2015 [1] is a short visionary paper that proposes the abstract problem of setting numerical standards for societal tradeoffs such as “using 1 unit of gasoline is as bad as creating 3 units of landfill trash”. Conitzer et al., 2016 [2] axiomatically characterize a specific aggregation rule for the societal tradeoffs problem, which is mathematically equivalent to QRJA with loss function $f(x) = x$. Zhang et al., 2019 [3] study the computational complexity of the specific aggregation rule characterized in [2].
Conceptually, [1], [2] and [3] are all confined to computational social choice, since the societal tradeoffs problem was originally motivated by setting numerical tradeoff standards. In our work, the “relative judgments” to be aggregated can be those subjective judgments as in [1], [2] and [3], but they can also be produced by an objective process, like a race, rather than human judges. This gives us the opportunity to explore the interplay between social choice theory and the learning-to-rank community. In this sense, we "apply social-choice-motivated solution concepts to the problem of ranking prediction".
On the technical level, only [3] studies computational problems related to our work. However, the techniques used by [3] and our work are fundamentally different. [3] takes a linear programming-based approach, while we need to use convex optimization techniques. We present a non-trivial reduction to the convex minimum cost flow problem and utilize recent advancements for this problem [4] to solve $\ell_p$ QRJA when $p \geq 1$. To complement our results, we show that when $p < 1$, $\ell_p$ QRJA is NP-hard by reducing from Max-Cut. None of these techniques are present in [3].
**On the second question**
We observe that the “relative judgments” can also be produced by an objective process, like races. In this sense, races are instances of judgments, rather than the representation of judgments.
This specific instance of judgments is worth a separate (empirical) study because of its conceptual value of bridging the social choice and the learning-to-rank communities. That being said, our QRJA model and theoretical results are not confined to races. These contributions are also valuable within the social choice community.
One direct real-world application is assigning contest ratings to a set of contestants, as illustrated in Section 2 of our paper. Besides that, our work can also be applied to the use cases of prior works on societal tradeoffs \[1-3\] like setting numerical tradeoff standards.
**On the third question**
$\\ell\_2$ QRJA is reducible to $\\ell\_2$-regression, which is often referred to as the linear least-squares regression problem. In our experiments, we use the SciPy routine `scipy.sparse.linalg.lsqr` to solve it (see Line 120 of the uploaded `code/qrja.py` in supplementary materials). We will clarify this.
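For illustration, here is a minimal sketch of this reduction on toy judgments (the weighting scheme and variable names are ours, not the paper's code; each row of the sparse matrix carries a $+\sqrt{w_i}$ and a $-\sqrt{w_i}$ entry so that squared residuals are weighted by $w_i$):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Toy judgments (a, b, y, w): "candidate a beat candidate b by margin y", with weight w.
judgments = [(0, 1, 1.0, 1.0), (1, 2, 2.0, 1.0), (0, 2, 3.5, 2.0)]
n = 3  # number of candidates

rows, cols, vals, z = [], [], [], []
for i, (a, b, y, w) in enumerate(judgments):
    s = np.sqrt(w)  # sqrt-weight each row so the squared residual carries weight w
    rows += [i, i]
    cols += [a, b]
    vals += [s, -s]  # two nonzero entries per row, summing to 0 (up to the weight)
    z.append(s * y)
A = csr_matrix((vals, (rows, cols)), shape=(len(judgments), n))

# Minimize ||Ax - z||_2, i.e. sum_i w_i (x_{a_i} - x_{b_i} - y_i)^2.
x = lsqr(A, np.array(z))[0]
```

Only score differences are identified (adding a constant to every entry of $x$ leaves the objective unchanged), so one typically reports differences or fixes a normalization.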
**References**
\[1\] Conitzer, V.; Brill, M.; and Freeman, R. 2015\. Crowdsourcing societal tradeoffs. In Proc. of the 14th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS).
\[2\] Conitzer, V.; Freeman, R.; Brill, M.; and Li, Y. 2016\. Rules for choosing societal tradeoffs. In Proc. of the 30th AAAI Conference on Artificial Intelligence (AAAI).
\[3\] Zhang, H.; Cheng, Y.; and Conitzer, V. 2019\. A better algorithm for societal tradeoffs. In Proc. of the 33rd AAAI Conference on Artificial Intelligence (AAAI).
\[4\] Chen, L.; Kyng, R.; Liu, Y. P.; Peng, R.; Gutenberg, M. P.; and Sachdeva, S. 2022\. Maximum flow and minimum-cost flow in almost-linear time. In Proc. of the IEEE 63rd Symposium on Foundations of Computer Science (FOCS).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have one additional question related to your response on question 1.
In the second paragraph, you state that "(relative judgments to be aggregated) can also be produced by an objective process, like a race, rather than human judges" and then "This gives us the opportunity to explore the interplay between social choice theory and the learning-to-rank community". Could you elaborate a bit more on how the first statement logically leads to the second? Is it because subjective human judgments do not fit into learning-to-rank problems? Or are there other reasons?
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply.
Generally speaking, in social choice, people typically consider subjective opinions and judgments as inputs, e.g., voting. Conversely, in the learning-to-rank community, the inputs are usually more objective, e.g., ranking web pages in response to a search query. The difference in subjectiveness has led to the difference in the focus of these two communities. For example, the learning-to-rank community is less concerned about strategic aspects of voting. By considering a social-choice-motivated problem with objective inputs typically seen in the learning-to-rank community, we explore the interplay between them. | Summary: The paper generalizes the quantitative relative judgments framework of Conitzer et al., which aims to aggregate judgments on the relative quality of different candidates from various sources, and applies it to a learning-to-rank setting. The authors introduce a new aggregation rule, QRJA, which assigns a vector $x_1,\dots,x_n$ minimizing the error
$$\sum_i w_i f(|x_{a_i}-x_{b_i}-y_i|)$$
over a collection of quantitative relative judgments $J_i = (a_i,b_i,y_i)$, where $f$ is an increasing function.
The paper mostly focuses on $f(t)$ being a monomial $t^p$. They provide a near-linear time algorithm to solve the above optimization problem when $p\ge 1$, and NP-hardness results when $p<1$. Empirically, the paper validates the effectiveness of QRJA-based methods against several simple baselines (mean, median, Borda rule, matrix factorization, and the Kemeny-Young rule) on real-world datasets, including chess, Formula 1, marathons, and programming contests.
Strengths: +) The combination of social choice and ranking prediction is pretty neat.
+) The dichotomy results on the degree of the monomial are intuitive and interesting.
Weaknesses: 1. The technical writing is not very friendly or informative.
- It seems the optimization problem can be reduced to a sparse $p$-norm regression problem. Is there an off-the-shelf method to solve it? Why is it necessary to consider the dual program?
- The proof of Lemma 1 is not clear to me. Which results in Chen et al. 2022 are used to solve Equation (4) (e.g., Theorem 10.14)? Moreover, the authors should provide an explicit statement of the reduction.
2. I feel the empirical results should be compared to more relevant literature.
- Given the long history of rank aggregation and rating systems, I feel the evaluation in the paper is insufficient. A rating system is an algorithm that adjusts a player's rating upwards after each win, or downwards after each loss. Some notable rating systems used in practice include Harkness, Elo, Glicko, Sonas, TrueSkill, URS, and more. As ordinal accuracy in the paper is also derived from pairwise comparisons, it seems reasonable to apply Elo and other methods.
- Another related area may be item response theory, where contestants have expertise and events have difficulty.
3. The presentation should be more cogent
- It is not clear to me how the model conceptually addresses issues in those motivating examples (Examples 1 to 3).
- The empirical results only test QRJA under $p = 1,2$, and provide little discussion on the choice of $p$. Note that standard gradient descent should solve the optimization problem for all $p\ge 1$.
Technical Quality: 3
Clarity: 2
Questions for Authors: Can you address the first point in the weaknesses, regarding Theorem 1?
- why dual program
- Limitation of using $p$-norm regression
- Reduction to Chen et al. 2022
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper addresses the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition and thoughtful comments.
We are glad that you find the topic of our work neat and our theoretical dichotomy results intuitive and interesting. Below, we respond to the questions raised in your review.
**On the first point in Weaknesses**
$\\ell\_p$ QRJA is indeed a special case of sparse $p$-norm regression, but it has additional structure: for $\\ell\_p$ QRJA, the matrix $A$ in $\\min \\|Ax-z\\|\_p$ has exactly two nonzero entries per row and they sum to $0$. This is crucial because it allows the dual of $\\ell\_p$ QRJA to correspond to a flow problem.
Without this additional structure, applying the state-of-the-art algorithm for sparse $p$-norm regression \[1\] would result in an $\\Omega(m + n^{\\omega})$ runtime for $\\ell\_p$ QRJA, where $\\omega \\geq 2$ is the matrix multiplication exponent. This is significantly slower than almost linear time.
With this additional structure, we can solve $\\ell\_p$ QRJA in almost-linear time using Theorem 10.13 of \[2\]: In Equation (4), one can view each entry $f\_e$ in $\\mathbf{f}$ as the directed flow on an edge $e$, and the optimization constraints as flow conservation constraints. For edge $e$, its contribution to the total cost is $|f\_e|^q - z\_e f\_e$, and the total cost is edge-separable and convex. This allows us to use Theorem 10.13 of \[2\].
We will clarify this.
**On the second point in Weaknesses**
We agree that many other approaches deserve mention in this context, including Elo and other methods. In our work, we had to select a subset of these methods to compare with QRJA. We chose Mean and Median due to their straightforwardness, Borda and Kemeny-Young from the social choice literature, and Matrix Factorization from the machine learning literature.
**On the third point in Weaknesses**
QRJA addresses issues in the motivating examples by considering the relative performance of contestants rather than their absolute performance in each contest. More specifically:
* Example 1: Even if a contestant only participates in “easy” races, QRJA can better avoid over-rating their strength by using the relative performance data of other contestants in those races.
* Example 2: If past data shows that Alice consistently beats Bob, and Bob consistently beats Charlie, the QRJA model will predict that Alice runs faster than Charlie, as it tends to be consistent with previous relative judgments.
* Example 3: Although the Boston race indicates that Charlie is slightly faster than Bob, the other two races suggest that Bob is faster than Charlie by a large margin. To minimize inconsistencies across all judgments, QRJA will predict that Bob is faster than Charlie.
We focus on $\\ell\_1$ and $\\ell\_2$ QRJA because the almost-linear time algorithm for general values of $p \\geq 1$ relies on galactic algorithms for $\\ell\_p$ norm mincost flow \[2\]. While standard gradient descent works for general values of $p$, its running time is too slow on large datasets like Codeforces and Cross-Tables. We acknowledged this in the conclusion: “An interesting avenue for future work would be to develop fast (e.g., nearly-linear time) algorithms for $\\ell\_p$ QRJA with $p \\neq 1, 2$ that are more practical, and evaluate their empirical performance.” We will further clarify this.
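For concreteness, here is a plain gradient-descent sketch for the $\ell_p$ objective with $p > 1$ (toy data and a hand-picked step size of our choosing; this is the slow-but-general baseline discussed above, not the almost-linear-time algorithm):

```python
import numpy as np

def lp_qrja_gd(judgments, n, p=1.5, lr=0.05, iters=5000):
    """Minimize sum_i w_i * |x_a - x_b - y|^p by gradient descent (requires p > 1)."""
    x = np.zeros(n)
    for _ in range(iters):
        g = np.zeros(n)
        for a, b, y, w in judgments:
            r = x[a] - x[b] - y
            d = w * p * np.abs(r) ** (p - 1) * np.sign(r)  # d/dr of w * |r|^p
            g[a] += d
            g[b] -= d
        x -= lr * g
    return x - x.mean()  # fix the additive degree of freedom

# Toy judgments (a, b, y, w): "candidate a beat candidate b by margin y", weight w.
judgments = [(0, 1, 1.0, 1.0), (1, 2, 2.0, 1.0), (0, 2, 3.5, 1.0)]
x = lp_qrja_gd(judgments, n=3)
```

Each iteration costs time linear in the number of judgments, but the iteration count needed for high accuracy is what makes this approach impractical on large datasets.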
**References**
\[1\] Bubeck, S.; Cohen, M. B.; Lee, Y. T.; and Li, Y. 2018\. An homotopy method for $\\ell\_p$ regression provably beyond self-concordance and in input-sparsity time. In Proc. of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC).
\[2\] Chen, L.; Kyng, R.; Liu, Y. P.; Peng, R.; Gutenberg, M. P.; and Sachdeva, S. 2022\. Maximum flow and minimum-cost flow in almost-linear time. In Proc. of the IEEE 63rd Symposium on Foundations of Computer Science (FOCS).
---
Rebuttal Comment 1.1:
Comment: Thanks for these responses. I maintain my original positive recommendation. | Rebuttal 1:
Rebuttal: We thank all reviewers for taking the time to read our paper and provide thoughtful comments.
We are delighted to learn that the reviewers find our topic “interesting and important” (Fgap), and the “combination of social choice and ranking prediction” “neat” (iMha). In addition, we are glad that reviewers judged our theoretical results to be “sound and non-trivial” (Fgap), our empirical evaluation “substantial” (Z9Qp), and our potential contribution “very strong” (zJAB).
Below we provide detailed responses to each reviewer’s questions. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
X-Ray: A Sequential 3D Representation For Generation | Accept (spotlight) | Summary: The paper presents X-Ray, a new 3D sequential representation inspired by the penetrating quality of X-ray scans. This technique converts a 3D object into surface frames at different layers, ideal for creating 3D models from images. Experimental findings show that the X-Ray approach outperforms existing methods in improving the precision of 3D generation, opening up new possibilities for research and real-world applications in 3D representation.
Strengths: * The motivation is interesting. Existing methods indeed cannot completely generate objects that include both visible and hidden surfaces, whereas with the proposed method the hidden interior of the object can be fully reconstructed.
* The compatibility of X-Ray data structures with sequential 3D representations in video formats opens up new opportunities for using video diffusion models in 3D generation.
* Great experimental results. The paper outperforms competitors by achieving the best results.
Weaknesses: * The author asserts that "General, accurate, and efficient 3D representations are three crucial requirements for 3D generation." However, the citation is missing or the author should provide evidence to support the claim that 3D representations need to be general, accurate, and efficient.
* The ablation study is insufficient. The author should conduct ablations on X-Ray frames, for example, by using video or image diffusion techniques, or by studying the impact of frame complexity.
* The author claims to have synthesized a complete interior mesh in Figure 1, but I did not observe any other results besides cars or analysis in the experiments.
* I think the method used is the X-ray edge method because the dataset quality does not allow for a complete X-ray scan. For instance, the 3D CAT data has empty spaces within.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The authors need to provide a clear explanation of X-ray and complete internal mesh, as some training data contain empty spaces within.
* The authors should conduct additional ablation studies.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors acknowledged limitations but did not address the potential negative societal impact of the proposed technology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer 5c6P
### Question 1: General, accurate, and efficient 3D representations
The author asserts that "General, accurate, and efficient 3D representations are three crucial requirements for 3D generation." However, the citation is missing or the author should provide evidence to support the claim that 3D representations need to be general, accurate, and efficient.
### Response 1
We are grateful for the reviewer's feedback on the missing citation. The statement "General, accurate, and efficient 3D representations are three crucial requirements for 3D generation" is based on the common understanding in the 3D generation community, and is cited from [1].
* General: A 3D representation should be able to represent a wide range of 3D objects, scenes, and shapes [2]. For example, voxel grids can represent any 3D shape, but they are computationally expensive.
* Accurate: A 3D representation should accurately capture the geometry, appearance, and structure of 3D objects. For example, point clouds are accurate representations that capture the exact 3D points of an object, but they lack connectivity information [3].
* Efficient: A 3D representation should be efficient in terms of memory usage, computational cost, and training time. For example, 3D Gaussian Splatting (3DGS) [4] is an efficient representation that can generate high-quality 3D shapes at low memory and computational cost.
### Question 2: Synthesizing a complete interior mesh
The author claims to have synthesized a complete interior mesh in Figure 1, but I did not observe any other results besides cars or analysis in the experiments.
### Response 2
We apologize for omitting a detailed description of the concept of the inside surface. More results in "Figure 5: Quantitative Comparison in Image-to-3D Generation" illustrate the effectiveness of our method in synthesizing a __complete interior mesh__. Here is the explanation:
* In the GSO dataset shown in Figure 5, our method is able to capture the inside surface of the shoe, while other methods fail to detect it and hide the welt;
* Also, the OmniObject3D dataset in Figure 5 showcases our method's ability to capture the inside surface of objects such as cupboards and bowls. Other methods fail to accurately predict the inside surface under the input view.
### Question 3: Complete Internal Mesh Dataset.
The authors need to provide a clear explanation of X-ray and complete internal mesh, as some training data contain empty spaces within.
### Response 3
For datasets obtained through 3D scanning using multi-view images or depth sensors, it is true that the inside of the objects appears empty. However, the Objaverse dataset contains a variety of 3D models with and without interior meshes, which is why we conducted experiments on this dataset. Our X-Ray method outperforms the state-of-the-art method TripoSR by a large margin on this dataset. We acknowledge the reviewer's concern regarding the difficulty of obtaining datasets with interior details. However, as 3D modeling techniques advance, we anticipate that future datasets will include more detailed 3D models containing inside meshes. Furthermore, we have found that indoor scenes and 3D CAD models are now accessible for training, providing abundant interior details that can be effectively learned by X-Ray. As a result, we are able to synthesize complete scene-level 3D rooms or buildings with intricate interior details. We will include the results on these datasets in the revised paper.
### Question 4: Ablation studies
The ablation study is insufficient. The author should conduct ablations on X-Ray frames, for example, by using video or image diffusion techniques, or by studying the impact of frame complexity.
### Response 4
* We appreciate the reviewer's suggestion for additional ablation studies. Due to limited computational resources, we were unable to conduct extensive ablation studies on arbitrary numbers of X-Ray frames. However, we previously selected 16 X-Ray frames to represent each 3D object and then found that using only 8 frames achieved very close performance.
* Additionally, we conducted an experiment to analyze the Encoding-Decoding Intrinsic Error, as shown in Figure 4 of the paper. This is why we chose to use 8 frames in our experiments. The previous experimental results are as follows:
|Method|CD(L1) ↓|FS@0.1 ↑|
|-|-|-|
|X-Ray w/ 16 frames|0.053|0.982|
|X-Ray w/ 8 frames|0.056|0.973|
* Besides, we attempted to use an image diffusion model as our backbone for lighter-weight computation. However, it did not produce satisfactory results, because the image diffusion model only considers 2D spatial relations and cannot effectively capture the relationships between different frames. Consequently, we made the decision to switch to an off-the-shelf Stable Video Diffusion model for generating X-Ray frames, which is better suited to the sequential generation problem in 3D reconstruction. The previous experimental results are as follows:
|Backbone|CD(L1) ↓|FS@0.1 ↑|
|-|-|-|
|Stable Diffusion|0.114|0.651|
|Stable Diffusion + Frame Attention|0.062|0.936|
|Stable Video Diffusion |__0.056__|__0.973__|
### Reference
[1] Xiaoyu Li, Qi Zhang, Di Kang, Weihao Cheng, Yiming Gao, Jingbo Zhang, Zhihao Liang, Jing Liao, Yan-Pei Cao, Ying Shan. "Advances in 3D Generation: A Survey". arXiv:2401.17807.
[2] Zhen Liu, Yao Feng, Yuliang Xiu, Weiyang Liu, Liam Paull, Michael J. Black, Bernhard Schölkopf. "Ghost on the Shell: An Expressive Representation of General 3D Shapes". ICLR 2024.
[3] Mescheder, Lars; Oechsle, Michael; Niemeyer, Michael; Nowozin, Sebastian; and Geiger, Andreas. "Occupancy Networks: Learning 3D Reconstruction in Function Space". CVPR 2019.
[4] Kerbl, Bernhard; Kopanas, Georgios; Leimkühler, Thomas; and Drettakis, George. "3D Gaussian Splatting for Real-Time Radiance Field Rendering". ACM Transactions on Graphics.
Strengths: (1) The task of single-view image to 3D mesh generation including interior surface of mesh is extremely challenging and well-motivated. As far as I know, this is the first paper that generates a mesh considering the interior surface.
(2) The idea of using ray casting to scan the interior of the mesh and convert it into video frames is novel. I really like this approach.
(3) The results show that the proposed method achieves SOTA performance on single image-to-3D generation and point cloud generation with a short inference time compared to existing baselines.
(4) The writing is easy to understand and the design choices are well presented.
Weaknesses: (1) Considering Computed Tomography, it is expected that the performance will significantly increase as the number of layer $L$ increases. However, the experiments analyzed in Section 5.2 show that performance improvements are very minimal when $L$ is larger than 8. If additional computing resources are available, it would be interesting to see the performance when the frame resolution is low, but $L$ is extremely high (not mandatory).
(2) The entire training pipeline is quite long and memory-intensive. It was conducted on 8 NVIDIA A100 GPU servers for a week.
Technical Quality: 4
Clarity: 4
Questions for Authors: (1) When training, it seems necessary to normalize the object so that the rays from a specific camera position can capture the entire object, making it easier to train. I'm curious whether this preprocessing step was applied to the Objaverse mesh. If this process was included, it would be beneficial to add this information to the paper.
(2) How much GPU memory is required during inference?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors mention the limitations related to the number of sequence layers and the missing parts when X-Ray is truncated, as discussed in Section 6 and Appendix A.5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer wtWj
We appreciate the reviewer's highly positive rating to our paper! We address the reviewer's further concerns and questions in the following responses.
### Questions 1: Performance of Computed Tomography
Considering Computed Tomography, it is expected that the performance will significantly increase as the number of layers L increases. However, the experiments analyzed in Section 5.2 show that performance improvements are very minimal when L is larger than 8. If additional computing resources are available, it would be interesting to see the performance when the frame resolution is low, but L is extremely high (not mandatory).
### Response 1
We understand the reviewer's concern about the performance when using all frames without omitting any surface. Indeed, conducting experiments with extremely high layer counts using diffusion models is not feasible due to GPU memory limitations. However, this issue can be resolved via an autoregressive large language model (LLM) that generates a varying number of surfaces for each ray, ranging from 0 to even 100+, depending on the actual surface layers along each ray. With an LLM, we can flexibly model both indoor and outdoor scenes using X-Ray. We plan to release this exciting research in the future!
### Questions 2: Training Resources
The entire training pipeline is quite long and memory-intensive. It was conducted on 8 NVIDIA A100 GPU servers for a week.
### Response 2
We appreciate the reviewer's concern about the training pipeline. The training pipeline is relatively long and memory-intensive, which is a common issue in training large-scale 3D models. For example, the previous state-of-the-art method, Open-LRM, trained its model on 64 NVIDIA V100 GPUs for 5-6 days. Given our limited GPU resources, we made efforts to optimize and minimize the computational requirements as much as possible.
### Questions 3: Normalization
When training, it seems necessary to normalize the object so that the rays from a specific camera position can capture the entire object, making it easier to train. I'm curious whether this preprocessing step was applied to the Objaverse mesh. If this process was included, it would be beneficial to add this information to the paper.
### Response 3
The reviewer is correct. We did apply a preprocessing step to normalize the object, ensuring that the rays from a specific camera position can capture the entire object. The normalization code for arbitrary 3D objects is as follows:
```python
import bpy  # Blender Python API

def normalize_object():
    # `scene_bbox` and `scene_root_objects` are helper functions defined
    # elsewhere in our preprocessing script: they return the scene's
    # axis-aligned bounding box and iterate over its root objects.
    bbox_min, bbox_max = scene_bbox()
    # Scale so that the longest side of the bounding box has unit length.
    scale = 1 / max(bbox_max - bbox_min)
    for obj in scene_root_objects():
        obj.scale = obj.scale * scale
    # Apply the scale to matrix_world before recomputing the bounding box.
    bpy.context.view_layer.update()
    bbox_min, bbox_max = scene_bbox()
    # Translate so that the bounding box is centered at the origin.
    offset = -(bbox_min + bbox_max) / 2
    for obj in scene_root_objects():
        obj.matrix_world.translation += offset
    bpy.ops.object.select_all(action="DESELECT")
```
### Questions 4: GPU Memory
How much GPU memory is required during inference?
### Response 4
During inference, the required GPU memory is 4.8 GB for the X-Ray diffusion model and 2.5 GB for the X-Ray Upsampler. Thank you for the question; we will include this information in the revised paper.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the authors' response. I have read it as well as other reviews.
I believe this paper presents a new problem (single image to mesh generation including interior surface) that the academic community has not yet solved, and I consider it the first paper to suggest a new direction for research. Additionally, the proposed method (using ray casting + video diffusion model) is novel and has been proven through experiments to be superior to existing methods, which I believe can serve as a good reference for future research.
Regarding the concern about the number of $L$, which I and other reviewers were worried about, the authors claimed to address it through an autoregressive Large-Language Model (LLM), but this part is not explained in detail, so it's hard to understand. However, given the current limitations in computing resources, the performance is sufficiently guaranteed even with $L=8$, to the extent that it surpasses other baselines, so I don’t think this is a major issue.
In the revised version, it would be good to make revisions to address the concerns of the other reviewers, and particularly, I strongly recommend adding a section on the efficiency of the X-Ray representation (Rebuttal to Reviewer xHzv, Response 2). Since a new representation has been proposed, if a comparison with other representations is organized in a table, readers will be able to easily grasp the efficiency of the X-Ray representation at a glance.
I appreciate the additional experiments conducted for the rebuttal, and for the reasons mentioned above, I will maintain my previous rating of 9.
---
Reply to Comment 1.1.1:
Comment: Thank you once again for recognizing the value of our work and maintaining the rating of 9, which is highly encouraging for us and our future efforts to advance X-Ray representation in the 3D domain! We have carefully considered the reviewers’ comments and will add an experimental section on the efficiency of the X-Ray representation to enhance clarity. As mentioned in our response to Reviewer xHzv, we compare it with other 3D representations, such as Voxels, point clouds, and multi-view depths, to make it easier for readers to grasp its efficiency. | Summary: The paper proposes a new representation that encodes a 3D mesh into the ray intersection points from a single view point. The position, color, and surface normal at the intersection points are stored as the representation. Poisson reconstruction is used to recover the 3D mesh from the representation. A cascaded diffusion model is trained using the extracted representation from 60k objaverse meshes. Evaluation is conducted on three other datasets.
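As one plausible illustration of this packing (our own reading, not the authors' code), variable-length per-ray hit lists can be stored in a fixed number of layers `L` with a hit-mask channel, padding rays with fewer hits and truncating rays with more:

```python
import numpy as np

L, H, W = 4, 2, 2  # layers, image height/width (toy sizes)

# Per-pixel hit depths from ray casting, sorted near-to-far; lengths vary per ray.
hits = {
    (0, 0): [0.8, 1.3],        # ray passes through two surfaces
    (0, 1): [0.9],             # one surface
    (1, 0): [],                # background ray, no hits
    (1, 1): [0.7, 1.1, 1.6],   # three surfaces
}

# Two channels per layer here: hit flag + depth (the paper also stores normal and color).
xray = np.zeros((L, 2, H, W), dtype=np.float32)
for (i, j), depths in hits.items():
    for l, d in enumerate(depths[:L]):  # truncate rays with more than L hits
        xray[l, 0, i, j] = 1.0          # hit mask: a surface exists at this layer
        xray[l, 1, i, j] = d            # depth of the l-th intersection
```

The resulting `L`-layer tensor can then be treated as a short video for a video diffusion model, with empty layers acting as padding.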
Strengths: The paper proposes a new representation, which is compact, efficient, and tailored to single image-to-3D applications. It demonstrates better performance than the other methods in the paper on the reported metrics. The visual quality of the results is also better. 3D generation is a very relevant topic and the paper will be interesting to many readers.
Weaknesses: 1. The paper does not discuss how to extend the proposed representation to include multiple viewpoints to provide better encoding/decoding quality. Even though the representation records the intersection points along a ray, the intersection points from front and top views would be very different.
2. One limitation of X-Ray is that the limited number of intersection points (8 in the paper) makes it difficult to encode complex shapes like hair and carefully designed meshes with full internal structure. For example, in a real engine, the number of surfaces a ray can pass through would be far more than 8 (e.g., the wires, the bores, the shafts). The paper mentions the limitation; however, it does not seem clear to me from the paper how the limitation can be addressed efficiently.
3. Another related limitation is the potentially nonuniform number of intersection points along different rays from the same viewpoint. For example, on a human head, the rays passing through the hair region would need a much higher number of intersections to avoid losing information than those through the face region. The nonuniformity extends across objects. To model one object that has a large number of intersection points, it might be necessary to increase the L used for all objects in order to learn a diffusion model. This property makes the representation inefficient.
4. In order to generate all interior structures of a 3D object, we need meshes that are carefully designed with interior details. This kind of dataset is difficult to get and if existed would increase L, making the method inefficient.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The proposed representation is related to the pointmap used in DUSt3R (CVPR2024). From my understanding, at a high level, it extends pointmaps to multiple points along a ray. To me, it is an interesting connection and can be mentioned in the paper.
2. The core of the representation is finding the intersection points and creating the tensor representation. Currently the paper only mentions that it calls a function in trimesh. I would suggest describing the operation in more detail -- for example, I do not know how the Hit is constructed. Does the method try to aggregate intersection points at similar depths at the same index along the L-dimension? Or is Hit simply a padding mechanism?
3. In Sec 5.3 quantitative comparison, Table 1 and Table 2 show metrics that evaluate the quality of the geometry. How is the photometric quality of the proposed method compared to other methods, e.g., PSNR or SSIM on the conditioning viewpoint (and other novel views)?
4. How significant is the effect of the predicted normal, and is it fair to compare rendering-based methods like LRM using the extracted meshes in Table 1? My understanding is the quality of the Poisson reconstruction can be affected heavily by the quality of the normal, which may be poorly modeled by volumetric density. Should the paper also show the same metrics directly on the point clouds or depth maps?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper mentions the limited number of intersection points. However, it does not seem clear how to address the limitation potentially without significantly increasing the number of L.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer EY92
### Question 1: Multiple viewpoints Extension.
How to extend the proposed representation to include multiple viewpoints to provide better encoding/decoding quality.
### Response 1
We aim to show the advantage of X-Ray by providing a simple baseline for single-view 3D reconstruction. While extending this to multiple viewpoints is interesting, it might not be necessary for this task. To address the reviewer's concern, we encoded 1000 randomly selected 3D meshes into X-Ray from different viewpoints, such as the front and top views, and then decoded them back into 3D meshes. The standard deviation of the reconstruction error across different views is negligible (CD < 1e-3). This demonstrates the robustness of X-Ray to the choice of viewpoint and its ability to reconstruct 3D meshes from any viewpoint. We will include this discussion in the revised paper.
### Question 2: Limitation of finite and nonuniform intersection points
One limitation of X-Ray is that the limited number of intersection points makes it difficult to encode complex shapes such as hair. Another limitation is that the nonuniform number of intersection points makes the representation inefficient.
### Response 2
* In the experimental results (Figure 4 (a)/(b)), using 8 or even 6 frames is sufficient to cover most of the surfaces, as most 3D models are not highly detailed. However, for complex shapes such as hair, this limitation can be addressed by increasing the number of frames to encode more intricate details.
* To further overcome the limitation of a nonuniform number of surface layers, an autoregressive large language model (LLM) can be employed as the generator to handle surfaces with a dynamic number of layers. This approach allows each ray to generate a varying number of surfaces, ranging from 0 to 100+, depending on its actual surface layers.
### Question 3: Dataset
A carefully designed dataset is difficult to obtain, and if one existed, it would increase L, making the method inefficient.
### Response 3
Obtaining a dataset with interior details is a challenge, as mentioned by the reviewer. However, we are optimistic about the increasing availability of fine-grained 3D datasets. With advancements in 3D modeling, more detailed 3D models will likely be included in future datasets. Additionally, we have access to indoor scenes and CAD models, which contain rich interior details that can be effectively learned by the X-Ray. This allows us to synthesize complete scene-level 3D rooms or buildings with intricate interior details. As described above, LLM is our next version of generator to handle surfaces with a dynamic number of layers, which will be more efficient in generating complex shapes.
### Questions 4: Related Work: DUSt3R
It would be interesting to connect to and mention DUSt3R (CVPR 2024).
### Response 4
We will mention the connection to the work of DUSt3R in our revised paper as it has made significant contributions to multi-view depth and pose estimation.
### Question 5: Representation Operation
Describe the representation operation in more detail.
### Response 5
We have described the process of obtaining X-Ray from a 3D mesh in the Appendix PDF and source code. The "hit" value is determined by checking if the depth is greater than zero. We first identify the intersected mesh face index and then query its properties as X-Ray. Below is the pseudocode we provided:
```python
import torch
from trimesh.ray.ray_pyembree import RayMeshIntersector

def ray_cast_mesh(mesh, ro, rd):
    intersector = RayMeshIntersector(mesh)
    # index of every face hit along each ray (multiple hits per ray)
    index_faces, _ = intersector.intersects_id(
        ray_origins=ro, ray_directions=rd, multiple_hits=True)
    return index_faces

def Mesh2XRay(mesh, Rt, K):
    # get camera center and ray direction for every pixel
    ros, rds = get_camera_ray(Rt, K)
    XRay = []
    for ro, rd in zip(ros, rds):
        index_faces = ray_cast_mesh(mesh, ro, rd)
        depth, normal, color = extract_features(mesh, index_faces)
        # a layer is "hit" wherever a valid (positive) depth exists
        hit = depth > 0
        xray = torch.cat([hit, depth, normal, color], dim=-1)
        XRay += [xray]
    XRay = torch.stack(XRay)  # (H, W, L, 8): 1 hit + 1 depth + 3 normal + 3 color
    return XRay
```
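Response 1 mentions decoding X-Rays back into 3D meshes; before meshing, each frame can be back-projected to a point cloud by inverting the ray-casting step. Below is a minimal NumPy sketch, not the authors' code: the channel layout (hit in channel 0, depth in channel 1) and the per-pixel ray arrays `ros`/`rds` are assumptions based on the pseudocode above.

```python
import numpy as np

def xray_to_points(xray, ros, rds):
    """Back-project an X-Ray tensor to a 3D point cloud.

    Hypothetical sketch: assumes xray has shape (H, W, L, 8) with
    channel 0 = hit and channel 1 = depth, and that ros/rds hold
    per-pixel ray origins and unit directions of shape (H, W, 3).
    """
    hit = xray[..., 0] > 0.5          # (H, W, L) boolean hit mask
    depth = xray[..., 1]              # (H, W, L) depth along each ray
    # point = origin + depth * direction, broadcast over the L layers
    pts = ros[:, :, None, :] + depth[..., None] * rds[:, :, None, :]
    return pts[hit]                   # (N, 3) points on hit surfaces

# toy check: one pixel with two surface layers along the +z ray
xray = np.zeros((1, 1, 2, 8))
xray[0, 0, 0, :2] = [1.0, 2.0]        # first hit at depth 2
xray[0, 0, 1, :2] = [1.0, 5.0]        # second hit at depth 5
ros = np.zeros((1, 1, 3))
rds = np.array([0.0, 0.0, 1.0]).reshape(1, 1, 3)
pts = xray_to_points(xray, ros, rds)
print(pts)  # two points at depths 2 and 5 along +z
```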
### Question 6: Photo-metric Quality
How is the photometric quality of the proposed method compared to other methods, e.g., PSNR or SSIM on the conditioning viewpoint (and other novel views)?
### Response 6
Novel view synthesis and single-view reconstruction are two distinct tasks, so evaluating the quality of 3D reconstruction with photometric metrics may not be appropriate. For example, methods like NeRF and 3DGS can achieve high photometric scores in novel-view synthesis, yet their extracted 3D shapes and textures often suffer from poor quality. Our focus is on accurately reconstructing the 3D ground-truth shape, which is what we evaluate. We appreciate the reviewer's observation regarding photometric quality. We will predict additional 3D Gaussian Splatting parameters to synthesize high-quality novel view images in the revised paper.
### Question 7: Normal Prediction
How significant is the effect of the predicted normal?
### Response 7
Point normals can be estimated from the point cloud, but determining their orientation (inside or outside the surface) is challenging. To solve this, we generate additional point cloud normals with consistent orientation. We tested the importance of normal prediction both for rendering-based methods like LRM and for our method. By fine-tuning OpenLRM with normal supervision, we found that adding ground-truth normals did not significantly improve its reconstruction metric, and it is still not comparable with ours. However, normal prediction is crucial for our method due to Poisson surface reconstruction. This ablation study will be included in the revised paper.
| Method | Chamfer Distance ↓ |
|--------------|----------|
| OpenLRM w/o normals | 0.143 |
| OpenLRM w/ normals | 0.138 (+3.5%) |
| X-Ray w/o normals | 0.067 |
| X-Ray w/ normals | 0.056 (+16.4%) |
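Response 7 notes that the hard part of normal estimation is choosing a consistent orientation before Poisson reconstruction. The paper's exact scheme is not shown in this exchange; the sketch below illustrates one common heuristic (flipping each normal toward the camera center) in NumPy, as an assumption rather than the authors' implementation.

```python
import numpy as np

def orient_normals_toward_camera(points, normals, cam_center):
    """Flip each normal so it points toward the camera center.

    Illustrative heuristic only: a normal is flipped when it faces away
    from the view direction, giving Poisson surface reconstruction a
    consistently oriented normal field.
    """
    to_cam = cam_center - points                  # (N, 3) view vectors
    flip = np.sum(normals * to_cam, axis=-1) < 0  # facing away from camera?
    normals = normals.copy()
    normals[flip] *= -1.0
    return normals

# toy check: two points on the plane z=0 seen from a camera at z=+1
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])  # second faces away
out = orient_normals_toward_camera(pts, nrm, np.array([0.0, 0.0, 1.0]))
print(out)  # both normals now point toward +z
```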
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the reply. From Response 1, does it mean that it is difficult to extend x-ray to few-view to 3d use cases? Though x-ray can represent simple meshes well, the constraint of single-view only seems to be an important characteristic of the proposed method that should be discussed in the paper. For example, few-view (e.g., 2-4) to 3d is also a frequently studied problem and common in practice.
For Response 6, I understand and agree with the description about the difference between single-view to 3d and multiview reconstruction. However, the PSNR value on the given single view is still an informative and meaningful metric even for single view to 3d task.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer EY92
Comment: ### We sincerely appreciate the reviewer's comments and the subsequent discussion. We will do our best to address the concerns raised.
### 1. Does it mean that it is difficult to extend x-ray to few-view to 3d use cases?
* Although single-view X-Ray can solve most cases as a simple baseline, the reviewer suggests further strengthening the performance by introducing a multi-view solution.
* Similar to multi-view depth representation, it is easy to extend X-Ray to a multi-view style by casting rays under multiple cameras and then concatenating these frames as a video. There will be $N \times L$ frames in the multi-view X-Rays, where $N$ is the number of views and $L$ is the number of surface layers in each view. We can then generate the multi-view X-Rays from a single image via the same video diffusion model.
* We greatly appreciate the reviewer's suggestion and will focus on validating how multi-view X-Rays outperform single-view X-Ray generation in our subsequent research work.
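The $N \times L$ frame concatenation described above can be sketched in a few lines. This is an illustrative NumPy sketch with hypothetical names, assuming each view's X-Ray is stored as an (L, H, W, 8) frame stack:

```python
import numpy as np

def stack_multiview_xray(view_xrays):
    """Concatenate per-view X-Rays into one frame sequence.

    Sketch of the extension described above: each view contributes L
    surface frames of shape (H, W, 8); N views are concatenated along
    the frame axis to give N * L frames, which a video diffusion model
    can treat as a single sequence.
    """
    # each element: (L, H, W, 8) -> result: (N * L, H, W, 8)
    return np.concatenate(view_xrays, axis=0)

# toy check: 2 views, 8 layers each, at 4x4 resolution
views = [np.zeros((8, 4, 4, 8)) for _ in range(2)]
frames = stack_multiview_xray(views)
print(frames.shape)  # (16, 4, 4, 8)
```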
### 2. PSNR value on the given single view is still an informative and meaningful metric even for single view to 3d task
* The reviewer is very expert in novel view synthesis and 3D reconstruction and generation research. We thank the reviewer's comment that the PSNR value is an informative and meaningful metric for the single-view to 3D task.
* For evaluation, the deeper question is which should be considered the ground truth for photometric performance: albedo (original surface color) or image (rendered color under lighting). We predict the albedo before rendering, while rendering-based methods predict the final image color after rendering. These are two different colors. Due to this ambiguity, we did not report these results. However, in our previous experiment using albedo as the ground truth, our X-Ray achieved 21.3 PSNR, whereas the state-of-the-art TripoSR only achieved 17.6 PSNR on the GSO dataset.
* We understand that the reviewer might be referring to the performance of evaluating rendered colors, similar to other rendering-based methods. Our solution is to predict the 3DGS parameters for each surface of X-Ray and render the image to compare with the ground truth. As described in Response 1, as the first work, we aim to provide a simple baseline X-Ray representation and avoid involving more technical complexities. We will report both photometric performance results in the revised paper. Similar to the normalized shape evaluation in this paper, we hope the photometric evaluation under both albedo and image can provide a new benchmark for the community. | Summary: This paper introduces X-Ray, a new 3D representation designed for efficient generation of 3D objects from single images. The key idea is to represent a 3D object as a sequence of 2D "surface frames" capturing hit/miss, depth, normal, and color information along rays cast from the camera viewpoint. This sequential representation lends itself well to generation using video diffusion models, enabling the synthesis of both visible and hidden surfaces. The authors propose a pipeline consisting of an X-Ray diffusion model to generate low-resolution surface frames from an input image, followed by an X-Ray upsampler to enhance resolution. They evaluate their method on single-view 3D reconstruction and unconditional 3D shape generation tasks, reporting quantitative results on standard benchmarks.
Strengths: - The paper tackles the important challenge of reconstructing complete 3D models, including hidden surfaces, from single images. This is a significant limitation of current rendering-based approaches, and the authors' focus on this problem is well-motivated and timely.
- The X-Ray representation, while its novelty requires further substantiation, offers an intuitive way to encode 3D surface information in a sequential manner. By focusing solely on surface details rather than volumetric data, the representation has the potential to be more memory-efficient than voxel grids or dense point clouds, especially for objects with complex internal structures.
- The authors smartly leverage recent advancements in video diffusion models for 3D generation. This is a promising direction, as video diffusion models have shown impressive capabilities in synthesizing high-quality and temporally coherent sequences of images. Adapting these models to the task of 3D generation through the X-Ray representation is a reasonable and potentially fruitful approach.
Weaknesses: - The paper does not provide a convincing argument for the novelty of the X-Ray representation. While its sequential capture of surface information is intuitive, a thorough comparison to existing techniques is lacking. Specifically, the authors should clearly differentiate X-Ray from methods like depth peeling, multi-view depth images, multi-plane images (MPI), and notably, the PI3D representation (Liu et al., CVPR 2024), which also leverages diffusion models for 3D generation.
* Liu, Ying-Tian, Yuan-Chen Guo, Guan Luo, Heyi Sun, Wei Yin, and Song-Hai Zhang. "Pi3d: Efficient text-to-3d generation with pseudo-image diffusion." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19915-19924. 2024.
- The paper repeatedly emphasizes the efficiency of the X-Ray representation without providing concrete evidence or analysis. The authors should quantify their claims by comparing the memory footprint and computational costs of X-Ray to alternative representations like voxel grids, point clouds, and neural implicit representations (e.g., NeRFs) for objects of varying complexity.
- While leveraging video diffusion models is promising, the paper does not clearly articulate how the specific properties of the X-Ray representation are exploited within the diffusion process beyond being a sequential data format. Do the hit/miss indicators or the ordered nature of surface frames influence the model architecture or training? Would similar performance be achieved with alternative sequential representations as input to the diffusion model?
- The evaluation heavily relies on reconstruction metrics (CD, EMD), even when assessing a generative model. While these metrics are relevant for the single-view reconstruction task, they do not capture the generative capabilities of X-Ray. The authors should expand generative evaluation to diverse categories beyond ShapeNet Cars. The paper could assess generative quality by evaluating the diversity and realism of multiple shapes generated from the same input image.
- The evaluation would be also strengthened by: (1) Including recent SDS-based single-view 3D generation methods as baselines; (2) Providing visualizations of generated shapes for the unconditional generation experiment; (3) Dedicating a section to analyze failure cases, visually showcasing problematic inputs and outputs.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors please elaborate on the key differences between the X-Ray representation and existing techniques like depth peeling, multi-view depth images, and multi-plane images (MPI)? Also please include the discussion with PI3D.
- To support the claims of efficiency, could the authors provide a quantitative analysis of the memory footprint and computational cost (encoding, decoding, generation) of X-Ray compared to voxel grids, point clouds, or NeRFs? This analysis should consider objects of varying complexity and resolutions.
- Could the authors please include generative metrics for the single-view 3D reconstruction experiments on GSO and OmniObject3D?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The authors merely state limitations without explaining their causes, impact, or potential solutions. E.g., saying "X-Ray frames become sparse" is not enough. How does this sparsity affect generation? How can it be addressed?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer xHzv
### Question 1: Comparison to Existing Techniques
A thorough comparison to existing techniques like depth peeling, multi-view depth images, MPI, and the PI3D.
### Response 1
* We discussed multi-view images and MPI in the related work. Furthermore, multi-view depth cannot sense interior surfaces and may record redundant surfaces across nearby views. MPI divides the object into planes at fixed distances, while our X-Ray stores a dynamic number of surfaces. Refer to Table 1 below for an efficiency comparison.
* We apologize for not being familiar with depth peeling before submission. Our X-Ray and depth peeling serve different purposes: depth peeling is mainly for rendering transparent surfaces, while X-Ray transforms any 3D object into a video format. Besides, our main contribution is using video diffusion as the generator to generate objects.
* PI3D is an interesting approach that uses diffusion models for 3D generation. Since we do not have access to its source code, to ensure a fair comparison we will reimplement the PI3D method and include it in the revised paper.
### Question 2: Efficiency of the X-Ray
The paper should provide more evidence and analysis to support its claims about the efficiency of the X-Ray.
### Response 2
We compared the efficiency of different representations using 500 3D meshes from the ShapeNet dataset. The results show that both the point cloud and X-Ray are highly efficient, with lower memory usage and faster encoding and decoding times. However, X-Ray has the advantage of being reorganizable into a video format for diffusion models, leading to better performance.
|Method|Memory (↓)|Encoding Method|Encoding Time (↓)|Decoding Method|Decoding Time (↓)|Reconstruction Error (CD) (↓)|Generation Metric (Cov-EMD) (↑)|
|-|-|-|-|-|-|-|-|
|3D Grid|67.09 MB|Voxelization|0.105 s|Poisson|~5 s|7.7e-3|3D-DiT[36] → 56.38|
|Multi-View Depths (8 views)|1.57 MB|Rendering via Blender|0.045 s|Fusion & Poisson|~10 s|1.1e-2|-|
|MPI (8 planes)|1.57 MB|Slicing & Rendering via Blender|0.049 s|Poisson|~5 s|8.9e-3|-|
|Point Cloud (200,000 points)|__0.90 MB__|Surface Sampling|__0.013 s__|Poisson|~5 s|__7.2e-3__|LION[59] → 56.53|
|X-Ray (8 layers)|__0.82 MB__|Ray Casting|__0.016 s__|Poisson|~5 s|7.8e-3|Ours → __60.27__|
Table 1. Comparison with other 3D representations in Efficiency.
### Question 3: Ablation Studies
The paper should explain how the unique properties of the X-Ray are utilized in the diffusion process and whether other sequential representations could achieve similar performance.
### Response 3
In section A.3 of the Appendix, we have conducted ablation studies to examine various aspects of the X-Ray model, including the impact of the hit/miss indicators. After exploring different formats, we transitioned to an off-the-shelf Stable Video Diffusion method, which proved to be more effective in addressing the sequential generation problem in 3D reconstruction.
### Question 4: Evaluation Metrics
The evaluation should include a broader range of categories to assess the generative capabilities of X-Ray. Could the authors please include generative metrics for the single-view 3D reconstruction experiments on GSO and OmniObject3D?
### Response 4
We chose to focus on cars as they have complex internal and external 3D surfaces, making them an ideal category to showcase the benefits of our method. Objaverse has over 1000 categories, and previous state-of-the-art methods only used reconstruction metrics for evaluation. Thank you for the reminder; we have included generative metrics for the single-view 3D reconstruction experiments on GSO in the table below. Since our method can sense inner surfaces, it achieves a clear advantage. Any additional metrics will be included in the revised paper.
|Method| 1-NNA (EMD) ↓ |Cov (EMD) ↑|
|-|-|-|
|One-2-3-45[24]|64.37|25.85|
|OpenLRM[13]|59.14|38.31|
|TripoSR[48]|57.25|40.69|
|X-Ray|__52.42__|__48.27__|
Table 2. Generative metrics on GSO Dataset.
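For readers unfamiliar with the Cov metric reported in Table 2 above, here is an illustrative sketch of how coverage is typically computed between generated and reference shape sets. This is not the authors' evaluation code, and it uses Chamfer distance in place of EMD for brevity:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets of shape (N,3), (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def coverage(generated, reference):
    """Fraction of reference shapes matched by at least one generated shape.

    Each generated shape is matched to its nearest reference shape; Cov is
    the share of reference shapes matched at least once, so higher is better
    (low coverage indicates mode collapse).
    """
    matched = {min(range(len(reference)),
                   key=lambda j: chamfer(g, reference[j]))
               for g in generated}
    return len(matched) / len(reference)

# toy check: both generated shapes are closest to the first reference
ref = [np.zeros((16, 3)), np.ones((16, 3)) * 10.0]
gen = [np.zeros((16, 3)) + 0.1, np.zeros((16, 3)) + 0.2]
print(coverage(gen, ref))  # → 0.5, only the first reference is covered
```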
### Question 5: Additional Suggestions
The evaluation would be also strengthened by: (1) Including recent SDS-based single-view 3D generation methods as baselines; (2) Providing visualizations of generated shapes for the unconditional generation experiment; (3) Dedicating a section to analyze failure cases, visually showcasing problematic inputs and outputs.
### Response 5
1. SDS-based methods like Zero-1-to-3 and DreamGaussian rely on diffusion loss to optimize NeRF or 3D Gaussian splatting. However, these methods are time-consuming, taking several minutes to generate a single 3D mesh from an image, which makes them impractical to evaluate on large datasets like GSO and OmniObject3D; none of them report 3D reconstruction metrics. In contrast, all rendering-based methods and our X-Ray can generate 3D meshes from images in just a few seconds. Additionally, SDS-based methods struggle to generate inner surfaces, limiting their performance on datasets such as GSO and OmniObject3D.
2. We have added more visualizations in the Appendix (Sec A.4). Specifically, Figures 6 and 7 showcase arbitrary 3D objects generated by our proposed method. We appreciate your feedback and plan to further enhance the performance on a 3D CAD model dataset (abundant inside and outside meshes) in the revised paper.
3. We have included a dedicated section in the Appendix (Sec A.5) and Figure 8 to analyze failure cases, and we will enhance this section in the revised paper.
### Question 6: Solution to Limitation
How does this sparsity affect generation? How can it be addressed?
### Response 6
To further overcome this sparsity-related limitation, an autoregressive large language model (LLM) can be employed as the generator, allowing each ray to generate a varying number of surfaces, ranging from 0 to 100+, depending on its actual surface layers.
Rebuttal: ### Responses to All Reviewers:
We would like to express our sincere gratitude for the valuable feedback provided by the reviewers. It is truly encouraging to see that our X-Ray work has been given a positive evaluation by all the reviewers, and we appreciate their comments and constructive suggestions.
Although more visualizations and ablation studies were included in the Appendix of the submitted paper, we apologize for the confusion caused by unclear expressions, potentially inadequate comparisons with existing methods, and a lack of detailed explanations in the paper. In response, we have diligently addressed all the concerns and suggestions raised by the reviewers in the corresponding rebuttals.
BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation | Accept (poster) | Summary: This paper proposes a plug-and-play diffusion refiner for pre-trained zero-shot feed-forward MDE (monocular depth estimation) models, so that these generalized models can capture fine-grained details. In this paper, the coarse depths output from a pre-trained feed-forward MDE model are used as additional conditions for the proposed diffusion refiner, which is parameterized into any existing diffusion-based MDE model. In addition, a global pre-alignment strategy and a local patch masking strategy are proposed to ensure the faithfulness of the predicted depths.
Strengths: (1) The paper is well-organized.
(2) The proposed diffusion refiner is simple and clear, and could be easily combined with different existing feed-forward MDE methods.
(3) Experimental results on several datasets demonstrate the effectiveness of the proposed method in some cases.
Weaknesses: (1) The motivation of this work is a bit far-fetched: As stated in the introduction, the low accuracy of the existing feed-forward MDE methods is caused by noisy and incomplete depth labels collected in real-world scenarios, while the poor generalizability of the diffusion-based MDE methods is caused by less diverse synthetic training samples. However, if only training data results in limited MDE performance for these existing methods, it seems that a straightforward motivation for alleviating the above problems is to enrich the training datasets, rather than to modify the model architecture as done in this work.
(2) Some statements in Table 1 are confusing: (i) To the reviewer’s knowledge, many existing feed-forward methods only use real datasets for training, rather than using real and synthetic datasets together. Why do the authors indicate that feed-forward methods use both real and synthetic datasets for training? (ii) Some existing diffusion-based methods (e.g., "Monocular depth estimation using diffusion models.arXiv preprint arXiv:2302.14816, 2023.") also use real datasets for training. Why do the authors indicate that diffusion-based methods use only synthetic datasets for training in Table 1? (iii) The authors state that feed-forward methods have a better generalizability than diffusion-based methods. But to the reviewer’s knowledge, in many zero-shot learning tasks (e.g., zero-shot recognition), generative models (e.g., GANs and diffusion models) generally show their better generalizability than the feed-forward counterparts. Could the authors give some explanations on why to make the above statement?
(3) Comparative evaluation: As seen from Tables 2 and 3, when the proposed BetterDepth is used together with two early methods MiDaS (published in 2020) and DPT (published in 2021), it could bring some improvements. However, when the proposed BetterDepth is used together with Depth Anything[43] (a SOTA method), it only brings a slight improvement on most of the used datasets (particularly NYUv2 and ScanNet). So it seems that the proposed BetterDepth has a limited effect for boosting the performance of SOTA methods.
Technical Quality: 2
Clarity: 3
Questions for Authors: (1) As for the global pre-alignment, since the estimated depth values are derived from their corresponding depth labels, what is the performance of only using them in the local patch masking? And for the local patch masking, the authors simply discard significantly different regions in the estimated depths and real depths, will this lead to the loss of visual world information?
(2) In lines 128-132, the author's conclusion is that large-scale datasets lead to the generalizability of the feed-forward MDE methods. Therefore, in order to improve the generalizability of diffusion-based MDE methods, why not to simply train them with large-scale datasets following the feed-forward MDE methods? Why is this still a challenge?
(3) In lines 133-137, the author's conclusion is that in the diffusion-based MDE methods, the high-quality labels in the synthetic datasets lead to the ability to capture fine-grained details, but some feed-forward MDE methods also use these synthetic (and real) datasets for training as indicated in Table 1. Why are these feed-forward MDE methods unable to capture fine-grained details?
(4) Is Fig. 3 a schematic diagram or a direct visualization of the output distribution? What is the difference between the output distributions X(M_DM,D_syn) and \hat{X}? Moreover, for the middle sketch, why does the global pre-alignment not only affect \hat{X} but also X(M_DM,D_syn)?
(5) As indicated in Fig. A2, lower η generally means stricter filtering, but if η is too small, too many local regions would be discarded, which would lead to severe information loss. To this end, I’m wondering how many local regions are discarded in the current version of BetterDepth, and how to balance the strictness of the filter and the loss of information.
(6) Is there any diffusion-based MDE method trained on real datasets?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The proposed method might be improved in two ways: (i) The offsets from the real depths to the estimated depths could be modeled in an implicit learnable manner; (ii) The pairs of significantly different local regions in two depths may not be harmful, but contain useful information for monocular depth estimation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments. We kindly ask the Reviewer to read the **top-level global response** first. Our detailed responses to the comments in the weaknesses (denoted as W) and questions (denoted as Q) sections are listed below.
> **W-1: Motivation**
Both the training data and the model architecture are important for performance. The very recent method Depth Anything V2 [R4] uses synthetic data for better details. Although promising improvements are achieved in Depth Anything V2, BetterDepth still shows better performance as discussed in the `detail evaluation` section of global response, thanks to the iterative refinement of diffusion models.
Enriching the training dataset helps to boost MDE performance, but (i) obtaining high-quality labels for real datasets is difficult due to depth sensor limitations, (ii) synthetic datasets offer high-quality labels but are costly to generate at scale, and (iii) training on large datasets is both time-consuming and resource-intensive.
BetterDepth efficiently combines the strengths of feed-forward and diffusion-based MDE models, achieving robust performance with fine-grained details with minimal training effort.
> **W-2: Table 1**
(i) Feed-forward models can easily use large-scale datasets (both synthetic and real ones) to gain robust performance, e.g., Depth Anything V2 [R4], so we include both in Tab. 1 for generality.
(ii) Recent works [9,11,14] validate the effectiveness of capturing fine details by training diffusion models on synthetic datasets. While techniques like depth infilling [34] enable training on real datasets, the resulting depth maps tend to be smoother due to the sparse and noisy labels. Thus, we focus on the recent diffusion-based MDE methods with synthetic data. We will explicitly indicate this to avoid confusion.
(iii) Diffusion-based models could generalize better in tasks with diverse, accurately annotated datasets, e.g., recognition, but the sparse/noisy labels of real datasets and the limited diversity of synthetic datasets in the MDE task make it challenging. Tab. 2 also supports the significant performance gap between the state-of-the-art diffusion and feed-forward MDE approaches, e.g., Marigold v.s. Depth Anything.
> **W-3: SOTA improvements**
BetterDepth aims to achieve robust MDE performance with fine-grained details. Despite achieving state-of-the-art performance, Tab. 2 cannot show the full advantages of BetterDepth (especially detail extraction) due to the sparse and noisy depth labels (Fig. A6-A15), which is also observed in [R4]. Thus, we provide the `detail evaluation` experiment in the global response, and the results in Tab. T3 verifies our significant improvements over the state-of-the-art model.
> **Q-1: Training strategies**
To test patch masking with only depth labels, we use a smoothed version of depth label as conditioning to imitate the estimation of pre-trained MDE models. As shown in the table below, patch masking with only depth labels (denoted as Label Only) shows inferior performance due to the significant distribution gap of depth conditioning in the training and inference stages.
|Method|NYUv2|KITTI|ETH3D|ScanNet|DIODE|
|-|-|-|-|-|-|
|Label Only|4.4/**98.0**|8.0/94.0|7.8/97.8|4.4/98.0|23.0/75.4|
|**BetterDepth**|**4.2**/**98.0**|**7.5**/**95.2**|**4.7**/**98.1**|**4.3**/**98.1**|**22.6/75.5**|
For patch masking, discarding patches will indeed result in loss of visual information. However, since BetterDepth utilizes the rich geometric prior from the pre-trained model, training on limited visual information already yields promising results, e.g., BetterDepth-2K is comparable with our full model in Tab. 2.
> **Q-2: Large datasets**
BetterDepth aims to achieve both robust performance and fine details. Although large-scale real datasets can be employed to gain better generalizability, the sparse and noisy labels hinder models from extracting fine details. Synthetic datasets provide high-quality labels for fine detail extraction, but their low diversity limits the learned geometric priors. Similar discussions can be also found in Sec. 2-3 of [R4].
> **Q-3: Fine details**
Training with synthetic datasets could help improve detail extraction, but the model architecture is also important. The recent Depth Anything V2 [R4] employs synthetic training data for better details, and we perform comparison in the `detail evaluation` section of the global response. Thanks to the iterative refinement scheme, BetterDepth shows the best performance on detail extraction (Tab. T3).
> **Q-4: Fig. 3**
Fig. 3 is a schematic diagram where X_(M_FFD, {D_syn, D_real}) and X_(M_DM, D_syn) are fixed distributions representing the characteristics of different methods (Tab. 1). By contrast, \hat{X} indicates the learned output distribution of BetterDepth and we mainly analyze its change under different training strategies. Thus, only the \hat{X} is affected by global pre-alignment in the middle sketch. We will clarify this to avoid confusion.
> **Q-5: Filtering**
The percentage of discarded patches on the training dataset is 36.6% in BetterDepth. Although a small $\eta$ will lead to fewer valid patches, BetterDepth works well with small-scale training datasets even under the information loss (as discussed in Q-1). We empirically find the optimal value $\eta=0.1$ to achieve the best performance balance (Fig. A2).
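The global pre-alignment and local patch masking discussed above can be sketched together. The following NumPy example is an assumption-laden illustration, not the authors' implementation: it fits a least-squares scale/shift from the affine-invariant prediction to the label (a standard alignment, as in MiDaS-style evaluation), then discards patches whose mean absolute error after alignment exceeds a threshold (the toy threshold here is in absolute depth units, not the paper's $\eta=0.1$):

```python
import numpy as np

def align_and_mask(pred, gt, eta, patch=16):
    """Global pre-alignment + local patch masking (illustrative sketch).

    Fits scale s and shift t minimizing ||s*pred + t - gt||^2, then masks
    out patches whose mean absolute error exceeds eta, so noisy label
    regions are excluded from the training loss.
    """
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    aligned = s * pred + t
    H, W = gt.shape
    mask = np.ones((H, W), dtype=bool)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            err = np.abs(aligned[i:i + patch, j:j + patch]
                         - gt[i:i + patch, j:j + patch]).mean()
            if err > eta:
                mask[i:i + patch, j:j + patch] = False  # discard patch
    return aligned, mask

# toy check: gt = 2*pred + 1 everywhere except one corrupted patch
pred = np.random.default_rng(0).uniform(0, 1, (32, 32))
gt = 2.0 * pred + 1.0
gt[:16, :16] += 5.0                      # simulate a noisy label region
aligned, mask = align_and_mask(pred, gt, eta=2.0)
print(mask[:16, :16].any(), mask[16:, 16:].all())  # → False True
```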
> **Q-6: Diffusion MDE + real data**
Both [34] and [R5] employ depth infilling techniques to train diffusion-based MDE models on real datasets and achieve promising results. However, these works primarily show the feasibility of applying diffusion models on MDE without exploring fine detail extraction like recent methods, e.g., Marigold. Besides, they focus on in-domain testing instead of zero-shot evaluation. We will add [R5] to the related work section.
> **Limitation**
Thank you. We will explore metric depth and improve information utilization in future works.
---
Rebuttal Comment 1.1:
Title: Please discuss
Comment: Dear reviewer,
The discussion period is coming to a close soon. Please do your best to engage with the authors.
Thank you,
Your AC
---
Rebuttal Comment 1.2:
Comment: Dear authors,
Thanks for the rebuttal. The rebuttal has cleared some of my concerns. However, although the evaluation on an extra dataset has been added, one main concern still remains: the proposed BetterDepth has a limited effect for boosting the performance of SOTA methods. Additionally, comparing Table T2 in the rebuttal and Table A2 in the submitted text, it is noted that some results of “the proposed method + Depth Anything” become better. Which results should the readers believe? Hence, I would keep my initial rating.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer EiFe,
Thank you for your responses and comments. We'd like to provide further clarifications for your remaining concerns:
> **Improvements over SOTA**
The proposed BetterDepth aims to achieve robust MDE performance with fine details. Our experiments verify the superiority of BetterDepth in both zero-shot performance (Tab. T2) and fine-grained detail extraction (Tab. T3). Extensive visual results also support the improvement of BetterDepth over SOTA methods (e.g., Fig. 1, 5, A4, A5).
> **Tab. A2 and Tab. T2**
Thanks for the comments. Tab. T2 uses the same settings as **Tab. 2** in the main paper, where BetterDepth is trained with Depth Anything. For Tab. A2, we train an additional BetterDepth variant with DPT [25], and we explicitly indicate the experimental settings in the caption of Tab. A2 and Sec. E.
We hope this addresses your concerns. Please feel free to let us know of any additional comments and suggestions. Thank you.
Best,
Authors | Summary: The paper proposes a simple approach to improve and refine current monocular depth estimation (MDE) methods. Leveraging the strong geometric prior from a state-of-the-art discriminative depth estimation method, and the strong image prior from a generative model, the authors set a new state-of-the-art in MDE. They condition a pre-trained latent diffusion model on an image and a corresponding depth map from a pre-trained MDE model and fine-tune the diffusion part to obtain higher-fidelity depth maps. Additional loss masking and alignment of affine-invariant depth to the ground truth are found to be crucial for performance.
Strengths: The paper is well-organized and well-written, with a clear common thread. The authors present a comprehensive set of experiments, validating the effectiveness of their method, and clearly ablating their contributions. The idea is simple and achieves very good results on a broad range of depth estimation benchmark datasets with minimal additional training effort, utilizing strong image and depth foundation models.
Weaknesses: - Since the idea is that simple, the general impact or contribution might only be moderate in my opinion. The main contribution is the injection of additional conditioning information into the diffusion model. Since this prior already gives state-of-the-art results, adding a full diffusion model on top with a strong image prior to learn the affine-invariant residual between ground truth and prediction is obvious to improve benchmark metrics. This also explains why there is only few data needed, and why the error bar in appendix D is much lower than Marigold. The task for Marigold is much harder compared to the refinement of a very good depth map.
- Further contribution claims incorporate the global pre-alignment and local patch masking. The first, global pre-alignment, is a simple re-phrasing of scaling and shifting the ground-truth depth map to the predicted depth map of the MDE method, which is a common approach for testing affine-invariant depth estimation methods. Since the least squares fit cannot satisfy all pixels to match perfectly, discarding some portion of it, which the authors termed local patch masking, is also just a small trick to further boost performance. Hence, these two contributions lack novelty in my opinion, and the only real novelty is the incorporation of a strong depth prior into the diffusion model.
Besides the concern about contribution impact, I think the idea is presented very clearly, and the paper is nicely polished.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Figure 1: I was wondering why the surface normals still show some wobbles for flat surfaces. Since the strong geometric prior from DepthAnything does not show these artefacts, it seems like the diffusion model inserts these. Do you have any explanation for that? Have you evaluated whether this stems from the first stage?
- Line 196: Maybe I am missing something, but is there a reason why you choose max-pooling? As far as I understand that means that if at least one patch in a certain region has a small distance between ground truth depth and scaled and shifted depth, the full region is included for training. Do you have an estimate of how often a full region is rejected? And at which semantic regions in an image that usually is the case (e.g. only for the sky)?
- Furthermore, since you exclude non-matching regions via the local patch masking, wouldn't it be better to directly use an outlier-aware method (e.g. RANSAC)? This would better align the matching parts directly, and non-matching parts could be filtered out more easily.
- Since depth refinement alone is in my opinion only a moderate contribution, I was wondering whether you could extend your method to transfer affine-invariant depth to metric depth, since this might be a more challenging and thus more interesting task?
- Line 305: Does this time include the MDE model? Maybe I was missing it, but can you give a clear separation of how much time goes into which step and how many ensembles you are taking for this inference time measure?
- If possible I would appreciate further clarification on appendix C. Is the model without any geometric prior just Marigold? And did you train the diffusion model without image prior completely from scratch?
- Lastly, one of your claims is to capture more "fine-grained scene details" compared to other methods. I was wondering whether you know about some quantitative way to show that?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed and discussed the limitation of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. We kindly ask the Reviewer to read the **top-level global response** first. Our detailed responses to the comments in the weaknesses (denoted as W) and questions (denoted as Q) sections are listed below.
> **W-1: Contribution**
Apart from the depth-conditioned diffusion refiner, a key contribution of BetterDepth is the proposed training strategies to achieve both robust performance and fine details. While Depth Anything gives state-of-the-art results, naively conditioning on it without our training strategies only yields inferior results as shown in Tab. T1 (Naive Conditioning v.s. BetterDepth) and discussed in the `contribution` section of the global response. In addition, the advantages of BetterDepth, e.g., lower error bar, also come from the proposed training strategies. We compare the standard deviation (std) based on the settings in Sec. D, and the results below show that the naive conditioning model is even more unstable than Marigold. This is because the injection of additional conditioning information makes it harder to determine which prior to follow, and the performance of BetterDepth further highlights the importance of our training methods.
|Methods||AbsRel std $\downarrow$||$\delta1$ std $\downarrow$|
|-|-|-|-|-|
|Marigold||0.66||0.99|
|Naive Conditioning||0.81||1.06|
|**BetterDepth**||**0.28**||**0.28**|
> **W-2: Training strategies**
To achieve robust MDE performance with fine details, the key challenge is how to ensure conditioning strengths while enabling the learning of detail refinement. We agree that our training strategies are not difficult to implement, but the motivation to use them during training is more important.
- To ensure **conditioning strength**, we propose to narrow the distance between depth conditioning and labels in a global-to-local manner. The pre-alignment first eliminates the global differences caused by unknown scale and shift, and the patch masking further addresses the local estimation bias in depth conditioning.
- For **detail refinement**, global pre-alignment and local patch masking together contribute to fine-grained detail extraction. Although significantly different regions are excluded to ensure conditioning strength, the combination of pre-alignment and patch masking still enables the learning of detail refinement as shown in Fig. S1 of the attached PDF, e.g., the basket.
Thus, the proposed training strategies are critical for better MDE performance and fine-grained details, which is also supported by the results/analyses in W-1.
> **Q-1: Wobbles**
Diffusion-based MDE methods tend to introduce subtle variation due to the random noise in the diffusion process. This can be fixed by using the mean instead of the median in test-time ensembling (Fig. S2 in the attachment), which also achieves better performance as follows.
|Method||NYUv2||KITTI||ETH3D||ScanNet||DIODE|
|-|-|-|-|-|-|-|-|-|-|-|
|Median||**4.2**/98.0||7.5/95.2||4.7/**98.1**||**4.3/98.1**||22.6/**75.5**|
|Mean||**4.2/98.1**||**7.4/95.3**||**4.6/98.1**||**4.3/98.1**||**22.5/75.5**|
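As a concrete illustration of the two ensembling reducers compared above, here is a minimal NumPy sketch of ours (not the authors' code): test-time ensembling aggregates N stochastic diffusion predictions per pixel, and switching the reducer from median to mean smooths out noise-induced outliers.

```python
import numpy as np

def ensemble(preds, reducer="mean"):
    """Aggregate a list of per-pixel depth predictions with mean or median."""
    stack = np.stack(preds)  # shape (N, H, W)
    return stack.mean(axis=0) if reducer == "mean" else np.median(stack, axis=0)

# Three toy constant predictions for a 2x2 map, one of them an outlier.
preds = [np.full((2, 2), v) for v in (0.9, 1.0, 1.4)]
out_mean = ensemble(preds, "mean")      # every entry 1.1
out_median = ensemble(preds, "median")  # every entry 1.0
```

The mean incorporates all samples (here pulling the estimate toward the outlier), while the median ignores it; which behaves better depends on the noise characteristics of the diffusion samples.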
> **Q-2: Max-Pooling**
Max-pooling is used to convert pixel-level masks to latent space. We follow Marigold to employ 8x8 max-pooling for mask downsampling as the VAE encoder in our diffusion model performs 8x downsampling for pixel-to-latent conversion. Since the patch size is also set to 8x8, max-pooling only happens within each patch without affecting a larger region. We will clarify this in revision.
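The 8x8 max-pooling of pixel-level masks described above could be sketched as follows; this is our own illustrative NumPy snippet, and the actual implementation may differ in detail.

```python
import numpy as np

def maxpool_mask(mask, patch=8):
    """Downsample a binary pixel mask by per-patch max: a latent cell is
    marked valid (1) if any pixel inside its patch is valid, matching the
    8x pixel-to-latent downsampling of the VAE encoder."""
    h, w = mask.shape
    assert h % patch == 0 and w % patch == 0
    # Split into (rows, patch, cols, patch) blocks and reduce each block.
    return mask.reshape(h // patch, patch, w // patch, patch).max(axis=(1, 3))

mask = np.zeros((16, 16), dtype=np.uint8)
mask[3, 5] = 1                # a single valid pixel in the top-left 8x8 patch
pooled = maxpool_mask(mask)   # shape (2, 2); only pooled[0, 0] is 1
```

Because the pooling window equals the patch size, each latent cell is influenced only by its own 8x8 patch, consistent with the statement that max-pooling never leaks across patches.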
> **Q-3: Outlier-aware method**
Better alignment via outlier-aware methods could indeed allow more patches to survive, potentially improving performance. Although the data efficiency experiments (Tab. 2) show that fewer valid training patches lead to lower model performance, the models trained with small datasets, e.g., BetterDepth-2K, already achieve comparable results to our full model, indicating that limited patches can be sufficient. Nevertheless, better alignment methods could improve patch preservation to further benefit efficient training.
> **Q-4: Metric depth**
Transferring affine-invariant depth to metric depth is a promising direction to benefit practical use but poses challenges, e.g., scale/shift ambiguity and diverse depth ranges. Nonetheless, BetterDepth shows potential to boost metric depth estimation. We employ the metric Depth Anything model and apply BetterDepth in a plug-and-play manner on zero-shot datasets iBims-1 [R6] and SUN RGB-D [R7]. Due to the unknown scale and shift in the current BetterDepth, we align the outputs of Depth Anything and BetterDepth to depth labels and compare the quality of depth maps. The table below (metrics are AbsRel $\downarrow$ / $\delta1 \uparrow$) and Fig. S3 of the attachment verify the superiority of BetterDepth. One promising next step is to model the scale/shift in an implicit learnable manner (as suggested by Reviewer EiFe), and we will further study this in future works.
|Method||iBims-1||SUN RGB-D|
|-|-|-|-|-|
|Depth Anything||5.1/97.7||13.5/87.7|
|BetterDepth||**4.5/98.2**||**12.7/88.8**|
> **Q-5: Inference**
The inference time includes the pre-trained MDE model, i.e., Depth Anything. To measure the time spent at each step, we reproduce the experiment, and the inference time of the pre-trained model and diffusion model are 0.02 and 0.38 seconds per sample, respectively. The ensemble size is set to 1.
> **Q-6: Appendix C**
The model without geometric prior uses the same network and fine-tuning method as Marigold but estimates inverse depth (following Depth Anything) instead of relative depth. For the model without image prior, we follow Stable Diffusion to train the latent UNet from scratch and keep the pre-trained VAE unchanged.
> **Q-7: Detail evaluation**
Thanks for the suggestions. We conduct a quantitative evaluation for detail extraction in the `detail evaluation` section of the global response, and the results in Tab. T3 demonstrate the state-of-the-art performance of BetterDepth in detail extraction.
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: Thank you for your response and the detailed answers to my questions, which addressed most but not all of my concerns.
I still have doubts about the contribution of the method. As mentioned earlier, providing a diffusion model with a very strong depth prior, which is already state-of-the-art, is highly likely to improve metrics. The proposed training strategies - global pre-alignment and local patch masking - are common techniques in the MDE literature. The global pre-alignment seems to be a rephrasing of the well-known approach of scaling and shifting the ground truth depth map to match the predicted depth map, which is standard practice for testing affine-invariant depth estimation methods. The local patch masking, which involves discarding certain portions to boost performance, appears to rather be a trick to circumvent the issues of non-matching scale and shift parameters.
In my view, the real novelty lies in incorporating a strong depth prior into the diffusion model, but the other contributions seem to lack originality or novelty.
Nevertheless, I will maintain my score, since the small incremental step of adding a strong prior is well ablated and justified.
---
Reply to Comment 1.1.1:
Title: Discussions
Comment: Dear Reviewer Qdvk,
Thank you very much for your responses and comments. We'd like to provide further clarifications in the hope of addressing your remaining concerns:
> **A diffusion model + a very strong depth prior is likely to improve metrics**
Simply providing a diffusion model with a strong depth prior does not outperform the pre-trained MDE model itself (see Depth Anything v.s. Naive Conditioning in Tab. T1 of the global response). As such, it does not immediately improve metrics and these counter-intuitive results reveal the challenges as discussed in the `contribution` part of the global response. BetterDepth efficiently addresses these challenges and achieves state-of-the-art performance.
> **Global pre-alignment + local patch masking**
Thanks for the comments. The global pre-alignment uses the same least squares fitting as the affine-invariant MDE evaluation protocol [26], and we explicitly point this out on Lines 170-171 of the main paper. We consider the simplicity of our approach as an advantage instead of a drawback and additionally provide the motivation behind each design choice as well as comprehensive analysis on Lines 162-223 of the main paper.
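For concreteness, the least squares fitting referenced above (solving for a scale s and shift t so that s * pred + t best matches the ground truth) can be sketched as follows. This is our own minimal NumPy illustration of the standard affine-invariant protocol, not the paper's code:

```python
import numpy as np

def align_scale_shift(pred, gt):
    """Closed-form least-squares fit of scale s and shift t minimizing
    ||s * pred + t - gt||^2, as in affine-invariant depth evaluation."""
    p, g = pred.ravel(), gt.ravel()
    A = np.stack([p, np.ones_like(p)], axis=1)  # design matrix [depth, 1]
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * p + t, s, t

# If gt is an exact affine transform of pred, the fit recovers it exactly.
pred = np.array([1.0, 2.0, 3.0, 4.0])
gt = 2.0 * pred + 0.5
aligned, s, t = align_scale_shift(pred, gt)  # s ≈ 2.0, t ≈ 0.5
```

As the reviewer notes, this global fit cannot satisfy every pixel when the residual is not purely affine, which is exactly the gap the local patch masking is meant to handle.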
At last, we would like to emphasize that we respect your perspective and sincerely appreciate your comments and suggestions, which significantly improve the quality of our paper. We are also grateful for your support in continuing to recommend the acceptance of our paper. Thank you.
Best,
Authors | Summary: This paper presents a plug-and-play monocular depth estimator with the diffusion model. In the proposed method, the authors first employ the pretrained monocular depth model (MDE) to estimate a coarse depth map as the condition of the diffusion model. Then, a modified diffusion refiner is used to obtain the fine-grained depth map of the scene. Extensive experiments are conducted on benchmark datasets to demonstrate the effectiveness of the proposed method.
Strengths: 1. The proposed method is well-motivated and easy to follow.
2. The proposed method is a plug-and-play module and easy to use in different backbone models.
Weaknesses: 1. The novelty of the proposed method is somewhat limited. The proposed method is basically a conditional diffusion refiner, which combines the zero-shot MDE such as depth anything and diffusion-based MDE such as Marigold. In my opinion, the predicted depth map with depth anything can provide a “good” initialization to the diffusion model. Although the global pre-alignment and local patch masking modules are developed in the proposed method, I still think it is an incremental contribution.
2. In the proposed local patch masking module, why can the patch masking strategy obtain more refined details in the depth map? How do you determine the parameter $\eta$ that controls the mask ratio? In the ablation study of the supplementary material, the range of the mask ratio is [0.05, 0.3]. Can we set the range to a bigger interval?
3. In the experiment section, different numbers of training samples are used for the proposed method. Are these samples real data or synthetic data? For Marigold, the training samples are synthetic data without real data. Therefore, I suggest the authors highlight the use of real or synthetic data. In addition, the authors could provide reasons why the proposed method is slightly inferior to existing methods on some datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness section. I hope to address the issuses in the rebuttal.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discussed the limitations in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. We kindly ask the Reviewer to read our **top-level global response** first. Our detailed responses to the comments in the weaknesses (denoted as W) section are listed below.
> **W-1: Novelty**
One of our key contributions is the proposed training strategies that combine the merits of feed-forward and diffusion-based MDE methods in an efficient manner. We agree that building a conditional diffusion model to combine two components is straightforward, but **properly leveraging advantages of both** is challenging. Although the depth map estimated by Depth Anything provides good initialization, directly building a diffusion refiner on top of it without our training strategies (denoted as Naive Conditioning) yields inferior results as shown below (metrics are AbsRel $\downarrow$ / $\delta1 \uparrow$).
|Avg. Rank||Methods||NYUv2||KITTI||ETH3D||ScanNet||DIODE|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|1.8||Depth Anything||4.3/**98.0**||8.0/94.6||6.2/98.0||**4.3**/**98.1**||26.0/**75.9**|
|2.7||Naive Conditioning||5.2/97.0||8.6/92.2||5.4/96.9||5.6/96.2||**22.4**/74.7|
|**1.2**||**BetterDepth**||**4.2**/**98.0**||**7.5**/**95.2**||**4.7**/**98.1**||**4.3**/**98.1**||22.6/75.5|
This is because the naive conditioning overfits the small training datasets and thus underutilizes the prior knowledge in the pre-trained MDE model, resulting in degraded zero-shot performance (Lines 276-279 of the main paper). With the proposed training strategies, BetterDepth learns to utilize the geometric prior in pre-trained models for zero-shot transfer and the image prior in diffusion models for detail refinement, efficiently achieving robust performance with fine-grained details (as discussed in the `contribution` section of the global response).
> **W-2: Local patch masking**
To achieve better details without disregarding the geometric prior learned in pre-trained MDE models, the key challenge is to ensure the depth conditioning strength and simultaneously enable learning detail refinement. Thus, we design the patch masking mechanism to:
- improve depth conditioning strength at local regions. By excluding significantly different patches, BetterDepth learns to follow the depth conditioning and better utilizes the geometric prior for robust estimation, which is important for zero-shot performance as the comparison shown in Section W-1 (Naive Conditioning v.s. BetterDepth).
- learn detail refinement within a reasonable range. With the filtered patches, BetterDepth learns detail refinement without overfitting the training data distribution, achieving better detail extraction while maintaining robust MDE performance. A visual example is provided in Fig. S1 of the attached PDF, where the model can learn to improve the detail of the basket (Fig. S1c) according to the depth label (Fig. S1b).
However, higher conditioning strength limits the capacity for detail refinement, and lower conditioning strength results in degraded performance, e.g., the naive conditioning model in W-1. Therefore, the threshold $\eta$ is used to balance these two contradicting properties. We conduct experiments with larger $\eta$ (0.5 and 1) and combine the results with Fig. A2 as follows (metrics are AbsRel $\downarrow$ / $\delta1 \uparrow$). An overly large $\eta$ often leads to worse performance as the geometric prior is not well utilized, and we choose the optimal $\eta=0.1$ in BetterDepth.
|$\eta$||NYUv2||KITTI|
|-|-|-|-|-|
|0.05||4.21/**98.06**||7.70/95.03|
|**0.1**||**4.18**/98.04||**7.47/95.22**|
|0.3||4.43/97.89||7.75/94.83|
|0.5||4.37/97.91||7.92/94.50|
|1||4.50/97.72||8.14/94.20|
> **W-3: Training samples and method performance**
All the training samples are synthetic data in BetterDepth (Lines 249-253 of the main paper). For the two smaller training sets, we randomly select 400 and 2K samples from the full 74K synthetic dataset (composed of Hypersim and Virtual KITTI, which is the same synthetic dataset used in Marigold), and our experiments (Tab. 2) show promising performance of BetterDepth even with such small-scale training sets.
The effectiveness of quantitative evaluation heavily relies on the quality of depth labels in the benchmark. However, due to the limitation of depth sensors, the commonly adopted test benchmarks often contain incomplete and noisy depth labels. For example, Fig. A14-A15 illustrate significantly incorrect annotations and noise in the ground truth of the DIODE dataset, and such label noise makes the reported metrics on a single dataset not fully reliable (similar discussions can be found in Tab. 2 and Sec. 6.1 of Depth Anything V2 [R4]). A more reliable evaluation is to compare performance across diverse datasets, so we additionally provide the average ranking over the five benchmarks in Tab. 2, where our BetterDepth achieves the overall best performance. In addition, we further validate the superiority of BetterDepth on fine-grained detail extraction, as discussed in the `detail evaluation` section of the global response.
---
Rebuttal Comment 1.1:
Title: Please discuss
Comment: Dear reviewer,
The discussion period is coming to a close soon. Please do your best to engage with the authors.
Thank you,
Your AC
---
Rebuttal Comment 1.2:
Comment: Thanks for the efforts on the rebuttal. The authors have addressed some of my concerns. Nonetheless, I still have some concerns about the contribution of the proposed method with its training strategy of introducing the depth prior into the diffusion model. Therefore, I keep my original rating.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer LNRC,
We kindly ask the Reviewer to provide specific remaining concerns about the contribution so we can respond with more details.
Best,
Authors | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers and area chairs for their valuable time and comments. We will incorporate all suggestions to improve the revised paper. After providing more results/analyses, we would like to give an overall response and re-emphasize the contribution and performance of BetterDepth.
> **Contribution**
BetterDepth aims to efficiently combine the beneficial characteristics of feed-forward and diffusion-based monocular depth estimation (MDE) methods to achieve robust MDE performance with fine-grained details. Although it might seem straightforward to gain better performance by combining two models, naive conditioning without our proposed training strategies only results in inferior performance, as shown in Tab. T1 below. Even with good depth maps from the pre-trained model as initialization, the naive conditioning model struggles to balance the contributions of different priors and does not yield an improvement. Thus, to efficiently achieve our goal, challenges still exist:
- **Performance Trade-off.** One solution to better utilize the initial depth map is to improve the conditioning strength. However, stronger depth conditioning limits the capacity for detail refinement, and lower conditioning strength results in degraded performance, e.g., naive conditioning in Tab. T1. Therefore, how to properly utilize the merits of different priors and balance the performance trade-off is non-trivial.
- **Resource Efficiency.** It might be possible to achieve improvements by training on diverse large-scale datasets with high-quality labels, but (i) obtaining high-quality labels for real datasets is difficult due to the imperfection of depth sensors, (ii) synthetic datasets offer high-quality labels but are costly to generate at scale, and (iii) training on large datasets is both time-consuming and resource-intensive.
To this end, we propose global pre-alignment and local patch masking to balance the performance trade-off, ensuring conditioning strength while enabling detail refinement. As a result, BetterDepth efficiently combines the strengths of feed-forward and diffusion-based MDE models, achieving robust performance and fine-grained details with minimal training effort.
**Table T1.** Comparisons with the naive conditioning model (AbsRel / $\delta1$).
|Avg. Rank||Methods||NYUv2||KITTI||ETH3D||ScanNet||DIODE|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|1.8||Depth Anything||4.3/**98.0**||8.0/94.6||6.2/98.0||**4.3**/**98.1**||26.0/**75.9**|
|2.7||Naive Conditioning||5.2/97.0||8.6/92.2||5.4/96.9||5.6/96.2||**22.4**/74.7|
|**1.2**||**BetterDepth**||**4.2**/**98.0**||**7.5**/**95.2**||**4.7**/**98.1**||**4.3**/**98.1**||22.6/75.5|
> **Zero-Shot Performance**
With the proposed architecture and training strategies, BetterDepth achieves state-of-the-art performance on the widely adopted zero-shot datasets as shown in Tab. T2 below, where the very recent approach Depth Anything V2 [R4] is also included for comparisons.
**Table T2.** Zero-Shot MDE performance (AbsRel / $\delta1$).
|Avg. Rank||Methods||NYUv2||KITTI||ETH3D||ScanNet||DIODE|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|3.7||Marigold||5.5/96.4||9.9/91.6||6.5/96.0||6.4/95.1||30.8/**77.3**|
|1.9||Depth Anything||4.3/**98.0**||8.0/94.6||6.2/98.0||4.3/**98.1**||26.0/75.9|
|2.6||Depth Anything V2||4.4/97.8||8.3/93.9||6.2/**98.2**||**4.2**/97.8||26.4/75.4|
|**1.4**||**BetterDepth**||**4.2**/**98.0**||**7.5**/**95.2**||**4.7**/98.1||4.3/**98.1**||**22.6**/75.5|
> **Detail Evaluation**
Following the suggestions of Reviewer Qdvk, we further provide a quantitative evaluation of fine-grained detail extraction. Since depth labels in the commonly adopted benchmarks are generally sparse or noisy, as discussed in [R4] and shown in Fig. A6-A15, which makes them less reliable for detail evaluation, we conduct experiments on the high-resolution RGB-D dataset Middlebury 2014 [R1]. Four additional edge-based metrics are employed to exclude the influence of non-edge regions: the completeness and accuracy of depth boundary errors (denoted as `DBE_comp` and `DBE_acc`) [R2] and the edge precision and edge recall (denoted as `EP` and `ER`) [R3]. As shown in Tab. T3 below, although the recent Depth Anything V2 achieves promising improvements in detail extraction by leveraging high-quality synthetic training data, our BetterDepth still exhibits better performance even with much less synthetic training data (595K in Depth Anything V2 v.s. 74K in BetterDepth), thanks to the iterative refinement of the diffusion model. In addition, BetterDepth also captures better details like the cat's hair in Fig. S5 of the attached PDF, validating its overall best performance.
**Table T3.** Evaluation of detail extraction.
|Avg. Rank||Method||AbsRel $\downarrow$||$\delta1$ $\uparrow$||DBE_comp $\downarrow$||DBE_acc $\downarrow$||EP (%) $\uparrow$||ER (%) $\uparrow$|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|3.7||Marigold||7.57||93.24||5.60||3.09||16.65||23.75|
|3.2||Depth Anything||3.14||99.44||6.35||2.66||24.73||16.12|
|2.2||Depth Anything V2||3.06||99.38||4.19||2.23||26.74||35.89|
|**1**||**BetterDepth**||**2.95**||**99.52**||**3.61**||**2.09**||**28.49**||**50.35**|
We hope we have addressed your concerns and kindly ask you to consider updating your rating based on our additional explanations and evaluations. Please don’t hesitate to let us know of any additional comments or questions.
**[Reference]**
[R1] High-Resolution Stereo Datasets with Subpixel-Accurate Ground Truth. GCPR 2014.
[R2] Evaluation of CNN-based Single-Image Depth Estimation Methods. ECCVW 2018.
[R3] Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps with Accurate Object Boundaries. WACV 2019.
[R4] Depth Anything V2. arXiv preprint arXiv:2406.09414, 2024.
[R5] Monocular depth estimation using diffusion models. arXiv preprint arXiv:2302.14816, 2023.
[R6] Evaluation of CNN-based Single-Image Depth Estimation Methods. ECCVW 2018.
[R7] SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite. CVPR 2015.
Pdf: /pdf/6125224df51c84c7384b5c5b6c971012ff5e52ec.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Understanding Emergent Abilities of Language Models from the Loss Perspective | Accept (poster) | Summary: This paper measures a range of language models' pretraining loss and their downstream performance, arguing that emerging capabilities are better measured by losses as opposed to previously proposed model size or compute (FLOPs). The paper also offers evidence against the argument that emergent abilities are a "mirage" when the task metrics are continuous.
Strengths: 1. Clean and comprehensive study on a range of tasks, different models and different model sizes, with a significant number of intermediate checkpoints.
1. Insightful results that build on top of prior literature: (1) loss > size and FLOPs, and (2) it's not just that downstream task metrics needed to be continuous. Line 226 also makes a good observation on a quirk of the Brier score.
Line 122 makes a thought-provoking claim that many researchers should seriously consider, especially when considering the inference cost of these models during architectural design.
> That is, by ignoring the color differences (model sizes), the data points of different models are indistinguishable… . This indicates that the model performance on downstream tasks largely correlates with the pre-training loss, regardless of the model size.
Weaknesses: 1. The authors define emergence as a discontinuity, although it seems possible to fit an exponential function (especially when the x-axis is FLOPs as in Figure 6), or a linear function for many tasks such as the first group in section 3.1. Although Figure 4 offers stronger evidence for discontinuity, it is not definitive, as only MMLU, GSM8K and C-Eval are measured (but I accept that these are reasonably sufficient for an academic paper).
1. The authors discuss why the harder nature of MMLU and GSM8K may be causing the discontinuity, but it would be useful to present a qualitative analysis of the "easier" questions that models "first" get right with just a bit more reduction in pretraining loss.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Is EM the same as perplexity/scoring eval?
1. Why use the training loss as opposed to validation/test loss?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: L130
> we find that the overall training loss is a good predictor of performance on both English and Chinese tasks,
Naturally, you could compute the per-corpus loss on the x-axis to strengthen the claim. For the final version, it'd be great to have a per-corpus loss analysis of what kinds of corpora are most predictive of downstream performance.
Not limitations, just typos:
Line 128: verifying the emergency of performance -> emergence
Line 218: ground probability -> did you mean ground truth probability?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For weakness 1, we add results on three tasks from BIG-Bench in the PDF file of the global rebuttal.
For question 1, when the language model directly predicts the answer, EM and perplexity give the same trend. However, with chain-of-thought reasoning, the language model first predicts the intermediate reasoning process and then gives the answer, so perplexity evaluation does not work in that setting.
For question 2, most of our pretraining corpus goes through fewer than one epoch during pretraining, and in early experiments we found that the training loss is consistent with the validation loss. Therefore, we use the training loss for simplicity. | Summary: This paper formulates the concept of emergence as a relationship between language modeling training loss and task performance, rather than in relation to model/data scale.
Strengths: The paper is clear. I appreciate the central message, as descriptions that relate the capabilities strictly to the scale of a language model overlook factors such as data quality and architecture. If you actually demonstrated that loss was a better predictor than model scale on specific capabilities, the paper would have a lot of value.
The paper is partly making the point of "agreement on the line" https://arxiv.org/abs/2206.13089 that in-distribution loss tends to be strongly correlated with out of distribution or specialized metrics. This point is worth making. The claim that we can identify emergence with respect to loss is more dubious, as the threshold-based measurements used are overly generous to claims of emergence in general.
Weaknesses: Even when different models have very similar pretraining loss, they may have extreme variation in other qualities, including specific benchmarks. This paper does not engage substantially with the existing literature on the connection or lack thereof between loss and specialized metrics; it nods to them in the literature review on "Relationship of Pre-training Loss and Task Performance" but does not clarify why their findings are so different from existing results.
>We argue that the perplexity of correct options is not the correct metric to evaluate the performance of multi-choice questions. The correct metric of multi-choice questions should reflect the ability of distinguishing correct options from incorrect options.
I looked into this appendix because I was intrigued as to why their results are so strongly at odds with Schaeffer et al. I completely disagree with this claim (which the paper provides no support for); if perplexity is the wrong metric, why measure loss at all? To the contrary, recent work including https://arxiv.org/pdf/2404.02418 and https://arxiv.org/pdf/2305.13264 continue to support the notion that claims of emergence should consider probabilities rather than exact matching, which obscures auxiliary task dependencies such as the capability of answering in multiple choice format. In general, our community now understands how much emergence can be attributed to metric thresholding effects, so you need much stronger evidence to argue against Schaeffer et al.
The core claim that one can predict performance on any given task just from the loss of a model would be a more useful finding if it applied across different architectures, not just different data and model scales. Although the paper considers both Llama and Pythia, they do not plot these different architectures on the same figures and it is therefore not clear whether loss is similarly predictive across architectures or whether it is only within an architecture, in which case there is no real argument for preferring it over scale as a basis of measurement. There is no excuse for continuing to present metric thresholding effects as evidence of meaningful emergence.
> That is, by ignoring the color differences (model sizes), the data points of different models are indistinguishable.
This simply doesn’t seem to be true. RACE in particular does seem to have a different slope for green compared to the other colors. For CLUESC, orange and blue seem to be nearly separable, although that could be random. Still, the claim does seem to hold for a bunch of these tasks, and it’s an interesting point to make, but I think you need to qualify it by actually comparing the slopes from a linear regression for each color.
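The slope comparison requested here is a one-liner per color with `np.polyfit`; a sketch on hypothetical (loss, accuracy) points for two model sizes (placeholder values, not the paper's data):

```python
import numpy as np

# Hypothetical (loss, accuracy) points for two model sizes ("colors").
pts = {
    "1.5B": (np.array([2.9, 2.7, 2.5, 2.3]), np.array([0.40, 0.46, 0.53, 0.59])),
    "6B":   (np.array([2.6, 2.4, 2.2, 2.0]), np.array([0.49, 0.58, 0.67, 0.75])),
}

slopes = {}
for name, (loss, acc) in pts.items():
    slope, intercept = np.polyfit(loss, acc, 1)  # degree-1 fit: acc ~ slope*loss + intercept
    slopes[name] = slope
    print(f"{name}: slope = {slope:.3f}")

# If the indistinguishability claim fails, the fitted slopes differ noticeably.
print("slope gap:", abs(slopes["1.5B"] - slopes["6B"]))
```

Reporting these per-color slopes (ideally with confidence intervals) would let the paper qualify the "indistinguishable" claim quantitatively.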
> an ability is emergent if it is not present in language models with higher pre-training loss, but is present in language models with lower pre-training loss.
Not clear what it means to claim any ability is not present and then is present. It seems that they mean that below a certain point you have random chance answers. So by definition, if you perform at random chance at any point, then they would say the ability is emergent regardless of whether there is a clear breakthrough. This is the definition of a thresholding effect.
Minor:
- Please use natbib with \citet when you have an inline citation.
- "emergency trends" should be emergent
- > until the pre-training loss decreases to about 2.2, after which the performance gradually climbs as the loss increases.
- Do you mean the loss decreases? Or am I missing something? Why would the loss increase after this point?
Technical Quality: 1
Clarity: 2
Questions for Authors: Could you discuss a little further why your findings are so at odds with existing work like Schaeffer et al.? As far are as I can tell, it is because you've decided to define emergence without accounting for straightforward metric thresholding effects.
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 3
Limitations: No obvious unlisted limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We want to argue that the reviewer misunderstands the difference between our work and Schaeffer et al. In fact, Schaeffer et al. do not use perplexity as the evaluation metric for multi-choice tasks. Instead, they propose to use the Brier Score, which we also evaluate in Figure 4. The main difference is that Schaeffer et al. only evaluate the final checkpoints of different models while we also evaluate the intermediate checkpoints. With more data points available, we show that emergence on some tasks exists even with continuous metrics like the Brier Score.
We are against evaluating multi-choice tasks with the perplexity of correct options because we think a good metric should reflect the ability to distinguish correct options from incorrect ones. However, this is not the main argument of the paper, and it is also not the reason why our findings differ from Schaeffer et al.
We prefer loss over compute/scale since it can better track the learning dynamics of intermediate checkpoints. Loss of intermediate checkpoints is influenced by many factors beyond compute/model scale, such as learning rate scheduling. We show the performance-vs-compute curves in Figure 6 and find that points from different models do not fall on the same curves on most tasks.
---
Rebuttal Comment 1.1:
Title: Clarification
Comment: Thank you for clarifying, I confused your arguments against Xia et al. (that perplexity is the wrong metric) and against Schaeffer et al. (that Brier score is the wrong metric). I think this would have been easier to keep straight if you used the natbib \citet in your discussion, but I acknowledge that I confused those two different arguments, although you did make both of them.
To be clear then, you claim that the same emergence point that Schaeffer et al. dismiss is actually a real point of emergence because the blue dots (1.5B) are flat whereas the orange dots (6B) have a positive slope. However, looking at each individual color in the Brier score plot makes it clear: for the 6B model, before the supposed phase transition, there is a lot more noise in the distribution of scores, but the trendline is not clearly flatter. In particular, it is entirely obvious that the 1.5B models overperform on the task relative to their loss when we compare them to 6B models with similar loss.
The reason for the emergence artifact in the thresholding functions is clearly a thresholding effect. This fact persists even if you look at loss on the X axis. I appreciate the creativity of plotting different models together with all their checkpoints, but the defense of emergence in this paper does not seem to hold, and that appears to be the framing given. The argument that we can use earlier checkpoints to plot scaling laws for many models simultaneously does not hold unless those laws all show similar slopes for different models, and they obviously don't or this paper would have shown different architectures together on the same plot instead of just different scales of the same architecture, which would make the different slopes more obvious.
I would raise my score if you could show that Llama and Pythia have similar slopes when plotted together and that they therefore support the idea that the 6B and 1.5B actually have the same slope, which they don't seem to, as the 6B simply increases in variance of performance below the threshold.
---
Reply to Comment 1.1.1:
Comment: For the improvement on Brier Score before the emergence threshold, we already analyze this in Line 223 on Page 8. We find that the distribution of correct answers is uniform over the four options in MMLU and C-Eval. A naive predictor that always gives uniform probability to all the options can achieve a Brier Score of 0.75, while another naive predictor that always gives all the probability to the first option has a Brier Score of 1.5. However, neither predictor learns the relationship between questions and correct options at all. Therefore, a decrease in Brier Score that remains above 0.75 does not represent an improvement in performance. In the plot we mark the Brier Score of random guessing (0.75) with black dashed lines, and obviously above the threshold the model's Brier Score is worse than random guessing. We can't say a model's ability on a task is improving while its performance is worse than random guessing.
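The two naive-predictor Brier scores quoted above (0.75 and 1.5) can be verified directly, assuming four options with the correct answer uniformly distributed over them:

```python
import numpy as np

n_options = 4

# Brier score summed over options: sum_i (p_i - y_i)^2, averaged over the
# (uniform) position of the correct answer.
def expected_brier(p):
    scores = []
    for correct in range(n_options):
        y = np.zeros(n_options)
        y[correct] = 1.0
        scores.append(np.sum((p - y) ** 2))
    return float(np.mean(scores))

# Predictor 1: uniform probability over all options.
p_uniform = np.full(n_options, 1 / n_options)
# Predictor 2: all probability on the first option.
p_first = np.zeros(n_options)
p_first[0] = 1.0

print(expected_brier(p_uniform))  # 0.75
print(expected_brier(p_first))    # 1.5
```

Both predictors ignore the question entirely, which is the rebuttal's point: a score moving from 1.5 down toward 0.75 need not reflect any task ability.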
We can't plot the curves of Llama and Pythia on the same plot because they are trained on different corpora. Pythia is trained on the Pile, of which webpages constitute 18%. Llama is trained on its own corpus, of which webpages constitute 67%. In other words, they do not have a common x-axis. Also, the intermediate checkpoints of Llama are not publicly accessible, and training models with different architectures is beyond our available computational resources. However, we already show that models with the same architecture and different sizes show similar trends. We believe this will be helpful for the community since most models adopt the Llama architecture with moderate modifications. | Summary: They demonstrate that 1) pre-training loss is generally predictive of downstream capabilities in language models, rather than model size or number of tokens used during pre-training; and 2) emergent capabilities can also be clearly described in terms of pre-training loss. They also demonstrate that using continuous metrics, they still observe emergence, countering findings from prior work. They conduct their analysis on a suite of standard English and Chinese evals, using a range of models that they pre-trained themselves. They also further validate their findings using the Llama and Pythia series of models.
Strengths: * The paper does a good job of correcting an extremely prominent (but largely incorrect) narrative/claim that the phenomenon of emergence in LLMs always completely disappears when a continuous metric is used in place of a discontinuous one. This paper demonstrates a few cases where even in the presence of a continuous metric, emergence is still observed. Correcting this misconception in the scientific discourse is important.
* The paper is overall well written and the experiments are very thorough, encompassing a range of models and tasks.
* They even went out of their way to ablate the effect of learning rate schedule on their findings in Section 2.4, this is extremely thorough work, and they should be commended for it.
I have a handful of concerns regarding the writing, as discussed in the weaknesses section, but on balance I think this paper should be accepted, on the condition that my concerns are fixed in the camera ready.
Weaknesses: * From the intro: "For example, LLaMA-13B with less compute [53] can outperform GPT-3 (175B) on MMLU [21]": Training for longer could be one explanation for this; another could be that LLaMA used a higher quality corpus than GPT-3 did. I think the data quality element is a little underrated. In general we should expect different pretraining datasets to potentially have different loss / different emergence points (even if the x-axis loss is on a held-out validation set and models have the same vocabulary). There is some discussion of this in the limitations section, but in general I think this point should be highlighted more in the paper. Concretely, the pretraining loss -> emergence point phenomenon is only guaranteed to be consistent for a series of models pretrained on the same data.
* The discussion of exact match in Section 2 kind of comes out of nowhere, and I'm not sure why it is discussed where it is. It would be great if there were more motivation for this in the paper.
* I understand why you did the ablation that you did in Section 2.4 (ablate the effect of lr schedule, as in Chinchilla), but in the paper this is not very clearly motivated and readers with less background knowledge might not understand why this ablation is important.
* “Note that there is one only exception at the early stage of LLaMA-65B. We can see that when the training loss is higher than 1.8, LLaMA-65B performs worse than smaller models with the same training loss.”: This could be because they used some exponential smoothing on either their loss or downstream performance plots. Exponential smoothing would perturb the earlier points more than other points (especially for curves that are fast changing or have a rapid change at the beginning), potentially leading to this effect. Moreover, did you smooth out the loss in some way when plotting loss versus downstream performance in the previous sections? If so, this would at least be good to note in the paper.
* Section 3.1 could use clearer discussion of explanations for why emergence occurs on the different tasks. I think grokking is a possible explanation, but what exactly grokking means (outside of the context of the simple algorithmic tasks presented in the original grokking paper) is a little vague/imprecise, and I also don't think this is the only explanation. In the case of gsm8k, emergence is, in my opinion, more likely due to the fact that the model has to get a sequence of reasoning steps correct to answer the question; this leads to an exponentially small probability (e.g. p(step_correct)^n_steps) of getting the full answer correct (some analysis of this in the paper could be cool). In the case of MMLU and C-Eval there actually is an existing paper which discovers an explanation, using interpretability techniques, for emergence on multiple choice tasks (https://arxiv.org/pdf/2307.09458).
* There's a handful of related works that aren't discussed in the paper and whose findings contradict those of this paper (I'll give you my reasoning for why your work might not contradict theirs, as a suggestion in parentheses, but you should include some discussion of this in the paper somewhere, either in the related work or elsewhere):
1) https://proceedings.mlr.press/v202/liu23ao.html (my understanding is that their theory only applies near the global optim, which is generally not the case with real-world language models)
2) https://arxiv.org/pdf/2109.10686 (see the paragraph "zooming in verses zooming out", your paper is more about the zoomed out setting)
3) Not strictly language modeling, but see Figure 3 of https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf (could be because they are doing probing so hidden-state size increases probe capacity causing differences with model size, but it's unclear if this is actually the case)
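The compounding-steps explanation for GSM8K above can be illustrated numerically: if an answer needs n reasoning steps, each independently correct with probability p, the full-answer accuracy p**n stays near zero until p is high and then rises sharply — an emergence-like curve produced by a smoothly improving per-step quantity. The step count below is a hypothetical choice:

```python
# Full-answer accuracy under n independent reasoning steps, each correct
# with probability p. A smooth increase in p yields a sharp rise in p**n.
n = 8  # hypothetical number of reasoning steps
for p in [0.5, 0.7, 0.8, 0.9, 0.95]:
    print(f"p = {p:.2f}  ->  P(all {n} steps correct) = {p**n:.4f}")
```

For n = 8, accuracy stays below 1% until p reaches roughly 0.55, then climbs quickly, which is the shape of the curves the reviewer describes.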
Technical Quality: 4
Clarity: 3
Questions for Authors: See the weaknesses for most of my questions/concerns.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: No major limitations. See the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We agree with the influence of data quality on performance besides compute. We will add the point in the introduction when making the comparison.
We thank the reviewer for appreciating the ablation of the learning rate schedule. We will make it clearer in the final version.
The exponential smoothing is one possible explanation for the outliers on LLaMA-65B. We will add this explanation to the paper. Also, we didn't use any smoothing in any of the plots in the paper.
We thank the reviewer for providing related works on explanations of emergent abilities. We will add these explanations to the paper. For the contradictory conclusions of previous works [1][2], we think they mainly study the pretraining-finetuning paradigm, in which inductive bias helps improve transferability. Instead, we study the pretraining-prompting paradigm without finetuning on specific tasks, which is more common in language model pretraining. We will add more discussion about this in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks, I appreciate the response. I think this is a good work and will keep my score. | Summary: The paper investigates the link between the pre-training loss of LLMs and their downstream performance on popular benchmarks. The authors train a series of models ranging from 300M to 32B parameters on English-Chinese datasets of different sizes and study how these models perform on TriviaQA, HellaSwag, RACE, WinoGrande, MMLU and GSM8K benchmarks and their Chinese counterparts, as a function of their pre-training loss. They observe a strong link between pre-training loss and benchmark performance that is not affected by model size and dataset size, from which they conclude that pre-training loss is the main indicator of downstream performance. On MMLU and GSM8K (and their Chinese counterparts), models exhibit an emergent behavior wrt. loss, i.e. their performance is close to random before a loss value of about 2.2 and steadily increases for lower loss values. This relationship seems to hold, even under continuous evaluation metrics.
Strengths: 1. Formulating downstream (emergent) abilities in LLMs in terms of pre-training loss is a valuable contribution, that can unify several different factors that are so far thought to contribute to emergence, such as pre-training compute, model size, and (test-)perplexity. This perspective also provides a better connection between scaling laws -- which typically describe the relationship between parameter count, dataset size, compute, and pre-training loss -- and emergent abilities, which mostly have been studied through the lens of parameter count and compute so far.
2. The paper presents an extensive evaluation over a large range of model sizes, dataset sizes, compute budgets, that show a strong connection between pre-training loss and downstream performance. The results are additionally validated with results and models proposed by prior work (Llama-1 and Pythia).
2. The paper is easy to understand and follow.
Weaknesses: 1. The paper seems to make two main claims: 1) that pre-training loss is the main predictor of downstream performance, and 2) that emergent abilities appear suddenly after models cross a certain loss-threshold. I find the evidence for 1) to be convincing, but I am somewhat less confident about 2). This is mostly because the paper only shows emergent abilities on a subset of the benchmarks they were originally proposed on (MMLU and GSM8K) in [1]. Notably, BigBench is absent. Including results on the tasks in BigBench that were shown to exhibit emergent behavior would be helpful to judge whether pre-training loss predicts the emergence threshold as well as pre-training compute or model size. Including these benchmarks would also help to put the paper into a better perspective wrt. subsequent work questioning the existence of emergent abilities [2], which also studies these benchmarks.
2. There are some discrepancies and unexplored links between the results reported in the paper and findings in prior work. To develop a better understanding of how the findings here relate to prior work, it would be helpful to include a discussion in the paper. See Questions 2. and 3. below for more details.
3. I find the paragraph in 3.1, line 197 rather speculative. Grokking refers to an improvement on the test-set, despite stagnation on the training set, whereas the models studied here seem to still be improving on the training set. I am not sure whether there is a link between grokking and the emergent behavior on the specific tasks here.
4. Minor: There are a number of typos and small writing issues, e.g.
- "has thus far less studied" (line 23), ", a highly over-parameterized models" (line 79), "coefficients how that" (line 126),...
- line 120: "climbs as the loss increases" should probably be "decreases".
- line 141: "On each line" seems to be missing a reference to Figure 2.
References
- [ 1] Emergent Abilities of Large Language Models, Wei et al., https://arxiv.org/abs/2206.07682
- [ 2] Are Emergent Abilities of Large Language Models a Mirage?, Schaeffer et al., https://arxiv.org/abs/2304.15004
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper claims that pre-training (test) loss is a good indicator of downstream performance. What is the composition and size of the test set that the loss is evaluated on? What properties (size, diversity, etc.) should such a test-set have in order to be predictive of downstream performance?
2. Prior work [1, 2] has found that emergent behaviors can become predictable with "higher resolution" benchmarks, i.e. with larger test sets or repeated sampling from the model. This is a dimension that the paper does not touch upon. Do the authors believe, that the studied emergent abilities would still appear only after models cross the particular loss threshold, i.e. be essentially 0 before, even with those higher resolution benchmarks?
3. In related work (line 254), the authors mention that some prior work [3] has observed a disconnect between pre-training loss and downstream performance, which stands in contrast to the claims made in the paper. It would be great if the authors could comment on more on the reasons for these discrepancies.
References
- [ 1] Are Emergent Abilities of Large Language Models a Mirage?, Schaeffer et al., https://arxiv.org/abs/2304.15004
- [ 2] Predicting Emergent Abilities with Infinite Resolution Evaluation, Hu et al., https://arxiv.org/abs/2310.03262
- [ 3] Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models, Liu et al., https://proceedings.mlr.press/v202/liu23ao.html
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper sufficiently discusses limitations, in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For weakness 1, the results on BIG-Bench are presented in the PDF of global rebuttal. The original emergent ability paper evaluates 4 tasks on BIG-Bench. The test set size of the figure-of-speech detection task is too small and the variance is too high. Therefore we evaluate the other three tasks.
For question 2: We already study the effect of continuous metrics in Section 3.2, in which we evaluate MMLU and C-Eval with two continuous metrics, one of which is the Brier Score used in Schaeffer et al. The results show that continuous metrics cannot eliminate the observed tipping point in the performance curves. Increasing the size of the test set of GSM8K is not straightforward. We will try repeated sampling from the model, but it takes some time. We think repeated sampling will not change the results since it only decreases the variance of the performance evaluation.
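The claim that repeated sampling only decreases variance can be sketched with a simple standard-error calculation; the accuracy value is a placeholder, and treating repeated generations as independent draws overstates the reduction for correlated samples:

```python
import math

# Rough standard error of an accuracy estimate when each of m questions is
# answered k times and results are averaged, treating generations as
# independent Bernoulli draws. p is a placeholder accuracy; 1319 is the
# GSM8K test-set size.
p, m = 0.3, 1319
for k in [1, 4, 16]:
    se = math.sqrt(p * (1 - p) / (m * k))
    print(f"k = {k:>2}: standard error ~ {se:.4f}")
```

The error shrinks like 1/sqrt(k) while the expected accuracy is unchanged, which is why repeated sampling tightens the curve without moving a genuine tipping point.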
For question 3: Liu et al. mainly analyze BERT-like models, which are pretrained on masked language modeling and finetuned and evaluated on supervised classification tasks, which we denote as "transfer learning setting" in the paper. Implicit bias in model sizes and training algorithms can change the transferability of pretrained knowledge. Our work focuses on GPT-like models, which are pretrained on autoregressive language modeling and evaluated on prompted tasks without finetuning. Since the pretraining and evaluation settings are more consistent, the transferability is less important.
For weakness 3, pretraining is generally considered as multi-task learning. It is possible that the model already stagnates on some tokens in the pretraining corpus, such as digit calculation, while the overall pre-training loss is still decreasing.
---
Rebuttal 2:
Comment: I thank the authors for answering my questions in the rebuttal, and for providing additional results on BIG-Bench. They help in assessing how the findings of this work relate to observations made in prior work and make me more confident about the results.
I continue to believe that the pre-training loss perspective is a valuable contribution for predicting emergent downstream abilities. However, I am still not quite sure that the downstream abilities emerge "suddenly". The results on BIG-Bench, particularly word unscramble and IPA transliterate, might actually be following a continuous trend, e.g. a sigmoid, and for modular arithmetic there seem to be too few data points below the "emergence threshold" to accurately judge whether there is a sudden transition. Some sort of higher-resolution sampling might also show more continuous trends here.
Therefore, I maintain my score.
An interesting suggestion for future work would be to compare checkpoints from different model families on the same test set (potentially with tokenizer-based normalization) to see whether the same loss values result in the same downstream performance across families. To do this it would be possible to use open-checkpoint models such as Pythia, LLM360 [1] and OLMo [2].
- [1] LLM360: Towards Fully Transparent Open-Source LLMs, Liu et al., https://arxiv.org/abs/2312.06550
- [2] OLMo: Accelerating the Science of Language Models, Groeneveld et al., https://arxiv.org/abs/2402.00838
---
Rebuttal Comment 2.1:
Comment: Thank you for the valuable feedback. We understand that on some tasks it is difficult to differentiate emergent abilities from exponential performance curves. We want to emphasize that the point of emergent abilities is to identify tasks on which it is difficult to predict the performance of larger models from the performance of smaller models. On word unscramble and IPA transliterate, before the tipping point, the curve is deceptively linear with a very small slope, rather than exponential. Therefore it is very difficult to predict the sudden increase in performance as the pretraining loss further decreases. This also distinguishes these tasks from those whose performance increases smoothly with an almost constant slope.
---
Rebuttal 3:
Comment: I thank the authors for their clarification. I still believe that some of the transitions may potentially be predictable from higher loss models.
I will maintain my score, but increase my confidence level, and would support acceptance of the paper. | Rebuttal 1:
Rebuttal: The results on BIG-Bench are presented in the PDF file. The original emergent ability paper evaluates 4 tasks on BIG-Bench. The test set of the figure-of-speech detection task is too small and the variance is too high. Therefore we evaluate the other three tasks. With pretraining loss as the x-axis, we can clearly observe the tipping point in performance (compare with Figure 2 in Wei et al.).
Pdf: /pdf/216e18f8e2124b0c13dd955ec341812191e54d18.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper investigates emergent abilities in language models from the perspective of pre-training loss, rather than model size or training compute. The authors challenge recent skepticism about emergent abilities by demonstrating that: (1) Models with the same pre-training loss, regardless of model and data sizes, exhibit similar performance on various downstream tasks. (2) Emergent abilities manifest when a model's pre-training loss falls below a specific threshold, before which performance remains at random guessing levels.
The study examines different metrics (including continuous ones) and proposes a new definition of emergent abilities based on pre-training loss. The authors argue that this perspective better represents the learning status of language models and provides a more precise characterization of when emergent abilities appear during training.
Strengths: - The paper offers a fresh approach to understanding emergent abilities by focusing on pre-training loss rather than model size or compute, providing valuable insights into the scaling behavior of language models.
- The paper builds upon existing scaling laws and provides a mathematical formulation (Equation 5) that explains the relationship between model size, pre-training loss, and emergent abilities.
- The study directly addresses recent challenges to the concept of emergent abilities, providing a nuanced perspective that reconciles conflicting observations in the field.
Weaknesses: - The authors acknowledge that they have not considered fundamentally different model architectures (e.g., routed Transformers) or non-Transformer architectures. This limitation may affect the generalizability of their findings.
- As noted in the limitations section, pre-training loss is affected by tokenizers and pre-training corpus distribution, making direct comparisons between models trained on different corpora challenging. While the authors suggest using normalized perplexity on a public validation set, this solution is not implemented in the current study.
- While the paper establishes a correlation between pre-training loss and emergent abilities, it does not provide a causal explanation for why certain abilities emerge at specific loss thresholds. A deeper investigation into the underlying mechanisms could strengthen the paper's contributions.
- While the authors discuss some contrary observations from previous studies, a more comprehensive comparison with existing literature on emergent abilities and their proposed explanations would enhance the paper's positioning within the field.
Technical Quality: 2
Clarity: 2
Questions for Authors: None.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We admit the limitations of the work in different model architectures, tokenizers, and pre-training corpus distribution. The main reason is that the work analyzes not only the final checkpoints of different models, but also the intermediate checkpoints, which are not publicly available for many open-source models. Training multiple models with different architectures, tokenizers, and pre-training corpus is beyond our available computational resources. We also want to argue that previous works on scaling laws often study a fixed choice of architectures, tokenizers, and pre-training corpus [1][2].
We also admit the lack of theoretical analysis in the paper. The main question the paper answers is whether the emergent abilities exist and how to track them, which is necessary for further study on emergent abilities. We believe the paper can inspire more theoretical work on the topic.
1. Kaplan et al. Scaling Laws for Neural Language Models.
2. Hoffmann et al. Training Compute-Optimal Large Language Models. | null | null | null | null | null | null |
Follow Hamiltonian Leader: An Efficient Energy-Guided Sampling Method | Reject | Summary: This paper presents a novel parallel sampling method named "Follow Hamiltonian Leader" (FHL) designed to address sampling challenges by leveraging zeroth-order information, particularly when first-order data is unreliable or unavailable. The method incorporates a leader-guiding mechanism to enhance the efficiency and effectiveness of the sampling process. Experimental results indicate that FHL significantly improves the exploration of target distributions and outperforms traditional sampling techniques, especially in scenarios involving corrupted gradients.
Strengths: 1. Innovative combination of zeroth and first-order information.
2. The effectiveness of the method is demonstrated in multiple task scenarios.
3. The theoretical analysis and proofs are sufficient.
Weaknesses: 1. Are there any quantitative experiments, such as evaluating FID and IS on the CIFAR-10 dataset? It would be more compelling to show whether a novel sampling method combined with generative models can be used on image datasets with more complex distributions.
2. Lack of OOD experiments in combination with EBMs or score-based models to validate the stability of the proposed method during sampling.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why is $E(x, \tilde x)$ defined in this way? And is there any relation between the extra elastic energy and local entropy [1]?
[1] P. Chaudhari, A. Choromanska, S. Soatto, Y. LeCun, C. Baldassi, C. Borgs, J. T. Chayes, L. Sagun, and R. Zecchina. Entropy-SGD: Biasing gradient descent into wide valleys. In ICLR, 2017.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. Limited exploration of integration with other advanced MCMC methods.
2. Lack of quantitative experiments to demonstrate the advantage of the proposed sampling method compared with other methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Rebuttal**
Thank you for reviewing our paper and providing insightful feedback. We appreciate your positive remarks and constructive criticism, which will help us improve our work. Below, we address your comments and questions:
1. **Quantitative Experiments on CIFAR-10:**
We understand your concern about the absence of quantitative experiments on complex image datasets. To address this, we have conducted experiments to evaluate the performance of FHL on the CIFAR-10 dataset, and the results are included in the overall response. These findings will be incorporated into the revised manuscript to highlight the method's applicability to complex distributions.
2. **Out-of-Distribution (OOD) Experiments with EBMs or Score-Based Models:**
We apologize for any confusion regarding out-of-distribution experiments in our context. Our method is fundamentally a sampling technique designed to accurately generate the target distribution, whereas out-of-distribution experiments are typically performed in classification or detection tasks.
However, as an alternative, we have conducted experiments in scenarios where sampling is challenging, such as sampling from poorly-conditioned functions, as shown in Figure 4. These experiments aim to validate the stability and robustness of FHL during sampling. We are glad to extend the results in the revised paper to provide a more comprehensive analysis of the method's capabilities.
**Discussions**
1. **Integration with Advanced MCMC Methods:**
We recognize that our algorithm could benefit from integration with other advanced MCMC methods. In fact, our method is orthogonal to these techniques and can be combined with methods such as LMC or parallel tempering. However, due to space and time constraints, our current version focuses solely on comparison and integration with HMC. We will extend our work to include integration with other advanced MCMC methods in the future.
2. **Quantitative Comparison with Other Methods:**
As suggested, we have included quantitative experimental results for CIFAR10 in the overall response.
**Question**
- **Entropy-SGD**
The key concept of Entropy-SGD is its incorporation of Bayesian optimization elements. Unlike other optimization methods such as Adam or Momentum SGD, Entropy-SGD optimizes over a local area (referred to as "local entropy") instead of searching for an isolated optimal point.
$$
F(x, \gamma) = \log \int_{x' \in \mathbb{R}^n} \exp\left( -f(x') - \frac{\gamma}{2} ||x - x'||_2^2 \right) dx'.
$$
The extra term $E(x, x') = \frac{\gamma}{2} ||x-x'||^2_2$ in Entropy-SGD controls the range over which it seeks valleys of specific widths. Another way to write the objective used in Entropy-SGD (see Equation 3 of [2]) is:
$$
f_\gamma(x) := -\log \left( G_\gamma * e^{-f(x)} \right); G_\gamma = (2\pi\gamma)^{-N/2} \exp\left( -\frac{||x||^2}{2\gamma} \right)
$$
The objective above shows that Entropy-SGD optimizes a modified, smoother energy landscape, which can be viewed as a convolution of the original energy landscape with a Gaussian kernel. This approach aids in identifying wider valleys.
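To make the smoothing concrete, the convolution can be estimated numerically: $(G_\gamma * e^{-f})(x) = \mathbb{E}_{x' \sim \mathcal{N}(x, \gamma)}[e^{-f(x')}]$, taking $G_\gamma$ to have variance $\gamma$ as in the formula above. The sketch below is our own illustration (the toy energy `f` is an arbitrary choice, not from either paper) and estimates $f_\gamma$ by Monte Carlo:

```python
import numpy as np

def f(x):
    # Toy 1-D energy: a quadratic bowl with cosine ripples on top.
    return 0.5 * x**2 + 2.0 * np.cos(3.0 * x)

def smoothed_energy(x, gamma, n_samples=50000, seed=0):
    # Monte Carlo estimate of f_gamma(x) = -log (G_gamma * exp(-f))(x):
    # the convolution equals E[exp(-f(x'))] for x' ~ N(x, gamma).
    rng = np.random.default_rng(seed)
    xs = x + np.sqrt(gamma) * rng.standard_normal(n_samples)
    return -np.log(np.mean(np.exp(-f(xs))))
```

As $\gamma \to 0$ the estimate recovers $f(x)$, while larger $\gamma$ flattens the ripples, which is the "wider valleys" effect described above.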
- **PARLE: The Parallel Variant of Entropy-SGD**
An extension of Entropy-SGD, known as PARLE [2], essentially parallelizes the Entropy-SGD algorithm. This method optimizes the following objective function, which reduces to Entropy-SGD when $n=1$:
$$
\arg \min_{x, x^1, \ldots, x^n} \sum_{a=1}^{n} \left[ f_\gamma(x^a) + \frac{1}{2\rho} \| x^a - x \|^2 \right].
$$
By deriving this expression, it becomes evident that PARLE encourages particles to sample around $\bar{x}$, where $\bar{x} = \frac{1}{n} \sum_{a=1}^{n} x^a$ is the average of the particles.
- **Connection and Comparison for FHL**
Both Entropy-SGD and PARLE are optimization methods that ultimately produce a final point instead of a Markov chain. In contrast, FHL is specifically designed as a sampling method. Furthermore, there are other similarities and differences between FHL and Entropy-SGD/PARLE:
- **(Similarity) Exploring the Landscape:** All three approaches explore the function landscape around an anchor point (Entropy-SGD: the average of the SGD trajectory of a single particle; PARLE: the average of multiple particles; FHL: the leader of the particles).
- **(Difference) Zeroth-Order Information:** FHL leverages zeroth-order information and uses the leader as the anchor point instead of the average of the particles. Averaging is commonly used in optimization techniques to reduce variance. However, in sampling tasks, noise is typically not introduced, and pulling towards the average may degrade convergence performance. Therefore, FHL can be expected to perform better.
> [1] P. Chaudhari, A. Choromanska, S. Soatto, Y. LeCun, C. Baldassi, C. Borgs, J. T. Chayes, L. Sagun, and R. Zecchina. Entropy-SGD: Biasing gradient descent into wide valleys. In ICLR, 2017.
>
> [2] Chaudhari, Pratik, et al. "Parle: parallelizing stochastic gradient descent." arXiv preprint arXiv:1707.00424 (2017).
---
Thank you again for your valuable feedback. We are confident that the additional results and planned revisions will address your concerns and enhance the overall quality and impact of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and additional experiments. I have also read the reviews from other reviewers as well as the corresponding reply. I have increased my score. | Summary: This paper introduces an interesting parallel sampling method that leverages zeroth-order information to address challenges in sampling from probability distributions, particularly when first-order data is unreliable or unavailable. The method incorporates a leader-guiding mechanism, enhancing efficiency and effectiveness by connecting multiple sampling instances through a selected leader. The proposed method, named Follow Hamiltonian Leader (FHL), extends the Hamiltonian Monte Carlo (HMC) framework by concurrently running multiple replicas at different energy levels and combining both zeroth and first-order information from various chains. Experimental results demonstrate that FHL significantly improves the exploration of target distributions and produces higher-quality outcomes compared to traditional sampling techniques, showing resilience against corrupted gradients and excelling in scenarios characterized by instability, metastability, and pseudo-stability.
Strengths: - The proposed Follow Hamiltonian Leader (FHL) method markedly improves the efficiency and effectiveness of sampling processes, significantly expediting the exploration of target distributions and producing superior quality outcomes compared to traditional sampling techniques.
- FHL demonstrates greater resilience against the detrimental impacts of corrupted gradients by incorporating zeroth-order information. This robustness makes the method particularly valuable in scenarios where first-order information is compromised, ensuring more reliable and accurate sampling.
Weaknesses: - The proposed FHL method involves intricate modifications to the traditional Hamiltonian Monte Carlo framework, such as the leader-guiding mechanism and elastic leapfrog technique, which may increase the complexity of implementation and require significant computational resources.
- The effectiveness of the FHL method heavily relies on the appropriate selection of the leader particle. If the leader is not accurately chosen, it could lead to suboptimal sampling performance, potentially compromising the overall efficiency and accuracy of the method.
- While the paper presents experimental results to demonstrate the efficacy of the FHL method, there is a lack of in-depth theoretical analysis to rigorously establish the convergence properties and performance guarantees of the proposed approach.
- The method’s scalability to high-dimensional problems or extremely large datasets is not thoroughly addressed. The parallel sampling approach may encounter challenges in maintaining efficiency and effectiveness as the dimensionality and size of the data increase.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses above.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have not adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and for highlighting both the strengths and areas for improvement in our paper.
**Explanation**
For most sampling algorithms, the objective is to design a proposal distribution $Q(x' \mid x)$ that generates a new sample $x'$ from an existing sample $x$, with the goal of having $x'$ fall into a region of high probability density. Provided the sampling algorithm satisfies detailed balance and ergodicity, it typically converges to the correct distribution.
Therefore, designing an effective sampling method generally involves improving the proposal at each step. FHL introduces a bias for each particle, guiding them towards more accurate results, specifically aiming for a lower value of the objective function $U(x)$, which aligns with common optimization tasks. There are two potential scenarios where our approach can be beneficial:
1. **Missing Gradient:** When the gradient is unavailable, the leader can be used as a reference point. This concept can be illustrated with a convex function. Consider a convex function $f$, where $x$ represents our particle and $y$ represents its leader, so that $f(y) < f(x)$. Then $f((1-\rho) \cdot x + \rho \cdot y) \le (1-\rho)f(x) + \rho f(y) < f(x)$ for any $0 < \rho < 1$, so $\rho (y-x)$ is a loss-descent step. Therefore, even when the gradient vanishes, our method still ensures a decrease in loss.
2. **Non-Optimal Gradient Direction:** As illustrated in Figure 1, the leader's pulling bias can improve the convergence direction when the gradient descent direction and the Newton descent direction are not aligned. It can be demonstrated that as long as the leader lies within the cone $C = \text{cone}(d_N(x), \theta_x)$, the descent direction could be improved. Here, $d_N(x)$ is the Newton descent direction (red arrow in Figure 1), $d_G(x)$ is the gradient descent direction (blue arrow in Figure 1), and $\theta_x = \theta(d_G(x), d_N(x))$.
We hope this explanation clarifies our contributions and provides a better understanding of the potential of the FHL method.
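The convexity argument in point 1 can be checked numerically. In this hypothetical sketch, the quadratic `f`, the vectors `x` and `y`, and the step size `rho` are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def f(x):
    # A convex quadratic standing in for the potential U(x).
    return 0.5 * np.dot(x, x)

x = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # a follower particle
y = np.array([0.1, 0.2, -0.1, 0.0, 0.3])    # its leader, with f(y) < f(x)
assert f(y) < f(x)

rho = 0.3
x_new = (1 - rho) * x + rho * y  # gradient-free step of size rho * (y - x)
# Convexity: f((1-rho)*x + rho*y) <= (1-rho)*f(x) + rho*f(y) < f(x)
assert f(x_new) < f(x)
```

The final assertion holds for any $0 < \rho < 1$ whenever $f(y) < f(x)$, which is exactly the gradient-free descent property claimed above.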
**Rebuttal**
We have carefully considered your feedback and address your points below:
1. **Complexity and Computational Resources:**
We acknowledge that the FHL method introduces modifications that may increase complexity. However, only one additional hyperparameter, the pulling strength $\lambda$, is specifically introduced, while the number of particles per group can be determined based on available computational resources. FHL is designed for large-scale parallel sampling, supported by two key factors:
1. The computation can be executed in parallel.
2. The communication cost scales ***logarithmically***.
As the number of particles increases, the total running time is primarily influenced by the communication cost, making FHL highly scalable with the number of particles. Additionally, techniques like lazy communication can be employed to further reduce communication overhead.
2. **Selection of the Leader Particle:**
The leader selection is indeed crucial to the method's performance. However, unlike stochastic optimization, most sampling algorithms require that the zeroth-order information be correct. If this assumption fails, these methods would become ineffective, as the key Metropolis-Hastings step [1] relies on the zeroth-order information to satisfy the detailed balance condition.
Furthermore, even with suboptimal leader selection, although the convergence rate might be affected, the method should still be able to converge to the correct distribution, as long as the algorithm satisfies the detailed balance condition.
Even if we are forced to sample from a biased function $\psi(x) = U(x) + \frac{\lambda}{2}||x - z||^2$ instead of the true function $U(x)$, the error can be bounded in some sense. We have provided a theoretical analysis of this in our appendix, Section C.
> [1] Chib, Siddhartha; Greenberg, Edward (1995). "Understanding the Metropolis–Hastings Algorithm". The American Statistician, 49(4), 327–335.
3. **Lack of Theoretical Analysis:**
We have included a theoretical analysis in the overall response, demonstrating that our algorithm has the correct distribution (to sample from) as its invariant distribution.
4. **Scalability Concerns:**
We have included a large-scale experiment in the overall response, demonstrating that FHL can generate biomolecule conformations despite the high number of degrees of freedom and numerous energy landscape barriers. This sampling experiment is not only extensive but also highly significant for real-world simulations of biomolecules.
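For context on the Metropolis-Hastings step cited in point 2, here is a generic textbook sketch (not the paper's implementation) showing that the accept/reject decision relies only on zeroth-order energy values:

```python
import numpy as np

def mh_step(x, U, rng, step=0.5):
    # One Metropolis-Hastings step with a symmetric Gaussian proposal.
    # The acceptance test uses only the energies U(x) and U(x'); a corrupted
    # energy function would therefore break the detailed balance condition.
    x_prop = x + step * rng.standard_normal(x.shape)
    log_accept = U(x) - U(x_prop)  # log pi(x')/pi(x) for pi(x) proportional to exp(-U(x))
    return x_prop if np.log(rng.uniform()) < log_accept else x

# Example: sampling a 1-D standard Gaussian, U(x) = x^2 / 2.
U = lambda x: 0.5 * float(np.sum(x * x))
rng = np.random.default_rng(0)
x = np.zeros(1)
samples = []
for _ in range(20000):
    x = mh_step(x, U, rng)
    samples.append(float(x[0]))
```

The empirical mean and standard deviation of `samples` should approach 0 and 1, respectively.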
---
Thank you once again for your valuable feedback and insights. We believe the planned revisions and additional analyses will address your concerns and enhance the paper's quality. We appreciate your assessment and are committed to addressing the concerns raised to improve the paper's overall quality and impact.
---
Rebuttal 2:
Comment: Dear Reviewer N2ad,
Thank you for your detailed review and valuable feedback on our paper. We appreciate the opportunity to address your concerns and provide further clarification on our rebuttal.
Addressing Your Concerns:
* Complexity and Computational Resources: FHL introduces minimal complexity with only one additional hyperparameter. The method is designed for parallel execution, making it scalable as communication costs scale logarithmically. Techniques like lazy communication further reduce overhead.
* Selection of the Leader Particle: While leader selection is crucial, our method remains effective as long as it adheres to the detailed balance condition. Even with suboptimal selection, convergence to the correct distribution is achievable.
* Lack of Theoretical Analysis: We have included a theoretical analysis in our appendix and overall summary (see pdf), demonstrating the correctness of our algorithm's invariant distribution.
* Scalability Concerns: Our large-scale experiments showcase FHL's capability to generate biomolecule conformations effectively, highlighting its significance for real-world applications.
We are committed to improving our paper based on your insights and believe that the planned revisions will address your concerns. Thank you again for your valuable feedback.
Best regards,
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer N2ad,
We kindly request your attention to our rebuttal and the experiments we have included. We value your insights and would appreciate your feedback at your earliest convenience.
Receiving your comments sooner rather than later will allow us ample time to make necessary revisions and address any further concerns you may have. We are eager to incorporate your valuable input to enhance the quality of our paper.
Thank you for your understanding and cooperation.
Best regards,
Authors | Summary: This work proposes to incorporate the energy $U$ into the gradient-based sampling techniques. In particular, it proposes to choose the lowest energy particle as the leader and then add an extra elastic tension between the leader and followers in the Hamiltonian Monte Carlo method.
Strengths: The idea is simple and clear, the toy examples are easy to understand and demonstrate the benefit of the proposed method well. In addition, the authors conduct experiments for each of the three challenging sampling scenarios identified by the authors.
Weaknesses: It might be worth including the overhead of the proposed method, e.g., how much slower the algorithm is per iteration compared to HMC.
The tension coefficient $\lambda$ is critical; setting it to 0 recovers the baseline. But I did not find an ablation over $\lambda$; is it hard to choose? From my understanding, if you set $\lambda$ very large it might recover something like gradient descent, and the sampling will collapse.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is $\lambda$ determined per experiment? Is it possible to include an ablation over it for the three scenarios at least?
2. In Figure 7, it is unclear to me which method is better. It would be better to provide some quantitative measurement like FID or IS scores.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for providing constructive feedback. We appreciate your comments and have addressed your points and questions below:
1. **Simplicity and Clarity:**
We are pleased that you found our approach simple and clear. The goal was to design a method that effectively incorporates energy information into gradient-based sampling techniques while remaining intuitive. We believe this clarity helps in demonstrating the benefits of our approach through the examples provided.
2. **Overhead and Computational Cost:**
Thank you for pointing out the need to include a discussion of the overhead of our method. We have added a discussion of both communication and computation costs in the overall response.
3. **Tension Coefficient Analysis:**
The tension coefficient is indeed critical to our method, and we apologize for not including an ablation study on this parameter. We have now provided a table of the values we explored and a brief discussion in the overall response.
4. **Determination of the Tension Coefficient:**
The tension coefficient is determined empirically based on preliminary experiments. We agree that providing an ablation study for this coefficient across the three scenarios would strengthen our findings, and we are currently conducting this analysis. The results will be included in the revised manuscript.
5. **Clarity in Figure 7:**
Thank you for your suggestion regarding Figure 7. We acknowledge that the current presentation could be clearer, and we have included quantitative metrics in the overall response.
#### Limitations
We acknowledge that the limitations section was brief. We will expand this section to discuss the scenarios where our method may face challenges, as well as ethical considerations and potential applications.
---
Thank you once again for your valuable feedback and constructive suggestions. We are confident that the revisions and additional analyses will address your concerns and improve the paper.
---
Rebuttal Comment 1.1:
Title: Reply to Authors Response
Comment: I thank the authors for their clarification and additional quantitative evaluations. I look forward to the ablation studies on the tension coefficient. I will keep my original rating.
---
Rebuttal 2:
Comment: Thank you for your feedback and for acknowledging our additional quantitative evaluations. We appreciate your interest in the ablation studies on the tension coefficient.
As mentioned in the third paragraph of the "3. Sensitivity to Hyperparameters" section of the overall summary above, we have included a table displaying the hyperparameters we explored. Additionally, we have conducted an ablation study to address your concerns.
---
### Ablation Study
Due to time constraints, we will briefly discuss the selection of $\lambda$; for simplicity, we consider updating the momentum directly by $- \eta \nabla f(x) + \lambda (z-x)$, i.e., pulling each particle towards the leader $z$. Notice that as $\lambda \rightarrow 0$, FHL performs similarly to the vanilla HMC method. As mentioned in the overall summary, determining an optimal value for $\lambda$ is challenging because it depends on the interplay of various hyperparameters and the characteristics of the sampled function. Thus, we will consider the case where $- \eta \nabla f(x) + \lambda (z-x)$ is a better choice than $- \eta \nabla f(x)$.
### Observation
We have two key observations from both theoretical and experimental results:
- Setting $\lambda$ to a small value may enhance performance.
- Increasing the number of particles in the group, $n$, is likely to improve performance.
**Theoretical Results**
It is well known that the Newton method typically converges faster than the gradient method (the Newton method exhibits quadratic convergence near the optimal solution, while the gradient method has a linear convergence rate for strongly convex functions). The Newton method suggests a direction given by $d_N(x) = -(\nabla^2 f(x) + \epsilon I)^{-1}\nabla f(x)$, where $\epsilon$ is a small value added to prevent inverting a singular matrix. Hence, we consider it desirable to have search directions that are close to $d_N$.
Let $\theta(u, v)$ denote the angle between vectors $u$ and $v$, and let $\cos(u,v) = \frac{u^T v}{||u|| \cdot ||v||}$. Define the leading direction with respect to a leader $z$ as $d_{\lambda}(x;z) = -\eta \nabla f(x) + \lambda (z-x)$, consistent with the pulling term above. Consider the positive definite quadratic function $f(x) = \frac{1}{2} x^T A x$, which results in $d_G(x) = -\eta \nabla f(x) = -\eta A x$ and $d_N(x) = -x$. For the upcoming propositions, we define a cone with axis $d$ and angle $\theta_c$ as $\text{cone}(d, \theta_c) = \{ x : x^T d \geq 0 \text{ and } \theta(x,d) \leq \theta_c \}$. Additionally, we define $\theta_c(x) = \theta(d_N(x), d_G(x))$.
**Proposition 1**: If $\lambda_1 > \lambda_2$ and $\theta(d_{\lambda_1}(x;z), d_N(x)) < \theta(d_G(x), d_N(x))$, then $\theta(d_{\lambda_2}(x;z), d_N(x)) < \theta(d_G(x), d_N(x))$. In particular, if $z \in C$ and $0 < \lambda < 1$, then $d_{\lambda}(x; z) \in C$.
**Proposition 2**: Define the outward vector for a point $x$ as $N_x = d_N - \frac{\langle d_N,\ d_G\rangle}{||d_G||}\cdot \frac{d_G}{||d_G||}$. If $z$ satisfies $N_x^T (z-x) > 0$, then for sufficiently small positive $\lambda$, $d_\lambda (x; z) \in \text{cone}(d_N(x), \theta_c(x))$.
**Claim 3**: There exists a region that consistently improves performance. If each particle within a group of size $n$ converges to the distribution $\pi(x) = \frac{1}{Z} e^{-\frac{1}{2}x^T A x}$ (where $Z$ is a normalizing factor), then as the number of particles $n$ increases, the probability of finding an improving leader for an arbitrary particle $\tilde{x}$ increases. Specifically, this probability is given by
$1 - \left(1 - \frac{1}{Z} \int_{\{ z \mid f(z) < c(\tilde{x}) \}} e^{-\frac{1}{2}z^T A z} \ dz\right)^n,$
where
$c(\tilde{x}) = \frac{|| N_{ \tilde{x} } ||_2^4} {2 \cdot v(\tilde{x})}$
and
$v(\tilde{x}) = N^T_{\tilde{x}} A^{-1} N_{\tilde{x}}.$
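Proposition 2 can also be sanity-checked numerically. In the sketch below, all concrete values ($A$, $\eta$, $x$, $\lambda$) are our own arbitrary choices for illustration, not from the experiments:

```python
import numpy as np

cos = lambda u, v: float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Ill-conditioned 2-D quadratic f(x) = 0.5 x^T A x with A = diag(1, 10).
A = np.diag([1.0, 10.0])
eta = 0.05
x = np.array([3.0, 1.0])
d_G = -eta * A @ x   # gradient-descent direction
d_N = -x             # Newton direction for this quadratic
# Outward vector N_x: component of d_N orthogonal to d_G.
N_x = d_N - (d_N @ d_G) / (d_G @ d_G) * d_G
z = x + N_x          # a leader satisfying N_x^T (z - x) > 0
lam = 0.01
d_lam = d_G + lam * (z - x)  # leader-pulled direction, small lambda
# Proposition 2: the pulled direction is closer to the Newton direction.
assert cos(d_lam, d_N) > cos(d_G, d_N)
```

Here a larger cosine means a smaller angle to $d_N$, i.e., an improved descent direction.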
**Experimental Results**
We initialize $A$ as a positive-definite diagonal matrix with diagonal elements drawn uniformly from $(0.2, 20)$. The dimension of $A$ is $2000$, and we run FHL for $2000$ iterations, scaling $\lambda$ by the number of particles (using $\lambda_n = \lambda / n$ in place of $\lambda$). We report the leader's function value $f(z)$ for different values of $\lambda$ and $n$.
| | $\lambda = 0.0$ (HMC) | $\lambda = 0.01$ | $\lambda = 0.02$ | $\lambda = 0.04$ | $\lambda = 0.08$ |
| --------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- |
| $n = 2$ | 406.7 | **397.7** | 398.4 | 404.8 | 411.8 |
| $n = 4$ | 399.0 | 389.5 | **384.2** | 384.9 | 395.5 |
| $n =8$ | 397.7 | 391.5 | 386.7 | 380.7 | **379.6** |
| $n=16$ | 395.1 | 391.5 | 388.0 | 382.8 | **376.0** |
The experimental results confirm the consistency with our theoretical observations.
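For illustration, a miniature, momentum-free version of this experiment can be sketched as follows. This is our own hedged reconstruction with a smaller dimension and arbitrary step sizes, not the authors' exact setup; the pull towards the leader is written as $(\lambda/n)(z - x)$:

```python
import numpy as np

def leader_descent(lmbda, n=8, d=50, iters=500, eta=0.02, seed=0):
    # n particles descend f(x) = 0.5 x^T A x, with A diagonal positive definite,
    # via x <- x - eta * grad f(x) + (lmbda / n) * (z - x), where z is the
    # current lowest-energy particle (the leader). Returns the final min energy.
    rng = np.random.default_rng(seed)
    diag = rng.uniform(0.2, 20.0, size=d)
    f = lambda v: 0.5 * float(np.sum(diag * v * v))
    X = rng.standard_normal((n, d))
    for _ in range(iters):
        z = X[np.argmin([f(v) for v in X])].copy()  # leader of the group
        X = X - eta * (diag * X) + (lmbda / n) * (z - X)
    return min(f(v) for v in X)
```

Since the leader's own pull term vanishes, its update is a plain gradient step, so the group's minimum energy decreases monotonically; the interesting comparison, as in the table above, is how much the pull helps relative to $\lambda = 0$.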
---
Please let us know if you have further questions or need additional details.
---
Rebuttal Comment 2.1:
Comment: - *Proof of Proposition 1.* This can be easily demonstrated by noting that $d_{\lambda_1}(x;z)$ lies within the cone $C$ formed by $\text{cone}(d_N(x), \theta_c(x))$, with $-\eta \nabla f(x)$ positioned on the boundary of $C$. Due to the convexity of $C$ and the fact that the point $d_{\lambda_2}(x;z)$ lies on the line segment connecting $d_{\lambda_1}(x;z)$ and $-\eta \nabla f(x)$, we notice that $d_{\lambda_2}(x;z)$ lies in the cone as well.
- *Proof of Proposition 2.* We first define $\delta = z -x$ and $g(\lambda) = \cos(\theta(d_G + \lambda \delta, d_N)) = \frac{d_N^T (d_G+\lambda \delta)}{||d_N||\ ||d_G + \lambda \delta||}$. Then take the derivative w.r.t to $\lambda$
$$
\nabla g(\lambda) = \frac{d_N^T \delta}{||d_N||\ ||d_G + \lambda \delta||} - \frac{d_N^T(d_G+\lambda \delta)}{2 ||d_N||} [(d_G + \lambda \delta)^T (d_G + \lambda \delta)]^{-3/2} \cdot 2(d_G + \lambda \delta)^T \delta = \frac{d_N^T \delta ||d_G + \lambda \delta||^2 - d_N^T (d_G+\lambda \delta) (d_G + \lambda \delta)^T \delta}{||d_N||\ ||d_G + \lambda \delta||^3}
$$
Notice that
$$
\nabla g(\lambda)|_{\lambda = 0} = \frac{d_N^T \delta ||d_G||^2 - d_N^T d_G d_G^T \delta}{||d_N||\ ||d_G||^3} = \frac{1}{||d_N||\ ||d_G||} [d_N - \frac{<d_N,\ d_G>}{||d_G||} \cdot \frac{d_G}{||d_G||}]^T \delta > 0
$$
From Proposition 2, it can also be observed that at least half of the points in $E = \{z \mid f(z) < f(x)\}$ contribute to improving convergence (to illustrate this, consider a hyperplane defined by $H = \{h \mid N_x^T h = N_x^T (z - x)\}$ that intersects an ellipsoid given by $\{z \mid z^T A z = f(x)\}$). In this case, the volume of $E \cap H$ is greater than half the volume of $E$.
- *Proof of Claim 3.* To demonstrate this, we seek values of $\alpha$ and $y$ such that $\alpha A y = N_x$ and $y^T N_x = ||N_x||_2^2$. This gives us $\alpha = \frac{N_x^T A^{-1} N_x}{||N_x||_2^2}$,
leading to the point $y = \frac{1}{\alpha} A^{-1} N_x$. The function value at $y$ is $f(y) = \frac{1}{2}y^T A y = \frac{1}{2}\alpha^{-2} N_x^T A^{-1} N_x = \frac{||N_x||_2^4}{2 \cdot N_x^T A^{-1} N_x}$.
We can assert that for all $z \in \{z \mid f(z) < f(y)\}$, the direction of convergence can be improved by taking a step $d_{\lambda}(x;z)$ instead of $d_G(x)$ when $\lambda$ is relatively small. Consequently, for an arbitrary point $\tilde{x}$ within a group of $n$ particles, the probability of finding at least one improving leader is $1 - \left(1 - \frac{1}{Z} \int_{z \in \{z \mid f(z) < f(y)\}} e^{-\frac{1}{2}z^T A z} \, dz\right)^n$.
The authors identify several challenging scenarios where relying solely on gradients can be problematic - cases of instability, metastability, and pseudo-stability. They argue that incorporating energy values can help mitigate issues in these situations and improve sampling efficiency and quality.
Overall, the core idea of leveraging zeroth-order information in addition to gradients is quite novel and the FHL algorithm is an elegant way to implement this for improving sampling efficiency and quality. The paper is well-motivated, the method is clearly explained, and the empirical results are compelling.
Strengths: 1. Novel idea of incorporating zeroth-order energy information into sampling algorithms like HMC, which typically only use gradients. This can help address issues like instability and metastability.
2. The Follow Hamiltonian Leader (FHL) algorithm is an elegant way to exchange both energy and gradient information across parallel sampling chains in a principled manner.
3. Thorough experimental evaluation across synthetic examples illustrating the identified challenging scenarios of instability, metastability, and pseudo-stability.
4. Promising results showing improved sampling quality over baselines for energy-based generative models on real datasets like CLEVR.
5. Clear motivation and well-explained methodology.
Weaknesses: 1. It would be better to explore the sensitivity to key hyperparameters, such as the number of parallel sampling chains.
2. A discussion of computational cost/overhead compared to baseline sampling methods is missing from the manuscript.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. While the empirical results are compelling, can you provide any theoretical analysis or guarantees about the convergence properties, sampling quality, or stationary distribution of the FHL algorithm? Even approximate bounds or insights would strengthen the theoretical grounding.
2. What are the main computational and memory overheads introduced by running parallel chains in FHL compared to standard HMC? Is there a limit on scalability to high dimensions or is there a way to reduce overhead through approximations?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Based on the provided paper, the authors do not appear to have explicitly discussed the limitations or potential negative societal impacts of their work. The paper is primarily focused on presenting the technical details of the proposed Follow Hamiltonian Leader (FHL) sampling algorithm and its empirical evaluation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and detailed comments on our paper. We appreciate your feedback and are grateful for the opportunity to clarify some aspects of our work. Below, we address your comments and questions:
1. **Novelty of Zeroth-Order Energy Information**
We are glad you found the incorporation of zeroth-order energy information into sampling algorithms a novel and promising idea. We believe this approach offers a fresh perspective on improving sampling stability and efficiency, particularly in challenging scenarios.
2. **Sensitivity to Key Hyperparameters**
We agree that exploring the sensitivity to key hyperparameters, such as the number of parallel sampling chains, is essential for a comprehensive understanding of the FHL algorithm. We have now provided a table of the values we explored and a brief discussion in the overall response.
3. **Discussion of Computational Cost/Overhead for Parallel Sampling**
We greatly appreciate your recognition of the elegance of the FHL algorithm across parallel sampling chains. Our aim was to develop a principled approach that effectively leverages both types of information to enhance sampling performance. We would like to point out that our method only broadcasts the leader's parameters, making it communication-efficient.
Furthermore, our method is compatible with advanced techniques like parallel tempering, as they are orthogonal approaches. We are also exploring methods to minimize overhead, such as dimensionality reduction and lazy communication.
4. **Theoretical Analysis and Quantitative Results**
Thank you for requesting a theoretical analysis of the convergence properties and sampling quality of the FHL algorithm. We have included convergence guarantees comparable to those offered by most sampling methods. Additionally, we provide further quantitative results for our CIFAR10 experiment (Section 5.2 in the paper).
5. **Large-scale Experiment**
Scalability to high-dimensional spaces is an important aspect of our algorithm. We have included a large-scale experiment in the overall response, showing that FHL can generate biomolecule conformations despite the high number of degrees of freedom and many energy landscape barriers. This sampling experiment is not only extensive but also highly significant for real-world biomolecular simulations.
#### Limitations
- **Discussion on Limitations and Societal Impacts**
We acknowledge that our paper did not explicitly discuss limitations or potential negative societal impacts. In the revised version, we will include a dedicated section addressing these concerns.
---
We sincerely thank you for your constructive feedback, which has provided valuable insights into areas for improvement. We are committed to addressing these points in our revised manuscript and believe these enhancements will strengthen our work. | Rebuttal 1:
Rebuttal: We thank all the reviewers and provide a comprehensive rebuttal to address common questions raised by several reviewers regarding the proposed FHL algorithm. Here is a brief summary:
1. Theoretical Guarantee
2. Quantitative Evaluation
3. Sensitivity to Hyperparameters
4. Parallelizability
5. Large-Scale Experiment
### 1. Theoretical Guarantee
Given the limited time available to respond to the reviewers, we have provided a sketch of the proof for the basic invariant distribution preservation property (Fixed Point Equation).
**Theorem 1:** Assume that the density $\pi$ has full support on the state space. If there is always a unique leader in the selection steps of Algorithm 1, then Algorithm 2 preserves the invariance of the distribution $\pi$, i.e.,
$$
\pi(ds') = \int p(ds' | s) \pi(s) ds.
$$
### 2. Quantitative Evaluation
First, we want to clarify that our paper primarily focuses on sampling techniques, with generative models being just one potential application of our work. Since the FID and IS scores are mainly influenced by the quality of the generative model itself, our primary interest is in how FHL samples from a given function, even if the model is poorly trained. Therefore, we present quantitative results based on the critic provided by the model itself from our experiments on CIFAR-10.
We used the pretrained model provided in [1] and found that directly sampling by $f_\theta(x)[y]$ produces more robust results ($f_\theta$ is the neural network, $x$ is a sample, and $y$ is the class label). We report both the *value* $f_\theta(x)[y]$ and the *percentage* $\frac{\exp(f_\theta(x)[y])}{\sum_{y'} \exp(f_\theta(x)[y'])} \times 100\%$ as our metrics, where a larger value indicates better performance.
| Class | HMC (value / percentage) | FHL (value / percentage) |
| -------- | ------------------------- | ------------------------- |
| Airplane | 0.724 / 99.71% | **0.892 / 99.89%** |
| Bird | -0.131 / **99.99%** | **0.194** / 99.41% |
| Dog | 0.094 / 95.53% | **0.213 / 98.13%** |
| Frog | -0.818 / **97.33%** | **-0.733** / 95.77% |
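The percentage metric above is just the softmax probability of the target class. A minimal sketch of how both metrics are computed (with made-up logits for a 3-class problem, not outputs of the actual model):

```python
import math

def class_metrics(logits, y):
    # Raw value f_theta(x)[y] and its softmax percentage over all classes.
    value = logits[y]
    denom = sum(math.exp(v) for v in logits)
    percentage = math.exp(value) / denom * 100.0
    return value, percentage

# Made-up 3-class logits, target class 0.
value, pct = class_metrics([2.0, -1.0, 0.5], y=0)
```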
Based on the visual results shown in the paper (Figure 7, page 8) and the quantitative results mentioned above, it is evident that samples from FHL achieve significantly higher $f_\theta(x)[y]$ values compared to HMC. Notably, the visual results also demonstrate that samples from FHL effectively highlight the main body of the target class object, aligning with the findings presented in this study.
> [1] Will Grathwohl, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy-based model and you should treat it like one. In International Conference on Learning Representations, 2020.
### 3. Sensitivity to Hyperparameters
We experimented with different numbers of particles per group, specifically $n \in \{2, 4, 8, 16\}$, and tested the pulling strengths $\lambda \in \{0.1, 0.01\}$.
While a minimum of 2 particles per group might perform adequately, increasing the number of particles per group can enhance convergence as expected.
Our findings indicated that the pulling strength $\lambda$ behaves differently across datasets and requires fine-tuning for each one. Several factors determine the choice of pulling strength $\lambda$:
- The norm of the gradients and the spectrum of the Hessian matrix for $U(x)$.
- The temperature $T$ in the Gibbs distribution definition, $e^{-U(x) / T}$, when sampling (in our experiments, we set $T=1$, but for a more general case, $T$ could be a different value).
- The desired size of the search scope determines how far the particle may move away from the leader.
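The exact FHL update is not reproduced in this rebuttal, but the role of the pulling strength can be illustrated with an assumed, simplified update: a gradient step on the energy $U$ plus a pull of strength $\lambda$ toward the leader (injected noise omitted for clarity). This is a caricature for intuition only, not the authors' algorithm:

```python
def pull_step(x, leader, grad_u, step=0.1, lam=0.1):
    # One illustrative update: gradient descent on the energy U plus a
    # pull of strength lam toward the current leader (noise omitted).
    return [xi - step * g - lam * (xi - li)
            for xi, g, li in zip(x, grad_u(x), leader)]

# Toy quadratic energy U(x) = 0.5 * ||x||^2, so grad U(x) = x.
grad_u = lambda x: x
x, leader = [4.0, -4.0], [1.0, -1.0]
for _ in range(50):
    x = pull_step(x, leader, grad_u)
```

In this toy setting the particle settles between the energy minimum and the leader, and a larger `lam` shrinks the search scope around the leader, matching the third bullet above.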
We thank the reviewers for highlighting this point, and we recognize it as an intriguing area for future research.
### 4. Parallelizability
Note that the FHL algorithm can be run in parallel. Here we discuss both the computational and communication costs in practice:
- **Computational Cost:** Although the computational cost grows linearly with the number of particles, the algorithm can be executed in parallel, allowing the total execution time to remain unchanged. Additionally, our method can be combined with techniques like multiple learning rates or different temperatures to accelerate the sampling process, as long as at least one Markov chain converges to the target distribution.
- **Communication Cost:** On modern multi-GPU systems, NVIDIA provides the NCCL collective-communication library, which is directly integrated into PyTorch and easy to use. FHL requires only *broadcasting* the leader's parameters, which typically has **logarithmic complexity**, making it more efficient than standard collective operations like *all-reduce* (commonly used in data-parallel training), which have **linear complexity**.
Thus, the computational and communication costs associated with our method are efficiently managed.
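The logarithmic claim comes from tree-structured broadcast: each round, every rank that already holds the data can forward it to one new rank, so the number of holders doubles per round. A hardware-agnostic toy count (not NCCL's actual schedule):

```python
def broadcast_rounds(p):
    # Binomial-tree broadcast among p ranks: holders double every round,
    # so the data reaches all ranks in ceil(log2(p)) rounds.
    holders, rounds = 1, 0
    while holders < p:
        holders *= 2
        rounds += 1
    return rounds
```

For example, 1024 ranks need only 10 rounds, whereas a naive linear scheme would need on the order of 1023 point-to-point sends.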
### 5. Large-Scale Experiment: Study of Biomolecules Using Dialanine Peptide
*Sampling* bottlenecks have been a persistent issue in generating biomolecule conformations due to the high number of degrees of freedom and numerous energy landscape barriers. Typically, sampling becomes trapped in local minima for extended periods, making it time-consuming to explore the various conformations of a biomolecule. We implemented FHL on a real-world biomolecule, alanine dipeptide, in a vacuum at room temperature to approximate its Boltzmann distribution and used MCMC as a controlled experiment to evaluate our method's effectiveness.
We conclude that: (1) FHL is **capable of crossing energy barriers** and sampling the entire conformational space, and (2) FHL demonstrates **higher sampling efficiency** compared to MCMC.
---
We have included the theoretical and experimental results in one attached PDF. Everything is written in Markdown except for one page of figures to keep the rebuttal panel clean.
Pdf: /pdf/80dd008a17ca3854c557a52e888ab2d8011ca355.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fast Best-of-N Decoding via Speculative Rejection | Accept (poster) | Summary: Best-of-N decoding involves generating multiple responses and then selecting the highest scoring one according to a given metric of interest. Based on the observation that the reward function used for scoring the utterances can distinguish high-quality hypotheses from low-quality ones at an early stage of the generation, this paper focuses on accelerating this procedure by stopping the generation of “unpromising utterances”, i.e., those that are unlikely to be returned at the end.
Strengths: I summarize below the main strengths:
- The motivation is very clear and improving the efficiency of LLMs at inference time is an important problem.
- The empirical results seem to be promising.
Weaknesses: Please see my questions and comments below. In my opinion, some parts of the submission could be significantly improved, including the overall organization of the paper, the related work section, and the discussion about situations in which the proposed method should or should not work. Also, I have some concerns about the experimental parts, which are detailed below.
Technical Quality: 2
Clarity: 2
Questions for Authors: Comments and questions:
- L36-38: I don’t think that saying that Best-of-N is “essentially” hyperparameter-free makes sense here. As you point out, $N$ is a hyperparameter. Also, the hypotheses can be sampled in various ways (e.g., sampling with different temperatures, nucleus sampling, etc.).
- There is a large body of work on decoding strategies for LLMs that rely on sampling multiple hypotheses and selecting the best one, including voting procedures [1], minimum bayes risk decoding [2], and other types of strategies [3, 4]. I believe they should at least be mentioned in the related work section.
- As briefly mentioned in L161-169, predicting scores for unfinished sentences may be misleading. Even though Figure 2 does not look bad, this is just for one example, right? Unless I’m missing something, this does not seem to be enough evidence to support your claims. Since this is key to your proposal, it would be beneficial to expand the discussion about this topic as well as the empirical evidence.
- L270-271: “we present experiments on the AlpacaFarm-Eval dataset, where we sample 100 prompts at random”. What do you mean by sampling 100 prompts? Is this a common practice?
Minor comments:
- Not all figures are referred to (see, e.g., Figures 5-14).
- Sometimes you refer to equations as “eq. (X)”, sometimes just as “(X)”. The same thing happens with “appendix X” and “Appendix X”, or “Figure X” and “fig. X”. Please try to be consistent throughout the paper.
- L306: Fix citation.
[1] Self-Consistency Improves Chain of Thought Reasoning in Language Models (Wang et al., ICLR 2023)
[2] It’s MBR All the Way Down: Modern Generation Techniques Through the Lens of Minimum Bayes Risk (Bertsch et al., BigPicture 2023)
[3] An Empirical Study of Translation Hypothesis Ensembling with Large Language Models (Farinhas et al., EMNLP 2023)
[4] Quality-Aware Decoding for Neural Machine Translation (Fernandes et al., NAACL 2022)
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response for the reviewer gUaJ:
We appreciate the reviewer's detailed comments and we have read them carefully. Here are our detailed responses.
---"L36-38: I don’t think that saying that Best-of-N is “essentially” hyperparameter-free makes sense here. As you point out, is a hyperparameter. Also, the hypotheses can be sampled in various ways (e.g., sampling with different temperatures, nucleus samplings, etc)."---
**Response:** This is a valid point. We agree that Best-of-N can rely on several hyperparameters, for example, temperature, nucleus sampling value (top-p), and top-k, among others. But it is also worth mentioning that sampling from language models that went through a post-training phase likewise requires selecting the decoding strategy and its hyperparameters (e.g., temperature) at decoding time. On top of these, Best-of-N only requires choosing N as an additional hyperparameter.
Regarding the N hyperparameter, we examine its effect in our old and new experiments. When N is relatively small (~100-128), large rejection rates of around 90% and higher can sacrifice significant performance. However, rejection rates as high as 80% still provide strong performance and yield roughly a 2-6x speedup, depending on the maximum length.
---"There is a large body of work on decoding strategies for LLMs that rely on sampling multiple hypotheses and selecting the best one, including voting procedures [1], minimum bayes risk decoding [2], and other types of strategies [3, 4]. I believe they should at least be mentioned in the related work section."---
**Response:** We agree, the body of work on decoding strategies for LLMs is very large and only growing larger. Although our related work already contains several categories, we will also include references such as the ones above to highlight the variety of available decoding strategies and how they connect to our algorithm.
---"As briefly mentioned in L161-169, predicting scores for unfinished sentences may be misleading. Even though Figure 2 does not look bad, this is just for one example, right? Unless I’m missing something, this does not seem to be enough evidence to support your claims. Since this is key to your proposal, it would be beneficial to expand the discussion about this topic as well as the empirical evidence."---
**Response:** We also include Figure 5 in Appendix D, which plots the correlation between partial reward and final reward across several decision tokens and for all prompts we tested. As shown, there is positive correlation across prompts which allows our algorithm to work effectively.
---"L270-271: “we present experiments on the AlpacaFarm-Eval dataset, where we sample 100 prompts at random”. What do you mean by sampling 100 prompts? Is this a common practice?"---
**Response:** Of the 805 prompts in the AlpacaFarm-Eval dataset, we randomly sample (without cherry-picking) 100 prompts and perform all of our experiments on them, to reduce the cost of running the baseline given the limited computational resources available to us.
Also, since computing the win rate with GPT-4 is expensive and OpenAI rate-limits GPT-4 access per account each month, downsampling prompts from a large dataset is also done in other works [1].
[1]Jang, Joel, et al. "Personalized soups: Personalized large language model alignment via post-hoc parameter merging." arXiv preprint arXiv:2310.11564 (2023).
Regarding the minor comments, we will also improve the consistency when referring to equations, figures, etc. and fix any citations we need to. However, we should note that Figures 5-14 are part of the Appendix, and so are indirectly referenced in the main body via the Appendices.
---
Rebuttal Comment 1.1:
Comment: I've read the other reviews and the rebuttal. Thank you for answering my questions, I've updated my initial review accordingly. | Summary: Best-of-N is a decoding-time alignment algorithm that effectively aligns the output of the system at the cost of high inference time. The paper seeks to reduce the computation time by pruning unpromising sequences at early stage using the reward model to estimate the reward on the partial utterance. They empirically show that the estimate of the reward model on a partial sentence is correlated with the estimate of a complete sentence, using AlpacaFarm-Eval dataset. The proposed method achieves 2-8 times speedup at a marginal drop of the output quality.
Strengths: - Proposed method is simple and effective.
- Experiments are clear and comprehensive, except for the variety of the tasks.
- Showing that partial rewards have good correlation with the final rewards (Figure 2) is a significant contribution to the community.
Weaknesses: I don’t see any critical weaknesses in the study. Several minor points are listed below.
- Experiments are conducted only on AlpacaFarm-Eval dataset. It would be ideal to have an evaluation of the proposed method in other datasets (e.g., TL;DR, hh-rlhf) as the effectiveness of the method may be affected by the structure of the task. For example, I speculate the the reward of the partial text has less correlation with the complete text on tasks like text summarization. It may also be influenced by the language. For example, Korean and Japanese have more flexible word order than English and word order does not determine grammatical function. Thus, the same length of the first few tokens may contain very different information in these languages. This may make the partial reward not a good indicator of the final reward. It may better be noted that the proposed method might be exploiting the structure of English which is not universally true for all natural languages.
- Adding two hyperparameters to the inference algorithm is a minus (as noted in Limitations). They show in Tables 3 and 4 that the optimal values of the hyperparameters are relatively consistent when using different choices of reward models and the number of samples (N). The question is how consistent the choice of the tasks/prompts (which is noted in Prompt-dependent stopping) is. It would be nice if one could evaluate the robustness of the hyperparameters to the variety of tasks.
- Although the experiments are conducted on three language models and three reward models, they share relatively similar profiles. It would be beneficial to have larger and smaller language models and reward models.
- An empirical evaluation of the proposed method compared against MCTS would be nice to have.
Technical Quality: 4
Clarity: 4
Questions for Authors: The speedup depends on the speed of the text generation and reward models. How fast are they in the experiments? It would be valuable to have the wall time of the text generation and reward computation separately in the Appendix.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: I don't see any problems. If the method is likely to exploit the structure of English, then it may be noted in the limitations that the experiments are conducted only in English.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response for the reviewer qQm8:
We thank the reviewer for providing positive feedback and several good questions of our work. Below are our detailed responses.
---- "Experiments are conducted only on AlpacaFarm-Eval dataset. It would be ideal to have an evaluation of the proposed method in other datasets (e.g., TL;DR, hh-rlhf) as the effectiveness of the method may be affected by the structure of the task. For example, I speculate that the reward of the partial text has less correlation with the complete text on tasks like text summarization. It may also be influenced by the language. For example, Korean and Japanese have more flexible word order than English and word order does not determine grammatical function. Thus, the same length of the first few tokens may contain very different information in these languages. This may make the partial reward not a good indicator of the final reward. It may better be noted that the proposed method might be exploiting the structure of English which is not universally true for all natural languages." ----
**Response:**
In the attached PDF, we have several new experiments on the HH-RLHF dataset as well and we show that our algorithm performs consistently well across both tested datasets. We would have liked to also test on the TL;DR task, but could not due to limited time.
Additionally, the reviewer makes a very good point regarding the specific semantics of the English language - in this sense, our method does take advantage of the structure of English and we will include this in the Limitations section.
---- "Adding two hyperparameters to the inference algorithm is a minus (as noted in Limitations). They show in Tables 3 and 4 that the optimal values of the hyperparameters are relatively consistent when using different choices of reward models and the number of samples (N). The question is how consistent the choice of the tasks/prompts (which is noted in Prompt-dependent stopping) is. It would be nice if one could evaluate the robustness of the hyperparameters to the variety of tasks." ---
**Response:** In the attached PDF, we have tested the robustness of the hyperparameters by conducting multiple counterfactual analyses across several datasets, LMs, RMs, and tasks. We find that across all these combinations, an overwhelming majority of them use a decision token of either 128 or 256 and a rejection rate of 0.7 or 0.8. Then, a reasonable choice of hyperparameters in practice would be, say, rejecting 75% of the trajectories after generating 200 tokens. This would be a strong choice across a wide variety of real-world situations.
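To make the hyperparameter roles concrete, the early-stopping idea (score partially generated candidates with the reward model at the decision token, keep only the top fraction, finish the survivors, return the best) can be sketched with dummy scorers. This is our illustrative reading of the procedure, not the authors' implementation:

```python
def speculative_best_of_n(candidates, partial_score, final_score, keep_frac=0.25):
    # Rank candidates by their partial reward, keep only the top fraction,
    # then finish (score fully) just the survivors and return the best one.
    ranked = sorted(candidates, key=partial_score, reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * keep_frac))]
    return max(survivors, key=final_score)

# Toy check: with perfectly correlated partial and final scores,
# pruning 75% of 8 candidates still recovers the global best.
best = speculative_best_of_n(list(range(8)),
                             partial_score=lambda c: c,
                             final_score=lambda c: c)
```

The quality of the result hinges on the partial/final reward correlation, which is exactly what the rejection-rate and decision-token hyperparameters trade off against speedup.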
---- "Although the experiments are conducted on three language models and three reward models, they share relatively similar profiles. It would be beneficial to have larger and smaller language models and reward models." ---
**Response:** In the attached PDF, we conduct several new experiments involving small and large LMs and RMs. We find that our method is effective across the spectrum of model sizes.
---- "An empirical evaluation of the proposed method compared against MCTS would be nice to have." ---
**Response:**
MCTS is definitely related as a decoding strategy. However, we could not complete the comparison by the end of the rebuttal period.
---- "The speedup depends on the speed of the text generation and reward models. How fast are they in the experiments? It would be valuable to have the wall time of the text generation and reward computation separately in the Appendix." ----
**Response:** We can include detailed values in the appendix for all of the combinations, but it mostly depends on the maximum generation length specified by the LM. Rough estimates to produce and score a single batch of 20 on the new experiments are as follows on our machine:
- GPT2-XL (max_length=1024): 25 seconds
- GPT-J-6B (max_length=2048): 2-3 minutes
- Mistral-7B (max_length=8000): 3-10 minutes
- Llama-3-8B (max_length=8000): 1-5 minutes
Reward model computations are generally 1-5 seconds, highlighting the intuition behind the effectiveness of our algorithm: because **reward computations are relatively cheap**, it is worth generating a few tokens (say, 200) and pruning most of the generations before continuing. Moreover, generating earlier tokens from decoder-only transformer architectures is much less expensive than generating later tokens due to the quadratic attention cost, compounding the strength of our method.
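The saving from the quadratic attention cost can be seen with a crude back-of-the-envelope model that counts only the quadratic term (total attention work over a generation of length T scales like T(T+1)/2); this simplification ignores constant per-token costs:

```python
def attn_cost(tokens):
    # Crude model: total attention work ~ sum over steps t of t,
    # i.e. T * (T + 1) / 2, quadratic in the generation length T.
    return tokens * (tokens + 1) // 2

# Generating only the first 200 tokens costs a small fraction of a
# full 1024-token generation under this model.
frac = attn_cost(200) / attn_cost(1024)
```

Under this model, the first 200 tokens of a 1024-token generation account for under 4% of the attention cost, which is why pruning at an early decision token recovers most of the compute.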
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the clarification.
I believe that the additional evaluation on the HH-RLHF dataset and on various language models further improved the reliability of the experimental results.
> We can include detailed values in the appendix for all of the combinations, but it mostly depends on the maximum generation length specified by the LM. Rough estimates to produce and score a single batch of 20 on the new experiments are as follows on our machine:
> reward computations are relatively cheap
I believe it is valuable to report the wall time (maybe in the appendix) even if it is a rough estimate. It will serve as evidence to say that reward computations are usually cheaper than the cost of generation.
---
Rebuttal 2:
Title: Can you defend your review?
Comment: Dear Reviewer,
This paper has been flagged as one with large discrepancies between scores. Could you please take a look at the other reviews and respond to the following question:
Would you defend this paper getting accepting into NeurIPS this cycle?
Thanks,
AC
---
Rebuttal Comment 2.1:
Comment: I am quite positive that this paper brings an interesting contribution to the community.
I am willing to discuss with Reviewer Qphu as their viewpoint is different from mine. However, I find it challenging to engage in a scientific discussion without supporting evidence or references for their claim. | Summary: The paper proposes an early stopping method to accelerate the Best-of-N method. Experimental results demonstrate its effectiveness.
Strengths: 1. The method has a strong and clear motivation, coupled with easy implementation.
2. Experimental results demonstrate its effectiveness in accelerating best-of-n while preserving the quality of generated content.
Weaknesses: 1. Best-of-N is out-of-date in decoding-time alignment, which weakens the novelty of the paper.
2. The first point may not fully apply when best-of-n is used for response generation in model training. Nonetheless, the paper does not experiment with model training aided by SBoN.
3. It would be advantageous if the author could test SBoN on a broader range of tasks beyond just alignment.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weakness part.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors acknowledge the challenge inherent in selecting hyperparameters while also offering solutions to address this issue.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response for the reviewer Qphu:
We thank the reviewer for providing valuable feedback and understanding the value of our work. We have read your comments carefully and below are our detailed responses.
---- "Best-of-N is out-of-date in decoding-time alignment, which weakens the novelty of the paper." ----
**Response:** We find that Best-of-N(Rejection Sampling) is still a widely used technique that remains relevant even in recent times. For example, Song et al. [1] show that Best-of-N on relatively small models like Llama-3-8B-Instruct with values of N as small as 16-32 can outperform GPT-4-Turbo on several tasks. Additionally, Best-of-N was an important part of the deployment of OpenAI's WebGPT [2]. Also, rejection sampling is frequently used to generate high-quality data for alignment [3][4][5].
Finally, very recent work has proposed aligning language models such that their distribution of generations is closer to the Best-of-N distribution [6][7].
[1] Song, Yifan, et al. "The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism." arXiv preprint arXiv:2407.10457 (2024).
[2] Nakano, Reiichiro, et al. "Webgpt: Browser-assisted question-answering with human feedback." arXiv preprint arXiv:2112.09332 (2021).
[3] Khaki, Saeed, et al. "Rs-dpo: A hybrid rejection sampling and direct preference optimization method for alignment of large language models." arXiv preprint arXiv:2402.10038 (2024).
[4] Liu, Tianqi, et al. "Statistical rejection sampling improves preference optimization." arXiv preprint arXiv:2309.06657 (2023).
[5] Dubey, Abhimanyu, et al. "The Llama 3 Herd of Models." arXiv preprint arXiv:2407.21783 (2024).
[6] Sessa, Pier Giuseppe, et al. "BOND: Aligning LLMs with Best-of-N Distillation." arXiv preprint arXiv:2407.14622 (2024).
[7] Amini, Afra, et al. "Variational Best-of-N Alignment." arXiv preprint arXiv:2407.06057 (2024).
---- "The validity of the first point may be questionable when considering the utilization of best-of-n for response generation in model training. Nonetheless, the paper does not experiment with model training aided by SBoN." ----
**Response:**
As the reviewer suggests, best-of-n can be used both for inference-time alignment [2] or for later model fine-tuning [1].
This latter setup is substantially more involved, and it introduces additional confounding factors (e.g., training hyper-parameters) which would have made the comparison harder.
However, we added more metrics like the **win rate** computed by GPT4 [2] as a substitute (see attached PDF), to indicate that our algorithm produces high quality responses.
[1] Dubey, Abhimanyu, et al. "The Llama 3 Herd of Models." arXiv preprint arXiv:2407.21783 (2024).
[2] Dubois, Yann, et al. "Alpacafarm: A simulation framework for methods that learn from human feedback." Advances in Neural Information Processing Systems 36 (2024).
---- "It would be advantageous if the author could test SBoN on a broader range of tasks beyond just alignment." ----
**Response:** In the attached PDF, we include preliminary experiments where we minimize the perplexity---which is equivalent to maximizing the probability of the generated text---with BoN and SBoN.
We also include a few experiments in which we show that the GPT4 win rate remains relatively stable and large across rejection rates.
Finally, we include several new experiments across multiple LMs and RMs of varying size as well as varying datasets to demonstrate the robustness of our method in different settings.
Regarding the hyperparameter limitation, a significant majority of LM+RM combinations use a decision token of either 128 or 256 and a rejection rate of either 0.7 or 0.8 for Best-of-100. This suggests that rejecting, say, 75% of the trajectories at decision token 200 would be a simple and practical choice for hyperparameters in several real-world settings.
---
Rebuttal Comment 1.1:
Comment: Thanks authors for your detailed reply and thanks reviewer qQm8 for pushing me into discussion.
> Evidence for Best-of-N is out-of-date in **decoding-time alignment**
Actually, the burden is on you to provide evidence that Best-of-N is widely used, not on me. If it is not widely used, what evidence would you expect from me? You can see it from the authors' response: [1] is the only paper that uses Best-of-N, and it is a new paper. [2] is old, and [3][4][5] are about data generation. This means there is only 1 paper supporting their claims.
> Addional Experiments
Thanks for putting effort into adding so many experiments; they are convincing. Therefore, I would like to change my score.
---
Rebuttal 2:
Title: Inquiry for the reference supporting the statement "Best-of-N is out-of-date"
Comment: > Best-of-N is out-of-date in decoding-time alignment
I would like to ask Reviewer Qphu for the reference (or evidence) supporting this statement. It will be a more productive discussion if accompanied by evidence.
---
Rebuttal 3:
Title: Reply to reviewer Qphu
Comment: Dear Reviewer Qphu,
As the author-reviewer discussion period will end soon, we will appreciate it if you could check our response to your review comments. We have included all our new running experiments **in the attached PDF.** This way, if you have further questions and comments, we can still reply before the author-reviewer discussion period ends. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. Thank you very much for your time and efforts!
The Authors
---
Rebuttal 4:
Title: Reviewer, please respond to the authors
Comment: Hello reviewer,
You wrote a very short review and gave a rating of "reject" which is much lower than the other two papers. Could you please engage with the authors and respond to their rebuttal? Do you stand by your review or would you like to change your score? Thanks for your assistance to the ACs with making a decision on this paper.
-AC | null | null | Rebuttal 1:
Rebuttal: # General Response to all reviewers:
We thank all reviewers for the detailed comments and valuable questions. We present additional experiments here---as requested by the reviewers---and also address the reviewers' individual questions separately.
These new experiments include:
- New datasets
- LLMs and reward models of different sizes
- Minimizing the perplexity as an experiment different from alignment
- GPT4 win rate experiments
Taken together, we believe that these should address several of the reviewers' concerns, and we hope they will consider improving their scores. If not, we are very happy to answer any further questions that the reviewers may have.
## Experimental details
In the attached PDF we use our algorithm to minimize the **perplexity**, which is related to the probability of the generated text, to showcase that the algorithm works on a task different from alignment, as suggested by reviewer Qphu.
Moreover, we include new experimental results related to alignment across several datasets, as well as LLM and reward model sizes, as suggested by most reviewers.
- Datasets:
- Alpaca Farm Eval
- HH-RLHF
- Language Models (LMs):
- GPT2-XL (1.5B)
- GPT-J-6B
- Mistral-7B
- Llama-3-8B
- Reward Models (RMs):
- reward-model-deberta-v3-large-v2 (~500M)
- RM-Mistral-7B
- FsfairX-Llama-3-RM (8B)
- ArmoRM-Llama3-8B
Although we weren't able to bring all possible combinations to completion due to time constraints, these new experiments demonstrate that:
1. **Our method works** on a variety of model sizes. Speedups with negligible drop in score are apparent **across** the spectrum of **small and large LMs and RMs**.
2. **Our method is effective on several different tasks**. Notably, we test our method on new tasks that include minimizing generation perplexity. The effectiveness of our algorithm is relatively consistent across tasks.
Pdf: /pdf/2bb899d08f05b5d6b6f697376f28bc8efb9c8a69.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Geometry Awakening: Cross-Geometry Learning Exhibits Superiority over Individual Structures | Accept (poster) | Summary: This paper proposes a novel method for graph knowledge distillation. The proposed method incorporates knowledge from both Euclidean and hyperbolic teacher models and transfers it to a student model in a way that leverages the appropriate geometry for different local subgraphs. A SWKT module is used to select embeddings from the most suitable geometry based on the local subgraph structure. Furthermore, a GEO module refines the transferred knowledge by mitigating inconsistencies between Euclidean and hyperbolic spaces.
Experimental results demonstrate that the proposed method outperforms other KD methods.
Strengths: S1. The authors propose a method that leverages the strengths of both Euclidean and hyperbolic representations.
S2. The experimental results demonstrate that the proposed method outperforms various graph data distillation baselines.
S3. The proposed method has a similar running time as other baselines with a much higher compression performance.
Weaknesses: W1. The paper would benefit from a thorough analysis of the method's time and space complexity.
W2. The proposed approach uses two geometries (i.e., Euclidean and hyperbolic). While this is a significant advancement, the paper could be strengthened by discussing its potential extension to incorporate additional non-Euclidean geometries. Exploring the necessary modifications for handling multiple geometries would broaden the applicability of the method.
W3. While providing background knowledge can be helpful, the paper should ensure all definitions in the preliminaries section are essential for understanding the core method. Definitions like the Poincaré Disk Model and Tangent Space, if not directly used, could be moved to an appendix or omitted for clarity.
W4. The paper primarily focuses on Euclidean and hyperbolic teacher models. Further exploration with teacher models employing different geometries, such as spherical space, would provide valuable insights into the method's adaptability. Analyzing the performance with these variations would strengthen the paper's contribution.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments! We hope that our response can resolve your concerns. Please feel free to ask any follow-up questions.
---
# W1:
**Notation Definitions:**
- $N$: Number of nodes
- $|E|$: Number of edges
- $D$: Dimension of node features
- $H$: Dimension of hidden layers
- $R$: Number of teachers
- $k$: Parameter for $k$-pop subgraphs
**Space Complexity:**
- Node feature matrix: $O(N \cdot D)$
- Adjacency matrix: $O(|E|)$ (assuming the graph is sparse)
- Representations of hidden layers: $O(R \cdot N \cdot H)$
- Optimized distribution of local subgraph representations: $O(N \cdot k \cdot |E|)$
**Overall space complexity: $O(N \cdot D + |E| + R \cdot N \cdot H + N \cdot k \cdot |E|)$**
**Time Complexity:**
- Forward propagation: $O(N \cdot H^2 + |E| \cdot H)$
- Local subgraph generation: $O(N \cdot k \cdot |E|)$
- Structured-WiseKnowledge Transfer (SWKT) module: $O(N \cdot k \cdot H)$
- Similarity measurement computation: $O(N \cdot k)$
- Geometric Embedding Optimization (GEO) module: $O(N \cdot H^2)$
**Overall time complexity: $O(N \cdot H^2 + |E| \cdot H + N \cdot k \cdot |E| + N \cdot k \cdot H + N \cdot k)$**
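As a rough illustration of how the space terms above scale (a minimal sketch with hypothetical placeholder sizes, not values measured in our experiments):

```python
# Illustrative tally of the dominant space-complexity terms listed above.
# All graph sizes below are hypothetical placeholders.

def space_terms(N, E, D, R, H, k):
    """Dominant terms of O(N*D + |E| + R*N*H + N*k*|E|)."""
    return {
        "node_features":  N * D,      # feature matrix, O(N * D)
        "adjacency":      E,          # sparse adjacency, O(|E|)
        "hidden_reps":    R * N * H,  # per-teacher hidden layers, O(R * N * H)
        "subgraph_dists": N * k * E,  # local subgraph distributions, O(N * k * |E|)
    }

terms = space_terms(N=10_000, E=50_000, D=128, R=2, H=64, k=2)
dominant = max(terms, key=terms.get)
print(dominant)  # subgraph_dists
```

Even at these modest sizes, the $O(N \cdot k \cdot |E|)$ subgraph-distribution term dominates the total, matching its role as the leading term in the bound above.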
Besides, we also conducted a computational efficiency study comparing the teacher and student models from both theoretical and empirical perspectives in our response to **W1 commented by Reviewer GamE**. In Table 2 of our paper, we present the parameter ratios (compression rates) between the teacher and student models for each knowledge distillation (KD) method. Additionally, we provide information on distillation training times and student inference times in Tables 2, 10, and 11.
---
# W2:
We fully agree with your suggestion that exploring the necessary modifications for handling multiple geometries would broaden the applicability of our method. In our paper, **we have already made a preliminary attempt at experiments incorporating spherical geometry**.
In Table 1, the Cross methods represent simplified variants of our framework. The rows marked $\mathbb{E}$ and $\mathbb{S}$ correspond to the use of both Euclidean and spherical geometries, while rows marked $\mathbb{E}$, $\mathbb{B}$, and $\mathbb{S}$ represent the use of Euclidean, hyperbolic, and spherical geometries simultaneously. The results indicate that spherical geometry does not provide significant additional benefits within our framework. We apologize for any confusion caused by unclear descriptions of Table 1 in our paper. We will improve the presentation of the experimental section to enhance clarity.
In future work, we will explore feasible approaches to extend our framework to other potential geometries. For example, we may consider integrating more complex Riemannian geometries or other non-Euclidean geometries to further enhance the performance and applicability of the method. We appreciate your valuable suggestion and look forward to exploring these aspects in our future research.
---
# W3:
Yes, moving preliminary definitions that are not directly related to the core method to the appendix will enhance the clarity of the paper.
After careful review, we found that the expression for the Poincaré Disk Model in Definition 1 is not directly used in the presentation of our method, so we will move it to the appendix. The description of our method requires the projection formula between the tangent space and hyperbolic space in Definition 3, and this conversion involves the hyperbolic operations described in Definition 2. Therefore, we will retain Definitions 2 and 3 in the main paper.
Thank you for your constructive suggestion. We will make the necessary adjustments based on your feedback to improve the clarity and readability of the paper.
---
# W4:
In fact, **we have conducted some experiments using spherical teacher models in our paper.** In Tables 1, 7, and 8 of our paper, the rows labeled $\mathbb{S}$ indicate results obtained using spherical models either alone or as one of the teacher models. The experimental results indicate that the use of spherical teacher models provides limited improvement. This situation may improve after further refining the framework for spherical space.
We will explore using teacher models from other geometric spaces to enhance the adaptability of our framework. We will also provide more detailed descriptions of the experimental setups in the paper to avoid similar misunderstandings and improve clarity.
---
Rebuttal 2:
Comment: Dear Reviewer rr98,
We thank you again for your thorough review and constructive feedback. In our response to your comments, we included an analysis of our method's time and space complexity and noted that our original work has explored integrating other geometric models into the framework and presented the results. We genuinely hope you could check our responses and kindly let us know your valuable feedback. We would be happy to provide any additional clarifications that you may need.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank the authors for their response. I will keep my score.
---
Reply to Comment 2.1.1:
Comment: **Dear Reviewer rr98:**
Thank you for your response! We noticed that you did not mention whether our prior responses addressed your concerns. We cautiously infer that they may not have fully alleviated your concerns regarding the performance of our framework when extended to other geometric spaces. Upon receiving your initial comments, we began conducting additional experiments to integrate spherical space into our framework. To date, we have obtained the following results (some data are from Table 1); the best results are in bold and the suboptimal results in italics:
- F1 score (%) of the **Euclidean student model** on the dataset when incorporating the spherical teacher:
| | ogbn-arxiv | ogbn-proteins | Wiki-CS | Co-Physics | Pubmed | Citeseer | Cora |
| ----------------------------------------------- | ---------- | ------------- | --------- | ---------- | --------- | --------- | --------- |
| **Spherical Teacher** | 70.11 | 70.74 | 69.13 | 96.27 | 82.14 | 71.88 | 82.48 |
| **Spherical + Hyperbolic Teachers** | **71.25** | *70.89* | 70.07 | 96.17 | *82.23* | 71.90 | 82.74 |
| **Spherical + Euclidean Teachers** | 70.87 | 70.56 | *70.85* | 96.07 | 80.45 | *71.98* | 82.89 |
| **Spherical + Hyperbolic + Euclidean Teachers** | 70.75 | 70.58 | 68.70 | *96.37* | 81.50 | 71.77 | *83.19* |
| **Hyperbolic + Euclidean Teachers** | *70.89* | **71.22** | **74.17** | **96.87** | **82.73** | **72.60** | **86.05** |
- F1 score (%) of the **spherical student model** on the dataset when incorporating the spherical teacher:
| | ogbn-arxiv | ogbn-proteins | Wiki-CS | Co-Physics | Pubmed | Citeseer | Cora |
| ----------------------------------------------- | ---------- | ------------- | --------- | ---------- | --------- | --------- | --------- |
| **Spherical Teacher** | 68.11 | 65.23 | 74.34 | 95.27 | 84.63 | 75.15 | 82.17 |
| **Spherical + Hyperbolic Teachers** | *70.13* | *70.71* | *77.98* | 96.10 | 85.00 | *77.97* | 84.37 |
| **Spherical + Euclidean Teachers** | 69.54 | 69.71 | 77.17 | **96.21** | 85.08 | 77.96 | 84.13 |
| **Spherical + Hyperbolic + Euclidean Teachers** | 68.53 | 70.10 | 73.10 | 96.07 | *85.72* | 77.63 | *84.52* |
| **Hyperbolic + Euclidean Teachers** | **70.59** | **70.97** | **79.24** | *96.13* | **85.83** | **78.21** | **86.73** |
The experimental results indicate that, in some cases, the distillation performance of teacher groups that include spherical-space teachers outperforms that of the hyperbolic and Euclidean combination. This suggests that seeking additional knowledge from other geometric spaces, including spherical space, is of significant importance for improving our framework.
We thank you again for your detailed review and valuable suggestions. We hope that these supplementary comments can address your concerns. We would be glad to further discuss these new experimental results with you!
Best regards,
Authors | Summary: This paper introduces a new graph distillation framework using special teacher networks to consider Euclidean and hyperbolic geometries when performing the distillation to the light-weight student GNN model.
Key techniques include Structure-Wise Knowledge Transfer (SWKT) for selecting appropriate geometric spaces and Geometric Embedding Optimization (GEO) for feature fusion across geometries.
Strengths: 1. It is intriguing to consider various geometries in distillation.
2. Experimental results demonstrate effective performance on small graph node classification datasets.
3. The sensitivity analysis of hyperparameters, including thresholds like $\delta$ and $\lambda$, is insightful. It will be great if the sensitivity analysis of the weight coefficient $\beta$ could be further considered.
Weaknesses: 1. The paper lacks an efficiency study comparing the computational efficiency between the teacher and student models.
2. Larger datasets, such as ogbn-arxiv and ogbn-products, should be included to better evaluate the proposed distillation method.
3. The motivation for choosing a GNN as the student network is not sufficiently justified.
While previous works focus on MLP-based students, more clarity on this choice would be beneficial.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. I am curious whether the input of the student model is full graphs or sampled subgraphs. How does this differ from student networks using MLPs [1]?
2. Are there any baseline methods that do not incorporate various manifolds to compare with to show the effectiveness of manifold-aware approaches?
[1] Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the time and effort in reviewing our paper. We hope that our response can resolve your concerns. Please feel free to ask any follow-up questions.
---
# S3:
In fact, we provided a sensitivity analysis of the weight coefficient $\beta$ in Figure 3 (middle). We performed a joint analysis of the hyperparameters $\lambda$ and $\beta$ to evaluate their combined effects on our method. Lines with different markers denote different values of $\beta$, and the best value of $\beta$ is around 3.
---
# W1:
**Theoretically**:
GCN's time complexity is $O(\sum^L_{l=1}(|E|H + NH^2))$, where $N$, $|E|$, and $L$ denote the number of nodes, edges, and layers, respectively, and $H$ denotes the hidden dimension. In our paper, the teacher and student models' hidden dimensions are 128 and 8, respectively. Assuming the graph is sparse, i.e., $|E| \approx O(N)$, the student is theoretically **229x** faster than the teacher model.
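The 229x figure follows directly from the complexity expression and can be sanity-checked in a few lines of Python (a minimal sketch; $|E| \approx N$ is the sparsity assumption above, and the concrete node count is arbitrary since it cancels in the ratio):

```python
# Dominant per-layer cost of a GCN: |E|*H + N*H^2.
# With equal layer counts, the teacher/student ratio is independent
# of L and, under the sparsity assumption |E| ~= N, of N as well.

def gcn_cost(num_nodes, num_edges, hidden_dim):
    """Dominant per-layer operation count |E|*H + N*H^2."""
    return num_edges * hidden_dim + num_nodes * hidden_dim ** 2

N = 100_000  # arbitrary; cancels out once |E| ~= N
E = N        # sparsity assumption

speedup = gcn_cost(N, E, 128) / gcn_cost(N, E, 8)  # (128 + 128^2) / (8 + 8^2)
print(round(speedup))  # 229
```

The measured speedups in the table below (204x to 284x) bracket this theoretical estimate.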
**Empirically**:
The inference times (ms) of the teacher and student models measured on our device are shown in the table below:
| | Wiki-CS | Co-Physics | Pubmed | Citeseer | Cora | Average |
| -------- | ------- | ---------- | ------ | -------- | ---- | ------- |
| Teacher | 906 | 3410 | 914 | 908 | 975 | 1422 |
| Student | 3.9 | 12.0 | 4.4 | 4.0 | 4.4 | 4.2 |
| Speed-up | 227x | 284x | 204x | 226x | 220x | 232x |
Our method achieves an average acceleration of **232×**.
Additionally, we provided a time and space complexity analysis of the SWKT and GEO modules in the response to **W1 commented by reviewer rr98**. In our paper, we provide the parameter ratios (compression rates) between teacher and student for each KD method in Table 2, and the distillation training times and student inference times in Tables 2, 10, and 11.
---
# W2:
The largest dataset used in our original experiments was Coauthor Physics (495,924 edges and 34,493 nodes). We conducted further experiments on the larger datasets ogbn-arxiv (1,166,243 edges, 169,343 nodes) and ogbn-proteins (39,561,252 edges, 132,534 nodes). The results show that our method consistently achieves the best distillation performance among KD methods on these larger datasets.
| | Euclidean Teacher | Hyperbolic Teacher | FitNet | AT | LSP | MSKD | VQG | Our |
| ------------- | ----------------- | ------------------ | ------ | ----- | ----- | ----- | ----- | --------- |
| Ogbn-arxiv | 71.91 | 73.21 | 67.56 | 67.48 | 69.53 | 69.27 | 68.59 | **70.89** |
| Ogbn-proteins | 72.83 | 69.23 | 68.71 | 68.53 | 69.45 | 70.97 | 69.54 | **71.22** |
---
# W3:
Yes, some remarkable previous works focused on MLP-based students [1][4].
Meanwhile, there are also significant studies in graph KD that use GNNs as the student networks, such as [2] and [3].
**Our motivation for choosing GNNs as students**: We selected GNNs as the student model to explore **a new trade-off** among performance, inference speed, and compression rate. GNNs are designed for graph-based data, so GNN students can achieve performance close to their teachers even at a much smaller size (e.g., our method uses a GNN student model with **a hidden dimension of only 8 and ~2% of the teacher model's size**). Furthermore, GNN students demonstrate notable efficiency in inference acceleration, achieving an average **speed-up of 232x** over their teacher models in our method.
---
# Q1:
Our student model takes full graphs as input. The sampled subgraphs you mentioned are used by the SWKT module to extract local structural features and their corresponding manifolds.
**Differences between MLPs and GNNs students**:
- GNN students are smaller, with a ~2% size of their teachers (see Table 2). MLP students and their teachers have similar sizes due to having the same number of layers and hidden dimension (see appendixes in [1][4]).
- GNN students achieve better performance on graph data since they leverage adjacency information.
- MLP students have faster inference speeds due to their lack of graph dependencies: [1] reports a speed increase of 273×, while our GNN students are 232× faster than their teachers.
In summary, GNN students are suitable for scenarios with limited memory resources, whereas MLP students are better suited for scenarios requiring higher inference speed.
**Additional experiments with MLP-based student**: Our distillation method is model-agnostic. Using the same settings as [1] and [4], we test our method with GCN teachers and MLP students. Following results show that our KD method has better or similar performance with MLP students compared to GLNN [1] and VQG [4].
| | Citeseer | Pubmed | Cora |
| -------- | --------- | --------- | --------- |
| **GLNN** | 69.23 | 74.70 | 78.97 |
| **VQG** | 74.59 | 77.13 | **81.15** |
| **Our** | **76.00** | **77.49** | 81.08 |
(part of results from [4])
---
# Q2:
In fact, **none of the baseline methods incorporates various manifolds, as we are the first to introduce a KD framework using multiple geometric teachers**. In Table 1, rows marked $\mathbb{E}$ show results for the baseline methods in their original versions without additional manifold information, while rows marked $\mathbb{B}$ and $\mathbb{S}$ show results after adapting them to hyperbolic and spherical spaces, respectively. This adaptation ensures a fair comparison and evaluates the SWKT and GEO modules. We apologize for any confusion caused by unclear descriptions of Table 1. We will improve the presentation of the experimental section to enhance clarity.
---
References:
[1] Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation, ICLR’22
[2] Distilling Knowledge from Graph Convolutional Networks, CVPR’20
[3] Boosting Graph Neural Networks via Adaptive Knowledge Distillation, AAAI’23
[4] VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs, ICLR’24
---
Rebuttal 2:
Comment: Dear Reviewer GamE,
We appreciate your thorough review and constructive feedback. We have provided an analysis of the computational efficiency of our method and further validated its performance on the ogbn-arxiv and ogbn-proteins datasets. We have analyzed the differences between GNN and MLP student models and clarified potential misunderstandings about the baseline methods. We would be grateful for the opportunity to discuss whether your concerns have been fully addressed. Please let us know if any part of our work is still unclear.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Title: Official comments by Reviewer GamE
Comment: Thanks for the authors' effort to address my concerns. I have raised my score.
---
Reply to Comment 2.1.1:
Title: Thank you!
Comment: We are glad that our initial rebuttal addressed your concerns well!
Graph data may exhibit geometric heterogeneity (e.g., grid or tree structures) across different local structures. Therefore, we are committed to matching appropriate embedding geometric spaces through meticulous analysis and extraction of local structural features to enhance graph representation quality. We validated the effectiveness of this paradigm through knowledge distillation tasks.
We are delighted that you find our approach of considering various geometries in distillation intriguing, and that the experimental results demonstrate its effectiveness. We fully agree that conducting experiments on larger datasets and analyzing the selection motivations and computational efficiency of the student models will enhance the quality of our paper. In our initial rebuttal, we conducted experiments on larger datasets, provided a comparative analysis of the computational efficiency between the teacher and student models, and explained the rationale behind the selection of the student model architecture. We will incorporate these additions into the revised version of our paper per your suggestions! Thank you again for your insightful comments and valuable suggestions!
Best regards,
Authors | Summary: This paper proposes a novel methodology to distill graph neural network (GNN) integrating the various geometries (e.g. Euclidean and hyperbolic). The author first utilizes a structure-wise knowledge transfer module with multiple teacher models from distinct geometric properties. Then, the authors demonstrate a geometric embedding optimization to guide a student model optimizing cross-geometric space. The authors show the efficacy of their approach through experiments on node classification and link prediction with various datasets compared with existing state-of-the-art methods.
Strengths: 1. The paper is well written.
2. The authors have extensive experiments including the comparison to the state-of-the-art and ablation studies on their new contributing components.
Weaknesses: 1. Some questions exist about the authors' empirical results. Please see the questions section.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Are Table 1 results all reproduced by the authors? If so, the reported performance of other baselines seems significantly lower than the numbers reported in other papers. For example, MSKD (Zhang et al.) reports an 89.67 F1-Score with an 8.2 ratio on the CiteSeer node classification task. On the other hand, the MSKD NC F1-Score in Table 1 is only 71.42.
2. Following up with 1, GAT results in Table 7 on CiteSeer are significantly lower than the MSKD paper reported. For example, the teacher model trained on CiteSeer with Euclidean geometries only achieves an F1-score of 75.24, while the MSKD paper reports their student model achieves a 95.47 F1-Score with a 17.6 ratio!
3. Can you do the hyperparameter tuning on the training such that the teacher's performance on Citeseer is close to what MSKD (Zhang et al.) has reported? In distillation literature, experimenting with the very well-trained teacher model is important. If not, the gain from the student model could have been illusory since the performance gap could have been larger if the teacher had trained with better hyperparameters.
4. VQG authors (Yang et al.) claim that VQGraph has a significant advantage in the inference speed (828x compared to GNN), but the Table 2 inference speeds report all the same inference speeds on FitNet, AT, LSP, MSK, VQG, and the author's method on NC. I am skeptical whether these are the right numbers.
5. Also, VQG may not be the right candidate to compare since VQG tackles a different problem, a GNN to MLP distillation.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Overall, the authors' experiments show their new contributions can improve the GNN models in an agnostic way. However, some numerical performance results from the authors are too low (including both teacher and student) compared to the other papers reported. After the authors match their teacher's performance to the other papers (e.g. CiteSeer from MSKD by Zhang et al.) and observe good student performance with a small gap to the teacher + state-of-the-art results, I am happy to raise the score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the time and effort in reviewing our paper. We take all comments seriously and try our best to address every raised concerns. Please feel free to ask any follow-up questions.
---
# Q1:
Yes, our experimental settings strictly follow the original papers. We used the same pre-trained teachers, which were the best-performing models obtained by extensive hyperparameter tuning.
MSKD [1] is a strong baseline method that introduces multi-scale teachers, providing more comprehensive information to enhance distillation efficiency. Regarding the impressive performance of the teacher models in MSKD (GCN 89.67%, GAT 95.47%) on Citeseer: despite extensive efforts, we were unable to make our GCN teacher achieve similar performance. Since the authors did not release the Citeseer dataset they used, the significant performance differences may be due to variations in dataset preprocessing or dataset versions.
We believe that we have obtained a well-trained and effective GCN teacher model on Citeseer, since our GCN teacher's performance (73.97%) is comparable to recent works that use Citeseer. VQG (one of our baseline methods, ICLR'24) [2] reports that its GCN teacher model performs at 70.49% on Citeseer. Additionally, reported F1 scores for node classification with GCN on Citeseer are 71.0% in [3] (NIPS'23), 72.9% in [4] (NIPS'23), and 73.18% in [5] (WWW'23).
---
# Q2:
We also believe that we have obtained a well-trained and effective GAT teacher on Citeseer. Reported F1 scores with GAT on Citeseer are 73.00% in [4] (NIPS'23) and 76.63% in [5] (WWW'23). These results are comparable to our GAT teacher's performance (75.24%).
---
# Q3:
Unfortunately, due to a lack of detailed experimental information and specific dataset versions, we could not replicate MSKD teachers' performance on Citeseer.
We agree that using a well-trained teacher model is very important in KD. Therefore, we conducted extensive hyperparameter tuning to ensure our teacher models achieved near-optimal performance on each dataset, comparable to the best results reported in other recent graph-related studies [2][3][4][5].
Moreover, all methods employed the same student architecture. Thus, the concern that insufficient training of the teacher model may result in illusory student-model performance should not apply to our experiments.
---
# Q4:
All runtime data were recorded using Python's datetime library.
In our experiments, we replaced the student model of VQG with GCN to maintain consistency with the other baseline methods. This adjustment is likely the primary reason why all students, including VQG's, exhibit similar inference times.
We analyzed the speed-up of the student compared to their teachers in our method:
**Theoretically**:
GCN's time complexity is $O(\sum^L_{l=1}(|E|H + NH^2))$,
where $N$, $|E|$, and $L$ denote the number of nodes, edges, and layers, respectively, and $H$ denotes the hidden dimension. The teacher and student models have hidden dimensions of 128 and 8, respectively.
Assuming the graph is sparse, i.e., $|E| \approx O(N)$, the student model is theoretically about **229x** faster than the teacher model.
**Empirically**:
The inference times (ms) of the teacher and student models measured by datetime library are shown below:
| | Wiki-CS | Co-Physics | Pubmed | Citeseer | Cora | Average |
| -------- | ------- | ---------- | ------ | -------- | ---- | ------- |
| Teacher | 906 | 3410 | 914 | 908 | 975 | 1422 |
| Student | 3.98 | 12.0 | 4.46 | 4.01 | 4.43 | 4.22 |
| Speed-up | 227x | 284x | 204x | 226x | 220x | 232x |
Our method achieves notable acceleration, with **an average speedup of 232×**. While the different student architecture results in a lower speedup compared to VQG, our method's student achieves a high compression rate (~2%, see Table 2), whereas VQG's MLP students and teachers are similar in size (see the appendix of [2]).
We chose GNN as the student model due to considerations of the trade-off between performance and inference speed. Please refer our responses to **Reviewer GamE's W3 and Q1** for more details.
---
# Q5:
Yes, VQG [6] was originally proposed to address GNN-to-MLP distillation. In our experiments, we replaced the student model of VQG with GCN to maintain consistency with the other baseline methods.
For a comparison with VQG and another MLP-based graph KD method, GLNN [7], we used GCN as the teacher model and replaced our student model with an MLP identical to VQG's student. We kept the same settings as [6], and the results in the following table demonstrate that our method achieves effective distillation even with MLP-based student networks.
| | Citeseer | Pubmed | Cora |
| -------- | --------- | --------- | --------- |
| **GLNN** | 69.23 | 74.70 | 78.97 |
| **VQG** | 74.59 | 77.13 | **81.15** |
| **Our** | **76.00** | **77.49** | 81.08 |
(part of results from [6])
---
In these responses, we explained why our teacher models are well trained. Besides, all methods used the same settings and pre-trained teacher models, with results averaged over 10 trials. Based on these solid results, our method achieved state-of-the-art performance, and we hope this can change your opinion.
---
References:
[1] Multi-Scale Distillation from Multiple Graph Neural Networks, AAAI’22
[2] VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs, ICLR’24
[3] On Class Distributions Induced by Nearest Neighbor Graphs for Node Classification of Tabular Data, NIPS’23
[4] Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous Graph Diffusion Functionals, NIPS’23
[5] GIF: A General Graph Unlearning Strategy via Influence Function, WWW’23
[6] VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs, ICLR’24
[7] Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation, ICLR’22
---
Rebuttal 2:
Comment: Dear Reviewer Z3B3,
We sincerely thank you for your time and effort in reviewing our paper. In responses to your comments, we reviewed relevant works and demonstrated that our pre-trained teacher models are sufficiently trained on the Citeseer dataset. We analyzed the acceleration ratio of the student model compared to the teacher model from both theoretical and practical perspectives. Additionally, we replaced our student model with an MLP model of the same architecture as used in the VQG method and conducted comparative experiments. We would be grateful for the opportunity to discuss whether your concerns have been fully addressed. Please let us know if you still have any unclear part of our work.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. After thoroughly reviewing it and considering the results of the additional experiments, most of my concerns have been addressed. I raised my score.
---
Rebuttal 3:
Title: Thank you!
Comment: Thank you for your positive feedback and we are glad to know that our rebuttal and new experiments have addressed most of your concerns!
Graph data may exhibit geometric heterogeneity (e.g., grid or tree structures) across different local structures. Therefore, we are committed to matching appropriate embedding geometric spaces through meticulous analysis and extraction of local structural features to enhance graph representation quality. We validated the effectiveness of this paradigm through knowledge distillation tasks.
We are pleased to learn that you found our paper well-written and that the experimental results validate the effectiveness of our agnostic cross-geometric framework and modules. In our initial rebuttal, we analyzed the speedup achieved by our method from both theoretical and practical perspectives and compared our method with GNN-to-MLP methods by replacing the student model of our method with an MLP. We will ensure that these important details are included in the revised version. Thank you again for your insightful comments and valuable suggestions!
Best regards,
Authors | Summary: This paper presents a cross-geometric graph knowledge distillation method for graph neural networks. This method employs multiple teacher models, each generating different embeddings with distinct geometric properties, such as Euclidean, hyperbolic, and spherical spaces. The student model is based on Euclidean space. Two modules, Structure-Wise Knowledge Transfer (SWKT) and Geometric Embedding Optimization (GEO), are proposed to enhance performance. To evaluate the proposed approach, distillation experiments are conducted on node classification (NC) and link prediction (LP) tasks across various types of graph data.
Strengths: 1. This work presents a novel cross-geometric knowledge distillation framework. Distilling knowledge from Euclidean and hyperbolic geometries in a space-mixing fashion is a new approach for me.
2. A fine-grained analysis of the subgraphs is provided.
3. Experiments show better results compared to previous knowledge distillation methods.
Weaknesses: 1. In the presented cross-geometric knowledge distillation framework, the student model learns from multiple teacher models with different geometric information. Properly training all these teacher models is challenging. It is unclear how to ensure the student model's reliability if some teacher models fail.
2. The student model operates only in Euclidean space, raising questions about whether other student models, such as those in hyperbolic space, can achieve similar results.
3. All experiments are conducted solely on GCN, with no results provided for other graph architectures.
4. It is unclear if a student model using the same architecture as the teacher models, within the cross-geometric knowledge distillation framework, can achieve better performance than the teacher models.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. If one or more teacher models fail, how does this impact the performance of the student model?
2. Can the student model operate in other geometric spaces, such as hyperbolic space?
3. Can the proposed framework work with other architectures, such as Graph Transformer Networks [1]?
4. If the student model uses the same architecture as the teacher models, how does it perform?
[1] Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, Hyunwoo J. Kim:
Graph Transformer Networks. NeurIPS 2019
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: This work has discussed its potential limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the time and effort in reviewing our paper. We hope that our response can resolve your concerns. Please feel free to ask any follow-up questions.
---
# W1 & Q1:
While it is true that the failure of one or more teacher models could potentially impact the student model’s performance, we have implemented several mechanisms in our method to mitigate this risk:
- **Ensemble Learning:** Using multiple teacher models that capture different geometric properties provides redundancy and robustness. If one model fails, the others still contribute valuable insights, minimizing the impact on the student model.
- **Geometric Optimization Network:** GEO dynamically adjusts the weight of information from each teacher model based on the loss function, reducing the influence of any underperforming model and ensuring the student model receives the most reliable information.
**Additional experiments**:
We designed various experimental strategies to assess the impact of failing teachers on students:
S1: Train student models without KD.
S2: Train student models with the best-tuned teacher model.
S3: Train student models with an underperforming teacher model.
S4: Train student models with an untrained teacher model.
S5: Train student models with all untrained teacher models.
Note: All methods except MSKD and ours use a single teacher model, so their S5 F1 scores are N/A.
Teachers' F1 score (%) of different experimental strategies:
| | S2 | S3 | S4 | S5 |
| ------------------ | ----- | ----- | ----- | ----- |
| Euclidean Teacher | 86.98 | 66.09 | 36.61 | 25.55 |
| Hyperbolic Teacher | 90.90 | 64.86 | 52.58 | 30.12 |
NC f1 score (%) of student models on Cora under different experimental strategies:
| Methods | S1↑ | S2↑ | S3↑ | S4↑ | S5↑ | Std(S2-S4) |
| ------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| FitNet | 82.06 | 80.32 | 56.27 | 53.81 | N/A | 11.95 |
| AT | 82.06 | 80.49 | 63.39 | 51.11 | N/A | 12.04 |
| LSP | 82.06 | 83.34 | 71.33 | 56.76 | N/A | 10.86 |
| MSKD | 82.06 | 82.48 | 81.82 | 78.62 | 21.62 | 1.68 |
| VQG | 82.06 | 83.02 | 70.02 | 56.02 | N/A | 11.02 |
| Our | 82.06 | 86.05 | 85.26 | 81.33 | 22.85 | 2.06 |
Except in S5, where the teachers have an average performance of only about 30%, our method’s distilled student models consistently maintain stable performance even when some teacher models fail.
---
# W2 & Q2:
Yes, the student model can operate in other geometric spaces. We initially chose a Euclidean student model to combine the accuracy benefits of hyperbolic space with Euclidean efficiency and stability. Our framework is model-agnostic, allowing the student model to be replaced with other neural networks.
To validate student models in various geometric spaces, we tested NC F1 scores (%) on the Cora dataset. The Euclidean and hyperbolic teachers’ F1 scores are 86.98% and 90.90%, respectively.
| | FitNet | AT | LSP | MSKD | VQG | Our |
| ------------------ | ------ | ----- | ----- | ----- | ----- | --------- |
| Euclidean Student | 80.32 | 80.49 | 83.34 | 82.48 | 83.02 | 86.05 |
| Hyperbolic Student | 86.73 | 86.00 | 87.96 | 88.21 | 88.45 | 90.42 |
| Spherical Student | 75.92 | 83.54 | 84.77 | 85.26 | 83.54 | 86.73 |
| Average | 80.99 | 83.34 | 85.35 | 85.31 | 85.00 | **87.73** |
---
# W3 & Q3:
Our framework is model-agnostic and works with various model architectures.
In fact, we have conducted experiments with alternative architectures. We apologize if our results for replacing GCN with GAT, originally presented in Table 7 in the appendix due to space constraints, escaped the reviewer's attention.
We also followed your suggestion and replaced the Euclidean teacher model with the Graph Transformer Network (GTN)[1].
| | GTN Teacher | Hyperbolic Teacher | GTN Student | GCN Student | GTN Student Inference time | GCN Student Inference time |
| ------ | ----------- | ------------------ | ----------- | ----------- | -------------------------- | -------------------------- |
| WikiCS | 82.74 | 81.83 | 82.48 | 74.26 | 15.34ms | 3.98ms |
| Cora | 87.87 | 90.90 | 86.37 | 86.24 | 17.67ms | 4.43ms |
The GTN teacher has 4 layers and a hidden dimension of 128. Specifically, during distillation, the student model's $l$ layers are matched to the last $l$ layers of the teacher. The GTN and HGCN teachers output intermediate representations from each layer to the SWKT module for local subgraph structure extraction and selection. These distributions are then optimized by the GEO module. The features extracted from the optimized cross-geometric intermediate representations are then transferred to the student via the corresponding loss function. Additionally, a traditional KD loss is computed from the logits output by both the teacher and student models.
---
# W4 & Q4:
Following your suggestion, we conducted experiments using student models with the same architecture as the teacher models in our method. The following results are NC F1 scores (%) on the Cora dataset. The Euclidean and hyperbolic teachers’ F1 scores are 86.98% and 90.90%, respectively.
| | FitNet | AT | LSP | MSKD | VQG | Our |
| ------------------ | ------ | ----- | ----- | ----- | ----- | ----- |
| Student in paper | 80.32 | 80.49 | 83.34 | 82.48 | 83.02 | 86.05 |
| Euclidean student | 86.73 | 86.24 | 86.98 | 86.98 | 86.49 | 87.47 |
| Hyperbolic student | N/A    | N/A   | N/A   | N/A   | N/A   | 90.42 |
Compared to the results presented in Table 1 of our paper, the student models now even outperform some teacher models; however, using the same architecture as the teacher makes the student models larger and slower, limiting their suitability for resource-constrained devices.
---
References:
[1] Graph Transformer Networks. NeurIPS'19
---
Rebuttal 2:
Comment: Dear Reviewer ogwd,
We sincerely thank you for your time and effort in reviewing our paper. In responses to your comments, we have carefully designed and conducted experiments and analyses, which we believe have covered your concerns. We would be grateful for the opportunity to discuss whether your concerns have been fully addressed. Please let us know if you still have any unclear part of our work.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the response. I have carefully reviewed it and the results of the new experiments have addressed most of my concerns. I am pleased to increase my score.
---
Reply to Comment 2.1.1:
Title: Thank you!
Comment: We sincerely appreciate your valuable reviews and are glad to know that our rebuttal and new experiments have addressed most of your concerns.
Graph data may exhibit geometric heterogeneity (e.g., grid or tree structures) across different local structures. Therefore, we are committed to matching appropriate embedding geometric spaces through meticulous analysis and extraction of local structural features to enhance graph representation quality. We validated the effectiveness of this paradigm through knowledge distillation tasks.
We are delighted to see that you consider our method, which combines the advantages of teacher models from different geometric spaces and analyzes local subgraph features from a fine-grained perspective for knowledge distillation, to be an innovative distillation paradigm. We fully agree that further validating the robustness of our framework (e.g., cases where some teacher models fail) and its model-agnostic nature (e.g., changing the architecture of teacher and student models) will strengthen our work. We conducted some experiments in our initial response to provide support and will incorporate these results into the experimental section in the revised version as your suggestions! Thank you again for your insightful comments and valuable suggestions!
Best regards,
Authors | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely thank the reviewers for their time and effort in reviewing our paper. We take all comments seriously and have tried our best to address every concern raised. Please feel free to ask any follow-up questions.
It is encouraging to learn that the reviewers stated that our method, which leverages the advantages of different geometric spaces for graph knowledge distillation, is novel (ogwd, rr98) and intriguing (GamE). Reviewers also acknowledged that extensive (Z3B3) experiments conducted on various datasets (ogwd, Z3B3) show that our method outperforms baseline methods (ogwd, Z3B3, rr98). Reviewer Z3B3 kindly noted that the paper is well written.
Since the reviewers focused on the empirical performance of our method, we would like to highlight our **theoretical contributions** here. To address the challenge that real-world graphs often exhibit geometrically heterogeneous characteristics, we have introduced **a novel model-agnostic cross-geometric knowledge distillation framework**. This framework, for the first time, leverages the advantages of various geometric spaces to offer high-quality guidance to the student model. Our dual-teacher setup and adaptive GEO module ensure that our method remains effective even under adverse conditions (such as partial teacher failure). Furthermore, we introduce **a new fine-grained distillation perspective** that evaluates embeddings, extracts knowledge, and transfers it as hints to the student model based on local subgraphs.
In the past week, we carefully improved the experiments (utilizing all available computational resources), clarified various aspects, and expanded our discussions to address the concerns, questions, and requests from all four reviewers. **In summary, we have implemented the following improvements**:
- We analyzed the computational efficiency of student models compared to teacher models in our method, examined the inference acceleration achieved by student models, and validated these findings through experiments. (in response to reviewer Z3B3’s Q4, Game’s W1)
- We provided a complete analysis of the time and space complexity of our method. (in response to reviewer rr98's W1)
- We conducted distillation experiments with non-Euclidean student models. (in response to reviewer ogwd’s Q2)
- We conducted distillation experiments by replacing the teacher model with graph transformer. (in response to reviewer ogwd’s Q3)
- We replaced the student model with MLPs, conducted distillation experiments, and compared the results with methods that use MLPs as students. (in response to reviewer Z3B3’s Q5, Game’s Q1)
- We conducted distillation experiments on larger datasets, ogbn-arxiv and ogbn-proteins. (in response to reviewer Game’s W2)
- We analyzed the impact of failed teacher models on distillation results and designed experiments to validate this. (in response to reviewer ogwd’s Q1)
- We clarified the motivation for choosing GNN as the student model and its differences from MLP students. (in response to reviewer Game’s W3 and Q1)
- We conducted distillation experiments where the teacher and student models had identical architecture. (in response to reviewer ogwd’s Q4) | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Retrieval & Fine-Tuning for In-Context Tabular Models | Accept (poster) | Summary: Tabular data is an important yet understudied modality in machine learning. Following the recent success of TabTransformer and TabPFN, significant progress has been made on tabular tasks. This paper is an extension of TabPFN, a previous state-of-the-art tabular learning model for small-scale data, which the authors combine with retrieval and finetuning. The motivation is intuitive and the idea is natural.
In the retrieval part, the authors reuse the simple kNN trick and discuss in detail its performance with TabPFN.
In the finetuning part, the authors construct the training data using shared contexts and finetune the TabPFN model.
The authors experiment on large-scale datasets and propose LoCalPFN, which achieves state-of-the-art performance.
Strengths: - Tabular learning is an important topic in real-life applications
- The methodology of retrieval and finetuning is not dependent on a specific ICL tabular model.
- The performance gain from retrieval and finetuning is clear w.r.t. TabPFN
- Scaling experiments are provided, which is important for large-scale analysis
Weaknesses: - There are many limitations on the dataset requirements, e.g., number of features, number of classes, and tasks
- No other deep learning or transformer-based models are included as baselines
- Sec 2.4 is not quite clear about the finetuning steps, though the general idea is appreciated.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. L32, memory scales quadratically in the size of the dataset. Do the authors mean exactly the calculation of the attention matrix? If so, it seems this memory limit is general, and what the authors do is leverage the limited context length better?
2. In Table 1, is the gain from LoCalPFN w.r.t. TabPFN-kNN coming only from finetuning per dataset?
3. Just make sure I understand correctly, do the authors fine tune on each dataset of 95 benchmarks?
4. How are tree methods evaluated? Any hyperparameter optimization?
5. What is the training time and inference time on each dataset and how this time scales with dataset size?
6. What are the selection criteria for the 95 datasets? How are they chosen, preprocessed, etc.?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > There are many limitations on the datasets requirements, e.g. number of features, number of classes, tasks
We agree. There are ways to go beyond these limitations while still using the same architecture, though. For instance, one can perform feature selection very efficiently, as the forward pass of TabPFN is very fast. Furthermore, multiclass problems can always be reduced to binary classification problems. Finally, even regression can be cast as classification, as we demonstrated in the general message.
But we share your feelings and are currently investigating training a better base model that does not suffer from these limitations. This is, however, a different technical contribution, on top of which the present method can be applied as well.
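As a generic illustration of the multiclass-to-binary reduction mentioned above, here is a one-vs-rest sketch with a toy scoring function (the function names and the toy classifier are ours for illustration, not the exact scheme used in the paper):

```python
import numpy as np

def one_vs_rest_predict(binary_proba, X, n_classes):
    # Column c holds P(y == c) from the c-vs-rest binary classifier;
    # the predicted class is the one with the highest binary confidence.
    scores = np.stack([binary_proba(X, c) for c in range(n_classes)], axis=1)
    return scores.argmax(axis=1)

# Toy stand-in for a binary base model: confidence decays with
# the distance between the (single) feature and the class index c.
X = np.array([[0.1], [1.9], [1.2]])
proba = lambda X, c: np.exp(-(X[:, 0] - c) ** 2)
print(one_vs_rest_predict(proba, X, n_classes=3))  # [0 2 1]
```

In this reduction, each binary problem stays within the base model's class limit, at the cost of one forward pass per class.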
> No other deep learning or transformer based models as a baseline
We address this point in the paragraph “Deep learning model comparisons” in Section 4.2 which points to Table 5 in appendix. Essentially deep learning models are usually slower to train and we could only obtain results on a subset of the datasets. We still report their performance and ours on those. We believe the contrast is very clear.
> Sec 2.4 is not quite clear in finetuning steps, though the general ideas are appreciated.
This section is indeed the most technical part of the paper, and we’d be happy to answer any specific question and update the main text. In the meantime, to explain the main point differently:
Let’s say we want to use a context of size N_ctx = 1000 with which we wish to classify N_qy = 500 query points. With vanilla TabPFN, we simply construct a sequence of size 1500 and tell the model which points are context and which are queries through the attention mask.
However, when using a local context, each of the 500 queries has its own context of 1000 points. We now have to fit $(1000+1) \times 500$ points on the GPU! For inference this is fine, but when having to do backprop over many steps, it just becomes too slow.
What we do instead is find a way to select points that are all “local” to each other, so that we can use a context within that neighborhood and share it across all queries, even though it might not contain the exact neighbors of each query. We find that this works very well and efficiently in practice.
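A minimal NumPy sketch of this shared-context sampling idea (the function name and the single-anchor neighborhood heuristic are ours for illustration, not the paper's exact procedure):

```python
import numpy as np

def shared_local_batch(X, y, n_ctx=1000, n_qy=500, seed=0):
    """Pick one anchor point, gather its n_ctx + n_qy nearest neighbors,
    and split that single neighborhood into a context shared by all
    queries in the batch."""
    rng = np.random.default_rng(seed)
    anchor = X[rng.integers(len(X))]
    dist = np.linalg.norm(X - anchor, axis=1)   # brute-force Euclidean
    idx = np.argsort(dist)[: n_ctx + n_qy]      # one local neighborhood
    rng.shuffle(idx)                            # mix context/query roles
    ctx, qy = idx[:n_ctx], idx[n_ctx:]
    return (X[ctx], y[ctx]), (X[qy], y[qy])

X = np.random.randn(5000, 16)
y = np.random.randint(0, 2, size=5000)
(ctx_X, ctx_y), (qy_X, qy_y) = shared_local_batch(X, y)
print(ctx_X.shape, qy_X.shape)  # (1000, 16) (500, 16)
```

Because every query in the batch reuses the same 1000-point context, only N_ctx + N_qy points need to fit on the GPU during backprop, instead of (N_ctx + 1) * N_qy.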
> L32, memory scales quadratically in the size of the dataset. Do the authors mean exactly the calculation of attention matrix? If so, it seems this memory limit s general and what the authors do is to leverage the limited context length better?
Yes, this is correct for the TabPFN-kNN method. Note that this poses some challenges on how to implement this efficiently but your understanding is correct.
> In Table 1, is the gain from loCalPFN w.r.t TabPFN-kNN coming from only finetuning per dataset?
Yes this is correct.
> Just make sure I understand correctly, do the authors fine tune on each dataset of 95 benchmarks?
Yes. (see L43, L132)
> How are tree methods evaluated? Any hyperparameter optimization?
We use the results from TabZilla. All baselines used HPO indeed. Please refer to Appendix A.2.1 for more details.
> What is the training time and inference time on each dataset and how this time scales with dataset size?
We provided some runtime results in the attached pdf. The main takeaway is that retrieval is not the bottleneck here, it is the cost of doing the finetuning.
> What is the selection criteria of 95 datasets? How are they chosen, preprocessed, etc.
We tried to address this question in the first paragraph of the experiment section “The 95 datasets are filtered from TabZilla to meet TabPFN’s architectural requirements by ensuring that each dataset has at most 100 features, at most 10 classes, does not contain NaN values, and has at least one instance per class for each split”. These are essentially conditions so that the datasets can be used without modification by the base model, TabPFN. The list of datasets is available in Appendix A.1
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the feedback. I am still on the borderline regarding the improvement w.r.t. TabPFN. Happy to hear what other reviewers think.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with us!
We have tried to address the point of the limitations of TabPFN by explaining how to expand to more classes and provided a regression experiment. Do you have a specific limitation in mind you would like us to address?
Furthermore we would like to point out that our contribution is the retrieval and fine-tuning procedure (and their efficient implementation such as the shared context approximation or the joint retrieval+fine-tuning), our paper is not concerned with modifications to the base model (TabPFN) which is definitely important, but orthogonal, future work.
We believe we addressed your concerns, which notably included the deep learning and their runtime experiment, clarification on the fact that HPO is indeed used for our baselines (30 rounds), and answering specific clarifications alongside with the concerns about TabPFN's limitations which are addressed in the general message and discussed above. We hope that our additional experiments and explanations will be reflected in your evaluation of our work. | Summary: The authors extend the recently introduced TabPFN to larger and more complex datasets by fetching a relevant context for each test point using a KNN algorithm. The author evaluate two methods, TabPFN-knn, which consists of using the original TabPFN on the fetched context for each test point, and LocalPFN, which adds a finetuning step to adapt TabPFN to these new kind of local context. For this finetuning, an approximation is devised to use close points together in both the context and the query, enabling a shared context which is faster, but with results close to the result of a KNN for each point.
The paper shows extensive evaluations of these two methods, demonstrating state of the art results against the original TabPFN, Gradient Boosting Trees like CatBoost, and (not pretrained) neural networks on a previously introduced benchmark.
Furthermore, the authors provide several ablations, for instance showing the importance of jointly adding the knn and finetuning steps.
Strengths: Important and novel contribution, which allow to use TabPFN is many more settings.
The evaluation is well done. The authors benchmark their method on a previously introduced dataset, and the selection they use is principled, showing that no cherry-picking of dataset was used. The baselines are strong (modern GBDT like catboost, modern NN like ft transformer, and different tabpfn variants).
The ablations are interesting and numerous. For instance, I was wondering about TabPFN-3k-32ens-int and was happy to see it. The importance of doing the finetuning jointly with the knn is interesting. The approximation used are also ablated, like in table 12.
The paper is very well introduced and written, and make for a very pleasant read.
Weaknesses: Some details are missing, for instance:
- how are runtimes computed for figure 11? Which hardware? Which hyperparameters?
- I'm not sure whether LocalPFN undergoes some HPO or not (I think not). If it's not the case, I think it should be made clearer, as it makes the comparison to tuned GBDT more impressive. If it does, the hyperparameter space should be provided.
I think TabPFN-**1k**-32ens-int (maybe less than 32 ens if it's slow?) would be an interesting baselines as TabPFN is optimized for smaller datasets than 3K.
I think more aggregation metrics should be provided (for instance mean rank, mean normalized score (z-score or other rescaled scores)..).
figure 4 lacks details: which datasets are used? Are the different datasets subsampled to reach smaller sample sizes? I also don't understand why absolute mean AUC is decreasing when the dataset size is increased after a certain point in Figure 7. Is it because the list of datasets is changing?
The code is not currently available.
Technical Quality: 3
Clarity: 4
Questions for Authors: For inference, you have to use a different context for each points to predict, right? How long / memory intensive is it? What is the runtime repartition between finetuning and inference?
Do you think you can finetune with a smaller k than what you use for inference?
> We use the experimental results from TabZilla [35] when they are available.
I assume they are always available expect for TabPFN variants? I also think that the fact that your using the hyperparameter spaces from [35] should be said in the main text (if that's indeed the case).
Which distance are you using in faiss? I'm wondering if using the euclidean distance would improve the performance of the ordinal encoded features for the knn (compared to using one-hot-encoding).
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitation of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thorough review and helping us improve the paper.
> How are runtimes computed for figure 11? Which hardware? Which hyperparameters?
The time reported is the average time to perform training+inference for a single run (averaged over parameters/datasets/seeds). As such, it does not account for the HPO of baseline methods that used one.
As reported in Appendix A.2.2 the hardware is as follows:
All experiments for our proposed methods can be run on a machine with a single NVIDIA RTX 6000 GPU Ada Generation, 995Gi RAM, and AMD Ryzen Threadripper PRO 5995WX 64-Cores CPU.
> I'm not sure whether LocalPFN undergoes some HPO or not (I think not). If it's not the case, I think it should be made clearer, as it makes the comparison to tuned GBDT more impressive. If it does, the hyperparameter space should be provided.
We conducted some minor experiments with HPO and saw no difference, except for the learning rate, which we tuned by hand. Thus, we kept the default parameters we had initially (from the TabPFN repo). We believe this, along with other choices having little effect (such as using embedding vs. raw space, or Euclidean distance vs. inner product), shows the approach is quite robust.
Indeed we will make this clear in the main text, thanks!
> I think TabPFN-1k-32ens-int (maybe less than 32 ens if it's slow?) would be an interesting baselines as TabPFN is optimized for smaller datasets than 3K.
We do not have the exact numbers but generally TabPFN generalizes well to larger context and on average using larger context helps. The performance of 1k-32ens-int was above 1k-32ens but below 3k-32ens-int.
> I think more aggregation metrics should be provided (for instance mean rank, mean normalized score (z-score or other rescaled scores)..).
We provide the mean rank, normalized AUC, z-scores for the algorithms of Table 1 in Table 3 of the attached pdf. Please let us know if there are some metrics you’d particularly wish to see and we’ll try to include them during the discussion phase. We made the choice to avoid normalized measures as the main metrics in our paper as we believe it can hinder the reproducibility of the results as the scores are now dependent on an exact set of algorithms. However we agree that they are also interesting and will include them in the paper.
> figure 4 lacks details: which datasets are used? Are the different datasets subsampled to reach smaller sample sizes? I also don't understand why absolute mean AUC is decreasing when the dataset size is increased after a certain point in Figure 7. Is it because the list of datasets is changing?
Good question, we will clarify it in the main text. Your second guess is correct. In this figure we have not subsampled or tampered with the datasets in any way, but only binned them in the appropriate size range. This also means that the mean AUC of each bin is not directly comparable to another bin as the datasets within one bin are totally disjoint from the one in another bin. It may be that datasets in the range 1000-3000 for instance are on average (in Tabzilla) harder than those in the range 3000-10000.
This is the main reason why we use a standard algorithm as a baseline so that some phenomena (such as TabPFN’s performance decreasing) with dataset size become visible.
> The code is not currently available.
Following NeurIPS policy we are reaching out to the AC in order to be allowed to share an anonymous repo link with you. Note that we intended to release our code later (as we would like to see our method being used, we are particularly hopeful TabPFN-kNN could be used as a replacement for TabPFN), and it is currently not optimally refactored/documented.
> For inference, you have to use a different context for each points to predict, right? How long / memory intensive is it? What is the runtime repartition between finetuning and inference?
TabPFN-kNN is essentially LoCalPFN without finetuning. By looking at Figure 11, it can be seen that LoCalPFN takes an order of magnitude longer (or more) compared to TabPFN-kNN. This is due to LoCalPFN having to finetune. Thus, finetuning takes an order of magnitude more time than inference.
> Do you think you can finetune with a smaller k than what you use for inference?
We suspect this approach would work (refer to the previous point about robustness). However, it would cause a disparity between finetuning and inference and might have negative effects. As for a smaller k helping with finetuning time, most of the time is spent on model parameter updates, not retrieval.
> I assume they are always available expect for TabPFN variants? I also think that the fact that your using the hyperparameter spaces from [35] should be said in the main text (if that's indeed the case).
We thank the reviewer for the observation; we will update the paper to reflect this more clearly. Most of the values were indeed present, but we needed to re-run TabPFN (missing on many datasets) and perform HPO for CatBoost on one dataset it was missing as well.
> Which distance are you using in faiss? I'm wondering if using the euclidean distance would improve the performance of the ordinal encoded features for the knn (compared to using one-hot-encoding).
We also had intuitions about which encoding/distance would work best. We tried multiple approaches and they made little difference, so we ended up using the simplest variant. Please refer to the general message for more details.
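For illustration, the two metric choices under discussion can be compared with a brute-force stand-in for a flat nearest-neighbor index (this sketch is ours, not the actual faiss-based implementation):

```python
import numpy as np

def knn_indices(X, q, k, metric="l2"):
    """Indices of the k nearest rows of X to query q.
    A brute-force stand-in for a flat nearest-neighbor index."""
    if metric == "l2":
        scores = -np.linalg.norm(X - q, axis=1)  # higher score = closer
    else:  # "ip": inner-product similarity
        scores = X @ q
    return np.argsort(-scores, kind="stable")[:k]

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1]])
q = np.array([1.0, 1.0])
print(knn_indices(X, q, 2, "l2"))  # [1 2]
print(knn_indices(X, q, 2, "ip"))  # [1 2]
```

On this toy example both metrics agree; in our experiments, too, the choice between them (and between one-hot vs. ordinal encodings) made little difference.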
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed answer, and for the new tables in the pdf!
The thing which is still not clear for me is the inference time. I think it would be useful to report both the training and inference time separately. In many settings people care about having very low inference time, and this is already a limitation of TabPFN (very fast training but quite slow inference time). Something like inference time / 1000 samples would be interesting to report in the paper. I'm still wondering if for TabPFN-KNN you need one forward pass per inference point (which should be much slower than TabPFN). In Figure 11 TabPFN-KNN seems really fast, so I'm wondering if I'm missing something or if you have some tricks.
---
Rebuttal 2:
Comment: Indeed, we had only reported the full training+eval loop runtime for the algorithm but not the inference time specifically before.
Let's get into more details.
To classify $N_\text{qy}$ examples given $N_\text{ctx}$ context points, TabPFN would construct a tensor of size ($L=N_\text{ctx}+N_\text{qy}, B=1, d$) with appropriate masking.
As we do not share the context across queries, with TabPFN-kNN we need a specific context for each point, and as such our input tensor has size ($L=N_\text{ctx}+1, B = N_\text{qy}, d$), where the $+1$ is the query point to classify in each batch dimension.
For instance, for 512 query points and a context size of 1000 neighbors, the inference time of TabPFN would be about 0.01s. A naive TabPFN-kNN takes about 1s. However, using bf16 precision we can lower it to about 0.4s (0.008s for TabPFN).
If we used 128 queries, the numbers would be 0.008s (TabPFN) vs 0.1s (TabPFN-kNN).
So as we observe, we are slower than TabPFN.
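To make the layout difference concrete, here is a minimal numpy sketch of the two input shapes using the example numbers above (512 queries, 1000 context points); the array names and the feature dimension are illustrative, not taken from our actual implementation:

```python
import numpy as np

N_ctx, N_qy, d = 1000, 512, 100  # context size, number of queries, feature dim (illustrative)

# Vanilla TabPFN: one shared context, all queries appended to a single sequence.
tabpfn_input = np.zeros((N_ctx + N_qy, 1, d))      # (L, B, d) = (1512, 1, 100)

# TabPFN-kNN: no context sharing, so each query carries its own retrieved
# context in the batch dimension, plus the query point itself (the +1).
tabpfn_knn_input = np.zeros((N_ctx + 1, N_qy, d))  # (L, B, d) = (1001, 512, 100)

# The kNN variant therefore processes far more tokens per forward pass,
# which is why it is slower despite the shorter sequence length.
ratio = tabpfn_knn_input.size / tabpfn_input.size
```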
Of the 95 datasets, the attached pdf reports the 71 on which the deep learning baselines were able to run under the Tabzilla design; the largest datasets are typically excluded (Tabzilla allocates a time limit for each algorithm and dataset, so many slow DL baselines do not run on the largest datasets within the given budget). This is why TabPFN-kNN is really fast there. In Figure 12, which includes all datasets, the average speed is about 10x slower, but still fast considering there is no training.
The fact that TabPFN-kNN is slower (and takes a lot more memory, especially when having to perform backprop as we do when performing fine-tuning end-to-end with retrieval) is why we introduced the approximate NN context sharing for fine-tuning.
All in all, TabPFN-kNN is not the algorithm with the fastest inference speed; TabPFN is much faster, and so are XGBoost and similar methods.
However, we believe this algorithm still has very important strengths: it scales extremely well with dataset size (whereas TabPFN's performance degrades strongly), so in the case of production ML models that have access to a very large dataset but only have to classify new queries at a lower rate, it is a very well-suited algorithm.
Furthermore, indexing on large datasets is usually much faster than retraining any ML algorithm, as such it can adapt very fast to changes in data distribution (such as covid for instance) without needing expensive retraining.
Note that the evaluation runtime was never the bottleneck in our research, so this is not what we focused on, but there are other simple ways one could improve upon it. For example, we can use the clustering capabilities of faiss to split the training data into a fixed number of contexts. This way we can ensure many queries will share the same context and harness the speed of TabPFN. While it may degrade the accuracy of the NN search, we think it can be an interesting trade-off if inference speed is an issue.
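A rough sketch of this context-sharing idea, with plain numpy k-means standing in for faiss's clustering (all names, sizes, and the choice of k here are illustrative assumptions, not our implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 8))   # training set (illustrative)
X_query = rng.normal(size=(256, 8))    # incoming queries (illustrative)

# Crude Lloyd's k-means to pre-build k shared contexts from the training data.
k, iters = 4, 10
centroids = X_train[rng.choice(len(X_train), k, replace=False)]
for _ in range(iters):
    d2 = ((X_train[:, None, :] - centroids[None]) ** 2).sum(-1)
    labels = d2.argmin(1)
    centroids = np.stack([X_train[labels == c].mean(0) for c in range(k)])

contexts = {c: X_train[labels == c] for c in range(k)}

# At inference, each query is routed to its nearest centroid's shared context,
# so many queries reuse the same context and a single TabPFN forward pass.
q_labels = ((X_query[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
```

The nearest-neighbor search becomes coarser, but queries assigned to the same cluster can be batched together with one shared context.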
Thank you for the interesting comment, we will include numbers or a figure specifically concerning the inference speed for clarity.
Have you received the anonymized code link from the AC?
---
Rebuttal Comment 2.1:
Comment: Thank you for the very clear answer!
> Have you received the anonymized code link from the AC?
No
---
Reply to Comment 2.1.1:
Comment: Here we share the anonymous link for our code: https://anonymous.4open.science/r/retrieve_ft-E2B0/README.md
Note that we removed some files (including the notebooks) to make sure anonymity is preserved. Thus the code for the analysis figures is not available in this repo (but we included that dataframe with all results).
The code should be able to run, if you have a GPU with limited memory (<48G) we recommend lowering the batch size or number of neighbors (--context_length 500 for instance).
If you are familiar with the TabPFN code base you will notice that a large portion of the code has been rewritten, notably to be compatible with the pytorch transformers class.
You will probably be interested in the methods/pfknn.py (TabPFN-kNN) and methods/ftknn.py (LoCalPFN) files.
There are currently a lot of options available in the code (different ways of computing neighbors, different embeddings, etc.), which can make the code harder to read.
Let us know if you have any questions (and please do not share this code)! | Summary: The paper introduces Locally-Calibrated PFN (LoCalPFN), an advanced model for tabular data that enhances the transformer-based TabPFN by incorporating retrieval and fine-tuning techniques. By using k-Nearest Neighbours (kNN) to select a local context for each data point and fine-tuning the model on this retrieved set, LoCalPFN adapts more effectively to larger and more complex datasets. Extensive evaluations on 95 datasets from TabZilla demonstrate that LoCalPFN outperforms both neural and tree-based methods, setting a new state-of-the-art in tabular data classification. The key contributions include addressing TabPFN's scaling issues, proposing an improved context usage with retrieval and fine-tuning, and showcasing superior performance through extensive experimentation and ablation studies.
Strengths: - The paper introduces Locally-Calibrated PFN (LoCalPFN), which enhances transformer-based in-context learning for tabular data by combining retrieval and fine-tuning techniques.
- The research is robust, with extensive evaluations on 95 datasets from TabZilla. The authors provide comprehensive experimentation and analysis, demonstrating the effectiveness of LoCalPFN compared to strong baselines, including neural and tree-based models.
- The paper is well-organized and clearly written, making the complex concepts and methods accessible.
Weaknesses: - Although LoCalPFN shows improved performance, the fine-tuning process increases computational complexity and runtime, especially with large datasets.
- The paper's reliance on TabPFN as the base model restricts the generalizability of the proposed method. While LoCalPFN demonstrates significant improvements, it remains unclear whether these benefits would transfer to other in-context learning models for tabular data.
- The current implementation of LoCalPFN is constrained by TabPFN’s limitations on the number of features and classes, as well as its incompatibility with regression tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Have you tested the retrieval and fine-tuning techniques used in LoCalPFN on other in-context learning models?
- Given the constraints on the number of features and classes due to TabPFN, how do you plan to extend LoCalPFN to handle datasets with more features and classes?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - The paper focuses solely on classification tasks and does not address regression tasks. This omission leaves a gap in evaluating the full potential of LoCalPFN.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback.
> “Although LoCalPFN shows improved performance, the fine-tuning process increases computational complexity and runtime, especially with large datasets.”
Yes, this is true. However, note that the complexity added by the fine-tuning process is not specific to LoCalPFN but a cost we must also pay when fine-tuning vanilla TabPFN. Despite the additional runtime, our method has a runtime similar to many other deep learning approaches while achieving very high performance.
> “The paper's reliance on TabPFN as the base model restricts the generalizability of the proposed method. While LoCalPFN demonstrates significant improvements, it remains unclear whether these benefits would transfer to other in-context learning models for tabular data.”
Please refer to the general message. The headline is that we expect the techniques and concepts from this paper to generalize to future tabular foundation models.
> “The current implementation of LoCalPFN is constrained by TabPFN’s limitations on the number of features and classes, as well as its incompatibility with regression tasks.”
This is correct. Our contribution is a post-training one so we don’t retrain a base model. However we agree with the reviewer concerning the limitations of TabPFN and are currently working towards removing those. Please refer to the general message for more discussion regarding the possibility of other models.
> “Have you tested the retrieval and fine-tuning techniques used in LoCalPFN on other in-context learning models?”
We are not aware of tabular in-context learners other than TabPFN. Are you referring to general in-context learners (i.e., LLMs)? Multiple issues arise there:
1. Since each row cannot be considered as a token but as a (potentially very long) sequence of tokens of varying length, comparing rows for retrieval is not straightforward.
2. How to embed rows is actually not obvious in the first place; since LLMs perform best on text, most successful methods [https://arxiv.org/abs/2206.06565] rely on describing each row as a sentence.
3. Because each row would have many tokens, our context size would be much more limited: 1-2 orders of magnitude below what TabPFN can do [https://arxiv.org/pdf/2406.12031].
4. Despite all of this, LLMs for tabular data prediction still lag behind traditional methods and are harder to evaluate, as they have memorized many well-known tabular datasets [https://arxiv.org/abs/2403.06644].
> “Given the constraints on the number of features and classes due to TabPFN, how do you plan to extend LoCalPFN to handle datasets with more features and classes?”
Even though the current model is limited (100 features, 10 classes, no regression), it is still possible to extend it: for instance, we can easily perform feature selection/averaging as the forward pass is fast. Any multiclass problem can be turned into multiple binary classification problems. And as we showed in the main message, we can even perform regression as classification.
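For instance, a one-vs-rest decomposition, sketched here with illustrative labels (not code from our implementation), turns a 26-class problem into 26 binary problems, each well within the 10-class limit; predictions can then be combined by taking the argmax over the binary models' scores:

```python
import numpy as np

y = np.array([0, 3, 7, 12, 3, 25, 7])  # labels from a 26-class problem (illustrative)
classes = np.unique(y)

# One binary target vector per class; each binary sub-problem fits the 10-class limit.
binary_targets = {c: (y == c).astype(int) for c in classes}
```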
However, we also wish to go beyond those limitations and are actively working towards designing a new architecture and training a new model that do not suffer from them.
> Regression
Please refer to the general message | Summary: The paper proposes LoCalPFN, a new method that improves the scaling of transformer-based in-context learning for tabular data. It uses retrieval and fine-tuning to adapt the transformer to local subsets of the data, and demonstrates state-of-the-art performance on a variety of datasets.
The paper makes the following contributions:
* Provides a comprehensive analysis of TabPFN and identifies key limitations in TabPFN's ability to scale with dataset size and complexity.
* Proposes LoCalPFN, a novel approach that combines retrieval and fine-tuning techniques. LoCalPFN leverages these methods to enable more effective utilization of context, thereby enhancing the scalability of in-context learning for tabular data.
* Demonstrates the superiority of LoCalPFN through extensive evaluation and ablation studies. These evaluations reveal that LoCalPFN outperforms baselines on a wide range of datasets
Strengths: * Novel Combination of Retrieval and Fine-tuning: The paper introduces an new approach by combining retrieval and fine-tuning for tabular data.
* Extensive Evaluation: The paper presents a thorough evaluation of the proposed method (LoCalPFN) across a wide range of datasets, comparing it with numerous baselines and conducting ablation studies.
Weaknesses: The paper contains several inaccurate statements. For example, in the abstract, "Recent advancements using transformer-based in-context learning have shown promise on smaller and less complex datasets, but have struggled to scale to larger and more complex ones." is inaccurate, depending on the dataset and task definitions. There are many such examples in the paper, which make it hard to understand the scope of the paper. It raises concerns about the overall reliability of the paper's findings.
The paper fails to clearly define the complexity of tabular datasets, making it difficult to assess the relevance and significance of the proposed method (LoCalPFN) in addressing the challenges of complex datasets.
The paper does not address the important practical considerations of the model's cost and latency at inference time, especially when retrieval is used. This omission is a significant limitation, as it hinders the understanding of the model's real-world applicability. As acknowledged in the paper, LoCalPFN has a slower runtime compared to tree-based methods, which are a popular choice for tabular data. While the paper mentions a faster variant (TabPFN-kNN), its performance is not as strong as LoCalPFN's.
The proposed LoCalPFN method is heavily reliant on TabPFN as the base model. This raises concerns about the generalizability of the approach to other base models and task types (e.g., regression).
Technical Quality: 2
Clarity: 1
Questions for Authors: 1) Why would the dataset size affect the model quality? Is it related to the task complexity and the diversity of the samples?
2) Can you give an example of the query x_qy for tabular data? And the distance metrics being used in kNN?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 1
Limitations: The paper does not address the important practical considerations of the model's cost and latency at inference time, especially when retrieval is used. This omission is a significant limitation, as it hinders the understanding of the model's real-world applicability. As acknowledged in the paper, LoCalPFN has a slower runtime compared to tree-based methods, which are a popular choice for tabular data. While the paper mentions a faster variant (TabPFN-kNN), its performance is not as strong as LoCalPFN's.
The proposed LoCalPFN method is heavily reliant on TabPFN as the base model. This raises concerns about the generalizability of the approach to other base models and task types (e.g., regression).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time you took to review our paper. We hope to clarify some points of confusion below in our response.
> "Recent advancements [...] have struggled to scale to larger and more complex ones." is inaccurate -- depending on the dataset and task definitions.
Can you please clarify this point? This sentence is meant to be understood in the context of tabular data, as the previous one begins with “Tabular data”. In any case, we will update it to “have shown promise on smaller and less complex **tabular** datasets“ to remove ambiguity.
However, if this sentence is being challenged **in the context of tabular data**, several works have indeed shown TabPFN to be a competitive model [e.g., https://arxiv.org/abs/2305.02997, https://openreview.net/forum?id=XctSyEsBzx] however it doesn’t translate as well to large datasets because of the limited context size [https://arxiv.org/abs/2402.06971 Fig 3, or Fig 2 of our paper].
> “There are many such examples in the paper, which make it hard to understand the scope of the paper. It raises concerns about the overall reliability of the paper's findings.”
We would be happy to have such examples pointed out so that we can better clarify the paper and convince you that our results are strong and reliable.
> “The paper fails to clearly define the complexity of tabular datasets, making it difficult to assess the relevance and significance of the proposed method (LoCalPFN) in addressing the challenges of complex datasets.”
We would like to address the comment with several points.
Complexity of a dataset is one of the reasons we invoked when explaining why retrieval can help, even when the dataset size is small. Note that the main motivation for our paper is still to be able to use such in-context models on large datasets. Neither our method nor our results (Table 1 and 6-7) are affected by how we measure complexity.
There isn’t one way to measure data complexity and it is still an open question. Many of the usual measures are based on a notion of “compressibility” dating back to Kolmogorov complexity. Many practical methods are based on a measure of discrepancy between the data distribution (usually $p(x)$ or $p(y|x)$) and a model of known capacity.

For instance, let’s consider a linear model and a more complex non-linear model (e.g., polynomial). If both the linear and non-linear model provide a similar approximation to $p(y|x)$ (i.e., low $\text{KL}(p(y|x)\mid\mid q(y|x))$ or equivalently cross-entropy), then we can argue that the classification problem is not complex, as a simple linear model is enough to approximate it. On the other hand, if the discrepancy between the two models is large, then a non-linear model is necessary to explain the data and as such it is more complex.

This is very much in line with our intuition and how the term is used in machine learning, where some tasks are considered “hard” (such as mathematical reasoning) as small models don’t perform well but more advanced ones (e.g., adding more parameters or some search capability) perform better. Thus we argue that measuring the discrepancy between the loss/performance of different algorithms is both intuitive and widely used. Indeed we see examples in the tabular domain [https://arxiv.org/abs/2207.08815, 3.1 “not easy” paragraph] where the gap is measured, or [https://arxiv.org/pdf/2305.02997] where datasets are considered “hard” if a simple baseline does not achieve top performance. We are therefore very much in line with the standards of the field, including the tabular sub-domain.
> “The paper does not address the important practical considerations of the model's cost and latency at inference time, especially retrieval is used. This omission is a significant limitation, as it hinders the understanding of the model's real-world applicability.”
We believe you are referring to our mention of the runtime (L328) and Figure 11 in the Appendix. You can see there that the non-finetuned method (TabPFN-kNN) is faster than all tree based methods but does not significantly outperform XGBoost or CatBoost. As a reminder, TabPFN-kNN represents the raw cost of inference for our technique when only retrieval – not fine-tuning – is performed, which should hopefully assuage concerns about the runtime of the retrieval component. Meanwhile, LoCalPFN is slower (by a factor 10-25x on average) compared to a single XGBoost run, but achieves significantly better performance. Depending on the exact budget and the hardware available to a specific person, TabPFN-kNN, XGBoost, CatBoost, or LoCalPFN might be the best choice. Note that, as GPUs are improving and inference is made more efficient by the day (e.g., better flash attention, quantization, etc.), LoCalPFN’s inference speed should improve over time. Please see the general message for more details; we have included a runtime comparison against popular deep learning techniques there as well, showing that LoCalPFN leads on performance while remaining competitive on runtime.
> The proposed LoCalPFN method is heavily reliant on TabPFN as the base model. This raises concerns about the generalizability of the approach to other base models and task types (e.g., regression).
Please refer to the general response.
**Dataset Size and Model Quality**: We empirically notice that for the datasets & models we have, the task tends to be a bit harder as dataset size increases.
**Query Example**: Could you clarify what kind of example you would like to see? For all queries, we first standardize its features using training statistics and then retrieve its neighbours in the training set using L2 distance. We then pass the (context, query) vectors to TabPFN.
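Concretely, the retrieval step for a single query can be sketched as follows (plain numpy for illustration; in practice the index is built with faiss, and all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=2.0, size=(1000, 8))  # training set (illustrative)
x_query = rng.normal(loc=5.0, scale=2.0, size=(8,))       # one query row

# Standardize with *training* statistics, then retrieve by L2 distance
# (the same quantity faiss's flat L2 index computes).
mu, sigma = X_train.mean(0), X_train.std(0)
Z_train = (X_train - mu) / sigma
z_query = (x_query - mu) / sigma

k = 5
dists = ((Z_train - z_query) ** 2).sum(1)
neighbors = np.argsort(dists)[:k]  # indices of the k nearest training rows
```

The retrieved rows then form the context that is passed, together with the query, to TabPFN.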
**Overall**: Given the strengths you pointed out (comprehensive analysis and extensive ablations, novel method), and that we have hopefully addressed all your concerns, **we ask you to re-consider your rating of 2**. This is generally reserved for papers that are technically wrong.
---
Rebuttal 2:
Title: Request for Comment
Comment: Hi,
We would like to again thank you for taking the time to review our paper, but would also very much appreciate it if you took the opportunity to engage in discussion, time permitting, considering that we have clarified some of the more negative points of your review both in our rebuttal to you and in the shared rebuttal to all reviewers (the latter has a PDF attached).
Thanks! | Rebuttal 1:
Rebuttal: # General message
We would like to thank the reviewers for their assessment of our work. Overall, reviewers have appreciated our evaluations and experiments (mentioning points such as “extensive” [`j47T`], “robust and comprehensive” [`iFbB`], and “well done, .. principled, … no-cherry picking, … strong ablations” [`jqEN`]). Our paper was deemed “well-organized and clearly written” [`iFbB`] and “well written and pleasant to read” [`jqEN`, paraphrased] by two reviewers. The contribution was also deemed “novel” by multiple reviewers [`j47T`, `jqEN`].
However, there were some important and relevant points raised by multiple reviewers, and so we would like to address those here and not just in the individual responses.
## Dependence on TabPFN [`iFbB`, `j47T`, `ZbyM`]
Some reviewers (along with us in the paper) accurately noted that our results currently depend heavily on TabPFN. However we point out that our method is made for in-context architectures that consider each datapoint as a token. As far as we know, TabPFN is currently the only one based on this idea, but we expect many new models of this class to follow considering the success of TabPFN. We can draw an analogy to early techniques leveraging BERT as the base model when it was one of the very few performant language models; in particular, works promoting retrieval developed with a heavy reliance on BERT readily extended to future improvements, and many remain highly relevant today. We thus feel strongly that the ideas and concepts presented here will remain relevant for future tabular foundation models.
## Regression [`j47T`, `iFbB`]
A fair point that was raised is that we do not perform regression experiments, as TabPFN was not trained for regression. We would like to address this point with two arguments:
1. Our method, both the retrieval and fine-tuning aspects, would apply in exactly the same way to a pre-trained regression model that processes inputs in a similar manner.
2. Following the reviews, we have also tried to perform regression with our method by simply binning the regression targets into 10 classes, thus using a regression-as-classification approach that has shown success in other domains (e.g., https://arxiv.org/pdf/2402.13425, https://arxiv.org/abs/2403.03950). Note, however, that TabPFN was not trained to perform this directly. To improve performance when using local contexts, we can perform local binning and predict 10 local values for each point. On the other hand, using a global context would be akin to having a global binning as well, which can restrict the precision of the output.
We validate this idea on the well known california-housing dataset (about 20k samples, 8 features) and provide results in Table 2 of the joined pdf.
TabPFN using random samples as context and binning them is not a strong regressor: a kNN regressor can do much better. However, when using a local context and local binning, we can predict finer-grained values, and thus our MSE and correlation are closer to those of XGBRegressor. Note that here we did not even fine-tune the method.
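A minimal sketch of the local-binning idea (illustrative numpy, not our actual code; decoding a predicted class back to a value via the bin midpoint is one simple choice among several):

```python
import numpy as np

rng = np.random.default_rng(0)
y_context = rng.uniform(1.0, 5.0, size=1000)  # regression targets of retrieved neighbors (illustrative)

# Local binning: bin only the retrieved context's targets into 10 classes,
# so the bin edges adapt to each query's neighborhood instead of the whole dataset.
n_bins = 10
edges = np.quantile(y_context, np.linspace(0, 1, n_bins + 1))
labels = np.clip(np.digitize(y_context, edges[1:-1]), 0, n_bins - 1)

# A predicted class is mapped back to a value, e.g. the midpoint of its bin.
midpoints = 0.5 * (edges[:-1] + edges[1:])
y_decoded = midpoints[labels]
```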
Thus, we have already observed a generalization of the ideas presented in this paper; as discussed above, we expect further generalizations of our ideas in the realm of tabular data to be discovered.
## Runtime Details [`j47T`, `jqEN`, `ZbyM`]
While we include runtime results in the Appendix in Figure 11, admittedly this only compares our technique against tree-based approaches. In Table 1 of the joined pdf, we include a more thorough comparison with deep learning baselines on the 71 datasets of Table 5.
TabPFN-kNN and LoCalPFN outperform the deep learning baselines by a large margin. Furthermore, LoCalPFN has a runtime similar to the other deep learning baselines while being significantly more performant.
## Distances and Details for the Retrieval [`j47T`, `jqEN`]
We will update the main text of the paper and appendices to include more details, which we also describe here. We always **standardized the features** of the training set to build the index. Next, for the retrieval, we experimented with different metrics, namely **Euclidean (L2) distance and cosine similarity**. The main takeaway from our experiments is that these two choices are quite similar when applied to the standardized features and so we stuck with the Euclidean distance.
Yet we were initially convinced that using *embeddings* of the datapoints would be far better than just using the standardized features for retrieval. However, we found that techniques such as using the encoder outputs or some keys/values at different layers as embeddings – with either L2 or cosine – did not increase performance. Indeed, using embeddings affected performance negatively in many cases, while additionally forcing us to recompute the index during training as the embeddings of the training set were also changing, adding computational burden.
We also tried learning the distance function (e.g., Mahalanobis distance, weighted L2, etc.). Unfortunately, this is non-differentiable and thus requires less standard optimization techniques such as zeroth-order optimization. While we were able to discard irrelevant features in toy examples, the noise introduced by this optimization did not lead to an increase in performance on real datasets.
Furthermore, we’d like to emphasize here a difference we have with Retrieval-Augmented Generation techniques (RAGs): while in RAGs only a few documents (1 to 5) are usually selected, here we select about 1000 neighbors. This adds robustness against selecting a few irrelevant retrieved examples. Furthermore, in the worst case scenario, if we retrieve examples based on irrelevant/noisy features, our context becomes random, which is what the TabPFN baseline uses. This fact is one of the reasons why retrieval helps so much: in the best case we have large performance improvements, and in the worst case we have something similar to TabPFN.
Pdf: /pdf/aaf5f4e164f7811080ac4977c9da1feaa6d45712.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LLMs as Zero-shot Graph Learners: Alignment of GNN Representations with LLM Token Embeddings | Accept (poster) | Summary: This paper proposes TEA-GLM to align the GNN representations with LLM token embeddings for zero-shot graph machine learning. TEA-GLM enables cross-dataset and cross-task learning without fine-tuning the LLM. Extensive experiments demonstrate its state-of-the-art performance on unseen tasks and datasets.
Strengths: The paper is well-written and easy to follow, effectively conveying the design and motivation behind TEA-GLM.
The proposed framework employs only a linear projector to incorporate with the LLM, which is efficient.
The idea of mapping graph embedding to token embedding is also interesting.
Weaknesses: The experimental results are somewhat questionable. The vanilla Vicuna-7B even outperformed LLaGA, which leveraged Vicuna-7B as its base model. Furthermore, the results on E-commerce-Children show no difference between Vicuna-7B and the proposed TEA-GLM.
From my perspective, this paper marks LLaGA as the strongest baseline. However, I find that the experimental settings are totally different from LLaGA. Is it possible to conduct some experiments following LLaGA?
The ablation studies are also questionable. Even without GT or without FC, the model achieves at least equivalent performance to that of Vicuna-7B, right?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see Weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments! We have addressed your questions as follows:
> The experimental results are somewhat questionable. The vanilla Vicuna-7B even outperformed LLaGA, which leveraged Vicuna-7B as its base model.Furthermore, the results on E-commerce-Children show no difference between Vicuna-7B and the proposed TEA-GLM.
We apologize for any misunderstanding of the results that may have arisen due to the experimental setup, which we clarify here. Specifically, both LLaGA and our model were trained on the Arxiv and Computer datasets, followed by zero-shot testing on the target datasets listed in Table 1. Following the approaches detailed in [1, 2], we used Vicuna-7B without fine-tuning as a baseline to evaluate the generalization ability of models that use Vicuna-7B as their base model. LLaGA primarily focuses on supervised learning, where it exhibits strong performance, as detailed in Appendix B.3. It also shows some generalization capability in a zero-shot scenario on citation datasets, outperforming Vicuna-7B. However, compared to Vicuna-7B, LLaGA demonstrates negative transfer across the e-commerce datasets. These datasets present a greater challenge due to their low topical relevance to the source data, highlighting LLaGA's limited generalization capabilities.
Regarding the Children dataset, this dataset is particularly challenging as it contains 24 categories and shows minimal topical relevance to the source dataset. Specifically, the Children dataset comprises titles and summaries about children’s books, whereas the Computer dataset includes reviews on computer-related electronic products (for a detailed description of these datasets, please refer to Appendix A). In comparison with other baselines, our model does not exhibit negative transfer on the Children dataset. Furthermore, it outperforms the baseline on five other datasets, which we believe convincingly demonstrates the superiority of our approach.
> From my perspective, this paper marks LLaGA as the strongest baseline. However, I find that the experimental settings are totally different from LLaGA. Is it possible to conduct some experiments following LLaGA?
We apologize for any confusion caused. Our primary research goal is to establish LLM as a zero-shot learner for various graph tasks, with a main focus on the zero-shot scenario. From this perspective, GraphGPT[1] represents the most relevant work. While LLaGA does address zero-shot learning, its main emphasis is on supervised learning scenarios. Therefore, our experimental setup follows the approach outlined in the GraphGPT. Additionally, to more thoroughly validate our model’s generalization capabilities, we have also included cross-task scenarios in our testing to evaluate the model's performance.
At the same time, we have also reported the results of our experiments in supervised learning scenarios, which can be found in Appendix B.3. LLaGA performs well in supervised learning contexts; however, its generalization capabilities are limited, particularly when dealing with datasets that have highly irrelevant topics. In contrast, our model performs well in zero-shot scenarios but underperforms in supervised learning contexts. Optimizing performance across both scenarios will be a focus of our future work.
> The ablation studies are also questionable. Even without GT or without FC, the model achieves at least equivalent performance to that of Vicuna-7B, right?
We apologize for any confusion regarding our ablation studies. In fact, the results labeled 'without GT' correspond to those of Vicuna-7B, a detail that was not clarified in the paper; we will add this in the appropriate section. The results labeled 'without FC' do not correspond to Vicuna-7B. During the GNN pre-training phase, we employed both instance-wise and feature-wise contrastive learning. Feature-wise contrastive learning, which we proposed, integrates principal components selected from the LLM token embeddings to constrain the node embeddings obtained from the GNN; it is a crucial component for achieving generalizable, LLM-aligned node embeddings. 'Without FC' refers to pre-training the GNN with only instance-wise contrastive learning, which still uses the GNN; hence, those results are not those of Vicuna-7B.
Thank you again for reviewing our manuscript. We have tried our best to address your questions, and have revised our paper based on your questions. Please kindly let us know if you have any follow-up questions. Your insights are invaluable to us, and we are prepared to provide any additional information that may be helpful.
[1] Tang, Jiabin, et al. "Graphgpt: Graph instruction tuning for large language models." arXiv preprint arXiv:2310.13023 (2023).
[2] He, Yufei, and Bryan Hooi. "UniGraph: Learning a Cross-Domain Graph Foundation Model From Natural Language." arXiv preprint arXiv:2402.13630 (2024).
---
Rebuttal 2:
Comment: Dear Reviewer LAU5,
We greatly appreciate your time and effort in reviewing our manuscript, especially during this busy period.
Since there will be no second stage of author-reviewer discussions, we are writing to inquire about the **current status of your review** of our submission. We sincerely hope that our responses have effectively resolved your concerns. Additionally, we are eager to address any further questions you may have, as your input will help us improve our work.
Thank you once more for your invaluable guidance and support. | Summary: The paper proposes a GNN-plus-LLM model for zero-shot learning in the graph domain. They first use contrastive learning to pretrain a GNN that can be applied to arbitrary graphs. They add a feature-wise contrastive learning objective that aligns the projected representations with the PCA of LLM token embeddings, bridging the gap between contrastively learned embeddings and LLM token embeddings. After training the GNN, they train an additional linear projector to map the GNN output to a sequence of graph tokens, and the tokens are combined with special prompts designed for graph learning tasks for the LLM to perform downstream tasks. The method does not need to fine-tune the LLM and achieves good performance on most of the datasets.
Strengths: The PCA-based alignment with the LLM input is interesting, and potentially can shed lights on future research on GNN and LLM alignment.
The experimental results seem promising on most datasets compared to both LLM and GNN+LLM baselines. Especially, the model shows some level of cross-task transferability: the GNN and linear projector are pretrained only on node classification but show non-trivial performance on link prediction tasks.
Weaknesses: My major concern about the paper is its novelty. Several important components, including using self-supervised learning to generate general node embeddings [1], aligning graph representations with LLM input via a linear projector [2], and the prompt design, have been proposed and are already widely adopted in the research area. The feature-wise contrastive learning seems to be the main technical contribution, yet it is not studied and investigated in detail. For example, why does pre-training on node classification lead to non-trivial link prediction results? According to the ablation study, feature-wise CL seems to be the key, yet no knowledge about link prediction is injected into the model in any step of the method.
Related to the main concern, I think the experimental results do not fully justify the authors' claim. The authors claim that the aligned graph tokens provide the LLM with graph information. However, they still feed the node texts as direct text input to the LLM, and for some datasets the LLM can do well solely using node texts. To show that the pretraining process indeed brings in inductive/transferable knowledge of the graph, the authors should either show that, after completely removing node text from the LLM text input (it is ok to keep task text), the model can still work, or provide results of baseline models that tune the LLM on the node texts (tuning the LLM weights or using a soft prompt) and show that such baselines cannot transfer or perform poorly. Note that the ablation study between w/o FC and w/o graph token is not enough. Essentially, if soft prompt tuning can achieve a similar effect, the proposed work is just adding linear layers as a substitute for soft prompts.
[1] He, Yufei, and Bryan Hooi. "UniGraph: Learning a Cross-Domain Graph Foundation Model From Natural Language." arXiv preprint arXiv:2402.13630 (2024).
[2] Tang, Jiabin, et al. "Graphgpt: Graph instruction tuning for large language models." arXiv preprint arXiv:2310.13023 (2023).
Technical Quality: 2
Clarity: 3
Questions for Authors: - Why do you need to map a single GNN node representation to multiple graph tokens? They should contain exactly the same amount of information.
- What is the prompt used for baseline LLM? Is it the same prompt as the one used for GNN? If so, what happen when you provide it with a more detailed prompt, but still only on the target node?
- For edge and graph tasks, you used the sum of node representation to compute graph tokens, wouldn't that break the aligned distribution as the pretraining only happens on node.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable suggestions. We have addressed your questions as follows:
> My major concern about the paper is its novelty. Several important components, including using self-supervised learning to generate general node embeddings [1], aligning graph representations with LLM input via a linear projector [2], and the prompt design, have been proposed and are already widely adopted in the research area.
Due to the word limit, please refer to the "Novelty" section in the top-level Author Rebuttal.
> The feature-wise contrastive learning seems to be the main technical contribution, yet it is not studied and investigated in detail. For example, why does pre-training on node classification lead to non-trivial link prediction results? According to the ablation study, feature-wise CL seems to be the key, yet no knowledge about link prediction is injected into the model in any step of the method.
It seems there might be a misunderstanding regarding the role of the key components. Firstly, the GNN is trained using self-supervised learning, which effectively aids the model in deeply exploring and understanding the intrinsic structures and properties of graph data. This training approach is advantageous for tasks such as node classification, link prediction, and graph classification, as referenced in [1, 3, 4]. Indeed, the linear projector is trained on node-level tasks with prompts, but to enable cross-task applicability, we have designed unified prompts. In addition to employing a fixed number of graph tokens for tasks at the node, edge, and graph levels, we also incorporate candidate answers. This approach is designed to enable the LLM to learn to select the correct answer using graph information, rather than merely learning how to respond to node classification problems. Consequently, after training on node classification tasks, the LLM can extract graph information from graph tokens, which naturally benefits the performance on link prediction tasks.
Regarding feature-wise contrastive learning, its key role is to assist GNNs in generating node representations that are more easily understood by LLMs. The aforementioned process ensures that the LLM can extract graph information from graph tokens, while feature-wise contrastive learning ensures that the LLM can comprehend this graph information. The results of our ablation experiments demonstrate that using feature-wise contrastive learning yields representations with greater generalizability, which are also better understood by the LLM.
> Related to the main concern, I think the experimental results do not fully justify the authors claim. ... Essentially, if a soft prompt tuning can achieve similar effect, the proposed work is just adding linear layers as a substitute to soft prompts.
Due to the word limit, please refer to the "Experimental Results of Soft Prompt Tuning" section in the top-level Author Rebuttal.
> Question 1
Node representations encompass information about the graph structure and its neighboring nodes, and we believe that a single token cannot adequately convey the graph information. Similarly, in the vision-language model domain, it is challenging for a single token to represent an entire image; images are typically encoded into multiple tokens[5, 6]. We consider this approach both correct and reasonable. The experimental results shown in Figure 3 of Appendix C further demonstrate that an appropriate number of graph tokens can significantly enhance the model's performance.
> Question 2
We apologize for not providing the prompt for the baseline LLM in our paper; we will add this information. The prompt used by the baseline LLM is almost identical to that of our model, the only difference being the absence of graph tokens. Here is an example:
```
# Our model
Given the representation of a paper: <Token 1> <Token 2> … <Token K>, with the following information:
Title: {title}.
Question: Which arXiv CS sub-category does this paper belong to? Please directly give the most likely answer
from the following sub-categories: {answer candidates}.
# Baseline LLM
Given a paper with the following information:
Title: {title}.
Question: Which arXiv CS sub-category does this paper belong to? Please directly give the most likely answer
from the following sub-categories: {answer candidates}.
```
For GNN models, prompts cannot be utilized. We followed the setup described in GraphGPT [7] to conduct the zero-shot testing of GNN models.
> Question 3
We apologize for the insufficient explanation regarding readout operations in our paper. Sum, mean, max, and min are commonly utilized readout operations in Graph Neural Networks, as cited in references [1-3]. After obtaining node representations from a GNN, these operations are employed to derive representations for edges and graphs, as noted in [1]. We implemented the sum readout method, which our results have demonstrated to be effective. However, as this was not the primary focus of our research, we did not investigate variations of these operations. We will include more detailed explanations on this topic in our revised paper.
[1] Xie Y, et al. Self-supervised learning of graph neural networks: A unified review. IEEE transactions on pattern analysis and machine intelligence. 2022.
[2] Xu K, et al. How powerful are graph neural networks?. arXiv preprint. 2018.
[3] Y. You, et al. “Graph contrastive learning with augmentations,” NeurIPS, 2020.
[4] Hu Z, et al. Gpt-gnn: Generative pre-training of graph neural networks. SIGKDD. 2020.
[5] Liu H, et al. Visual instruction tuning. NeurIPS. 2023.
[6] Li J, et al. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. PMLR. 2022.
[7] Tang J, et al. Graphgpt: Graph instruction tuning for large language models. SIGIR. 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply. My concern about the empirical evaluation is addressed. I am raising my score to borderline, but slightly toward rejection. The reason for not giving it a higher rating is that the authors' response to my question on the novelty is not as convincing. Specifically,
- I agree that feature-wise contrastive learning is an important contribution as I mentioned in my initial review. I do not complain about its novelty but about studying the mechanism behind it. The reference mentioned is about conventional (instance-wise in the paper) CL, while the feature-wise CL, in my opinion, serves more as an alignment and is not carefully examined.
- Agree to disagree, I still don't believe mapping one embedding to multiple tokens is a noteworthy contribution. Note that this is significantly different from the vision-LLM alignment approaches you mentioned. They have a strong image encoder to directly map an image to a sequence of tokens without explicit information bottleneck. On the other hand, the information contained in the representation of your approach is bottlenecked. (Pooling for link and graph.) Mapping a single representation to multiple token representations does not inject new information into the system. Hence, I believe the design is not necessary.
- The node-to-link transfer is surprising and hard to believe. I will try to replicate the results, and meanwhile, it would be great if you could share some examples of the LLM's answers to the link tasks.
- The way to pool information is, I believe, critical in your approach. If you use sum-pooling (mean-pooling too, but sum is the most intuitive) and only train on node tasks, link and graph representations essentially become OOD input for your model, and it is very difficult to convince myself that such representation results in reasonable output from LLM
---
Rebuttal 2:
Comment: Thank you for taking the time to review our rebuttal. We are grateful for the increased score and are enthusiastic about addressing your remaining concerns to enhance our submission further.
> Question 1
We appreciate your feedback on feature-wise CL, and we will elaborate on its mechanism in the revised paper. In addition to rows, the columns of the feature matrix also carry semantic information: [1] proposed that these columns could be regarded as representations of clusters. It is thus possible to consider the columns as soft labels for features and to perform discrimination among groups of similar features. This approach inspires us to learn generalizable node representations for the LLM, independent of other instances [1], enhancing transferability across datasets.
To obtain node representations understandable by LLMs, we employ PCA to select the top k principal components from the LLM token embedding space. Given that in high-dimensional spaces, vectors are almost invariably orthogonal [2], these k principal components serve as the axes that maximize variance in the LLM token embedding space. By projecting the original feature matrix onto these k principal components and then applying FC, we can achieve node representations that are aligned with LLM and possess enhanced generalizability.
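As a minimal illustration of the projection step described above (a sketch only, not the authors' implementation; function names and shapes are hypothetical), the top-k principal components of the LLM token-embedding matrix can be obtained via SVD and then used to project node features onto the LLM-aligned axes:

```python
import numpy as np

def pca_components(token_embeddings: np.ndarray, k: int) -> np.ndarray:
    """Top-k principal components of an LLM token-embedding matrix (V x d).

    The right singular vectors of the centered matrix are the axes that
    maximize variance in the token-embedding space.
    """
    centered = token_embeddings - token_embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]  # shape (k, d), rows are orthonormal

def project_features(node_features: np.ndarray, components: np.ndarray) -> np.ndarray:
    """Project node features (N x d) onto the k principal axes -> (N x k)."""
    return node_features @ components.T
```

Feature-wise contrastive learning would then be applied on the projected (N x k) matrix, treating its columns as soft feature labels as discussed above.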
Existing experimental results have confirmed that FC indeed achieves more generalizable node representations, playing a crucial role in handling cross-task challenges. Additionally, we report the legality rate of the model without FC (for more on the legality rate, see Appendix B.1). Results in **bold** indicate the **poorest** outcomes, indicating a minimal impact of FC on the language model. This further demonstrates that FC yields node representations that are more easily understood by LLMs.
||Arxiv|Computer|Pubmed|Cora|Children|History|Photo|Sports|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|Vicuna-7B-v1.5|**99.3**|**96.7**|100.0|95.8|99.2|98.9|94.1|99.6 |
|LLaGA|100.0|100.0|**98.9**|**79.9**|93.1|**92.4**|**77.8**|**94.3**|
|TEA-GLM (without FC)|100.0|100.0|100.0|89.0|**92.9**|99.6|98.4|98.1|
|TEA-GLM|100.0|100.0|100.0|92.6|97.0|99.6|99.2|98.5|
> Question 2
Our approach of mapping a single node representation to multiple LLM tokens (graph tokens) is not meant to introduce new or extra information. Rather, it ensures that the complex information within graph structures is adequately conveyed. Our experiments on the number of graph tokens (Figure 3, Appendix C) confirm the necessity of multiple tokens. This approach achieves SOTA zero-shot performance without fine-tuning the LLM, demonstrating its novelty.
> Question 3
Our code is publicly available, and if necessary, we can provide the trained model files. Appendix D includes the prompt for the link prediction task. For instance, for citation datasets, the LLM would respond with either 'These two papers have citation relationships' or 'These two papers may not have citation relationships'.
> Question 4
We agree on the importance of pooling information. Our design avoids inputting all neighboring node tokens into the prompt, as the order significantly influences the LLM's output, conflicting with the graph's node permutation invariance. We use a readout operation to extract representations, then map them to a fixed number of graph tokens.
We conducted experiments using both mean and max readout operations, though due to time constraints, these were limited to citation datasets. The results clearly show that the choice of readout operation influences the experimental outcomes. However, both sum and mean readout operations produced effective results, enabling the model to perform inference successfully. This suggests that an OOD (Out-of-Distribution) scenario may not be present. A possible reason is that, in GNNs, node representations are computed as a weighted sum of the central node and its neighbors. Since both sum and mean readout operations are essentially special cases of a weighted sum, the link representations generated still align with the distribution of node representations produced by GNNs. Therefore, the OOD scenario may not be applicable in this context.
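The readout operations compared above can be sketched as follows (a hypothetical minimal implementation for illustration; the function name and interface are our own, not taken from the paper). Note how sum and mean are both special cases of a weighted sum over node embeddings, which is the basis of the argument that their outputs stay close to the distribution of single-node GNN representations:

```python
import numpy as np

def readout(node_embs: np.ndarray, mode: str = "mean") -> np.ndarray:
    """Pool a set of node embeddings (n x d) into a single vector (d,).

    For an edge task, node_embs holds the two endpoint embeddings;
    for a graph task, all node embeddings in the graph.
    """
    if mode == "sum":     # weighted sum with weights 1
        return node_embs.sum(axis=0)
    if mode == "mean":    # weighted sum with weights 1/n
        return node_embs.mean(axis=0)
    if mode == "max":     # elementwise max, stays within observed ranges
        return node_embs.max(axis=0)
    raise ValueError(f"unknown readout mode: {mode}")
```

The pooled vector is then mapped by the linear projector to a fixed number of graph tokens, exactly as for single-node representations.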
Thank you once more for your constructive feedback. We have enhanced the presentation to provide a clearer explanation of the key concepts in our paper. We believe that our revisions adequately address your concerns. **We would greatly appreciate it if you could consider raising your score further.** If you have any further concerns, we are ready to address them quickly to enhance our submission.
||Arxiv|Pubmed|Cora|
|:----:|:----:|:----:|:----:|
|Vicuna-7B|0.513|0.543|0.527|
|GraphGPT-std|0.649|0.501|0.520|
|LLaGA|0.570|0.569|0.537|
|SoftPrompt|0.537|0.535|0.565|
|TEA-GLM(max)|0.639|0.650|0.566|
|TEA-GLM(sum)|0.657|0.689|0.586|
|TEA-GLM(mean)|**0.659**|**0.690**|**0.588**|
[1] Li Y, et al. Contrastive clustering. AAAI. 2021.
[2] Hopcroft, et al. Computer Science in the Information Age. 2017.
---
Rebuttal Comment 2.1:
Comment: The authors provide reasonable explanations to my questions and I am impressed by their diligence in improving and defending their work, and I am leaning towards acceptance.
On the other hand, my doubts still remain, but the authors' justification and empirical evidence make sense. In particular, for tasks other than node classification (link prediction in particular, since graph classification is not empirically evaluated), my guess is that if two connected nodes are dissimilar, the pooling generates a non-decodable embedding, and vice versa, and Vicuna has a bias from pretraining that happens to align the link-prediction output with these two scenarios. For the pooling mechanism, max pooling does make sense to me, while sum pooling does not: the domain of the input to the model is very different; potentially, you might have employed some normalization trick not documented in your paper, so that the output can still roughly fall into the expected range.
---
Reply to Comment 2.1.1:
Comment: We sincerely appreciate your timely and detailed response, especially considering your busy schedule! Thank you for recognizing and reassessing our work!
We understand and acknowledge your concerns. Based on our current experimental results, max pooling—which intuitively should not produce OOD issues—does not perform better than sum pooling. Additionally, we did not fine-tune Vicuna in these experiments. Under these conditions, only graph tokens that the LLM can decode and comprehend contribute to its predictions. If the graph tokens are non-decodable, the LLM struggles to make accurate predictions or even generate responses. Thus, our empirical results suggest that the representations obtained through pooling remain within the model's domain.
Your feedback is highly valuable! We will include a discussion of different pooling methods in our paper and consider using max pooling and mean pooling, which are less likely to produce OOD issues, as our final approach.
Thank you again for your responsible reply and valuable suggestions. If our explanations have addressed your concerns, we would greatly appreciate it if you could consider improving our score. If you have any further concerns, we are committed to addressing them promptly. | Summary: The motivation of this work is to utilize the zero-shot learning capacity in LLMs for structural data. The basic idea is to connect GNNs with LLMs, by aligning its representations with the token embeddings of an LLM, such that GNNs can encode the structural information and LLMs can do zero-shot inference. The experiments were conducted with two different domains and seven baseline methods, and demonstrate the performance of the proposed method.
Strengths: - The research question is interesting and important. Generalizing the capacity of LLMs for zero-shot graph learner is a good idea to use the zero-shot capacity for structural information.
- The proposed method is properly described with sufficient technical details. The pipeline of two-step adjustment is convincing.
- Although more experiments can certainly be done, the current experiments answer three important research questions aligned with the topic, each of them is answered with a good amount of evidence.
Weaknesses: I do not have any major concern about this work. On the other hand, I do think the presentation of this work can be better. Some parts of the paper (especially section 2) are a little bit difficult to understand.
Technical Quality: 3
Clarity: 3
Questions for Authors: No further question, just some minor comments.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been listed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's understanding and recognition of the contribution of our work. For the problem regarding
> I do think the presentation of this work can be better. Some parts of the paper (especially section 2) are a little bit difficult to understand,
we apologize for the lack of clarity in certain parts of our paper. As the reviewer pointed out, the explanation of feature-wise contrastive learning in Section 2 is not as clear and concise as it should be. We will revise the paper, with a focus on Section 2, to improve clarity and readability.
Thank you again for reviewing our manuscript. Please kindly let us know if you have any follow-up questions. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful. | null | null | Rebuttal 1:
Rebuttal: We sincerely appreciate the detailed reviews and valuable suggestions provided by the reviewers. Due to the character limit for a separate response, here we will focus on addressing the concerns regarding the novelty of our paper and presenting the results of the additional experiments.
**Novelty**
We respectfully disagree with the view that 'several important components are proposed and are already widely adopted in the research area.' Firstly, although [1] also utilized self-supervised learning, the methodologies are entirely distinct from ours. Specifically, for the first time, we proposed feature-wise contrastive learning in the GNN pre-training phase to obtain more generalizable node embeddings. This method uses principal components from LLM token embeddings to refine GNN-derived node embeddings, ensuring they align well with LLM token embeddings. Secondly, while we do use a linear projector, unlike existing related work [1, 2], we are the first to propose mapping a node embedding to multiple graph tokens, and we do not perform any form of fine-tuning on the LLM. This is enabled by feature-wise contrastive learning, which ensures that the node embeddings obtained after GNN pre-training are already aligned with the LLM, thus only requiring training a linear projector. Thirdly, to achieve cross-task capabilities, we for the first time propose a unified prompt for graph tasks at different levels—node, edge, or graph. For tasks at different levels, we utilize a readout operation to extract representations corresponding to nodes, edges, or entire graphs. These representations are then mapped to a fixed number of graph tokens. This approach diverges from previous works that directly input all node tokens. Our design is more rational, as the order of input node tokens significantly influences the LLM's output, which contradicts the graph's inherent node permutation invariance.
**Experimental Results of Soft Prompt Tuning**
We thank Reviewer 65Wi for the constructive comments. We acknowledge the lack of experiments specifically demonstrating the role of graph information in the graph tokens. To address these concerns, we validated our approach by fine-tuning the LLM with soft prompts. To ensure a fair comparison, all experiments were conducted using the same random seed and prompt. Both the number of soft prompts and our model's graph tokens were consistently set to five. The key difference is that soft prompt tokens are trainable parameters, unlike those derived from GNNs and a linear projector.
We report the average experimental results: for the zero-shot tasks (upper table), we present the accuracy, and for the cross-task tasks (lower table), we provide the AUC. **Bold** highlights the best result across all methods, while *italics* highlights the second-best results. These results clearly demonstrate that, across all tasks, fine-tuning the LLM with soft prompts leads to inferior performance compared to our model. Notably, in node classification tasks, it fails to achieve transferability on e-commerce datasets, indicating that soft prompt tuning cannot do well solely using node texts. This suggests that graph tokens indeed carry transferable graph information, which enables the LLM to make more accurate predictions. Additionally, this indicates that the use of FC produces graph tokens that are more interpretable by the LLM. After conducting additional experiments, we believe the experimental results fully justify that our method makes the LLM a zero-shot graph learner. We will include this content in our revised paper.
| | Pubmed | Cora | Children | History | Photo | Sports |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Vicuna-7B | 0.719 | 0.156 | *0.270* | *0.363* | *0.378* | *0.370* |
| GraphGPT-std | 0.701 | 0.126 | - | - | - | - |
| GraphGPT-cot | 0.521 | *0.181* | - | - | - | - |
| LLaGA | *0.793* | 0.168 | 0.199 | 0.146 | 0.276 | 0.352 |
| Vicuna-7B (SoftPrompt Tuning) | 0.768 | 0.168 | 0.227 | 0.281 | 0.350 | 0.230 |
| **TEA-GLM (Ours)** | **0.848** | **0.202** | **0.271** | **0.528** | **0.497** | **0.404** |
| | Arxiv | Pubmed | Cora | Children | History | Computer | Photo | Sports |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Vicuna-7B | 0.513 | 0.543 | 0.527 | 0.500 | 0.515 | 0.502 | *0.501* | 0.502 |
| GraphGPT-std | *0.649* | 0.501 | 0.520 | - | - | - | - | - |
| LLaGA | 0.570 | *0.569* | 0.537 | 0.422 | 0.449 | 0.479 | 0.478 | **0.597** |
| Vicuna-7B (SoftPrompt Tuning) | 0.537 | 0.535 | *0.565* | *0.544* | *0.543* | *0.509* | *0.501* | 0.508 |
| **TEA-GLM (Ours)** | **0.657** | **0.689** | **0.586** | **0.571** | **0.579** | **0.554** | **0.545** | *0.553* |
[1] Tang J, et al. Graphgpt: Graph instruction tuning for large language models. SIGIR. 2024.
[2] He Y, et al. UniGraph: Learning a Cross-Domain Graph Foundation Model From Natural Language. arXiv preprint. 2024. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How Does Variance Shape the Regret in Contextual Bandits? | Accept (poster) | Summary: *** In line 12 there is a comment with acronyms of one of the authors, for the AC's consideration if it is a desk reject ***
The paper studies the setting of adversarial contextual MAB under the assumption of access to a realizable general function class for approximating the context-dependent rewards.
The authors prove both upper and lower regret bounds that are variance-dependent, i.e., dependent on the cumulative data variance rather than on $T$, the number of episodes. The authors present the regret bounds in three different settings: (1) for a strong adversary, (2) for a weak adversary, (3) when learning from a function class.
In all the cases, their bounds are also dependent on the Eluder dimension. They present a matching lower bound for each setting.
The presented algorithm is adapted from SquareCB of Foster and Rakhlin (2020), but additionally maintains a confidence function set, to learn faster when the functions in the confidence set have larger disagreement.
Strengths: 1. Variance-dependent regret bound is an interesting benchmark in RL. An application of it to contextual bandits is nice and worth the community's attention.
2. The work is extensive – the authors fully analyze three different settings, and prove both upper and lower bounds.
Weaknesses: 1. The related literature review is unclear. In the presented previous results, there should be a clear separation between variance-dependent bounds and minimax regret bounds.
2. The difference between the three settings is unclear to me. I would expect the authors to use the accepted terms, i.e., oblivious and adaptive adversary. Moreover, to be comparable to previous literature, I think the most interesting case is strong adversary + learning from function class.
3. I do not see why the eluder dimension is necessary in all the bounds. It would be appreciated if the authors could provide an intuitive explanation.
4. It seems that the work over a confidence set of functions implies a running time complexity of $|F|$ that is not discussed.
5. The paper seems to be written at the last minute, as there are comments left inside the paper. See for additional example line 185 "(our contribution)".
6. The use of the Hellinger-Eluder dimension is given without any justification or an appropriate citation of previous works. The classic Eluder dimension introduced by Russo and Van-Roy is defined with respect to the $\ell_2$ norm only. If the authors are using other versions of it, they have to cite related work, if such exists, or justify the used guarantees themselves. In this case, [1] presents a version of the Eluder dimension to bounded metrics, that also holds to the squared Hellinger distance.
[1] Eluder-based regret for stochastic contextual MDPs, Levy et al. 2024.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Please provide a specific comparison of your results with the known variance-dependent bounds in previous literature.
2. What do you mean by $d_{elu}(0)$? To my understanding, the Eluder dimension is meaningful only for $\alpha \in (0,1]$.
3. Please refer to the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Mentioned in the "Weaknesses" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for providing these useful feedback and questions.
Weakness:
- *Unclear related work.*
We will improve the related work section based on the suggestion.
- *Difference between the three settings.*
The separation between strong and weak adversary is different from that between oblivious and adaptive adversary. The terms 'strong' and 'weak' come from the literature studying adversarial corruption in contextual bandits (e.g., He et al., 2022). In our case, a 'strong adversary' decides the variance after seeing the decision of the learner, while a 'weak adversary' decides it before. The separation between 'function class' and 'model class' has been studied in previous works such as [Wang et al., 2024]. The latter provides the full distributional information given context and decision, while the former provides only the mean.
[He et al., 2022] Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions.
- *The most interesting case is strong adversary + learning from function class.*
We argue that weak adversary and learning from model class are also of interest. In particular, as we showed, against weak adversary, one may obtain a better regret bound. Also, with a model class, the learner is able to generalize the learned knowledge beyond the 'mean' domain, which is the main consideration in the field of 'distributional RL.' The benefit of learning with a model class is also discussed in, e.g., [Wang et al., 2023, 2024].
- *Why the eluder dimension is necessary in all the bounds?*
Some intuition is mentioned in Lines 41-54. While in the non-contextual case (e.g., multi-armed bandits) the eluder dimension is not necessary, it is necessary in the contextual case, as we showed. Consider the case where the function set size $|F| \gg 2$ and the action number $A=2$. Suppose that the ground-truth function $f^\star$ always chooses the better action under every context. At every step, the adversary chooses the context so that one of the functions chooses an action that disagrees with the choices of all other $|F|-1$ functions. This way, to figure out $f^\star$, the regret must scale with some ``disagreement coefficient'' of $|F|$, instead of just $A$. The eluder dimension takes the role of this disagreement coefficient.
- *Running time O(|F|) not discussed.*
We will clarify this in our revision. Indeed, in this work, we only focus on the statistical complexity but not the computational complexity.
- *Comments left inside the paper. Additional example: line 185 "(our contribution)".*
We are sorry for the confusion. The label ``(our contribution)'' is not a mistake: it refers to the equation right below Line 179.
- *The use of the Hellinger-Eluder dimension is given without any justification or an appropriate citation of previous works. The classic Eluder dimension introduced by Russo and Van-Roy is defined with respect to the $\ell_2$ norm only. If the authors are using other versions of it, they have to cite related work, if such exists, or justify the used guarantees themselves. In this case, [1] presents a version of the Eluder dimension for bounded metrics, which also holds for the squared Hellinger distance.*
Thanks for bringing to our attention the missing citation. Our motivation is the same as [1] to generalize the eluder dimension to general divergence.
[1] Eluder-based regret for stochastic contextual MDPs, Levy et al. 2024.
Question:
- *Please provide a specific comparison of your results with the known variance-dependent bounds in previous literature.*
Previous work on variance-dependent contextual bandits mostly focused on linear contextual bandits (see Zhao et al., 2023 and the related work therein). Literature on variance-dependent contextual bandits with general function approximation is scarce. [Wang et al., 2024] studied the same setting as our Section 6. Their result is listed in our Table 1. While they conjectured that their bound could be improved, we disproved it by showing a matching lower bound. The work of [Wei et al., 2020] also investigated second-order bounds for contextual bandits, but they focused on the agnostic setting (while we focused on the realizable setting). Though their algorithm can be applied to our setting, the regret is highly sub-optimal. In general, their result is incomparable to ours.
- *What does $d_{\mathrm{elu}}(0)$ mean?*
In fact, $d_{\mathrm{elu}}(\alpha)$ remains meaningful when $\alpha=0$ (see our Definition 2.1, and simply set $\alpha=0$). For example, a $d$-dimensional linear function class has $d_{\mathrm{elu}}(0)=d$.
---
Rebuttal 2:
Comment: I thank the authors for their response.
I am not sure I understand their response to the question about $d_{eluder}(0)$. As far as I understand the Eluder dimension, $\alpha$ should be an error parameter. Are you using $1-\alpha$? I would be happy if the authors could elaborate more.
Besides that I have no further questions.
---
Rebuttal Comment 2.1:
Comment: We provide a further explanation below for $d_{elu}(0)$.
Our definition of eluder dimension (Definition 2.1) follows that in Russo and Van Roy (2014). See their Definitions 3 and 4. Note that their $\epsilon$ is our $\alpha$.
By the definitions, $d_{elu}(0)$ is well-defined as long as the concept of ``$0$-dependent'' is well-defined (see Russo and Van Roy's Definition 3). By their Definition 3, an action $a$ is $0$-dependent with respect to $\mathcal{F}$ on $\{a_1, \ldots, a_n\}$ if any pairs $f, \tilde{f}\in \mathcal{F}$ satisfying $f(a_i)=\tilde{f}(a_i)$ for all $i\in\{1,2,\ldots, n\}$ also satisfies $f(a)=\tilde{f}(a)$.
Take the $d$-dimensional linear function class as an example, i.e., $\mathcal{F}$ consists of functions of the form $f_\theta(a)=\theta^\top a$. We demonstrate it through $d=4$. Below we show that $a=(2,3,4,0)$ is $0$-dependent on $\{a_1, a_2, a_3\}$ where $a_1=(1,0,0,0)$, $a_2=(0,1,0,0)$, and $a_3=(0,0,1,0)$. To verify this, assume that $f=f_\theta$ and $\tilde{f} = f_{\tilde{\theta}}$ agree on $\{a_1, a_2, a_3\}$, i.e., $\theta^\top a_i = \tilde{\theta}^\top a_i$ for all $i=1,2,3$. By our choices of $a_1, a_2, a_3$, it must be that $\theta_i = \tilde{\theta}_i$ for all $i=1,2,3$. This further implies $\theta^\top a = \tilde{\theta}^\top a$. We thus have shown that $a$ is $0$-dependent on $\{a_1, a_2, a_3\}$.
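The agreement argument above can be checked numerically. Below is a hedged sketch (the sampling scheme is our illustration, not the authors'): since agreeing on the standard basis vectors $a_1, a_2, a_3$ pins down the first three coordinates of $\theta$, any pair that agrees there must also agree on $a=(2,3,4,0)$, whose fourth coordinate is zero.

```python
import random

random.seed(0)
a = [2.0, 3.0, 4.0, 0.0]  # candidate action; fourth coordinate is 0

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Sample parameter pairs constrained to agree on e1, e2, e3: copy the
# first three coordinates and randomize only the unconstrained fourth.
max_gap = 0.0
for _ in range(1000):
    theta = [random.gauss(0, 1) for _ in range(4)]
    theta_tilde = theta[:3] + [random.gauss(0, 1)]
    max_gap = max(max_gap, abs(dot(theta, a) - dot(theta_tilde, a)))

# max_gap stays 0: a is 0-dependent on {e1, e2, e3}
```

The fourth coordinate of $a$ being zero is exactly why the disagreement in $\theta_4$ never shows up in $\theta^\top a$.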
It can be verified that for any finite function class $\mathcal{F}$, $d_{elu}(0)\leq |\mathcal{F}|$ always holds, though it is possible for some infinite function class to have $d_{elu}(0)=\infty$.
Note that by the definition of eluder dimension, $d_{elu}(\alpha)$ is decreasing in $\alpha$. That is why we choose to state Theorems 4.1, 6.2, and 6.3 with $d_{elu}(0)$. In these lower bounds, our construction ensures $d_{elu}(0)\leq d$, which implies $d_{elu}(\alpha)\leq d$ for all $\alpha\geq 0$. This ensures that our lower bounds simultaneously hold for any choice of $\alpha$.
[Russo and Van Roy, 2014] Eluder Dimension and the Sample Complexity of Optimistic Exploration. (link: https://web.stanford.edu/~bvr/pubs/Eluder.pdf)
Strengths: - The proposed problem is interesting and the result also answers the conjecture that $\tilde{O}(\sqrt{A\Lambda \log|\mathcal{F}|}+A\log |\mathcal{F}|)$ is not true, when we incorporate the variance into the algorithm design. The eluder dimension plays an important role in this bound and has been used to characterize the bounds.
- The paper studies three different cases and provides sufficient insights in this line of problems. While there are some gaps between the upper and lower bounds, the discussions are thorough and can lead to further research.
Weaknesses: See the questions section
Technical Quality: 3
Clarity: 4
Questions for Authors: - Can the authors summarize the main technical novelty used in the paper?
- Concerning the third case, learning with the model class, the authors indicate that the variance information is not required as input in line 105. Does it mean that, given the model, the context, and the action, the variance $\sigma_{M_t}^2(x_t,a)$ is revealed, as presented in line 7 of Algorithm 3? If we only assume that the variance is unknown but depends only on the context and the arm, can the algorithm be modified to estimate the variance on the fly, like [1]?
- The lower bound result Theorem 6.2 is obtained via instances with the same variance. While the lower bound is indeed a minimax lower bound, as this paper is considering the role of the variance, is it possible to derive a lower bound for the case where the variances are heterogeneous?
[1] Jourdan, M., Rémy, D., & Emilie, K. Dealing with unknown variances in best-arm identification. International Conference on Algorithmic Learning Theory. PMLR. 2023.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *Can the authors summarize the main technical novelty used in the paper?*
We have the following technical contributions.
1. Algorithm design through checking disagreement: Our algorithms in Sec 4 and Sec 6 decide whether to use inverse gap weighting based on the degree of disagreement in the function confidence set. Such a design is, to our knowledge, crucial to obtain the tight bound and not seen in previous work. The analysis framework for this algorithm is also new.
2. Refined online regression oracle: The choice of the online regression oracle (i.e., Prod) in Sec 4 and its associated analysis are non-trivial. The usefulness of such an online regression oracle has not been discovered before.
3. In Sec 6, we adapt the inverse-gap weighting technique to learning from model class. This was mentioned as a future work by [Wang et al., 2024]. In this case, we identified the Gaussian-noise condition under which tight bound can be obtained without revealing variance. We believe this is an important step towards resolving the general case.
4. In Sec 6, we identify the important difference between the achievable bounds using $\sum_{t=1}^T \sigma(x_t,a_t)^2$ and $\sum_{t=1}^T \max_a \sigma(x_t,a)^2$ as the total variance measure. This answers the open questions in [Wang et al., 2024]. Similarly, in Sec 4 and 5, we identify the difference between the achievable bounds under strong and weak adversaries.
- *Concerning the third case, learning with the model class, the authors indicate that the variance information is not required as input in line 105. Does it mean that the variance, given the model, the context and the action, the variance $\sigma_{M_t}^2(x_t,a)$ is revealed, as presented in line 7 in Algorithm 3?*
Yes, the variance will be revealed.
- *If we only assume that the variance is unknown but it only depends on the context and the arm, can the algorithm be modified such that it can work with estimating the variance on the fly, like [1]?*
It would be hard to estimate in general. For instance, if the context space is huge and no context is repeated twice, then without a model class there is no way to generalize the variance information, since the variance can be arbitrary at each step. So estimating the variance on the fly is not possible. On the other hand, if the context space is small, estimating the variance might be possible, but we suspect that would render the regret bound dependent on the size of the context space.
- *The lower bound result Theorem 6.2 is obtained via instances with the same variance. While the lower bound is indeed a minimax lower bound, as this paper is considering the role of the variance, is it possible to derive a lower bound for the case where the variances are heterogeneous?*
Thanks for your suggestion. Our lower bounds can be adapted to any collection of variances (with no ordering) through the following trick. For any given collection of variances, we divide the variances by doubling scales into $\log T$ categories. For the scale with the largest cumulative variance, we arrange the variances of this scale to all appear first and apply our lower bound to these steps. By the pigeonhole principle, our lower bound is worsened by a factor of at most $\log T$. Currently, our lower bounds do not extend to cases where the sequence of variances is given. This is an interesting direction for future research.
[1] Jourdan, M., Rémy, D., & Emilie, K. Dealing with unknown variances in best-arm identification. International Conference on Algorithmic Learning Theory. PMLR. 2023.
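The doubling-scale bucketing step of the trick above can be sketched as follows (a hedged illustration; the bucketing function and example variances are ours, not from the paper): bucket $k$ holds variances in $[2^k, 2^{k+1})$, and the scale with the largest cumulative variance carries, by pigeonhole, at least a $1/\log T$ fraction of the total.

```python
import math

def doubling_buckets(variances):
    """Group variances by doubling scale: bucket k holds v in [2^k, 2^(k+1))."""
    buckets = {}
    for v in variances:
        k = math.floor(math.log2(v))
        buckets.setdefault(k, []).append(v)
    return buckets

variances = [0.5, 0.6, 1.1, 1.9, 3.0, 3.5, 7.9]
buckets = doubling_buckets(variances)
# The scale with the largest cumulative variance is the one the lower
# bound is applied to after rearranging its variances to appear first.
best_scale = max(buckets, key=lambda k: sum(buckets[k]))
```

With roughly $\log T$ non-empty buckets, applying the lower bound only to the heaviest bucket loses at most that logarithmic factor.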
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response! Please consider incorporating the discussions of lower bound under the heterogeneous variances scenario into their manuscript.
Regarding the question that the variances are revealed, I still believe this setup might be strong. I understand that when the context space is large, the estimation of the variance may not be possible. From another point of view, this indicates the knowledge of the variance is unavoidable in this work. Therefore, can the authors give practical scenarios where the variance information is given to justify their setup?
---
Rebuttal 2:
Comment: Thanks for the feedback. First, we agree that the assumption in Section 4 that the variance is revealed is not ideal, and our ultimate goal is to remove it (some initial attempts have been made in Appendix G). At the current stage, however, this assumption is necessary for our technique, and we hope that our technique can inspire future work that obtains variance-agnostic results.
Still, there are cases where the learner is able to obtain variance information / variance estimation before making decisions, and the use of our algorithm can be justified.
For example, our algorithm works well when the variance is upper bounded by a fixed quantity in all rounds and is much smaller than the reward range. This could model recommendation systems where the reward comes from user ratings. Say the range of the rating is 0 to 10, but the user feedback has some noise and is modeled to lie within (mean - 1.5, mean + 1.5). In this case, the sigma in our algorithm can be chosen as 3/10, which significantly improves on the existing regret bounds of variance-unaware algorithms (e.g., SquareCB).
It is also possible that the learner can perform context-aware variance estimation, and plug it into our algorithm. For example, in clinical trials, the context may include information like the type of the drug and the age of the patient. We could create a finite number of categories for these contexts, and estimate the variance for each of them using historical data. This way, we can obtain variance estimation for given contexts before making decisions, and then apply our algorithm.
---
Rebuttal Comment 2.1:
Comment: Thank the authors for their response! I do not have any further questions.
Strengths: 1. The paper studies contextual bandits with general function approximation. The proved lower bound dependent on the Eluder dimension and variance information is very novel.
2. The paper provides a comprehensive analysis for different settings, e.g, weak/strong adversaries, and distributional settings. The proved results are insightful and optimal in most cases.
3. The presentation is very clear, including comparison with previous works.
Weaknesses: 1. In the known variance section, this paper only focuses on a regime where $A \le d_{elu} \le \sqrt{AT}$. The case $d_{elu} \le A$ is also very important, especially when the action set is large and function approximation is beneficial. In such cases, the proved upper bound and lower bound do not match, so the contribution in this section (a matching upper bound when the variance is known) is a bit overclaimed.
2. Some missing related works.
More works on variance-dependent results of heteroscedastic bandits:
[1] Zhou et al. Nearly minimax optimal reinforcement learning for linear mixture markov decision processes, Colt 2021
[2] Zhou and Gu. Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs Neurips 2022
[3] Zhao et al. Optimal Online Generalized Linear Regression with Stochastic Noise and Its Application to Heteroscedastic Bandits
3. This description of "checking disagreement" (Line 166) is not very accurate. A similar definition of equation (4) has been used in previous works as weights ([4] [5]), bonus functions ([6][7]), and selection rules ([8][9]). I suggest adding more discussions on this.
[4] Agarwal et al. VOQL: Towards optimal regret in model-free RL with nonlinear function approximation. COLT 2023
[5] Ye et al. Corruption-robust algorithms with uncertainty weighting for nonlinear contextual bandits and markov decision processes. ICML2023
[6] Di et al. Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning, ICLR 2024
[7] Huang et al. Horizon-Free and Instance-Dependent Regret Bounds for Reinforcement Learning with General Function Approximation AISTATS
[8] Gentile et al. Fast Rates in Pool-Based Batch Active Learning ICML 2024
[9] Zhao et al. A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation
4. Some minor errors: Line 12 worst case over what?
In the proof of Lemma E.5, there is an abuse of notation with $\epsilon$: it represents both the Gaussian noise and a changeable parameter for $\sigma$.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is it possible to prove a variance-dependent upper bound with $d_{elu}$ dependence in the first term of Theorem 4.2?
2. What difficulty did you meet when trying to recover Zhao et al. (2023)’s result for unknown variance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations and societal impact of this work have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for this useful feedback and these questions.
Answers to Weaknesses:
- *Mismatch upper and lower bound for large action space cases*
We do have such results for the case $d_{\mathrm{elu}} < A$ (an upper bound of $\tilde{\mathcal{O}}(d_{\mathrm{elu}}\sqrt{T\sigma^2\log|\mathcal{F}|})$); please see Theorem 5.2 in our paper for more details. We admit that this upper bound does not match the provided lower bounds, and we will change our presentation accordingly in the next version.
- *Some missing related works. More works on variance-dependent results of heteroscedastic bandits:*
Thanks for pointing out these related references. We will add them in the next version of our paper.
- *This description of "checking disagreement" (Line 166) is not very accurate. A similar definition of equation (4) has been used in previous works as weights ([4] [5]), bonus functions ([6][7]), and selection rules ([8][9]). I suggest adding more discussions on this.*
Here 'checking disagreement' means checking whether there exists an action on which two functions in the function set deviate a lot. Thanks for pointing out these references. We will add them in the next version of our paper.
- *Some minor errors: Line 12 worst case over what?*
These are words meant to be deleted. We will delete them in our next version.
Answers to Questions:
- *Is it possible to prove a variance-dependent upper bound with dependence in the first term of Theorem 4.2?*
Yes, we can obtain such an upper bound: $\tilde{\mathcal{O}}(d_{\mathrm{elu}}\sqrt{T\sigma^2\log|\mathcal{F}|})$, which serves as a special case of Theorem 5.2 in our paper.
- *What difficulty did you meet when trying to recover Zhao et al. (2023)’s result for unknown variance?*
The main difficulty we met is that we are dealing with bandits with general function approximation, compared to the linear bandit setting of Zhao et al. (2023). Technically, to prove our results we must apply the general Freedman's inequality, rather than the Freedman's inequality specialized to the linear setting.
---
Rebuttal Comment 1.1:
Comment: Thanks for your useful responses. I will keep my score. | Summary: This paper studies the problem of obtaining variance-aware regret bounds for realizable contextual bandits with general function approximation. They show that, despite intuition suggested by prior works, there is an unavoidable dependence on the Eluder dimension in getting variance-aware bounds. They investigate several different settings (depending on the strength of the adversary's environment design and the agent's knowledge of distributional assumptions) and show minimax lower bounds each involving the Eluder dimension. They also provide matching regret upper bounds under various assumptions on knowledge of distributions or variances.
Strengths: * The problem is well-motivated, the surrounding literature is well-explained, and the authors take care to explain why the main message is surprising, while also explaining the intuition for the unavoidable dependence on Eluder dimension.
* The algorithms and needed modifications are well-explained with authors taking care to make simplifying assumptions to enhance presentation and understanding.
* Along the way, some new technicalities had to be developed such as employing a square regression oracle with variance-based regret employing a more complicated action selection scheme based on checking discerning actions.
Weaknesses: * The main weakness is that the only upper bound results matching the lower bounds require some knowledge of the variance at every single round or under a stronger model class assumption (which then involves picking up a dependence on the size of the model class of variances).
* There could be more discussion of other function classes beyond the generic finite regressor class ${\cal F}$. For instance, how do the lower bounds and results here about variance-based bounds and Eluder dimension compare with the understanding for linear function classes, high-dimensional classes (like those mentioned for SquareCB in Foster and Rakhlin, 2020), neural network, or non-parametric classes? Do the lower bounds hold in greater generality?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to strengths and weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No broader impact concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for this useful feedback and these questions.
Answers to Weaknesses:
- *The main weakness is that the only upper bound results matching the lower bounds require some knowledge of the variance at every single round or under a stronger model class assumption (which then involves picking up a dependence on the size of the model class of variances).*
We obtained matching upper and lower bounds for the variance-aware cases in our paper. For the variance-agnostic case, our upper and lower bounds differ by only a factor of $\sqrt{d_{\mathrm{elu}}}$. Even with this gap, the dependence on the variances $\sqrt{\sum_{t=1}^T \sigma_t^2}$ is tight, and our bounds are sufficient to show the separation between the variance-aware and variance-agnostic cases.
- *There could be more discussion of other function classes beyond the generic finite regressor class ${\cal F}$. For instance, how do the lower bounds and results here about variance-based bounds and Eluder dimension compare with the understanding for linear function classes, high-dimensional classes (like those mentioned for SquareCB in Foster and Rakhlin, 2020), neural network, or non-parametric classes? Do the lower bounds hold in greater generality?*
Although we did not discuss infinite function classes in our paper, our algorithms and theoretical guarantees can generalize to them. For example, for linear classes, using a structure similar to Algorithm 1 in our paper and a proper regression oracle for the linear class, similar to the one in Foster and Rakhlin (2021), we can obtain an upper bound of $\tilde{\mathcal{O}}(\sqrt{dA\sum_{t=1}^T\sigma_t^2})$. For generalized linear classes, we can get an upper bound of $\tilde{\mathcal{O}}(\sqrt{T\sum_{t=1}^T\sigma_t^2})$. And for other classes, including neural networks and nonparametric classes, we can get similar variance-dependent bounds as long as there are proper variance-aware regression oracles.
As for the lower bound, since it is constructive, i.e., we show that there exists a function class $F$ that achieves the lower bound, we do not know how to generalize our results to other function classes. Obtaining an instance-dependent lower bound for all possible function classes is an interesting direction for future research.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. It seems the strong adversary case actually covers the unknown variance situation so the results of the paper are stronger than I initially thought, even though there is higher dependence on $d$. As such, I am satisfied by the discussion and raise my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Instance-adaptive Zero-shot Chain-of-Thought Prompting | Accept (poster) | Summary: This paper explores the inner interactions among three components (i.e., question, prompt, rationale) in zero-shot CoT reasoning through saliency score analysis, and discovers the distinct characteristics of good and bad reasoning in terms of information flow. Based on the above findings, the authors propose IAP, an instance-level adaptive prompting strategy designed to enhance CoT reasoning by selecting, from a given set of prompts, an appropriate prompt that can guide LLMs to reason correctly for each question. To demonstrate the effectiveness of the proposed method, the authors conducted extensive experiments on the LLaMA-3 8B, LLaMA-2 13B, and Qwen14B models across Math, Logic, and Commonsense tasks.
Strengths: * **The motivation of this paper is quite intuitive.** Through the example in Figure 1, the authors illustrate that an instance-wise zero-shot CoT prompt is more plausible for better reasoning and may achieve a cap-breaking performance compared to the task-level optimal prompt, which is easy to understand.
* **Using Neuron saliency score to analyze the information flow of the question, prompt, and rationale is innovative and promising.** The authors conduct both qualitative and quantitative analyses on good-bad reasoning instances, discovering different patterns of information flow in good-bad reasoning. They further delve into a deeper analysis from the perspectives of Layer and Head, yielding valuable conclusions.
* **Extensive experiments are conducted to validate the effectiveness of proposed method.** The authors test the proposed IAP and comparison methods on LLaMA-3 8B, LLaMA-2 13B, and Qwen14B across Math, Logic, and Commonsense reasoning tasks.
Weaknesses: * **The proposed method is not clearly explained, some details are missing.** For instance, in Sequential Substitution (IAP-ss), it is not specified what the corresponding threshold is or how it was obtained. Similarly, in Majority Vote (IAP-mv), there is no explanation provided for how many top maximum scores are preserved. The lack of details in the proposed methods can be quite confusing.
* **The experimental setup is unfair.** For the main "effective" method proposed, IAP-mv, which obtains the final results through Majority Vote (similar to self consistency), it is unreasonable to compare it with baselines that only perform inference once. The improvement brought about by this method could very well be the result of ensemble inference from multiple reasoning paths [1][2][3], rather than the strategy implemented by the author's previous good-bad reasoning findings.
* **Some experimental conclusions are missing or somewhat unreasonable.** For instance, in the experiment of Consistency & Complementary, the authors only provide the experimental setup and show the corresponding results in Table 2 without giving any conclusions or analysis. Furthermore, in the analysis experiment for Efficacy, the author demonstrated through Figure 6 that while IAP-ss also incurs additional overhead, its accuracy compared to 0-shot also declines to some extent. Therefore, the conclusion stated by the authors that "the two IAP strategies can be employed as trade-offs in different demand prioritization applications" is not in line with the facts, as IAP-ss does not provide any gain compared to the baseline 0-shot.
[1] Making Large Language Models Better Reasoners with Step-Aware Verifier
[2] Diversity of Thought Improves Reasoning Abilities of LLMs
[3] Answering Questions by Meta-Reasoning over Multiple Chains of Thought
Technical Quality: 3
Clarity: 3
Questions for Authors: See weeknesses.
Although this paper performs quite well in the section of Information Flow Analysis on Zero-shot CoT, there are numerous flaws in its experiments that fail to convince me. If the authors can address my concerns, I would consider raising my score.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I think the authors have addressed their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive advice and comments, which have greatly improved our manuscript, and we are glad to reply to your invaluable suggestions and questions.
### **Response to Weaknesses**
**Weakness 1:**
The proposed method is not clearly explained, some details are missing.
**Response to Weakness 1:**
For IAP-ss, we obtained threshold values for distinct LLMs on different training sets: we computed the overall synthesized scores (defined in Eq. (4) in Section 3) to separate the good and bad reasoning paths, and adopted the threshold that classifies reasoning well. For example, the threshold of LLaMA-3 8B on GSM8K is 5.5e-6; identifying the thresholds for different LLMs on different datasets follows the same simple procedure and does not need much time. In practice, we considered reasoning with a value higher than the threshold as good, and otherwise bad. We tried different thresholds, and the best performance is shown in the following table.
| threshold | accuracy |
| --- | --- |
| 7.0e-6 | 59.82 |
| 6.0e-6 | 62.77 |
| 5.0e-6 | 64.67 |
| 4.0e-6 | 62.40 |
| 5.5e-6 | **65.36** |
We can see that an improper threshold, whether higher or lower, degrades the performance of IAP-ss. This is because higher thresholds tend to recognize some good reasoning instances as bad ones, while lower thresholds may overlook some bad reasoning.
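As an illustration of how such a threshold acts as a good/bad classifier over synthesized scores (the scores below are hypothetical; only the 5.5e-6 threshold value is taken from the rebuttal):

```python
def classify_reasoning(scores, threshold=5.5e-6):
    """Label each reasoning path good (score above threshold) or bad."""
    return ["good" if s > threshold else "bad" for s in scores]

# Hypothetical synthesized saliency scores for four reasoning paths.
scores = [7.2e-6, 3.1e-6, 5.6e-6, 4.9e-6]
labels = classify_reasoning(scores)
```

Raising the threshold (e.g. to 7.0e-6) would relabel the borderline path at 5.6e-6 as bad, matching the pattern in the accuracy table above.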
As for IAP-mv, we selected the top-k (a hyper-parameter, k=3) values and adopted the majority result as the final result. We also tried other k values; k=3 is the best among them with LLaMA-3 8B, and we report some results on 3 datasets in the following table.
| k | MMLU | C-Judge. | T-Obj |
| --- | --- | --- | --- |
| 1 | 52.98 | 15.51 | 36.80 |
| 5 | 55.96 | 18.72 | 40.00 |
| 3 | **59.65** | **19.25** | **42.40** |
k=3 achieves the best performance on most datasets; therefore, we selected the hyper-parameter k=3 as the default value in the paper. We are sorry for the confusion in this part, and we have elaborated these details in the new version.
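A sketch of the top-k majority vote as we read it (the prompt scores and answers below are invented for illustration; the real synthesized scores come from Eq. (4) in the paper):

```python
from collections import Counter

def iap_mv(prompt_scores, prompt_answers, k=3):
    """Keep the k prompts with the highest synthesized saliency score,
    then majority-vote over the answers those prompts produced."""
    top_k = sorted(prompt_scores, key=prompt_scores.get, reverse=True)[:k]
    votes = Counter(prompt_answers[p] for p in top_k)
    return votes.most_common(1)[0][0]

# Hypothetical per-prompt scores and answers for one question.
scores = {"p1": 6.1e-6, "p2": 2.3e-6, "p3": 5.8e-6, "p4": 5.9e-6}
answers = {"p1": "42", "p2": "17", "p3": "42", "p4": "35"}
```

Here the low-scoring prompt p2 is filtered out before voting, which is what distinguishes this from a plain majority vote over all prompts' final answers.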
**Weakness 2:**
The experimental setup is unfair.
**Response to Weakness 2:**
We are sorry for not introducing the comparable approaches thoroughly in the baseline paragraph of Section 4.1, and we would like to replace the current version with the new one:
> In practice, the OPPR optimizes a meta prompt on different datasets with distinct LLMs for multiple rounds to guide LLMs to produce better prompts, requiring numerous optimization steps (i.e., inferences). As for Self-Discover, it consists of selecting, adapting, implementing, and reasoning, which also needs multi-round inference in every step.
Given the above description, we believe the comparison between IAP and these methods is fair.
The gains of IAP-mv do not simply come from multiple reasoning paths: we computed the synthesized saliency scores (defined in eq. (4), as an application of the earlier analysis) of all prompt candidates and conducted a majority vote based on these scores. For a considerable proportion of questions, most prompts guide the LLM to wrong results and only a few to right ones; IAP-mv can handle such cases, thereby outperforming a majority vote over the final results. We compare IAP-mv and the direct majority vote over results in the following table:
| Method | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| Majority Vote. (Qwen 14B) | 28.22 | 51.33 | 26.10 | 1.44 | 46.52 | 52.46 |
| IAP-mv (Qwen 14B) | **62.81 (+34.59)** | **73.33 (+22.00)** | **29.95 (+3.85)** | **25.60 (+24.16)** | **65.68 (+19.16)** | **78.95 (+26.49)** |
| Majority Vote. (LLaMA-3 8B) | 52.54 | 74.33 | 17.06 | 12.60 | 62.41 | 52.53 |
| IAP-mv (LLaMA-3 8B) | **66.34 (+13.80)** | **77.33 (+3.00)** | **19.25 (+2.19)** | **42.40 (+29.80)** | **68.39 (+5.98)** | **59.65 (+7.12)** |
The results show that the plain majority vote performs poorly and that IAP-mv outperforms it by a large margin. For a given question, most prompts can lead the LLM to wrong answers, with only a few correct ones, i.e., such methods cannot recognize good/bad reasoning. IAP-mv handles this through the analysis of information flow in reasoning, i.e., it can differentiate good and bad reasoning. This validates the effectiveness of our proposed strategy, rather than a benefit from merely having multiple reasoning paths.
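The selection-then-vote procedure described above can be sketched as follows. This is an illustrative reconstruction under our own assumptions (data layout, function name), not the actual implementation: prompts are ranked by their synthesized saliency scores (eq. (4)), the top-k answers are kept, and the majority among them wins.

```python
from collections import Counter

# Hedged sketch of IAP-mv as described above: rank prompt candidates by
# synthesized saliency score, keep the top-k answers, majority-vote them.
# Scores and answers below are made up for illustration.

def iap_mv(scored_answers, k=3):
    """scored_answers: list of (saliency_score, answer) pairs, one per prompt."""
    top_k = sorted(scored_answers, key=lambda p: p[0], reverse=True)[:k]
    votes = Counter(answer for _, answer in top_k)
    return votes.most_common(1)[0][0]

# A plain majority vote over all prompts can be misled when most prompts
# produce the wrong answer; restricting the vote to the k highest-scoring
# prompts recovers the correct one.
answers = [(6.0e-6, "17"), (5.8e-6, "17"), (5.1e-6, "23"),
           (3.0e-6, "23"), (2.5e-6, "23"), (2.0e-6, "23")]
print(iap_mv(answers, k=3))  # 17 (plain majority over all six would say 23)
```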
**Weakness 3:**
Some experimental conclusions are missing or somewhat unreasonable.
**Response to Weakness 3:**
We apologize for not providing an extensive explanation of the consistency and complementarity experiments; we have added the following to the 1st paragraph of Section 4.3 in our new manuscript:
> We can observe that the instructive group outperforms the other groups, which comes from the strong base performance of the instructive prompts. Furthermore, combining the instructive group with either of the other two continues to improve the performance of both, demonstrating that complementarity is critical for IAP-mv and that IAP-mv can take advantage of these complementary prompts.
In the Efficacy part, we intended to express that IAP-ss and IAP-mv can be chosen under different time budgets, and we apologize that the figure we selected caused confusion. In fact, IAP-ss outperforms the task-level optimal prompts for most LLMs and datasets, as shown in Table 1, showing that one can still employ different strategies to achieve better zero-shot prompting. We have also conducted additional timing experiments and plan to replace the current figure with a new one in the updated manuscript.
For some common questions, we made a unified reply in the Author Rebuttal part (see at top), which you can refer to.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal by Authors
Comment: Thank you for your responses.
After reading your rebuttal, I still have the following two major concerns:
(1) IAP-ss needs to obtain the corresponding threshold on the training set. For the realistic inference scenarios, there is often no training set for parameter tuning, making the practical application scenarios of IAP-ss very limited.
(2) The experiment setup for comparing self-consistency. The rebuttal does not provide a specific implementation explanation for self-consistency, which makes me confused. I hope the authors can give further explanations.
---
Reply to Comment 1.1.1:
Comment: Thanks a lot for your instant reply, and we are glad to respond to your concerns.
### **Concern 1**
Your question is of practical value: IAP-ss does need to search for thresholds on corresponding training sets. Before answering, we would like to briefly recap IAP and then provide a detailed explanation.
IAP consists of IAP-ss and IAP-mv. IAP-mv selects the top-k prompts with the highest saliency scores. Though k is also a hyperparameter, it is a discrete value and is easier to obtain (the simplest way is to observe it directly from Figure 4 in the paper). Besides, as mentioned in our previous rebuttal, under our setup of 9 prompt candidates, k can be set to 3 across different datasets to achieve consistent improvements. Hence, extending IAP-mv to real scenarios is relatively straightforward.
Regarding your IAP-ss question, we propose the idea of threshold transfer, i.e., applying an existing threshold to other datasets. To this end, we conducted IAP-ss experiments with the GSM8K threshold to verify the transferability of thresholds.
| threshold (LLaMA-3 8B) | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ---- | ---- | ---- | ---- | ---- | ---- | --- |
| single optimal prompt | 64.52 | 76.00 | 16.04 | 40.00 | 64.95 | 55.79 |
| from own training set | 66.43 | **77.33** | 16.57 | 38.80 | **65.68** | **56.49** |
| from GSM8K training set | **66.43** | 74.00 | **17.64** | **40.80** | 64.95 | 55.09 |
In the above table, we treated all the other datasets as real-world scenarios (no training sets, no knowledge of the task-level best prompt). The results based on the GSM8K threshold approach or even surpass the results obtained with each dataset's own threshold (this is reasonable, as the threshold is a continuous value that usually does not span a large tuning range; also, in few-shot prompting, demonstrations from GSM8K are commonly transferred to other datasets, indicating adaptivity across datasets) or with the best prompts, even though we may not know which prompt is task-level best. Therefore, we conclude that IAP-ss remains a viable choice without training sets.
Furthermore, for practical scenarios, we recommend choosing a dataset whose setting is more similar to the specific context, or drawing on methods such as online learning.
### **Concern 2**
We are sorry for not explaining the zero-shot Majority Vote (Zero-shot-mv for short) experiments in more detail earlier. We will first explain the details of the Zero-shot-mv experiments in the former rebuttals, then clarify the self-consistency (SC) [4] you mentioned, and finally present supplementary experiments and further discussion.
As introduced in the former rebuttals, Zero-shot-mv performs inference on the 9 prompts individually and then applies majority voting to the 9 results, which is a common ensemble method. You mentioned that SC uses majority voting, but SC expands the decoded reasoning paths by modifying greedy sampling, so we refer to it as SC-mv. Their paper also stated: "Self-consistency is completely compatible with other ensemble strategies", as shown in Table 7 of that paper. In addition, the papers you cited before (for example, [1][2]) further expand prompt diversity, which is also compatible with Zero-shot-mv.
Returning to Zero-shot-mv, we conducted additional experiments based on the suggestions of **Reviewer WuYd**, and we would like to explain them in more depth for you. We first ran Zero-shot-mv with the other 7 zero-shot prompts (#1-7), and further selected the top-3 prompts with the highest accuracy (fixed 3 prompts) for the corresponding LLMs and datasets.
| Method (LLaMA-3 8B) | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| single optimal prompt | 64.52 | 76.00 | 16.04 | 40.00 | 64.95 | 56.67 |
| Zero-shot-mv (all prompts) | 52.54 | 74.33 | *17.06* | 12.60 | 62.41 | 52.53 |
| Zero-shot-mv (#1-7) | 57.82 | *77.00* | *18.13* | 20.80 | *65.03* | 41.23 |
| Zero-shot-mv (fixed 3 prompts) | *65.10* | *76.67* | *18.72* | 33.60 | *67.65* | *56.84* |
| IAP-mv | **66.34** | **77.33** | **19.25** | **42.40** | **68.39** | **59.65** |
The **bolded numbers** are the best results; *italic* numbers outperform the task-level optimal prompt.
Taking LLaMA-3 8B as an example, the results of Zero-shot-mv (#1-7) improve considerably over the all-prompts setting, indicating that some prompts were harmful to consistency. Also, Zero-shot-mv (fixed 3 prompts) surpasses, or is at least comparable to, the best task-level single prompts on most datasets, showing that with a better prompt combination Zero-shot-mv can improve further. However, IAP-mv still outperforms all of them, demonstrating that IAP-mv can select instance-adaptive good prompts by analyzing the information flow, which is more adaptive and effective than the fixed Zero-shot-mv. We also depicted a schematic case in the discussions with **Reviewer WuYd** to illustrate this, which you may refer to.
[4] Self-Consistency Improves Chain-of-Thought Reasoning.
---
Rebuttal 2:
Comment: Thank you for your review,
The authors responded to your initial review. Please be sure to read it and reply indicating the extent to which the authors have addressed your initial questions and concerns.
Best,
AC
---
Rebuttal 3:
Comment: Thanks for your detailed explanation. My major concern 2 has been solved, and I have already adjusted my score accordingly.
---
Rebuttal Comment 3.1:
Comment: Thanks a lot for your instant reply, we are glad to hear that our explanations addressed your concerns, and we truly appreciate your input. | Summary: The paper introduces an instance-adaptive prompting algorithm for zero-shot Chain-of-Thought (CoT) reasoning in large language models (LLMs). Traditional task-level prompts are insufficient for all instances, so the authors propose a strategy that differentiates good and bad prompts based on information flow from the question to the prompt and rationale. Using neuron saliency score analysis, the study reveals that successful reasoning requires prompts to aggregate semantic information from the question. The proposed instance-adaptive prompting strategy (IAP) demonstrates consistent improvements across multiple reasoning tasks with LLaMA-2, LLaMA-3, and Qwen models.
Strengths: - **Comprehensive Analysis**: Uses neuron saliency scores to understand information flow during reasoning.
- **Innovative Approach**: Tailors prompts to individual instances, improving upon uniform task-level prompts.
Weaknesses: - **Limited Scope of Neuron Saliency Score Analysis**: The neuron saliency score analysis in Section 2 is conducted on only one LLM (not stated in the paper; personal guess) and one dataset, GSM8K. More evidence and more diverse datasets are needed to support this analysis comprehensively. Additionally, Section 2.3, "Head Analysis," lacks a comparative study between good and bad reasoning instances.
- **Insufficient Experimental Details**: The experimental details provided are not sufficiently clear. Based on lines 263-264, my understanding is that IAP uses 9 different zero-shot CoT prompts to compute the S score. For IAP-ss, the process stops and uses the current result upon encountering the first prompt that meets the threshold. For IAP-mv, results from all 9 prompts are saved, and the top K are used for voting. This raises several questions: 1. How does the performance of IAP-mv compare to directly using a majority vote across the 9 prompt results? 2. What is the distribution of results for each method? For instance, which prompts tend to have higher S scores? This analysis could reveal conclusions such as "Let’s think step by step" being more suitable for math questions, while "Don’t think. Just feel." may be better for MMLU, aligning more closely with the instance-wise topic.
- **Figure 1 Inconsistency**: Figure 1 does not align with the rest of the paper. The figure and its description suggest addressing an overly complex problem, which is not the case in practice.
- **More Model Variants**: There is a need for experiments on larger model sizes to validate the findings.
- **Terminology Inconsistency**: The terminology used is inconsistent. In Figure 3, the terms "Good reasoning" and "Bad reasoning" are used, while in Figure 4, the terms change to "Good prompt" and "Bad prompt."
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your valuable reviews, and we appreciate the time and effort you have taken. Regarding the weaknesses and questions, we would like to elaborate and address your concerns on this work.
### **Response to Weaknesses**
**Weakness 1:**
Limited Scope of Neuron Saliency Score Analysis: The neuron saliency score analysis in Section 2 is conducted on only one LLM (not stated in the paper, personal guess) and one dataset, GSM8K. More evidence and diverse datasets are needed to support this analysis comprehensively. Additionally, Section 2.3, "Head Analysis," lacks a comparative study between good and bad reasoning instances.
**Response to Weakness 1:**
In our investigation, we analyzed the information flow with the same method across different LLMs and various datasets and found that the saliency-score phenomena are quite similar for all LLMs on most datasets, so we presented the analysis of Qwen-14B on GSM8K to keep the narrative subject consistent and to convey our conclusions. Similarly, the head analysis results for good and bad reasoning are consistent; we are sorry for not elaborating on that in this subsection. We apologize for not including more figures in the Appendix to make our analysis more complete and well-supported, and we have updated our manuscript accordingly. We also include a few samples of the analysis process in the pdf file attached in the Author Rebuttal (see at top).
**Weakness 2:**
Insufficient Experimental Details
**Response to Weakness 2:**
Your understanding of IAP is correct, and your suggestion of supplementing the majority vote experiment makes our work more complete. The comparison between our IAP-mv and the direct majority vote over answers is as follows:
| Method | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| Majority Vote. (Qwen 14B) | 28.22 | 51.33 | 26.10 | 1.44 | 46.52 | 52.46 |
| IAP-mv (Qwen 14B) | **62.81 (+34.59)** | **73.33 (+22.00)** | **29.95 (+3.85)** | **25.60 (+24.16)** | **65.68 (+19.16)** | **78.95 (+26.49)** |
| Majority Vote. (LLaMA-3 8B) | 52.54 | 74.33 | 17.06 | 12.60 | 62.41 | 52.53 |
| IAP-mv (LLaMA-3 8B) | **66.34 (+13.80)** | **77.33 (+3.00)** | **19.25 (+2.19)** | **42.40 (+29.80)** | **68.39 (+5.98)** | **59.65 (+7.12)** |
The results show that the plain majority vote performs poorly and that IAP-mv outperforms it by a large margin. For a given question, most prompts can lead the LLM to wrong answers, with only a few correct ones, i.e., such methods cannot recognize good/bad reasoning. IAP-mv handles this through the analysis of information flow in reasoning, i.e., it can differentiate good and bad reasoning, validating the effectiveness of our proposed strategy.
The most adopted prompts in IAP-mv for distinct LLMs on different datasets are the former task-level optimal ones (among all prompt candidates), since they are suitable for most instances in the corresponding datasets. We display the top-3 prompts for different LLMs on different datasets in the table below:
| Model | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| LLaMA-3 8B | #1, 6, 7 | #1, 2, 6 | #2, 4, 8 | #1, 2, 4 | #2, 4, 6 | #1, 2, 6 |
| Qwen 14B | #1, 3, 6 | #3, 4, 6 | #2, 3, 5 | #2, 3, 5 | #1, 6, 9 | #2, 4, 5 |
We can observe that the most adopted prompt candidates are the former task-level optimal ones, which is consistent with our analysis in the paper.
**Weakness 3:**
Figure 1 Inconsistency: Figure 1 does not align with the rest of the paper.
**Response to Weakness 3:**
Figure 1 offers a representative case illustrating that the worst task-level prompt can beat the optimal prompt on some instances, which is counterintuitive, as we discussed in the 2nd paragraph of the Introduction. This case encouraged us to investigate the inner mechanism of zero-shot CoT and further triggered us to propose the instance-level prompting strategy.
**Weakness 4:**
More Model Variants.
**Response to Weakness 4:**
In this paper, we conducted experiments on 8B, 13B, and 14B LLMs with different architectures (LLaMA-2, LLaMA-3, and Qwen), and the results validated our initial observation and analysis. The current evaluation across model sizes has provided a broad view of how models at different scales may perform under various prompt candidates. Nevertheless, your suggestion is valuable, so we conducted experiments with LLaMA-3 70B using the same 9 prompts as the main experiments together with IAP-mv; the table below shows the results:
| Prompt | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| #1 | 87.79 | 82.33 | 38.50 | 12.40 | 67.73 | 37.02 |
| #2 | 89.16 | 86.33 | 54.55 | 30.00 | 56.10 | 50.18 |
| #3 | 81.73 | 83.33 | 49.73 | 23.20 | 55.69 | 44.56 |
| #4 | 82.64 | 84.33 | 42.25 | 60.40 | 41.36 | 52.11 |
| #5 | 82.71 | 84.00 | 36.36 | 6.80 | 61.75 | 52.63 |
| #6 | 87.79 | 82.33 | 44.39 | 16.00 | 67.73 | 35.79 |
| #7 | 81.43 | 85.67 | 47.59 | 24.00 | 29.98 | 14.56 |
| #8 | 53.53 | 75.67 | 55.61 | 18.40 | 29.24 | 22.56 |
| #9 | 51.71 | 58.33 | 44.92 | 20.40 | 36.94 | 43.33 |
| IAP-mv | **89.84** | **87.33** | **56.20** | **62.00** | **69.04** | **54.39** |
In this table, IAP-mv demonstrates its effectiveness on LLaMA-3 70B, consistent with the results on LLaMA-3 8B and Qwen 14B, extending the evidence for IAP-mv across model scales.
**Weakness 5:**
In Figure 3, the terms "Good reasoning" and "Bad reasoning" are used, while in Figure 4, the terms change to "Good prompt" and "Bad prompt."
**Response to Weakness 5:**
We are sorry for not using the same statement in different figures, and we have rectified such issues to keep the claims consistent in the updated manuscript.
For some common questions, we made a unified reply in the Author Rebuttal part (see at top), which you can refer to.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. I have reviewed your feedback and noted that most of my concerns have been addressed.
However, I observed an issue with your supplementary experiment on IAP-mv VS majority voting, which shows unusual performance. According to Table 1 in the paper and the new table, nearly every score for Majority Vote is around the lower bound of the baseline scores in Table 1 with 9 prompt candidates. In my personal experience, the Majority Vote should not exhibit this behavior. Therefore, I am wondering if there might be a mistake in your experimental setup or if prompts #8 and #9 negatively influence the Majority Vote method. Could you please check the Majority Vote experiment again and provide updated scores excluding prompts #8 and #9?
---
Reply to Comment 1.1.1:
Comment: Thanks a lot for your instant reply, and we are glad to hear that most of your concerns were addressed.
You raised a great question; the majority vote is quite an important tool, and we have re-checked the settings of the supplementary experiments. Your judgment is right: the zero-shot-prompt-based majority vote (Zero-shot-mv for short) shows some unusual results, arising from the inclusion of extra bad prompts (#8 and #9). Following your advice, we conducted experiments with the other 7 zero-shot prompts (#1-7), and the results improved; we further selected the top-3 prompts with the highest accuracy (top-3 prompts) for the corresponding LLMs and datasets. With a better prompt combination, the performance of Zero-shot-mv can be improved further. Taking LLaMA-3 8B as an example, Zero-shot-mv (with top-3 prompts) surpasses, or is at least comparable to, the best task-level single zero-shot prompts on most datasets. However, IAP-mv still outperforms all of them, demonstrating that IAP-mv can select instance-adaptive good prompts by analyzing the information flow, which is more adaptive and effective than the fixed Zero-shot-mv. Next, we provide a detailed explanation of the experiments.
| Method (LLaMA-3 8B) | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| single optimal prompt | 64.52 | 76.00 | 16.04 | 40.00 | 64.95 | 56.67 |
| Zero-shot-mv (all prompts) | 52.54 | 74.33 | *17.06* | 12.60 | 62.41 | 52.53 |
| Zero-shot-mv (#1-7) | 57.82 | *77.00* | *18.13* | 20.80 | *65.03* | 41.23 |
| Zero-shot-mv (fixed 3 prompts) | *65.10* | *76.67* | *18.72* | 33.60 | *67.65* | *56.84* |
| IAP-mv | **66.34** | **77.33** | **19.25** | **42.40** | **68.39** | **59.65** |
| Method (Qwen 14B) | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| single optimal prompt | 60.50 | 72.00 | 28.34 | 23.20 | 63.23 | 76.84 |
| Zero-shot-mv (all prompts) | 28.22 | 51.33 | 26.10 | 1.44 | 46.52 | 52.46 |
| Zero-shot-mv (#1-7) | 57.98 | 55.33 | 27.50 | 8.80 | 57.70 | 63.86 |
| Zero-shot-mv (fixed 3 prompts) | *61.91* | 71.66 | *28.34* | 20.80 | **66.01** | 74.38 |
| IAP-mv | **62.81** | **73.33** | **29.95** | **25.60** | *65.68* | **78.95** |
The **bolded numbers** are the best results, and the *italic* numbers outperform the task-level optimal prompt.
### **Zero-shot-mv without misleading prompts**
This table demonstrates that without the misleading prompt candidates (#8 and #9), the performance of Zero-shot-mv improves compared to the all-prompts setting, indicating that some prompts are harmful to consistency, in line with your assumption. We present a schematic case to exemplify why Zero-shot-mv (#1-7) performs better than the all-prompts setting (suppose there are only two answers, right or wrong):
| Method | #1 | #2 | #3 | #4 | #5 | #6 | #7 | #8 | #9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zero-shot-mv(all prompts) | ✔ | ✔ | ✗ | ✔ | ✗ | ✗ | ✔ | ✗ | ✗ |
| Zero-shot-mv(#1-7) | ✔ | ✔ | ✗ | ✔ | ✗ | ✗ | ✔ | | |
We can see that Zero-shot-mv (all prompts) gets the wrong answer because wrong answers form the majority. For Zero-shot-mv (#1-7), once we eliminate the 2 misleading prompts, the answer ensemble yields the correct result.
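The schematic case above can be sketched numerically. This is an illustrative reconstruction: True/False stand in for right/wrong answers, following the check marks in the table, and the helper name is ours, not from the paper.

```python
from collections import Counter

# Sketch of the schematic case above: a majority vote over all 9 prompt
# answers is wrong, but dropping the two misleading prompts (#8, #9) flips it.
# True/False mirror the check marks in the table (prompts #1-#9).

all_prompts = [True, True, False, True, False, False, True, False, False]

def majority(votes):
    """Return the most common answer among the votes."""
    return Counter(votes).most_common(1)[0][0]

print(majority(all_prompts))      # False: 5 wrong vs 4 right
print(majority(all_prompts[:7]))  # True: 4 right vs 3 wrong once #8, #9 are removed
```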
### **Zero-shot-mv with fixed 3 prompts**
We further tested the top-3 prompts with the highest accuracy for the corresponding LLMs and datasets (fixed 3 prompts) and found that, with better prompt candidates, Zero-shot-mv can improve further. Even so, IAP-mv still outperforms Zero-shot-mv (fixed 3 prompts), since IAP-mv can take advantage of both consistency and complementarity among the prompts at the instance level. Here, we present another schematic case to exemplify why IAP-mv can outperform Zero-shot-mv (fixed 3 prompts, #1-3 in the table below; suppose there are only two answers, right or wrong):
| Method | #1 | #2 | #3 | #4 | #5 | #6 | #7 | #8 | #9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zero-shot-mv(all prompts) | ✔ | ✗ | ✗ | ✗ | ✗ | ✗ | ✔ | ✔ | ✗ |
| Zero-shot-mv(fixed 3 prompts) | ✔ | ✗ | ✗ | | | | | | |
| IAP-mv | ✔ | ✗ | | | | | | ✔ | |
In this case, IAP-mv gets the correct answer through finer-grained, dynamic recognition of suitable prompts at the instance level, thereby outperforming the fixed 3 prompts.
Furthermore, Zero-shot-mv depends on picking fixed prompts, which requires prior knowledge and is inflexible, so such methods are hard to apply in real-world scenarios. In contrast, IAP-mv is more generalizable and can be applied to any new task adaptively, without handcrafted selection.
---
Rebuttal 2:
Comment: Thank you for your review,
The authors responded to your initial review. Please be sure to read it and reply indicating the extent to which the authors have addressed your initial questions and concerns.
Best,
AC | Summary: This paper analyzed the mechanism of the large language models (LLMs) zero-shot Chain-of-Thought (CoT) reasoning, in which the authors found a pattern to discriminate a good reasoning path and a bad one with the saliency scores. Based on the findings, this paper proposed a set of instance-adaptive prompting approaches for zero-shot CoT reasoning. Experimental results on various tasks with distinct LLMs demonstrated the effectiveness of the proposed methods, validating the correctness of the findings.
Strengths: 1. This paper is well-written and easy to follow.
2. This paper emphasized the instance-level CoT prompting rather than the task-level, providing a novel and fine-grained research object for LLM reasoning, which had not been explored in earlier works.
3. This work employed an interesting information flow analysis to delve into the mechanism of zero-shot CoT during the LLM inference, encouraging relevant CoT research.
4. The authors presented two simple yet effective instance-adaptive zero-shot CoT prompting approaches based on elaborative analysis, and empirical results verified their observations.
Weaknesses: 1. Experiments only covered 7B and 13/14B models; involving larger models in the experiments would be more convincing.
2. In Section 4.3, the authors didn’t interpret the results of consistency and complementary results in detail.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Why did the authors choose the saliency score instead of the widely used attribution score as the information flow analysis method?
2. What did some results with underlines in Table 1 mean?
3. Should the colon after the “remains” in the line 44 be removed?
4. Should the “and” in the “, and Tracking Shuffled Objects …” in line 258 be removed?
5. In Table 3 in the Appendix, why didn’t the authors highlight the best results and explain the results more?
All my concerns have been addressed by the authors' rebuttal.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have adequately clarified the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for the time and effort you invested in providing the detailed reviews. Regarding the current weaknesses and questions you pointed out, we are glad to give our responses.
### **Response to Weaknesses**
**Weakness 1:**
Experiments only covered 7B and 13/14B models; involving larger models in the experiments would be more convincing.
**Response to Weakness 1:**
In this work, we conducted experiments on 8B, 13B, and 14B LLMs with different architectures (LLaMA-2, LLaMA-3, and Qwen), and the empirical results validated our initial observation and assumption. The current evaluation across model sizes has provided a broad view of how models at different scales may perform under various prompt candidates.
We conducted experiments with LLaMA-3 70B using the same 9 prompts as the main experiments together with IAP-mv; the table below shows the results:
| Prompt | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| #1 | 87.79 | 82.33 | 38.50 | 12.40 | 67.73 | 37.02 |
| #2 | 89.16 | 86.33 | 54.55 | 30.00 | 56.10 | 50.18 |
| #3 | 81.73 | 83.33 | 49.73 | 23.20 | 55.69 | 44.56 |
| #4 | 82.64 | 84.33 | 42.25 | 60.40 | 41.36 | 52.11 |
| #5 | 82.71 | 84.00 | 36.36 | 6.80 | 61.75 | 52.63 |
| #6 | 87.79 | 82.33 | 44.39 | 16.00 | 67.73 | 35.79 |
| #7 | 81.43 | 85.67 | 47.59 | 24.00 | 29.98 | 14.56 |
| #8 | 53.53 | 75.67 | 55.61 | 18.40 | 29.24 | 22.56 |
| #9 | 51.71 | 58.33 | 44.92 | 20.40 | 36.94 | 43.33 |
| IAP-mv | **89.84** | **87.33** | **56.20** | **62.00** | **69.04** | **54.39** |
The empirical results show that LLaMA-3 70B with IAP-mv outperforms every task-level hard prompt, demonstrating the effectiveness of IAP-mv at larger model scales.
**Weakness 2:**
In Section 4.3, the authors didn’t interpret the results of consistency and complementary results in detail.
**Response to Weakness 2:**
In the ablation studies, we conducted the consistency and complementarity experiments to determine which type of prompt combination contributed to the performance. Table 2 shows the results. Referring to the main results in Table 1, we can observe that each pair of combinations improves the performance, and that the instructive-and-irrelevant combination achieves better outcomes than the others, which comes from the strong base performance of the instructive prompts.
### **Response to Questions**
**Question 1:**
Why did the authors choose the saliency score instead of the widely used attribution score as the information flow analysis method?
**Response to Question 1:**
There are two reasons we did not employ the attribution score: (1) the attribution score is designed for knowledge-oriented tasks, as discussed in the related work section; (2) the saliency score is more suitable for the in-context learning scenario, whereas the attribution score may not perform well on reasoning tasks. These two points have been demonstrated by [1][2][3][4].
[1] Dai, D., L. Dong, Y. Hao, et al. Knowledge neurons in pretrained transformers.
[2] Hao, Y., L. Dong, F. Wei, et al. Self-attention attribution: Interpreting information interactions inside transformer.
[3] Wang, L., L. Li, D. Dai, et al. Label words are anchors: An information flow perspective for understanding in-context learning.
[4] Li, J., P. Cao, C. Wang, et al. Focus on your question! interpreting and mitigating toxic cot problems in commonsense reasoning.
**Question 2:**
What did some results with underlines in Table 1 mean?
**Response to Question 2:**
In Table 1, the underlined numbers indicate the task-level optimal prompts among all candidates. We are sorry for not elaborating on this, and we have explained it in the new version.
**Question 3:**
Should the colon after the “remains” in line 44 be removed?
**Response to Question 3:**
Thank you for your detailed review, we have removed the colon after the “remains” in line 44 in the updated manuscript.
**Question 4:**
Should the “and” in the “, and Tracking Shuffled Objects …” in line 258 be removed?
**Response to Question 4:**
We have removed the “and” in the “, and Tracking Shuffled Objects …”, and we have checked all the grammar issues in the new manuscript.
**Question 5:**
In Table 3 in the Appendix, why didn’t the authors highlight the best results and explain the results more?
**Response to Question 5:**
Table 3 in the Appendix supplements Table 1, and IAP shows similar performance on LLaMA-2 13B as on LLaMA-3 8B and Qwen 14B, so we did not repeat similar explanations. We have now highlighted the best results, as in Table 1, in the updated manuscript.
For some common questions, we made a unified reply in the Author Rebuttal part (see at top) which you can look up.
---
Rebuttal Comment 1.1:
Title: Response to the authors' rebuttal
Comment: Thank you for the authors' responses. I have read the rebuttal and all my concerns have been addressed. This paper analyzed the inner mechanism of zero-shot CoT and proposed a novel instance-adaptive prompting strategy. I also reviewed the comments from the other reviewers and believe this is quite a solid work. I will raise the score from 7 to 8.
---
Reply to Comment 1.1.1:
Comment: Thanks a lot for your valuable reply, we appreciate the time and effort you invested, and we are so glad to know that your concerns have been addressed.
---
Rebuttal 2:
Comment: Thank you for your review,
The authors responded to your initial review. Please be sure to read it and reply indicating the extent to which the authors have addressed your initial questions and concerns.
Best,
AC | Summary: The authors argue that a single, task-level prompt is insufficient for addressing the diverse needs of different instances within a dataset. To overcome this limitation, they propose an instance-adaptive prompting (IAP) algorithm that differentiates between effective and ineffective prompts for individual instances. The authors also provide a detailed examination of the information flow at different layers and heads of the LLMs, offering insights into the internal mechanisms that contribute to reasoning quality. The proposed IAP strategy is shown to be effective across multiple models and tasks, highlighting its potential for advancing zero-shot reasoning in LLMs.
Strengths: 1. The IAP algorithm is a creative solution that addresses the limitations of previous prompt strategies. The originality lies in the adaptive differentiation of prompts for individual instances and the use of information flow analysis to understand and enhance reasoning mechanisms within LLMs.
2. The quality of the experimental design is high, with comprehensive testing across different models and reasoning tasks.
Weaknesses: 1. As the number of available prompts grows, the strategy for selecting the best prompt could become increasingly complex.
2. The titles of some figures are verbose (e.g. Figure 2). Keeping the title concise and giving a detailed explanation of the figures in the main text will make the paper better.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How were the saliency score thresholds determined, how do these thresholds affect the results?
2. In some paragraphs, full stops are missing, for example, at the end of the "Experiments" section and at the end of the first paragraph in the "Preliminary analysis" section.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: it appears that the authors have made an effort to address the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the time and effort you invested in reviewing our paper, and we are glad to answer your questions.
### **Responses to Weaknesses**
**Weakness 1:**
As the number of available prompts grows, the strategy for selecting the best prompt could become increasingly complex.
**Response to Weakness 1:**
For IAP-ss, we obtained threshold values for distinct LLMs on different training sets: we computed the overall synthesized scores (defined in eq. (4) in Section 3) to separate the good and bad reasoning paths and adopted the thresholds that classify reasoning well. For example, the threshold of LLaMA-3 8B on GSM8K is 5.5e-6; the procedure for identifying thresholds is the same across LLMs and datasets, and it is simple and does not need much time. In practice, we considered reasoning with a value higher than the threshold as good, otherwise bad. We have tried different thresholds, and the best performance is shown in the following table.
| threshold | accuracy |
| --- | --- |
| 7.0e-6 | 59.82 |
| 6.0e-6 | 62.77 |
| 5.0e-6 | 64.67 |
| 4.0e-6 | 62.40 |
| 5.5e-6 | **65.36** |
We can see that an improper threshold, whether higher or lower, can hurt the performance of IAP-ss. This is because higher thresholds tend to recognize some good reasoning instances as bad ones, while lower thresholds may overlook some bad reasoning.
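As a concrete illustration of the thresholding rule described above, here is a minimal Python sketch (our own, not the authors' code); the helper name and score values are hypothetical, and only the 5.5e-6 threshold comes from the rebuttal.

```python
# Illustrative sketch of the IAP-ss rule: reasoning whose overall synthesized
# saliency score exceeds the threshold is treated as "good", otherwise "bad".
THRESHOLD = 5.5e-6  # value reported for LLaMA-3 8B on GSM8K

def classify_reasoning(scores, threshold=THRESHOLD):
    """Label each synthesized saliency score as good (True) or bad (False)."""
    return [s > threshold for s in scores]

scores = [7.2e-6, 3.1e-6, 5.6e-6]  # hypothetical scores for three instances
print(classify_reasoning(scores))  # [True, False, True]
```

As the accuracy table suggests, the usefulness of this rule hinges entirely on picking the threshold that separates the two score populations on the training set.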
As we discussed in the last paragraphs of Sections 3 and 4, IAP-mv can achieve better performance with more computation, while IAP-ss can be a choice in resource-limited scenarios, and it would be faster if the prompt candidates were arranged in a proper order. We conducted timing experiments with LLaMA-3 8B on SVAMP under different orders of prompt candidates; the results are shown in the table below (the numbers in the order column index the prompt candidates introduced in the "Zero-shot CoT prompts" paragraph of Section 4.1):
| order | accuracy | time |
| --- | --- | --- |
| #9 | 39.67 | 2860s |
| #6 | 76.00 | **2657s** |
| #9, 8, 5, 4, 3 | 63.66 | 3870s |
| #6, 1, 2, 7, 3 | 76.66 | 5216s |
| #1, 2, 3, 4, 5, 6, 7, 8, 9 | **77.33** | 4751s |
Here, #9 is the worst task-level prompt and #6 is the best, achieving the highest accuracy among all the prompt candidates while consuming the least time. The prompt order in the 3rd row is accuracy-decreasing on SVAMP, the 4th row is accuracy-increasing, and the last row is our default setting, which obtains the best performance. This table shows that IAP-ss can cost less time with fewer prompt candidates but may obtain limited results; however, even a few improper candidates can take a lot of computing time. Therefore, we think the time cost of IAP-ss is not a major issue if prompt candidates are in an appropriate order.
**Weakness 2:**
The titles of some figures are verbose (e.g. Figure 2).
**Response to Weakness 2:**
Thanks for your meticulous review and suggestions; we have moved the verbose explanations from the figure captions to the main body in the new version. The explanations of Figures 2 and 4 are now as follows:
> (Figure 2) The visualization comparison of the saliency matrices between good and bad reasoning instances with two prompts, where a darker pixel represents a larger saliency score. (a) and (b) are good and bad reasoning instances under "Let's think step by step.", and likewise (c) and (d) under "Don't think. Just feel.", respectively. The red, blue, and green boxes in each subfigure depict the question-to-prompt, question-to-rationale, and prompt-to-rationale information flow, respectively.
> (Figure 4) Saliency scores of question-to-prompt, question-to-rationale, and prompt-to-rationale across layers. The yellow lines represent prompts that effectively guide the LLMs to generate the correct answer, indicating good prompts. Conversely, the blue lines denote ineffective prompts.
And we have moved the previous definitions of the saliency matrices to the main body in the new version to make the presentation more clear.
### **Response to Questions**
**Question 1:**
How were the saliency score thresholds determined, and how do these thresholds affect the results?
**Response to Question 1:**
For IAP-ss, we obtained threshold values for distinct LLMs on different training sets: we computed the overall synthesized scores (defined in eq. (4) in Section 3) to separate the good and bad reasoning paths and adopted the thresholds that classify reasoning well. In practice, we considered reasoning with a value higher than the threshold as good, otherwise bad. From this, it is evident that an improper threshold may lead to inaccurate judgment of good and bad reasoning, thereby hurting overall performance. The threshold of LLaMA-3 8B on GSM8K we adopted is 5.5e-6; we also tried other thresholds while keeping the other settings fixed, and the results are shown in the table below.
Table caption: LLaMA-3 8B on GSM8K under different thresholds (selected on a validation set); the experiment can be repeated on other LLMs and datasets.
| threshold | accuracy |
| --- | --- |
| 7.0e-6 | 59.82 |
| 6.0e-6 | 62.77 |
| 5.0e-6 | 64.67 |
| 4.0e-6 | 62.40 |
| 5.5e-6 | **65.36** |
We can see that an improper threshold, whether higher or lower, can hurt the performance of IAP-ss. This is because higher thresholds tend to recognize some good reasoning instances as bad ones, while lower thresholds may overlook some bad reasoning. Therefore, a proper threshold benefits IAP-ss considerably, and vice versa.
**Question 2:**
In some paragraphs, full stops are missing.
**Response to Question 2:**
We appreciate your careful review of our paper; we have fixed all the missing full stops in the new version and checked the entire manuscript to avoid similar issues.
For some common questions, we made a unified reply in the Author Rebuttal section (see top), which you can refer to.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Thanks to the authors for resolving my concerns. I will change my score from 6 to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable reply, we appreciate your effort and time, and we are so glad to hear that your concerns have been addressed.
---
Rebuttal 2:
Comment: Thank you for your review,
The authors responded to your initial review. Please be sure to read it and reply indicating the extent to which the authors have addressed your initial questions and concerns.
Best,
AC | Rebuttal 1:
Rebuttal: We want to thank each reviewer for your thoughtful reviews and constructive feedback on our manuscript. We appreciate the time you invested in evaluating our submission, and we are grateful for your detailed suggestions and recommendations. We have carefully considered each of your comments and have made corresponding revisions to our manuscript in response. We noticed that some reviewers raised similar questions, so we consolidated the responses first.
### **Experiment Details**
For IAP-ss, we obtained threshold values for distinct LLMs on different training sets: we computed the overall synthesized scores (defined in eq. (4) in Section 3) to separate the good and bad reasoning paths and adopted the thresholds that classify reasoning well. For example, the threshold of LLaMA-3 8B on GSM8K is 5.5e-6; the procedure for identifying thresholds is the same across LLMs and datasets, and it is simple and does not need much time. In practice, we considered reasoning with a value higher than the threshold as good, otherwise bad. We have tried different thresholds, and the best performance is shown in the following table.
| threshold | accuracy |
| --- | --- |
| 7.0e-6 | 59.82 |
| 6.0e-6 | 62.77 |
| 5.0e-6 | 64.67 |
| 4.0e-6 | 62.40 |
| 5.5e-6 | **65.36** |
We can see that an improper threshold, whether higher or lower, can hurt the performance of IAP-ss. This is because higher thresholds tend to recognize some good reasoning instances as bad ones, while lower thresholds may overlook some bad reasoning.
As for IAP-mv, there is no need to compute thresholds for each LLM and dataset; instead, we compute the overall synthesized saliency scores for all prompt candidates and recognize good and bad reasoning via the top-3 highest scores. **The IAP-mv is different from the direct answers majority vote strategy.** Specifically, IAP-mv finds the top-k (k=3) highest scores among all prompt candidates based on the analysis of information flow, through which it can recognize the correct answer amid good and bad reasoning. We appreciate the reviewers' suggestion of supplementing an answers-majority-vote experiment; we compared our IAP-mv against the direct answers majority vote as follows:
| Method | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| Majority Vote. (Qwen 14B) | 28.22 | 51.33 | 26.10 | 1.44 | 46.52 | 52.46 |
| IAP-mv (Qwen 14B) | **62.81 (+34.59)** | **73.33 (+22.00)** | **29.95 (+3.85)** | **25.60 (+24.16)** | **65.68 (+19.16)** | **78.95 (+26.49)** |
| Majority Vote. (LLaMA-3 8B) | 52.54 | 74.33 | 17.06 | 12.60 | 62.41 | 52.53 |
| IAP-mv (LLaMA-3 8B) | **66.34 (+13.80)** | **77.33 (+3.00)** | **19.25 (+2.19)** | **42.40 (+29.80)** | **68.39 (+5.98)** | **59.65 (+7.12)** |
The results show that the majority vote approach performs poorly and that our IAP-mv outperforms it by a large margin, demonstrating that most prompts lead the LLM to generate wrong answers for a given question, with only a few reaching correct answers; i.e., such methods cannot recognize good/bad reasoning. Our IAP-mv handles this through the analysis of information flow in reasoning, i.e., IAP-mv can differentiate good and bad reasoning, validating the effectiveness of our proposed strategy.
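To make the contrast concrete, here is a minimal sketch (hypothetical names, scores, and answers; not the authors' implementation) of the top-3-by-saliency vote described above, as opposed to a plain majority vote over all prompts:

```python
from collections import Counter

def iap_mv(candidates, k=3):
    """candidates: list of (synthesized_score, answer) pairs, one per prompt.
    Keep the k highest-scoring prompts, then majority-vote over their answers."""
    top = sorted(candidates, key=lambda sa: sa[0], reverse=True)[:k]
    votes = Counter(answer for _, answer in top)
    return votes.most_common(1)[0][0]

# Illustrative scores: the two highest-saliency prompts agree on "42",
# so the low-saliency prompt answering "7" is filtered out before voting.
candidates = [(6.1e-6, "42"), (5.9e-6, "42"), (3.0e-6, "7"), (5.7e-6, "41")]
print(iap_mv(candidates))  # "42"
```

A direct majority vote over all four answers could be swayed by the low-saliency prompts; the saliency filter is what lets the vote ignore likely-bad reasoning.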
### **Explanation for Ablation Study**
We apologize for not providing an elaborate explanation of the consistency and complementarity experiments; we have added the following discussion to the 1st paragraph of Section 4.3 in our new manuscript:
> We can observe that the instructive group outperforms our other groups, which stems from the instructive prompts serving as the base. Furthermore, combining the instructive group with either of the other two continues to improve the performance of both, demonstrating that complementarity is critical for IAP-mv, and that IAP-mv can take advantage of these complementary prompts.
### **Larger LLMs**
We have conducted experiments with several LLMs of 8B, 13B, and 14B in different architectures, shown and discussed in the original manuscript. To broaden the generalizability of our method, we further implemented experiments with LLaMA-3 70B on different datasets, results are shown in the table below:
| Prompt | GSM8K | SVAMP | C-Judge. | T-Obj. | CSQA | MMLU |
| ------ | ----- | ----- | ------- | ------ | ---- | ---- |
| #1 | 87.79 | 82.33 | 38.50 | 12.40 | 67.73 | 37.02 |
| #2 | 89.16 | 86.33 | 54.55 | 30.00 | 56.10 | 50.18 |
| #3 | 81.73 | 83.33 | 49.73 | 23.20 | 55.69 | 44.56 |
| #4 | 82.64 | 84.33 | 42.25 | 60.40 | 41.36 | 52.11 |
| #5 | 82.71 | 84.00 | 36.36 | 6.80 | 61.75 | 52.63 |
| #6 | 87.79 | 82.33 | 44.39 | 16.00 | 67.73 | 35.79 |
| #7 | 81.43 | 85.67 | 47.59 | 24.00 | 29.98 | 14.56 |
| #8 | 53.53 | 75.67 | 55.61 | 18.40 | 29.24 | 22.56 |
| #9 | 51.71 | 58.33 | 44.92 | 20.40 | 36.94 | 43.33 |
| IAP-mv | **89.84** | **87.33** | **56.20** | **62.00** | **69.04** | **54.39** |
The empirical results of the LLaMA-3 70B with IAP-mv are better than any task-level hard prompt, demonstrating the IAP-mv's effectiveness under broader model scales.
### **Grammar Error**
We have checked and corrected the grammar errors in the original paper and updated the manuscript to avoid such issues, making the presentation clearer.
Pdf: /pdf/bc9b4ea29f3aa881202c6ca756fbd2dc003eb4d7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimal Scalarizations for Sublinear Hypervolume Regret | Accept (poster) | Summary: This abstract presents a study on non-linear scalarization techniques for multi-objective optimization, specifically focusing on hypervolume scalarizations with random weights. The authors prove that this approach achieves optimal sublinear hypervolume regret bounds and apply it to multiobjective stochastic linear bandits, deriving improved regret bounds. Their theoretical findings are supported by empirical results demonstrating superior performance of non-linear scalarizations compared to linear alternatives and other standard multiobjective algorithms in various settings.
Strengths: The paper provides both theoretical and empirical analyses of hypervolume scalarization performance.
Weaknesses: First of all, most lemmas are from the reference [1]. The authors should primarily focus on highlighting differences or contributions compared to this reference. Given that the key scalarization method was proposed in [1], simply conducting experiments on synthetic data adds little value, as this was likely already verified in the original proposal.
Besides, the applicability of hypervolume regret to the multi-armed bandit setting is questionable, especially in scenarios with finite action sets. The identification of the optimal arm in such cases needs to be carefully defined.
[1]. Zhang, Richard, and Daniel Golovin. "Random hypervolume scalarizations for provable multi-objective black box optimization." International conference on machine learning. PMLR, 2020.
Technical Quality: 2
Clarity: 2
Questions for Authors: See the weakness.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors need to clarify the contribution first.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Novelty of our paper:** Our paper is focused on describing how fast scalarizations can approximate the Pareto frontier under finite samples, even with *perfect knowledge of the Pareto frontier (whitebox setting)*, and is the first of its kind (see Theorem 7). Specifically, we introduce the notion of the hypervolume regret convergence rate, which is a function of both the scalarization and the weight distribution and is a worst-case guarantee for any frontier. This is complementary to the results of Golovin & Zhang, which showed that the expected hypervolume regret bound is $T^{-1/2}$ under the assumption that the scalarizations, with no generalization error, already cover the whole Pareto frontier in expectation. Note that the bounds in Golovin & Zhang do not even include the $T^{-1/k}$ rate, which we have shown to be tight. In addition to this main contribution, we also include a linear bandits section that showcases the improved non-Euclidean analysis of the hypervolume regret; the algorithm is primarily introduced as a tool to demonstrate the utility of the hypervolume scalarization in theory and later in experiments. We will make this clearer in the contributions section of the introduction.
**Borrowing Previous Work:** We strongly disagree with the assessment that our work is borrowed from the previous work of Golovin and Zhang. Note that only 1 of our results is borrowed from (and attributed to) the previous work (Lemma 5), while all other results are novel, especially Theorem 7 in the whitebox setting. Furthermore, the experiments on synthetic data were not done before, as the notion of hypervolume convergence rate (in the whitebox setting) was not introduced in the previous paper, which focuses on blackbox optimization and derives a regret bound rate of $T^{-1/2}$.
As mentioned in the introduction, we emphasize that our derived regret rate of the Hypervolume scalarization holds regardless of the multi-objective function or the underlying optimization algorithm; furthermore, we believe these agnostic rates can be a general theoretical tool to compare and analyze the effectiveness of proposed scalarizers (see our experiments on synthetic Pareto frontiers). We will make this distinction clearer.
**Multi-armed Bandit:** Lastly, we do not consider the multi-armed bandit setting in our paper; we focus only on continuous action sets.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses and clarification. I changed my rating. | Summary: This paper shows that the hypervolume scalarization has sublinear hypervolume regret bounds of $O(T^{-1/k})$, and further proves a lower bound of hypervolume regret of $\Omega(T^{-1/k})$. An optimization algorithm for multiobjective linear bandits is proposed. An empirical study is conducted to verify the effectiveness of hypervolume scalarizations.
Strengths: * Although this paper is technical, it is well-organized and easy to follow.
* The proposed theoretical results are interesting and non-trivial.
Weaknesses: * This paper has a significant overlap with a previous paper (Golovin and Zhang, 2020), which proposed hypervolume scalarization and its hypervolume regret. The main focus of these two papers seems similar, so I suggest the authors summarize the differences and new contributions.
* The authors claim that the concept of the so-called "hypervolume scalarization" originated from (Golovin and Zhang, 2020). However, similar scalarization methods were already proposed in (Qi et al., 2014) and have been widely used in the field of multi-objective optimization (Li et al., 2016).
* The authors regard hypervolume as the gold standard, but I have reservations about this. Hypervolume has two main drawbacks: (1) the hypervolume optimal distribution may not cover the entire Pareto Front, and (2) HV's behavior is highly dependent on the choice of the reference point. Therefore, in practical applications, maximizing HV does not necessarily yield ideal results. Additionally, the paper assumes reference points multiple times, such as in Theorem 8 where it is assumed that $z=0$. Given the characteristics of HV, I would question whether such assumptions may limit the generalizability of these theorems.
References
Zhang, R., & Golovin, D. (2020). Random hypervolume scalarizations for provable multi-objective black box optimization. In ICML (pp. 11096-11105).
Qi, Y., Ma, X., Liu, F., Jiao, L., Sun, J., & Wu, J. (2014). MOEA/D with adaptive weight adjustment. Evolutionary Computation, 22(2), 231-264.
Li, H., Zhang, Q., & Deng, J. (2016). Biased multiobjective optimization and decomposition algorithm. IEEE Transactions on Cybernetics, 47(1), 52-66.
Technical Quality: 3
Clarity: 3
Questions for Authors: P2, L74. "maximize the Pareto front"?
P3, L130. Typo: "biojective".
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful review.
**Novelty of our paper:** Our paper is focused on describing how fast scalarizations can approximate the Pareto frontier under finite samples, even with *perfect knowledge of the Pareto frontier (whitebox setting)*, and is the first of its kind (see Theorem 7). Specifically, we introduce the notion of the hypervolume regret convergence rate, which is a function of both the scalarization and the weight distribution and is a worst-case guarantee for any frontier. This is complementary to the results of Golovin & Zhang, which showed that the expected hypervolume regret bound is $T^{-1/2}$ under the assumption that the scalarizations, with no generalization error, already cover the whole Pareto frontier in expectation. Note that the bounds in Golovin & Zhang do not even include the $T^{-1/k}$ rate, which we have shown to be tight. We will make this clearer in the contributions section of the introduction.
As mentioned in the introduction, we emphasize that our derived regret rate for the hypervolume scalarization holds regardless of the multi-objective function or the underlying optimization algorithm; furthermore, we believe these agnostic rates can be a general theoretical tool to compare and analyze the effectiveness of proposed scalarizers (see our experiments on synthetic Pareto frontiers). In addition to this main contribution, we also include a linear bandits section that showcases the improved non-Euclidean analysis of the hypervolume regret; the algorithm is primarily introduced as a tool to demonstrate the utility of the hypervolume scalarization in theory and later in experiments. We will make these contributions clearer in the paper.
**Scalarization Origination:** The hypervolume scalarization origination is indeed difficult to pinpoint but we mainly cite Golovin and Zhang as it contains theoretical analysis of the scalarization, specifically its relation to hypervolume, which is novel to our knowledge. We recognize in our paper that “the classical Chebyshev scalarization has been shown to be effective in many settings, such as blackbox optimization”, and acknowledge the relationship between the two: the Chebyshev scalarization with an appropriate “inverse” weight distribution enjoys the same convergence rate as the hypervolume scalarization. We will add more discussion and the recommended citations to the other similar scalarization methods.
**Hypervolume Standard:** The hypervolume as a standard metric is classically used because it does cover the full Pareto frontier, as the hypervolume metric has strict Pareto compliance meaning that if a set $A \subseteq B \subset R^k$, then the hypervolume of $B$ is greater than that of $A$ [see Golovin & Zhang]. We will clarify this in our final version.
**Reference Point:** We acknowledge that the choice of the reference point can be a bit tricky in practice, but many packages use reasonable heuristics, and we find that our experiments are not sensitive to this choice [see Daulton et al. 2020]. In theory, our assumptions on the reference point hold without loss of generality, and our hypervolume convergence bounds hold for ANY choice of reference point. Therefore, for Theorem 8, we only need to find a specific counterexample to establish a lower bound, so we choose $z = 0$; the generality is in fact not affected, as the lower bound can be extended to any reference point by shifting the outputs. We will clarify the role of the reference point more in the final version.
We fixed the typos, thank you, and hope that our clarifications have improved the reviewer's confidence about the quality of our work.
Refs:
Daulton et. al 2020. Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I am satisfied with it, and I have increased my rating. Regarding your response on HV, I think HV covers the full PF only under certain assumptions: first, that the reference point is set appropriately; second, that there are infinitely many solutions.
---
Reply to Comment 1.1.1:
Comment: Thank you for the update! We agree that the reference point needs to be set sanely (a common heuristic sets the reference to min - 0.1 * range), but for the second point, we are not sure the number of solutions matters. Happy to entertain a longer discussion if there's still concern on the latter.
The authors propose simple non-linear scalarizations that effectively explore the Pareto frontier and achieve optimal sublinear hypervolume regret bound.
The main contribution of this paper is to show that hypervolume scalarizations with uniformly random weights achieves a sublinear hypervolume regret bound of order $O(T^{-1/k})$ together with a matching lower bound.
For the special case of multiobjective linear bandits, the paper gives an algorithm that achieves $O(dT^{-1/2} + T^{-1/k})$ hypervolume regret.
The paper also empirically justifies the effectiveness of hypervolume scalarizations via synthetic, linear bandit, and blackbox optimization benchmarks.
--------------
I acknowledge that I have read the author's rebuttal.
Strengths: The multi-objective optimization task is very common and the technique scalarization seems to be very important (both practical and theoretically). The paper explores non-linear scalarization technique and gives sublinear regret bound.
Weaknesses: .
Technical Quality: 2
Clarity: 3
Questions for Authors: .
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: One minor suggestion to improve the readability of this paper is to give some concrete examples on scalarization, and in particular, for hyper volume regret.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful review.
**Scalarizations and Hypervolume Regret:** Our study considers both the scalarization and the weight distribution (see Figure 1 for visualization) and provides a regret guarantees in the worst case for all frontiers. We have included some novel synthetic experiments that gives a complete whitebox characterization of the Pareto frontiers and explicitly calculates the hypervolume regret given our optimization procedure using different scalarizations. We hope to make this more clear in our final draft with better visualization. | Summary: The paper addresses the challenge of exploring the Pareto frontier in multi-objective optimization problems, particularly focusing on minimizing hypervolume regret. Linear scalarizations are often inadequate as they fail to explore certain non-convex regions of the Pareto frontier. The authors propose using non-linear scalarization with weights chosen uniformly at random. Specifically, the scalarization function $s_\lambda(y) = \min_{i \in [k]} (y_i/\lambda_i)^k$, where $y$ is a $k$-dimensional vector and $\lambda$ is chosen uniformly at random. This achieves optimal sublinear hypervolume regret bounds of $O(T^{-1/k})$ with matching lower bounds. Furthermore, the authors show that hypervolume scalarization uniformly explores the Pareto frontier in terms of the angular direction, thereby covering the entire Pareto frontier. They further demonstrate the theoretical and empirical performance of these scalarizations across various optimization settings. For multi-objective linear bandits, the authors introduce a scalarized algorithm based on the UCB algorithm and prove that this algorithm achieves a regret bound of $O(d \sqrt{T} + T^{-1/k})$.
Strengths: The paper makes relevant contributions to multi-objective optimization literature. The setting itself is well justified, and the authors thoroughly compare their contributions to previous work. Although the scalarization functions are not novel, given the prior work, the results presented in this work make notable advances.
In particular, Lemma 5 offers good insight into why hypervolume scalarization is potentially a good strategy even for complicated optimization problems. The fact that hypervolume scalarization uniformly explores the Pareto frontier (in an angular sense) ensures that hypervolume regret is small. Moreover, while it is known that the expected value of the scalarization function (with uniform exploration) gives the hypervolume, this result is asymptotic, and Theorem 7 quantifies the actual convergence rate (with finite samples). Additionally, the lower bound in Theorem 8 establishes that the convergence rate is optimal.
Weaknesses: Please see the questions section.
Technical Quality: 3
Clarity: 2
Questions for Authors: The paper briefly discusses Chebyshev scalarization (which has a similar form except that $\lambda_i$ is in the numerator), mentioning that it enjoys similar advantages as hypervolume scalarization when dealing with a non-convex Pareto frontier. However, it is not clear how the hypervolume regret would scale for this function. The paper could benefit from a theoretical comparison with other scalarizations as well.
In the case of linear bandits, the UCB extension works for other scalarization functions (Chebyshev, linear) as seen in the algorithm and achieves sublinear regret as shown in the experiments. I wonder if the theoretical guarantees differ significantly for different scalarization functions. What exact properties of the scalarization function impact the regret?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: This paper does not have any negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful review.
**Comparison with Other Scalarizations:** The hypervolume regret convergence rate is a function of both the scalarization and the weight distribution and is a worst-case guarantee over all frontiers in the whitebox setting. Even the hypervolume scalarization, with a skewed weight distribution, stands no chance of achieving a sublinear hypervolume convergence rate. Therefore, we need to compare theoretical convergence rates using the scalarization together with the weight distribution.
As you stated, we have mentioned that the Chebyshev scalarization with an appropriate “inverse” weight distribution enjoys the same convergence rate as the hypervolume scalarization. But to your point, we hope to add a theoretical analysis of the worst case convergence rate of the Chebyshev scalarization with a uniform weight distribution, which can likely be done via a KL analysis of the weight distribution.
As for the linear scalarization, since hypervolume convergence is a worst case measure, note that any scalarization that uses a linear component will not be able to achieve sublinear hypervolume regret rates (regardless of weight distribution), as it will not be able to explore the whole Pareto frontier in adversarially designed concave settings.
**Scalarization on Regret:** The bandits section showcases the improved non-Euclidean analysis of the hypervolume regret; the algorithm is primarily introduced as a tool to demonstrate the utility of the hypervolume scalarization in theory and later in experiments. The theoretical guarantees for linear bandit regret bounds are based on the $\ell_p$ smoothness of the scalarization function, where $p$ is the norm chosen as an artifact of the analysis.
The bounds can differ significantly as the number of objectives $k$ increases: the hypervolume scalarization has an $\ell_\infty$ smoothness constant that is independent of $k$, whereas the smoothness constants of other scalarizations grow polynomially in $k$. In our experiments, we do find that as $k$ increases, the hypervolume scalarization gains a larger advantage over other scalarizations. We will make this clearer in the final draft.
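To make the scalarizations under discussion concrete, here is a minimal sketch (our own illustration using generic textbook forms; the exact definitions and function names in the paper may differ):

```python
import numpy as np

# Illustrative scalarizations for maximizing k objectives y with weights w > 0.
# These are generic forms for illustration, not the paper's exact definitions.

def linear_scalarization(y, w):
    # Weighted sum; misses concave regions of the Pareto frontier, which is
    # why it cannot yield sublinear hypervolume regret in the worst case.
    return float(np.dot(w, y))

def chebyshev_scalarization(y, w):
    # Weighted min; with a suitable weight distribution it can reach every
    # Pareto-optimal point, including concave regions.
    return float(np.min(w * np.asarray(y)))

def hypervolume_scalarization(y, w):
    # min_i (y_i / w_i)^k; averaging over random directions w recovers the
    # dominated hypervolume up to a constant factor.
    y = np.asarray(y, dtype=float)
    return float(np.min(y / w) ** len(y))
```

For example, on the point $y = (1, 2)$ with uniform weights, the linear form returns the mean while the Chebyshev form is pinned to the smallest weighted objective, which is what lets it explore concave frontiers.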
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have no further questions. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Compact Proofs of Model Performance via Mechanistic Interpretability | Accept (poster) | Summary: This paper makes progress towards the formal verification of model performance by using insights from mechanistic interpretability to reduce proof length. It presents a case study in a simplified setup: a single-layer, single-head attention-only transformer applied to a synthetic task of finding the maximum of four integers. The authors train a large number of models on this task and derive formal guarantees for their performance using different proof strategies that make use of different insights derived by mechanistically interpreting the model. The first strategy, with cubic complexity, uses the model's decomposition into direct and indirect paths, and the observation that the model makes minimal use of positional embeddings. They also consider different sub-cubic proof strategies that rely on different simplifications, such as the observation that the EQKE matrix is approximately rank 1, corresponding to the size of the key token. These strategies are compared to a brute-force solution in terms of proof length (estimated using the number of FLOPs) and the ratio of the certified bound (lower bound b divided by the expected value s). The authors observe a trade-off between proof length and the certified bound, noting that establishing tighter bounds is more computationally expensive. They also find that compact proofs are less faithful to model internals. Finally, the authors identify compounding noise as a challenge for proofs of global behaviour.
Strengths: - The paper demonstrates great ambition by working towards provable guarantees about global model behaviour. Developing techniques to provide such guarantees is of significant interest, especially for deploying these models in safety-critical environments.
- The authors employ a variety of methods to analyse the model and cleverly use mechanistic insights to reduce proof length. To the best of my knowledge, this is the first paper that attempts to verify global model behaviour using insights from reverse-engineering the neural network.
- The proposed measure to assess mechanistic understanding (degrees of freedom that have to be evaluated using brute-force) is interesting, as current evaluations of interpretability results often rely on proxy metrics.
Weaknesses: The results appear to be merely an interesting case study in a highly simplified setting. The proposed proof strategies rely on various unrealistic simplifications, including the use of attention-only transformers, and the learned algorithm for the task allows for further simplifications, such as ignoring positional embeddings. These simplifications raise doubts about the transferability of the proposed proof strategies to other settings. For example, the observation that compact proofs, which rely on various simplifying assumptions, are already less faithful in this synthetic setting suggests that obtaining compact proofs in more realistic settings would be incredibly challenging. The paper could have been significantly strengthened by either studying a more realistic setting or characterising fundamental methods to leverage mechanistic understanding for any task to improve upon brute-force search.
Minor Issues:
- The title on the paper (“Compact Proofs of Model Performance via Mechanistic Interpretability”) does not match the title in OpenReview (“Provable Guarantees for Model Performance via Mechanistic Interpretability”).
- Typo in line 95: “we denote the true maximum of the by t_{max}.”
- Line 162: one “the” too many: “… sequence x, [the] define the ….”
- Line 189: “consider” was probably meant to be “considered.”
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. You only compare sequence lengths of up to four because the brute-force solution is infeasible beyond that. Could your proposed proof strategies be used to make guarantees beyond this length? I believe that demonstrating that your approach allows us to provide guarantees for settings where brute-force solutions are (nearly) impossible to derive would greatly strengthen the results.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have properly discussed the limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback!
We have also rectified the typographical issues.
> You only compare sequence lengths of up to four because the brute-force solution is infeasible beyond that.
Could your proposed proof strategies be used to make guarantees beyond this length? I believe that demonstrating that your approach allows us to provide guarantees for settings where brute-force solutions are (nearly) impossible to derive would greatly strengthen the results.
Thanks for the suggestion, we agree that this is a serious concern.
We have generated new results on Max-of-5, Max-of-10, and Max-of-20 (holding parameters other than n\_ctx fixed).
We ran the cubic proof on each of these models, giving bounds of 94% (Max-of-5), 93% (Max-of-10), and 83% (Max-of-20) in under two minutes.
For comparison, the brute-force approach on Max-of-20 would take roughly $64^{20} \cdot 20 \cdot 64 \cdot 32 \approx 10^{40}$ FLOPs, whereas training GPT-3 took only about $10^{23}$ FLOPs.
We will add a section on the scalability of proof strategies to larger models, which includes these experiments as validation that our strategies result in lower complexity proofs.
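The brute-force FLOP estimate above can be sanity-checked in a few lines (the parameter names are ours, chosen to mirror the setting described in the rebuttal: vocabulary 64, context length 20, model width 32):

```python
import math

# Parameters mirroring the Max-of-20 setting (names are our own labels).
d_vocab, n_ctx, d_model = 64, 20, 32

# Brute force evaluates all d_vocab**n_ctx input sequences, each costing
# roughly n_ctx * d_vocab * d_model FLOPs to run through the model.
brute_force_flops = d_vocab**n_ctx * n_ctx * d_vocab * d_model

# Order of magnitude of the total cost.
print(f"~10^{int(math.log10(brute_force_flops))} FLOPs")  # ~10^40 FLOPs
```

This makes the comparison vivid: the brute-force certificate would cost roughly 17 orders of magnitude more compute than training GPT-3.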
> The results appear to be merely an interesting case study in a highly simplified setting.
The proposed proof strategies rely on various unrealistic simplifications, including the use of attention-only transformers, and the learned algorithm for the task allows for further simplifications, such as ignoring positional embeddings.
These simplifications raise doubts about the transferability of the proposed proof strategies to other settings.
For example, the observation that compact proofs, which rely on various simplifying assumptions, are already less faithful in this synthetic setting suggests that obtaining compact proofs in more realistic settings would be incredibly challenging.
Our goal in the project was to prototype verification of transformer models, lay out desiderata for verification (compactness of proof length for feasible cost, and tightness of the bound for guarantees to be materially useful), and study the key challenges to achieving these desiderata (faithfulness of the interpretation, compounding error terms).
Obstacles that arise in prototyping are likely to be even more challenging in complex, real-world settings.
This approach is standard in the field of mechanistic interpretability: starting in simplified settings and pushing for extensive exploration [1]. Simplifications of the model architecture in our work are comparable to state of the art analysis of toy models [2]. And the challenges of decomposing even small models adequately are discussed in various papers in the field [3] [4].
However, we agree that we could have taken the alternative approach of targeting the verification of a large model, which would have been an impressive result.
> The authors employ a variety of methods to analyse the model and cleverly use mechanistic insights to reduce proof length. To the best of my knowledge, this is the first paper that attempts to verify global model behaviour using insights from reverse-engineering the neural network.
We appreciate the highlights about our contributions! We want to add that to the best of our knowledge, we are the first to (1) verify the performance of a transformer-architecture model, (2) guarantee global performance bounds on specific models, and (3) incorporate mechanistic information to obtain shorter proofs. While we work in a simplified setting, we are excited about follow-up work building upon these milestones.
> The proposed measure to assess mechanistic understanding (degrees of freedom that have to be evaluated using brute-force) is interesting, as current evaluations of interpretability results often rely on proxy metrics.
We appreciate this highlight as well. Whereas current work relies upon human judgment, with ambitious projects pushing towards procedural arguments, by pushing mechanistic interpretability to formal proofs we were able to find that degrees of freedom remain even when we have decent informal interpretations.
Please don't hesitate to ask any further questions. We would love the chance to clarify our approach and provide details that might encourage you to increase your score!
[1]: Elhage, et al., "Toy Models of Superposition", Transformer Circuits Thread, 2022.
[2]: Elhage, et al., "A Mathematical Framework for Transformer Circuits", Transformer Circuits Thread, 2021.
[3]: Stander, D., Yu, Q., Fan, H. and Biderman, S., 2023. Grokking Group Multiplication with Cosets. *arXiv preprint arXiv:2312.06581*.
[4]: Chan, et al., "Causal Scrubbing: a method for rigorously testing interpretability hypotheses", AI Alignment Forum, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying the motivation of your study and conducting the experiments on larger input spaces. I appreciate the effort and believe that these results significantly strengthen the results of the paper. After reading the author rebuttals and other reviews, I have reconsidered my initial assessment. I now believe this paper would be an interesting contribution to the conference and will increase my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you! Please let us know if there are any other questions we can clarify. | Summary: The authors apply techniques from mechanistic interpretability to ``compact-ify'' the process required to prove model performance on the max-of-$k$ task, specifically where $k=4$. Through this procedure, the authors claim that (1) there is a positive relationship between mechanisitic understanding and the length of the proof required; (2) compact proofs provide better mechanistic understanding; (3) absence of known structure/interpretations make it difficult to compactify proofs in a non-vacuous manner.
Strengths: - The proposed approach demonstrates connections between two important yet distinct subfields of deep learning---interpretability and robustness.
- For this specific max-of-$k$ task, the authors reverse-engineer the circuitry and algorithm implemented by transformers to perform this task. To the best of this reviewer's knowledge, this has not been done before in existing works.
- The authors introduce neat tricks such as mean+diff, max row-diff, as well as a nice convexity approximation in the appendix to achieve compact, subcubic proofs.
- The paper provides some insight into how theorists may utilize tools from mechanistic interpretability which so far has been a purely empirical endeavor.
Weaknesses: - The task the authors choose seems very contrived even for mechanistic interpretability studies. I'm not sure why specifically the max-of-$k$ task was chosen or what its significance is in the formal verification literature, and the authors do not provide this context.
- The model the authors use is also very contrived (one-layer, one attention-head transformer). Even though the results are interesting, it is unclear to this reviewer how any of these methods would generalize as the task/model changes in both complexity and size, respectively. If in fact they are not meant to generalize, it is unclear to this reviewer how the insights of this paper fit into the larger conversation of robustness and interpretability.
- The authors claim that Fig. 4 shows how less faithful interpretations lead to tighter bounds. It is extremely unclear to me that there is a positive correlation between the Singular ratio and accuracy bound. The authors should perform a statistical test here based on the $R^2$ to demonstrate that there is indeed a significant correlation.
- The paper is difficult to read in the sense that there are undefined/inconsistent terms and notations. For example,
- Line 16, ''we will need compact proofs of global robustness.'' The notion of a ''compact proof'' is never explicitly defined. I am not sure if this is jargon from formal verification. Even if it is, it should have citations and some explanation for the audience.
- Line 55, ''we restrict $f$ to be the 0-1 loss so our theorems bound the accuracy of the model.'' In Eq. 1, we seek to lowerbound $f$. Based on my understanding of 0-1 loss, the model is penalized 1 for incorrect classification and 0 otherwise. Is this not the opposite of what we want?
- Line 87, Line 106, inconsistent bold-facing. What is the difference between $\mathbf{E}$ and $E$? Moreover, what does $\mathbf{E}_q$ mean?
- Table 1, in the expressions of the complexity cost, specifically for the rows ''Low-rank $QK$'', ''Low-rank $EU$'', and ''Low-rank $QK$ & $EU$'', are the terms (EU & OV), (QK & OV), (OV) annotations or a part of the mathematical expression?
- Line 501, citation deadlink.
- Fig. 6 ''Let $d = \sqrt{d}$''?
- Line 589 and beyond, notation $\mathbb{N}^{<n_{ctx}}$ is nonstandard. I think I understand from context, but it would be nice to clearly state what you mean by this.
- Algorithms 3 and 5 are very difficult to parse; consider breaking them up into multiple subroutines.
Technical Quality: 3
Clarity: 1
Questions for Authors: - Why did the authors choose to investigate a task like max-of-$k$ instead of modular addition (or IOI, colored-objects, etc.) where circuits for more general models have been discussed extensively in related literature?
- In lines 16-18, the authors emphasize that ``we need compact proofs of global robustness.'' Neither of these terms are rigorously defined in the paper nor is there reference to related literature. Since there are many working definitions of robustness what exactly do you mean by this? Based on Eq. 1, it does not seem that these guarantees fend against out-of-distribution or adversarial examples?
- In mechanistic interpretability, researchers generally choose a specific task, then reverse engineer the mechanisms of models that perform this task well to uncover the algorithms driving these models. The authors also follow this principle when reverse engineering the max-of-$k$ task in their toy transformer. Given that these mechanistic interpretability techniques require models with almost perfect performance a priori why is this not tautological with respect to their formal verification?
- How would this method generalize to different models that implement different circuits or algorithms on the same task? It seems that the inductive biases from the model used to compactify the proofs would not hold if a different model was using a different algorithm to complete the task (also perfectly).
- In line 73, why is pessimal ablation done by simply sampling from the data distribution instead of using carefully crafted counterfactuals that is common in mechanistic interpretability [1, 2, 3]
- Throughout the paper, the authors denote the residual from the rank-1 approximation of the EQKE matrix as ''unstructured noise.'' Could the authors elaborate on why they came to this conclusion? I understand in lines 1071-1077 the residuals follow ''pretty close'' to a normal distribution, did the authors run statistical tests to rigorize this claim? Moreover, why does this imply that the noise is unstructured?
- Why is the ratio of the largest to second-largest singular value a good metric for the faithfulness of the interpretation?
[1] Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. Circuit Component Reuse Across Tasks in Transformer Language Models. In The Twelfth International Conference on Learning Representations, September 2023. URL https://openreview.net/forum?id=fpoAYV6Wsk.
[2] Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. Progress measures for grokking via mechanistic interpretability. In The Eleventh International Conference on Learning Representations, September 2022. URL https://openreview.net/forum?id=9XFSbDPmdW.
[3] Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small. In The Eleventh International Conference on Learning Representations, September 2022. URL https://openreview.net/forum?id=NpsVSN6o4ul.
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: I am unconvinced that the procedure the authors propose can be generalized to different tasks (even of the same complexity). I think the large amount of manual quantitative/qualitative analysis and engineering required to derive the compact proofs for even this trivial task should be addressed in the limitations. Other than this, the limitations section is well-written.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the extremely detailed review!
We also appreciate the list of typographic errors, which we’ve corrected in a revised draft of the paper.
We’d like to start by defining what we mean by “compact proof of global robustness” by clarifying our definition on lines 48-69 of Section 2.
* By “compact proof”, we’re referring to a proof that is short.
Specifically, in our paper, we measure the length of proofs by considering the length of a trace of the computational component of the proof (as described in lines 56-61).
We agree that our use of the concept “compactness” could be made more explicit in the paper; we’ll add additional clarification.
As for why we want compact proofs: proofs that are too long are both infeasible to find and also infeasible to verify – hence our interest in using mechanistic understanding to generate shorter/more compact proofs.
* By “global robustness”, we mean that the model performs correctly in expectation over an input distribution $\mathcal D$ (lines 48-52).
This is in contrast to much of the prior work, which focuses on establishing correctness of model behavior for points close to a particular input $x$ (which we refer to as local robustness).
We agree that the use of the word “global robustness” in the introduction is confusing, because we use “global performance” to refer to the desired form of our verification target in the title, on line 49 in Section 2, and on line 294 and 301 in Section 6.
We’ll edit the paper to be more clear and consistent.
> Based on Eq. 1, it does not seem that these guarantees fend against out-of-distribution or adversarial examples?
In addition, we’d like to clarify the flexibility inherent in equation 1.
Normally, by OOD or adversarial inputs, we suppose that there is a distribution $\mathcal D_{\textrm{in}}$ that’s used for training and (in-distribution) validation, and another distribution $\mathcal D’$ that is the deployment distribution or generated by an adversary.
If we had knowledge of $\mathcal D’$, we could compute the expected performance from inputs sampled from $\mathcal D’$.
Even if we don’t have exact knowledge of $\mathcal D’$, we can still define a very broad distribution $\mathcal D$ that covers possible $\mathcal D’$s.
In our work, $\mathcal D$ is the distribution of all $64^4$ possible valid input sequences.
In addition, as our proofs partition $\mathcal D$ into subdistributions, and bound the performance on each subdistribution, we can bound the model’s performance on any possible distribution over valid input sequences.
We agree that we don’t adequately discuss the flexibility inherent in our definition; we’ll add clarification to sec 2 as a well as a section to the appendix discussing this in more detail.
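The claim that per-subdistribution bounds transfer to any distribution over valid inputs can be sketched in a few lines (a toy illustration in our own notation, not code from the paper):

```python
def bound_on_mixture(sub_bounds, weights):
    """If accuracy is at least sub_bounds[i] on subdistribution i, then on a
    mixture with the given weights, accuracy is at least the weighted sum."""
    assert abs(sum(weights) - 1.0) < 1e-9, "mixture weights must sum to 1"
    return sum(w * b for w, b in zip(weights, sub_bounds))

def worst_case_bound(sub_bounds):
    # A bound valid simultaneously for every possible mixture over the
    # subdistributions: the adversary puts all mass on the weakest one.
    return min(sub_bounds)
```

For instance, with per-subdistribution bounds of 0.9 and 0.8, an even mixture is guaranteed at least 0.85 accuracy, and any distribution whatsoever at least 0.8.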
----
We believe that much of the motivation for the work (and indeed the answers to your questions) follow from the goal of producing compact proofs.
> Why did the authors choose to investigate a task like max-of-$k$ instead of modular addition (or IOI, colored-objects, etc.) \[...\] ?
We picked a very simple task because formal verification is quite challenging, even for Max-of-$k$, and believe that Max-of-$k$ captures some of the core difficulties that we might expect to see.
But to address some of your specific examples:
1. **Modular addition:** In Nanda et al 2023, the authors don’t provide a mechanistic understanding of *how* the MLP layer performs multiplication – instead, they check this via brute force.
This lack of understanding prevents us from compactifying our proof of the model’s correctness.
In follow-up work, we extend our methodology to the modular addition task.
However, compactifying the proof required us to reverse engineer and provide a mechanistic description of how the MLP computes its output – which is, as far as we know, not previously known in the relevant literature.
1. **IOI/colored-objects/other small circuits in LMs:** One of the concerns with using these small circuits is that they tend to only be valid on a very restricted subdistribution.
For example, the original IOI paper considers a small set of sentences with names ordered ABBA, and the circuit metrics become invalid on as small a distributional shift as ordering the names ABAB[1].
If we chose to study these circuits, we’d run into a dilemma: either we restrict our distribution to be small enough to check feasibly with brute force (as the authors tend to do), or we’d have to perform additional interp that advances SOTA.
As our focus is on the formal verification side, and not on the novel interpretability side, we chose instead to pick a task where the mech interp is relatively straightforward.
> Given that these mechanistic interpretability techniques require models with almost perfect performance a priori why is this not tautological with respect to their formal verification?
We agree that on models where we can evaluate the behavior on *all* possible input-output pairs, we can indeed guarantee performance via the brute-force proof (lines 151-4).
We study such a case because it allows us to have “ground truth” on the performance of the model.
In general, however, mech interp is often studied with samples from a distribution – for example, in IOI, the mean ablation is computed with respect to a small handful of other sequences, and in modular addition the authors use sampling to evaluate the 5-digit addition, repeated subsequence, and skip trigram tasks.
In cases where the bad behavior is heavy tailed, sampling may not provide a viable estimate of performance – our hope is that understanding derived from studying the model with samples will allow us to construct formal guarantees that hold in the worst case.
We prototype this problem by providing formal guarantees on a max-of-20 transformer, where brute forcing the input distribution is infeasible and our understanding is derived via samples.
[1]: Miller et al. Transformer Circuit Faithfulness Metrics are not Robust.
---
Rebuttal 2:
Title: Rebuttal by Authors (cont.)
Comment: > How would this method generalize to different models that implement different circuits or algorithms on the same task?
We agree that guarantees require taking into account the specific model – this is part of the mechanistic understanding that interp should provide.
That being said, we believe that this specificity can be useful: for example, if a proof strategy derived for one model does not apply to a second model, this provides evidence that the two models are using different algorithms.
> In line 73, why is pessimal ablation done \[...with samples\]?
We apologize for the typo.
In line 73, “with values drawn from $\mathcal{D}$” should instead read “to values over $X$”.
That is, instead of sampling, which might not find the actual worst-case, we use the actual global worst-case values as the counterfactuals for ablation.
The counterfactuals commonly constructed in mechanistic interpretability are attempts to estimate the typical (/average-case) behavior of a component, which is insufficient for getting a worst-case guarantee on the overall behavior of the model.
We’ll edit the paper to fix the typo and further clarify this point.
> \[T\]he authors denote the residual from the rank-1 approximation of the EQKE matrix as ''unstructured noise.'' \[...\]
We apologize for our lack of clarity.
First, by “unstructured”, we mean that insofar as structure exists, it doesn’t help the model’s performance.
Second, “If we sample elements randomly” should read “If we replace the entries of $E_{q,2}^\perp$, $E_{k,2}^\perp$, $Q^\perp$, and $K^\perp$ with randomly sampled values”.
In order to validate our claim that the noise contains no helpful structure, we performed this replacement 100 times and compared the max-row-diff of these samples with the diff in the original model (lines 1074-7), and found that the diff of the original model is *worse* than the mean value by about 4σ.
(This is analogous to the standard resampling or mean ablations that mech interp work uses to check for lack of structure.)
> Why is the ratio of the largest to second-largest singular value a good metric for the faithfulness of the interpretation?
Our interpretation claims that $EQKE$ is approximately rank 1 (the size direction).
If the first singular value is much larger than all other singular values, this suggests that the matrix is well-approximated by a rank-1 approximation.
We’ll clarify this in the text.
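As a concrete illustration of this metric (on synthetic data, not the paper's actual EQKE matrices): for a matrix dominated by a single direction, $\sigma_1 \gg \sigma_2$, and by the Eckart-Young theorem $\sigma_2$ is exactly the spectral-norm error of the best rank-1 approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EQKE: a strong rank-1 component (analogous to the
# "size direction") plus small unstructured noise.
u = rng.normal(size=(64, 1))
v = rng.normal(size=(1, 64))
EQKE = 10.0 * (u @ v) + 0.1 * rng.normal(size=(64, 64))

s = np.linalg.svd(EQKE, compute_uv=False)
singular_ratio = s[0] / s[1]  # large ratio => well-approximated by rank 1
rank1_error = s[1]            # spectral-norm error of best rank-1 approximation
```

On this synthetic example the ratio is in the hundreds, signalling that a rank-1 interpretation of the matrix is faithful; a ratio near 1 would mean the rank-1 story discards a comparably large component.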
Responses to more minor points:
> Line 55, ''we restrict $f$ to be the 0-1 loss so our theorems bound the accuracy of the model.'' In Eq. 1, we seek to lowerbound $f$.
We had this terminology wrong, and indeed were intending to speak of 1 minus the 0-1 loss.
We’ll correct this, and thank you for pointing it out.
> Line 87, Line 106, inconsistent bold-facing.
What is difference between $\textbf{E}$ and $E$.
Moreover, what does $\mathbf{E}_q$ mean?
The boldfacing is significant and is defined on line 107:
$$\hat{\mathbf{P}} = P - \overline{\mathbf{P}} \quad \text{and} \quad \mathbf{x}\overline{\mathbf{E}} = \mathbf{x}E + \overline{\mathbf{P}} \quad \text{and} \quad \mathbf{x}\mathbf{E}_q = \mathbf{x}E + \mathbf{P}_q \quad (\text{since } h^{(0)} = \mathbf{x}\overline{\mathbf{E}} + \hat{\mathbf{P}}).$$
That is, we use the boldfaced versions of $\mathbf{E}$ to denote $E$ summed together with positional embeddings.
We’ll try to clarify this more in the text.
> The authors claim that Fig. 4 shows how less faithful interpretations lead to tighter bounds.
It is extremely unclear to me that there is a positive correlation between the Singular ratio and accuracy bound.
The authors should perform a statistical test here based on the $R^2$ to demonstrate that there is indeed a significant correlation.
We deeply apologize for not catching the bug in our code that led to an incorrect version of this figure being included in the original submission.
We’ve included a corrected version of this figure in the accompanying pdf.
From this figure, it is apparent that for the smallest values of $\sigma_1/\sigma_2$ (the three values below 350), this ratio is strongly correlated with proof bound.
For the “svd” proof strategy, where $\sqrt{2}\sigma_1(\mathrm{EQKE\_err})$ is used to bound the error term, a linear regression gives an $R^2$ of about 0.71.
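For reference, the $R^2$ we report is the standard coefficient of determination for an ordinary least-squares line fit; a minimal sketch (our own, with hypothetical data in place of the actual singular ratios and bounds):

```python
import numpy as np

def r_squared(x, y):
    # Coefficient of determination of the ordinary least-squares line fit.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

For the figure, `x` would be the singular ratios $\sigma_1/\sigma_2$ and `y` the proof-certified accuracy bounds.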
Please don't hesitate to ask any further questions! We appreciate the thoughtful engagement, and would love the chance to clarify results that might encourage you to increase your score.
---
Rebuttal Comment 2.1:
Comment: Thank you for your detailed clarifications. I have read both the authors' global response and the individual responses.
The authors adequately addressed my concerns relating to (1) the motivation of studying the max-of-$k$ task, (2) the typographical errors, and (3) OOD/adversarial distributions. However, I still have some further questions:
> In Nanda et al 2023, the authors don’t provide a mechanistic understanding of how the MLP layer performs multiplication – instead, they check this via brute force. This lack of understanding prevents us from compactifying our proof of the model’s correctness.
I do not really understand why this would prevent you from compactifying your proofs. Based on my understanding, understanding the underlying algorithm implemented by the model through mechanistic interpretability should only give more insight into the specific relaxations that can be made. In the case of modular arithmetic (specifically see [1]), as some algorithms implemented by the models are permutation invariant with respect to the input, one could potentially ignore those symmetries during the brute-force proof, shortening your proofs.
Why is it the case that the role of every component in the network needs to be known for its proof to be compactified?
> For example, the original IOI paper considers a small set of sentences with names ordered ABBA, and the circuit metrics become invalid on as small a distributional shift as ordering the names ABAB[1]. If we chose to study these circuits, we’d run into a dilemma: either we restrict our distribution to be small enough to check feasibly with brute force (as the authors tend to do)
Why would you need to restrict the distribution to be small enough to check with brute-force? Isn't the contribution of the paper the ability to achieve tight lower bounds without this type of brute-force verification?
> We prototype this problem by providing formal guarantees on a max-of-20 transformer,
where brute forcing the input distribution is infeasible and our understanding is derived via samples.
This is also mentioned in the global response. Where are the results for these experiments? Also, as $k$ or $d$ changes how are the authors verifying that the underlying circuit implemented by the model is the same, as this could affect the assumptions of all subsequent theorems/algorithms?
Looking forward to your clarifications.
[1] Zhiqian Zhong, Ziming Liu, Max Tegmark, and Jacob Andreas. The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks. In Advances in Neural Information Processing Systems, November 2023. URL https://arxiv.org/pdf/2306.17844.
---
Rebuttal 3:
Comment: Thank you for your time, effort, and engagement.
> Why is it the case that the role of every component in the network needs to be known for its proof to be compactified?
To be more precise: the role of every component must be known for the proof to be *asymptotically* compactified.
When measuring asymptotic behavior, it is standard to consider the leading order term(s) only. If the number of layers and attention heads is fixed, the number of parameters in a component is a constant fraction of (that is, linear in) the number of parameters of the overall network. Hence if we have to use the brute-force proof on any component, we will have an asymptotic cost equal to that for performing a brute-force analysis of the entire network.
As we are not able to compress the proof of a component's behavior in the absence of mechanistic understanding of the role and functioning of that component, the leading contribution to the asymptotic size will always be the components we don't understand at all.
Moreover, we want to clarify that compressing components using mechanistic understanding is how we obtain compact proofs with non-vacuous bounds *at all*, even if approximation errors from the compression can eventually tank the hoped-for bound. Without integrating mechanistic understanding, we are not able to compactify the brute-force proof.
> Where are the results for these experiments?
Sorry, “attached” was left-over from an earlier draft of the response, and we meant to say “included”: our results were included in the sentence “the cubic proof achieves bounds of 94% (Max-of-5), 93% (Max-of-10) and 83% (Max-of-20) in under two minutes”.
However, we can provide some additional results for Max-of-10: The equivalent graph to Fig. 3 for Max-of-10 has the following points:
- importance sampling (acc: 0.9988 ± 0.0013), $1.6 \cdot 2^{85}$ FLOPs for brute force
- cubic (rel acc: 0.915 ± 0.021), $1.4 \cdot 2^{27}$ FLOPs
- subcubic (rel acc: 0.734 ± 0.019) $(1.49 ± 0.10) \cdot 2^{22}$ FLOPs
- attention-$d_{\mathrm{vocab}}d_{\mathrm{model}}^2$ (rel acc: 0.718 ± 0.019), $(1.609 ± 0.090) \cdot 2^{22}$ FLOPs
- direct-quadratic (rel acc: 0.657 ± 0.032), $(1.392 ± 0.013) \cdot 2^{22}$ FLOPs
- attention-quadratic (rel acc: 0.437 ± 0.028), $(1.384 ± 0.013) \cdot 2^{22}$ FLOPs
- attention-quadratic, direct-quadratic (rel acc: 0.398 ± 0.028), $(1.318 ± 0.014) \cdot 2^{22}$ FLOPs
The equivalent graph to Fig. 4 has ratios $\sigma_1 / \sigma_2$ starting at around 400, and the best-fit line for the SVD proof approach has $\mathrm{bound} \approx 0.56 + 8.3 \cdot 10^{-5} \, (\sigma_1/\sigma_2)$, with $r^2 \approx 0.5$.
> Also, as $k$ or $d$ changes how are the authors verifying that the underlying circuit implemented by the model is the same, as this could affect the assumptions of all subsequent theorems/algorithms?
We note that:
1. We are working with proofs as computations (lines 56--59), i.e., the theorem statement does not exist independently of the proof; it does not presuppose a lower bound $b$ that we then prove.
2. Whatever bound $b$ we are able to compute is the theorem statement we have proven. Thus, the theorem template is valid regardless of what circuit is implemented by the model.
The faithfulness of the interpretation used in the proof mediates the bound (Section 5.2). In this way, the tightness of the bound becomes a metric for how faithful the interpretation is. Thus, the recovered bounds can be seen as a measurement of how well the interpretation for Max-of-4 generalized to the new models trained to compute Max-of-5, Max-of-10, and Max-of-20. We are clarifying this further in our writeup, and this is valuable feedback on which concepts from verification are worth highlighting.
Note that if the EVOU circuit did not perform copying, or if the model did not compute its answer by paying more attention to the correct token, the bound would be vacuous. We want to highlight that in the Fig. 4 equivalent for Max-of-10, the smallest ratio $\sigma_1/\sigma_2$ that we saw was over 400, which is evidence that the interpretation that EQKE is approximately low-rank remains valid. We are making this point more clear in a new section on applicability of results to other models.
---
Rebuttal Comment 3.1:
Comment: > Why would you need to restrict the distribution to be small enough to check with brute-force? Isn't the contribution of the paper the ability to achieve tight lower bounds without this type of brute-force verification?
We want to clarify that the paper's contribution is demonstrating how mechanistic understanding (aka compression of model internals) permits us to avoid brute force enumeration. Without such understanding, obtaining a more compact proof is not feasible.
Here are examples illustrating how existing circuits are developed on small distributions, and why extending these results to larger distributions would require additional mechanistic interpretation:
1. Names: The interpretation uses a small fixed dataset of names, lacking a broader interpretation to show that all tokens in specified positions would be parsed as names. For intuition: While we might expect a model to handle "John → Jon", it might struggle with "Mary → marry". Properly enumerating the set of names would require either proving that all elements in the dataset are equivalent, or resorting to brute force evaluation (which is the limiting behavior of their random sampling approach).
2. Patterns: The interpretation is restricted to the ABBA case. This means either limiting ourselves to the ABBA case similarly, or conducting additional interpretation to show how answers on the ABBA case apply to other cases like ABAB.
If we had taken on a project performing such interpretations, that would have been very cool! However, such further interpretation would be a separate contribution from the main objectives of our study.
Please let us know if you have any further questions or if this fails to clarify some question that you’ve asked.
---
Summary: The paper aims to generate formal guarantees certifying a model’s performance on the max-of-K task. Using brute force gives a bound close to the model's true performance, but the complexity cost is exponential in the size of the vocabulary, i.e., the proof is not compact. The paper then uses understanding derived from mechanistic interpretability to reduce the complexity cost while keeping the bound tight. For this, the authors first use the fact that the model is composed of OV and QK circuits and a direct path. This reduces the complexity to cubic in the vocabulary size, and the bound still remains tight. The authors then exploit a few observations from their setup, for instance the low rank of the EQKE matrices, to further reduce the complexity to sub-cubic. However, a sharp drop in the tightness of the bound is seen as these insights are used.
Strengths: 1) This work can motivate future work to generate more compact bounds on model performance. This is especially important in the context of alignment, where such bounds could help provide certificates of safe performance. Although this is a very preliminary study, it makes a contribution in this direction.
2) It is interesting to see how the authors were able to achieve better trade-offs between complexity and tightness of bounds on model performance by using insights from mechanistic interpretability. This work is very original in this regard.
Weaknesses: 1) The tightness of the bounds seems to degrade very quickly as additional insights from mechanistic interpretability are used. It thus seems doubtful whether future work will be able to use insights from mechanistic interpretability to derive tight bounds on model performance.
2) The authors analyze a very simple task, and the techniques used to decrease computational complexity are specifically tailored to it. It therefore remains unclear how insights from this work can be applied to other tasks. It would have been great if the authors had investigated a few other tasks that are simple (fundamental) but still very different in nature, e.g., counting, sorting, etc.
3) There isn't enough evidence to adequately support the noise hypothesis presented by the authors, i.e., that the noise in estimating rank-1 approximations of E, Q, K produces a larger-magnitude error term. Currently this seems speculative and needs more detailed investigation. It would therefore be great if the authors could artificially inject noise (of varying magnitude) into these matrices and observe its effect. Otherwise, the authors should not claim that this is the major cause of the looser bounds obtained when using insights from mechanistic interpretability.
Technical Quality: 3
Clarity: 2
Questions for Authors: I request the authors to kindly address the questions in the weaknesses section.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes, the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your review!
> The tightness of bounds seem to degrade very quickly as additional insights from mechanistic interpretability are used.
Thus it seems difficult if future works would be able to use insights from mechanistic interpretability to derive tight bounds on performance of the model.
We agree that the degrading tightness of bounds is an issue.
However, note that we evaluate proofs on the additional criteria of compactness of proof length, and using information about model internals is how we obtain shorter proofs than brute force.
Thus, some degree of mechanistic interpretation is necessary for finding compact proofs in future work.
We address this question in more detail in the top-level rebuttal.
One claim that we will add more clearly to the paper is that proof length can be used as an evaluation metric on the quality of mechanistic interpretation (see Section 5).
The interpretations in our different proofs all appear to be sound and use the same high-level abstractions.
However, by pushing them to rigorous formal proofs we get to see the various degrees of freedom that remain in the different interpretations.
Therefore, we expect that future work will need mechanistic interpretations to get short proofs, and can use the gains in proof reduction as a way of distinguishing what interpretation techniques to use.
> The author analyze a very simple task and techniques used by authors to decrease computational complexity are specifically tailored for this task.
It therefore remains unclear how insights from this work can be applied to any other tasks.
It would have been great if the authors would have attempted to investigate a few other tasks which are simple (fundamental) but still very different in nature, e.g., counting, sorting, etc.
Verification projects are quite intensive in human-labor, and our goal with this project was to prototype verification for transformer models, and focus on studying the core challenges to its viability.
Thus, we limited ourselves in the number of different tasks we worked on, and instead focused on deriving multiple proofs that varied the two core desiderata: compactness and bound tightness.
However, we want to highlight that it should be straightforward (if labor-intensive) to generalize our work to other toy models where the model computes the answer by paying attention to a single token or position, i.e., general element retrieval; example tasks include: first element, last element, binding an adjective to a key phrase, etc.
This is because our theorems in the appendices prove in full generality that any computation performed by a single attention head, when only one of the position and the token value matters, is extremized by pure sequences.
This means that the worst-case behavior of the computation at a single token can always be found by setting all other tokens to be identical, regardless of what the task is.
Furthermore, generalizing our convexity arguments to multiple attention heads *whose error terms can be treated independently without degrading the bound too much* is also straightforward.
We address the high-level version of this question in the top-level rebuttal where we talk about the general applicability of our approach to projects on larger input distributions, different tasks, more complex architectures, and larger models.
> There isn't enough evidence to adequately support the noise hypothesis presented by the authors i.e. the noise in estimating rank-1 approximations of E, Q, K results in larger magnitude in the error term.
Currently this seems based on speculation and needs more detailed investigation.
Therefore it would be great if the authors could try to artificially inject noise (of varying magnitude) in these matrices and see its effect.
Else, the authors should not claim this being the major cause for not so tight bounds obtained on using insights from mechanistic interpretability.
Thank you for pointing out our confusing use of terminology! By “noise” we meant error terms in our approximations of the model that we use to generate more compact proofs, not error terms that the model learned.
So, we are not referring to model weights that we can ablate to improve performance bounds of our proofs.
In the improved version of Table 1 (included in the accompanying pdf, and which will replace Table 1 in the paper), consider the difference in bound between “Low-rank QK” (0.806) and “Quadratic QK” (0.407).
The only change between these results is in the strategy for bounding the error that comes from approximating QK as low rank.
We provide a more detailed clarification in the top-level rebuttal.
Please don't hesitate to ask any further questions! We would love the chance to clarify our approach and provide further details that might encourage you to increase your score.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. Their response clarifies my questions, so I will increase my score.
---
Summary: This paper does a detailed and careful case study of the trade-offs and design space of formal verifications for meaningful lower bounds of the accuracy of a transformer model on a chosen task:
- Specifically, a one-layer, one-head, no-MLP, no-layernorm transformer is studied (akin to the model studied in detail in "A Mathematical Framework for Transformer Circuits", https://transformer-circuits.pub/2021/framework/index.html). A crucial difference from "A mathematical framework..." is that the model here also includes positional encodings.
- The task is to output the maximal element from a context of length 4, where tokens are synthetically generated to correspond to integers in a limited range, and the model is trained (successfully) solely on this task.
- Then, several verification approaches are considered:
- Brute force: this means running the model on every possible input and calculating the accuracy
- "Cubic": this proof uses some ingenious ideas to avoid iterating over all 4-tuples of possible inputs, and instead only over a particular set of triples $\Xi$. Specifically, they show that given an input $x=(t_1, t_2, t_3, t_4)$, then, *if* the model is 100% accurate on a certain subset of $\Xi$ derived from $x$, the model will also be correct on $x$.
- "Sub-cubic": even more ingenious/compact proof strategies are possible if one uses mechanistic knowledge of how the chosen model architecture tends to solve the given task. For instance, a major part of the algorithm seems to be: the queries and keys combine to put attention on the token representing the largest integer in the input, and the output and value matrix then combine to copy this to the last token position. By using this and other properties (e.g. that the important structure of the query/key/value/output matrices is low-rank), it becomes possible to shorten the proof further.
- The main result is that approaches using more mechanistic knowledge of the approximate algorithm implemented by the transformer require less computation for verification, but also lead to less faithful/tight bounds on the accuracy.
Strengths: - this paper really engages with the computations of a transformer on a very fundamental level, and presents some ingenious ways to combine high level and heuristic approaches from the mechanistic interpretability literature with a fully verified method of deriving correctness.
- This is to my knowledge the first work to carry out such a complete analysis. One may hope that in the (not too distant) future, similar methods can be scaled up via LLM automation to apply to more interesting, non-toy reasoning tasks, and deliver ground-truth insights about the structure and weaknesses of LLM computations.
- the paper is very clear and rigorous about what it achieves, and all claims are backed up by evidence and proof.
Weaknesses: - the setting is quite toy, though this is probably very hard to overcome for such an approach with human labor.
Technical Quality: 4
Clarity: 4
Questions for Authors: - this is very interesting work! You probably have a good intuition for the most promising/low-hanging future directions in which it can be taken - can you get into specifics about this in the conclusion?
- how can this method be carried over to more realistic tasks? If I understand correctly, much of the theory in the appendices relies on a careful analysis of the structure of this particular transformer model **and** the particular (arithmetic) task in question.
- you mention: "models with similar training procedure and performance may still have learned significantly different weights" - I believe a standard reference is "Breiman, Leo. Statistical modeling: The two cultures (with comments and a rejoinder by the author).", cf. "Rashomon"
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: - it is quite unclear how such results, based on careful analysis of the structure of the particular task, will carry over to more realistic tasks.
- similarly, it is unclear whether re-introducing transformer components, and especially MLP layers, won't obliterate the accuracy bounds derived here to be practically useless (e.g. below chance levels, which would be 25% here)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your kind review!
> how can this method be carried over to more realistic tasks?
It would be relatively straightforward to carry over the methodology to more realistic tasks if the following conditions were met:
1. A sufficiently detailed mechanistic understanding of the components of the model not being brute-force evaluated.
2. A model that is interpretable enough that the (not-understood) remainder of the model can be treated as independent error terms without blowing up the bound.
If these conditions were met, then the interpretation would broadly proceed as follows:
Every expression $x$ in the model with interpretation $y$ can be re-expressed as $x = y + (x - y)$.
The error terms $(x - y)$ must then be given worst-case bounds.
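For concreteness, this decompose-then-bound pattern can be sketched numerically. The sketch below is a generic illustration under assumed toy data, not the paper's actual bounding computation: a component matrix is approximated by its rank-1 SVD truncation (the "interpretation" $y$), the remainder is the error term $(x - y)$, and its worst-case effect over unit inputs is bounded by the operator norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# x: a toy component matrix that is mostly rank-1 plus unstructured residue.
u, v = rng.normal(size=(8, 1)), rng.normal(size=(1, 8))
x_mat = u @ v + 0.05 * rng.normal(size=(8, 8))

# y: the interpretation -- here, the rank-1 SVD truncation of x.
U, S, Vt = np.linalg.svd(x_mat)
y_mat = S[0] * np.outer(U[:, 0], Vt[0, :])
err = x_mat - y_mat  # the error term (x - y)

# Worst-case bound on the error's effect: |a^T err b| <= ||err||_2
# for unit vectors a, b (the operator norm of err).
worst_case = np.linalg.norm(err, 2)

# The empirical error on random unit inputs is typically far smaller,
# which illustrates why pessimizing over error terms loosens the bound.
a = rng.normal(size=(1000, 8))
a /= np.linalg.norm(a, axis=1, keepdims=True)
empirical = np.abs(np.einsum("ni,ij,nj->n", a, err, a)).max()
assert empirical <= worst_case
```

The gap between `empirical` and `worst_case` here mirrors the rebuttal's later point that worst-case estimates of composed error terms can far exceed what is observed empirically.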
As far as we can tell, most realistic models do not meet these conditions.
Consequently, in the project, we devote significant effort to getting reasonable bounds with minimal understanding (and it is surprising that, e.g., we can get a non-vacuous bound on the model that, modulo our lack of understanding of low-rank approximations to the identity matrix, is linear in the parameter count of the model!).
In follow-up work, we are making progress on applying the approach to models with MLPs, to models with multiple attention heads, to models with layer norm, and to two-layer models. We see three big obstacles. We believe all the obstacles are important for the field of mech interp to address (with the possible exception of the third one):
1. Unstructured approximation error terms (“noise”) accumulate quickly, necessitating either models with significantly less noise (more interpretable models) or more sophisticated error term analysis.
In follow-up work, we have preliminary results suggesting that fine-tuning models can make them sufficiently more interpretable as to bypass this problem.
2. Path decomposition is exponential in the number of layers, necessitating adequate mechanistic understanding of how to simultaneously argue that multiple paths contribute negligibly.
Existing approaches to solving instances of this problem in finding mechanistic interpretations -- such as presuming independence of irrelevant paths and evaluating them all simultaneously, or using gradient-based search methods that leave the path decomposition implicit -- unfortunately do not yield valid proof techniques.
This problem is especially pressing for us, as it interacts with the central problem of compounding error approximations (1).
However, we are currently investigating the possibility of matching prefixes of paths to allow mechanistic understanding to be used to simultaneously argue that multiple paths contribute negligibly.
3. Worst-case error bounds are both significantly looser by default and significantly more expensive to compute than average-case error bounds.
We would be excited to see a formalization and analysis of compact guarantees of average-case performance! (Unfortunately, we are not aware of any adequate formalism of average-case performance that is independent of training method.)
> If I understand correctly, much of the theory in the appendices relies on a careful analysis of the structure of this particular transformer model **and** the particular (arithmetic) task in question.
That is correct.
However, we would like to highlight that it should be straightforward (if labor-intensive) to generalize our work to other toy models where the model computes the answer by paying attention to a single token or position, i.e., general element retrieval; example tasks include: first element, last element, binding an adjective to a key phrase, etc.
This is because our theorems in the appendices prove in full generality that any computation performed by a single attention head, when only one of the position and the token value matters, is extremized by pure sequences.
So the worst-case behavior of the computation at a single token can always be found by setting all other tokens to be identical, regardless of what the task is.
Furthermore, generalizing our convexity arguments to multiple attention heads *whose error terms can be treated independently without degrading the bound too much* is also straightforward.
Generalizing the theory to handle coupled attention heads or more complex uses of attention will require additional theoretical work.
We address the high-level version of this concern in the top-level rebuttal where we talk about the general applicability of our approach to projects on larger input distributions, different tasks, more complex architectures, and larger models.
> similarly, it is unclear whether re-introducing transformer components, and especially MLP layers, won't obliterate the accuracy bounds derived here to be practically useless (e.g. below chance levels, which would be 25% here)
In follow-up work, we conduct analysis of a modular addition model with an MLP where we achieve a bound of roughly 80%--90% on a sub-brute-force proof.
However, it is indeed the case that integrating all the mechanistic understanding we have of that obliterates the accuracy bound.
Ultimately, we hope that fine-tuning will allow bypassing this issue.
Separately, we want to note that 25% would be *average case* chance levels for *argmax* (predicting the position of the maximum).
Average-case chance levels on max are actually $1/d_{\mathrm{vocab}} = 1/64 = 1.56\%$ for this task, since we are predicting tokens not positions.
Furthermore, it's not entirely clear that the baseline for proofs should be average-case bounds rather than worst-case bounds.
Indeed we typically see bound performance drop off to floating point error, not average-case chance, when error terms accumulate too quickly.
> you mention: [...] I believe a standard reference is [...]
Thank you!
Please don't hesitate to ask any further questions!
We'd love the chance to highlight strengths of our submission that might encourage you to further increase your score!
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed clarifications
Comment: I thank the authors for their very thorough followup to my questions. Also, thanks for pointing out my mistake about the chance-level performance on the max task!
I am looking forward to future work in this direction, and I remain strongly in favor of accepting this paper to the conference.
---
Rebuttal 1:
Rebuttal: Thank you to the reviewers for their comments and feedback!
We’re happy that reviewers agree that our approach to verification is novel and that our approach to mech interp is rigorous.
We address the five common questions that reviewers raised.
**(1) Motivation for picking a toy setting (vjrV, KZFC, and ZhNY)**
Formal reasoning is computationally expensive; very few large software projects have ever been verified [1][2], none of them comparable to large transformer models [3][4].
Separately, there is a high fixed cost to taking on any verification project, regardless of computational efficiency of the verification itself.
Thus, we picked the simplest setting to study the question of interest:
Is it even possible to reason more efficiently than by brute force about model behavior?
We did not adequately explain this motivation in the submission, and will add a section.
**(2) Applicability of results to other settings (exsQ, vjrV, KZFC, and ZhNY)**
1. **Larger input spaces:**
We applied our proof strategies to models trained for Max-of-10 and Max-of-20, and have attached our results.
While running the brute force proof on Max-of-20 would require $64^{20} \cdot 20 \cdot 64 \cdot 32 \approx 10^{40}$ FLOPs, which is about $10^{17}\times$ the cost of training GPT-3, our cubic proof achieves bounds of 94% (Max-of-5), 93% (Max-of-10) and 83% (Max-of-20) in under two minutes, demonstrating that proof strategies can be reused on larger input spaces (and indeed scale better than the brute-force or sampling approaches do).
2. **Different tasks:**
In this work, we worked on highly optimized relaxations to make our bounds as tight as possible when incorporating as little understanding as possible.
This is not necessary for deriving proofs.
Our general formalization of mech interp is replicable: (1) theorem statements are exact expressions for the difference between the actual behavior of the model and the purported behavior, and (2) proofs are computations that bound the expression.
Furthermore, our convexity theorems and proofs are applicable much more generally to element retrieval tasks.
3. **More complicated architectures:** We worked on a simple model studied in A Mathematical Framework for Transformer Circuits.
In follow-up work, we extend this approach to proving bounds on 1L transformers with ReLU MLP trained on modular addition.
4. **Larger models:** We agree that it’s an open question whether or not the mech interp approach to proofs can scale to larger models.
However, a large part of this question lies in the feasibility of deriving a high degree of faithful mechanistic understanding from large models (i.e. whether mech interp itself will scale).
This is widely recognized in the field of mech interp, and scaling interp approaches while getting both a high degree of mechanistic understanding and assurances that said understanding is faithful to the model is an active area of research in mech interp in general.
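As a quick sanity check on the brute-force cost quoted in point (1) above, the product of the stated factors can be verified to land at roughly $10^{40}$ FLOPs. The GPT-3 training cost used below (~3.14e23 FLOPs) is a commonly cited outside figure, not one given in the rebuttal.

```python
import math

# Factors as given in the text: 64^20 possible inputs, times a
# per-input cost of 20 * 64 * 32 FLOPs.
flops = 64**20 * 20 * 64 * 32
assert 40 < math.log10(flops) < 41  # ~10^40 FLOPs, as claimed

# Commonly cited GPT-3 training cost (assumption, not from the text):
gpt3_flops = 3.14e23
assert 17 < math.log10(flops / gpt3_flops) < 18  # ~10^17x ratio
```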
**(3) Why use mech interp at all, especially given that it “results” in worse bounds? (exsQ and ZhNY)**
First, we agree that ideally, we could maintain the tightness of the bound even as we decrease the length of the proof.
However, there are some fundamental reasons to expect a trade-off: for example, from the compression perspective, any proof strategy that does not run the true model on all inputs needs to use a compressed approximation to the model, the input distribution, or both.
So unless the model weights are losslessly compressible, shorter proofs using lossy approximations will likely cause weaker bounds.
Consequently, the results of proofs derived via mech interp should be compared to other proofs of comparable length.
Indeed, we find that better mech interp can give us tighter bounds at the same proof length (Figure 3), which provides evidence that better mech interp can be used to improve proofs.
Conversely, we can think of the quality of proofs (the combination of tightness of bound, and length of proof) as a metric for how good our mechanistic understanding is.
From this perspective, the fact that mech interp derived bounds are worse suggests gaps in our mechanistic understanding.
As mech interp matures as a field and we develop tools that enable more faithful and complete understanding of model behavior, we expect that the quality of bounds we derive from mechanistic understanding will improve.
**(4) Less faithful interpretations produce better bounds (exsQ and vjrV)**
We have corrected the plot attachment, which was a genuine mistake on our part.
The corrected plot makes its point more clearly.
**(5) Validity of the noise hypothesis (exsQ and vjrV)**
We want to clarify that we use “noise” to refer to error terms in our approximation, which is confusing terminology that we will change.
In our work, we use approximations to model components in order to shorten proofs.
While we were able to approximate individual matrices with small error terms, when we compose the approximations, the estimate for the error term on the matrix composition is much larger than the empirical error term.
This is because lower-bounding (aka worst-case bounds) requires that we pessimize over the error terms.
Thus, we are not referring to noise in models (which we might ablate to improve our bounds or introduce to check our hypothesis).
Additionally, reviewers point out issues in our presentation.
We have incorporated feedback from this review process, and have made significant progress since the submission of the manuscript, including standardizing language and adding definitions, simplifying notation, and improving copyediting.
We address remaining issues in individual rebuttals.
[1]: Leroy. “A Formally Verified Compiler Back-end”
[2]: Klein et al. “seL4: Formal Verification of an OS Kernel”
[3]: Clarke. “State Explosion Problem in Model Checking”
[4]: Gross. “Performance Engineering of Proof-Based Software Systems at Scale”
Pdf: /pdf/a8d8249340ff6f4a65c5fafba0e12a1cf2ea9a3e.pdf
Dataset source: NeurIPS_2024_submissions_huggingface, 2024
---
Generative Retrieval Meets Multi-Graded Relevance
Paper Decision: Accept (spotlight)
Summary: The paper targets the problem of multi-graded relevance in a Generative Retrieval setup.
Generative Retrieval uses a Seq2Seq model to decode relevant “docids” for a given query. Prior work on Generative Retrieval has focused on tasks where there is only one relevant document per query and the degree of relevance is binary: “relevant” or “not relevant”. The paper addresses multi-graded retrieval tasks, where each query has multiple relevant documents, each having its own degree of relevance.
The paper proposes a methodology to tackle graded retrieval tasks in a Generative Retrieval framework. It identifies the key challenges: 1. having distinct but relevant “docids”, and 2. graded relevance when matching queries and documents. It proposes solutions to these challenges in the “regularized fusion” and “MGCC” approaches, respectively.
Additionally the authors also develop and experiment with a pre-training setup for multi-graded relevance retrieval.
Strengths: 1. The problem of multi-graded retrieval in Generative Retrieval is relevant/important with no prior work
2. The challenges are well-identified i.e., the goal of having distinct but relevant docids for documents.
3. The solutions proposed are well formulated and intuitive to understand.
4. Empirical Results are strong with thorough ablations and detailed discussions.
5. The authors propose and experiment with a pre-training framework, showing positive results.
Weaknesses: While not a strong weakness, the mathematical notation in the paper can use some work to make the paper an easier read. I found it tough to parse through “Section 3 : Preliminaries”. The paper would benefit from resolving this.
Technical Quality: 3
Clarity: 2
Questions for Authors: I just have a few clarifying questions :
1. Are there numbers on Recall? Prior work I believe focuses on MRR and Recall metrics.
2. Why include the last term in Eq2 (similarity in embedding space) when the same is taken care of by Eq1? Is it for training stability?
3. If I understood correctly, then docids are generated by decoding a random sample on the hypersphere (z=e+r). Does that mean if we were to sample multiple times for each document, we would get distinct docids from the randomness in r itself? Do we need e of documents (through QG module) to be “distinct” (i.e. do we need regularized fusion if we have better sampling or more samples of r?)
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Typo: In line 188, “These fixed” needs “T” to be lowercase
While not a limitation, I believe the paper would also benefit from a few Qualitative examples showing Multi-graded relevance in practice. An uninitiated reader might not be familiar with Generative Retrieval or Multi-graded relevance and a figure/qualitative example might be useful for them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive and valuable comments. With regard to different comments about weaknesses, our responses are as follows:
**[Comment 1]** While not a strong weakness, the mathematical notation in the paper can use some work to make the paper an easier read. I found it tough to parse through “Section 3 : Preliminaries”. The paper would benefit from resolving this.
**[Response 1]** Thank you for your suggestions. We will simplify the mathematical notation in the paper and add a notation table.
---
**[Comment 2]** Are there numbers on Recall? Prior work I believe focuses on MRR and Recall metrics.
**[Response 2]** Thank you for your question. Since the Hits metric and the Recall metric are very similar in principle, we did not include Recall separately. Similarly, [79, 106, 7, 17, 102] only used Hits or used both Hits and MRR.
---
**[Comment 3]** Why include the last term in Eq2 (similarity in embedding space) when the same is taken care of by Eq1? Is it for training stability?
**[Response 3]** Thank you for your question.
(1) For both the relevance and distinctness regularization terms, the similarity is used to ensure relevance between the document and the query, i.e., the docid (for example, the numerator in Eq 1 and the numerator of the last term in Eq 2). Additionally, the denominator in Eq 1 ensures that irrelevant queries and documents are dissimilar.
(2) The first two terms in Eq 2 push apart different document representations encoded by the QG model and different query representations encoded by the AE model. To bridge the QG and AE latent spaces, the last term in Eq 2 builds upon the first two terms by making relevant document and query representations similar. Note that it does not involve comparing documents to irrelevant queries, which is a distinction from Eq 1.
---
**[Comment 4]** If I understood correctly, then docids are generated by decoding a random sample on the hypersphere (z=e+r). Does that mean if we were to sample multiple times for each document, we would get distinct docids from the randomness in r itself? Do we need e of documents (through QG module) to be “distinct” (i.e. do we need regularized fusion if we have better sampling or more samples of r?)
**[Response 4]** Thank you for your question.
(1) Sampling multiple times can indeed yield different docids. However, our requirement for docids is not just simple "distinctness," but to ensure that docids are relevant to the documents and that they are "distinct" based on semantic differences among the documents. Achieving this requires using e as an anchor, which is why regularized fusion is necessary.
(2) Sampling strategies are heuristic post-processing methods and might not guarantee that the model generates high-quality docids. Regularized fusion, on the other hand, can optimize the model to ensure better generation capability.
---
**[Comment 5]** Typo : In line 188, “These fixed” needs “T” to be lowercase
**[Response 5]** Thank you for pointing this out. We will make the correction.
---
**[Comment 6]** The paper would also benefit from a few Qualitative examples showing Multi-graded relevance in practice. An uninitiated reader might not be familiar with Generative Retrieval or Multi-graded relevance and a figure/qualitative example might be useful for them.
**[Response 6]** Thank you very much for your suggestions. We add some qualitative examples of multi-graded relevance as follows.
In real-world search scenarios, documents might be described with different degrees of relevance [Yu et al., 2009; Scheel et al., 2011] with respect to queries, such as not relevant, partially relevant, and highly relevant. For example, the Yahoo! search engine presented "Perfect" relevant documents at the top of the ranking, followed by "Excellent", "Good", and "Fair" relevant documents, without showing "Bad" relevant documents [Chapelle et al., 2011]. Other examples include medical retrieval [Yu et al., 2009], recommender systems [Scheel et al., 2011], as well as representative retrieval benchmarks [Tetsuya et al., 2005; 18, 19].
Targeting multi-graded instead of simple binary relevance helps to meet the demands of practical use cases, while the collection of multi-graded annotations helps to improve the retrieval results. Accordingly, the importance of IR evaluation based on multi-graded relevance assessments is increasingly attracting attention; see, e.g., the NII Test Collection for IR Systems (NTCIR) project [71] and many widely-used ranking metrics [12, 31, 49].
- [Yu et al., 2009] Enabling Multi-Level Relevance Feedback on Pubmed by Integrating Rank Learning into DBMS
- [Scheel et al., 2011] Performance Measures for Multi-Graded Relevance
- [Chapelle et al., 2011] Yahoo! Learning to rank challenge overview
- [Tetsuya et al., 2005] Ranking the NTCIR Systems Based on Multigrade Relevance
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response. | Summary: The work deals with generative retrieval for cases where documents have multi-graded relevance. The authors propose GR2 a framework for generative retrieval in such cases by tackling two important challenges. First they optimize for relevance and distinctiveness of document IDs by a regularized fusion approach which comprises a pseudo query generation module followed by an auto-encoding module that reconstructs the document ids. Secondly, the authors introduce a graded, constrained contrastive learning approach which aims to bring together representations of queries with the representations of the relevant docids and push apart irrelevant docids in mini-batch. GR2 shows impressive performance improvements over other approaches and is also relatively efficient.
Strengths: - The work tackles an important task of generative retrieval for cases of documents with multi-graded retrieval. Compared to existing works like [2], the work captures the relationship between labels using a graded contrastive loss leading to impressive gains over existing generative IR models and dense IR models.
- The loss functions are well motivated and shows the impact of contrastive learning on generative retrieval. I think further studies on effect of different contrastive losses would be an interesting avenue to explore.
- The authors perform extensive experiments and ablations of the proposed method to reinforce the importance of each of the components.
[1] Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval, Lee et al.
[2] Learning to tokenize for generative retrieval.
Weaknesses: - Some important baselines are missing in the current version of the work. ColBERT (ColBERTv2) in the PLAID setup is a powerful dense retriever and would serve as a valuable approach for comparison. Additionally, dense retrieval models like Contriever and TAS-B have also demonstrated impressive performance on a wide range of benchmarks, are strong dense retrievers due to their training objectives, and should be included for comparison. Particularly since GR2 is also used in a pre-training setup, it is important to compare to retrievers like Contriever, which is pre-trained using contrastive learning for retrieval.
- While there is clear intuition for the distinctiveness and relevance regularization losses and the MGCC loss, further experiments could be carried out with respect to the contrastive learning parts of these approaches to understand the approach better and to help improve or strengthen the results. For instance, negative sampling is an important aspect of contrastive learning, and a good mix of hard and soft negatives is crucial for learning useful representations. While in MGCC the authors explore mini-batch-based negatives, it is also important to explore global negatives similar to works like ANCE [1].
- The paper would benefit from some error analysis. This involves analysis of cases where wrong docids are considered relevant, and an analysis of whether it stems from the stochasticity of generative models or other external factors. The paper does not currently report any list of common errors or a qualitative analysis of results. It is critical to understand where the current approach fails as it would provide avenues for further research.
[1] Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval, Lee et al.
[2] Learning to tokenize for generative retrieval.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Equation 10 gives equal weights to the regularization losses. Have you considered weighting them differently ?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While authors briefly describe the limitations in conclusion a more detailed explanation would help contextualize the work and understand future directions better.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive and valuable comments. With regard to different comments, our responses are as follows:
**[Comment 1]** Some important baselines are missing in the current version of the work. ColBERT (ColBERTV2) in PLAID setup is a powerful dense retriever and would serve as a valuable approach for comparison. Additionally, dense retrieval models like contriever and tas-b have also demonstrated impressive performance on a wide range of benchmarks and are strong dense retrievers due to their training objectives and should be included for comparison.
**[Response 1]** Thank you for your suggestions.
Due to the time constraints of the rebuttal, we have now included the effects of ColBERT and Contriever on the Gov 500K dataset, as shown in the table below:
| Method|nDCG@5| nDCG@20|P@20 |ERR@20|
|---|---|---|---|---|
| ColBERT| 0.4384|0.4529|0.5162 | 0.1803|
| Contriever|0.4683| 0.4641| 0.5267| 0.1979|
|GR$^{2S}$|0.4869|0.4784|0.5364|0.2125|
|GR$^{2P}$|0.5095|0.4912|0.5506|0.2167|
We can observe that these two baselines perform better than strong GR baselines, such as NOVO, but their performance is slightly worse than RIPOR and GR$^2$, which also validates the effectiveness of our method.
---
**[Comment 2]** While there is clear intuition for the distinctiveness, relevance regularization losses and the MGCC loss, further experiments could be carried out with respect to the contrastive learning parts of these approaches to understand the approach better and to help improve or strengthen the results. For instance, negative sampling is an important aspect of contrastive learning and a good mix of hard and soft negatives is crucial for learning useful representations. While in MGCC the authors explore mini-batch-based negatives, it is also important to explore global negatives similar to works like ANCE.
**[Response 2]** Thank you very much for your feedback.
(1) If we remove the contrastive learning from the distinctiveness and relevance regularization losses, these two regularization terms will degrade to maximizing the similarity between the query and relevant documents. We denote this variant as GR$^{2S}_{Sim}$.
If we remove the contrastive learning from the MGCC loss, this loss will also degrade to maximizing the similarity between the query and related documents, but with an additional relevance grade weight, which is exactly the GR$^{2S}_{CE}$ variant we explored in line 271. The performance of these two variants compared to GR$^2$ on Gov500K is as follows:
| Method |nDCG@20| P@20 |
|---|---|---|
| GR$^{2S}_{Sim}$| 0.4693|0.5308|
| GR$^{2S}_{CE}$|0.4715| 0.5321|
|GR$^{2S}$|0.4784|0.5364|
|GR$^{2P}$|0.4912|0.5506|
We can see that both variants perform worse than GR$^{2}$, which validates the effectiveness of contrastive learning in these losses.
(2) We also agree that global negatives might be more beneficial for performance. Due to hardware constraints, we have used contrastive learning within the mini-batch range. One possible approach to utilize global negatives is to first perform warm-up training with mini-batch negatives, then use the resulting model to filter hard negatives on a global range, and finally use them for a second phase of enhanced training. However, this approach might increase the optimization cost. We will explore more efficient methods in future work.
---
**[Comment 3]** The paper would benefit from some error analysis.
**[Response 3]** Thank you for your suggestions.
(1) We performed an error analysis on GR$^{2P}$. Specifically, in the MS 500K dataset, we identified a bad case where the query (Query ID: 1054438) is "explain grievances". The relevant ground-truth document's docid is “what is grievance?”. In the list of docids predicted by GR$^{2P}$, the top-1 docid is an irrelevant one: “explain complaints”, with the ground-truth docid ranked second. Both documents are semantically similar.
The reason GR$^{2P}$ failed to rank the correct docid first might be due to our sequence-based docid approach, which considers order and requires exact generation. The targeted document will be missed from the retrieval results if any step of the generation process makes a false prediction about its identifier [96]. This issue arises from using a prefix tree to constrain decoding, which is a common problem in many GR studies [79, 106, 85, 22, 104]. One possible improvement is to use term-sets for decoding constraints, but this requires more storage space [96].
(2) We will add this analysis to the appendix in our subsequent revisions.
---
**[Comment 4]** Equation 10 gives equal weights to the regularization losses. Have you considered weighting them differently ?
**[Response 4]** Thank you very much for your question.
We experimented with different values for $\gamma$, while keeping the weights of the MLE losses in Equation 10 as 1. Our experiments revealed that:
(1) When $\gamma$ is less than 1, there is a slight decrease in performance. This may be because it weakens the effect of the MGCC loss, causing the model to learn less effectively about relevance.
(2) When $\gamma$ is greater than 1, the performance decreases more significantly. This is likely because it weakens the effect of the other two losses, which are fundamental operations in GR, namely indexing and retrieval. Poor learning of these operations has a significant impact on performance. Therefore, we set $\gamma$ to 1.
---
**[Comment 5]** While authors briefly describe the limitations in conclusion a more detailed explanation would help contextualize the work and understand future directions better.
**[Response 5]** Thank you for your suggestions. We will add a description of the limitations in the appendix in our subsequent revisions. | Summary: The paper proposes a novel QG based docid for generative retrieval, optimising relevance and distinctness of the generated queries jointly. It introduces the MGCC loss with multi-graded labels. Experiments on subsets of Gov2, ClueWeb09-B, Robust04, MS Marco, and NQ with up to 500k documents validate the effectiveness of the methods.
Overall, the contributed methods are valuable and inspiring, and the results are convincing.
Strengths: - Jointly optimizing relevance and distinctness with an AE block is a nice idea, effectively balancing both factors as demonstrated by ablation studies.
- Creating semi-supervised graded PT data from Wikipedia seems very useful, with remarkable improvements from this additional training.
- The paper is well-presented and the experiments are comprehensive.
Weaknesses: - Limited corpus scale remains a concern for various GR works, further discussion on its potential influence on the proposed method is needed. Nonetheless, a 500k corpus is substantial and applicable.
- The paper should discuss the existing work on the use of QG in GR in more detail
Technical Quality: 3
Clarity: 4
Questions for Authors: - The term "pre-train" is somewhat misleading, as using semi-supervised Wikipedia data for further training differs from typical LLM pre-training.
- Some examples for generated queries, both with and without using the RF building block, to better understand the reasons behind its effectiveness.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive and valuable comments. In response to the various feedback, we address each point as follows:
**[Comment 1]** Limited corpus scale remains a concern for various GR works, further discussion on its potential influence on the proposed method is needed. Nonetheless, a 500k corpus is substantial and applicable.
**[Response 1]** Thank you for your suggestions. We agree that handling large-scale corpora, such as those in the millions, is indeed necessary and represents a significant challenge for GR. There is currently limited research exploring this issue, and we discuss relevant work in lines 731-737 of Appendix C. Additionally, we are actively exploring ways to better manage and remember corpora of this scale.
---
**[Comment 2]** The paper should discuss the existing work on the use of QG in GR in more detail
**[Response 2]** Thank you very much for your suggestions.
(1) Currently, GR has utilized QG in the following works: (i) DSI-QG [106], NCI [85], and RIPOR [92] use QG as a data augmentation technique to generate pseudo-queries as the input. (ii) [43, 76] use pseudo-queries as one of the identifiers. For the second type, our method differs in that they directly use pseudo-queries generated by DocT5query as identifiers without additional optimization, which results in multiple semantically similar documents sharing the same identifier.
(2) We have a brief description in line 93. We will provide further details in the related work section.
---
**[Comment 3]** The term "pre-train" is somewhat misleading, as using semi-supervised Wikipedia data for further training differs from typical LLM pre-training.
**[Response 3]** Thank you for pointing this out. Indeed, our "pre-training" differs from typical LLM self-supervised pre-training. The reason we describe it this way is to follow the work of [104, 15], where they construct pseudo data pairs for training to enhance the model's capability for retrieval tasks and better align with downstream retrieval tasks.
---
**[Comment 4]** Some examples for generated queries, both with and without using the RF building block, to better understand the reasons behind its effectiveness.
**[Response 4]** Thank you very much for your suggestions.
(1) We sampled a query (Query ID: 289812) from the MS 500K dataset: "How many mm is a nickel coin?". The top-3 docids generated by GR$^{2S}$ and GR$^{2S}_{RF}$ are as follows (correct predictions are indicated in italics):
| Rank | GR$^{2S}$ | GR$^{2S}_{RF}$ |
|---|---|---|
|1| *What is the diameter of a nickel coin in millimeters?* | How much was nickel in a coin?|
|2| How much was nickel in a coin? |How heavy is the nickel in grams?|
|3| What was the weight of the nickel coin?| When was nickel silver first used?|
We can observe that GR$^{2S}$ ranks the correct docid first. However, the top 3 identifiers predicted by GR$^{2S}_{RF}$ do not include the correct docid, even though all the predicted docids share keywords with the query, such as "nickel." There is still a gap in relevance compared to the correctly predicted docid, which further validates the importance of the RF module for docid quality.
(2) We will add this case study to the experimental results section in Section 5. | Summary: In this paper, the authors propose a new generative retrieval model, which utilizes multi-grade relevance labels instead of binary relevance. Using graded relevance labels is not well-discussed in previous works.
Pros:
- The problem itself is interesting and important. Generative retrieval is a hot research topic in the IR community.
- The proposed solution is reasonable.
Cons:
- In the Introduction, the authors introduced the reason why the simple generation likelihood of docids cannot work for graded relevance. It is strange to claim that "Docids commonly exhibit distinct lengths" given that some existing docid schemas (such as the semantic ID used in DSI, the PQ-based ID, etc.) are fixed-length. Even for token-based docids, we can still add a fixed-length constraint. It is hard to convince the readers by claiming that "a fixed length might not adequately encompass diverse document semantics".
- Both DocID schema design and learning from multi-grade relevance are considered in the paper. I am quite interested in learning whether the multi-grade relevance part can also work with other DocId schemas. So an ablation study about this (e.g., using semantic string as docids) should be given (note that this is different from the experiments in Section 5.2.2) .
Strengths: - The problem itself is interesting and important. Generative retrieval is a hot research topic in the IR community.
- The proposed solution is reasonable.
Weaknesses: Cons:
- In the Introduction, the authors introduced the reason why the simple generation likelihood of docids cannot work for graded relevance. It is strange to claim that "Docids commonly exhibit distinct lengths" given that some existing docid schemas (such as the semantic ID used in DSI, the PQ-based ID, etc.) are fixed-length. Even for token-based docids, we can still add a fixed-length constraint. It is hard to convince the readers by claiming that "a fixed length might not adequately encompass diverse document semantics".
- Both DocID schema design and learning from multi-grade relevance are considered in the paper. I am quite interested in learning whether the multi-grade relevance part can also work with other DocId schemas. So an ablation study about this (e.g., using semantic string as docids) should be given (note that this is different from the experiments in Section 5.2.2) .
Technical Quality: 3
Clarity: 3
Questions for Authors: na
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive and valuable comments. With regard to your comments, our responses are as follows:
**[Comment 1]** In the Introduction, the authors introduced the reason why the simple generation likelihood of docids cannot work for graded relevance. It is strange to claim that "Docids commonly exhibit distinct lengths" given the truth that some existing docid schemas (such as semantic id used in DSI, the PQ-based ID, etc.) Even for token-based docID, we can still add a fix-length constraint. It is hard to convince the readers by claiming that "as a fixed length might not adequately encompass diverse document semantics".
**[Response 1]** Thank you for pointing this out.
1. Indeed, the current design of docids can be either fixed-length or variable-length. Among variable-length designs, there are about 8 types, such as unstructured atomic integers[79], naively structured strings[79], titles[22], URLs[104], pseudo queries[76], important terms[14], n-grams[7], and multi-view identifiers[43]. Fixed-length designs are roughly 4 types: semantically structured strings[79], PQ-based strings[104,92], learnable numbers[74], and learnable n-gram sets[86]. We stated that "Docids commonly exhibit distinct lengths" because variable-length identifiers typically have a lower acquisition cost and are thus widely used. In contrast, fixed-length learnable identifiers offer better retrieval performance but usually require more complex learning tasks and the optimization process is more challenging [74, 86, 77].
2. Regarding the "token-based docID" you mentioned, it is indeed possible to directly set a fixed-length constraint. However, setting a heuristic fixed value might lead to information loss (if the length is too short) or excessive storage cost (if the length is too long). Additionally, different datasets may require different suitable values. Therefore, we did not adopt this approach.
3. To describe this more accurately, we will modify the statement (Line 43) to: “For variable-length token-based docids, such as titles, URLs, etc., which are easily obtainable, it may be challenging to comprehensively include the diverse information of the document. For fixed-length learnable docids, obtaining such docids incurs higher optimization costs. Considering the cost of generating docids, this work focuses on variable-length token-based docids.”
---
**[Comment 2]** I am quite interested in learning whether the multi-grade relevance part can also work with other DocId schemas. So an ablation study about this (e.g., using semantic string as docids) should be given.
**[Response 2]** Thank you for your question.
If semantic strings are used as docids, they can also be applied to our MGCC loss. This type of identifier inherently possesses a degree of relevance to the document and distinctness between docids. Therefore, we directly follow the method in DSI[79] to generate semantic strings as docids.
We combine this docid with the MGCC loss and optimize it using supervised learning, denoted as GR$^{2S}_{sem}$. The performance on the Gov 500K dataset is as follows:
| Variant | nDCG@5 | nDCG@20 | P@20 | ERR@20 |
|----------|----------|----------|----------|----------|
| GR$^{2S}_{sem}$ | 0.4264 | 0.3487 | 0.4618 | 0.1893|
|GR$^{2S}$|0.4869|0.4784|0.5364|0.2125|
|GR$^{2P}$|0.5095|0.4912|0.5506|0.2167|
We can observe that the performance of GR$^{2S}_{sem}$ is significantly worse compared to our GR$^{2S}$. The reason might be that such docids have a greater gap compared to token-based ids and queries or documents, making the learning process for the model more challenging (which is consistent with the findings in [7, 22]). Additionally, their distinctness is determined by a combination of clustering indices and randomly assigned numbers within the final layer clusters (for algorithm details, please refer to the DSI[79] paper). In other words, these indices better reflect similarities (the same numbers have similar semantics), which to some extent weakens the differences (the degree of difference between different indices does not necessarily correlate strongly with the degree of difference in document content).
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Bridging semantics and pragmatics in information-theoretic emergent communication | Accept (poster) | Summary: This study explores how language, combining meaning and context, might evolve. It examines how a shared vocabulary can emerge from interactions that consider the situation. By training agents in an unsupervised fashion to consider both context-specific utility and general communication pressures, the research reveals that these aspects are key for understanding language evolution.
Strengths: - Pragmatics is under-explored in NLP, but it is an important topic for studying and modeling language. This work is a novel and interesting effort in that direction.
- The approach seems reasonable, and the evaluation shows the potential of using cognitively-inspired optimization principles for the communication strategies akin to human interaction.
Weaknesses: - I don't see major weaknesses.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The paper's approach is interesting. To just confirm my understanding: in this specific setting, pragmatic representations will be always more specific (like hyponyms) than semantic representations, rather than dealing with connotations. This seems similar to the challenge of finding the most accurate hyponyms to describe something. Is my understanding correct?
- What do you think would be the potential real-world applications of this work?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and encouraging feedback!
To address the reviewer’s questions:
> The paper's approach is interesting. To just confirm my understanding: in this specific setting, pragmatic representations will be always more specific (like hyponyms) than semantic representations, rather than dealing with connotations. This seems similar to the challenge of finding the most accurate hyponyms to describe something. Is my understanding correct?
This is partially correct. While agents could learn the strategy described by the reviewer, they may also learn a mix of strategies that humans employ. For example, a related phenomenon is known as scalar implicatures, in which a weaker characterization, such as ‘dog’, may implicitly rule out references that correspond to stronger characterizations, such as ‘Pomeranian’ (imagine a scene with a Labrador and a Pomeranian; both are types of dogs but Labrador is considered a more typical dog compared to Pomeranian). Humans also employ strategies with non-hierarchical alternatives (Silberer et al., 2020). That is, different conceptualizations for the same object can be produced by speakers to highlight different aspects of the referent. For example, we can call someone “teacher” irrespectively of any context, but also refer to them as “tennis player” when the context requires us to do so, even though “teacher” and “tennis player” are not in a hypernym-hyponym relationship. Our agents may also converge on solutions of this kind.
> “What do you think would be the potential real-world applications of this work?”
While our work focuses on the science of intelligence, rather than on engineering AI applications, we see at least one important direction in which our work could potentially lead to improvement in real-world applications. Recall that our agents are trained without any human-generated linguistic data. This stands in sharp contrast to LLMs that require massive amounts of training data, which is unavailable in low-resource languages. Therefore, our work could help understand how to leverage cognitively-motivated optimization principles for training more data-efficient language models, with the hope of advancing more equitable language technologies. We thank the reviewer for raising this question and we will address this point in the paper.
**References**
Silberer, C., Zarrieß, S., and Boleda, G. (2020). Object naming in language and vision: A survey and a new dataset. In *Proceedings of the 12th Language Resources and Evaluation Conference*, pages 5792–5801, Marseille, France. European Language Resources Association.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. | Summary: This paper addresses the co-evolution of pragmatics, which must be interpreted according to context, and lexical semantics, the common meaning independent of context. In this paper neural agents are trained in the pragmatic setting to select the target object from two objects, and evaluated in the semantic setting to reconstruct mental representation of the target.
The authors used the vector-quantized variational information bottleneck (VQ-VIB) to evaluate the trade-off of utility, informativeness, and complexity. The results showed that human-like outcomes emerged when not solely pursuing one aspect among utility, informativeness, and complexity, but rather making a non-trivial trade-off among them.
Strengths: The idea of learning emergent communication in a pragmatic setting and evaluating it in a semantic setting is intriguing and meaningful.
This paper effectively conducts experiments on naturalistic images. Through quantitative and qualitative evaluation, the authors show that human-like lexicon emerges when optimizing both context-specific and general objectives.
Weaknesses: The explanation of the significance of training in a pragmatic setting and testing in a semantic setting is insufficient.
The paper's contributions are somewhat unclear. Except for the point that the communication model is trained in a pragmatic setting, it is similar to [1] and [2]. It uses VQ-VIB as presented in [1], and the overall communication model is similar to [2].
The novelty is somewhat unclear in the claim that optimizing both task-specific and general objectives yields human-like results. Although there is a difference in that emergent communication is learned in a pragmatic setting, the trade-off between task-specific utility and general information is already presented in [1].
[1] Tucker, Mycal, R. Levy, J. Shah, and Noga Zaslavsky. 2022. “Trading off Utility, Informativeness, and Complexity in Emergent Communication.” *Advances in Neural Information Processing Systems*.
[2] Tucker, Mycal, Roger P. Levy, Julie Shah, and Noga Zaslavsky. 2022. “Generalization and Translatability in Emergent Communication via Informational Constraints.” https://openreview.net/pdf?id=yf8suFtNZ5v.
Technical Quality: 3
Clarity: 4
Questions for Authors: - What is the significance of training in a pragmatic setting and testing in a semantic setting?
- Since the training loss includes utility, informativeness, and complexity, isn't semantics considered during training?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The explanation of the significance of training in a pragmatic setting and testing in a semantic setting is insufficient.
> What is the significance of training in a pragmatic setting and testing in a semantic setting?
We believe that this concern is somewhat related to the framing concern of reviewer Cm9N. As we explain in the related work section, and in our response to Cm9N, semantics and pragmatics are two key aspects of linguistic meaning that have so far been mostly studied separately from each other. While they are related, the interface between them has been largely under-explored and is not well understood. Most computational models of pragmatics assume a given lexicon, and most of the work of lexical semantics ignores the context-sensitive aspect of meaning that is captured by pragmatics. As we note in the abstract: “we aim to bridge this gap by studying how a shared lexicon may emerge from local pragmatic interactions.”
We believe that clarifying the framing throughout the paper, as we explain in the global rebuttal, will address this concern.
> The paper's contributions are somewhat unclear.
Our contributions are stated in the abstract (lines 9-15), introduction (lines 68-75), and conclusion section (lines 294-304). We would appreciate it if the reviewer could explain what is unclear about these lines in the paper, so that we could address this concern. Otherwise, this comment by the reviewer is too vague.
> Except for the point that the communication model is trained in a pragmatic setting, it is similar to [1] and [2]. It uses VQ-VIB as presented in [1], and the overall communication model is similar to [2].
While our model builds on the work by Tucker et al. that the reviewer mentioned, we extend that work in several important ways. First, not only is our training procedure different, but also our communication model is different. To see this, compare Fig. 2 in our paper with Fig. 1 in each one of the papers the reviewer cites. The evident differences are crucial for our work: we adjust the model to incorporate a shared context for the speaker and listener and implement a masking mechanism that allows us to invoke the model in two different ways (pragmatics and semantics settings). Second, in contrast to Tucker et al., who considered a discrimination task between whole images, our task focuses on discrimination between co-occurring objects within single images. This is crucial in our case, as we aim to capture communication in naturalistic scenes. Finally, our work addresses a major open question that has not been addressed by Tucker et al. That is, we focus on the under-explored interface between semantics and pragmatics, while Tucker et al. have focused on establishing the utility-informativeness-complexity tradeoff, and on the generalization and translatability afforded by this new framework.
Thanks to the reviewer’s comment, we have noticed that these important differences were not stated clearly enough in the related work section. We will adjust that section accordingly to address the reviewer’s point.
> Since the training loss includes utility, informativeness, and complexity, isn't semantics considered during training?
The key point in this context is that semantics is not built into the agents and we do not assume that a shared lexicon between the speaker and listener is given a priori. This is in contrast to the vast majority of studies in computational pragmatics that assume a given lexicon (for a review, see Goodman & Frank (2016), Pragmatic language interpretation as probabilistic inference, *Trends in Cognitive Sciences*). Semantics is considered during training only implicitly, in the sense that we incorporate in our training objective a general optimality principle, the information bottleneck, that is believed to guide the evolution of human semantic systems. | Summary: This paper looks at aligning emergent communication with human language through the joint optimization of the utility, informativeness, and complexity of a communication channel.
This is done in the context of a "pragmatic" signalling game in which a speaker agent must refer to an object in an image, contrasting it with a distractor object in the same image.
The resulting emergent communication protocol is evaluated in terms of its lexical semantics, such as lexicon complexity and size.
Strengths: In conjunction with standard criteria, there are three characteristics that are particularly important for emergent communication research: reusability (how easily can another researcher use the products of this research), generalizability (how much do the findings of this research apply broadly to our knowledge of emergent communication), and directedness (does this research contribute concretely to particular questions in emergent communication research).
### Quality
- (major) The experiments are reasonably designed and demonstrate informative results.
### Clarity
- (major) The experiments are easy to follow and are accompanied with informative visualizations.
### Reusability
- Nothing of note.
### Generalizability
- (major) The UIC framework utilized in this work is a general way to think about emergent communication systems with support in psycholinguistic theories. As a result, providing evidence of the utility of this framework, as well as an example of how to use it, makes such a framework more effective down the road for emergent communication research.
### Directedness
- (major) As mentioned above, the UIC framework makes progress towards multiple goals within emergent communication research: developing more formal models to describe system behavior as well as aligning emergent language agents (and the languages themselves) with human language behavior.
Weaknesses: ### Quality
- Nothing of note.
### Clarity
- (major) It is difficult to tell if the paper is primarily about the relationship between semantics and pragmatics in a given EC system or about empirically verifying the utility-complexity-informativeness framework.
### Reusability
- (major) The code is not available to be reviewed.
- (major) Although the code is available for VQ-VIB, the details on how the code (or some other codebase) was adapted to this paper are sparse.
### Generalizability
- (minor) There are not many details on how the UIC framework could be applied to other emergent communication systems or questions. I think the framework is already pretty general, so it would not be too difficult to expand on this for a paragraph or two.
### Directedness
- Nothing of note.
Technical Quality: 4
Clarity: 3
Questions for Authors: Although I think the contributions of this paper are worthwhile, I think it is somewhat caught between different framings. From the introduction (and title), I got the sense that the paper was about semantics and pragmatics, but from the rest of the paper I got more the sense that the paper was about validating the informativity-complexity-utility tradeoff using emergent communication techniques with the ManyNames dataset.
I think if the framing of pragmatic/semantics is primary, then there need to be more explicit definitions of what semantics and pragmatics is, why the distinction is important, and what the empirical investigation is telling us about it; I understand the difference between the "pragmatic" and "semantic" goals in the environment, but I do not get an explicit sense of what the relationship is between these two and why it is important. Also, some of the framing seems to make pragmatics seem wholly independent of semantics when really pragmatics is more a pressure on semantics.
On the other hand, if the framing is to be more consistent with the empirical work as it stands currently, I think there ought to be more talk of why the IUC-tradeoff is important/useful both generally and specifically with relation to the experiments.
Addressing this issue adequately would likely lead to me raising my Rating/Presentation score.
### Comments
- Figure 2: "Agent are trained" -> "Agents are trained"
- Line 156: "in its nature to" -> "in it nature"
- Not a requirement at all but it would be informative to compare the EC results to the recently released Mandarin Chinese data in ManyNames (assuming the data is already in the right format for analysis).
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Clarity + Questions (major)**
The main goal of the paper is to study the interface between semantics and pragmatics, and specifically, to address the question: how can a shared human-like lexicon emerge from task-specific, context-sensitive pragmatic interactions? As the reviewer noted in the “Questions” section of their review, this framing is clear in the title and introduction (we also believe it is clear in the abstract, related work, and conclusions), but less so in other parts of the paper. We thank the reviewer for noticing this, and believe that this shortcoming is due to our focus on the technical details in the more technical parts of the paper, such as the experimental setup and results.
To address this issue, the reviewer suggested that “there need to be more explicit definitions of what semantics and pragmatics is, why the distinction is important, and what the empirical investigation is telling us about it”. We agree and are grateful for this suggestion. Here we address the reviewer’s questions, and in the global rebuttal we explain how we will revise the paper to address this concern.
The term **lexical semantics** refers to the study of word meanings [1, 2], independently of any specific conversational context. An example we mention in the paper (lines 23-25) is that we have a shared idea of what the word ‘player’ means, regardless of any context or scene in which this word might appear. The structure and evolution of word meanings has been an active research area in linguistics and cognitive science for several decades [3-5], and it has recently been characterized using the information-bottleneck (IB) complexity-informativeness tradeoff [6, 7]. However, the vast majority of studies in this area do not take into account how words are used in context, which can alter their meaning.
The term **pragmatics** refers to the study of how the local conversational context may alter meaning in real time, as speakers and listeners reason about each other’s intentions and beliefs [8, 9]. For example, a speaker may choose either ‘player’ or ‘batter’ to describe the target person in the red bounding box in Fig. 1. In scenes with only one player, the more frequent word ‘player’ may be easier to produce, but in scenes with more than one player this word would not be informative about the speaker’s intended meaning. Therefore, if the speaker takes into account how a rational listener would interpret its words, then in the scene shown in Fig. 1 the speaker should use the word ‘batter’ instead of ‘player’. This type of reasoning, called pragmatic reasoning, has been widely studied both experimentally and with computational models that typically use tools from Bayesian inference and game theory [e.g., 9-11], which emphasize utility maximization alone. However, the vast majority of studies in this area assume that the lexicon is given and fixed.
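For concreteness, the canonical Rational Speech Act (RSA) recursion from the Goodman & Frank line of work [9] can be sketched as follows; the notation here is illustrative and not necessarily the exact model used in any particular cited paper:

```latex
% Illustrative RSA recursion (notation ours); the lexicon [[.]] is assumed given.
\begin{aligned}
L_0(m \mid w) &\propto [\![w]\!](m)\, P(m)
  && \text{(literal listener over a fixed lexicon)} \\
S_1(w \mid m) &\propto \exp\!\big(\alpha\,[\log L_0(m \mid w) - \mathrm{cost}(w)]\big)
  && \text{(pragmatic speaker)} \\
L_1(m \mid w) &\propto S_1(w \mid m)\, P(m)
  && \text{(pragmatic listener)}
\end{aligned}
```

The lexicon $[\![\cdot]\!]$ is precisely the component that these models take as given, and whose emergence is the open question at issue here.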
The **distinction** between semantics and pragmatics is important for several reasons: First, they capture two different aspects of language: semantics captures how meaning is structured into words regardless of context, while pragmatics captures the context-sensitive meaning. Second, they capture two different time-scales: word meanings are stable building-blocks of language that change relatively slowly, while pragmatic reasoning applies in real-time, everyday language use. Finally, these two aspects of linguistic meaning constitute two subfields of linguistics, and therefore much of the relevant literature, including in computational linguistics and cognitive science, is organized around this distinction.
At the same time, it is widely agreed that semantics and pragmatics are related, as the reviewer noted as well; however, the interface between them has been largely under-explored. As noted above, these two aspects of language are typically studied independently (but see our related work section for a few exceptions). As we state in the abstract, **our computational and empirical investigation aims to bridge this gap in the literature**, by providing a unified computational framework for studying semantics and pragmatics simultaneously. To this end, we extend a framework that integrates the IB complexity-informativeness tradeoff that characterizes lexical semantics with utility maximization that has been used to model pragmatic reasoning. In addition, in our setup, we do not assume that the lexicon is given a priori, in contrast to most models of pragmatics, but rather investigate how it may naturally emerge from context-sensitive interactions. Our results show that in order to understand how a human-like lexicon may emerge from pragmatic interaction, it is crucial to consider all three terms in our objective function: utility, informativeness, and complexity.
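Schematically, such an extended objective can be sketched as a weighted combination of the three pressures; the notation below is ours and illustrative (the paper's exact objective and weight parameterization may differ):

```latex
% Illustrative sketch only: M = speaker meanings, W = signals, U = referent features.
\max_{q(w \mid m)} \;
  \lambda_U\, \mathbb{E}[\mathrm{utility}]
  \;+\; \lambda_I \underbrace{I(W; U)}_{\text{informativeness}}
  \;-\; \lambda_C \underbrace{I(M; W)}_{\text{complexity (IB term)}}
```

Setting $\lambda_I = \lambda_C = 0$ recovers pure utility maximization (pragmatics with no semantic pressure), while setting $\lambda_U = 0$ recovers an IB-style complexity-informativeness tradeoff of the kind used to characterize lexical semantics.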
**Reusability (major)**
To address the reviewer’s concern about the availability of our code, we made our code available by providing a link to an anonymized repository. Following the NeurIPS instructions we received, this link was posted in a separate comment to the AC. We included in the repository a demo notebook, useful for exploring the ManyNames dataset and identifying target and distractor objects in the images, which we hope will facilitate further extensions of our work.
**Generalizability (minor)**
We completely agree with the reviewer that our framework is general and could be used to study additional environments and open questions in the literature. We will happily add a short discussion on this in the paper.
**Comments (minor)**
Thanks for noting the typos, we will certainly fix them. We are also very excited about considering the Mandarin Chinese data in ManyNames in future work! We would have loved to do so in this paper, but given the scope of a single NeurIPS paper, we won’t be able to do justice to a cross-linguistic study in addition to our current set of results.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I believe that the proposed changes will sufficiently address my concerns about the framing and make the findings of the empirical clear both in the context of the paper and in the broader context of computational approaches to linguistics. I will update the review's scores accordingly.
---
Rebuttal 2:
Title: References
Comment: [1] Dowty, D. R., Wall, R. E., & Peters, S. (1981). *Introduction to Montague Semantics*. Springer.
[2] Murphy, G. (2002). *The Big Book of Concepts*. The MIT Press.
[3] Rosch, E. (1975). Family resemblances: Studies in the internal structure of categories. *Cognitive Psychology*, 7(4), 573-605.
[4] Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. *Cognitive Psychology*, 8(3), 382-439.
[5] Regier, T., Kemp, C., & Kay, P. (2015). Word meanings across languages support efficient communication. In B. MacWhinney & W. O'Grady (Eds.), *The Handbook of Language Emergence* (pp. 237-263). Wiley-Blackwell.
[6] Zaslavsky, N., Kemp, C., Regier, T., & Tishby, N. (2018). Efficient compression in color naming and its evolution. *Proceedings of the National Academy of Sciences*, 115(31), 7937-7942.
[7] Zaslavsky, N., Regier, T., Tishby, N., & Kemp, C. (2019). Semantic categories of artifacts and animals reflect efficient coding. *Proceedings of the Annual Meeting of the Cognitive Science Society*, 41.
[8] Grice, Paul (1975). "Logic and conversation". In Cole, P.; Morgan, J. (eds.). Syntax and semantics. Vol. 3: Speech acts. New York: Academic Press. pp. 41–58.
[9] Goodman, N., & Frank, M. (2016). Pragmatic language interpretation as probabilistic inference. *Trends in Cognitive Sciences*, 20.
[10] Monroe, W., Hawkins, R. X. D., Goodman, N. D., & Potts, C. (2017). Colors in context: A pragmatic neural model for grounded language understanding. *Transactions of the Association for Computational Linguistics*, 5, 325–338
[11] Franke, M. (2012). Signal to Act: Game Theory in Pragmatics. PhD thesis, University of Amsterdam. | Summary: This paper investigates emergent communication in artificial agents and how properties of the emergent language compare to properties of human language. Specifically, it aims to present a framework for studying pragmatic language emergence in systems with different learning objectives, in order to determine constraints that lead to human-like linguistic systems. For training, they use a reference-game setup based on naturalistic images. Based on the closest similarity to the human baseline in complexity and lexicon size, and the lowest normalized information distance, they argue that agents need to optimize for all of utility, informativeness, and complexity.
Strengths: - Well thought-through dataset preprocessing and task formulation
- Relevant and interesting topic for the computational linguistics and computation cognitive science community
- Thoughtful setup and analyses that are informative for understanding what the agents learn
Weaknesses: - The paper overall argues for the necessity of a trade-off among different communicative pressures in order to learn a human-like emergent language -- as measured by the system's similarity to human language in complexity, lexicon size, and Normalized Information Distance (NID). Based on these parameters, the authors argue that utility, informativeness, and complexity need to be jointly optimized in order for a human-like language system to emerge. However, the best-performing system (according to the complexity, lexicon size, and NID metrics) only achieves a final task performance of 72%. If I understand the task setup correctly, the baseline chance performance is 50%, and people should be expected to achieve nearly 100% in performance. The utility-only objective achieves an accuracy of 95% but does so with a larger lexicon size, worse NID, and a different-looking complexity profile. So why do the complexity/lexicon size/NID metrics hold so much more value than the actual task performance? **It seems to me that this argument can really just be made when all systems succeed on the task equally; conditioned on that, we can inspect which learning constraints appear to lead to the most human-like system.** And learning constraints that don't lead to task success while others do should already disqualify a system from further inspection. I find this methodologically problematic, and to me this is a fundamental flaw in the argument. (It also severely lacks discussion in the paper.)
- Minor: The paper writing could generally be improved. Firstly, I find Figure 2 quite confusing as to what the labels mean and when something is masked. Secondly, there are various typos in the paper (e.g., lines 90, 153, 158, 170, 179) and confusing enumeration (line 125-129).
Technical Quality: 2
Clarity: 3
Questions for Authors: - Could you elaborate on what naturalistic images afford compared to more controllable environments (e.g., more interesting lexicon size?)
- What is the basis for the claim that utility and informativeness are only partially aligned? (line 245)
- There is an argument brought forth that the agents are pressured to overcompensate because of the lack of syntax (lines 231-233). Could you elaborate on this a bit more? Why can syntax not emerge and wouldn't that then be a fundamental constraint on the comparability of the system to human language (especially when it comes to lexicon size as a metric)?
- In the utility-only condition, the agents learn to solve the task well, but the authors argue that the listener doesn't learn to reconstruct robust and non-contextual semantics. How is it possible that the model still generalizes so well?
- How do the pragmatic and semantic condition map onto the results? I understood the paper to say that the agents were trained in the pragmatic setting where it's the listener's goal to pick out the intended target based on two options. The utility evaluation seems to measure the listener's success to pick out the correct target. What is the semantic setup used for?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: What questions does this leave unanswered? What simplifying assumptions are being made about language learning (no syntax or social learning)? The English language limitation is certainly true but not particularly meaningfully discussed -- why would we potentially expect variation across languages?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses:**
1. Major: The reviewer assumed that “people should be expected to achieve nearly 100% in performance” and based on that, argued that we should disqualify systems that do not achieve near-perfect task utility. We would like to clarify that the reviewer’s claim is inconsistent with prior work showing empirical evidence that **people cannot achieve near-perfect performance in our task** (see additional comment for detail). This also makes sense intuitively: our study focuses on the lexicon, and therefore our task focuses on the use of **lexical items** (i.e., single words/signals). Humans cannot achieve near-perfect performance with lexical items, but rather need to employ syntactic constructions for that. This stems from the ambiguity of the human lexicon, an important property that is believed to facilitate efficient communication [e.g., 1; 2]. Therefore, we find it encouraging that our approach has identified a regime in which human-like behavior arises both at the lexical level — as captured by NID, lexicon size, and informational complexity — as well as at the pragmatic level, as captured by the (expectedly bounded) task performance. We briefly explained this in lines 226-233 in our submission, but thanks to the reviewer’s thoughtful comments we realize that we should elaborate more on this point and explain it more clearly. We will certainly do this for the final version.
Re the reviewer’s question on task performance vs other metrics: due to space limitation, we addressed this in the additional comment.
2. Minor: we will easily fix the typos and confusing enumeration. Furthermore, we will do a thorough English proofreading of the entire paper. The reviewer also noted that the labels in Fig. 2 are confusing, but they didn’t explain what is confusing about these labels (they seem rather intuitive to us). We therefore ask the reviewer for a clarification on this potential issue, to allow us to fully address all the concerns.
**Questions:**
1. Our chosen dataset of naturalistic images has several advantages: (a) richer lexicons, as the reviewer suggested; (b) using images of naturalistic scenes is particularly crucial in our case, because the structure of the emergent communication is sensitive to the statistics of co-occurring objects in the environment; (c) controllable environments have been considered in prior work with the utility-complexity-informativeness tradeoff [3], so we extend this framework to more naturalistic settings.
2. Optimizing utility and optimizing informativeness are only partially aligned because they are different objectives that can lead to different solutions, but at the same time, they don’t necessarily compete and can sometimes support each other. For example, high informativeness can facilitate high utility, as seen in Table 1 ($\lambda_I = 1$). The intuition for this is that if the listener can accurately reconstruct the speaker’s representation of the target object, then it could perform reasonably well in many downstream tasks. At the same time, optimizing utility alone can be achieved with very poor feature reconstruction (i.e., low informativeness), as can be seen in Table 1 ($\lambda_U = 1$) and Figure 4a. The intuition for this is that when there is no pressure for informativeness, the listener can learn arbitrary representation spaces (e.g., random projections) that don’t resemble what the speaker has in mind. While this may yield good task performance for in-distribution inputs, it’s unlikely to generalize well to out-of-distribution (see [4] for preliminary results).
3. As noted above, our work focuses on lexical systems, i.e., the meanings of single words, and not syntax. We therefore allow our speaker agents to send only one word/signal per interaction. Respectively, the English ManyNames data that we use is also based on single-word responses from people. Our measures of complexity — the lexicon size and the informational complexity term — are both measures of lexical complexity, not syntactic complexity. The reviewer is correct that syntax is also an important feature of language; it is beyond the scope of this work and we will add a note on this matter in the limitation section.
4. This is best illustrated in Fig. 4a, which shows that in the utility-only condition the listener’s reconstructions are misaligned with the speaker’s representations of the target visual features, in contrast to other solutions (Fig. 4b and Fig. 4d) where the listener is better able to recover the speaker’s representations. We believe that the listener achieves high task performance in the utility-only condition because it learns random projections instead of useful reconstructions. The random representations could perform well for in-distribution inputs, but they are unlikely to generalize well to out-of-distribution inputs (see [4] for preliminary results).
5. The semantic setup is designed to recapitulate the human naming experiment with our artificial agents. In a naming experiment (such as the ManyNames experiment), participants are shown a single object and are asked to name it (using a single word). In contrast to the pragmatic task, there is no distractor and no downstream task. We use this setup to evaluate the emergent lexical systems with respect to the human naming data. NID, lexicon size, and complexity are all based on this setup.
**Limitations:**
Indeed, as the reviewer noted, this work doesn’t address the evolution of syntax. We focus on the evolution of the lexicon, which provides the building-blocks for syntax, and it is far from being well understood. We agree that this is a simplifying assumption and adding syntax is an important direction for future work. We will add a discussion on this in the limitations section.
As for the question about linguistic variation: it is well documented in the literature that languages vary widely in the ways they structure the environment into words, see for example [5-7].
---
Rebuttal 2:
Title: Further details and references
Comment: **Empirical evidence suggesting that humans do not achieve near-perfect performance:**
First, recall that our work focuses on understanding the emergence of the lexicon, and therefore our task setup only considers the use of lexicalized items (i.e., single signals or words) in communication. This kind of experimental paradigm is very common in the cognitive science of language, and is often used for studying the emergence of the lexicon. Second, while we do not have human data for our specific task, prior work has collected behavioral data for closely related tasks in this domain:
(a) The ManyNames dataset itself [9], which contains non-contextualized free naming data (i.e., with no distractor) from native English speakers, shows that the English lexicon contains unresolved ambiguities. This is not surprising given what we know about the lexicon and the probabilistic nature of many semantic categories [e.g., 1, 2, 5].
(b) Mädebach et al. (2022) [8] conducted a human experiment in a pragmatic referential task similar to ours (with distractors), but in contrast to our setup, they did not restrict participants to single-word responses. Their findings suggest that English speakers often cannot use the lexicon alone to resolve visual ambiguities in the ManyNames images. For example, the lexicon does not allow speakers to unambiguously distinguish between two similar chairs in the same scene. Instead, in such cases participants would need to employ syntactic constructions that go beyond the lexicon itself (e.g., “the chair on the left”).
While studying syntactic constructions is an important direction for future extensions of our work, our current paper focuses on how the semantic structure of the lexicon emerges, and this aspect of human language does not support perfect task performance.
**On the importance of task performance vs other metrics for evaluation**
> “The utility-only objective achieves an accuracy of 95% but does so with a larger lexicon size, worse NID and differently looking complexity. So why do the complexity/lexicon size/NID metrics hold so much more value than the actual task performance?”
High task performance (utility) can be trivially achieved with very non-human-like communication systems. For example, by assigning a unique signal to each object in the dataset, or by using the same signal for entirely unrelated objects that never appear in the same context. Therefore, focusing only on task performance is unlikely to explain the emergence of human-like lexicons. Taken together with our previous point, that the English lexicon does not afford near-perfect task performance, we actually find it very encouraging that the emergent system that is most aligned with humans also achieves bounded task performance. In other words, we value all metrics for evaluation, including task-performance, and when comparing them with human behavior they all consistently support our conclusions.
**References:**
[1] Piantadosi, S.T., Tily, H.J., & Gibson, E. (2011). The communicative function of ambiguity in language. *Cognition*, 122, 280-291.
[2] Zaslavsky, N., Kemp, C., Regier, T., & Tishby, N. (2018). Efficient compression in color naming and its evolution. *Proceedings of the National Academy of Sciences*, 115(31), 7937-7942.
[3] Tucker, M., Levy, R., Shah, J., & Zaslavsky, N. (2022a). Trading off utility, informativeness, and complexity in emergent communication. *Advances in Neural Information Processing Systems*.
[4] Tucker, M., Levy, R., Shah, J., & Zaslavsky, N. (2022b). Generalization and translatability in emergent communication via informational constraints. In *NeurIPS 2022 Workshop on Information-Theoretic Principles in Cognitive Systems*.
[5] Malt, B. C. (1995). Category coherence in cross-cultural perspective. *Cognitive Psychology*, 29(2), 85–148.
[6] Zaslavsky, N., Regier, T., Tishby, N., & Kemp, C. (2019). Semantic categories of artifacts and animals reflect efficient coding. *Proceedings of the Annual Meeting of the Cognitive Science Society*, 41.
[7] He, Y., Liao, X., Liang, J., & Boleda, G. (2023). The Impact of Familiarity on Naming Variation: A Study on Object Naming in Mandarin Chinese. *CoNLL 2023*.
[8] Mädebach, A., Torubarova, E., Gualdoni, E., & Boleda, G. (2022). Effects of task and visual context on referring expressions using natural scenes. In *Proceedings of the Annual Meeting of the Cognitive Science Society*, 44.
[9] Silberer, C., Zarrieß, S., and Boleda, G. (2020). Object naming in language and vision: A survey and a new dataset. In *Proceedings of the 12th Language Resources and Evaluation Conference*, pages 5792–5801, Marseille, France. European Language Resources Association.
---
Rebuttal 3:
Comment: Thank you for the details and clarifications. I have a few follow-up questions based on your response.
Firstly, could you quantify the expected human performance on the task and how you infer that number? I'm assuming it's definitely higher than 50% because it's unlikely that there are more than two objects with the same lexical label (on average). Is it closer to 60%, 70%, 80%, 90%, 95%?
(To get further insights on the performance, could you maybe leverage the data split that Mädebach et al. 2022 introduced into no-competitor, lexicon-sufficient, and syntax-needed? I think this might potentially lead to interesting observations on error patterns and whether the cases where the model fails might be cases where people would employ syntactic specification.)
About the claim that *syntax is excluded from this analysis:* You specifically highlight that this work's contribution aims to only consider the semantics/pragmatics interface and tries to abstract away from syntax. I would like some clarifications on this point.
- Firstly, where do you draw the line between syntax and semantics in this particular setup? Mädebach et al. 2022 specifically discuss that they had to make some fairly ad-hoc decisions about what qualifies as lexical versus syntactic modification. A prime example is compound nouns, where Mädebach et al. for instance chose a specific frequency threshold to determine whether they classify a noun compound as syntactic or lexical modification (making "tennis player" lexical specification and "front court player" syntactic specification). This is simply to illustrate that the syntax/semantics divide isn't as clear-cut in this setting as you make it seem in your response, and I would like some clarification on this point.
- Secondly, human language has developed alongside syntactic complexity, so the English language has fundamentally one major additional dimension of system complexity that actively negotiates semantic complexity. As you pointed out in your response, in Mädebach et al's work, they find that people choose syntactic structures over lexical specification quite frequently. The option to have syntactic variation at a speaker's disposal for a reference game or reference disambiguation task can then be reasonably expected to decrease the lexical variation that will emerge in a learning system. In fact, there is ample evidence in the cognitive science literature that different languages indeed optimize for a distinct syntactic complexity/semantic complexity trade-off (see, e.g., Reali et al., 2018 [1], for an interesting exploration of this trade-off). And even in an already established language, like English, recent work suggests that people trade off lexical and syntactic complexity against each other in production (Rezaii et al., 2022 [2]). Given this, how are you thinking about comparing a system that fundamentally has one more dimension for reducing referential ambiguity to a system that is aimed to be stripped from it?
- This is a more minor point, but you highlight in your responses that you restricted the model learning setup to single words and that that prohibits learning of syntactic structures. Could you clarify how restricting an emerging language system to single words necessarily disables learning syntactic structure? Since the definition of what qualifies as a token is sort of arbitrary, couldn't the system in principle map to sophisticated morphosyntax? (To clarify, I don't believe that that's the case here but I would like clarification on how the single-word constraint relates to emerging-language syntax.)
Other points:
- You also stated that "High task performance (utility) can be trivially achieved with very non-human-like communication systems." -- I agree that utility is not a *sufficient* condition but what I raised in my review is the concern that a system that's not achieving task performance seems like it's missing a *necessary* condition to be an interpretable cognitive model.
- Do you have estimates on how stable the learned systems are and how generalizable the results? I'm aware that the tested model is a ResNet model which has been shown in the past to have promising representational alignment with humans. However, there are still many ad-hoc decisions that are part of training and evaluating such models as cognitive models which might significantly affect the results. For instance, is there evidence that the stopping criterion used for training is a reasonable stopping criterion for finding human alignment? If you were to continue training would the learned system significantly change?
Citations:
[1] Reali, Chater, Christiansen (2018), "Simpler grammar, larger vocabulary: How population size affects language", Proceedings of the Royal Society B
[2] Rezaii, Mahowald, Ryskin, Dickerson, Gibson (2022), "A syntax–lexicon trade-off in language production", PNAS
---
Rebuttal 4:
Title: Responses to follow-up questions - Part 1
Comment: We thank the reviewer for engaging in an interesting discussion about our work. Before turning to our detailed response (below), we would like to highlight that none of the points that the reviewer raised seem to be grounds for rejection. We have already addressed the concerns that were raised in the review, and the reviewer’s main follow-up questions about the syntax-semantics interface are not specific to our work. These questions touch upon some of the deepest open challenges in cognitive science and linguistics. Our work does not attempt to address these challenges. Instead, we follow common practices in the field and make well-established simplifying assumptions in order to advance the understanding of how lexical systems may emerge from contextual pragmatic interactions. Furthermore, the reviewer’s first follow-up question, on estimating the expected human performance, has actually helped us to further strengthen the support for our results and conclusions. We therefore hope that the reviewer will positively reconsider the rating of our paper.
Detailed response to the reviewer's questions:
> Firstly, could you quantify the expected human performance on the task and how you infer that number?
While the Mädebach et al. data does not allow us to quantify human performance in our task precisely (our point in the paper, lines 226-233, and in our rebuttal was qualitative rather than quantitative), we can use that data to roughly estimate upper and lower bounds on human performance. For the upper bound, notice that Figure 2 in Mädebach et al. shows that, across all context conditions (no-competitor, lexicon-sufficient, and syntax-necessary), performance is around 80%, and recall that Mädebach et al. did not restrict the participants’ responses to lexical items. Therefore, it is unlikely that in our task, which is restricted to lexical items, performance would be better than 80%. For the lower bound, notice that in the most favorable condition, i.e., the lexicon-sufficient condition, which could in principle be solved using lexical items alone, the proportion of responses that correspond to lexical items without syntactic construction is only ~42% (‘no specification’ and ‘lexical’ response types). In ~39% of the responses, participants used a syntactic construction even though there was a lexical item that could unambiguously describe the target. One possible explanation for this behavior is that in some cases lexical retrieval is harder than syntactic construction (e.g., for infrequent words). Therefore, while we expect the proportion of lexical responses to be higher than ~42% in our version of the task, we also expect the error rate to be higher. While we don’t have a way to quantify this further, we do agree with the reviewer that task performance would probably be higher than 50%.
To summarize: the Mädebach et al. data suggests that the human performance in our task would be somewhere between 50%-80%. In comparison, our model achieves ~72%, which is well within the reasonable range of human performance. We find this estimation very helpful to further support our results and conclusions, and thank the reviewer for suggesting it!
> where do you draw the line between syntax and semantics in this particular setup?
Syntax corresponds to how single words can be combined into larger linguistic structures, such as phrases or sentences. Lexical semantics corresponds to word meanings. These two subfields of linguistics are largely separable, but indeed, as the reviewer notes, there are gray areas in between. However, these gray areas are irrelevant to our work because our agents simply cannot learn them. Recall that in our setup communication signals cannot be combined, and in each communication act only a single signal can be generated. Therefore, borderline cases like noun compounds, which require combining words, cannot emerge in our agents. Instead of lexicalized compounds, our agents would simply use a single communication signal, thus bypassing the issue that Mädebach et al. had to address in the human data.
---
Rebuttal Comment 4.1:
Title: Responses to follow-up questions - Part 2
Comment: > how are you thinking about comparing a system that fundamentally has one more dimension for reducing referential ambiguity to a system that is aimed to be stripped from it?
First, we would like to emphasize that we are comparing the agents’ lexicon with the human lexicon. That is, the reviewer's claim is factually inaccurate in the sense that our analysis does not include the additional syntactic dimension of English, but rather focuses only on the lexical dimension of English.
Second, while it is certainly true that the human lexicon evolved together with syntax, in contrast to the emergent lexicon in our agents, this simplifying assumption is very common in the emergent communication literature, and more generally, in agent-based and game-theoretic approaches to language evolution. Furthermore, this kind of simplifying assumption – i.e., studying only one key aspect of human cognition, even though it has evolved together with many other cognitive functions – applies to every cognitive model that we are aware of and seems inevitable, at least given the current state of the field.
Finally, we would like to highlight that we have already acknowledged in the rebuttal that this simplifying assumption should be addressed more explicitly in the limitations sections of the paper and we certainly intend to do so if given the opportunity to revise the paper.
> Could you clarify how restricting an emerging language system to single words necessarily disables learning syntactic structure?
In our model, agents learn a codebook of k communication vectors, similar conceptually to the idea of word embeddings. As explained above, at each communication round the speaker can generate only a single communication vector, which rules out the possibility of syntactic structures that emerge from combining communication signals. As for the possibility of a complex morphological structure in the vector embeddings, this is unlikely to emerge because the agents cannot reuse subparts of the communication vectors, but rather only the vectors as a whole. To address this, Tucker et al. (2022) proposed an extension of the model that does support some degree of combinatorial structure within the communication signals, but we have not used that extension in our work.
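To illustrate the single-signal constraint described above, here is a minimal hypothetical sketch (not the actual VQ-VIB implementation; codebook size, dimension, and the nearest-neighbor rule are illustrative assumptions): the speaker holds a codebook of k communication vectors and, per communication act, emits exactly one whole codebook vector, with no mechanism for combining signals or reusing their subparts.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 8, 16                          # hypothetical codebook size and embedding dimension
codebook = rng.normal(size=(k, d))    # learned communication vectors (random stand-ins here)

def speak(obj_embedding):
    # the speaker emits exactly one codebook vector per communication act:
    # the whole vector nearest to the object embedding, never a subpart
    dists = np.linalg.norm(codebook - obj_embedding, axis=1)
    return codebook[np.argmin(dists)]

signal = speak(rng.normal(size=d))
# the emitted signal is always one full row of the codebook
assert any(np.array_equal(signal, c) for c in codebook)
```

Because only whole vectors are ever emitted, no combinatorial (syntactic or morphological) structure over signals can arise in this setup.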
> You also stated that "High task performance (utility) can be trivially achieved with very non-human-like communication systems." -- I agree that utility is not a sufficient condition but what I raised in my review is the concern that a system that's not achieving task performance seems like it's missing a necessary condition to be an interpretable cognitive model.
Thank you for clarifying your point. We hope that our rebuttal and response to your first follow-up question here have convinced you that this is not a necessary condition. Human lexical systems do not seem to afford near-perfect task performance, and therefore it would not make sense to consider only emergent systems that achieve near-perfect task performance.
> Do you have estimates on how stable the learned systems are and how generalizable the results? I'm aware that the tested model is a ResNet model which has been shown in the past to have promising representational alignment with humans. However, there are still many ad-hoc decisions that are part of training and evaluating such models as cognitive models which might significantly affect the results. For instance, is there evidence that the stopping criterion used for training is a reasonable stopping criterion for finding human alignment? If you were to continue training would the learned system significantly change?
We have verified that our results are robust across random seeds (see Table 1). The stopping criterion we used is convergence of the training objective, which is one of the most common, generic stopping criteria in the literature. We have not considered a stopping criterion that specifically targets better alignment with humans, but that presumably could only improve our results. For the other hyperparameters, we have built on the prior work by Tucker et al. (2022) that established the VQ-VIB framework. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their helpful and thoughtful comments. We are excited that the reviews are generally positive and in favor of publication. As we explain in our detailed response to each reviewer and in the summary below, we believe that all the concerns raised by the reviewers can be easily addressed with minor revisions to the paper that will clarify a few points about the framing and significance of our work.
Below is a summary of our responses to the key points raised in the reviews, as well as the revisions we intend to implement to address all the reviewers’ concerns.
**1. zhL9’s major concern**
We believe that the major concern raised by zhL9, which seems to be the grounds for this reviewer’s negative rating, is based on a misinterpretation of our work and does not take into account prior work on how humans would perform in our task. Specifically, the reviewer assumed that humans perform near-perfectly in our task, without providing any references to support that claim, and based on that, argued that we should have excluded systems that do not solve the discrimination task perfectly. However, as we explain in our detailed response to zhL9, the reviewer’s assumption is unsupported by prior work that provides empirical evidence in direct support of our findings. That is, humans cannot solve the task perfectly, similar to our model that is best aligned with the English lexicon. Therefore, the fact that we are able to identify, in a principled way, a regime in which human-like behavior emerges both at the lexical semantics level — as captured by the resemblance between the emergent lexicon and the English lexicon in our domain — and at the pragmatics level — as captured by the bounded task performance — is not a design flaw but rather a major contribution of our work.
We suspect that this confusion may have stemmed from misinterpreting our work as focusing on language as a whole, including syntax, rather than focusing specifically on the semantics-pragmatics interface as highlighted throughout the paper (including in the title and abstract).
We hope that we addressed zhL9’s major concern, both here and in our detailed response to the reviewer. We greatly appreciate zhL9’s comments and feel that clarifying this point would improve the presentation of the paper. Specifically, we intend to address this issue by:
(a) Further elaborating on the empirical evidence suggesting that humans do not solve our task perfectly.
(b) Clarifying that achieving bounded task performance is a desired (human-like) property for lexical systems (this is not the case for syntax, but that is not within the scope of our work).
(c) Clarify that our work does not consider syntax, and discuss the important extension to syntax in the limitations section of the paper.
**2. Clarity of framing**
Reviewers Cm9N and F8cg raised related concerns about the framing of the paper. Specifically, Cm9N noted that while the title and introduction capture the correct framing, other parts of the paper seem to reflect a somewhat different framing. As explained in our response, we believe that the abstract, conclusions, and related work sections also capture the correct framing, but we agree with the reviewer that the framing in the more technical parts of the paper could be improved. The reviewer also offered a constructive suggestion for addressing this issue, by elaborating on what semantics and pragmatics are, why their distinction is important, and how our work is significant in this context. We are grateful for this suggestion and will address it along the lines of our detailed response to Cm9N, as follows:
(a) Extend the first paragraph of the introduction to clarify what we mean by semantics and pragmatics.
(b) Adjust the related work section to clarify the distinction between semantics and pragmatics, in addition to the significance of bridging them.
(c) Revise the technical sections to ensure that they are also contextualized w.r.t. the correct framing of the paper.
Relatedly, F8cg wondered about the significance of training in a pragmatic setting and testing in a semantic setting. We believe that the revisions described above will address this concern as well, as we explain in our detailed response to F8cg.
**3. Differentiation from prior work**
F8cg raised a concern that our work uses the same communication model from Tucker et al. (“Generalization and Translatability in Emergent Communication via Informational Constraints.”). As we explain in our detailed response to F8cg, this is factually inaccurate, and there are three key differences between our model and the Tucker et al. model. Most importantly, our new setting is designed to address a major open question at the interface between semantics and pragmatics, which Tucker et al. have not considered and their communication model would not support. Having said that, we agree with the reviewer that these key differences w.r.t. Tucker et al. were not conveyed clearly enough in the paper. To address this issue, we will clarify this point in the related work section, around lines 113-117, where we discuss the prior work by Tucker et al.
**4. Other questions and concerns**
- Cm9N’s reusability concern: to fully address this concern, we have provided an anonymized link to our code (in a comment to the AC, following the NeurIPS instructions).
- We will fix all the typos and will do a thorough English proofreading of the entire manuscript.
- The reviewers also raised several valuable questions. We intend to incorporate our clarifications in response to these questions in the revised version of the paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Automated Efficient Estimation using Monte Carlo Efficient Influence Functions | Accept (spotlight) | Summary: Efficient influence functions (EIFs) for nonparametric estimands are used to construct debiased estimators. Existing methods are primarily estimand-specific and require intricate analytic derivations; existing automated methods don't scale well. This paper proposes general Monte Carlo estimators of the efficient influence function (MC-EIF) to be used within a probabilistic programming workflow, shows their convergence to the true efficient influence function, as well as convergence of some estimates based on MC-EIF to the true value.
Disclaimer: I had little familiarity with the use of influence functions for constructing estimators before reading this paper.
Strengths: The motivation for the paper is clear, the developed mechanism is sound, and the convergence results are convincing. The extensions of the use of EIFs for estimation that are opened up by the proposed estimator are intriguing.
Weaknesses: * I found the paper harder to understand than necessary due to unclear notation. In the Problem Statement, P and Q are used interchangeably to mean the pdf or the probability measure (e.g., in Definition 2, it's a measure with respect to which the L2 space is defined, and a function in said L2 space).
* The Assumptions (esp 3.1-3.3) are not easily interpretable and are not discussed; as the paper proposes a practical method, it would be made stronger by a discussion of the assumptions.
Technical Quality: 3
Clarity: 2
Questions for Authors: * The maximum eigenvalue term in Theorem 3.8 may grow with N. Is there an assumption ensuring it does not?
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: * The authors do not discuss whether there is value in lifting the assumptions of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to receive such positive feedback that our submission is well motivated, theoretically sound, the evidence is convincing, and that MC-EIF opens up future interesting directions. We also appreciate the actionable feedback.
**(1) Notation**. We thank the reviewer for flagging this issue. To improve clarity, we will carefully clean up notation, including not using P and Q interchangeably to mean the pdf or the probability measure.
**(2) Clarity of Assumptions**. We thank the reviewer for this honest feedback. We will include a more extensive discussion of assumptions in the Appendix, and clarify Assumptions 3.1-3.3 in the main body. In summary, Assumption 3.1 states that the parametric model’s probability density is continuous and differentiable with respect to its parameters, Assumption 3.2 states that the pushforward of the parametric model through the target functional is continuous and differentiable with respect to the model parameters, and Assumption 3.3 states that the Fisher information is invertible. Without these assumptions, the terms in Equation (1) do not have any meaning.
**(3) Maximum eigenvalue growth**. The maximum eigenvalue of $\Sigma^{-1}$ does not grow with $N$ (the number of datapoints); $\Sigma = \mathrm{cov}(\tilde{x})$, where $\tilde{x}$ is the normalized score vector associated with $P_{\phi}$ (defined in Assumption 3.5), which is independent of $N$. This term could, however, increase with the model dimension $p$. We could add an assumption that $\Sigma^{-1}$ is uniformly bounded, but we proved the theorem for the more general case when it is not. There are of course many instances when the maximum eigenvalue of $\Sigma^{-1}$ is indeed uniformly bounded in $p$ (in which case our approximation error bound gets better). For example, if $\Sigma$ is the identity matrix, then the maximum eigenvalue is 1 for all $p$.
**(4) Discussion of Limitations**. While the assumptions we make throughout the paper permit a very broad class of practical models and functionals, as we discussed in Section 3, there are certainly some circumstances where relaxing the assumptions and requirements would be of practical value. For example, one may wish to use MC-EIF on functionals in models that only contain an implicit likelihood, such as Bayesian variants of off-the-shelf physics emulators (See https://www.pnas.org/doi/10.1073/pnas.1912789117). Extending MC-EIF to these models, or similarly implicit functionals, is an exciting area of future work.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer hhJt:
Can you please respond to the rebuttal as soon as possible? Your comments will be greatly appreciated. Many thanks,
AC | Summary: The paper establishes a novel method for estimating efficient influence functions under mild assumptions, called Monte Carlo Efficient Influence Functions (MC-EIF). The method is easy to apply and flexible in many cases, where it can seamlessly equip an existing/popular efficient estimator.
Strengths: It is a really well-written paper, with a complete model formulation, theoretical results, and fair comparisons between the MC-EIF method and other existing methods. Details are provided, such as the different ways to generate the estimator and how sensitive they are to the proposed method, and the limitations on model size/dimension are also discussed. MC-EIF is easy and flexible to use in many scenarios, so it has good prospects for application.
Weaknesses: When applying new methods in practice, especially in high-dimensional cases, the time cost is also a necessary aspect that needs to be considered, so it would be good to show the time cost for your methods and how it compares to other existing methods.
Some settings in the experiments are not listed clearly, such as the values of $D$ and $p$ for the experiments in Figures 1, 2, and 3 (or the value of $F$ instead).
Technical Quality: 3
Clarity: 4
Questions for Authors: It draws my attention that, from Assumption 3.7 and Theorem 3.8, as the model size $p$ becomes larger, the constant in the error bound increases as $\sqrt{p \log p}$. So, when dealing with high-dimensional problems, it may require a really large number of samples ($M$) to get desirable accuracy, and the cost will be tremendous, which could be a potential drawback of this method. What do you think?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: I am not sure what $p$ was tested here, but from Figure 2, it seems like $p_{\max}$ is only $1000$, which is not a very large model. People should test on much higher-dimensional problems to defend their methods. Moreover, it would be worth looking at more challenging problems, other than well-defined Gaussian problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to receive such positive feedback on our submission’s exposition, the quality of our theoretical and empirical evidence, and MC-EIF’s generality and ease-of-use. Thank you also for helping us to improve our work with actionable feedback.
**(1) Time comparison relative to other methods**. Before MC-EIF, the only general-purpose method for approximating efficient influence functions was provided by JWZ22a, who use a finite-differencing approximation. The runtime of this method is exponential with respect to the dimension p, with a large constant. The primary benefit of MC-EIF over these existing techniques is its substantially faster runtime. In fact, in our experiments, we could not even compare against the method in JWZ22a because the runtime was too slow for p larger than 2. That is why we only compare against JWZ22a in experiment 1 and not in the other experiments, where p is in the hundreds or a thousand. In Figure 7 in the Appendix we provide the runtime of our method as a function of p. We will emphasize much more prominently in the revision how MC-EIF is faster than existing approaches.
**(2) Missing labels**. We will add these missing labels in the revised submission and make the choices clearer in the main body. In Figure 1(a) D=1 and p=1 (only unknown mean parameter). In Figure 1(b), D=1 and p=2 (unknown mean and variance parameter). In Figure 2, F=200, D=202 and p=402. In Figure 3, D varies from 1 to 502 and p is shown on the y-axis.
**(3) Approximation quality for large p**. The reviewer is correct that the error bound grows like $\sqrt{p \log p}$. To ensure an accurate error bound, that means the number of Monte Carlo samples $M$ must scale linearly with $p \log p$. In practice, the runtime is not prohibitively slow. For example, in Figure 7 in the Appendix, we compare the runtime of fitting the model relative to computing MC-EIF as a function of model size. We see that when $p$ is less than 600, MC-EIF is faster than the time to fit the model. When $p$ is 1,000 then our implementation of MC-EIF takes 20 seconds to run on a standard M2 laptop.
The computational bottleneck of MC-EIF is computing the Fisher information matrix. Fortunately, given the importance of the Fisher information matrix in statistics, there are many ways to speed up computations here. See, for example, “A Kronecker-factored approximate Fisher matrix for convolution layers”. We will include a discussion of how these approximations can be used with MC-EIF in the paper.
**(4) Larger p problems / real data**. Relative to existing work, we evaluated our method on much larger p problems. For example, in JWZ22a, the authors only evaluated settings with p=1, as their method’s runtime scales exponentially in p. Nevertheless, we can certainly evaluate our method on a larger p problem in the revised submission. Since submission, we have evaluated MC-EIF on real, non-Gaussian data and found results similar to those on the synthetic data in Section 5 of our paper. We will include these results in the revised submission.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you very much for your responses! My main concern revolves around the time scale of the methods, which I believe is a key strength. However, this aspect hasn't been sufficiently highlighted in the paper. I hope the author can give this more attention in the revision and also follow through on addressing the larger problem as promised. Overall, I will maintain my score. | Summary: The paper proposes Monte Carlo Efficient Influence Functions (MC-EIF), an automated technique for numerically computing efficient influence functions using existing differentiable probabilistic programming systems. MC-EIF simplifies efficient statistical estimation for high-dimensional models, achieving optimal convergence rates and consistency without the need for complex manual derivations. The approach is validated both theoretically and empirically.
Strengths: The paper is well-written and can be followed easily. The authors introduce Monte Carlo Efficient Influence Functions (MC-EIF), a technique for numerically computing EIFs using existing AD and PPL system quantities. They express EIFs as a product of the gradient of the functional, the inverse Fisher information matrix, and the gradient of the log-likelihood. This method automates the construction of efficient estimators, avoiding manual derivations, and provides accurate, generalizable estimates applicable to various functionals and models. They also provide a non-asymptotic error bound on the quality of their approximation, showing how estimators using MC-EIF achieve the same asymptotic guarantees as using analytic EIFs. Empirical results show MC-EIF outperforms existing approaches without degrading estimation accuracy.
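To make this product construction concrete, here is a minimal illustrative sketch (not from the paper; the model, functional, and sample sizes are hypothetical assumptions): a one-dimensional Gaussian model $X \sim N(\theta, 1)$ with the mean functional $\psi(\theta) = \theta$, where the analytic EIF is known to be $x - \theta$, and the Fisher information is estimated by Monte Carlo from model samples in the spirit of MC-EIF.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.5                     # fitted model parameter (mean of N(theta, 1))

def score(x, th):
    # gradient of log N(x; th, 1) w.r.t. th (here derived by hand; a PPL would use AD)
    return x - th

# Monte Carlo estimate of the Fisher information I(theta) = E[score^2]
samples = rng.normal(theta, 1.0, size=100_000)
fisher_hat = np.mean(score(samples, theta) ** 2)   # close to the true value 1.0

# functional psi(theta) = theta (the mean), so its gradient w.r.t. theta is 1
grad_psi = 1.0

def mc_eif(x):
    # EIF(x) approximated as grad_psi * I^{-1} * score(x)
    return grad_psi * (1.0 / fisher_hat) * score(x, theta)

x = 2.3
# the numerical EIF should nearly coincide with the analytic EIF, x - theta
print(mc_eif(x), x - theta)
```

In the paper's general setting the same three ingredients (functional gradient, inverse Fisher information, score) are obtained via automatic differentiation rather than by hand, which is what makes the approach automatic.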
Weaknesses: I am not fully familiar with this line of work, so I am unable to identify a major weakness. However, I have some questions regarding the assumptions and the effectiveness of the proposed approach on real datasets, which I have added to the Questions section.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. In Assumption 3.5, authors assume that the normalized score vector is sub-Gaussian with a parameter $C_1$. I want to know if this constant scales with respect to the dimension $D$ and model size $p$. If yes, what is the scaling? If no, can you clarify why it does not scale?
Q2. The authors assume that the map $\phi$ to $P_{\phi}$ is continuous. My question is, in order to approximate the Fisher information $\hat{I_{m}}$, do we need to know what $P_{\phi}$ actually is?
Q3. The authors back up their theoretical results with synthetic data experiments. Given the assumptions, it is not clear how applicable the proposed method is to real datasets, and how the results should be interpreted if these assumptions don't hold.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors clearly mention the assumptions before stating their theoretical guarantees. Also, the assumptions are explained clearly. I don't see any potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to receive such positive feedback on our submission’s exposition, and empirical and theoretical evidence. We are also very grateful to receive actionable feedback we can use to further improve our work.
**(1) Assumption 3.5**. The constant $C_1$ will not grow with D in Assumption 3.5. If there exists a universal constant k such that p <= kD, then $C_1$ will not grow with p either, but this is not a necessary assumption. We will include this discussion in the revision, but for completeness we provide a quick sketch below:
The score vector $\nabla \log P_{\phi}(x)$ is a p-dimensional vector so its expected squared norm grows linearly with $p$. The expected squared norm of the normalized score $\frac{1}{\sqrt{D}} \nabla \log P_{\phi}(x)$ scales as $O((\frac{1}{\sqrt{D}})^2 \times p) = O(p/D)$. Hence, if $p \leq kD$, then the expected squared norm is bounded. Hence, the sub-Gaussian constant in Assumption 3.5 for the normalized score will not grow with p or D.
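As a quick numerical check of this scaling sketch, one can verify $O(p/D)$ for an isotropic Gaussian with $p = D$, where the score is simply $-x$ (a toy setting of our own, not the paper's models):

```python
import numpy as np

# For X ~ N(0, I_p), the score is grad log p(x) = -x, so
# E || (1/sqrt(D)) * score ||^2 = p / D.  With p = D, the expected
# squared norm of the normalized score stays ~1 across dimensions.
rng = np.random.default_rng(1)
ratios = []
for D in (10, 100, 1000):
    p = D
    x = rng.normal(size=(5000, p))
    norm_sq = ((x / np.sqrt(D)) ** 2).sum(axis=1)  # ||score / sqrt(D)||^2
    ratios.append(norm_sq.mean())                  # should be ~ p/D = 1
```

All three averages sit near 1 regardless of dimension, consistent with the boundedness claim when $p \leq kD$.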
**(2) Continuity assumption**. MC-EIF addresses the problem of efficient estimation with parametric models using efficient influence functions. Here, $P_{\phi}$ is that parametric model itself, and is known by construction for any fixed set of parameters $\phi$. For example, if we wish to use MC-EIF to construct an efficient estimator for a linear regression model with Gaussian errors, then $\phi$ would be the fitted regression coefficients and $P_{\phi}$ is the induced Gaussian distribution over outcomes. We will add a note clarifying this in the revision.
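Since $P_{\phi}$ is known by construction, quantities such as the Fisher information can be estimated by sampling from the model itself. A minimal sketch for a one-parameter Gaussian (illustrative only; not MC-EIF's actual estimator):

```python
import numpy as np

# Monte Carlo estimate of the Fisher information for N(mu, 1):
# I(mu) = E[ score(X)^2 ], with score(x) = x - mu and X drawn
# from P_phi itself (which is known for any fixed phi).
rng = np.random.default_rng(2)
mu = 0.7
m = 200_000
x = rng.normal(mu, 1.0, size=m)   # draws from P_phi
score = x - mu                     # d/d mu of log N(x; mu, 1)
fisher_hat = np.mean(score ** 2)   # true value is 1.0
```

The estimate converges to the true Fisher information (here 1.0) as the number of Monte Carlo samples grows.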
**(3) Real data**. Since submission, we received other feedback to add real data experiments so we evaluated MC-EIF on real data from the UCI machine learning repository. We find that the approximation quality is similar to the good performance on synthetic data in Section 5 of our paper! This is not surprising, as previous work on efficient estimation has already demonstrated the benefit of influence function based estimation with real data, and MC-EIF closely approximates the efficient influence function.
---
Rebuttal Comment 1.1:
Title: Re.
Comment: Thanks for the response to my questions. A clarifying note on the Continuity assumption, as the authors mentioned, would be very useful. I still believe that the paper would benefit from some real data experiments or results. Overall, I will keep my score for the paper. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exploring Fixed Point in Image Editing: Theoretical Support and Convergence Optimization | Accept (poster) | Summary: This paper provides important theoretical support and practical optimization in the field of image editing, especially for the fixed point problem in the DDIM inversion process. Through applications to image editing and dehazing tasks, the effectiveness and generalization potential of the fixed point theory are demonstrated. The paper performs well in both theoretical contribution and experimental verification.
Strengths: 1. Based on theoretical findings, the authors optimized the loss function in the original DDIM inversion and improved the visual quality of image editing.
2. The fixed point theory is applied to the image dehazing task, and unsupervised dehazing methods based on text guidance are explored, showing the generalization potential of the fixed point theory.
3. Experiments on the PIE-Bench dataset verify the effectiveness of the optimized fixed-point convergence criterion and demonstrate the improved image editing quality.
Weaknesses: 1. Although the paper discusses fixed-point computational optimization, no detailed computational resource requirements (e.g., GFLOPs, parameter counts) are provided, which may affect the reproducibility of the experiments.
2. The paper does not report the error margins or statistical significance tests for the experimental results, which limits the interpretability of the results.
3. The experiments mainly focus on image editing and dehazing tasks, and it may be necessary to verify the generalization ability of the fixed point theory in a wider range of image restoration tasks.
4. I am very curious whether the method proposed by the author is as robust to some complex nonlinear degradation as to simple linear degradation.
Technical Quality: 3
Clarity: 3
Questions for Authors: I don't particularly understand the specific theoretical derivation in the article, but I think the experimental part still has room for improvement. I will decide my final rating based on the author's rebuttal and the opinions of other reviewers. Overall, I think this article is interesting.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer eqay for devoting time to this review and providing valuable comments.
**Weaknesses:**
> **W1:** No detailed computational resource requirements (i.e. GFLOPs, param.) are provided.
**A:** Our approach relies on the Stable Diffusion v1.4 model, which is available through the Diffusers library. As such, its GFLOPs and parameter counts can be found in the official documentation of Stable Diffusion v1.4. We have also calculated the GFLOPs and parameters of each module in Stable Diffusion v1.4 ourselves; the following data is for reference only:
| | vae| unet| text_encoder|
| :---: | :---: | :---: |:---:|
|parameter| 83.65M| 859.52M | 123.06M |
| GFLOPs| 1773.5 | 222.3 |0.12 |
> **W2:** The paper does not report the error margins or statistical significance tests for the experimental results.
**A:** Reviewer eqay is correct that we should have included these important measures to provide a more rigorous evaluation. Upon your recommendation, we have conducted additional runs and calculated the mean values along with the error margins. The updated results are as follows:
| | Distance | PSNR | LPIPS | MSE | SSIM | Whole | Edited |
| :---: | :---: | :---: |:---:| :---: | :---: | :---: | :---: |
|Image Editing | 69.181$\pm$0.001 | 17.88$\pm$0.01 | 208.87$\pm$0.02 | 219.86$\pm$0.05 | 71.24$\pm$0.18 | 25.33$\pm$0.016 | 22.59$\pm$0.001 |
| Image Reconstruction | 70.247$\pm$0.20 | 17.80$\pm$0.07 | 210.70$\pm$0.03 | 225.47$\pm$0.01 | 70.90$\pm$0.05 | 23.81$\pm$0.004 | 21.39$\pm$0.02 |
> **W3:** It may be necessary to verify the generalization ability of the fixed point theory in a wider range of image restoration tasks.
**A:** We have addressed this issue in lines 226-231 of the paper. The primary reason for the relatively poor performance on tasks like rain and snow removal is that the pre-trained diffusion model we utilized does not accurately capture these specific types of degradation: the pre-trained Stable Diffusion v1.4 is not sufficiently adept at generating accurate attention maps for these weather-related artifacts. To address this limitation, the solution would be to train a diffusion model that is specifically designed to capture the attention maps corresponding to rain and snow. This would also require the use of shallow, pixel-level attention maps that can better identify the characteristics of these weather-related degradations.
> **W4:** I am very curious whether the method proposed by the author is as robust to some complex nonlinear degradation as to simple linear degradation.
**A:** The key to robustness against different types of degradations lies in the accuracy of the attention map matching. Based on our experimental observations, Stable Diffusion v1.4 was able to capture the attention maps for degradations such as rain, snow, and fog. However, the issue lies in the fact that the default Stable Diffusion v1.4 model uses deep-level attention maps, which are not precise enough to accurately capture the attention maps for rain and snow. As a result, our method's performance on tasks like snow and rain removal was not as effective as we would have liked. Regarding other types of degradations, such as blur, shadows, occlusions, and noise, we have not yet explored them using Stable Diffusion v1.4. Therefore, we are uncertain whether the attention maps for these degradations can be captured. To achieve more robust results, we believe the solution would be to retrain a diffusion model that is specifically designed to capture the attention maps corresponding to the various types of degradations.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the rebuttal, which has addressed my concerns to some extent.
According to the authors' description, the proposed method seems to focus more on semantic-level processing, while its ability to capture details appears to be insufficient. However, many tasks seem to have a strong semantic demand, such as low-light image enhancement. Have the authors attempted any experiments in this area? Merely providing some analysis without concrete examples seems like a limitation to me. On the other hand, it seems that dehazing tasks are more complex in terms of physical imaging compared to deraining tasks. Why, then, is rain more challenging to address? Have the authors considered the non-uniformity of real-world hazy images?
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer eqay's recognition of our work. Regarding the reviewer's questions, we respond as follows:
- Its ability to capture details appears to be insufficient. Have the authors attempted any experiments in low-light image enhancement area?
Since the pre-trained Stable Diffusion v1.4's semantic attention maps are at deeper layers, its ability to capture fine details is somewhat limited. We have mainly experimented with removing rain, snow, and haze, and have not tried low-light image enhancement. However, if low-light image enhancement also relies on semantic awareness and the attention map can also be captured, it should be effective.
- Why, then, is rain more challenging to address?
The main reason is that the attention map is based on deeper layers, which results in the attention map having a much smaller size compared to the original image. This makes it difficult to accurately capture raindrops or small snowflakes.
- Have the authors considered the non-uniformity of real-world hazy images?
We believe the reviewer is asking about the scenario where the transmission $t(x) = e^{-kd(x)}$ in the equation $I(x) = J(x)t(x) + A(1-t(x))$ has $k$ as a spatially varying quantity. We have tested this scenario and found it to be effective for removing haze. This haze is generated by equipment in an indoor factory setting, and the purpose is to remove the obstruction caused by these haze conditions. However, since this real-world scenario involves the commercial confidentiality of our collaborators, we did not include it in the paper. | Summary: The paper concerns the fixed points of the DDIM inversion scheme, which is central to many image editing methods. The authors first establish that the DDIM inversion process at any given time step $t$ exhibits a unique fixed point by demonstrating that the corresponding functional has a Lipschitz constant strictly smaller than 1.
Furthermore, the authors reveal that the commonly employed stopping criterion $f(z_t)-z_t$ in the fixed-point inversion process might not guarantee convergence to the optimal solution. To address this, they introduce a new optimization criterion, $f(z_t)-f(\hat z_t)$, where $\hat z_t$ represents a perturbed version of $z_t$, creating a new optimization path. Since the fixed point is unique, both the original and perturbed paths should converge to the same point. Thus, the new criterion ensures convergence to solutions that better approximate the desired fixed point, yet at the expense of a few additional iterations per time step.
The authors demonstrate the proposed method through experiments on image editing and unsupervised dehazing tasks.
Strengths: * The paper is well-written. The authors provide a clear motivation and necessary background, and their derivations are easy to follow.
* The contributions of the paper are solid. The presented fixed-point analysis is simple yet insightful, and the new optimization criterion is both creative and easy to implement.
* The experiments, while not extensive, are sufficient to demonstrate the proposed method.
Weaknesses: * While I believe the theoretical conclusions derived in Section 3 are correct, there are several inaccuracies that must be addressed to ensure the soundness of the derivations. Specifically, $z_t$ and $f(z_t)$ are vectors; hence, the inequality in (7) is not clear. How is it defined? Is it a pointwise inequality? Furthermore, it seems that the authors assume the difference between the scores on the right-hand side of (7) is non-negative, which needs to be justified.
Continuing in this regard, the differences on both sides of (11) are vector differences, making the inequality unclear. I believe Inequality (11) should be $||f(z_t^i)-f(z_t^j)||\leq (1-k\sqrt{\bar{\alpha}}) ||z_t^i-z_t^j||$.
* The motivation and intuition behind equation (12) should be explained prior to its presentation. Currently, the purpose of this equation is unclear upon initial reading.
* The implications of inequality (14) should be clarified. Why is this inequality interesting? How does it affect fixed-point convergence, and what role does $\delta$ play in this process? Are there any conditions on $\delta$ that need to be assumed to ensure proper convergence?
* In contrast to previous sections, I find Section 5 to be lacking necessary mathematical derivations, explanations, and background on NTI. In its current form, the section provides only very high-level details, obscuring the paper's contribution in this context.
* In the image editing experiments, the proposed method appears to offer only marginal improvement, both visually and quantitatively. In the dehazing experiments, the method seems to provide a marginal visual improvement, but it does demonstrate a notably more stable recovery process.
Technical Quality: 2
Clarity: 3
Questions for Authors: * Please address the weakness above.
* Can the theoretical analysis presented lead to more efficient algorithms than the straightforward Picard iteration method?
* Can any theoretical statements be made regarding the change in convergence rate under the proposed optimization loss?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: * While I appreciate the extensive discussion on the application of dehazing, I would appreciate further discussion on the proposed underlying method (or optimization loss), which, in my view, is the central focus of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Cq8S for devoting time to this review and providing valuable comments.
**Weaknesses:**
> **W1:** Is Inequality (7) a pointwise inequality? Inequalities (7) and (11) are missing $||\cdot||$.
**A:** Inequality (7) is a pointwise inequality. Additionally, Inequalities (7) and (11) should have the $|\cdot|$ notation, similar to Inequality (12). We apologize for the oversight that caused any inconvenience. We have also adopted your suggestion to replace the $|\cdot|$ notation in Inequality (12) with $||\cdot||$, and have added the $||\cdot||$ notation to Inequalities (7) and (11) as well. We are grateful for your careful review and helpful feedback.
> **W2:** The motivation and intuition behind equation (12) should be explained prior to its presentation.
**A:** The intuition behind Inequality (12) is that, in the absence of numerical errors, the path $\hat{z}_t$ and the path $z_t$ will converge to the same point, and the difference between the paths $\hat{z}_t$ and $z_t$ will also converge to $0$. In the ideal case, all three quantities (the two paths and their difference) converge at the same rate.
> **W3:** The implications of inequality (14) should be clarified. What role does $\delta$ play in this process?
**A:** Inequality (14) demonstrates that in the non-ideal case, the convergence of the difference between paths will be slower than the convergence of the difference within a single path. In other words, when using the intra-path loss, the iteration may appear to have converged, yet we cannot be certain of its distance from the ideal fixed point; the inter-path loss provides a reference for this. The role of $\delta$ is to reflect the fact that, in the presence of numerical errors, the inter-path loss will converge more slowly than the intra-path loss. As a result, the inter-path loss offers a perspective that is closer to the ideal fixed point, which the experiments shown in Figure 1 also validate. In general, $\delta$ is a very small number: even in the presence of numerical errors, the function still converges to a small interval. If $\delta$ were large enough to disrupt the contraction mapping property, the function would fail to converge to a small interval, contradicting the conclusions proved in Section 3.
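The two-path picture in the ideal (error-free) case can be illustrated on a scalar contraction, a toy stand-in for the DDIM inversion step rather than the actual operator: since the fixed point is unique, both the intra-path residual and the inter-path gap decay geometrically to zero.

```python
def f(z):
    # A contraction with Lipschitz constant 0.5; unique fixed point z* = 2.
    return 0.5 * z + 1.0

z, z_hat = 10.0, 10.0 + 0.3               # original and perturbed paths
intra, inter = [], []
for _ in range(20):
    z_new, z_hat_new = f(z), f(z_hat)
    intra.append(abs(z_new - z))           # intra-path residual |f(z) - z|
    inter.append(abs(z_new - z_hat_new))   # inter-path gap |f(z) - f(z_hat)|
    z, z_hat = z_new, z_hat_new
```

In this noiseless toy, both sequences shrink by the Lipschitz factor each step; it is only numerical error that makes the inter-path gap lag behind, which is the behavior the $\delta$ term models.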
> **W4:** I find Section 5 to be lacking necessary mathematical derivations, explanations, and background on NTI.
**A:** We apologize for the lack of sufficient explanation and background regarding the NTI. NTI involves optimizing the null-text embedding as a learnable parameter. Its fine optimization compensates for the inevitable reconstruction error caused by the classifier-free guidance component, and this is crucial for image restoration. From the perspective of the mathematical formulation, the classifier-free guidance prediction is defined as:
$$
\tilde{\varepsilon}_{\theta}(z_t, t, C, \varnothing) = w \cdot \varepsilon_{\theta}(z_t, t, C) + (1-w) \cdot \varepsilon_{\theta}(z_t, t, \varnothing)
$$
Where $C$ represents the text embedding, $\varnothing$ is the null-text embedding, $z_{t}$ denotes the input at timestep $t$, and $w$ is the guidance scale parameter. The primary optimization target of NTI is the $\varnothing$ term. We hope this provides the necessary background and explanation to clarify the NTI.
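The guidance prediction above is a single affine blend of the conditional and unconditional noise estimates; a minimal sketch with arrays standing in for the U-Net outputs (illustrative only, not the Diffusers implementation):

```python
import numpy as np

def cfg_predict(eps_cond, eps_uncond, w):
    """Classifier-free guidance: w * eps(z,t,C) + (1 - w) * eps(z,t,null)."""
    return w * eps_cond + (1.0 - w) * eps_uncond

eps_c = np.array([1.0, 2.0])   # stand-in for eps_theta(z_t, t, C)
eps_n = np.array([0.5, 0.5])   # stand-in for eps_theta(z_t, t, null)
out = cfg_predict(eps_c, eps_n, w=7.5)
```

Setting $w = 1$ recovers the purely conditional prediction; NTI's role is to optimize the null-text embedding that produces the unconditional term, so that this blend reconstructs the source image faithfully.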
> **W5:** In the image editing, the proposed method appears to offer only marginal improvement.
**A:** The marginal improvement in image editing is primarily due to the fact that we chose to use a fixed number of iterations. This may have resulted in some cases where the solution did not fully converge to the ideal fixed point. Nevertheless, there is still room for performance improvement. As mentioned in lines 155-159 of the paper, one potential strategy is to compute multiple inter-path losses and then select the cluster center of the converged points as the final ideal fixed point.
**Questions:**
> **Q1:** Can the theoretical analysis presented lead to more efficient algorithms than the straightforward Picard iteration method?
**A:** The fixed point computation in this work is based on the Picard iteration method. If more efficient algorithms are desired, we would recommend exploring Aitken's acceleration method or Steffensen's method. Aitken's acceleration can provide faster convergence than the basic Picard iteration by extrapolating the sequence of iterates. Steffensen's method is another technique that achieves higher order convergence compared to the Picard iteration.
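For reference, Aitken's $\Delta^2$ extrapolation (the basis of Steffensen's method) can be sketched as follows; on a linear contraction it reaches the fixed point essentially in one step, whereas plain Picard iteration converges only geometrically (toy example, not the paper's solver):

```python
def picard(f, z0, n):
    # Plain Picard iteration: z_{k+1} = f(z_k).
    z = z0
    for _ in range(n):
        z = f(z)
    return z

def aitken(f, z0, n):
    # Steffensen-style step: form two Picard iterates, then apply
    # Aitken's delta-squared extrapolation to jump toward the fixed point.
    z = z0
    for _ in range(n):
        z1, z2 = f(z), f(f(z))
        denom = z2 - 2.0 * z1 + z
        if abs(denom) < 1e-15:   # already converged
            return z2
        z = z - (z1 - z) ** 2 / denom
    return z

f = lambda z: 0.5 * z + 1.0      # contraction, fixed point z* = 2
```

SciPy exposes the same acceleration as `scipy.optimize.fixed_point` with `method='del2'`, which may be a convenient drop-in for experiments of this kind.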
> **Q2:** Can any theoretical statements be made regarding the change in convergence rate under the proposed optimization loss?
**A:** Regarding the change in convergence rate, we only provided a rough illustration of the slight increase in the Lipschitz constant via inequality (14). Given that the observed change in convergence rate in the experiments is subject to the influence of numerical errors, which may vary across different scenarios, we did not provide further theoretical statements. However, we can offer an intuitive statement. If we use the original fixed-point loss, which takes a single-path perspective, the convergence rate would be consistent with the ideal case, as there are no other references: when the intra-path loss changes minimally, this criterion would consider the fixed point reached. In contrast, the multi-path perspective introduces the inter-path loss. Even when the intra-path loss changes very little, the inter-path loss may still exhibit a decreasing trend. Furthermore, even if the intra-path loss is zero, the inter-path loss may not be exactly zero due to numerical errors. This implies that the inter-path loss provides additional information indicating that we still need to travel a small distance to reach the ideal fixed point. As a result, the inter-path loss may converge slightly more slowly than the intra-path loss in the presence of numerical errors.
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal
Comment: I thank the authors for their rebuttal, which addressed most of my concerns satisfactorily. However, an issue that remains, and was also raised by other reviewers, is the marginal improvement in image editing quality. The authors attributed this to the use of a fixed number of iterations.
My follow-up question is how this specific number of iterations was determined. Was there an ablation study conducted to investigate its impact? Additionally, why not allow the process to run until approximate convergence is achieved, based on some predefined stopping criteria?
While I appreciate that the focus of the paper is primarily theoretical, I believe it is crucial for theoretical contributions to demonstrate some practical merits as well. This could include faster convergence, improved stability, or enhanced image quality.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer Cq8S's recognition of our work. Regarding the reviewer's questions, we respond as follows:
- How was this specific number of iterations determined?
We referred to the strategies used in AIDI and FPI, which observed the loss convergence behavior through some data and then determined a fixed number of iterations. Their observations showed that the intra-path loss can converge in about 3 iterations, so the experiments using intra-path loss used 3 iterations. The inter-path loss was observed to converge in about 6 iterations, so we used 6 iterations. This is a relatively fair configuration.
- Was there an ablation study conducted to investigate its impact?
We experimented with gradually increasing the number of iterations from 3 to 6, as well as decreasing from 6 to 3, in 50 time steps for image editing. The performance was between the 3-iteration and 6-iteration results, with the former performing slightly better than the latter. We had only saved the results for the former in our previous experiments.
| Distance | PSNR | LPIPS | MSE | SSIM | Whole | Edited |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| 69.46 | 17.84 | 208.91 | 220.44 | 71.14 | 25.25 | 22.55 |
- Why not allow the process to run until approximate convergence is achieved, based on some predefined stopping criteria?
We did try setting a threshold to stop the process. However, the threshold that works for different images and time steps can vary, and the manually set thresholds took a long time to reach in some cases. For those that could not reach the threshold, we stopped at 50 iterations. But we found the experimental results on some metrics were not as good as the fixed iteration approach, and it took longer. The main reason was that the manually set thresholds might be higher than the ideal values in some cases. Therefore, we did not mention this strategy in the paper, and instead recommend using the clustering center of multiple path iterations. Considering a fairer comparison with AIDI and FPI strategies, time efficiency, and verifying the defects of intra-path loss, we decided to use the fixed iteration approach.
- It is crucial for theoretical contributions to demonstrate some practical merits as well.
Since the gains of fixed points are limited in image editing, we added extra unsupervised dehazing experiments in the paper to further demonstrate the practical value and potential of fixed points in other tasks. | Summary: The paper proves the existence and uniqueness of fixed points in DDIM inversion using the Banach fixed-point theorem. It identifies flaws in existing fixed-point loss functions and proposes optimizations to improve convergence and visual quality of edited images. It also introduces a novel text-based approach for unsupervised image dehazing using fixed-point based editing.
Strengths: The paper addresses the inconsistencies in image editing results caused by errors introduced during the DDIM inversion process. It provides a solution to enhance the reliability and quality of edited images with the proposed fixed-point optimization. The paper also gives a thorough analysis of DDIM inversion, identifying key issues and proposing optimization strategies for it. This detailed analysis helps to understand the challenges in the context of image editing. The paper also outlines clear directions for future work.
Weaknesses: - the performance gains the paper proposes are minor. It might not be worth the complexity of the approach.
- the paper doesn't seem to have thoroughly tested its method on various challenging situations, especially for image dehazing.
- the paper acknowledges that proposed method may introduce additional computational overhead when optimizing for fixed point convergence. However, specific quantitative measures of this overhead are not provided, so further analysis is needed.
Technical Quality: 4
Clarity: 3
Questions for Authors: how does the proposed fixed-point based dehazing approach compare with state-of-the-art dehazing techniques in terms of performance and computational efficiency?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations of their work, along with potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 5f4S for devoting time to this review and providing valuable comments.
**Weaknesses:**
> **W1:** The performance gains the paper proposes are minor.
**A:** The reason why the performance gains are minor in image editing is that across the entire dataset, we used a fixed number of iterations to reach the fixed-point. This may have resulted in some images not converging fully at certain time steps. It's important to note that the focus of this paper is on providing theoretical support for the fixed-point and optimizing the fixed-point loss. Therefore, using a fixed set of variables for the experiments is more equitable. However, in the case of unsupervised image dehazing, the performance gains are significant, as shown in the last two columns of Figure 5 in the paper. The use of the fixed-point effectively avoids the collapse of the recovered images.
> **W2:** The paper doesn't seem to have thoroughly tested its method on various challenging situations.
**A:** In terms of fixed-point based image restoration, we have only conducted experiments on image dehazing, and the specific reasons for this are explained in lines 226-231 of the paper. The reason is that haze, as a pervasive degradation, is relatively easier to capture with a corresponding attention map at the semantic level, as shown in the last row of Figure 5 in the paper. Capturing the semantic-level attention map is heavily dependent on the pre-trained diffusion model used, and in this work we have relied on the Stable Diffusion v1.4 model from the diffusers library. We also experimented with Stable Diffusion v1.4 for image desnowing and deraining, but found that it was unable to accurately capture the attention maps corresponding to the rain and snow regions. Therefore, we believe that extending fixed-point based image restoration to other degradations would require a diffusion model specifically trained to capture the attention maps for the corresponding degradations. Finally, the primary focus of this work is to provide the theoretical support for the existence and uniqueness of the fixed point, as well as the optimization of the fixed-point loss. We have also aimed to demonstrate the potential of the fixed point in image restoration tasks, as exemplified by the image dehazing results.
> **W3:** Specific quantitative measures of this overhead are not provided, so further analysis is needed.
**A:** Reviewer 5f4S makes a valid point regarding the additional computational overhead. We have therefore included the runtime information for reference. The additional computational overhead is primarily incurred during the DDIM inversion process. Specifically:
(1) The original fixed-point loss takes approximately 0.26s per individual time step.
(2) The improved fixed-point loss takes approximately 0.61s per individual time step.
(3) The total time to edit a single image using the original fixed-point loss is approximately 41s.
(4) The total time to edit a single image using the improved fixed-point loss is approximately 60s.
In both cases, the total number of time steps is 50. We hope this additional information helps to address your concern regarding the computational overhead.
**Questions:**
> **Q1:** How does the proposed fixed-point based dehazing approach compare with state-of-the-art dehazing techniques.
**A:** Since the focus of this paper is on the theoretical support and convergence optimization, the performance is lagging behind the state-of-the-art dehazing techniques. The main reason for this performance gap is the inherent limitation of the pre-trained diffusion models in accurately capturing the attention maps corresponding to the degradations. As shown in the last row of Figure 5, the haze attention map is not entirely precise. To address this issue, we would need to utilize lower-level attention maps and train an additional diffusion model that can accurately capture the degradation-specific attention maps. In this work, our aim has been to demonstrate the potential of the fixed-point based image restoration approach in terms of its semantic-level interpretability, which we believe is a key advantage over other methods. Moreover, another key advantage is that with the ability to precisely capture the corresponding degraded attention maps, our method can be applied to arbitrary datasets without the need for additional training. | Summary: Recent methods treat each step of DDIM inversion as a fixed-point problem to reduce errors, but they lack theoretical support. This paper addresses this gap by making the following contributions: This paper theoretically proves that the Lipschitz constant in DDIM inversion is less than one. By applying the Banach fixed-point theorem, it establishes the existence and uniqueness of fixed points, thus providing the necessary theoretical foundation for image editing methods involving implicit functions. It leverages the uniqueness of fixed points to highlight flaws in existing fixed-point loss methods through theoretical analysis and experimental cases, subsequently proposing optimizations. Furthermore, this paper extends the fixed-point based image editing approach to the task of unsupervised image dehazing, and explores the feasibility of text-guided unsupervised dehazing through fixed-point based editing.
Strengths: 1. This paper proves the existence and uniqueness of fixed points in DDIM inversion using the Banach fixed-point theorem.
2. It identifies flaws in the loss functions AIDI and FPI used during inversions.
3. The authors modify the loss function to $f(z_t) - f(\hat{z}_t)$.
Weaknesses: 1. In Section 3, the statement "Due to the fact that different initial values at time step $t$ can lead to similar $z_0$, i.e., any position at time step $t$ can converge to a small range defined by the rough estimation of $z_0$, we can obtain $|\epsilon_0| = k|\epsilon_t|$, where $0 < k < 1$," might be assertive. This assertion is especially unconvincing when $t$ is large. A rigorous proof for this part would be beneficial.
2. The experiment only compares NTI and P2P, omitting the results of AIDI and FPI. A more extensive and fair comparison should be conducted. Additionally, the image editing results are unsurprising due to the nature of the inversion-based method, and the human editing results appear to be tweaked.
3. The optimization for the image editing task occurs only during inversion. However, this method inherently offers less controllability over the output compared to the input. How does this method address this limitation?
Technical Quality: 2
Clarity: 2
Questions for Authors: My question is included in the weaknesses section. I would be willing to raise the rating if my concern is addressed.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: Please check the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer cPeB for devoting time to this review and providing valuable comments.
**Weaknesses:**
> **W1:** This assertion is especially unconvincing when $t$ is large.
**A:** The premise of this assertion is DDIM deterministic sampling without perturbation. Under this premise, it is theoretically the case that any position at any time step $t$ will converge to the same position, which implies that the mean of $\varepsilon_0$ is 0 and the variance is very small. This property also allows the diffusion model to be generalized to other tasks such as image restoration [1], [2]. Therefore, the first half of the assertion is valid. Regarding the statement "$|\varepsilon_0| = k|\varepsilon_t|$, where $0< k < 1$", we can provide a proof by contradiction. Suppose $k \geq 1$, and when $t$ is large, we let $t = T$. In this case, ${\varepsilon_t} \sim \mathcal{N}(0, 2I)$ and ${\varepsilon_0} \sim \mathcal{N}(0, 2k^{2}I)$, which implies that the variance of the distribution of $\varepsilon_0$ is greater than $2I$, clearly contradicting the premise. Furthermore, ${\varepsilon_0} \sim \mathcal{N}(0, 2k^{2}I)$ means that the pre-trained diffusion model is not a convergent model. Hence, the assertion is valid.
*References:*
[1] Wang J, Yue Z, Zhou S, et al. Exploiting diffusion prior for real-world image super-resolution[J]. International Journal of Computer Vision, 2024: 1-21.
[2] Sun H, Li W, Liu J, et al. Coser: Bridging image and language for cognitive super-resolution[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 25868-25878.
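The contradiction step above can be written compactly (a sketch restating the rebuttal's argument, with $T$ denoting the largest time step):

```latex
\varepsilon_T \sim \mathcal{N}(0,\, 2I), \qquad
\varepsilon_0 = k\,\varepsilon_T
\;\Longrightarrow\;
\varepsilon_0 \sim \mathcal{N}(0,\, 2k^{2} I).
% If k >= 1, then Var(\varepsilon_0) = 2k^2 I >= 2I, contradicting the
% premise that deterministic DDIM sampling concentrates \varepsilon_0
% around 0 with small variance; hence 0 < k < 1.
```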
> **W2:** The experiment only compares NTI and P2P, omitting the results of AIDI and FPI. The image editing results are unsurprising.
**A:** Our response to this question is as follows:
1. The purpose of this paper is to provide theoretical support for AIDI and FPI, and to improve the fixed-point loss they use. The relationship between this work and theirs is complementary, not competitive. To achieve better performance in image editing, AIDI uses Blended Guidance and Stochastic Editing, while FPI uses Prompt-aware Adjustment. However, neither AIDI nor FPI has open-sourced its code, and reproducing these methods is not the focus of this paper.
2. The impression that "the image editing results are unsurprising" is due to the fact that we used the same number of iterations for all images at different time steps, which may have led to some images not converging fully at certain time steps. However, this is still sufficient to demonstrate that the fixed-point loss used by AIDI and FPI is not perfect, and that there is room for improving their performance. This paper also provides suggestions on how to further improve performance, as mentioned in lines 155-159, where it states that if we can find the cluster centers, the performance will be further improved. Finally, the key point of this work is to provide theoretical support for AIDI and FPI and improvements to the fixed-point loss they use, rather than to compete with their methods. We hope this clarifies the relationship and the purpose of this work.
> **W3:** This method inherently offers less controllability over the output compared to the input.
**A:** The main application of the fixed-point loss is in DDIM inversion, and this is also the focus of this work. If one wishes to have more control over the output, the Blended Guidance method used in AIDI could be a suitable approach. The Blended Guidance method adjusts the guidance scale during sampling. It increases the guidance scale for regions corresponding to the positive attention map, and decreases the guidance scale for regions corresponding to the negative attention map. This allows for more modifications to the areas that need to be edited, while making fewer changes to the areas that do not require editing. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning | Accept (poster) | Summary: The paper targets the constrained RL problem and provides a primal-dual method C-PG to solve it. The proposed method is extended to C-PGAE and C-PGPE to handle the constraint cases with risk measures. The paper provides a theoretical analysis of the global last-iterate convergence guarantees toward C-PG and empirically tests the C-PGAE and C-PGPE.
Strengths: The paper is well-written. All the assumptions are standard in the literature or otherwise justified. The theoretical results are rigorous.
Weaknesses: The main weakness of the paper comes from the technical novelty. From the algorithm design perspective, the primal-dual method is widely used in constrained optimization and constrained RL problems, and the regularization term is also not new in constrained optimization [1]. The proposed algorithm basically follows existing methods without a significantly novel design. The algorithm can be naturally adapted to C-PGAE and C-PGPE to handle constraints with risk measures due to the well-established policy gradient for that case. From the theoretical analysis perspective, assumptions 1, 2, 3, and 4 are standard assumptions for constrained optimization problems. With these assumptions, the constrained RL problem is transferred to a pure constrained optimization problem. Although the assumptions are justified in RL, I believe there are many theoretical results for constrained optimization problems under these assumptions, which can be directly used in constrained RL. Therefore, the theoretical novelty may also be marginal.
Although the paper claims that it addresses some theoretical limitations of previous works, these limitations are somehow avoided, rather than being addressed. For example, the paper does not require the softmax policy, but assumption 2 is verified under the softmax policy. The provided convergence rates do not depend on the problem dimension, but the problem dimension may be included in the constants of assumptions 2, 3, and 4.
The experiment scenarios are relatively simple, but it is acceptable if the theoretical result is sufficiently solid.
[1] Khuzani, Masoud Badiei, and Na Li. "Distributed regularized primal-dual method: Convergence analysis and trade-offs."
Technical Quality: 3
Clarity: 3
Questions for Authors: As mentioned in Weaknesses, as the dimension-free property of the proposed algorithm is claimed many times in the paper, have you proven that the problem dimension is independent of the constants of assumptions 2, 3, and 4? What is the reason and the theoretical intuition that it is dimension-free?
How does the convergence rate change after introducing the regularization term for the last-iteration convergence? As it introduces an extra error term of $w$ in Theorem 3.1, it may deteriorate the convergence rate.
I am willing to increase my score if weaknesses and questions are addressed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No limitations of the paper are explicitly stated by the author.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the time spent reviewing our paper and for recognizing that our work is well-written and that our theoretical results are rigorous. Below, we address the raised issues.
**General novelty.** To the best of our knowledge, our paper is the first to introduce a **general framework** for CRL with primal-dual PGs, that is able to address simultaneously: (i) **continuous action and state spaces** (more general than the tabular case considered by some previous works), (ii) multiple constraints, (iii) **generic parameterizations of (hyper)policies** (strictly generalizing the approaches based on softmax), (iv) **inexact gradients** (avoiding the unrealistic assumption of exact gradients of some previous works), (v) **parameter-based exploration** (action-based only was considered in previous works); (vi) delivering **global last-iterate convergence guarantees** (a guarantee stronger than convergence *on average*).
**Algorithmic novelty and assumptions role.** Algorithmically, while the primal-dual PG approach is well-explored [1,2,4,5], **PB exploration has not been addressed until now**. Our main contribution is the **theoretical analysis**, which provides state-of-the-art results and addresses some limitations discussed earlier.
Under our assumptions (which are standard as recognized by the Reviewer [1,4,5,6,7]), we remark that **CRL does not reduce to a "pure constrained optimization" problem**. Indeed, with inexact gradients, the problem remains a learning one due to the need for **gradient estimation**, which affects algorithmic choices like the learning rates tuning.
**Technical novelty.** We stress that our paper provides novel technical contributions, which we comment on in the following.
We employ the **$\psi$-gradient domination** (asm. 3.2) on the Lagrangian function w.r.t. the parameters of the (hyper)policy to be learned, rather than focusing on specific classes of policies or on approximation error assumptions (asm. 2 of [1]). Moreover, we study the convergence of a new and different **potential function** $\mathcal{P}_k$ (Sec. 3.3) and we prove that when it is bounded, then both the objective function gap and the constraint violations are bounded (**thr. 3.1**). Finally, a new technical challenge we had to face was the study of the recursion appearing in Eq. (215). For this, we derived the technical **lem. F.1** and conducted a quite extensive study of the recurrences of the form $r\_{k+1} \le r\_k - a \\max\\{0,r\_k\\}^\phi + b$ in **apx. G**. These results and theoretical tools are, in our opinion, novel and of potential *independent interest*.
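As a minimal numerical sketch of how recurrences of this form behave (the values of a, b, phi, and r0 below are illustrative choices, not taken from the paper):

```python
# Illustrative simulation of the recurrence r_{k+1} = r_k - a * max(0, r_k)**phi + b.
# All constants below are hypothetical example values, not from the paper.
def iterate_recurrence(r0, a, b, phi, steps):
    r = r0
    for _ in range(steps):
        r = r - a * max(0.0, r) ** phi + b
    return r

# With phi = 1 the update is affine for r > 0, r <- (1 - a) * r + b,
# whose positive fixed point is r* = b / a; the iterates contract toward it.
final = iterate_recurrence(1.0, 0.5, 0.01, 1, 200)
print(abs(final - 0.01 / 0.5) < 1e-9)  # prints True: iterates settle near r* = 0.02
```

This toy run only illustrates the contraction-toward-a-bounded-neighborhood behavior that the recursion study in Appendix G formalizes for general $\phi$.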
**Addressing of theoretical limitations of other works.** We address a **more general** setting than that of softmax policies. While our asm. 3.2, as noted by the Reviewer, is satisfied by the softmax policy, **Remark 3.1 shows that it applies beyond softmax policies**, thereby **generalizing and unifying** the setting of previous works. Additionally, we tackle other challenges, including the use of inexact gradients, handling multiple constraints, and operating in a continuous setting with $|\mathcal{S}| = |\mathcal{A}| = \infty$.
**Dimension-free property.** Thank you for highlighting this potential source of confusion. To clarify, dimension-free theoretical guarantees [1,4,5,8] mean our results apply to **continuous state and action spaces without dependence on the cardinality** of these spaces (i.e., $|\mathcal{S}|$ and $|\mathcal{A}|$). As shown by [3], in this setting, new problem dimensions must be considered: **$d_{\Theta}$ (parameterization dimension), $d_{\mathcal{A}}$ (action vector dimension), and $T$ or $(1-\gamma)^{-1}$ (trajectory length)**. While the constants in our assumptions may depend on these dimensions, they do not depend on $|\mathcal{S}|$ or $|\mathcal{A}|$. We will clarify this in the final version of the paper.
**Regularization and convergence rate.** The Reviewer is right. Introducing the regularization term with magnitude $\omega$ affects convergence properties; indeed, Table 1 details complexities w.r.t. $\varepsilon$ and $\omega$. However, the ridge regularization of the Lagrangian with respect to $\lambda$ makes it **strongly convex (quadratic) in $\lambda$**, allowing last-iterate convergence **without additional assumptions**. In contrast, existing works on CRL with **non-regularized Lagrangians provide convergence guarantees on average (not last-iterate)** [4,5,8].
**References**
[1] Ding et al. (2024). Last-iterate convergent policy gradient primal-dual methods for constrained mdps.
[2] Khuzani et al. (2016). Distributed regularized primal-dual method: Convergence analysis and trade-offs.
[3] Montenegro et al. (2024). Learning Optimal Deterministic Policies with Stochastic Policy Gradients.
[4] Ding et al. (2020). Natural policy gradient primal-dual method for constrained markov decision processes.
[5] Ding et al. (2022). Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs.
[6] Bai et al. (2023). Achieving zero constraint violation for constrained reinforcement learning via conservative natural policy gradient primal-dual algorithm.
[7] Yang et al. (2020). Global convergence and variance reduction for a class of nonconvex-nonconcave minimax problems.
[8] Liu et al. (2021). Policy optimization for constrained mdps with provable fast global convergence.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses. The main contribution of the paper is the derivation of theoretical results that simultaneously offer several advantages (i)-(vi). However, some of these advantages ((i), (ii), (iv), (vi)) have been addressed by previous works. A concern is that assumptions like gradient domination, regularity of the objective function, and the existence of saddle points are quite similar to those in standard (non-convex) optimization, which has been extensively studied in the field of optimization.
Overall, I think the theoretical results are solid, but the experiments and comparisons with baselines are relatively simple. As I am not fully familiar with the theory bar of this conference, I will finalize my score after discussing it with the AC. If the paper meets the requirements, I am inclined to vote for acceptance. | Summary: The paper studies policy-based methods in constrained RL. The author first establishes the last-iterate convergence of the algorithm C-PG under a form a gradient domination assumptions. Then, the author further designs action-based and parameter-based versions of C-PG to handle constraints defined in terms of risk measures over the costs. Finally, the proposed algorithms are validated by numerical examples of control problems.
Strengths: * Despite not being the first paper to establish the last-iterate convergence of primal-dual type algorithms, I believe the convergence results for C-PG is also a meaningful contribution to the literature.
* More general constrained RL formulations are studied in the paper, where constraints are defined in terms of risk measures. The author adapts C-PG into two sample-based variants for these formulations.
* Overall, the paper is well-written and easy to follow. The author uses highlights for the two different algorithms/settings, making them easy to distinguish.
Weaknesses: * The role of Section 4 in the paper is unclear to me. Good theoretical results are established for Section 3, yet, there is no general theoretical results for Section 4, besides some cases where results from Section 3 can be directly applied. This makes Section 4 looks like an ''add-on'' to Section 3.
* For theoretical results, e.g., Theorems 3.1 and 3.2, it would be better for the author to briefly discuss the proof idea in the main paper.
Minor comments: it would be better to make figure legends exactly align with the names of algorithms, e.g., change CPGAE to C-PGAE.
Technical Quality: 3
Clarity: 3
Questions for Authors: * I hope the author could further clarify the contribution/role of Section 4 in the paper.
* Could the author also discuss how the techniques used in the paper differ from those in Ding et al., 2024?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper has stated assumptions clearly in the paper. Yet, although the author mentions "The limitations of the work emerges in the final section of the work", I failed to find that section in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for having appreciated the contribution our work brings and its presentation. In the following, our answers to the Reviewer's concerns.
**Section 4.** We thank the Reviewer for having raised this point. First of all, **Section 3** introduces the **exploration-agnostic** algorithm C-PG which exhibits global last-iterate convergence **guarantees** (Theorems 3.1 and 3.2). C-PG is thought to be employed for constrained RL problems in which the constraints are enforced in terms of **expected values over cost** functions.
In **Section 4**, we particularize C-PG in two versions, C-PGAE and C-PGPE, depending on the employed **exploration approach**. Moreover, in Section 4 we **extend** the **framework** presented in Section 3 to handle constraints formulated in terms of **risk measures** over cost functions, employing a **unified formulation** for risk measures. In Table 3 (that we will move to the main paper leveraging the additional page), we show the mapping of the functions $f$ and $g$ (which form the unified risk measure formulation) to several risk measures, in which the **expectation over costs** is also **included**. While we agree that no further theoretical guarantees are presented in Section 4, Remark 4.1 comments on whether the theoretical guarantees of Section 3 apply also to risk measures different from expected values over costs.
Besides making the framework more general, the introduction of constraints over risk measures allows one to (empirically) appreciate the **semantic difference** of enforcing constraints under different **exploration** approaches (parameter-based vs action-based). In this sense, Section 4 has the goal of highlighting the generality of the approach beyond constraints formulated as expected costs.
**Thr. 3.1 and 3.2.** Thank you for raising this point. We have deferred the proofs to Appendix E due to space constraints. We will leverage the additional page to add **sketches of the proofs**.
**Plot labels.** We agree with the Reviewer and we will fix these typos in the final version of the paper.
**Comparison with Ding et al. 2024.** We are happy to clarify this point.
* From a **technical** point of view, to achieve global last-iterate convergence guarantees, we employ the **$\psi$-gradient domination** (Assumption 3.2) on the Lagrangian function w.r.t. the parameters of the (hyper)policy to be learned, rather than focusing on specific classes of policies or on approximation error assumptions (Assumption 2 of [1]). Moreover, we study the convergence of a different **potential function** $\mathcal{P}_k$ (Section 3.3) and we prove that when it is bounded, then both the objective function gap and the constraint violations are bounded (**Theorem 3.1**). Finally, a new technical challenge we had to face was the study of the recursion appearing in Equation (215). For this, we derived the technical **Lemma F.1** and conducted a quite extensive study of the recurrences of the form $r\_{k+1} \le r\_k - a \\max\\{0,r\_k\\}^\phi + b$ in **Appendix G**.
* From an **algorithmic** perspective, we propose a *meta algorithm* C-PG that (i) requires a **regularization only on $\lambda$** ([1] also performs entropy regularization w.r.t. the policy), (ii) is able to handle **multiple constraints** ([1] handles the single-constraint case only), (iii) requires learning rates on **two different time scales** ([1] is single time scale), and (iv) leverages an **alternate ascent-descent** scheme. Moreover, [1] also proposes an optimistic method which works only with exact gradients. We particularize C-PG with **parameter-based** and **action-based** exploration approaches ([1] uses just the action-based one), both working with **generic parameterizations** and with **inexact gradients** in **continuous action** and **state** spaces.
**Limitations.** The Reviewer is right, the reported sentence is misleading. We meant that the limitations of our work are to be considered as the points that future works should address (Section 6). For instance, the development of C-PG variants ensuring the same guarantees shown in this paper, but with a single time scale learning rates. We will make more explicit the limitations of our work in the final version of the paper.
**References**
[1] Ding, D., Wei, C. Y., Zhang, K., & Ribeiro, A. (2024). Last-iterate convergent policy gradient primal-dual methods for constrained mdps. Advances in Neural Information Processing Systems, 36.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the explanations. I will maintain my score. | Summary: This paper proposes a general framework for addressing safe RL problems via gradient-based primal-dual algorithms. The authors show that the proposed algorithm exhibit global last-iterate convergence guarantees under gradient domination assumptions. Additionally, the authors validate their algorithms on several constrained control problems.
Strengths: 1. This paper is well-written and easy to follow.
2. The paper is technically sound with most claims supported sufficiently.
3. The theoretical analysis seems novel.
Weaknesses: Quality:
1. In Theorem 3.1, it requires $w=O(\epsilon)$ to enforce an overall $\epsilon$ error, but $w$ should be a fixed number in the C-PG algorithm.
2. It is better to have some experimental results about cost constraints on MuJoCo and include more baselines, e.g., [Zhang et al., 2020, First Order Constrained Optimization in Policy Space].
Clarity:
1. The range of the reward values is $[-1, 0]$, which is a bit weird compared to normal settings.
2. The advantage of parameter-based hyperpolicy is not clearly mentioned in the paper.
Significance:
One relevant piece of literature is missing. In [Liu et al. 2021, Policy Optimization for Constrained MDPs with Provable Fast Global Convergence], a fast convergence result of $\tilde{O}(1/\epsilon)$ is proved without the multiple assumptions in this paper.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the details in "weakness".
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There is no potential negative social impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for reviewing our work and for appreciating the clarity, the soundness, and the theoretical novelty.
**The role of $\omega$.** The desired accuracy $\varepsilon$ is **decided before** employing the algorithms. As prescribed by theory, $\varepsilon$ appears in the expression of the **learning rates** for the variables to learn and of the **regularization** amount $\omega$. Tuning hyperparameters based on $\varepsilon$ is standard in convergence studies of PGs ([1, thr. C.2] for learning rate and horizon; [2, thr. 6.1] for learning rate) and primal-dual PGs ([3, cor. 3] for learning rate and regularization; [6, thr. 3] for algorithm parameters).
**MuJoCo experiments and baselines.** We present experiments in MuJoCo environments with costs in Sec. 5 and Apx. H, focusing on comparing our algorithms under risk measures over costs. All the presented experiments aim to validate our theoretical results. Indeed, we chose as baselines state-of-the-art PGs for CMDPs [4, 5] coming with convergence guarantees comparable to our results. Specifically, [4] considers tabular CMDPs, while [5] addresses continuous CMDPs but loses last-iterate guarantees. By contrast, the paper by Zhang et al., 2020 suggested by the Reviewer solves an approximated constrained optimization problem, and its performance is characterized by worst-case policy improvement only, with no convergence guarantee. It has a more practical perspective; thus, we believe that a comparison with it is beyond the scope of our work.
**Reward range.** This choice is based on formulating the constrained optimization problem for costs, which are positive. Thus, we use a cost $c_0$ in place of the reward $r$ for the optimization part of the problem. The cost function to minimize is defined at line 116 as $c_0(s,a) = - r(s,a)$. We consider $r$ ranging in $[-1,0]$, so that $c_0$ ranges in $[0,1]$. We stress that this choice for the range induces no loss of generality w.r.t. a range $[R_{\min}, R_{\max}]$.
**Hyperpolicy advantage.** We are happy to clarify this point.
1. **Trade-off between exploration approaches**. There is a trade-off between PB and AB explorations [2,5]. The PB approach struggles with **large parameterizations** (large $d_{\Theta}$), but its gradient estimator has **lower variance** compared to AB. The AB approach struggles with **long-lasting interactions** (large $T$ or $\gamma \to 1$) and **large action vectors** (large $d_{\mathcal{A}}$), but comes with a **higher variance** gradient estimator than PB. Thus, PB exploration can be preferable in some cases.
2. **PB exploration and risk constraints**: In Sec. 5, we show that PB exploration, when considering risk constraints over costs, can learn policies **inducing safer behaviors compared to AB exploration**. This arises from the semantic differences in how the two approaches enforce risk constraints (lines 288-293).
Finally, we note that PB exploration has not been addressed in the CRL literature (to our knowledge). We thank the Reviewer for pointing out this lack of clarity and will make it more explicit in the final version of the paper.
**Comparison with Liu et al. 2021.** The paper proposed by the Reviewer mainly considers the setting of: (i) *tabular* CMDPs; (ii) *softmax* policies; (iii) *exact gradients* provided by an oracle; and (iv) provides an *average* (not last-iterate) convergence rate of order $\tilde{\mathcal{O}}(\epsilon^{-1})$. As shown in Table 1, in the same setting, our convergence rate is $\mathcal{O}(\varepsilon^{-2})$, i.e., when: (1) the GD condition holds with $\psi = 1$ (which happens under (i) *tabular* CMDPs and (ii) *softmax* policies; see Remark 3.1); (2) exact gradients are available; (3) the regularization $\omega$ is $\mathcal{O}(\varepsilon)$. The faster convergence of Liu et al. 2021 is justified by the fact that their approach **relies on the softmax policy model**, indeed, **applies to tabular problems only**, and **does not reach last-iterate convergence**.
Moreover, Liu et al. 2021 propose a sample-based variant which requires $\tilde{\mathcal{O}}(\varepsilon^{-3})$ samples (still for the tabular case and with an average convergence guarantee). As shown in Table 1, in such a case (i.e., $\psi=1$ and inexact gradients) our last-iterate rate is $\tilde{\mathcal{O}}(\omega^{-4}\varepsilon^{-3})$ (being $\tilde{\mathcal{O}}(\varepsilon^{-7})$ with $\omega=\mathcal{O}(\varepsilon)$). However, the setting considered by Liu et al. 2021 assumes access to a **generative model generating independent trajectories from any arbitrary pair $(s,a)$**, a demanding requirement that we overcome.
Finally, we stress that in the setting considered by Liu et al. 2021, most of our assumptions hold. Indeed, asm. 3.1 holds in the form of Slater's condition; asm. 3.2 holds with $\psi=1$ for tabular CMDPs with softmax policies (see Remark 3.1); asm. 3.3 holds for the tabular softmax, which is known to lead to bounded gradients and smooth objectives [7]; asm. 3.4 holds because the mentioned paper considers exact gradients, or it uses good events where the differences between the estimates and the real values of $V$ and $Q$ are bounded (their Definition 3). We thank the Reviewer for suggesting this paper; we will add it in the final version of the paper, especially in the comparison table (Table 2).
**References**
[1] Yuan et al. (2022). A general sample complexity analysis of vanilla policy gradient.
[2] Montenegro et al. (2024). Learning Optimal Deterministic Policies with Stochastic Policy Gradients.
[3] Ding et al (2024). Last-iterate convergent policy gradient primal-dual methods for constrained mdps.
[4] Ding et al. (2022). Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs.
[5] Metelli et al. (2018). Policy optimization via importance sampling.
[6] Liu et al. (2021). Policy optimization for constrained mdps with provable fast global convergence.
[7] Papini et al. (2022). Smoothing policies and safe policy gradients.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification! I have increased the score. | Summary: This paper studies the problem of constrained MDP. To solve this problem, this paper adopts the policy gradient methods, and specifically they considered the action-based policy gradient method and parameter-based policy gradient method.
The algorithm proposed in this paper is a type of primal-dual method. Under certain assumptions, this algorithm is shown to have last-iterate convergence.
This paper also conducts numerical experiments on various environments with their algorithm, and the experiments validate the results in this paper.
Strengths: This is a well-written paper. The descriptions of the problem setup, theorems, and assumptions are clear.
Even though this is mainly a theory paper, there are numerical experiments which validate the theoretical results.
This paper has results on last-iterate convergence, which is a property not possessed by most stochastic optimization algorithms.
Weaknesses: I don't see significant weaknesses in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Do you have lower bounds showing that these rates are tight?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. The authors addressed all the limitations listed in the guidelines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for appreciating our work and recognizing its novelty. Below, we respond to the Reviewer's questions.
> Do you have lower bounds showing that these rates are tight?
The derivation of a lower bound for Constrained MDPs with **continuous state and/or action spaces** is still an **open research problem**. Nevertheless, [1] presents a sample complexity lower bound for the *tabular case* under a generative model, which is of order ${\Omega}(|\mathcal{S}| |\mathcal{A}| (1-\gamma)^{-5} \zeta^{-2} \varepsilon^{-2})$, where $\zeta$ is the Slater's constant. However, such a lower bound does not apply to our case, since it has been derived for the tabular setting.
**Reference**
[1] Vaswani, S., Yang, L., & Szepesvári, C. (2022). Near-optimal sample complexity bounds for constrained MDPs. Advances in Neural Information Processing Systems, 35, 3110-3122.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response. I do not have further questions. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised Anomaly Detection | Reject | Summary: The paper "Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised Anomaly Detection" introduces a minimalist reconstruction-based framework for unsupervised anomaly detection (UAD) in multi-class settings. The framework focuses on four main components: Foundation Transformers, Noisy Bottleneck, Linear Attention, and Loose Reconstruction. Extensive experiments on MVTec-AD, VisA, and Real-IAD datasets show that Dinomaly achieves superior performance compared to state-of-the-art multi-class and even some class-separated UAD methods.
Strengths: 1. Using simple components like Foundation Transformers, Noisy Bottleneck, Linear Attention, and Loose Reconstruction to achieve superior performance is highly original. This is a significant departure from traditional methods that rely on complex designs and multiple modules, and it challenges the conventional view that more complex architectures are necessary for better performance in anomaly detection tasks.
2. The methodology is well-detailed, and the experimental design is robust. The authors conduct extensive experiments on three well-known datasets (MVTec-AD, VisA, and Real-IAD), providing comprehensive performance metrics and comparisons with SOTA methods. The result is convincing, showing that Dinomaly not only outperforms existing MUAD methods but also surpasses some of the best class-separated UAD methods.
3. The paper is generally clear, well-organized, and relatively reproducible.
4. The significance of this work is substantial; it makes a valuable contribution to anomaly detection, as it addresses a major challenge in UAD—achieving high performance in multi-class settings without resorting to complex, specialized architectures—and is potentially scalable.
Weaknesses: 1. The paper provides a detailed explanation of the proposed framework but lacks important justification and discussion, making it difficult for readers to appreciate the novelty and improvements brought by Dinomaly. The authors may need to compare Dinomaly to specific previous methods, highlighting the differences and improvements, and discuss how the minimalist approach contrasts with more complex architectures and why this improvement is significant.
2. The motivations for choosing Noisy Bottleneck and Loose Reconstruction are not deeply explored. For instance, explain in more detail why Noisy Bottleneck helps prevent identity mapping.
3. The paper claims simplicity, but there is no discussion of parameter count, computational complexity, or time complexity in the experiments.
4. In Loose Constraint, the author claims that 1-group LC mixes low-level and high-level features which is harmful for anomaly localization. How to group the features into the low-semantic-level group and high-semantic-level group in 2-group LC?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can you provide more detailed justifications for the choice of Noisy Bottleneck and Loose Reconstruction? Why do these help in preventing identity mapping and improving anomaly detection performance?
Suggestion: A detailed theoretical explanation or additional references would clarify the rationale of the design of Dinomaly and strengthen the argument for its effectiveness.
2. Provide a more explicit comparison to recent methods in terms of performance and conceptual differences. Highlight any specific limitations of prior work that Dinomaly addresses.
3. Increase discussion on simplicity and scalability.
4. The credibility of the paper may benefit from deeper experimental analysis. Given that Dinomaly surpasses the compared methods by a large margin on all datasets and all metrics, is its performance limited under certain conditions, such as when the input is video?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, thank you for your valuable reviews and comments.
__W1,2&Q1,2: Justification and discussion of proposed components, and comparison of the difference with prior works.__
In Noisy Bottleneck, we show that simple Dropout can work as a noise injection module that turns transformer "reconstruction" into "restoration". The basic motivation and insight is that when a decoder is trained to reconstruct its input (output == input), it can generalize so well that it also reconstructs unseen samples (so-called identity mapping), because there is no explicit training regularization that forbids the decoder from generating unseen input. However, if the decoder is trained to restore (a.k.a. denoise) the input given noisy input (output != input), there is theoretically no over-generalization problem, because there is an explicit training goal that forces the decoder to generate normal features given abnormal input.
In the original Dropout thesis [a] (page 2): "Dropout can also be interpreted as a way of regularizing a neural network by adding noise to its hidden units. This idea has previously been used in the context of Denoising Autoencoders [b] [c] where noise is added to the inputs of an autoencoder and the target is kept noise-free." This "ancient" Denoising Auto-Encoder is very similar to the denoising paradigm of Dinomaly. Therefore, there is theoretical evidence to adopt Dropout as noise injection.
We have primarily discussed the limitations of previous noise injection methods in L115-L117: their anomaly generation strategies are heuristic and hand-crafted, and thus not universal across domains, datasets, and methods. This leads to the advantage of our Dropout-based Noisy Bottleneck: simplicity. Furthermore, we compare the Dropout-based Noisy Bottleneck with the Feature Jittering proposed in UniAD in Appendix Table A6, which demonstrates our effectiveness and robustness.
For Loose Reconstruction, we connect feature reconstruction to feature-level knowledge distillation in L158-L160. To be more specific, prior works on feature-level knowledge distillation [d][e] suggest that dense layer-wise distillation helps the student model better mimic the knowledge of the teacher and results in better generalization, which is harmful in the UAD context.
We will include and shorten the above discussion and information in the final version.
[a] Srivastava, Nitish. Improving Neural Networks with Dropout. University of Toronto.
[b] Extracting and composing robust features with denoising autoencoders. ICML '08.
[c] Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 2010.
[d] Patient knowledge distillation for BERT model compression. arXiv preprint arXiv:1908.09355, 2019.
[e] Task-aware layer-wise distillation for language model compression. International Conference on Machine Learning. PMLR, 2023.
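As a hypothetical NumPy sketch (not the authors' implementation; the decoder is omitted and the tensor sizes are illustrative), the Dropout-based noise injection described above amounts to:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_bottleneck(features, drop_rate=0.2, training=True):
    """Dropout as noise injection: zero random feature units so the
    decoder must *restore* clean features instead of copying input."""
    if not training or drop_rate == 0.0:
        return features
    # Inverted Dropout: drop units, rescale survivors by 1/(1 - p).
    mask = rng.random(features.shape) >= drop_rate
    return features * mask / (1.0 - drop_rate)

# Denoising-Auto-Encoder-style target: the decoder sees noisy features,
# but the supervision target stays the clean encoder features.
clean = rng.standard_normal((4, 8))   # (tokens, channels), illustrative sizes
noisy = noisy_bottleneck(clean, drop_rate=0.2)
# loss = || decoder(noisy) - clean ||   (decoder itself omitted here)
```

At inference the bottleneck would be disabled (`training=False`), so anomaly scores come purely from the decoder's failure to restore abnormal features.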
__W3&Q3: Parameter number, computational complexity, or time complexity. Increase discussion on simplicity and scalability.__
The computation cost of Dinomaly was presented in Table A2 in the Appendix, including parameters (148M), MACs (104.7G), and latency per image (17.8 ms) for the default ViT-B version. We will additionally include the total training time (about 1.5 hours on one NVIDIA 3090) in the final version.
The scalability is presented in Table A3 in the Appendix, where Dinomaly is scaled from ViT-Small to ViT-Large. Results show that our method follows the "scaling law".
The above information will be moved to the main paper given more pages in the final version.
__W4: How to group the features into the low-semantic-level group and high-semantic-level group in 2-group LC?__
As shown in Figure 2, the lower four layers are grouped as low-semantic-level; the deeper four layers are grouped as high-semantic-level. This scheme is simple, following the common sense of neural networks that shallow layers extract basic visual features such as lines, colors, borders, and corners, while deep layers extract abstract semantic information.
__Q4: Performance under other conditions, such as video. When is Dinomaly limited?__
The proposed method is not directly applicable to the video modality, because video anomaly detection methods usually adopt temporal-spatial networks. As a complement, we include datasets from three more image domains: MPDD (metal parts, position not aligned), BTAD (noisy training set), and Uni-Medical (medical images, including brain MRI, liver CT, and retinal OCT). A preliminary comparison is shown below. As presented in the tables, the superiority of Dinomaly is relatively limited under noisy training sets (anomaly images in the normal training set) and in medical domains.
The full results will be included in Appendix as further information on a wide range of domains.
Short comparison on MPDD.
| Method | I-AUROC | I-AP | P-AUROC | P-AP |
|---|---|---|---|---|
| RD4AD | 90.3 | 92.8 | 98.3 | 39.6 |
| UniAD | 80.1 | 83.2 | 95.4 | 19.0 |
| ViTAD | 87.4 | 90.8 | 97.8 | 34.6 |
| Dinomaly | 97.2 | 98.4 | 99.1 | 59.5 |
Short comparison on BTAD.
| Method | I-AUROC | I-AP | P-AUROC | P-AP |
|---|---|---|---|---|
| RD4AD | 94.1 | 96.8 | 98.0 | 57.1 |
| UniAD | 94.5 | 98.4 | 97.4 | 52.4 |
| ViTAD | 94.0 | 97.0 | 97.6 | 58.3 |
| Dinomaly | 95.4 | 98.4 | 97.8 | 70.1 |
Short comparison on Uni-Medical.
| Method | I-AUROC | I-AP | P-AUROC | P-AP |
|---|---|---|---|---|
| RD4AD | 76.4 | 75.8 | 96.4 | 38.9 |
| UniAD | 80.4 | 76.6 | 96.5 | 39.0 |
| ViTAD | 81.5 | 80.6 | 97.0 | 46.8 |
| Dinomaly | 83.4 | 82.7 | 96.7 | 50.9 |
[a] Deep learning-based defect detection of metal parts: evaluating current methods in complex conditions. In ICUMT, 2021.
[b] VT-ADL: A vision transformer network for image anomaly detection and localization. In ISIE, 2021.
[c] ADer: A Comprehensive Benchmark for Multi-class Visual Anomaly Detection. arXiv preprint arXiv:2406.03262, 2024.
---
Rebuttal Comment 1.1:
Comment: I agree with the authors' answer to “Why choose Noisy Bottleneck and differentiation”, but the response still does not explain why dense layer-wise distillation is harmful to UAD.
Thanks for the valuable comments by reviewer #LXBB. I agree with the statement that “it is challenging to include the suggested experiments in the main text without allowing major changes in subsequent submissions”. There are indeed some writing shortcomings in innovative ideas, motivations, clarity of principles, and related work. But in terms of technical contribution, I acknowledge the good performance and comprehensive experimental design of this work, as the author replied, “We provide a variety of selections for downstream users to choose from according to their budget”, so I will keep my decision.
---
Rebuttal 2:
Title: Response
Comment: Thank you for your valuable review and comments.
As discussed in the 4th paragraph of the rebuttal to W1,2&Q1,2, we attribute the harm of dense layer-wise distillation to its generalization ability. Apart from the listed [d] and [e], there are plenty of works that utilize dense layer-wise distillation schemes for better generalization in the context of knowledge distillation. E.g., in [f]: "Given that the teacher’s layerwise representations often contain rich semantic knowledge, they can significantly improve the student’s generalizability". Because we have established that over-generalization is a curse in the context of UAD, such a dense layer-wise distillation paradigm is self-evidently harmful to reconstruction/distillation-based UAD. In addition, as discussed in L165-L166, "the student (decoder) can better mimic the behavior of the teacher (encoder) given more layer-to-layer supervision", which is also clearly unwanted in UAD, because an "identical" student cannot detect anomalies based on the reconstruction error. We believe such analysis can serve as a strong conceptual principle for the proposed Loose Constraint.
[f] Liang, Chen, et al. "Module-wise adaptive distillation for multimodality foundation models." Advances in Neural Information Processing Systems 36 (2023).
Due to the page limit, we have to put extensive ablation studies in the Appendix, while leaving the main paper for presenting contributions. We will present at least two more ablation experiments in the final version, while others can be included in Appendix with proper references.
Again, thank you for your thorough review. Looking forward to further discussion. | Summary: This paper focuses on the Multi-class Unsupervised Anomaly Detection task and proposes a minimalistic reconstruction-based anomaly detection framework — Dinomaly that consists of only vanilla Transformer blocks. In this framework, four key components (Foundation Transformers, Noisy Bottleneck, Linear Attention, and Loose Reconstruction) are introduced to alleviate the performance gap between multi-class and class-separated models. The paper conducts extensive experiments on three major datasets: MVTec-AD, VisA, and Real-IAD. Results show that Dinomaly outperforms current state-of-the-art methods.
Strengths: (1) The paper is well written, with clear statements that make it easy to understand.
(2) The design of Dinomaly is straightforward but innovative. The use of foundation transformers, noisy bottleneck, linear attention, and loose reconstruction is well justified.
(3) The method generally outperforms existing SOTA methods, with sufficient experiments and comparisons.
Weaknesses: (1) The method relies heavily on transformer architectures, which might limit its applicability to other types of models.
(2) Transformers can be resource-intensive, and the paper does not fully address the computational cost of training and inference.
(3) The method's generalization to other domains or types of anomaly detection is not fully explored.
Technical Quality: 3
Clarity: 4
Questions for Authors: (1) Please specify the computational overhead of your work.
(2) There have been various recent works that attempt to perform anomaly detection in a more general zero-shot/few-shot setting, where the model trained on multiple classes is used to test samples from unseen classes. A discussion of these recent works and how the zero-shot (cross-dataset) performance of this approach should be added. e.g.,
"WinCLIP: Zero-/few-shot anomaly classification and segmentation" CVPR'2023
"AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection" ICLR'2024
"Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts" CVPR'2024
"PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection" CVPR'2024
(3) If I understand correctly, the best and second-best results in each table should be highlighted, with the best results in bold and the second-best results underlined. However, in Table 2, the best results for MVTec-AD (P-AP) and VisA (P-AP) are incorrectly marked. Additionally, could you clarify why the Dinomaly (MUAD) model outperforms the Dinomaly (class-separated) model in this metric?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are discussed in Supplementary Sec. A.4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews and comments.
__W1: The method relies heavily on transformer architectures, which might limit its applicability to other types of models.__
The Transformer architecture has proven itself as a foundation for machine learning tasks in both NLP (GPT, Llama) and CV (DINO, SAM). We believe it is important to fully harness the power of foundation Transformers in the UAD context, though some of our findings are Transformer-specific. In addition, the insights behind the proposed components can be adapted to other architectures with proper modification.
__W2&Q1:Transformers can be resource-intensive, and the paper does not fully address the computational cost of training and inference. Please specify the computational overhead.__
The computation cost of Dinomaly is presented in Table A2 in the Appendix, including parameters (148M), MACs (104.7G), and latency per image (17.8 ms). The total training computation can be inferred from the total number of iterations and the MACs per image. We will additionally include the total training time (about 1.5 hours on one NVIDIA 3090) in the final version. The latency per image of Dinomaly-ViT-Base is 17.8 ms (~58 FPS) on a single NVIDIA RTX 3090 (a consumer-level GPU), which can be considered sufficient for industrial applications.
__W3: The method's generalization to other domains or types of anomaly detection is not fully explored.__
We conduct experiments on more public datasets that represent other domains, including MPDD [a] (metal parts, position not aligned), BTAD [b] (noisy dataset), and Uni-Medical [c] (medical images, including brain MRI, liver CT, and retinal OCT). A preliminary comparison is shown below. The full results will be included in the Appendix as further information on a wide range of domains.
Short comparison on MPDD.
| Method | I-AUROC | I-AP | P-AUROC | P-AP |
|---|---|---|---|---|
| RD4AD | 90.3 | 92.8 | 98.3 | 39.6 |
| UniAD | 80.1 | 83.2 | 95.4 | 19.0 |
| ViTAD | 87.4 | 90.8 | 97.8 | 34.6 |
| Dinomaly | 97.2 | 98.4 | 99.1 | 59.5 |
Short comparison on BTAD.
| Method | I-AUROC | I-AP | P-AUROC | P-AP |
|---|---|---|---|---|
| RD4AD | 94.1 | 96.8 | 98.0 | 57.1 |
| UniAD | 94.5 | 98.4 | 97.4 | 52.4 |
| ViTAD | 94.0 | 97.0 | 97.6 | 58.3 |
| Dinomaly | 95.4 | 98.4 | 97.8 | 70.1 |
Short comparison on Uni-Medical.
| Method | I-AUROC | I-AP | P-AUROC | P-AP |
|---|---|---|---|---|
| RD4AD | 76.4 | 75.8 | 96.4 | 38.9 |
| UniAD | 80.4 | 76.6 | 96.5 | 39.0 |
| ViTAD | 81.5 | 80.6 | 97.0 | 46.8 |
| Dinomaly | 83.4 | 82.7 | 96.7 | 50.9 |
[a] Deep learning-based defect detection of metal parts: evaluating current methods in complex conditions. In ICUMT, 2021.
[b] VT-ADL: A vision transformer network for image anomaly detection and localization. In ISIE, 2021.
[c] ADer: A Comprehensive Benchmark for Multi-class Visual Anomaly Detection. arXiv preprint arXiv:2406.03262, 2024.
__Q2: Recent works on few-shot/zero-shot anomaly detection.__
Thanks for the valuable suggestion. Few-shot/zero-shot UAD is a promising new UAD setting proposed and studied recently, and we have also been working in this field. We will discuss recent few-shot/zero-shot works in the Related Work of the final version. Nevertheless, this setting is a very different track from MUAD. Dinomaly is not capable of adapting to few/zero-shot (cross-dataset) settings; conversely, few/zero-shot methods lag far behind the proposed Dinomaly and other MUAD works in absolute detection performance. We will include this information in the Limitations section.
__Q3-1: In Table 2, the best results for MVTec-AD (P-AP) and VisA (P-AP) are incorrectly marked.__
In Table 2, we intend to present Dinomaly's performance under the conventional class-separated UAD setting. Methods trained under class-separated setting (starting from the 2nd row) are compared together, leaving Dinomaly (MUAD) out as a reference. Therefore, Dinomaly (MUAD) is not in bold or underlined, but in italic.
__Q3-2: Why does the Dinomaly (MUAD) model outperform the Dinomaly (class-separated) in AP on MVTec-AD and VisA?__
Dinomaly localizes and segments anomalous regions by reconstruction error. In the class-separated setting with few training images, the decoder of Dinomaly is under-optimized, so it is too sensitive to non-anomalous local deviations. This phenomenon does not affect image-level performance, because such reconstruction errors do not become the largest error in an image, but it does affect pixel-level metrics (AP and AUROC), as they measure performance integrated across the whole anomaly-score range.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response, and it basically addressed my concerns. I will maintain my score. | Summary: This paper introduces Dinomaly, a simple yet effective anomaly detection framework using pure Transformer architectures. It identifies four key components essential for multi-class anomaly detection: Foundation Transformers, Noisy Bottleneck, Linear Attention, and Loose Reconstruction. Extensive experiments on MVTec-AD, VisA, and Real-IAD datasets show that Dinomaly achieves superior performance, surpassing both state-of-the-art multi-class and class-separated anomaly detection methods.
Strengths: 1. The authors have conducted extensive experiments to validate the effectiveness of their method across multiple anomaly detection tasks.
2. The authors have proposed a simple yet effective framework that approaches and even surpasses the results of state-of-the-art methods in single-class anomaly detection tasks.
Weaknesses: 1. L53-55 'In addition, previous...': The authors should provide relevant evidence rather than subjective assumptions.
2. L77: Placing the Related Works section in the appendix is unconventional.
3. The first component proposed by the authors, Foundation Transformers, was already introduced in the ViTAD paper, which diminishes the overall contribution of the paper.
4. The input resolution used in the authors' experiments is 448x448, while other comparison methods use 256x256 or 224x224. This is an extremely unfair comparison. Please include results with a 256 resolution in table for a fair comparison.
5. In the proposed Loose Loss, 90% of the feature points were selected. How was this 90% hyperparameter determined? Please provide ablation study results.
6. The authors have employed Linear Attention to reduce computational load while maintaining similar performance. It is recommended that the authors compare the parameter count and FLOPs of their method with those of the baseline methods to demonstrate its efficiency. Additionally, it is suggested to conduct ablation studies to verify the computational efficiency of Linear Attention.
7. Other methods, such as RD4AD, SimpleNet, and UniAD, perform under the proposed settings. The authors can conduct a more equitable comparison.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the Weaknesses.
=== After Rebuttal ===
I decide to raise my score from 4 to 5.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It is recommended that the authors evaluate the performance of different methods under fair settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, thank you for your valuable reviews and comments.
__W1: Relevant evidence rather than subjective assumptions for L53-55.__
Previous works on MUAD do make large efforts to design special modules for mitigating "identity mapping". Taking NeurIPS works as examples: UniAD (NeurIPS'22), the pioneer of MUAD, hacks into the self-attention mechanism by masking each token's neighboring attention and designs a complex decoder architecture distinct from the vanilla ViT. HVQ-Trans (NeurIPS'23), going a step further, maintains a prototype set (memory bank) for each decoder layer, which requires extra hyperparameter tuning for the codebook and its POT loss. ReContrast (NeurIPS'23) employs two encoders, one frozen and one trainable, for cross-contrastive reconstruction, which largely alters the naive reconstruction paradigm.
Such methods are rather complicated, leaving challenges for downstream users to understand and tune them on their own datasets. In addition, a complicated design restricts the generalization of a method; e.g., UniAD, developed on MVTec-AD, performs worse than the simpler RD4AD when used on VisA (88.8 vs. 92.4). These observations motivate the advantage of our simple and concise Dinomaly. We will include the above discussion in the final version.
__W2: Placing the Related Works section in the appendix is unconventional.__
Yes, indeed, due to the limited pages in the submission. We will place it in the main content given one more page in the final version.
__W3: Foundation Transformers, was already introduced in the ViTAD paper.__
The use of foundation Transformers is a core component of Dinomaly from a narrative perspective; we consider it an important component, but not the main novelty. That is why it is not included in the component ablation study in Table 3. However, we do want to emphasize to readers the importance of a powerful foundation model; hence, we include it as a core element of Dinomaly for the integrity of the writing. For the first time, we extensively present the choice of pre-trained foundations in Table A4 (DeiT, MAE, DINO, iBOT, BEiT, etc.). We hope such exploration will inspire and help future research in selecting proper backbones for UAD tasks.
__W4: Unfair comparison to other methods with 256/224 input.__
The comparison is based on the best performance, because 256x256 is already the optimal resolution for the compared methods. In our reproduction experiments, prior methods perform worse when the input size is increased from their default 256x256, as shown in the table below. Therefore, our comparison is optimum vs. optimum. We will add the above explanation in the final version.
In addition, the results of Dinomaly with different input sizes are presented in Table A3/A4, where the performance of low resolution still exceeds previous SoTAs by a large margin.
| Method | Input size | I-AUROC | P-AUROC |
|---|---|---|---|
| RD4AD | __256x256__(best) | 94.6 | 96.1 |
| RD4AD | 320x320 | 93.2 | 95.7 |
| RD4AD | 384x384 | 91.9 | 94.9 |
| ViTAD |__256x256__(best) | 98.3 | 97.7 |
| ViTAD | 320x320 | 98.3 | 97.6 |
| ViTAD | 384x384 | 97.8 | 97.5 |
| ReContrast | __256x256__(best) | 98.3 | 97.1 |
| ReContrast | 320x320 | 98.2 | 96.8 |
| ReContrast | 384x384 | 95.2 | 96.5 |
__W5: The selection of hyperparameter in Loose Loss.__
The discard rate is extremely robust, as shown in the following table; we will include this ablation in the Appendix of the final version. We followed a simple intuition, discarding the majority of easy feature points with a round number of 90%, without tuning. Tuning this hyperparameter can yield even better results.
| Discard rate | I-AUROC | P-AUROC |
|---|---|---|
| 95% | 99.58 | 98.32 |
| 90% (Default) | 99.60 | 98.35 |
| 80% | 99.65 | 98.38 |
| 70% | 99.64 | 98.37 |
| 60% | 99.63 | 98.34 |
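The hard-mining intuition above can be illustrated with a minimal NumPy sketch (not the authors' implementation; per-point cosine distance and the exact selection rule are assumptions following the rebuttal's wording that the easiest `discard_rate` fraction of points is excluded from the loss):

```python
import numpy as np

def hard_mined_cosine_loss(enc, dec, discard_rate=0.9):
    """Per-point cosine distance between encoder and decoder features;
    the easiest (lowest-distance) fraction is discarded, so the loss
    focuses on the hardest feature points."""
    num = (enc * dec).sum(axis=1)
    den = np.linalg.norm(enc, axis=1) * np.linalg.norm(dec, axis=1) + 1e-8
    dist = 1.0 - num / den                      # cosine distance per point
    # Keep only the hardest (1 - discard_rate) fraction of points.
    k = max(1, int(round(len(dist) * (1.0 - discard_rate))))
    return np.sort(dist)[-k:].mean()
```

With `discard_rate=0.9`, only the worst-reconstructed 10% of feature points contribute to the loss, which matches the "90% (Default)" row of the ablation.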
__W6: Computation cost of Linear Attention__
We have presented the computation cost (parameters, MACs(~0.5*FLOPs), latency) of Dinomaly model in Table A2. We will add the computation cost of other methods as a comparison in the final version. It also shows that Dinomaly is scalable.
In addition, the MACs of Softmax Attention and Linear Attention are 2.82G and 1.86G per attention layer, while their parameter counts are exactly the same (2.36M). The cost of the other modules, e.g., the encoder and MLPs, is not affected.
The adoption of Linear Attention is driven by its "unfocus" ability, which can alleviate identity mapping and improve UAD performance. Currently, the reduction of computation is not the main concern of this paper.
| Method | Parameters | FLOPs | I-AUROC | P-AUROC |
|---|---|---|---|---|
| RD4AD | 80.6M | 28.4G | 94.6 | 96.1 |
| SimpleNet | 72.8M | 17.2G | 95.3 | 96.9 |
| ViTAD | 39.0M | 9.7G | 98.3 | 97.7 |
| DiAD | 1331M | 451.5G | 97.2 | 96.8 |
| Dinomaly-Small-384x384 | 37.4M | 60.2G | 99.3 | 98.1 |
| Dinomaly-Base-280x280 | 148M | 111.5G | 99.5 | 98.4 |
| Dinomaly-Base-384x384 (default) | 148M | 210.1G | 99.6 | 98.4 |
| Dinomaly-Base-Softmax-384x384 | 148M | 224.7G | 99.5 | 98.2 |
| Dinomaly-Large-384x384 | 413.5M | 571.0G | 99.8 | 98.5 |
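As a rough NumPy illustration of why Linear Attention is cheaper per layer than Softmax Attention (not Dinomaly's exact formulation; the positive feature map `phi` is a common elu(x)+1-style choice from the linear-attention literature, assumed here for the sketch):

```python
import numpy as np

def softmax_attention(Q, K, V):
    """O(N^2 * d): materializes the full N x N attention map."""
    S = Q @ K.T / np.sqrt(Q.shape[1])
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return A @ V

def linear_attention(Q, K, V, eps=1e-6):
    """O(N * d^2): associativity computes phi(K)^T V first, so the
    N x N attention map is never formed."""
    phi = lambda X: np.where(X > 0, X + 1.0, np.exp(np.minimum(X, 0.0)))
    Kv = phi(K).T @ V                         # (d, d_v)
    Z = phi(Q) @ phi(K).sum(axis=0) + eps     # (N,) normalizer
    return (phi(Q) @ Kv) / Z[:, None]
```

For token counts N much larger than the head dimension d (e.g. high-resolution feature maps), the O(N·d²) path avoids the quadratic-in-N cost, consistent with the lower per-layer MACs reported above.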
__W7: Other methods, such as RD4AD, SimpleNet, and UniAD, perform under the proposed settings. The authors can conduct a more equitable comparison.__
All methods in Table 1 are under the setting of one-model multi-class setting, which is the setting of our proposed method.
If by "setting" you mean the same input size: as in the response to Weakness 4, the original resolution of each compared method is nearly its optimal resolution. UAD differs from classical CV tasks like ImageNet classification, where higher resolution reliably yields better performance. Therefore, it is a fair comparison of optimum vs. optimum. It is less reasonable to force the same input size for all methods, as different methods and encoders have different natures; e.g., ResNet with 256x256 input can utilize 64x64 feature maps, while ViT with 256x256 input has only 16x16 feature maps.
---
Rebuttal Comment 1.1:
Title: Revision of the Last Table of Rebuttal
Comment: In the last table of our rebuttal, the computation cost (FLOPs) of the compared methods (RD4AD, SimpleNet, ViTAD, DiAD) is drawn from the benchmark paper ADer [a]. However, we just found that they confused MACs (multiply–accumulate operations) with FLOPs (floating-point operations); since MACs ~ 0.5*FLOPs, the FLOPs they reported are actually MACs, around half of the real FLOPs.
Therefore, we report GMACs for unification.
[a] ADer: A Comprehensive Benchmark for Multi-class Visual Anomaly Detection. arXiv preprint arXiv:2406.03262 2024.
| Method | Parameters | MACs | I-AUROC | P-AUROC |
|---|---|---|---|---|
| RD4AD | 80.6M | 28.4G | 94.6 | 96.1 |
| SimpleNet | 72.8M | 17.2G | 95.3 | 96.9 |
| DRAEM | 97.4M | 198G | 54.5 | 47.6 |
| ViTAD | 39.0M | 9.7G | 98.3 | 97.7 |
| DiAD | 1331M | 451.5G | 97.2 | 96.8 |
| Dinomaly/ViT-Small-384x384 | 37.4M | 26.2G | 99.3 | 98.1 |
| Dinomaly/ViT-Base-280x280 | 148M | 53.7G | 99.5 | 98.4 |
| Dinomaly/ViT-Base-384x384 (default) | 148M | 104.6G | 99.6 | 98.4 |
| Dinomaly/ViT-Base-Softmax-384x384 | 148M | 112.3G | 99.5 | 98.2 |
| Dinomaly/ViT-Large-384x384 | 413.5M | 285.3G | 99.8 | 98.5 |
---
Rebuttal Comment 1.2:
Title: For Authors
Comment: The authors have resolved a few issues, but most concerns remain unaddressed or have been evaded:
Q1. Placing related works in the appendix violates the formatting guidelines, potentially extending the main text to 10 pages instead of 9, which is unfair to other works. Considering that revised papers generally increase in length, I believe it will be challenging for the authors to include the Related Work section in the main text as required.
Q3. The authors list ViT as a contribution point in lines L57-59 and Section 2.1, particularly the use of DINOv2 weights, which offers no technical contribution to AD (as also noted by Reviewer **#m7bP**). However, I believe this is a core aspect of the work. The authors should apply other modules to different frameworks to validate the effectiveness of their contributions.
Q4/Q6. 1) A comparison of model efficiency at the standard resolution of 256 is necessary to ensure a fair comparison of different methods under the same parameter and computation constraints. Please provide the comparison results. 2) For perception tasks, pixel-level metrics should increase with resolution, but the results provided by the authors show the opposite trend. Please explain this. 3) AUROC is an unreliable metric; providing results with additional metrics would be beneficial. 4) Notably, the model performance under several weights in Table A4, such as MAE, significantly deteriorates, weakening the evidence for the effectiveness of other contributions. How does the performance compare when using stronger pre-trained weights for the comparison methods? 5) The reviewers do not accept the authors' claim of focusing solely on model performance without considering computational cost, which is unreasonable for model applications. This issue was also raised by reviewers **#hJXx** and **#m7bP**. The authors should provide a comparison of the proposed method scaled to a similar level as the comparison methods.
---
Rebuttal 2:
Title: Response (1/2)
Comment: Response(1/2)
Thank you for your valuable comments. We by no means intended to evade any concerns. We spare no effort to clarify any raised concerns.
__R-Q1:__
NeurIPS allows one more page for the final version (10 pages), which will fit a condensed Related Work.
In addition, though there is no explicit Related Work section in our submission, we merged such content directly in Introduction (L28-L55). We discussed categories of conventional UAD methods, why conventional methods fail in MUAD, and recent methods of MUAD, which elaborates the background and history of MUAD. Such narrative will also be partly moved to the final Related Work.
Moreover, we have checked the NeurIPS formatting guidelines carefully and found no such guideline that strictly demands an explicit Related Work section. Many NeurIPS papers do not have an explicit Related Work section, some not even in the final version.
To name a few, see:
https://openreview.net/pdf?id=DP2lioYIYl
https://openreview.net/pdf?id=aExAsh1UHZo
https://openreview.net/forum?id=hgLMht2Z3L
https://openreview.net/forum?id=wxkBdtDbmH&noteId=JKLz5pP5sJ
https://openreview.net/attachment?id=4VAF3d5jNg&name
__R-Q3:__
In L99-L101, we acknowledged that ViTs have been used for UAD in recent works. As discussed in the previous rebuttal, we consider foundation ViTs an important, bedrock component of Dinomaly from a narrative perspective, emphasizing the importance of a well-pretrained ViT backbone in the UAD context.
According to the "less is more" essence of this paper, we did not intend to propose "novel" or "never seen" technologies (L124-L125), but rather simple, pre-existing elements that have long been ignored, for achieving SoTA performance in MUAD.
Most proposed elements (especially the noisy MLP, Linear Attention, and Loose Constraint) are closely bound to modern ViTs. Applying these elements to the only previous ViT-based method (ViTAD) would simply convert it into Dinomaly. These elements are extensively evaluated on various ViT variants in Table A4 to show their generalizability. The Loose Loss can be directly applied to previous CNN-based methods, and the Noisy Bottleneck can be adapted to RD4AD with minor modifications (applying dropout before the MFF layer). Following your new suggestion, we apply these modules to a different framework to validate the effectiveness of our contributions. The results are shown below, where these two elements boost RD4AD to a whole new level.
| Methods | I-AUROC | I-AP | I-F1 | P-AUROC | P-AP | P-F1 | P-AUPRO |
|---|---|---|---|---|---|---|---|
| RD4AD | 94.6 | 96.5 | 95.2 | 96.1 | 48.6 | 53.8 | 91.1 |
| RD4AD+Loose Loss | 98.4 | 99.4 | 97.9 | 97.2 | 58.6 | 60.4 | 92.9 |
| RD4AD+Noisy Bottleneck | 98.2 | 99.2 | 97.5 | 96.8 | 60.0 | 61.1 | 92.7 |
| RD4AD+Both | 98.5 | 99.4 | 97.8 | 97.2 | 59.6 | 61.2 | 93.0 |
__R-Q4/6:__
__1)__ As previously discussed, Dinomaly at a resolution of 224x224 has already been presented in Table A4 (last row), where it also achieves SoTA results (I-AUROC=99.3). A resolution of 256 is not feasible for ViT-Base/14, as 256 is not divisible by the patch size 14. As mentioned in the previous rebuttal, we followed the common "optimum vs. optimum" comparison strategy in the manuscript. We will include both settings in the main Table 1.
| Methods | Input Size| I-AUROC | I-AP | I-F1 | P-AUROC | P-AP | P-F1 | P-AUPRO |
|---|---|---|---|---|---|---|---|---|
| RD4AD | 256 | 94.6 | 96.5 | 95.2 | 96.1 | 48.6 | 53.8 | 91.1 |
| ViTAD | 256 | 98.3 | 99.4 | 97.3 | 97.7 | 55.3 | 58.7 | 91.4 |
| ReContrast | 256 | 98.3 | 99.4 | 97.6 | 97.1 | 60.2 | 61.5 | 93.2 |
| Dinomaly | 224 | 99.3 | 99.7 | 99.0 | 98.1 | 63.0 | 64.5 | 92.6 |
__2)__ Most prior UAD methods use ConvNets, which have a much smaller downsampling stride than ViTs and can therefore utilize 64x64 feature maps at a 256 input. Empirically, 64x64 feature maps are already saturated for UAD tasks; enlarging them further causes over-focusing on non-semantic low-level noise (which is why pixel-reconstruction methods are no longer used). In contrast, a ViT/16 produces only 16x16 feature maps at the same input size, which is why ViTAD does not suffer a performance drop when the input size increases. From another perspective, comparing at the same image size is itself of questionable fairness, since CNN-based and ViT-based methods then operate on different feature-map sizes.
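For illustration, the resolution gap described above follows directly from the downsampling factors (a hedged sketch with a hypothetical helper; the strides are typical values, not taken from the paper):

```python
# Hedged sketch (hypothetical helper, not from the paper): the feature-map side
# length is simply the input size divided by the backbone's downsampling factor.
def feature_map_side(input_size: int, downsample: int) -> int:
    return input_size // downsample

# A CNN backbone with an early-stage stride of 4 yields 64x64 maps at 256 input,
# while a ViT/16 downsamples by its patch size 16 and yields only 16x16 maps.
print(feature_map_side(256, 4), feature_map_side(256, 16))  # 64 16
```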
__Questions 3), 4) and 5) are discussed in the next official comments.__
---
Rebuttal 3:
Title: Response (2/2)
Comment: Response (2/2)
__3)__ All metrics are shown below. For RD4AD, P-AP and P-F1 increase with resolution at 320 but show no further benefit at 384; P-AUROC and P-AUPRO do not increase with resolution. For ViTAD, P-AP and P-F1 increase with resolution; P-AUROC and P-AUPRO do not. For ReContrast, pixel-level metrics generally do not benefit from increasing resolution. In addition, it is worth considering whether such pixel-level improvements are worth a large decrease in image-level detection performance, as image-level detection is usually more relevant in real applications.
| Methods | Input size | I-AUROC | I-AP | I-F1 | P-AUROC | P-AP | P-F1 | P-AUPRO |
|---|---|---|---|---|---|---|---|---|
| RD4AD | 256 | 94.6 | 96.5 | 95.2 | 96.1 | 48.6 | 53.8 | 91.1 |
| RD4AD | 320 | 93.2 | 96.9 | 95.6 | 95.7 | 55.1 | 57.5 | 91.1 |
| RD4AD | 384 | 91.9 | 96.2 | 95.0 | 94.9 | 52.1 | 55.3 | 90.8 |
| ViTAD | 256 | 98.3 | 99.4 | 97.3 | 97.7 | 55.3 | 58.7 | 91.4 |
| ViTAD | 320 | 98.3 | 99.2 | 97.1 | 97.6 | 61.3 | 63.3 | 92.4 |
| ViTAD | 384 | 97.8 | 98.9 | 96.3 | 97.5 | 62.5 | 63.7 | 92.4 |
| ReContrast | 256 | 98.3 | 99.4 | 97.6 | 97.1 | 60.2 | 61.5 | 93.2 |
| ReContrast | 320 | 98.2 | 99.2 | 97.5 | 96.8 | 61.8 | 62.6 | 93.3 |
| ReContrast | 384 | 95.2 | 98.0 | 96.4 | 96.5 | 57.7 | 59.5 | 92.6 |
__4)__ MAE was also tested as the backbone of ViTAD in their paper, resulting in worse performance (I-AUROC: MAE=95.3 vs. DINO=98.3, Table 1 in ViTAD), which was attributed to the weak semantic expression caused by its pretraining strategy (note also that MAE performs poorly in other unsupervised tasks such as ImageNet kNN). Comparatively, Dinomaly with MAE achieves much better results (I-AUROC=97.3) than ViTAD with MAE.
Following your suggestion, we reproduce ViTAD (originally DINO) with the stronger DINOv2 and DINOv2-Register backbones. Arming ViTAD with stronger backbones yields slightly better performance, but it still lags behind Dinomaly. This is expected, because ViTAD with DINOv2-R is very similar to the Dinomaly baseline (Table 3, first row).
| Methods | Backbone | I-AUROC | I-AP | I-F1 | P-AUROC | P-AP | P-F1 | P-AUPRO |
|---|---|---|---|---|---|---|---|---|
| ViTAD | MAE | 95.3 | 97.7 | 95.2 | 97.4 | 53.0 | 56.2 | 90.6 |
| ViTAD | DINO (original) | 98.3 | 99.4 | 97.3 | 97.7 | 55.3 | 58.7 | 91.4 |
| ViTAD | DINOv2 | 98.7 | 99.4 | 98.1 | 97.6 | 55.3 | 59.1 | 92.7 |
| ViTAD | DINOv2-R | 98.5 | 99.3 | 97.8 | 97.4 | 54.5 | 59.2 | 92.8 |
__5)__ Sorry for the confusion. We did not mean that we disregard computational cost or that we build models at all costs; we meant that we did not design modules or techniques specifically aimed at reducing computational cost.
We believe scaling ability is important in this decade, so we scaled Dinomaly from ViT-Small to ViT-Large. In contrast, previous methods do not exhibit this scaling behavior (e.g., ViTAD favors ViT-Small over ViT-Base, and RD4AD favors ResNet50 over ResNet101). We provide a variety of model sizes for downstream users to choose from according to their budget.
In Table A2, we showed that Dinomaly can be scaled down to ViT-Small, whose computation cost (26.3G) at a 396x396 input is already lower than that of RD4AD at a 256x256 input (28.4G). This computation cost is very promising for real industrial applications, yielding a latency of 6.8ms per image (~147 FPS) on a consumer-grade NVIDIA 3090.
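The latency-to-throughput conversion quoted above is easy to verify (a trivial arithmetic sketch, not a benchmark):

```python
# Hedged sanity check of the throughput arithmetic (pure arithmetic, not a
# benchmark): a latency of 6.8 ms per image corresponds to roughly 147 FPS.
latency_ms = 6.8
fps = 1000.0 / latency_ms
print(round(fps))  # 147
```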
Moreover, we further scale down Dinomaly-ViT-Small by decreasing input size, as shown below, where Dinomaly produces SoTA results given more limited computation budgets.
| Methods | Parameters | MACs | I-AUROC | I-AP | I-F1 | P-AUROC | P-AP | P-F1 | P-AUPRO |
|---|---|---|---|---|---|---|---|---|---|
| RD4AD | 80.6M | 28.4G | 94.6 | 96.5 | 95.2 | 96.1 | 48.6 | 53.8 | 91.1 |
| ViTAD | 39.0M | 9.7G | 98.3 | 99.4 | 97.3 | 97.7 | 55.3 | 58.7 | 91.4 |
| ReContrast | 154.2M | 67.4G | 98.3 | 99.4 | 97.6 | 97.1 | 60.2 | 61.5 | 93.2 |
| Dinomaly-ViT-Small-396x396 | 37.4M | 26.2G | 99.3 | 99.7 | 98.7 | 98.1 | 68.3 | 67.8 | 94.4 |
| Dinomaly-ViT-Small-280x280 | 37.4M | 14.5G | 99.3 | 99.7 | 98.7 | 98.0 | 65.1 | 65.7 | 93.4 |
| Dinomaly-ViT-Small-252x252 | 37.4M | 11.6G | 99.1 | 99.6 | 98.7 | 98.0 | 63.9 | 64.9 | 93.1 |
By setting the current upper bounds on various benchmarks, we aim to provide a scalable framework that accommodates different application needs. Users whose main concern is detection performance and who have abundant computation resources can choose Dinomaly-ViT-Large/Base for ultimate performance; users with limited computation resources or who care about FPS/latency can choose Dinomaly-ViT-Small while still obtaining state-of-the-art performance.
Again, thank you for your thorough comments and review. Looking forward to further discussion with you.
---
Rebuttal Comment 3.1:
Title: For Authors
Comment: Thank you for the detailed response. I suggest that the authors add the fairness resolution experiments to the revised version and include results on the VisA and Real-IAD datasets. I have also reproduced the Dinomaly-ViT-Small experimental results using the official code. The authors could consider uploading the logs of the main experiments (excluding ablation studies) to the website to facilitate replication and comparison by future researchers.
Although I still maintain a somewhat negative view of the paper's technological contributions, the solid experimental results could positively impact the community. Therefore, I have decided to raise my score, but the authors should adhere to their commitment to address these issues in the revised version.
---
Reply to Comment 3.1.1:
Title: Response
Comment: Sincere thanks for your reply. We will do so. Again, thank you for your thorough review and valuable discussion. | Summary: This paper introduces Dinomaly, a minimalistic unsupervised anomaly detection (UAD) method designed to bridge the performance gap between multi-class UAD and class-separated UAD. Utilizing pure Transformer architectures with key components such as Foundation Transformers, Noisy Bottleneck, Linear Attention, and Loose Reconstruction, Dinomaly achieves superior performance on MVTec-AD, VisA, and Real-IAD benchmarks, surpassing state-of-the-art methods.
Strengths: 1. Dinomaly effectively bridges the performance gap between multi-class and class-separated UAD, achieving superior results on popular benchmarks such as MVTec-AD, VisA, and Real-IAD.
2. It utilizes a simple, straightforward approach with pure Transformer architectures, avoiding complex modules or specialized tricks.
3. The detailed ablation study demonstrates the effectiveness of each component—Noisy Bottleneck, Linear Attention, Loose Constraint, and Loose Loss—in enhancing anomaly detection.
Weaknesses: 1. The method might be perceived as too application-oriented, lacking broader theoretical contributions.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews and comments.
We appreciate the concern about the balance between application and theory in our work. While Dinomaly does focus on practical applications and SoTA results, we believe it also makes theoretical contributions to the field of unsupervised anomaly detection, providing conceptual insights into what "identical mapping" is and how to mitigate this phenomenon, which is the chief obstacle in UAD tasks.
For the first time, we attribute the "identical mapping" phenomenon to the over-generalization nature of neural networks. Accordingly, we discovered multiple key components and operations that both theoretically and empirically alleviate over-generalization in the UAD context. In addition, we challenge the conventional view that more complex architectures are necessary for better performance in anomaly detection tasks. | Rebuttal 1:
Rebuttal: First, thank all reviewers for their valuable reviews and comments.
Please post any new questions in the 7-day discussion period.
Pdf: /pdf/97da5aaabad7df74ef6cd078ac93f27b791b79c4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Dinomaly simplifies the anomaly detection process by eliminating the need for complex designs, additional modules, or specialized techniques. It relies solely on basic Transformer components such as self-attention mechanisms and multi-layer perceptrons (MLPs) to perform anomaly detection for multi-class images.
Strengths: This paper has a clear motivation and contribution. It effectively proposes the "less is more" viewpoint in multi-class unsupervised anomaly detection, emphasizing how the simplicity of the model architecture can match or surpass the performance of more complex systems.
Weaknesses: I hope the authors can provide a specific explanation of the decision-making process by which the model identifies anomalies.
Technical Quality: 4
Clarity: 3
Questions for Authors: Is the viewpoint presented in this paper too strong? Large-scale models do not necessarily lack advantages. I acknowledge this work and would like to see the other reviewers' questions and the authors' responses to determine my final score.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors candidly acknowledged the limitations of the work and described the problems that need to be addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews and comments.
__W1: Decision-making to identify anomalies.__
Dinomaly is based on the assumption that the networks respond differently during inference between seen and unseen input, faithfully reconstructing normal regions while failing for anomalous regions. We depict the calculation procedure of anomaly activation maps in the PDF of the rebuttal. We believe this figure can clearly demonstrate how the model identifies anomaly regions. It will be included in the paper in the final version.
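To make the assumption above concrete, here is a hedged sketch of a common feature-reconstruction anomaly score (the paper's exact procedure is depicted in its rebuttal PDF; the cosine-distance recipe below is a standard choice in this family of methods, not necessarily the authors' exact formula): pixels where the decoder fails to reconstruct the encoder features receive a high anomaly value.

```python
import numpy as np

# Hedged sketch: anomaly map as 1 - cosine similarity between encoder features
# and their reconstruction, computed per spatial location.
def anomaly_map(enc_feat: np.ndarray, dec_feat: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """enc_feat, dec_feat: (C, H, W) feature maps. Returns an (H, W) map."""
    num = (enc_feat * dec_feat).sum(axis=0)
    den = np.linalg.norm(enc_feat, axis=0) * np.linalg.norm(dec_feat, axis=0) + eps
    return 1.0 - num / den

# A faithfully reconstructed (normal) region scores ~0; mismatches score higher.
f = np.random.rand(8, 4, 4)
print(anomaly_map(f, f).max())  # ~0
```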
__Q1: Minimalism and large-scale models.__
The "less is more" viewpoint of this work does not mean "small-scale model" (parameters and computation). The model scale of Dinomaly is actually pretty large compared to existing UAD methods, which can be further scaled up to ViT-Large (Table A2) obeying the "scaling law". On the contrary, the "less is more" philosophy is embodied in the minimalistic design of architectures and tricks, demonstrating the power of plain and simple (not small) framework in UAD tasks. | null | null | null | null | null | null |
Subsurface Scattering for Gaussian Splatting | Accept (poster) | Summary: This paper proposes a framework for capturing the geometry, specular, and subsurface scattering appearance of 3D objects using a captured dataset composed of multi-view OLAT images.
The appearance of the object is decomposed into two different models.
First, a 3D Gaussian representation with a spatially varying BRDF model provides an explicit surface representation. Second, an implicit volumetric representation captures the subsurface scattering appearance.
This framework enables material editing, relighting, and novel view synthesis in real-time.
Strengths: 1. I believe that the authors will provide a multi-view OLAT dataset for subsurface scattering objects in the future, and this dataset will be a great contribution to the vision and graphics community.
2. Screen space shading algorithm for 3D Gaussian representation improves the view-dependent appearance quality of the reconstructed object.
3. The use of two different appearance models enables the reconstruction of the geometry, BRDF, and subsurface scattering of 3D objects.
Weaknesses: 1. Missing references for diffusion-based SSS approximation models:
[1] A Practical Model for Subsurface Light Transport, Jensen et al., 2001.
[2] Light Diffusion in Multi-Layered Translucent Materials, Donner et al., 2005.
2. Lack of validation on the intrinsic properties.
To generate realistic novel relit scenes and material edits, it is important to correctly acquire the intrinsic properties. For the synthetic data, each intrinsic property can be directly compared with the reconstructed one (RMSE). Although reconstructed properties can differ due to the different models from Blender or the limitations of the method, the author should address this. Currently, it is unclear if this method reconstructs each intrinsic property well because there is no ground truth intrinsic property information in the paper or supplemental material, even though the rendered results look realistic and similar to the ground truth data. For example, in the dragon dataset in the supplemental material, there are many specularity changes in the ground truth, but these are not observed in the renders.
3. The relighting results only include novel views with single-light images.
There are no results for environmental lighting or changing light colors. Although the framework does not natively support multiple light sources, one could generate multiple-light-source or environment-relighting results similar to the previous method [3]. Simply adding multiple OLAT images in screen space would provide much more powerful relighting results, which I would like to see.
[3] Neural Light Transport for Relighting and View Synthesis, Zhang et al., 2020.
4. It would be powerful to show the change in intrinsic properties between objects, for example, from bunny to dragon and vice versa.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The real-world dataset images look over-exposed in the specular regions of the experimental setup. In the paper, the same images were captured five times to reduce image noise. Did you change the exposure of the camera while capturing the raw images? Moreover, why did you choose 8-bit instead of 16-bit images?
2. This doesn't need to be addressed in the rebuttal (Minor), but it would be better to add some information about the name of each video. Without clear explanations, some notations can be misleading to the reader.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The author adequately addressed the limitations, including:
1. Difficulty in optimizing strongly heterogeneous materials.
2. Limitations in screen space shading.
I would add that this paper is also limited to static scenes. Human skin, for example, is constrained not only by personal rights but also by the inability to avoid movement during capture, which causes blurry results that the current framework cannot handle.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Additional Relighting Results with Image Based Lighting
> Weakness 3
Please find qualitative and quantitative results for image-based lighting in Fig. 1 and Tab. 1 of the PDF accompanying this rebuttal. We also show additional relighting results together with the other editing capabilities in Fig. 2. Using our approach as outlined in the global response under "Image Based Relighting", we can generate a relit frame in a fraction of a second after a one-time precomputation of the reflectance field, which takes 20 seconds at a reasonable resolution. Alternatively, we can perform importance sampling on a given environment map, which reduces the combined processing time but cannot be reused for a different illumination setting.
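The relighting strategy above rests on the linearity of light transport. A hedged, minimal sketch (hypothetical function and shapes, not the paper's actual pipeline): a relit image under an environment map is a weighted sum of the one-light-at-a-time (OLAT) renders in the precomputed reflectance field, with each weight sampled from the environment in the corresponding light direction.

```python
import numpy as np

# Hedged sketch of OLAT-based relighting by superposition.
def relight(olat_images: np.ndarray, env_weights: np.ndarray) -> np.ndarray:
    """olat_images: (L, H, W, 3) reflectance field; env_weights: (L,) scalar
    intensities sampled from the environment map. Returns (H, W, 3) image."""
    return np.tensordot(env_weights, olat_images, axes=(0, 0))
```

Colored illumination could be handled analogously with per-channel weights; importance-sampling the environment map would reduce the number of OLAT evaluations at the cost of reusability.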
### Evaluation of Intrinsic Properties
> Weakness 2
We provide a small qualitative evaluation in Fig. 4 and an extensive quantitative evaluation in Tab. 3 on the prediction of intrinsic properties of our model.
We assume the reviewer refers to albedo and illumination as common properties of intrinsic image decomposition. While our base color parameter can be understood as the albedo, we also output additional material parameters that are needed for our physically based rendering model. We think it is interesting to also compare these properties. Therefore, we showcase base color, roughness, metalness, normal, SSS residual, the specular and diffuse components, and the final render. Specular and diffuse illumination are intermediate results of the deferred shading stage in our rendering pipeline, representing the diffuse and specular illumination components (before multiplication with the base color), respectively.
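As a rough illustration of this composition (a heavily simplified, hedged sketch; the actual deferred shading uses the full physically based model, and the buffer names here are hypothetical):

```python
import numpy as np

# Hedged sketch: the diffuse illumination is multiplied by the base color and
# combined with the specular illumination; the learned SSS residual is added as
# a further term. Simplified composition, not the paper's exact shading model.
def compose(base_color, diffuse_illum, specular_illum, sss_residual):
    """All inputs: (H, W, 3) screen-space buffers. Returns the final render."""
    return base_color * diffuse_illum + specular_illum + sss_residual
```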
All selected properties are compared to ground-truth properties obtained using the Cycles renderer in Blender [8] for our synthetic scenes. We want to note that Blender uses a different shading model, such that some of these properties are not directly equivalent. Most notably, the SSS residual cannot be retrieved directly and is calculated by us as the difference in diffuse reflection between a rendering with SSS turned on and one with SSS turned off. In Fig. 4 we plot the absolute difference and the properties for one example and will add more to the supplementary material. For all of these intrinsics we calculated the RMSE. While achieving overall good results, especially for the illumination, the value of the quantitative evaluation is limited by the fact that the optimization can generate multiple plausible solutions for a given appearance due to the under-constrained problem space. This is also the reason why base colors might differ.
As correctly noticed by the reviewer, the dragon scene does not capture all specularity changes and is also the one performing the worst in our analysis. It is the most complex scene with heterogeneous subsurface material and a lot of geometric details. Our method does not perfectly represent the geometric detail of the dragon scales in the scene resulting in incorrect specular highlights that lead the optimization to a rougher material. To improve our method for such tough scenes and also to better handle anisotropic scattering are things we want to explore in future work.
### Editing of Intrinsic Properties
> Weakness 4
For the editing of intrinsic properties we want to reference Fig. 2 that highlights free editing of intrinsics and shows results for various settings of those properties. Surely, intrinsic properties could be transferred between models also shown by edit examples in Fig. 2 transforming the statue into various different material types. Note, that some intrinsics such as the residual are position dependent and could hardly be transferred, though.
### Multi-exposure Acquisition
> Question 1
We acknowledge that HDR images in the dataset could be beneficial, especially for fitting BRDFs. We still decided to capture our real-world dataset in LDR for three main reasons, which all boil down to ease of use:
- In deep learning, working with LDR images is much simpler than handling the unconstrained value ranges of linear HDR images.
- Competing methods are also designed for and trained on LDR datasets.
- The disk space needed for capturing, processing, and later hosting and downloading such a big HDR dataset would be substantially higher than for a compressed LDR dataset. The png-compressed, single-channel, 8-bit Bayer images already need 2.2 TB of storage, which increases to 5.2 TB when adding color, rectification, and object masks.

While some specular highlights will unavoidably be clipped, the overall exposure is carefully set for an optimal SNR for single-light images, as can also be seen in Fig. 6 of the rebuttal PDF.
### Additional References on Diffusion Based SSS Estimation
> Weakness 1
We reworked the related works section also including the suggested literature on diffusion based estimation of scattering parameters.
### Presentation and Labeling of Videos
> Question 2
We thank the reviewer for the additional feedback regarding the presentation of the results. We will take this into account when redesigning the supplementary material and accompanying website.
We want to thank the reviewer again for the thorough analysis of our work. We hope that our new evaluation of intrinsic properties as well as the relighting results add insights into the method’s capabilities and that we could clarify the reasoning for some of our choices for the data acquisition.
---
Rebuttal 2:
Comment: I really appreciate that the results related to the validation of the intrinsic properties, material editing, and novel environment rendering, which were pointed out as major weaknesses in this review process, have been appropriately addressed. I also fully understand the issues related to the dataset size. I believe that if the paper is reorganized to include the rebuttal content, it would be sufficient for acceptance; however, a significant level of revision would be necessary given the amount of rebuttal material. I have changed my rating to weak accept for now, but I am leaning more towards a borderline or low accept. I will take the opinions of the other reviewers into account as much as possible.
---
Rebuttal Comment 2.1:
Comment: We sincerely thank the reviewer for appreciating the additional results and the efforts we made to address the identified weaknesses. We have already incorporated the suggestions from the rebuttal into the paper, leading to an improved version. | Summary: This paper aims to model the subsurface scattering (SSS) effects under the 3D Gaussian Splatting framework, which is an efficient 3D representation for novel view synthesis. The challenge of this SSS modeling is the complicated light path in the final rendering output. The proposed framework is based on Relightable 3DGS but proposes an MLP-based module to capture SSS effects for 3DGS. It improves the representation of specularity by performing shading in image space. The experimental results achieve comparable or better results and fast optimization and rendering on synthetic and real-world data compared with NeRF-based approaches.
Strengths: 1. Introduce SSS residual to the 3DGS pipeline, which achieves comparable results and fast rendering speed.
2. Propose an image-space deferred rendering to capture specularity.
Weaknesses: 1. Some contributions may be overclaimed:
a. Editability. It was not demonstrated in the paper. Moreover, some of the editability comes from Relightable 3DGS and is thus not a contribution of this paper.
b. The relighting effects seem only to change w.r.t the light direction.
c. Shadowing effects are handled by Relightable 3DGS, and are thus not a contribution of this paper.
d. It is unclear how to make applications to other fields as mentioned in line 60, since this paper makes various assumptions like a single light source and multi-view captures.
2. Insufficient evaluation. As the method is based on Relightable 3DGS, it would be better to compare the proposed framework with Relightable 3DGS and 3DGS in quantitative results on novel view synthesis in Table 1.
3. The experiment settings are constrained: a single, known light source, object-centric scenes, and given object masks. Moreover, the backbone model Relightable 3DGS also requires estimated depth, which is not discussed in the paper.
4. Some typos and unclear sentences: line 232 about the number of images; Table 1 caption "non quadratic image". Line 298: sentence fragment. line 22 unclear sentence, etc.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What is the performance of Relightable 3DG / 3DGS?
2. Eq. 6 overfits the experiment setting, so it is hard to assess the representational capability of the MLP. If the incident lighting in Eq. 6 is predicted and fixed by the MLP, how could the proposed method achieve relighting, such as changing the intensity?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have mentioned the limitations and potential negative societal impacts of their framework in Sec.5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Generalization of MLP / Representational Power of MLP
> Question 2
We add application examples that show interpolation capabilities in both the viewpoint as well as illumination domain. Results on the test set (Fig. 5 in the paper, for example) are outside of the training setting in terms of light and camera position. This can be best observed in the videos included in the supplementary material where also the light and camera distances are varied compared to the training.
The global MLP is chosen specifically for its power to interpolate on the trained manifold similar to how NeRF [24] uses it.
### Additional Relighting Examples
> Question 2, Weakness 1b
Please find qualitative and quantitative results for the image-based lighting application in Fig. 1 and Tab. 1. We also show additional relighting results together with the other editing capabilities in Fig. 2. As can be seen, the object can be rendered in different indoor and outdoor illumination settings, yielding consistent photorealistic results. Please also refer to the global response for additional notes on the relighting examples and the method of computing them.
### Editability
> Weakness 1a
We specifically focus on enabling intuitive and fast editing of the illumination and material properties of a reconstructed scene. In Fig. 2 we show examples of the material editing modes in detail. Fig. 1 shows new examples of relighting using environment lighting.
As this topic was of interest for multiple reviewers we added a paragraph on “Editing Applications” in the global response that we invite the reviewer to take a look at, too.
### Limited Applicability to Other Domains due to Constraint Acquisition Setting
> Weakness 1d, Weakness 3
There are multiple application settings where a constrained acquisition setting like the OLAT approach might be tolerable and materials with subsurface scattering are relevant.
In “Choice of OLAT Data and Relighting” of the global response we discuss the choice of the experiment setting. For example, in the games and movie industry a large effort is put into asset generation, often employing custom-built scanning devices or light stages [https://home.otoy.com/capture/lightstage/] very similar to the one used in our dataset acquisition, to scan individual objects that can then be composed and combined into larger scenes. Our relighting capability enables easy integration of objects into new settings. For scenes with multiple objects, only the visibility attributes need to be updated, as has been shown in [39].
Also in the medical imaging domain it is feasible to perform a scan of e.g. an organ in a controlled environment to construct a detailed model that is then used for supervision during surgery, for example [47]. In this context also an endoscope with a point light source could be well represented using our model [48].
### Comparison against R3DGS and 3DGS
> Question 1 & Weakness 2
Additional results can be found in the newly added Tab. 2 as R3DGS (with incident light field). The results are obtained from a base configuration of our model without residual prediction and without deferred shading which comes closest to R3DGS.
The original R3DGS approach only supports a single illumination setting which is fundamentally different from the OLAT setting with many different illumination configurations. See the paragraph on “Choice of OLAT Data and Relighting” in the global response for more information about the setting and its relevance to subsurface scattering estimation. We use a light representation that models the changing illumination with explicit point light locations compared to the NeILF [37] based approach in R3DGS. The results in Tab. 2 clearly indicate the limitations of the base model in our experiment setting.
Plain 3D Gaussian Splatting (3DGS) also assumes a static illumination. Hence, we choose to not compare against 3DGS as these differences would overall limit the value of the comparison.
### Shadowing Already Handled by R3DGS
> Weakness 1c
As the reviewer correctly points out, the novelty here is not the handling of shadows but the way they are handled. Detailed shadowing effects can be predicted by our incident light prediction that is conditioned on the ray-traced visibility map on the 3D Gaussians.
Compared to R3DGS, we use a different light representation that models the changing illumination with explicit point light locations, in contrast to their NeILF [37] based approach. Our approach together with the OLAT data enables higher frequency illumination and more direct control of the lighting than the static Spherical Harmonics based representation of R3DGS.
### Mask & Depth Input and Initialization Scheme
> Weakness 3
Our automatic mask generation only adds minimal overhead during preprocessing and generates masks based on either text prompts or control points (see section 4.1, real-world dataset, for additional examples composited on black). Depth input is not needed for our pipeline: since a larger number of input images is available, the quality of the normals is sufficient without depth supervision. Normal estimation is additionally regularized through the deferred physically based renderer, in agreement with [44]. Similarly, we don't use the sparse point cloud from the SfM reconstruction as initialization (as is done for R3DGS). Finally, we do not require a two-stage training schedule; we use a single-stage optimization initialized with random positions.
### Typos and Text Quality
> Weakness 4
Thanks for the pointers. We will fix the typos and rework the text for improved readability in sections 1, 4.1, and 4.3.
> 44: Ye et. al. 3D Gaussian Splatting with Deferred Reflection, arXiv 2024
> 47: J. Schüle et al. Multi-Physical Tissue Modeling of a Human Urinary Bladder, EMBC 2021
> 48: Yang et al. 3D reconstruction from endoscopy images: A survey, Computers in Biology and Medicine 2024

---

Summary: This paper presents a method for recovering the shape and radiance transfer field (RTF) of an object from multi-view OLAT data, placing an emphasis on translucent objects that exhibit subsurface scattering (SSS). In particular, the authors extend the framework of Relightable 3D Gaussian [Gao et al. 2024] in two ways: (1) instead of estimating per-Gaussian physically-based BRDF parameters, shading is performed in image space with a neural field that produces (potentially distinct) BRDF parameters for every pixel (backprojected into 3D); (2) in addition to predicting an incoming light field, an MLP predicts residual radiance due to SSS (i.e. a radiance transfer field that accounts for SSS alone), conditioned on 3D point, viewing direction, light source direction, and light source visibility.
In order to validate the efficacy of their framework, the authors collect a large multi-view OLAT dataset of translucent objects. With quantitative/qualitative comparisons on this dataset, the authors show favorable performance against existing state of the art works for shape and RTF estimation.
Strengths: One strength of the work is the multi-view, OLAT dataset, which the authors say they plan to release. Containing 15 objects, with 100s of views and light positions (over 25000 images per object), this dataset seems like it took considerable effort to collect. I imagine it will prove quite useful in benchmarking approaches that perform shape/material estimation and relighting, esp. for translucent objects.
The quality of the results appears to be good (the qualitative results in the paper and supplement look reasonable), and quantitative results are strong compared to existing methods.
Weaknesses: While the paper is, for the most part, well-structured and easy to follow, I found parts of the presentation to be slightly substandard. The pipeline figure is essentially just a collection of text boxes. A clearer delineation between network input, encoding, network architecture, network output (as well as an illustration of the purpose of each output), would help improve the clarity of this figure. I also don't think that the model design shown in this figure is fully accurate -- roughness, base color, and metalness should be position-dependent quantities only and should not depend on incoming/light direction.
I would also have liked it if the qualitative results were slightly better organized. I thank the authors for providing a number of renders of each object in the dataset under novel lighting/view. Collecting the results in a web page with labels for each video would have made the results easier to navigate.
I was surprised that so few visual comparisons to existing approaches are shown (Fig. 5 only). The KiloOSF results look far worse than I would expect -- the voxel artifacts seem to indicate a bug in evaluation/training.
Finally, while I don't doubt that the authors have built an effective system for shape and RTF recovery, I question the novelty of the contributions that enable this system. Performing deferred shading in pixel space for forward rendering/inverse rendering is hardly a new idea. Leveraging neural fields to predict RTFs (even residual RTFs) has also been done before in the literature (e.g. in Neural Fields for Structured Lighting [Shandilya et al. 2023]). Self-supervised visibility, the third listed contribution, is part of Relightable 3D Gaussian. Given, then, that the argument seems to be that a novel *combination* of these tools enables a better-performing system I would've hoped to see more comprehensive / higher quality evaluation.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Why do the results of KiloOSF look so bad compared to their results on their own dataset? What is the source of the voxel artifacts in the rendered images?
* Can you go into more detail on the conditioning of the outgoing SSS component and the incoming light? As written, it seems like the incoming light is conditioned on the outgoing view direction, which I don't think should be the case?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors adequately discuss the method's limitations and potential negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: ### Improvements of Pipeline Overview (Fig. 2)
> Weaknesses
Thanks for the valuable feedback. We realize that the clarity of Fig. 2 in the paper can be improved; please see Fig. 5 in the PDF for an updated version. We have tried to make it clearer that base color, roughness, and metalness are properties of each 3D Gaussian, independent of incoming and light direction.
See below as well as the paragraph on “Clearer Presentation of Key Idea and Contribution” in the global response for more information on the conditioning of the residual MLP.
### Conditioning of the SSS Residual and Incident Light Prediction
> Question 2
While the incident light is independent of the view direction, the residual SSS prediction might depend on it.
In our proposed setup we jointly optimize the SSS Residual and incident light prediction.
We find that a single lightweight MLP, jointly conditioned on the 3D Gaussian parameters as well as the view direction and the ray-traced visibility, yields high quality results at interactive rates for the given task. See Tab. 2 “w/o Joint MLP” for comparison, where we split the MLP for residual and incident light, inputting only the relevant physical properties, which leads to worse results. Our reasoning is that with this parameterization the main MLP is able to learn about the global light transport and is therefore capable of providing useful hints for the two output heads predicting the SSS residual and the incident illumination, respectively. Both predictions are inherently constrained by the prediction of the incident light field, which needs to drive the PBR rendering. However, the residual prediction can compensate for some limitations of the rendering models based on the provided input. As can be seen in the results in Fig. 4, the output is still a physically plausible material parameterization that enables multiple downstream applications like relighting or material editing.
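The shared-backbone, two-head structure described above can be sketched as follows (a minimal illustration with made-up layer sizes and input names; the paper's actual architecture and conditioning may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_joint_mlp(in_dim, hidden=64):
    # One shared backbone plus two heads: SSS residual and incident light.
    return {
        "W1": rng.normal(0.0, 0.1, (in_dim, hidden)), "b1": np.zeros(hidden),
        "W_res": rng.normal(0.0, 0.1, (hidden, 3)), "b_res": np.zeros(3),
        "W_light": rng.normal(0.0, 0.1, (hidden, 3)), "b_light": np.zeros(3),
    }

def joint_mlp_forward(p, gaussian_feats, view_dir, visibility):
    # The shared trunk sees geometry features, view direction and the
    # ray-traced visibility, so both heads draw on the same (implicit)
    # notion of global light transport. The light head is kept
    # non-negative since it feeds a physically based renderer.
    x = np.concatenate([gaussian_feats, view_dir, visibility], axis=-1)
    h = np.maximum(x @ p["W1"] + p["b1"], 0.0)           # ReLU backbone
    sss_residual = h @ p["W_res"] + p["b_res"]
    incident_light = np.maximum(h @ p["W_light"] + p["b_light"], 0.0)
    return sss_residual, incident_light
```

The "w/o Joint MLP" ablation then corresponds to training two separate trunks, one per head, each seeing only its own inputs.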
### Combination of Tools
> Weaknesses
Our goal is to represent objects featuring translucent materials to be able to relight them or change individual material parameters. The focus is on enabling correct rendering of specular highlights and decomposition of the subsurface scattering component. This cannot be achieved with the existing methods and is a unique application at the time of writing.
While we build on existing tools like 3DGS [15] and R3DGS [9] and ideas like deferred shading [11] we argue that our method is more than a combination of existing ideas.
Our hybrid representation introduces an MLP for residual prediction of the subsurface scattering component, constrained by the incident light prediction as an architectural bias (as outlined above). While works like [46] show that MLPs can represent the light transport even in complex scenes, our particular parameterization has not, to our knowledge, been proposed before.
Please also refer to “Clearer presentation of key idea and contribution” in the global section for further discussion of the residual prediction and MLP parameterization.
Moreover, our work includes multiple modifications of the R3DGS framework, such as the support for OLAT illumination and the single-stage optimization. The results in Tab. 2 also show that only the combination of all our proposed components achieves such high-quality results.
Finally, we also contribute a collection of OLAT scenes featuring translucent objects together with a preprocessing pipeline in the hope that this will enable follow up work in this field.
### More Comprehensive / Higher Quality Evaluation
> Weaknesses
Please find additional ablations in the newly added Fig. 3 and 4 as well as Tab. 2. We also demonstrate the editing applications our approach enables in more detail in Fig. 2. Please also refer to the global response on “Comprehensive evaluation” for more notes on the evaluation results. Given the novelty of the proposed editing applications, there is no method to directly compare against. For a fair comparison, we select KiloOSF [39] as the only real-time method that also enables relighting and novel view synthesis. We assume that the NeRF based methods can achieve similar quality, however at a much higher optimization and inference cost, which makes comparisons on the full dataset impractical.
### KiloOSF Results
> Question 1
While we did not directly analyze the voxel artifacts, we speculate that they result from the voxelization technique used in KiloNeRF combined with the complexity of the task given the sparse views and light positions. The results shown in their paper [39] are produced by the default OSF NeRF approach, which trains for multiple days and only achieves 0.27 FPS during inference; for fairness we chose to compare against their KiloOSF variant. No visual analysis is shown for their real-time KiloOSF approach (14 FPS), so we have no qualitative ground truth for the KiloOSF results. We utilized their official implementation and documentation at https://github.com/yuyuchang/KiloOSF and saw similar artifacts on their own dataset, which became worse for non-opaque objects. Our quantitative analysis matches the results reported in their work, suggesting that these are the correct results.
### Webpage as Mode of Presentation
> Weakness
We thank the reviewer for the valuable feedback and taking the time to look into the supplementary material. We will launch a webpage including interactive viewing of the results together with the publication of the work and the dataset.
Again, we thank the reviewer for the valuable feedback. We hope that our rebuttal makes clearer how our approach uniquely enables accurate rendering and material parameterization for translucent objects, as evidenced by the new evaluations that will be added to the final paper.
> 46: Shandilya et al. Neural Fields for Structured Lighting, ICCV 2023
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal, and for answering many of my above questions. One confusion that I still have is the following: you state in your response here and to reviewer uYWZ that incoming light is conditioned on ray-traced visibility. However, in the paper on line 196 you state that ray-traced visibility is used to supervise a spherical harmonics-based visibility term v (similar to Relightable 3D Gaussian). Which of these is used to represent visibility / used to condition incoming lighting?
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s thoughtful consideration of our rebuttal and apologize for any confusion. Both statements are indeed true. The incoming light is indirectly influenced by the visibility property of the Gaussians, as this property is provided as input to the MLP, which then outputs the incident light. Therefore, the incident light is conditioned on the ray-traced visibility. To supervise the visibility represented as spherical harmonics (SH), we ray-trace it similarly to R3DGS, by selecting random samples repeatedly during training for optimization. However, unlike R3DGS, our method does not require fine-tuning before training. Our neural representation is designed to be more flexible and to compensate for model limitations, in contrast to the static representation of R3DGS, which treats the sampled visibility as ground truth that directly influences the SH-represented incident light.

---

Summary: This paper proposes 3D Gaussian Splatting for subsurface scattering objects by decomposing the scene into subsurface scattering, diffuse and specular reflections, and object shape. By using multi-view OLAT (one light at a time) data of translucent objects, the proposed method optimizes BRDF parameters attributed to Gaussians and a small MLP that outputs subsurface scattering radiance and incident light. For accurate rendering, this paper proposes to incorporate deferred shading into 3DGS for specular highlights and to explicitly handle shadowing by considering the visibility of Gaussians. The experimental results demonstrate the successful decomposition of the PBR parameters and the effectiveness of using 3DGS compared with the NeRF-based method.
Strengths: + The first method for handling subsurface scattering objects with 3DGS and enabling the decomposition of subsurface scattering, diffuse and specular reflections, and object shape of translucent objects.
+ Introduce deferred shading into 3DGS to enable accurate rendering of specular highlights.
+ Experimentally show that the proposed method successfully decomposes the PBR parameters and significantly outperforms the NeRF-based method for novel view synthesis, training time, and rendering speed.
Weaknesses: - The representation of subsurface scattering is almost the same as in the existing NeRF-based work [39], which takes points, viewing direction, and light direction as input to an MLP. The main difference is the simultaneous estimation of incident light, which is used for the physically-based rendering of specular and diffuse reflections as in [9]. Although this difference improves the rendering quality and enables the decomposition of PBR parameters, the method seems to be a combination of existing works.
- The incident light $L_{in}$ can be view-dependent, which is physically inaccurate.
Technical Quality: 4
Clarity: 2
Questions for Authors: - A clearer presentation of the key idea and its superiority beyond just a combination of existing works (NeRF-based method and relightable 3DGS) is expected.
- Why does the MLP in Eq. 6 take the covariance and normal of a 3D Gaussian as input? The outputs may not depend on them.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: ### Limited Novelty due to Recombination of Recent Techniques and Similarity to NeRF-based Approaches
> Weakness 1 & Question 1
To create the first method for reconstructing translucent objects with support for relighting and material editing, we
- performed multiple substantial modifications of the R3DGS baseline (Tab. 2),
- contribute a novel hybrid representation with an architectural bias for decomposition and global light transport understanding,
- created and will release an open dataset of OLAT scenes featuring diverse translucent objects.

Compared to the NeRF based approaches [42, 39], our hybrid representation features explicit surfaces modeled by the 3D Gaussians and a novel parameterization of a global MLP for the volumetric part.
Our reasoning for the residual setup is presented in “Clearer Presentation of Key Idea and Contribution” of the global response.
The key insight here is that a single MLP backbone together with two heads for volumetric residual and incident illumination prediction, respectively, performs best in our data setting (Tab. 2). Please also refer to the following paragraphs on the parameterization of the MLP and the incident light prediction. We will rework the relevant sections in the paper to make this reasoning clearer.
In addition to the speed-up that 3DGS [15] brings through point based rasterization, we identify the strong surface prior of 3DGS as a key advantage for our decomposition task compared to existing NeRF based approaches. Our formulation of surface based shading with a volumetric residual representing the subsurface scattering component is well suited for many editing applications. It leads to better geometry reconstruction than NeRF based approaches, as well as to a subsurface scattering representation much closer to the physical model than plain R3DGS [9] (see the examples in section 4.2 of the paper and the new ablation results in this rebuttal, specifically Fig. 4 and Tab. 2 and 3).
We find that R3DGS’s capability to render specular reflections is limited by the size of the 3D Gaussians and find a straightforward solution by using a deferred shading pipeline.
Furthermore, we adapt the framework for the OLAT setting (also see “Choice of OLAT Data and Relighting” in the global response), replacing the global environment map and the local illumination representation with a point light based representation. This enables higher frequency illumination and more direct control of the lighting compared to the Spherical Harmonics based representation of R3DGS.
Finally, we abandon the two-stage optimization approach of R3DGS and propose a single optimization stage without the need for initialization with a sparse point cloud, which can be hard to obtain for translucent objects.
Since we could not find a suitable open dataset we also contribute a collection of OLAT scenes with translucent objects as part of this work in the hope that this will enable follow up work in this field.
### Parameterization of MLP (Eq. 6)
> Question 2
Eq. 6 accurately reflects the implementation, meaning that the MLP does indeed take covariance (rotation and scale parameter) and normal of a 3D Gaussian as input. The original intent was to condition the MLP on all surface information we have from the 3D Gaussian and to possibly receive an additional gradient update on the normal parameter.
Inspired by the reviewer’s comment, we ran additional experiments; it turns out that conditioning without normals and covariance parameters is sufficient to achieve the reported quality.
### Incident Light Prediction is Physically Inaccurate
> Weakness 2
Thanks to the reviewer’s feedback we realize that the notation $L_{in}$ in Equation 6 in the paper might be a bit misleading in the context of physically-based rendering. We will update the section in the paper to make it clearer that $L_{in}$ is a prediction of our model that is close to the true physical quantity but might be slightly offset to compensate for limitations in the model.
While the incident light is independent of the view direction, the residual prediction is not. Using a single MLP backbone with two heads for volumetric residual and incident illumination prediction, respectively, performs best in our experiments (see Tab. 2, row “w/o Joint MLP”). Our hypothesis is that the processing inside the MLP includes reasoning about the global light transport inside the object, which is inherently constrained by the prediction of the incident light field (which needs to work for the PBR renderer). By relaxing the strictly physical model a little, we gain better quality output, potentially compensating for limitations of our rendering model.
Please also see the global response on “Clearer Presentation of Key Idea and Contribution” for further notes.
We want to thank the reviewer again for the thorough analysis and hope that the key ideas of our method became clearer through this rebuttal. As mentioned above, we will rework the introduction and method sections of the paper to reflect the reviewer’s feedback. We will also add the additional analysis provided in the accompanying PDF to the supplementary section. We invite the reviewer to take a look into it, as it illustrates the effects of the explanations above.
---
Rebuttal Comment 1.1:
Comment: I appreciate your response. Although I acknowledge some improvements over 3DGS and NeRF-based methods, and that they realize the first 3DGS-based method for reconstruction and decomposition of translucent objects, these improvements seem like engineering work rather than novel ideas. Thus, I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s feedback. While it is true that our approach builds upon existing 3DGS and NeRF-based methods, we would like to emphasize once more that the novelty lies in our analytical approach to combining these methods to achieve such high-quality results, as demonstrated in the ablation (Rebuttal Tab. 2) when compared to the baseline, and further in comparison to previous methods (Paper Tab. 1). While some of these findings certainly involve engineering effort, we want to highlight that the joint prediction of residual and incident light, which brings about a drastic performance increase through this specific parameterization, has not been proposed before. Furthermore, regarding the deferred shading, we provide an analytical explanation for why specular details in previous works, including NeRF-based methods for translucent objects, are underrepresented, and offer a solution to this issue. The constructed dataset is also a novel contribution to the field of translucent object reconstruction.
We want to stress that all these contributions are not only technically challenging but also offer new insights into this underrepresented field of research, enabling new capabilities that were previously unattainable.

---

Rebuttal 1:
Rebuttal: R1 pNBB - R2 wfKX - R3 F9dx - R4 uYWZ - R5 uk4p
Find cited works in the original paper, Fig. numbers refer to rebuttal PDF
We thank the reviewers for their constructive feedback and for recognizing our effort to advance an “under-explored research area in inverse rendering” [R1].
We propose a method to “reconstruct relightable objects with subsurface scattering (SSS) effects” [R1] by “decomposing the scene into subsurface scattering, diffuse and specular reflections, and object shape” [R2]. This decomposition in combination with the Gaussian Splatting (GS) “framework enables material editing, relighting, and novel view synthesis in real-time” [R4].
We appreciate the reviewers’ acknowledgement that the “proposed method successfully decomposes the PBR parameters and significantly outperforms the NeRF-based method for novel view synthesis, training time, and rendering speed” [R2]. Further, the “quantitative results are strong compared to existing methods” [R3], overall achieving “comparable results and fast rendering speed” [R4]. Additionally, the created dataset can be a “great contribution to the vision and graphics community” [R5], enabling e.g. benchmarking of “approaches that perform shape/material estimation and relighting, esp. for translucent objects” [R3].
We hope to clarify open questions and show new insights to our method (please also see attached PDF) that underline its contribution to the research field. In the following sections we address questions and comments raised by multiple reviewers.
## Clearer Presentation of Key Idea and Contribution [R2, R3]
To the best of our knowledge the proposed method is the first to show joint reconstruction and decomposition of translucent objects for high quality relighting and material editing in real-time.
We will rework the introduction section of the paper to make the key components of our contribution clearer as outlined below.
At the core we propose a hybrid representation that extends 3D Gaussian Splatting with PBR material parameters [9] and deferred shading [11] with a light-weight residual prediction network to learn shading components not modeled by the surface shader. We focus on the subsurface scattering component for translucent objects which is underrepresented in recent research on neural inverse rendering.
We constrain the network predicting the outgoing subsurface scattering (SSS) radiance by jointly predicting the incident radiance used for the PBR rendering step. The incident light prediction is effectively regularized, as it is used in a physically based renderer together with the independently optimized material parameters. We highlight this in Tab. 2 of our newly added ablations, showing the difference in performance between a joint MLP and two independent ones. By relaxing the physical definition of the residual prediction a little, we gain better quality output, potentially compensating for limitations of our rendering model.
## Choice of OLAT Data and Relighting [R1, R4]
One Light at A Time (OLAT) data makes it possible to disentangle the subsurface scattering effects and reflectance at the surface that would need additional priors otherwise.
Inverse rendering of translucent objects is a severely ill-posed problem. The OLAT setting is common here [42, 39] as it helps to recover the complex global lighting in the scene by providing an impulse response of the system; every image in the dataset is only illuminated by a single light of known position. This leads to a more accurate reproduction of highlights and SSS than if illuminated by an environment light. We acknowledge the limitations such an acquisition setup imposes and, therefore, want to make our acquired data available. Once reconstructed, our model can work in arbitrary lighting settings.
## Image Based Relighting [R1, R3, R4, R5]
In Fig. 1 we show that our method achieves image based lighting with high visual quality. First, we sample the HDR maps representing distant environment illumination. The samples don’t need to correspond to the OLAT samples used in training and could be placed much more densely. Using this approach we can generate a relit frame in about 20 seconds at medium resolution. To speed up generation, we can precompute a reflectance field assuming white light. We then compute the relit view as the sum over the reflectance field scaled by the environment illumination, before applying tone mapping for display. This runs in a fraction of a second. As can be seen in Fig. 1, an object can be rendered in different illumination settings, yielding consistent photorealistic results.
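The precomputed-reflectance-field path described above can be sketched as a weighted sum over basis renders (a minimal illustration; array names, the sampling layout, and the tone-mapping operator are our assumptions, not the paper's implementation):

```python
import numpy as np

def relight(reflectance_field, env_radiance):
    """Image-based relighting from a precomputed reflectance field.

    reflectance_field : (L, H, W, 3) renders, one per unit white light sample.
    env_radiance      : (L, 3) RGB radiance of the environment map at each
                        sampled light direction.
    """
    # Light transport is linear in the illumination, so the relit HDR image
    # is the reflectance field weighted by the environment radiance and
    # summed over all light samples.
    hdr = np.einsum("lhwc,lc->hwc", reflectance_field, env_radiance)
    # Simple Reinhard tone mapping for display (an assumption; the rebuttal
    # does not name the tone-mapping operator it uses).
    return hdr / (1.0 + hdr)
```

Since the sum is a single `einsum`, swapping in a new environment map only costs this contraction, which matches the "fraction of a second" figure once the field is precomputed.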
## Editing Applications [R4, R5]
We demonstrate the editing applications our approach enables in more detail in Fig. 2. We show:
- Relighting with environment map based lighting
- Single light illumination outside the training domain
- Color changes of base color or SSS Residual
- Editing of material parameters to make the appearance more metallic, shinier or rougher
- Changing opacity and intensity of the SSS
Further, our SSS Residual adds editing capabilities that are not available in previous methods [9].
## Comprehensive Evaluation and Ablation [R1, R3, R4]
Please find the additional ablations in Tab. 2 and visual examples in Fig. 3. The provided evidence underlines the pipeline choices presented in our method. Please note that the metrics used only have limited capability to capture the fine details in visual quality that the improved representation of specular highlights adds (see Fig. 3). Compared to previous methods we allow for detailed editing of the reflectance parameters at the surface independently of the volumetric residual while keeping and extending the relighting functionality. The analysis of the material prediction and the intermediate shading buffers highlights the physical plausibility of our results (see Fig. 4).
We would like to thank all reviewers again for their valuable feedback that already led to an improved submission.
Pdf: /pdf/6bc7e8c6d95e1cea7602e03502d9671e3a8d0085.pdf

(Source: NeurIPS_2024_submissions_huggingface, 2024)

---

Summary: This paper proposes an algorithm to reconstruct relightable objects with subsurface scattering (SSS) effects. It proposes to model SSS as a residual to surface PBR using a neural network. It also proposes to perform shading in image space (i.e. deferred shading) to improve specularity. Lastly, the SSS network takes ray-traced light visibility as input, accounting for shadowing effects for the SSS component.
Strengths: 1. Modeling SSS is an under-explored research area in inverse rendering, and the demonstrated results in this paper look promising.
2. Using a neural network to predict residuals to surface-based PBR is straightforward yet effective, as demonstrated in the paper.
Weaknesses: 1. The approach assumes known point-light, which limits its applicability in real-world scenarios
2. The novelty is overclaimed: GS-IR ([19] in the paper) has already introduced deferred/pixel-space shading to Gaussian splatting. There are also several concurrent works reaching the same conclusion as this paper (e.g. deferred shading improves specularity) [1, 2], please cite them and discuss the relations between them and this paper in the final version of this paper. However, I do acknowledge that this paper is one of the first papers that argues deferred shading improves specularity.
3. Insufficient ablation: only qualitative results on ablation are presented. It would be preferable to have quantitative results on the ablation baselines. Also, the comparison to relightable 3D Gaussians is not fair. A more fair comparison is to remove the SSS part of the proposed method, which becomes a variant of relightable 3D Gaussians that can handle known illumination.
[1] Ye et. al. 3D Gaussian Splatting with Deferred Reflection, arXiv 2024
[2] Wu et. al. DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading, arXiv 2024
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What prevents the method from working without knowing point-light?
2. Do the results in Tab. 1 include novel lighting conditions? How are light intensity set in both synthetic and real dataset? How would the method work if the light intensity changes drastically, i.e. can the model handle light intensity changes from e.g. 5 (training) to 20 (testing)?
3. Only point light is demonstrated in the results. Would the model still work under natural illumination (e.g. image-based lighting)? According to Eq. 6 it seems to be designed specifically for a single point light, which is quite limiting. It would be good to see both qualitative and quantitative results on synthetic datasets with image-based lighting.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitation is discussed, though I think a crucial part (not supporting image-based lighting) is missing. Please see questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: ### Optimization under Unknown Illumination
> Question 1
Optimization with unknown point light locations adds additional dimensions to an already under-constrained optimization problem which we consider out of scope of this work as we focus on the geometry representation and rendering part.
As this was of interest to multiple reviewers, we discussed the advantages of OLAT data for the reconstruction of subsurface scattering (SSS) in the global response. Opening up the illumination to more complex patterns and, eventually, natural illumination is an interesting research trajectory that we plan to pursue in future work. To successfully reconstruct the subsurface scattering component of the light transport, it is necessary to understand the global light transport in the scene, including the heterogeneous volume structure of an object. In theory, our method supports any representation of the incident light field. Assuming unknown illumination, however, this light field would need to be estimated in addition to the light transport, which adds further complexity to the optimization problem. New priors on either illumination or geometry would be needed here, which could potentially be obtained with the help of our acquired dataset. However, given the overall complexity of the problem, we decided to focus on the geometry and material representation first and consider improvements of the illumination representation beyond the incident light prediction out of the scope of this work.
Still, our work is robust to deviations from the actual light position and could potentially also optimize an offset to the point-light positions without much additional effort.
### Concurrent Works on Deferred Shading for 3DGS
> Weakness 2
We thank the reviewer for pointing out existing deferred shading techniques in the realm of Gaussian Splatting; we will gladly incorporate these into the related work section. The motivation to use a deferred shading pipeline is the limited capability of the shading approach of R3DGS [9] to reproduce specular highlights. See Fig. [3] for a visual comparison.
While Ye et al. [44] and Wu et al. [45] focus on the blending and propagation of normal directions between overlapping 3D Gaussians, our analysis based on the 3D Gaussian areas adds to the understanding of these effects. [45] train an SDF in parallel to improve the surface geometry, which we do not need. Also, our shading model differs from the existing ones, as we add an explicit SSS residual term to the PBR material parameters and predict detailed incident light.
### Novel Lighting in Tab. 1
> Question 2
The values shown in Tab. 1 (Paper) include novel lighting positions; we build train and test sets with a 50/50 split for the real-world and synthetic datasets (see Sec. 4.1, Paper). Still, these light positions are part of the light stage and therefore have the same distance to the object. For truly new light positions, we would like to refer to the videos in the supplementary material and Fig. 2 in the rebuttal. Varying intensity or colored light was not available within the test set, but we have now added a comparison for the synthetic scenes in Tab. 1 and Fig. 1. Illumination editing is still possible, as described next.
### Illumination Editing including Light Intensity
> Question 3
In Fig. 2 we present the various editing modes our method enables. Light intensity can be changed by scaling the incident light field and residual prediction, or the linear output of the deferred shading stage. As we only train with a fixed light spectrum and intensity, extreme edits will result in inaccurate results for the residual component, though. Extending the setting to a spectral subsurface scattering model, e.g., by training on different spectral responses in addition to the white lights, is an interesting idea for future work. We thank the reviewer for the inspiring comment.
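As an illustrative sketch of such an intensity edit (the function, the simple gamma tonemap, and all names here are our assumptions, not the paper's actual pipeline), the scaling is applied to the linear radiance before any display transform:

```python
import numpy as np

def relight_intensity(linear_radiance, scale):
    """Scale the linear (pre-tonemap) output of a deferred shading stage.

    linear_radiance: (H, W, 3) HDR image in linear color space.
    Scaling before tonemapping changes the light intensity consistently;
    scaling an already tonemapped sRGB image would not.
    """
    scaled = linear_radiance * scale
    # simple gamma tonemap for display (illustrative only)
    return np.clip(scaled, 0.0, 1.0) ** (1.0 / 2.2)
```

The same pre-tonemap scaling point is where a spectral (per-channel) scale could be applied in the spectral extension mentioned above.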
### Image Based Lighting
> Question 3
Indeed, our method is capable of image-based lighting. As this was of interest to multiple reviewers, we would like to refer to the global comment as well as Fig. 1 for qualitative and Tab. 1 for quantitative results. As can be seen, the relighting results closely resemble the ground-truth renders regarding color, light direction, and subsurface scattering. Note that the PSNR metric is affected by denoising artifacts and the noise residual from the Monte Carlo path tracing in the ground-truth data.
### Insufficient Ablation
> Weakness 3
As introduced in the global response on “Comprehensive Evaluation and Ablation”, we now provide qualitative and quantitative results for multiple ablations. As can be seen in Tab. 2 and Fig. 3, the residual prediction, the PBR rendering, and the deferred shading component all contribute to the quality of the final results. Our model is capable of representing the SSS component and specular materials, and it also enables intuitive editing of the material parameters, as shown in Fig. 2.
### Comparison against Relightable 3D Gaussian Splatting (R3DGS)
> Weakness 3
In Tab. 2 we also compare against a variant of R3DGS [9] with known illumination. Please note that the original R3DGS approach only supports a single illumination setting; it therefore differs from our OLAT setting and is not capable of learning to disentangle SSS and surface reflection.
Compared to R3DGS, we use a different light representation that models the changing illumination with explicit point-light locations, in contrast to the NeILF [37]-based approach in R3DGS. The results in Tab. 2 clearly indicate the limitations of the base model in our experimental setting. Our approach, together with the OLAT data, enables higher-frequency illumination and more direct control of the lighting compared to the Spherical Harmonics-based representation of R3DGS.
> 44: Ye et al. 3D Gaussian Splatting with Deferred Reflection, arXiv 2024
> 45: Wu et al. DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading, arXiv 2024
---
Rebuttal 2:
Comment: I think the authors addressed one of my major concerns, which is image-based lighting, though it nullifies the initial argument that the model can achieve real-time rendering. It would make the paper much stronger if the method could achieve real-time rendering for image-based lighting.
The added ablations are also sufficient.
I agree that unknown natural illumination + SSS is a challenging setup, as the community hasn't even solved the opaque-object reconstruction problem yet. The newly constructed dataset does seem to be a good contribution to the community.
On the other hand, I concur with other reviewers that the technical novelty of the work is quite limited. Judging by all these factors I would slightly increase my score but wouldn't argue strongly for acceptance if other reviewers don't agree.
---
Rebuttal Comment 2.1:
Comment: We are grateful that we could address the reviewer’s major concern and clarify open questions. The reviewer is correct that a real-time evaluation of image-based lighting (IBL) was not shown. However, we want to emphasize that our method still significantly outperforms previous NeRF-based methods in terms of speed for IBL evaluation. Additionally, as we highlighted, with pre-computation the IBL can be calculated independently and very quickly. The numbers we provided represent naive sequential computation, which could be further optimized through parallelization to achieve real-time evaluation. We thank the reviewer for their positive feedback on our added ablations and on the contribution of our newly constructed dataset.
SMART: Scalable Multi-agent Real-time Motion Generation via Next-token Prediction | Accept (poster) | Summary: The main contributions of the paper include: 1) proposing a new motion generation framework that combines road and agent trajectory tokenization schemes with a decoder-only Transformer trained on the next-token prediction task; 2) the model demonstrates zero-shot generalization ability and scaling laws across different datasets; 3) SMART achieves state-of-the-art performance on most metrics, and its single-frame inference time is within 15 milliseconds, meeting the real-time requirements of interactive simulation for autonomous driving.
Strengths: SMART introduces a novel method for generating autonomous-driving motion, which converts map and trajectory data into sequence tokens and uses a Transformer structure for prediction, achieving effective modeling of real driving behavior.
In terms of multiple metrics, SMART ranks among the top on the Waymo Open Motion Dataset leaderboard, especially in terms of inference speed, demonstrating efficient performance and zero-shot generalization ability.
Weaknesses: - The scaling-law behavior of Transformer-based models has already been demonstrated in many papers, which limits the novelty of the method proposed in this paper.
- The results in "Table 4: Ablation study on each component of SMART" indicate that "RVT", "NAT", and "NRVT" are harmful for models trained on WOMD.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the authors provide results on NuPlan and compare with the latest works?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper explores motion prediction with GPT-style networks, which has been done in many works such as StateTransformer. I recommend that the authors include more discussion of the differences and improvements compared to previous works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: “The paper explores motion prediction in GPT-style network, which has been done in many works such as StateTransformer.” + “The scale law ability of models based on transformer structure has been proven in many papers, resulting in limited novelty of the method proposed in this paper”
**A1**: Thank you for your valuable feedback. We will reference related work in the introduction of the revised manuscript and provide additional clarification.
*1.* While StateTransformer is indeed an autoregressive model, similar models such as MVTE also exist. These autoregressive models utilize diffusion or distribution-regression frameworks but do not implement discrete tokenization of map and agent input features, nor do they employ a cross-entropy-based next-token prediction paradigm.
*2.* Furthermore, prior methods like StateTransformer have limitations in validating scaling laws because their training and testing sets are drawn from the same dataset. This increases the likelihood of overfitting to a specific dataset, which inflates performance. In contrast, validation of scaling laws and generalization abilities in the LLM [1] and vision [2] domains often requires completely independent testing datasets. Our paper states in the introduction: "Generalizability means achieving satisfactory results across diverse datasets through zero-shot and few-shot learning, while scalability involves improving model performance as dataset size or model parameters increase." Therefore, our experiments on scalability and generalization are the first to be validated across multiple datasets. Additionally, as shown in the results in our note to all reviewers (Q2), we believe that the combination of cross-entropy autoregressive prediction and discrete tokenization of both map and agent features is crucial for achieving scalability and generalization across datasets, which represents our key contribution.
***
**Q2**: "The results in "Table 4: Ablation study on each component of SMART" indicate that "RVT", "NAT", and "NRVT" are harmful for models trained on WOMD."
**A2**: We would like to clarify that the ablation study in Table 4 shows that discrete tokenization may lead to some information loss, which can negatively impact performance on a single dataset. However, it significantly enhances generalization across different datasets, which aligns with our expectations. Moreover, techniques like "NAT" and "NRVT" can effectively improve model performance after discretization of the inputs. We also emphasize in the conclusion the importance of lossless discretization of map inputs for future research.
***
**Q3**: "can the author provide results on NuPlan and compare with the latest works?"
**A3**: Thank you for the valuable suggestion. Below are the results of our experiments, which are fully aligned with the Val14 benchmark in PLUTO [3]. Due to character count limitations, we regret that we cannot cite every baseline method. The results in the table demonstrate the performance of our multi-agent SMART model when directly transferred to the NuPlan challenge, with a particular focus on pure data-driven methods. Notably, SMART achieves the highest Planner Score (90.17), Progress (99.52), and Drivable (99.33) among the Pure Learning methods, outperforming other models in several metrics.
| **Pure Learning** | **Planner** | **Planner Score** | **Collisions** | **TTC** | **Drivable** | **Comfort** | **Progress** | **Speed** |
| ----------------- | ------------------- | ----------------- | -------------- | --------- | ------------ | ----------- | ------------ | --------- |
| | PDM-Open | 50.24 | 74.54 | 69.08 | 87.89 | 99.54 | 69.86 | 97.72 |
| | GC-PGP | 61.09 | 85.87 | 80.18 | 89.72 | 90.00 | 60.32 | **99.34** |
| | RasterModel | 66.92 | 86.97 | 81.46 | 85.04 | 81.46 | 80.60 | 98.03 |
| | UrbanDriver | 67.72 | 85.60 | 80.28 | 90.83 | **100.00** | 80.83 | 91.58 |
| | PlanTF | 85.30 | 94.13 | 90.73 | 96.79 | 93.67 | 89.83 | 97.78 |
| | PLUTO (w/o post.) | 89.04 | **96.18** | **93.28** | 98.53 | 96.41 | 89.56 | 98.13 |
| | SMART (w/o post.) | **90.17** | 95.29 | 90.30 | **99.33** | 99.90 | **99.52** | 97.28 |
To ensure these results are reproducible, we will update the code for the NuPlan challenge accordingly. However, as noted in the conclusion of the original paper, "As a motion generation model, the ability of SMART to migrate to planning and prediction tasks still needs to be verified, and this is our top priority for future work." The primary reason for not including this experiment in the paper is that, unlike sim-agent tasks, the characteristics of planning tasks have not been fully considered, such as the need for more ego-vehicle information as input and a greater emphasis on driving safety.
[1] Achiam, Josh, et al. "Gpt-4 technical report." *arXiv preprint arXiv:2303.08774* (2023).
[2] Bai, Yutong, et al. "Sequential modeling enables scalable learning for large vision models." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.
[3] Cheng, Jie, Yingbing Chen, and Qifeng Chen. "PLUTO: Pushing the Limit of Imitation Learning-based Planning for Autonomous Driving." arXiv preprint arXiv:2404.14327 (2024).
---
Rebuttal Comment 1.1:
Title: Thanks for the authors' response
Comment: Thanks for the authors' response, which addresses my concern. I would like to raise my score to weak accept (6).
---
Rebuttal 2:
Comment: Thanks for clearly addressing my concerns. I would like to maintain my current rating (6) and raise the "contribution" score. | Summary: In this paper, a GPT-style motion generator is developed for scalable multi-agent simulation. Through various techniques such as motion tokenization, factorized agent attention, and next-map-segment prediction, the proposed SMART framework ranked 1st on the WOSAC leaderboard for the meta metric. Further zero-shot and scalability analyses demonstrate the generalizability of the proposed SMART framework.
Strengths: 1. A unified token design for motion and map, with a straightforward decoder-only GPT structure for agent simulation.
2. Solid performance on the WOSAC leaderboard meta metric compared to other sim-agent methods.
3. Comprehensive analysis of scalability and zero-shot transferability.
Weaknesses: 1. Minor methodological differences from Trajeglish [1], MotionLM [2], and other auto-regressive decoders for motion prediction or sim agents.
2. Lack of RoadNet evaluation on the road-vector NTP task.
3. Missing experimental details, such as a clearer description of the auto-regressive sampling process, simulation performance at different scales, etc.
4. A thorough check of notation and writing is needed. For instance, I found a suspected ChatGPT sentence, "Here’s the revised version of your text with improved precision and grammar:", in line 508; also, the notation for modality should be changed to differentiate it from "Value".
Reference:
[1] Philion, J., Peng, X. B., & Fidler, S. (2024). Trajeglish: Traffic Modeling as Next-Token Prediction. In The Twelfth International Conference on Learning Representations.
[2] Seff, A., Cera, B., Chen, D., Ng, M., Zhou, A., Nayakanti, N., ... & Sapp, B. (2023). Motionlm: Multi-agent motion forecasting as language modeling. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 8579-8590).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What's the difference in decoder design compared with Trajeglish? Please clarify.
2. What is the definition and learning process for road-vector NTP? What's the performance of SMART without this augmentation?
3. How does the inference process for SMART work? Are any sampling tricks used in SMART?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for providing thoughtful review and constructive suggestions. We answer the questions as follows.
***
**Q1**: “What's the difference in decoder design compared with Trajeglish? Please clarify.” + “Minor in methodology differentiating with Trajeglish [1] and MotionLM [2], other auto-regressive decoder for motion prediction or sim agent.”
**A1**: We appreciate your feedback, and we will include a discussion of these differences in the revised manuscript.
- In addition to the discrete tokenization of roads and the decoder-only transformer structure emphasized in the paper, there are also structural differences in the decoder. As described in Section 3.2 of our paper: "In our work, we leverage a factorized Transformer architecture with multi-head cross-attention (MHCA) to decode complex road-agent and agent-agent relationships along the time series." This approach separates spatiotemporal attention into two dimensions, unlike Trajeglish and MotionLM, which compress the spatiotemporal dimensions into a single sequence. We will add these differences to the Related Work section in the revised version.
- This design difference significantly affects inference efficiency. In our model, all vehicles make simultaneous decisions for the next time step, while MotionLM and Trajeglish sequentially output each agent's prediction for the same time step. For example, generating the future scenes of 32 agents over 8 seconds at a 2 Hz frequency requires only 16 steps of autoregressive inference with our model. In contrast, the latter requires 32 * 16 = 512 autoregressive inference steps. This efficiency is a key reason why our model can complete multi-agent interactive simulations within 20 ms per step. We will include similar explanations in the discussion of the inference section in the revised version.
***
**Q2**: “Minor in the experimental details, such as clearer auto-regressive sampling process, sim performance for different scale, etc.”
**A2**: Thank you for your feedback. In the initial version of the manuscript, a description of the inference process is provided in Appendix A.1, line 495, and performance evaluations of models at different scales are given in Table 9, line 545. We acknowledge that the previous version's brief description of the inference process may have led to confusion.
*1.* We will add the following clarification to the revised version of Appendix A.1: Specifically, our model operates at 2 Hz, and we interpolate the results to 10 Hz for evaluation. We use 1 second of history along with the current frame as conditions and predict the following 8 seconds. We can further factorize the traffic simulation task as:
$$
p(S_1, \ldots, S_T \mid C) = \prod_{1 \leq t \leq T} p(S_t \mid C, S_{1 \ldots t-1})
$$
where $S_t \equiv \{s_{t}^{1},\ldots,s_{t}^{N_{A}}\}$ is the set of all agents' states at timestep $t$. As indicated by the formula, our approach samples the future motion $S_t$ of all vehicles in the current scenario simultaneously for the next time step. In contrast, previous methods like Trajeglish and MotionLM sequentially sample each vehicle's future motion for the next time step. The main reasons for our approach are twofold: first, it increases the model's inference efficiency by a factor of $N_{A}$, where $N_{A}$ is the number of agents in the scene; second, in real traffic interactions it is challenging to define a reasonable order of vehicle interactions within the same time step, as vehicles typically plan their intentions concurrently.
*2.* Regarding the sampling process, we do not employ complex strategies. As stated in line 502, "To balance realism and diversity, we use top-5 sampling at every step during the simulation." Given the focus of this article on the generalization and scalability of the model, we have achieved strong results in specific scene generation without extensively exploring detailed sampling tricks.
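To make the sampling step concrete, below is a minimal NumPy sketch of simultaneous top-k token sampling for all agents at one simulation step (function and variable names are our own illustration, not the released SMART code):

```python
import numpy as np

def topk_sample(logits, k=5, rng=None):
    """Sample one motion token per agent from its top-k logits.

    logits: (num_agents, vocab_size) next-token logits, one row per
    agent, so every agent is sampled in the same autoregressive step
    (one call per simulation step, not one call per agent).
    """
    if rng is None:
        rng = np.random.default_rng()
    tokens = np.empty(logits.shape[0], dtype=int)
    for a, row in enumerate(logits):
        top = np.argpartition(row, -k)[-k:]      # indices of the k largest logits
        p = np.exp(row[top] - row[top].max())    # softmax restricted to the top-k
        tokens[a] = rng.choice(top, p=p / p.sum())
    return tokens
```

Under this scheme, an 8-second rollout at 2 Hz takes 16 calls to the sampler regardless of the number of agents, matching the step-count argument above.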
***
**Q3**: "Lack of RoadNet evaluation on the road-vector NTP task” +“Whats the performance of SMART without this augmentation? "
**A3**: We have included relevant explanations in our note to all reviewers Q1 and will make appropriate modifications in the revised manuscript.
***
**Q4**: “What is the defination and learning process for road-vector NTP?“
**A4**: Unlike sequential agent motions, road vectors form a graph. To address this, we extract the original topological information of the roads and model the road-vector tokens as sequences based on their predecessor-successor connections. This approach requires RoadNet to understand the connectivity and continuity among unordered road vectors. The loss function for a single tokenized road polyline is defined as:
$$
\text{loss}(\gamma) = -\sum_{j=1}^{J}\sum_{i=1}^{V_{r}} \mathbb{1}\left[r_{i}^{j+1} = r_{gt}^{j+1}\right] \log p_{\gamma}(r_{i}^{j+1} \mid r^{1:j})
$$
where $p_{\gamma}(r_{i}^{j+1} \mid r^{1:j})$ denotes the categorical distribution over the road-vector vocabulary of size $V_{r}$ predicted by RoadNet, parameterized by $\gamma$; $J$ is the length of a complete polyline before it is split into road-vector tokens; $r^{1:j}$ are the embeddings of the preceding road tokens; and $r_{i}^{j+1}$ is the candidate next road-vector token. This loss function ensures that RoadNet learns to predict the correct next road-vector token given the preceding tokens, thereby capturing the spatial continuity and connectivity within the road network. The loss function and its explanation will be included in the revised version of the paper.
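In code, this objective reduces to standard cross-entropy over the road-vector vocabulary; here is a minimal NumPy sketch (names and shapes are our illustration, not the released implementation):

```python
import numpy as np

def road_ntp_loss(logits, next_token_ids):
    """Average next-token cross-entropy along one tokenized polyline.

    logits:         (J, V) predicted scores for the next road-vector
                    token at each of the J positions of the polyline.
    next_token_ids: (J,) ground-truth index of the next token.
    The indicator sum over the vocabulary in the formula collapses to
    selecting the log-probability of the ground-truth token.
    """
    z = logits - logits.max(axis=1, keepdims=True)           # numerically stable softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(next_token_ids)), next_token_ids].mean()
```

A framework implementation would typically delegate this to a built-in cross-entropy loss; the sketch only makes the correspondence to the formula explicit.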
***
**Q5**: “A thorough check for notations and writing”
**A5**: We apologize for the oversight. We will correct the noted issues in the revised version, including the suspected ChatGPT sentence in line 508 and the notation for modality to clearly differentiate it from "Value."
---
Rebuttal Comment 1.1:
Comment: Thanks for clearly addressing my concerns. I would like to maintain my current rating (6) and raise the "contribution" score. | Summary: This paper presents SMART, a model for multi-agent traffic simulation. The approach is based on a decoder-only transformer architecture that predicts all agents' motion tokens autoregressively over time. The architecture makes use of factorized attention layers (over map, agents, and time) and relative positional encodings between agent and map tokens. As in Trajeglish, SMART uses the K-disk algorithm to tokenize agent motion trajectories. However, SMART also tokenizes road vectors and performs next-token prediction on road tokens for pre-training. Experiments on the Waymo Open Sim Agents Challenge show that SMART achieves state-of-the-art performance. In addition, SMART's tokenization strategy allows it to generalize better between datasets (NuPlan and WOSAC).
Strengths: - This paper presents a simple architecture for multi-agent traffic simulation that achieves state-of-the-art results in WOSAC. The architecture is quite similar to Trajeglish. However, SMART uses a decoder-only architecture, also tokenizes the road vectors, and uses factorized self-attention with relative positional encodings. In the paper, the authors demonstrate the efficacy of tokenizing road vectors for generalizing between datasets.
- The scaling law and zero-shot generalization experiments are interesting. To my knowledge, they are the first experiments of their kind in multi-agent traffic simulation. The zero-shot experiments also demonstrate the efficacy of tokenizing road vectors and using noised tokens for data augmentation during training, which validate the authors' design choices and provide interesting insights for others in the field.
- The ablation studies offer interesting insight into SMART's design choices; e.g., Table 7.
- Code is available as a part of the submission.
- The paper is generally well-written and easy-to-follow.
Weaknesses: - Without comparisons to other methods, it is difficult to interpret the significance of the scaling law and zero-shot generalization experiments. For example, while it appears that SMART's performance scales with its number of parameters, we cannot conclude whether it does so better than other architectures. Likewise, while SMART can zero-shot generalize from NuPlan to WOSAC, we cannot conclude whether it does so better than other architectures. This limits the significance of these experiments.
- The paper's results can be made stronger with more statistically significant experiments. For example, in Table 9, it is unclear whether the differences between SMART 8M, 36M, 96M are statistically significant or within the bounds of noise, given how close the numbers are.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How much does road vector next token prediction contribute to SMART's performance?
2. In Table 1, why is SMART's minADE higher than that of Trajeglish? This doesn't seem to match the results in Table 6 of the appendix.
3. Table 4 shows the efficacy of certain architecture choices on generalizing from one dataset to another. How do these architecture choices affect the model's ability to learn from multiple datasets? Do these choices matter in the setting where you have access to NuPlan, WOSAC, and the proprietary dataset?
4. L308: Why does dataset size limit your architecture size to 100 million parameters? Is this a matter of overfitting? If so, I think that it would be interesting to make note of this in the appendix of your paper.
Other:
- In L508, there may be an unintended sentence "Here’s the revised version of your text with improved precision and grammar:"
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper adequately addresses limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for providing thoughtful review and constructive suggestions. We answer the questions as follows.
***
**Q1**: How much does road vector next token prediction contribute to SMART's performance?
**A1**: We have included relevant explanations in our note to all reviewers and will make appropriate modifications in the revised manuscript.
***
**Q2**: In Table 1, why is SMART's minADE higher than that of Trajeglish? This doesn't seem to match the results in Table 6 of the appendix.
**A2**: We appreciate your attention to this detail. The minADE in Table 1 reflects the correct evaluation results from the WOSAC 2023 leaderboard. We mistakenly reported the minADE metric from WOSAC 2024 in Table 6. We appreciate your understanding and have corrected this in the revised manuscript.
***
**Q3**: “Table 4 shows the efficacy of certain architecture choices on generalizing from one dataset to another. How do these architecture choices affect the model's ability to learn from multiple datasets? Do these choices matter in the setting where you have access to NuPlan, WOSAC, and the proprietary dataset?” + “Without comparisons to other methods, it is difficult to interpret the significance of the scaling law and zero-shot generalization experiments.”
**A3**: Thank you very much for your feedback. We agree that including relevant results can enhance the reliability of our main contributions. We have added these results in our note to all reviewers Q2. We regret that due to the high GPU cost of training large-scale models across datasets, we were unable to conduct ablation comparisons for each sub-module during our validation of scaling laws. We only compared SMART W/O, SMART, and the earlier replicated MVTE method. Nevertheless, the existing results demonstrate that discrete tokenization is an effective method for bridging dataset gaps. Additionally, autoregressive models utilizing cross-entropy classification loss are crucial for scalability.
***
**Q4**: “Why does dataset size limit your architecture size to 100 million parameters? Is this a matter of overfitting? If so, I think that it would be interesting to make note of this in the appendix of your paper.”
**A4**: This conclusion is based on our observations during training. Specifically, we found that for models with 10 million parameters, performance improvements plateau when the total training token count reaches around 0.1 billion. For 30 million parameter models, this plateau occurs after training on approximately 0.7 billion tokens, while the 100 million parameter model continues to show improvements with full data. Due to constraints in training resources and dataset size, we did not pursue training larger models. Regarding the potential for overfitting with larger models, our experience suggests that when a model with a higher parameter count is trained on a single dataset for multiple epochs, its generalization ability tends to decline. We will include this discussion in the appendix of the revised manuscript.
***
**Q5**: “There may be an unintended sentence "Here’s the revised version of your text with improved precision and grammar:".”
**A5**: We apologize for the oversight. This sentence was unintentionally included and will be removed in the revised version of the paper.
***
**Q6**: “The paper's results can be made stronger with more statistically significant experiments. For example, in Table 9, it is unclear whether the differences between SMART 8M, 36M, 96M are statistically significant or within the bounds of noise, given how close the numbers are.”
**A6**: We appreciate your suggestion regarding the need for more statistically significant experiments. Given the extensive test dataset of 44,920 samples in the actual WOSAC evaluations, statistical noise is minimal, with metric fluctuations typically around ±0.02. Furthermore, the models with different scales are already performing very close to the maximum possible score on the Waymo sim agent metrics (where the ground truth maximum score is 0.80), meaning the differences in performance metrics have become less pronounced. Consequently, the performance improvements between SMART 8M, 36M, and 96M models may not appear as significant due to these near-maximal scores.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions. Since I have no further concerns, I would like to maintain my "accept" rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for acknowledging our additional experiments and providing positive feedback! Your constructive comments and suggestions are very helpful in improving our paper quality. Thanks! | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their reviews. We are incorporating feedback into our paper and will post direct responses to each reviewer's comments and questions. First, we would like to address the common concerns raised by multiple reviewers:
***
**Q1**: “How much does road vector next token prediction contribute to SMART's performance?” + “Lack of RoadNet evaluation on the road-vector NTP task”
**A1**: We appreciate the insightful question regarding this aspect. In our original Table 4, the NRVT was indeed evaluated alongside the road vector next token prediction. To address this, we conducted ablation studies to isolate the impact of this component.
| Model Number | RVT | NAT | NRVT | RVNTP | kinematics (WOMD) | interactive (WOMD) | map (WOMD) | kinematics (NuPlan) | interactive (NuPlan) | map (NuPlan) |
| :----------: | ---- | ---- | ---- | ----- | ----------------- | ------------------ | ---------- | ------------------- | -------------------- | ------------ |
| M1 | | | | | **0.459** | **0.827** | **0.857** | 0.376 | 0.593 | 0.603 |
| M2 | ✓ | | | | 0.434 | 0.807 | 0.840 | 0.389 | 0.696 | 0.724 |
| M3 | ✓ | ✓ | | | 0.448 | 0.809 | 0.848 | 0.413 | 0.750 | 0.743 |
| M4 | ✓ | ✓ | ✓ | | 0.437 | 0.801 | 0.837 | 0.411 | 0.747 | 0.741 |
| M5 | ✓ | ✓ | | ✓ | 0.453 | 0.813 | 0.853 | 0.413 | 0.780 | 0.785 |
| M6 | ✓ | ✓ | ✓ | ✓ | 0.453 | 0.803 | 0.851 | **0.416** | **0.785** | **0.797** |
Our comparison between models M4 and M3 indicates that introducing the NRVT module alone harms the model's overall performance. This finding contrasts with the positive effects observed with NAT on model performance. We hypothesize that this discrepancy arises from a lack of a dedicated training task in map representation learning that guides the model to enhance its understanding of map information. Additionally, the comparison among M4, M5, and M6 shows that the combination of these elements leads to an improvement in overall model performance.
***
**Q2**: “Without comparisons to other methods, it is difficult to interpret the significance of the scaling law and zero-shot generalization experiments.” + “The scale law ability of models based on transformer structure has been proven in many papers, resulting in limited novelty of the method proposed in this paper”
**A2**: Thank you for your valuable feedback. We recognize the importance of comparative methods to contextualize the significance of our scaling law and zero-shot generalization experiments. Initially, we replicated the MVTE method on the simagent task, and the relevant results are included in the attached PDF.
The term "SMART w/o" refers to the SMART model without the road vector tokenization and noise strategies proposed in this paper. To ensure fairness in our experiments, we adjusted all model parameters to the 90-100M range. An interesting observation is that, although our proprietary dataset contains more data than the NuPlan dataset, the performance of MVTE trained on our dataset was inferior to that on NuPlan. This suggests that models based on distribution regression may overfit to specific datasets. From the SMART w/o results, it is evident that the model's generalization performance is limited. However, including incremental data improves performance compared to using a single training dataset. Our findings indicate that discrete tokenization is an effective method for bridging dataset gaps. Additionally, autoregressive models utilizing cross-entropy classification loss are crucial for scalability, paralleling the significant scaling capabilities observed in large language models (LLMs). We plan to supplement this section with additional experiments in the appendix of the revised manuscript.
***
Pdf: /pdf/f83fd0e200f7da424574647badfdfd07c9acb7a3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Understanding Extrapolation: a Causal Lens | Accept (poster) | Summary: This work addresses the challenge of extrapolation in scenarios where only a few target samples are available. It aims to provide a theoretical understanding and methods for effective extrapolation without needing a target distribution within the training support. The approach involves a latent-variable model based on the minimal change principle in causal mechanisms. The study identifies conditions under which identification is possible even with a single off-support target sample. Experiments with synthetic and real-world data validate the theoretical and practical aspects of the findings.
Strengths: 1. The paper gives relatable motivating examples making it approachable for readers.
2. The paper is well-written, with a few minor typos.
3. The work tackles a very relevant problem in today's world.
Weaknesses: 1. Table 2 appears to omit results from TeSLA-s, which outperforms their TeSLA+SC. Thus their claims of SOTA are in question. See Table 1 of:
* Devavrat Tomar, Guillaume Vray, Behzad Bozorgtabar, and Jean-Philippe Thiran. Tesla: Test-time self-learning with automatic adversarial augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20341–20350, 2023.
2. The paper lacks expected sections. There is no related work, making it difficult to contextualize their work within the field. Additionally, some of the introduction could be better placed in a background section, specifically lines 41-67.
3. The authors do not provide any source code.
4. More elaboration on the limitations would enhance the paper. The authors seem to have overlooked reflecting sufficiently on potential limitations of their method. For instance, since everything operates in latent space, the method would only be applicable to models that possess a latent representation.
5. Minor typos:
* What do the numbers in Table 1 represent? Accuracy? AUROC?
* Line 127: Should it not be, "... values c as x_{src}"?
* Line 127: Missing period.
* Line 321: It would be helpful to provide a link to the code for MAE-TTT.
* Line 356: Missing period.
Technical Quality: 2
Clarity: 2
Questions for Authors: How would you apply your method for regression problems? Do you think it would perform well?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See weakness 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and the time dedicated to reviewing our work. We address your concerns and questions as follows.
>W1: “Table 2 appears to omit results from TeSLA-s.”
Thank you for the comment. Prompted by your remark, we performed additional experiments applying our sparsity constraint (SC) module to TeSLA-s and present the results as follows. We repeated our experiments over 3 random seeds and adopted the hyper-parameters from TeSLA+SC without any tuning.
| Model | CIFAR10-C | CIFAR100-C | ImageNet-C |
|---------|---------------|---------------|---------------|
| Tesla-s | 12.1 | 37.3 | 53.1 |
| Tesla-s+SC | **11.7 ± 0.01** | **37.0 ± 0.06** | **50.9 ± 0.15** |
We can observe that our module can consistently benefit existing methods (especially on the hardest dataset ImageNet-C), thus showcasing the applicability of our theoretical insights.
We have included this result into Table 2 in our revised manuscript -- thank you for pointing us to this!
We didn't include TeSLA-s in the initial draft because TeSLA-s relies on additional access to the source dataset during the adaptation (please also see this in Table 1 of Tomar et al.), which may not be as realistic and challenging as the source-free setting for TeSLA and TeSLA+SC.
>W2: “The paper lacks expected sections.”
Thank you for the constructive suggestion to help us improve the readability of our paper! In the submission, we deferred the related work section to Appendix A1. In light of your suggestion, we have condensed and placed it as Section 2 in our revision (original lines 84-110) to aid contextualization (we reorganized spacing in Sections 4 and 5 and moved Section 5.3 to the appendix to make space).
Further, given your suggestion, we have condensed and merged formalisms (especially in lines 46-53 and 56-60) into the beginning of Section 2 in the revised version to increase the readability.
>W3: source code.
Thanks for the comment. We now provide source code for TeSLA+SC and have already passed them to the AC, following NeurIPS guidelines. Please let us know if you have any questions regarding the code, thank you!
>W4: “More elaboration on the limitations.”
Thank you for raising this concern. In light of your suggestion, we have expanded the limitation section in our revised paper. The added content is as follows.
“On the theory aspect, the Jacobian norm utilized in Theorem 3.1 only considers the global smoothness of the generating function and thus may be too stringent if the function is much more well-behaved/smooth over the extrapolation region of concern. Therefore, one may consider a refined local condition to relax this condition.
On the empirical side, our theoretical framework entails learning an explicit representation space. Existing methods without such a structure may still benefit from our framework but to a lesser extent. For instance, TeSLA (Table 2) receives a smaller boost compared to MAE-TTT (Table 3).
Also, our framework involves several loss terms including reconstruction, classification, and the likelihood of the target invariant variable. A careful re-weighting over these terms may be needed during training.”
>W5: "minor typos".
Thank you so much for pointing them out! 1) Yes, it is classification accuracy -- we have added it to Table 1 caption. 2,3) We've reworded the sentence as "we need to identify the target sample $ \mathbf{x} _{\mathrm{tgt}} $ with source samples $ \mathbf{x} _{\mathrm{src}} $ that share the same invariant variable values with the target sample, i.e., $ \mathbf{c} _{\mathrm{src}} = \mathbf{c} _{\mathrm{tgt}} $." 4,5) Thanks -- added both to the revision!
>Q1: “regression problems.”
Thank you for noting this. As our theory doesn’t place specific assumptions on the conditional distribution $ p( y | \mathbf{c} ) $, the framework is rather flexible to accommodate regression problems.
Specifically, the procedure to learn the disentangled invariant variable $ \hat{\mathbf{c}} $ through objectives (3) or (4) is totally unsupervised and thus agnostic to $y$’s distribution. After learning $ \hat{\mathbf{c}}$, we can choose to train a regressor on pairs $ (\hat{\mathbf{c}}, y) $ from the source distribution.
Thanks to your question, we have included the following synthetic experiments on regression.
**Data Generation:** The regression target $y$ is generated from a uniform distribution $U(0,4)$. We sample 4 latent invariant variables $\mathbf{c}$ from a normal distribution $N(y, I_c)$. Two changing variables in the source domain $\mathbf{s} _{\mathrm{src}}$ are sampled from a truncated Gaussian centered at the origin. In the target domain, changing variables $\mathbf{s} _{\mathrm{tgt}}$ are sampled at multiple distances (e.g., $\{18, 24, 30\}$) from the origin. Observations $\mathbf{x}$ are generated by concatenating $\mathbf{c}$ and $\mathbf{s}$ and feeding them to a 4-layer MLP with ReLU activation. We generate 10k samples for training and 50 target samples for testing (one target sample accessed per run).
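For concreteness, the data-generation recipe above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the hidden widths, weight scales, and the clipping range used for truncation are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D_C, D_S, D_X = 4, 2, 8   # invariant / changing latent dims, observed dim (illustrative)

# A fixed random 4-layer ReLU MLP stands in for the shared generator g.
dims = [D_C + D_S, 16, 16, 16, D_X]
weights = [rng.normal(0.0, 0.5, size=(a, b)) for a, b in zip(dims[:-1], dims[1:])]

def generate(z):
    for W in weights[:-1]:
        z = np.maximum(z @ W, 0.0)   # ReLU hidden layers
    return z @ weights[-1]

def sample(n, shift_distance=None):
    y = rng.uniform(0.0, 4.0, size=n)                 # regression target ~ U(0, 4)
    c = rng.normal(y[:, None], 1.0, size=(n, D_C))    # invariant latents ~ N(y, I)
    if shift_distance is None:
        # source: truncated Gaussian around the origin (clipping as a simple truncation)
        s = np.clip(rng.normal(0.0, 1.0, size=(n, D_S)), -2.0, 2.0)
    else:
        # target: changing latents placed at a fixed distance from the origin
        u = rng.normal(size=(n, D_S))
        s = shift_distance * u / np.linalg.norm(u, axis=1, keepdims=True)
    return generate(np.concatenate([c, s], axis=1)), y

x_src, y_src = sample(10_000)                    # source training set
x_tgt, y_tgt = sample(50, shift_distance=18.0)   # off-support target samples
```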
**Model:** We make two modifications on the classification model in the paper. First, we change the classification head to a regression head (the last linear layer). Second, we replace the cross-entropy loss with MSE loss. We fix the loss weights of MSE loss and KL loss at 0.1 and 0.01 for all settings, respectively, and keep all other hyper-parameters the same as in the classification task. We use MSE as the evaluation metric.
**Results:** The results are summarized below, which indicate that the proposed method can be extended to the regression setting.
| Dense Shift Distance | 18 | 24 | 30 |
|---------|----------|----------|----------|
| Baseline| 1.6400 | 2.4430 | 3.2627 |
| Our Method | 1.4006 | 1.6039 |1.6812 |
We are also working on more settings and will update you once the results are available.
---
Please let us know if you have further questions -- thank you so much!
---
Rebuttal Comment 1.1:
Title: More results on the regression setting
Comment: We now provide more results on the regression task we’ve included in the rebuttal. This setup corresponds to the sparse shift setting in our theory where only two out of six dimensions of $\mathbf{x}$ are influenced by the changing variable $\mathbf{s}$. In comparison, the setting we’ve included in the rebuttal corresponds to the dense shift case. The distinction is described in lines 300-304. The other setups are identical. We directly adopt the hyper-parameters from the dense shift setting without modifications. The results are also measured in MSE (the lower the better).
| Sparse Shift Distance | 18 | 24 | 30 |
|----------------------|--------|--------|--------|
| Baseline | 1.835 | 3.323 | 5.841 |
| Our Method | 1.145 | 1.476 | 1.596 |
We can observe that like the dense shift case, our method outperforms the baseline consistently and maintains the performance over a wide range of shift severities. In contrast, the baseline that directly uses all the feature dimensions degrades drastically when the shift becomes severe. This indicates that our approach can indeed identify the invariant part of the latent representation, validating our theoretical results.
---
Please let us know if we have resolved your concerns – thank you!
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 5kN3,
As the discussion deadline approaches, we are wondering whether our responses have properly addressed your concerns? Your feedback would be extremely helpful to us. If you have further comments or questions, we hope for the opportunity to respond to them.
Many thanks,
7667 Authors | Summary: The paper addresses the problem of out-of-support extrapolation and presents identification results within a latent variable framework. This framework resembles invariant learning, where one subset of latent variables directly causes the labels, while another subset, termed style latents, undergoes a distribution shift at test time. The authors provide identification results for scenarios where this distribution shift affects the entire image (global shift case) and where it affects only specific regions of the image (local shift case). Additionally, they connect their identification results to the entropy maximization objective used in test time adaptation (TTA) algorithms. These theoretical aspects are verified through experiments on popular TTA benchmarks, such as CIFAR-C and ImageNet-C.
Strengths: * The identification results for out-of-support extrapolation presented in the paper appear to be novel, to the best of my knowledge.
* The paper is overall well-written, with clearly explained assumptions required for the identification results, and detailed descriptions of the experimental setup and results.
Weaknesses: * My main concern is with the technical soundness of the paper, particularly regarding certain claims and connections. I believe the connection between the identification results and the test-time adaptation (TTA) objective is problematic. Specifically, Theorems 3.2 and 3.4 require that the learned distribution of the data ($\hat{p}(x)$) matches the true distribution ($p(x)$). However, TTA methods are not generative models and do not estimate data distributions, so they cannot enforce the constraint $p(x) = \hat{p}(x)$. Therefore, it is unclear how the identification results provide insights into TTA methods. Identification concerns the existence of a unique solution for perfect estimation under the learning objective, but the learning objectives for the identification analysis and TTA methods are different. While the authors discuss this in Section 3.3, the issue is deeper: they do not comprehensively address the density-estimation constraint and its connection to test-time adaptation.
* The definition of identification considered by the authors (Definition 2.1) is problematic because it does not relate the estimated latents ($\hat{c}$) to the true latents ($c$). This implies that the estimated latents could be an arbitrary function of the true latents, which is concerning as it makes it unclear why such identifiability would be desirable. The identification criteria state that the estimated content latents are the same whenever the true content latent variables are the same. However, this does not necessarily imply a meaningful relationship between the estimated and true latents. For instance, block identification [1] ensures that the estimated content latents are a function of only the true content latents, establishing a clear relationship. If we cannot relate the estimated latents to the true latents, it is unclear whether the inferred content latents ($\hat{c}$) are genuinely informative about the true content latents ($c$). This lack of a meaningful connection undermines the practical value of the identifiability results.
References:
[1] Lachapelle, Sébastien, Divyat Mahajan, Ioannis Mitliagkas, and Simon Lacoste-Julien. "Additive decoders for latent variables identification and cartesian-product extrapolation." Advances in Neural Information Processing Systems 36 (2024).
[2] Khemakhem, Ilyes, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. "Variational autoencoders and nonlinear ica: A unifying framework." In International conference on artificial intelligence and statistics, pp. 2207-2217. PMLR, 2020.
[3] Zimmermann, Roland S., Yash Sharma, Steffen Schneider, Matthias Bethge, and Wieland Brendel. "Contrastive learning inverts the data generating process." In International Conference on Machine Learning, pp. 12979-12990. PMLR, 2021.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weaknesses section in my review above for more details regarding my concerns and queries.
* Why does the identification analysis apply for TTA when its learning objective is different from the learning objective considered in the identification analysis?
* How do the identification criteria considered in this study relate to standard identification criteria used in causal representation learning/ non-linear ICA [1, 2, 3]? Does satisfying the proposed criteria imply we learn meaningful content variables such that they are related to the true content variables?
* Following the results in Table 2 regarding the effect of adding sparsity regularization, it is unclear how TeSLA + SC provides an improvement over TeSLA, as the difference in mean performance between the two approaches is within the standard error. The authors should consider revising line 357 to accurately reflect this observation.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have properly addressed the limitations and impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and the valuable time you have dedicated to our work.
It seems that the reviewer might have seen a previous version of this manuscript. In that case, please allow us to highlight that this submission is significantly different: in the earlier instance, we tried justifying TTA with our theory, which we later realized was not optimal.
In this submission, the main contribution is to provide a fundamental understanding of extrapolation and discuss principled ways to address it, independent of existing TTA algorithms. To support it, we substantially rewrote the introduction, problem formulation, and analysis of the identification theory. Further, we implemented our framework with synthetic experiments in Sec 4 to rigorously validate our theory. Additionally, for real-world experiments, we base our implementation on the autoencoder-based MAE-TTT, whose reconstruction loss facilitates matching the estimated distribution $\hat{p}(\mathbf{x})$ with the true marginal distribution $p(\mathbf{x})$ in our objective.
Thus, we feel the review comments mainly apply to the earlier version, not this one. We apologize for any confusion the multi-version problem may have caused and appreciate your further feedback.
>W1& Q1: “...the connection between the identification results and the test-time adaptation (TTA) objective is problematic..”
Thank you for raising this concern. Please kindly note that we do not claim to justify or explain existing TTA methods with our identification theory. Instead, we discuss the relationship between our theoretical framework on extrapolation and TTA methods to identify gaps (lines 281-286) and explore how our ideas may benefit them. Our experiments show that incorporating our insights (aligning the invariant variable in Sec 5.1 and sparsity in Sec 5.2) can improve TTA methods.
In Sec 3.2 and the main experiments in Sec 5.1, we focus on the autoencoder-based model MAE-TTT, where the reconstruction loss facilitates matching the estimated distribution $\hat{p}(\mathbf{x})$ with the true marginal distribution $p(\mathbf{x})$, especially in the extreme case where the reconstruction is perfect.
Even with our modifications, TTA algorithms may not fully adhere to the theoretical framework. Nonetheless, we hope this serves as a meaningful step towards developing more principled extrapolation methods for real-world datasets.
>W2 & Q2: “The definition of identification considered by the authors (Definition 2.1) is problematic.”
Thank you for the question. Our identification notion (Definition 2.1) implies the block-wise identifiability [1] – there exists an invertible map from the true invariant variable $\mathbf{c}$ to the estimated invariant variable $\hat{\mathbf{c}}$. In light of your feedback, we’ve added the following text to line 120 to make it clearer: “which is equivalent to the blockwise identifiability in prior work [1].”
We give a proof that Definition 2.1 implies block-wise identifiability.
As both the generating function $g$ and the estimated one $\hat{g}$ are invertible (Assumption 3.1), we have an invertible map $ h: ( \mathbf{c}, \mathbf{s} ) \mapsto ( \hat{\mathbf{c}}, \hat{\mathbf{s}} ) $.
We first show that Definition 2.1 implies that $\hat{ \mathbf{c}}$ depends only on $ \mathbf{c}$. Suppose, for contradiction, that $\hat{ \mathbf{c}}$ depends on both $ \mathbf{c}$ and $ \mathbf{s}$.
Then there would exist $ \mathbf{c} _{o}$, $ \mathbf{s} _{1}$, and $ \mathbf{s} _{2}$ (with $ \mathbf{s} _{1} \neq \mathbf{s} _{2}$) such that: $h( \mathbf{c} _{o}, \mathbf{s} _{1}) = (\hat{ \mathbf{c}} _{1}, \hat{ \mathbf{s}} _{1})$ and $h( \mathbf{c} _{o}, \mathbf{s} _{2}) = (\hat{ \mathbf{c}} _{2}, \hat{ \mathbf{s}} _{2})$ where $\hat{ \mathbf{c}} _{1} \neq \hat{ \mathbf{c}} _{2}$.
However, this contradicts Definition 2.1, because we have $ \mathbf{c} _{o} = \mathbf{c} _{o} $, but $\hat{\mathbf{c}} _{1} \neq \hat{\mathbf{c}} _{2}$. Therefore, the supposition must be false, and $\hat{ \mathbf{c}}$ must depend only on $ \mathbf{c}$.
Now we define $h_{c}: \mathbf{c} \mapsto \hat{ \mathbf{c}}$ as follows: for any $ \mathbf{c}$, choose any $ \mathbf{s}$ and let $h_{c}( \mathbf{c}) = \hat{ \mathbf{c}}$, where $(\hat{ \mathbf{c}}, \hat{ \mathbf{s}}) = h( \mathbf{c}, \mathbf{s})$. This is well-defined because we've proven that $\hat{ \mathbf{c}}$ only depends on $ \mathbf{c}$, so the choice of $ \mathbf{s}$ doesn't matter.
We finally show that $h_{c}$ is invertible by noting the following.
$h_{c}$ is injective: Let $ \mathbf{c} _{1}, \mathbf{c} _{2}$ be in the domain of $h _{c}$. If $h _{c}( \mathbf{c} _{1}) = h _{c}( \mathbf{c} _{2})$, then $\hat{ \mathbf{c}} _{1} = \hat{ \mathbf{c}} _{2}$ by definition. By Definition 2.1, this implies $ \mathbf{c} _{1} = \mathbf{c} _{2} $. Therefore, $h _{c}$ is injective.
$h_{c}$ is surjective: Let $\hat{ \mathbf{c}}$ be in the image of $h_{c}$. Since $h$ is invertible, there exists $( \mathbf{c}, \mathbf{s})$ such that $h( \mathbf{c}, \mathbf{s}) = (\hat{ \mathbf{c}}, \hat{ \mathbf{s}})$ for some $\hat{ \mathbf{s}}$. By definition of $h_{c}$, $h_{c}( \mathbf{c}) = \hat{ \mathbf{c}}$. Therefore, $h_{c}$ is surjective.
Please let us know if we have fully addressed your concerns and we’d be delighted to discuss further.
>Q3: “it is unclear how TeSLA + SC provides an improvement over TeSLA, as the difference in mean performance between the two approaches is within the standard error.”
Thanks for the comment. We were wondering if there was a misreading – the improvements of TeSLA+SC over TeSLA are $0.4$, $0.2$, and $0.5$, all larger than the std $0.1$.
As we acknowledge in lines 357-359, these improvements, though modest, are consistent and thus demonstrate the applicability of our approach beyond autoencoding TTA approaches like MAE-TTT.
---
We are eager to hear your feedback. We’d deeply appreciate it if you could let us know whether your concerns have been addressed.
---
Rebuttal 2:
Comment: Thanks for your detailed response! I have a better understanding of the implications of the theoretical results and how the authors intended to connect them with prior learning methods for TTA (hence I have adjusted my rating as well). A suggestion would be to change the introduction, and especially the contributions, to highlight this more; it is heavily centered around the connection between the theoretical results and existing TTA approaches. The authors could mention more explicitly that they propose an improvement over MAE-TTT inspired by their theory and evaluate it on TTA benchmarks. Furthermore, Section 3.3 can be improved and perhaps made into a separate section, with more details on how MAE-TTT relates to the theoretical analysis and the proposed changes on top of it.
I would be most interested in understanding the performance of MAE-TTT + since it directly conforms to the theory. This prompts another question: why did the authors not experiment with the proposed MAE-TTT + entropy minimization for benchmarks in Table 2? MAE-TTT is only explored in Section 5.1 on the ImageNet100-C dataset and is not included for the benchmarks in Section 5.2. Wouldn't that be an important question to understand how the approach following the theoretical analysis compares with other state-of-the-art approaches? I am happy to increase my score further if the authors can perform experiments for the same.
---
Rebuttal 3:
Comment: Also, thanks for the proof on Definition 2 and its connection with block identifiability! The argument is correct and addresses my concerns.
Regarding my final question about improvement with sparsity regularization over TeSLA, thanks for the clarification, my earlier statement was incorrect. However, I am still not convinced since it also depends on the standard deviation in the performance of the method TesLA as well. For example, if the performance of TeSLA on CIFAR10-C is $12.5 \pm 0.3$, then the confidence intervals would overlap. But I do not see this as a major concern though I encourage authors to report the deviation in the error rate of all baselines as well.
---
Rebuttal 4:
Comment: Thank you for your detailed feedback and we are delighted that we have cleared your previous concerns!
> Writing suggestions.
We highly appreciate your suggestion on the writing. Thanks to your suggestion, we have made the following modifications:
1. We have replaced current lines 70-72 with: “In particular, we apply our theoretical insights to improve autoencoder-based MAE-TTT [24] and observe noticeable improvements on TTA tasks. We also demonstrate that basic principles (sparsity constraints) from our framework can benefit state-of-the-art TTA approach TeSLA [27].”
2. We have replaced the contribution lines 82-83 with: “Inspired by our theory, we propose to add a likelihood maximization term to autoencoder-based MAE-TTT [24] to facilitate the alignment between the target sample and the source distribution. In addition, we propose sparsity constraints to enhance state-of-the-art TTA approach TeSLA [27]. We validate our proposals with empirical evidence.”
3. We have re-organized lines 275-287 in Section 3.3 to make the relationship with MAE-TTT clearer. In particular, we made lines 276-280 a separate paragraph to highlight the connection (the reconstruction objective) and lines 280-287 another paragraph to specify the distinction and our proposal, starting with “Despite the resemblance on the reconstruction objective, MAE-TTT doesn’t explicitly perform the representation alignment as our objectives (2)(3).”
We hope these modifications would make the message clearer and we would appreciate your further feedback!
> MAE-TTT +entropy for other benchmark datasets.
Thanks for the great question!
In light of your question, we have started running MAE-TTT + entropy minimization on ImageNet-C. Given our computing resources, MAE-TTT on ImageNet will require 20 days to complete, and we may not be able to provide full results by the end of the discussion period. Nevertheless, we will include this result in our revision. Thank you for your understanding!
For CIFAR-10/100-C: MAE-TTT requires pre-trained and fine-tuned MAE checkpoints, which are only available for ImageNet (please see the original paper [a]). We are concerned that directly using the ImageNet pre-trained checkpoints on CIFAR-10/100-C would lead to unfair comparisons and also require a significant amount of fine-tuning for the transfer between datasets. Thus, we decided to focus on the ImageNet-C dataset for now, which is the most representative and challenging benchmark.
We thank you for your understanding and patience!
[a] Masked Autoencoders Are Scalable Vision Learners. He et al. CVPR 2022.
---
Rebuttal Comment 4.1:
Comment: > Proof on Definition 2.
Thank you for your positive feedback and we are happy to see this major concern resolved!
> Standard deviations for TeSLA.
Thanks for the question! We’ve included in our revision standard deviations for TeSLA performances as follows.
| Method | CIFAR10-C | CIFAR100-C | ImageNet-C |
|--------|-------------|-------------|------------|
| TeSLA | 12.5 ± 0.04 | 38.2 ± 0.03 | 55.0 ± 0.17 |
As you can see, the standard deviations are all relatively small, so they should not affect our conclusion.
---
Thank you for your thoughtful comments and remarks and we’d highly appreciate your further feedback! | Summary: The authors discuss the problem of domain adaptation or distribution shift in the case when only a single point in the shifted domain is available rather than the full distribution. The authors approach such a problem from the perspective of a latent variable model, which assumes that the observations are generated from two latent variables: one is invariant under domain shifts (and carries information about the classification label) and the other one changes under domain shifts but is irrelevant for classification. However, the generator function mixes these two latent variables, making classification under domain shifts difficult. The authors theoretically discuss the identifiability of the invariant latent variables in two scenarios: when the latent variables irrelevant for classification affect all dimensions of the input, or only a subset of them. These theoretical results are validated in experiments on synthetic and real-world data.
Strengths: + Interesting and practically relevant problem
+ The paper is clearly written and easy to follow
+ Theoretical results formalising the intuition of invariant and nuisance latent variables
Weaknesses: - I am not sure I completely understood the connection between the theory and the actual learning objective. Specifically, I would appreciate if the authors could elaborate a bit more on how the Eq. (2) and (3) are connected to the learning objective in Sec. 3.3 (also see the Questions part of the review)
Technical Quality: 3
Clarity: 3
Questions for Authors: - It is very interesting that you derive a bound of how far the target point can be from the source manifold in Assumption 3.1-v. I wonder how tight do you think this bound is?
- Could you provide a bit more intuition why Assumption 3.3.-iv allows us to "identify the unaffected dimension indices [...] with our estimated model", I am not sure I understood this point?
- Did I understand correctly that in practice the estimation algorithms (as described in Sec. 3.3.) reduce to training an auto-encoder to match the data distribution, and a classifier in the latent space? So the key idea is that "raising" the classification to the latent space allows us to separate invariant latents from the nuisance variables?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and valuable questions! We address your questions point by point in the following.
> W1 & Q3: “... how the Eq. (2) and (3) are connected to the learning objective in Sec. 3.3”, “Did I understand correctly that in practice the estimation algorithms (as described in Sec. 3.3.) reduce to training an auto-encoder to match the data distribution, and a classifier in the latent space? So the key idea is that "raising" the classification to the latent space allows us to separate invariant latents from the nuisance variables?”
Thank you for the great question! You are absolutely right about the general implementation framework. The matching distribution constraint in (2) and (3) entails learning a generative model with a representation space $\hat{\mathbf{z}}$, and maximizing the likelihood of the target invariants $\hat{p} ( \hat{\mathbf{c}}_{\mathrm{tgt}} )$ enables us to separate the invariant latents $\mathbf{c}$ from the nuisance variables $\mathbf{s}$, as you pointed out precisely. Given this disentangled representation, we can train a classifier on the invariant latents from the source distribution, which can be directly applied to the target distribution. Please let us know if you’d like elaboration – thanks!
> Q1: “It is very interesting that you derive a bound of how far the target point can be from the source manifold in Assumption 3.1-v. I wonder how tight do you think this bound is?”
We appreciate your insightful question. This bound is tight in the sense that equality can be attained in worst-case scenarios. This is because beyond the threshold in the bound, a manifold characterized by a distinct invariant variable $ \mathbf{c} \neq \mathbf{c} _{\mathrm{tgt}} $ may also explain the target sample $\mathbf{x} _{\mathrm{tgt}}$. This ambiguity would thwart our attempt to uniquely determine the invariant latent of $ \mathbf{x} _{\mathrm{tgt}} $.
To aid intuition, let’s look at Figure 1b. The threshold in the bound exactly characterizes the starting point of the “unidentifiable region”. In the worst case when the target sample $\mathbf{x} _{\mathrm{tgt}}$ lands in the doubly shaded area, we cannot uniquely identify which manifold it belongs to without further assumptions/knowledge.
> Q2: “Could you provide a bit more intuition why Assumption 3.3.-iv allows us to "identify the unaffected dimension indices [...] with our estimated model", I am not sure I understood this point?”
Thank you for the question! Assumption 3.3 iv enforces that the unaffected pixels exhibit certain dependence among them, in the sense that if we divide this region into any two partitions and generate these two regions separately, we would need more “capacity” (i.e., Jacobian ranks) than generating this region jointly, because in the latter case some information would be shared between the partitions. Further note that Assumption 3.3 iii coerces the affected region to be either small or disjoint from the unaffected region. Thus, the cross-region dependence (between the unaffected and the affected regions) is small. Intuitively, this clear contrast offers us the signal to disentangle these two regions.
For instance, in Figure 1c, the pixels within the region of the cow are highly dependent, whereas the cow region and the background region are significantly less so. Thus, this contrast helps us humans distinguish the two regions rather easily.
---
Please let us know if you’d like further illustration, thank you!
---
Rebuttal Comment 1.1:
Comment: Thank you very much for a thorough rebuttal! I confirm my positive view on this paper and happy to increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your encouraging words and we are truly grateful for your dedicated time and constructive comments! | Summary: * With regard to extrapolability in classification problems, they introduce a reasonable assumption that the latent factor generating the distribution shift affects only the input x but not the label y, and the identifiability of the latent factor is proved under additional reasonable assumptions.
* The interaction between the smoothness of the generating function, the distance out of support, and the nature of the shift (is the shift restricted to some pixels of the image?) is clarified.
* They discuss the relationship with test-time adaptation, extend using sparsity constraint, and verify its empirical performance.
Strengths: * The relation between extrapolation and causality is a hot research topic.
* The motivation behind the assumption that the class label is related only to the invariant factors is clear: "factors such as camera angles and lighting do not affect the object’s class in an image."
* Under the reasonable assumption, an interesting theoretical result on the identifiability of the latent invariant factors is shown.
* The existing test-time adaptation is shown to be related to this theory and extended based on the implication; the sparsity is added in the adaptation, which shows superior empirical performance.
Weaknesses: * Methodological improvement and its empirical superiority to the existing test-time adaptation itself is marginal.
Technical Quality: 3
Clarity: 2
Questions for Authors: Q1. Is the reported error bar in Table 2 standard error or standard deviation?
Q2. Does it matter if the marginal distribution of the label $p(y)$ changes in the test phase?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: Most of the important results are for the image classification problem, where the class-conditioned distribution p(x|y) is separated for each y and x has much information, and thus only unchanged pixels have enough information of $y$ under the sparse change assumption. It seems not to be simply extended to numerical predictions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging words and valuable feedback! Below, we address your questions and indicate the changes we’ve made thanks to your suggestion.
> W1: “Methodological improvement and its empirical superiority to the existing test-time adaptation itself is marginal.”
Thank you for your feedback. Please kindly note that our main empirical results (Table 3) demonstrate a rather significant performance gain over the baseline ($+4.87\%$). This is the case where the base method (MAE-TTT) is a representation-based approach and thus conforms to our theoretical model (please see Section 3.3), directly substantiating our theoretical insights.
Table 2 shows that our theoretical insights can consistently benefit a broader class of TTA approaches, even if they don’t exactly conform to our framework. Although some gains are marginal, we believe that these results demonstrate the generality of our theoretical insight.
>Q1: “Is the reported error bar in Table 2 standard error or standard deviation?”
It is the standard deviation over three random seeds. Thanks to your reminder, we’ve already included this in the figure caption in our revised manuscript.
>Q2. “Does it matter if the marginal distribution of the label $p(y)$ changes in the test phase?”
Thank you for the interesting question. The marginal shift of $p(y)$ would not affect the model performance, under proper assumptions of the shift (please note assumptions are generally necessary to avoid arbitrary shifts).
To be specific, under our graphical formulation in which $y$ is a child of $\mathbf{c}$, the shift of $p(y)$ corresponds to that of the support-invariant variable $p(\mathbf{c})$. If the shift doesn’t change the support of $p(\mathbf{c})$ but only its density on the shared support, our guarantee will still hold true with sufficient samples. This is because this conditional distribution $ p( y | \mathbf{c} )$ can still be learned and transferable as in our case.
> Q3: “Most of the important results are for the image classification problem…It seems not to be simply extended to numerical predictions.”
Thank you for noting this. As our theory doesn’t place specific assumptions on the conditional distribution $ p( y | \mathbf{c} ) $, the framework is rather flexible to accommodate regression problems.
Specifically, the procedure to learn the disentangled invariant variable $ \hat{\mathbf{c}} $ through objectives (3) or (4) is totally unsupervised and thus agnostic to $y$’s distribution. After learning $ \hat{\mathbf{c}}$, we can choose to train a regressor on pairs $ (\hat{\mathbf{c}}, y) $ from the source distribution.
Thanks to your question, we have included in our manuscript the following synthetic experiments on regression tasks.
**Data Generation:** The regression target $y$ is generated from a uniform distribution $U(0,4)$. We sample 4 latent invariant variables $\mathbf{c}$ from a normal distribution $N(y, I_c)$. Two changing variables in the source domain $\mathbf{s} _{\mathrm{src}}$ are sampled from a truncated Gaussian centered at the origin. In the target domain, changing variables $\mathbf{s} _{\mathrm{tgt}}$ are sampled at multiple distances (e.g., $\{18, 24, 30\}$) from the origin. Observations $\mathbf{x}$ are generated by concatenating $\mathbf{c}$ and $\mathbf{s}$ and feeding them to a 4-layer MLP with ReLU activation. We generate 10k samples for training and 50 target samples for testing (one target sample accessed per run).
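For concreteness, the generation procedure described above could be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions: the MLP weights, the truncation scheme (clipping), the direction sampling for the target shift, and all function names are our illustrative choices, not the authors' actual code.

```python
import numpy as np

# Dimensions taken from the description: 4 invariant + 2 changing latents.
D_INV, D_CHG = 4, 2
D_Z = D_INV + D_CHG

_gen_rng = np.random.default_rng(42)
# A fixed random 4-layer ReLU MLP standing in for the generator g: z -> x
# (illustrative weights; the paper's generator is trained/specified differently).
_W = [_gen_rng.normal(size=(D_Z, D_Z)) / np.sqrt(D_Z) for _ in range(4)]

def generator(z):
    h = z
    for w in _W[:-1]:
        h = np.maximum(h @ w, 0.0)   # ReLU hidden layers
    return h @ _W[-1]                # linear output layer

def sample(n, shift_distance=None, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.uniform(0.0, 4.0, size=n)                  # regression target ~ U(0, 4)
    c = rng.normal(y[:, None], 1.0, size=(n, D_INV))   # invariant latents ~ N(y, I)
    if shift_distance is None:
        # Source domain: changing variables from a truncated Gaussian at the origin
        # (truncation via clipping here; the exact truncation range is assumed).
        s = np.clip(rng.normal(0.0, 1.0, size=(n, D_CHG)), -2.0, 2.0)
    else:
        # Target domain: changing variables at a fixed distance from the origin.
        u = rng.normal(size=(n, D_CHG))
        s = shift_distance * u / np.linalg.norm(u, axis=1, keepdims=True)
    x = generator(np.concatenate([c, s], axis=1))      # x mixes c and s
    return x, y
```

Usage would mirror the described setup: `x_src, y_src = sample(10_000)` for training, and e.g. `x_tgt, y_tgt = sample(50, shift_distance=24.0)` for the shifted test samples.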
**Model:** We make two modifications on the classification model in the paper. First, we change the classification head to a regression head (the last linear layer). Second, we replace the cross-entropy loss with MSE loss. We fix the loss weights of MSE loss and KL loss at 0.1 and 0.01 for all settings, respectively, and keep all other hyper-parameters the same as in the classification task. We use MSE as the evaluation metric.
**Results:** The results are summarized below, which indicate that the proposed method can be extended to the regression setting.
| Dense Shift Distance | 18 | 24 | 30 |
|---------|----------|----------|----------|
| Baseline| 1.6400 | 2.4430 | 3.2627 |
| Our Method | 1.4006 | 1.6039 |1.6812 |
We are also working on more settings and will update you once the results are available.
---
Please let us know if we have properly addressed your questions and we are more than happy to discuss more!
---
Rebuttal Comment 1.1:
Title: More results on the regression setting
Comment: We now provide more results on the regression task we’ve included in the rebuttal. This setup corresponds to the sparse shift setting in our theory, where only two out of six dimensions of $\mathbf{x}$ are influenced by the changing variable $\mathbf{s}$. In comparison, the setting we’ve included in the rebuttal corresponds to the dense shift case. The distinction is described in the paper (lines 300-304). The other setups are identical. We directly adopt the hyper-parameters from the dense shift setting without modifications. The results are also measured in MSE (the lower the better).
| Sparse Shift Distance | 18 | 24 | 30 |
|----------------------|--------|--------|--------|
| Baseline | 1.835 | 3.323 | 5.841 |
| Our Method | 1.145 | 1.476 | 1.596 |
We can observe that like the dense shift case, our method outperforms the baseline consistently and maintains the performance over a wide range of shift severities. In contrast, the baseline that directly uses all the feature dimensions degrades drastically when the shift becomes severe. This indicates that our approach can indeed identify the invariant part of the latent representation, validating our theoretical results.
---
Please let us know if we have resolved your concerns – thank you!
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer zpbG,
We were wondering whether your technical concerns had been properly addressed by our responses so far. Please let us know if you have further questions or concerns that we can address. Thank you for your engagement with our work.
Many thanks,
Authors of 7667
---
Rebuttal 2:
Title: Thanks a lot for your recognition and valuable suggestions
Comment: Dear Reviewer zpbG,
We really appreciate your recognition of our work and your kind words, and we are happy to hear that your concerns have been addressed. Again, thank you for your valuable suggestions which have undoubtedly contributed to improving the quality of our paper.
Many thanks,
The authors of #7667 | Rebuttal 1:
Rebuttal: We are grateful to all reviewers for their efforts and helpful comments regarding our paper. We are encouraged that Reviewers **zpbG**, **HS3a**, and **5kN3** find our problem setup relevant and interesting, Reviewers **zpbG** and **gbwX** find our theory interesting and novel, and all Reviewers find our paper clearly written.
Below is a summary of our responses:
* To **Reviewer zpbG**: We have provided further details and introduced a new experiment focusing on the regression task.
* To **Reviewer HS3a**: We have detailed the connections between our theoretical framework and the practical learning objectives and assumptions.
* To **Reviewer gbwX**: We have clarified the key contributions of our formulation and theory.
* To **Reviewer 5kN3**: We have included new experiments using TeSLA-s as a baseline and in a regression setting. We have also shared the source code for TeSLA+SC and TeSLA-s+SC, which has been forwarded to ACs. Further, we have expanded the limitation section, moved the related works from the Appendix to Section 2, and corrected the typos.
Please review our detailed responses to each point raised. We hope that our revisions and clarifications satisfactorily address the concerns raised. We thank you again for your valuable time and expertise. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Externally Valid Policy Evaluation from Randomized Trials Using Additional Observational Data | Accept (poster) | Summary: This paper introduces a method for inferring policy decisions based on randomized controlled trial data when applied to a target population with new covariate data. The method is nonparametric and makes no assumptions about the distributional forms of the data and certifies valid finite-sample inferences of the out-of-sample loss.
Strengths: The problem of policy evaluation with covariate shifts studied in this paper is important.
Weaknesses: 1. The motivation for the problem setup is not clear. Why infer L_{n+1} for the additional (n+1)th data point rather than the new 1...n? This would be reasonable in a clinical trial, with n new patients coming in with unknown outcomes.
2. Discussion of theoretical guarantees is not thorough, especially when the method relies heavily on heuristics, e.g. strong assumptions about the correctness of the estimates of unknown distribution of p(S|X,U) are made, and division of D into D' and D'' seems arbitrary.
3. The experimental setup looks overly simplistic, a basic two-d covariate space and a simple quadratic model for the loss.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Could you clarify the rationale behind inferring L_{n+1} for a single additional data point rather than considering the distribution of outcomes for multiple new observations (1...n)?
2. Is there a way in your experiments to evaluate the estimated p(S|X) and hence odds ratios (Figure 4)? Is proposed alg 1 sensitive to mis-estimation of p(S|X), which is very likely due to unobserved confounding U?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have marked some limitations, such as the method requires independent samples and may not be suitable in scenarios like major virus outbreaks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and provide our response below. We believe they may clarify some potential misunderstandings.
> [W1] The motivation for problem setup is not clear. Why inferring $L_{n+1}$ for the additional $(n+1)$th data point rather than the new $1...n$?
Bounding $L_{n+1}$ quantifies the out-of-sample performance of $\pi$ with respect to a *future individual* drawn from the target population (after having sampled the covariates of $n$ persons), see lines 81-83.
In our assessment, it could be possible to extend the methodology to consider, say, $N$ future individuals, i.e., $n+1, …, n+N$ by leveraging permutations of independent samples. However, it appears to lead to a combinatorial explosion which quickly becomes intractable. We have therefore opted for clear intuitive results that are computationally feasible.
> [W1, cont’d] This would be reasonable in a clinical trial, with $n$ new patients coming in with unknown outcomes.
It seems this could be a misunderstanding of the problem setting we are considering: The aim of the paper is not to infer outcomes in a *clinical trial*, but rather outcomes of a policy applied to a *target* population for which we only have covariate data.
> [W2] the method relies heavily on heuristics, e.g. strong assumptions about the correctness of the estimates of unknown distribution of p(S|X,U) are made, and division of D into D' and D'' seems arbitrary.
A *central* point of the methodology we develop is precisely to *avoid* strong assumptions about $p(S|X,U)$ (see for instance lines 34-35, 69-80, 131-132, 290-291). We do so using a model $\hat{p}(S|X)$ and the sensitivity specification $\Gamma$ in eq. (3).
Sample splitting $\mathcal{D}$ into $\mathcal{D}'$ and $\mathcal{D}''$, as described in lines 168-169, is a standard procedure in statistical inference to ensure valid inferences. (In supervised machine learning, this occurs e.g. in train-test splits to evaluate out-of-sample risk of a learned prediction rule.)
> [W3] The experimental setup looks overly simplistic, a basic two-d covariate space and a simple quadratic model for the loss.
The first experimental setup presented in Section 5.1 is indeed intended to be *illustrative*, as the subheading indicates. This is so that the reader can obtain some intuition behind the method. For this reason we could use either one- or two-dimensional covariates to visualize the setting. We chose two-dimensional covariates, as in Figure 4.
Note that the method uses a *nonparametric* assumption for the loss distribution; and only uses a parametric model for the sampling pattern, i.e., $\hat{p}(S|X)$, along with $\Gamma$. The simplicity of the conditional loss distribution is therefore not relevant when illustrating the method.
The second experimental setup presented in Section 5.2 (and Figure 3) evaluates seafood consumption policies based on real data and involves 8 covariates. This is a considerably more complex problem for policy evaluation.
> [Q1] Could you clarify the rationale behind inferring L_{n+1} for a single additional data point rather than considering the distribution of outcomes for multiple new observations (1...n)?
Please see our reply to W1 above.
> [Q2] Is there a way in your experiments to evaluate the estimated p(S|X) and hence odds ratios (Figure 4)? Is proposed alg 1 sensitive to mis-estimation of p(S|X), which is very likely due to unobserved confounding U?
We are unsure about the first part of the question. Figure 4 belongs to the illustrative experiments in Sec. 5.1 in which there is a synthetic ground truth $p(S|X)$. In real experiments, our suggestion is to use ideas from sensitivity analysis (see lines 86-99 respective Figure 3a and 5a) to evaluate a credible range for $\Gamma$ that limits the odds ratios in eq. 3.
The second part of the question seems to restate the misreading above: A *central* point of Alg. 1 is that it is *robust* against mis-estimation of $p(S|X)$, not least when there is unobserved confounding $U$! Hence the (benchmarked) miscalibration degrees $\Gamma$ required as input.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. However, I am still not satisfied with the responses regarding the problem setup, motivation, and the sensitivity of the estimated model p(S|X). For instance, the authors use notation m for indexing/summing trial population data, and 1...{n+1} for covariate-only data from the target population, which could all be "future", but strangely focus on L_{n+1} without convincing explanation. I believe a clearer version of this paper might be of interest to the NeurIPS community, but not in its current form. Hence, I maintain my score.
---
Reply to Comment 1.1.1:
Comment: We believe there are still some fundamental misunderstandings regarding the problem setup. Let us restate it succinctly here. We assume access to two data sets in this scenario:
1. The first data set comprises $m$ samples drawn from the *trial* distribution, and typically represents a randomized controlled trial (RCT) setup. This data set includes covariates $X$, actions $A$, and losses $L$. (See lines 64-65.)
2. The second data set consists *solely* of covariate data $X$ and contains $n$ (*not* $n+1$!) samples drawn from the *target* distribution. (See lines 65-66.)
The goal is to infer the *out-of-sample* loss in the *target* population. That is, infer $L$ for a *new* individual which does not exist in our second dataset. The future loss is therefore indexed $L_{n+1}$.
To exemplify this setting, we are currently analyzing data from a randomized controlled trial (RCT) that compares the blood pressure responses to various blood pressure-lowering drugs. In this RCT data set, we have $m$ samples, where $m$ is less than 500. Additionally, we have access to medical records from a large number of patients, $n$ is over 100 000, who are seeking treatment for high blood pressure. This larger data set represents our target distribution, where we aim to evaluate and implement new treatment policies.
It is also vital to understand that the method is robust to miscalibrations that occur when the conditional probability $p(S|X, U)$ is estimated as $\hat{p}(S|X)$ (see lines 69-78). The robustness is controlled by the parameter $\Gamma$. We also provide a practical approach for determining suitable ranges for $\Gamma$ (lines 86-96). | Summary: This paper proposes a method for constructing limit curves (upper bounds on the CDF) of an outcome, under a given policy, in a target population, using data from an experimental study. The general goal is to certify that bad outcomes are unlikely in a target population, given experimental data and some knowledge of the strength of potential biases arising from selection on unobservable characteristics, model misspecification, and so on.
Strengths: Full disclosure: I reviewed a previous version of this paper, and previously recommended rejection. Hence, my review is influenced both by the current version, and by my knowledge of what has changed since the previous version.
Without further ado, the strengths of this paper, as I see them:
1. This paper considers an important and significant problem at the intersection of several interesting lines of work (e.g., conformal prediction & risk control, generalization of policy evaluation from experimental to observational settings, etc).
2. With some minor nits (see below), the paper clearly presents their contributions, the motivation for their approach, and the assumptions required. Moreover, the current version does a much improved job putting some of their work in the context of other papers that consider similar bounds on generalizing from experimental to observational data.
3. The approach is practical, given the incorporation of some informal methods for benchmarking plausible values of the $\Gamma$ parameter. This is new since the previous version, and greatly improves the practical applicability of the method in my view.
Weaknesses: The main weakness, in my view, is the claimed contribution that Gamma can incorporate finite sample and model misspecification error (see lines 77-78, "includes all sources of errors...selection bias, model misspecification, estimation error)"), since it is not at all clear how those considerations enter into the selection of Gamma. The informal benchmarking approach involves building intuition for plausible impacts of selection bias due to unobserved factors, but it didn't seem to speak to model misspecification or estimation error. This is not a major issue necessarily, but it might be worth softening some of the claims that the proposed approach handles all these other types of error, without some corresponding method for benchmarking these. For instance, for smaller sample sizes, I could imagine these factors being much more influential in the true value of $\Gamma$.
The second weakness, which I'm less inclined to weigh heavily, is that the novelty of technical contribution is slightly unclear: Mechanically speaking, there is little difference between sensitivity analysis considering unobserved confounding that impacts selection & outcome, versus confounding that impacts treatment & outcome, and there is similarly little difference between policy evaluation and evaluating average treatment effects. It would be helpful if the authors could highlight the technical contributions they think are most notable compared to Jin et al. 2023, Ek et al. 2023, Huang 2024, etc, beyond the simple fact that the setting differs (e.g., considering treatment-outcome vs selection-outcome confounding, or considering ATE vs policy evaluation).
I also have some additional minor feedback that might be worth incorporating into a future version:
1. Regarding the benchmarking, it would be worth giving the caveat that this is an informal approach to benchmarking that can yield unintuitive results, see [0]. Part of the contribution of Huang 2024 (cited here) was to give a more principled approach under a different sensitivity model. However, I don't view this as a major weakness, since this is generally an unresolved challenge as I understand it for Rosenbaum-like sensitivity bounds as used here (Huang 2024 & Cinelli and Hazlett 2020 use an R2-based sensitivity model where a more principled approach is possible in the first place).
2. There's a lot of work on combining observational & experimental data, and it might be worth highlighting differences to the settings considered. For instance, [1] deals with a similar setting, but one where outcome information is available from the observational data. There are other papers that deal with estimating causal effects in target populations from experimental data where observational data includes individuals not represented in the trial, e.g., [2], [3].
3. It's not entirely clear from the introduction what the "loss" $L$ entails, as the name suggests something like a prediction error. I think it would be useful to highlight upfront that $L$ can represent e.g., an outcome of interest (more typically rendered as $Y$), as done in the application. It is more intuitive to me to interpret the setup in light of the application, e.g., the goal is to bound the probability of extreme outcomes occurring.
4. As an improvement, it seems like it should be fairly straightforward to derive lower bounds as well, no? E.g., replacing $L$ with $-L$ and applying the same machinery. That might be useful in applications where you want to ensure that an outcome stays within a certain range.
[0] Making Sense of Sensitivity: Extending Omitted Variable Bias. Carlos Cinelli, Chad Hazlett. https://doi.org/10.1111/rssb.12348 (JRSS B, 2020)
[1] Hidden yet quantifiable: A lower bound for confounding strength using randomized trials. Piersilvio De Bartolomeis, Javier Abad, Konstantin Donhauser, Fanny Yang. https://arxiv.org/abs/2312.03871 (AISTATS 2024)
[2] Removing Hidden Confounding by Experimental Grounding. Nathan Kallus, Aahlad Manas Puli, Uri Shalit. https://arxiv.org/abs/1810.11646 (NeurIPS 2018)
[3] Falsification before Extrapolation in Causal Effect Estimation. Zeshan Hussain, Michael Oberst, Ming-Chieh Shih, David Sontag. https://arxiv.org/abs/2209.13708 (NeurIPS 2022)
Technical Quality: 3
Clarity: 3
Questions for Authors: I would appreciate any clarifications or additional comments from the authors on the two main weaknesses I raised above:
1. How to incorporate model misspecification or estimation error into the choice of Gamma?
2. Are there any parts of the technical contribution the authors would highlight as particularly interesting from their perspective? Honest question, as I have not read some of the related papers in depth.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for the reviewer’s past comments, which improved the revision substantially.
> The informal benchmarking approach involves building intuition for plausible impacts of selection bias due to unobserved factors, but it didn't seem to speak to model misspecification or estimation error. This is not a major issue necessarily, but it might be worth softening some of the claims that the proposed approach handles all these other types of error, without some corresponding method for benchmarking these. [Q1] How to incorporate model misspecification or estimation error into the choice of Gamma?
This is a valid point: the benchmarking method addresses only the plausible impact of unobserved selection factors $U$. We will add a remark that points the reader to Appendix B.3, where we suggest using reliability diagrams to quantify bounds on model misspecification or estimation errors.
The possible joint impact of $U$ *and* model misspecification or estimation error is, of course, beyond the scope of our methodology.
> The second weakness, which I'm less inclined to weigh heavily, is that the novelty of technical contribution is slightly unclear. [Q2] Are there any parts of the technical contribution the authors would highlight as particularly interesting from their perspective?
Yes, while the `mechanical’ aspects of the employed proof technique do build on Jin et al. 2023 and cited works developed in other problem areas, the contribution to the problem of establishing externally valid policy evaluation is novel. We have tried to break down our proof steps in Appendix A in a transparent manner and highlight those steps that invoke past results (e.g. lines 767, 776, 778). To make it clearer, we will also refer to the relevant theorems in the cited work.
Regarding the minor feedback:
> [1] Regarding the benchmarking, it would be worth giving the caveat that this is an informal approach to benchmarking that can yield unintuitive results, see [0]. Part of the contribution of Huang 2024 (cited here) was to give a more principled approach under a different sensitivity model. However, I don't view this as a major weakness, since this is generally an unresolved challenge as I understand it for Rosenbaum-like sensitivity bounds as used here.
We agree. While we have tried to be transparent about it, adding a reference could make it even more formal.
> [2] There's a lot of work on combining observational & experimental data, and it might be worth highlighting differences to the settings considered. For instance, [1] deals with a similar setting, but one where outcome information is available from the observational data. There are other papers that deal with estimating causal effects in target populations from experimental data where observational data includes individuals not represented in the trial, e.g., [2], [3].
We have descriptions of the setting both in the introduction (lines 25-31) and the background (lines 109-117), but the difference to other settings may not be clear enough. We will review the references and try to clarify this.
> [3] It's not entirely clear from the introduction what the "loss" $L$ entails, as the name suggests something like a prediction error. I think it would be useful to highlight upfront that $L$ can represent e.g., an outcome of interest (more typically rendered as $Y$), as done in the application. It is more intuitive to me to interpret the setup in light of the application, e.g., the goal is to bound the probability of extreme outcomes occurring.
We refer to it as “loss” as the opposite of “reward”; either way, it is an *ordered* decision outcome that we want to bound. This is a good suggestion that will help the reader, however, and we will clarify it already in the introduction.
> [4] As an improvement, it seems like it should be fairly straightforward to derive lower bounds as well, no? E.g., replacing $L$ with $-L$ and applying the same machinery. That might be useful in applications where you want to ensure that an outcome stays within a certain range.
Replacing it with $-L$ will result in a lower bound. Although we have not considered this type of application, it is a valuable suggestion and we will add a remark about it.
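For concreteness, here is a minimal sketch of the symmetry argument (notation illustrative; $B$ denotes a certified upper bound, not a symbol from the paper): if the machinery guarantees $\Pr(L \le B) \ge 1-\alpha$ for any ordered outcome, then applying it to $L' = -L$ yields a bound $B'$ with

$$ \Pr(-L \le B') \ge 1-\alpha \;\iff\; \Pr(L \ge -B') \ge 1-\alpha, $$

so $-B'$ is a valid $(1-\alpha)$ lower bound on $L$, and combining both bounds keeps $L$ within a certified range.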
---
Rebuttal Comment 1.1:
Comment: Thank you for the response - just chiming in that I've read the response and that all sounds good to me (e.g., clarifying limitations wrt handling both misspecification and confounding from a benchmarking point of view, clarifying the theorems being used from the cited references, etc). Glad to hear that you can easily get lower bounds as well. Regarding the comparison to other settings, feel free to take or leave the feedback - it's clear enough to me what the differences are (as I mentioned in the review), just thought it might help frame the contribution better for a reader. | Summary: This paper aims to use trial data to make valid inferences about policy outcomes for a target population. By incorporating additional covariate data from the target population, the sampling of individuals in the trial study is modeled. The authors develop a nonparametric method that provides certifiably valid trial-based policy evaluations, regardless of model miscalibrations, and ensures validity even with finite samples. The effectiveness of the certified policy evaluations is demonstrated using both simulated and real data.
Strengths: The paper has theoretical analysis.
Weaknesses: 1. The paper is not well organized.
Technical Quality: 2
Clarity: 3
Questions for Authors: NA
Confidence: 1
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes. I have very limited knowledge of this area; my review of this paper should not be counted.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s honesty and trust that the chair will discount this review. | Summary: This paper studies the challenge of generalizing randomized controlled trial (RCT) results to a target population, addressing the potential issue of distributional shift from RCT participants to the intended population. Instead of estimating the expected loss on the target population, the paper proposes a nonparametric method that leverages covariates information from the target population and certifies valid finite-sample inferences of out-of-sample tail loss. The approach is validated using both synthetic and real data, ensuring its applicability to the target population.
Strengths: 1. The paper is well written. Although I work on causal inference and am not very familiar with the methodology for handling distributional shift, I can easily follow the paper and get the gist of the idea.
2. I think the method and the technical results are very clean and sound.
3. I believe the inference on out-of-sample tail risks is of vital importance, which is often omitted in policy evaluation.
Weaknesses: 1. If I understand correctly, the policy $\pi$ is given as an exogenous parameter. However, it is common practice to learn a policy using the RCT data and then seek to know its performance on the target population. This certainly introduces a dependence issue. I suspect some type of cross-fitting may help, but I doubt the empirical performance; is there any chance to analyze the bias in this case?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to the weakness part for questions.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the question and are pleased that they acknowledge the importance of the problem we are tackling.
> If I understand correctly, the policy Pi is given as an exogenous parameter. However, it is a common practice that we learn a policy using the RCT data and aim to know its performance on the target population.
Yes, this paper focuses on the *evaluation* of any given policy $\pi$, which could either be proposed by any clinical expert or *learned* using past data. It is therefore possible to set aside, say, $N$ past samples from an RCT study to learn a policy $\pi_N$ and then use the proposed methodology to evaluate its out-of-sample performance. We will add a remark in the manuscript to inform the reader about this possibility. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar | Accept (poster) | Summary: This paper introduces a novel approach for 3D occupancy prediction in autonomous driving using 4D imaging radar sensors. Traditional methods rely heavily on LiDAR or camera inputs, which are vulnerable to adverse weather conditions. RadarOcc leverages the robustness of 4D radar data, which provides comprehensive scene details even in challenging weather.
The key contributions of the paper include:
- **Utilization of 4D Radar Tensors (4DRT):** Unlike previous methods that use sparse radar point clouds, RadarOcc processes raw 4DRTs, preserving essential scene details and capturing comprehensive environmental data.
- **Novel Data Processing Techniques:** The paper introduces Doppler bins descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms to handle the large and noisy 4DRT data. It also uses spherical-based feature encoding followed by spherical-to-Cartesian feature aggregation to minimize interpolation errors.
- **Benchmarking and Evaluation:** RadarOcc is benchmarked on the K-Radar dataset against state-of-the-art methods using various modalities. The results show that RadarOcc outperforms in radar-based 3D occupancy prediction and demonstrates promising results compared to LiDAR and camera-based methods. The approach also shows superior performance in adverse weather conditions.
Strengths: 1. The writing is very good, and the demo is provided for the convenience of reviewers to see the effect
2. This is the first work to use 4D imaging radar for 3D occupancy prediction. The authors propose several novel techniques, such as Doppler bins descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms, to efficiently process and utilize the voluminous and noisy 4D radar data. These methods demonstrate creativity in addressing the unique challenges posed by radar data.
3. Extensive experimental and ablation studies.
Weaknesses: 1. Possibly high computational demand; this aspect is not described.
2. Dependence on a single dataset.
3. The experiments are limited by the dataset.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In my understanding, the advantage of radar compared with LiDAR is that it can achieve good perception even in bad weather. However, because of ground-truth procurability, the authors use only well-conditioned sequences, which I think may be unfair to LiDAR-based or vision-based methods, because the domain of radar changes little while the others change a lot. It would have been better if the authors had added nuScenes data to the baseline comparison. There may not be enough time to complete this, but it is a real point of concern for me.
2. Could you provide the overlap ratio between different sensors and ROI?
3. Could you provide the overall inference speed and the speed of each module? Additionally, how much does the Doppler bins descriptor reduce the overall computational load, and do the range-wise self-attention and spherical-to-Cartesian feature aggregation introduce significant computational overhead?
4. Since occupancy ground truth is difficult to obtain, is there any way to improve the training data and generalization?
**I'm willing to raise the grade if my concerns are addressed.**
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: As explained by the authors in the limitations section:
1. The model does not fully utilize the 4D radar information.
2. Due to the lack of point-wise annotations in the dataset, the semantic information is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Comment: Dear Reviewer XRVn:
We appreciate your detailed summary and positive comments regarding the presentation, novelty, creativity, and the experimental results of our research. Thanks a lot for providing the valuable feedback and raising insightful questions about our work. We address and answer your questions below:
**Q1: Could you provide the overall inference speed and the speed of each module? Additionally, how much does the Doppler bins descriptor reduce the overall computational load, and do the range-wise self-attention and spherical-to-Cartesian feature aggregation introduce significant computational overhead?**
A1: Thank you for raising this concern. Please first refer to our general answer to the **‘Computation complexity and requirement’** question in the global rebuttal.
Specifically, the range-wise self-attention module is highly efficient, with a runtime of only 2.5ms. The spherical-to-Cartesian feature aggregation initially costs 72ms per frame. However, this can be optimized to 29.7ms by reducing the number of layers. This module not only converts the spherical feature volume to a Cartesian one without interpolation errors but also enhances the encoding of voxel feature relationships.
The Doppler bins descriptor significantly reduces the data volume of raw 4D radar tensors (4DRTs) by a factor of D/8, where D equals 64 in our case. This reduction decreases the input data volume to 1/8 of its original size. This reduction not only shortens loading times (from disk/sensor to processors) but also lessens the overall computational burden during spherical-based feature encoding.
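As an illustration of how the Doppler axis can be compressed, here is a minimal NumPy sketch under our stated setting ($D = 64$, $N_d = 3$). The exact composition of the descriptor in our implementation may differ; this sketch, with an illustrative function name, keeps the top-$N_d$ powers, their bin indices, and the mean power along the Doppler axis:

```python
import numpy as np

def doppler_bin_descriptor(rt, n_d=3):
    """Reduce the Doppler axis (last dim, length D) of a radar tensor to a
    small per-location descriptor: top-n_d powers (ascending), their bin
    indices, and the mean power. Illustrative sketch, not the exact paper
    implementation."""
    idx = np.argsort(rt, axis=-1)[..., -n_d:]           # indices of top-n_d Doppler bins
    top = np.take_along_axis(rt, idx, axis=-1)          # their power values
    mean = rt.mean(axis=-1, keepdims=True)              # overall intensity
    return np.concatenate([top, idx.astype(rt.dtype), mean], axis=-1)
```

With $D = 64$ reduced to $2N_d + 1 = 7$ values per spatial location, the Doppler axis shrinks by roughly the D/8 factor discussed above.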
**Q2: Dependence on a Single Dataset. The experiment was limited because of the data set**
A2: Thanks for highlighting this concern. Please refer to our general answer to the **‘Dependence on a single dataset’** answer in the global rebuttal.
**Q3: Author just use well-condition sequences, which I think may be unfair to lidar-based methods or vision-based method**
A3: Thanks for this thoughtful feedback. We are not sure we understand this question exactly. If the concern is that the training data used for the baseline methods does not include sequences collected in adverse weather, which causes worse performance due to the domain change of LiDAR and camera data, we are happy to answer as follows.
Currently, it is hard to resolve this problem as the LiDAR data is inevitably affected by the adverse weather. For example, water droplets or snowflakes can scatter or absorb LiDAR beams, reducing the effective range of LiDAR and inducing noise in the data. As a result, the occupancy labels obtained from accumulated LiDAR data under adverse weather have low fidelity. The nuScenes dataset also faces a similar issue where occupancy labels are affected by adverse weather, despite having annotated bounding boxes. Adding new nuScenes data collected under adverse weather to train our LiDAR and vision-based baselines might not be feasible. This approach could potentially hinder network training because of noisy labels rather than improve adaptation to adverse weather. Moreover, as nuScenes does not provide 4D radar tensor data, it’s unfair for our radar-based methods because they cannot benefit from the new data samples.
This situation highlights a significant research question: how can we generate reliable 3D occupancy labels under adverse weather conditions when LiDAR measurements are noisy? We acknowledge the importance of this question and plan to address it in our future work.
**Q4: Could you provide the overlap ratio between different sensors and ROI?**
A4: Thanks for this comment. Our evaluation focuses on the overlap area between the horizontal field of view (FoV) of all sensors and our defined RoI to minimize potential data discrepancies beyond the FoV. Specifically, the overlapping horizontal FoV of the K-Radar sensor suite is 107°, symmetrically distributed around the front axis. The ratio between the final evaluation area and our RoI can be calculated as:
$$ 1 - \frac{\cot\left(\frac{107^\circ}{2}\right)}{4} \approx 0.812 $$
which means that 81.2% of the area of our RoI is taken into account in our final evaluation.
**Q5: Any way to improve training data and generalization**
A5: Thanks for this insightful question. It aligns with our research interest in efficiently training 3D occupancy prediction models. Some recent works are exploring self-supervised methods in occupancy prediction. One notable example is EmerNeRF (https://emernerf.github.io/), which can represent highly dynamic scenes in a self-sufficient manner. This approach has the potential to label occupancy ground truth using RGB cameras without requiring human annotation efforts.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their response. I have carefully reviewed the feedback from the other reviewers as well as the corresponding rebuttal. While the rebuttal has addressed most of my concerns, the limited experimental results prevent me from raising my score. I believe the score I have given is appropriate for this submission.
It is a commendable piece of work, and I look forward to seeing more research in this area. | Summary: This paper leverages recent advancements in automotive radar technology and introduces a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction. The proposed method incorporates Doppler bin descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms to effectively manage and mitigate the noise present in 4D radar data.
Strengths: This paper presents a compelling algorithm designed to enhance radar-based 3D occupancy prediction by incorporating Doppler information and sidelobe-aware techniques. The authors propose an innovative pipeline that effectively reduces data volume, mitigates sidelobe measurements, and utilizes interpolation-free feature encoding and aggregation.
Weaknesses: Please consult the question section for further information.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Dataset Limitation: The experimental analysis is thorough, but it is conducted solely on the K-Radar dataset. There is a concern that the method may be overfitted to this specific dataset. Could the authors provide additional examples using other datasets to demonstrate the generalizability of the method?
2. Computational Complexity: The methodology includes both an attention mechanism and multi-scale 3D convolutional blocks. A potential issue is whether this structure significantly increases the computational requirements. Could the authors provide a comparison of model size and computational time to address this concern?
3. Selection Criteria: The method selects the Top-3 power values and their corresponding indices. How does this choice affect performance? Would increasing or decreasing the number of selected values significantly impact the results?
4. Failure Cases: Are there any documented failure cases of the method? Considering that Doppler effects are most pronounced when the relative velocity is aligned with the wave direction, how does the method perform in scenarios where this condition is not met?
5. Would the code be published together with the paper?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please consult the question section for further information.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer f9yn:
Thank you for acknowledging our method to be compelling and innovative. We are grateful for your insightful questions and feedback. We provided detailed explanations and responses as follows:
**Q1: Dataset limitation: The experimental analysis is thorough, but it is conducted solely on the K-Radar dataset. There is a concern that the method may be overfitted to this specific dataset. Could the authors provide additional examples using other datasets to demonstrate the generalizability of the method?**
A1: Thanks for providing this feedback. Please refer to our general answer to the **‘Dependence on a single dataset’** question in the global rebuttal.
**Q2: Computation complexity: The methodology includes both an attention mechanism and multi-scale 3D convolutional blocks. A potential issue is whether this structure significantly increases the computational requirements. Could the authors provide a comparison of model size and computational time to address this concern?**
A2: Thanks for raising this concern. Please refer to our general answer to the ‘Computation complexity and requirement’ question in the global rebuttal.
**Q3: Selection criteria: The method selects the Top-3 power values and their corresponding indices. How does this choice affect performance? Would increasing or decreasing the number of selected values significantly impact the results?**
A3: Thank you for this thoughtful question. To investigate the effect of the number of preserved top values ($N_d$) among Doppler bins for each spatial location, we conducted a series of experiments by varying $N_d$. As shown in the table below, the change in $N_d$ does not significantly impact our results. For both efficiency and performance, we chose $N_d$=3 for our method based on the validation set performance.
| $N_d$ | IoU @ 51.2m (%) | mIoU @ 51.2m (%) |
|-----|-----------------|------------------|
| 1 | 30.9 | 18.7 |
| 2 | 28.8 | 19.4 |
| 3 | 31.9 | 19.1 |
| 4 | 31.1 | 18.9 |
| 5 | 30.1 | 18.8 |
This can be explained by the fact that K-Radar wraps around overflow values in Doppler measurements due to the limited Doppler measurement range. For example, Doppler speeds of 3.0 m/s and 6.0 m/s are measured within the range of -1.92 to 1.92 m/s as 3.0 - 3.84 = -0.84 m/s and 6.0 - 3.84*2 = -1.68 m/s, respectively. This ambiguity means the information from the Doppler axis only marginally improves our model. Consequently, changing $N_d$ hardly affects our performance. Table 2 in our paper also shows this minimal impact for our baseline without the Doppler bins descriptor (w/o DBD), which only uses mean power. However, we believe our Doppler bin encoding method could bring more improvement with other radar sensors that have a larger measurement range.
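The wrap-around (aliasing) described above can be sketched directly; the ±1.92 m/s interval is the K-Radar Doppler range quoted in our example, and the function name is illustrative:

```python
def wrap_doppler(v, v_max=1.92):
    """Fold a true radial speed v (m/s) into the ambiguous measurement
    interval [-v_max, v_max). Models the overflow wrap-around of a radar
    with a 2*v_max unambiguous Doppler span (3.84 m/s for K-Radar)."""
    span = 2.0 * v_max
    return ((v + v_max) % span) - v_max
```

For example, `wrap_doppler(3.0)` gives -0.84 and `wrap_doppler(6.0)` gives -1.68, matching the aliasing above; speeds already inside the interval are unchanged.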
**Q4: Failure cases: Are there any documented failure cases of the method? Considering that Doppler effects are most pronounced when the relative velocity is aligned with the wave direction, how does the method perform in scenarios where this condition is not met?**
A4: Thank you for this comment. We did not observe any failure cases due solely to the loss or weakness of the Doppler effect. The information encoded by the Doppler bins serves as a complement to our primary feature, the mean power values, which reflect the overall measurement intensity at each spatial location. Even without the Doppler bins descriptors (reducing the Doppler axis via average-pooling), our method performs well for 3D occupancy prediction, as shown in Table 2. However, we do observe some failure cases of RadarOcc for other reasons, such as the radar sensor suffering from insufficient resolution and decreased signal-to-noise ratio at far distances. The IoU of roughly 30 and mIoU of roughly 23 indicate that this is not a perfect model. Please refer to Figure 1 in the **global rebuttal PDF**.
**Q5: Would the code be published together with the paper?**
A5: Thanks for reminding us. As claimed in our contribution list, we will make our code public upon acceptance. We will also provide our trained models and the tools used to preprocess the K-Radar dataset.
---
Rebuttal 2:
Title: Rebuttal follow up
Comment: Dear reviewer f9yn,
As the discussion period is approaching its end, we kindly invite you to review our detailed rebuttal. We believe it addresses the concerns you raised in your review. If you find our work satisfactory, we would greatly appreciate it if you could consider raising your score to a positive one. We also welcome any further comments you may have.
Thank you again for your time in reviewing our paper. | Summary: This paper introduces a 3D occupancy prediction method that, unlike previous radar-based approaches, utilizes 4D imaging radar to leverage additional information. To harness the potential of this under-explored 4D data, the paper tackles challenges such as the large size of raw 4D radar data, inherent noise, and discrepancies in coordinate systems. The proposed method results in improved reconstruction accuracy compared with other baselines.
Strengths: The motivation to further exploit the 4D radar data for occupancy estimation is logical. This paper represents pioneering work in addressing this issue.
The paper is well-crafted, featuring clear figures and well-organized content.
The performance under adverse weather conditions is noteworthy, underscoring a critical perception task for the safety of current autonomous vehicles (AVs).
Weaknesses: Comparing the proposed method to only one published baseline and relying on a single dataset raises concerns about the persuasiveness of the findings.
As a learning-based model, the paper misses an evaluation of the generalization of the learning formulation to datasets out of the domain. This oversight is crucial for assessing the model's applicability across various real-world scenarios.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the proposed learning formulation be extended to incorporate radar methods with other modalities, such as images and lidar?
What is the trade-off relationship between compressing the input data (originally 500MB) and its impact on performance? Providing this information could assist readers in balancing efficiency and accuracy.
MISC
Line 110: The line distance is unusual.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The experiments are conducted solely on the K-Radar dataset, which is the only autonomous driving dataset containing 4D radar data. This situation highlights a significant issue: the scarcity of 4D data compared to other perception methods (e.g., images, normal radar data), which limits its applicability in broader contexts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 7z3i:
We are greatly encouraged that you found this work has logical motivation, represents pioneering work in the area, features clear figures and well-organized content, and underscores safety-critical perception tasks for AVs. We acknowledge the concerns you've highlighted and would like to offer clarifications:
**Q1: Compare to only one published baseline and using single dataset for experiments**
A1: Thank you for highlighting this. There seems to be a misunderstanding regarding the baselines we used for comparison. As RadarOcc is the first method to utilize 4D radar tensor data for 3D occupancy prediction, we compared our method against point-based methods suitable for radar-based comparisons. Specifically, we used three baseline methods from the OpenOccupancy paper: **L-baseline** and **L-CONet** with radar point cloud input, and **L-baseline** with 4DRT-XYZ input. OpenOccupancy represents state-of-the-art point-based work for 3D occupancy prediction. Other available methods such as LMSCNet and JS3C-Net, while influential, are relatively outdated (2020, 2021).
Additionally, to provide a comprehensive evaluation, we included comparisons with LiDAR-based **L-baseline** and single/stereo camera-based **SurroundOcc** methods. These baselines represent state-of-the-art techniques for their respective input modalities, ensuring a persuasive inter-modality comparison.
Regarding the dataset, please refer to our general answer to the **‘Dependence on a single dataset’** answer in the global rebuttal.
**Q2: Generalization of the learning formulation to datasets out of the domain**
A2: Thanks for raising this concern. Our RadarOcc is specifically designed for 4D radar tensor input rather than radar point clouds, which means it cannot be directly applied to other existing 4D radar datasets such as VoD and TJ4DRadSet. However, our algorithm can be readily applied if another 4D radar tensor dataset becomes available.
One potential difference that might affect generalization is the sidelobe levels, which can vary with different radar designs. Various sidelobe suppression techniques, such as the Hamming window, might be applied during the FFT process. To address this, we can incorporate different levels of sidelobe-aware sparsification into our approach to process the data accordingly. This ensures that our method remains robust and adaptable to variations in radar hardware and design.
By utilizing this flexible approach, we believe that RadarOcc can generalize well to different 4D radar tensor datasets as they become available, thereby extending its applicability across various real-world scenarios.
**Q3: Extend the learning formulation to incorporate radar methods with other modalities**
A3: Thanks for providing this insightful idea. To incorporate radar methods with other modalities, we can fuse the multi-modal information at the feature level by integrating our **voxel feature G** and features extracted from other sensor data, such as images and lidar point clouds. We plan to explore the multi-modal fusion based method in the future.
**Q4: Trade-off relationship between compressing the input data and its impact on performance**
A4: Thanks for this thoughtful comment. There is indeed a **trade-off** between preserving critical measurements and filtering noise/compressing the radar tensor in our sidelobe-aware spatial sparsification process. Excessive compression/filtering may result in the loss of weak reflections, while insufficient compression/filtering increases computational costs and retains some level of noise. To identify the optimal balance, we conducted a series of experiments varying the number of selected top elements for each range, i.e., $N_r$, and assessed performance and inference speed on the validation set.
The results, presented in Table 3 of the ‘global’ rebuttal PDF, indicate that RadarOcc achieves the best results in half of all metrics on our validation set when $N_r$ = 250. Both higher and lower values of $N_r$ lead to suboptimal results, suggesting that $N_r$ = 250 strikes the best balance between retaining critical signals and filtering noise. Additionally, the inference speed at $N_r$ = 250 is relatively higher compared to configurations with larger $N_r$ values. Therefore, we select $N_r$ = 250 for RadarOcc’s evaluation on our testing set.
**Q5: The applicability in broader contexts of 4D radar data is limited by its scarcity compared to other perception methods**
A5: Thanks for this feedback. As a reminder, although the K-Radar dataset is the only autonomous driving dataset providing available 4DRT data, there are increasing datasets providing 4D radar point clouds (e.g., VoD, TJ4DRadSet, Dual-Radar, NTU4DRadLM). As an emerging sensor, 4D radar is attracting broad attention from the industry and academia. We believe the scarcity of 4D radar dataset will be addressed very soon.
MISC: Line 110: The line distance is unusual.
Thanks for your careful review and for pointing out the issue with the line distance on line 110. We used the ‘\vspace’ command to adjust the line spacing for compactness in this instance. We acknowledge the importance of maintaining consistent formatting and will ensure that such adjustments are removed in future versions of our work.
---
Rebuttal Comment 1.1:
Title: Comment
Comment: Thanks for the authors' response to my concerns. I have no further questions and feel I can keep my positive ratings for the paper. | Summary: It is proposed to utilize 4D imaging 8 radar sensors for 3D occupancy prediction by directly processing the 4D radar tensor, thus preserving essential scene details. RadarOcc innovatively addresses the challenges associated with the voluminous and noisy 4D radar data by employing Doppler bins descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms. The demonstration of the RadarOcc’s state-of-the-art performance in radar-based 3D occupancy prediction was carried out on K-Radar dataset.
Strengths: It is well-accepted that radar raw data could provide more information for perception tasks in autonomous driving. This submission follows the same direction to utilize 4D imaging radar measurement.
Weaknesses: This is not the first paper to utilize raw 4D imaging radar measurements for perception. As mentioned by the authors, they focus on semantics rather than typical road-user detection and classification.
For the reference to 4D imaging radar, please consider cite the following paper
S. Sun and Y. D. Zhang, "4D Automotive Radar Sensing for Autonomous Vehicles: A Sparsity-Oriented Approach," in IEEE Journal of Selected Topics in Signal Processing, vol. 15, no. 4, pp. 879-891, June 2021.
For general automotive radar contribution to autonomous driving, please consider cite the following paper
S. Sun, A. P. Petropulu and H. V. Poor, "MIMO Radar for Advanced Driver-Assistance Systems and Autonomous Driving: Advantages and Challenges," in IEEE Signal Processing Magazine, vol. 37, no. 4, pp. 98-117, July 2020.
Technical Quality: 3
Clarity: 2
Questions for Authors: There seems to be a trade-off in the sidelobe-aware spatial sparsification process. It is interesting to know whether important feature information is lost due to this sparsification. For example, targets with weak reflections, such as road curbs, might be lost.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: As mentioned before, the proposed work does not apply to general road-user detection and classification.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer v56z:
We appreciate you for the positive feedback regarding the paper’s approach, innovation and experiment results, and agree that radar raw data could provide more information for perception tasks. We understand your concerns and would like to address your points one by one.
**Q1: It is not the first paper to utilize the 4D imaging radar raw measurement for perception. As mentioned by the authors, they focus on semantics, rather than typical road users detection and classification. For the reference to 4D imaging radar, please consider cite the following paper S. Sun and Y. D. Zhang, "4D Automotive Radar Sensing for Autonomous Vehicles: A Sparsity-Oriented Approach," in IEEE Journal of Selected Topics in Signal Processing, vol. 15, no. 4, pp. 879-891, June 2021.**
A1: Thanks for this comment. In this work, we aim to focus on the 3D occupancy prediction task based on 4D radar tensor data. Compared to traditional road user detection and classification, 3D occupancy offers a detailed open-set depiction of scene geometry, not limited to specific object classes and shapes. This capability allows it to address a broader range of corner cases than previous object-based perception approaches. The recommended reference is indeed one of the pioneering works that introduces 4D radar sensing to autonomous driving and has inspired our research in some aspects. To ensure rigor, we will cite this paper in Sec. 3.1, where we introduce the 4D imaging radar.
**Q2: For general automotive radar contributions to autonomous driving, please consider citing the following paper: S. Sun, A. P. Petropulu and H. V. Poor, "MIMO Radar for Advanced Driver-Assistance Systems and Autonomous Driving: Advantages and Challenges," in IEEE Signal Processing Magazine, vol. 37, no. 4, pp. 98-117, July 2020.**
A2: Thanks for sharing this valuable reference, which systematically reviews advancements and challenges of MIMO radar technology in automotive applications, emphasizing its role in enhancing angular resolution for high-resolution imaging radar systems in L4 and L5 autonomous driving. To ensure our audience gains a solid foundational understanding of MIMO radar, we will cite this paper in the second paragraph of the introduction, where we discuss general automotive radar.
**Q3: There seems to be a trade-off in the sidelobe-aware spatial sparsification process. It would be interesting to know whether important feature information is lost due to this sparsification. For example, targets with weak reflections, such as road curbs, might be lost.**
A3: Thank you for raising this insightful question.
The potential loss of important feature information during the sparsification process is indeed a critical consideration. To address this, our network design incorporates strategies to mitigate these issues. Specifically, our sidelobe-aware spatial sparsification technique selects the top-$N_r$ elements for each individual range rather than the entire dense radar tensor (RT). As demonstrated in **Fig. 2**, this approach retains essential measurements scattered across different ranges, including both strong and weak reflective objects. This is in contrast to percentile-based methods, which often focus on elements corresponding to highly reflective objects, potentially missing crucial data from weak reflective objects.
There is indeed a trade-off between preserving critical measurements and filtering noise/compressing the radar tensor in our sidelobe-aware spatial sparsification process. Excessive compression/filtering may result in the loss of weak reflections, while insufficient compression/filtering increases computational costs and retains some level of noise. To identify the optimal balance, we conducted a series of experiments varying the number of selected top elements for each range, i.e., $N_r$, and assessed performance and inference speed on the validation set.
The results, presented in **Table 3** of the **‘global’ rebuttal PDF**, indicate that RadarOcc achieves the best results in half of all metrics on our validation set when $N_r = 250$. Both higher and lower values of $N_r$ lead to suboptimal results, suggesting that $N_r = 250$ strikes the best balance between retaining critical signals and filtering noise. Additionally, the inference speed at $N_r = 250$ is higher than that of configurations with larger $N_r$ values. Therefore, we select $N_r = 250$ for RadarOcc’s evaluation on our testing set.
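To make the contrast concrete, here is a small pure-Python sketch (our own toy, not the RadarOcc code; the tensor layout, values, and function names are invented) of keeping the top-$N_r$ elements per range bin versus applying a global percentile threshold:

```python
# Illustrative toy: per-range top-N sparsification of a radar tensor slice,
# contrasted with a global percentile threshold. Shapes/values are made up.

def topn_per_range(tensor, n):
    """Keep the n largest-power elements within each range bin.

    tensor: list of rows, one per range bin; each row is a list of
    power values over the remaining (flattened) dimensions.
    Returns (range_idx, elem_idx, power) triples for the kept elements.
    """
    kept = []
    for r, row in enumerate(tensor):
        top = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:n]
        kept.extend((r, i, row[i]) for i in top)
    return kept

def global_percentile(tensor, pct):
    """Keep elements above the pct-th percentile of the whole tensor."""
    flat = sorted(v for row in tensor for v in row)
    thresh = flat[int(len(flat) * pct / 100)]
    return [(r, i, v) for r, row in enumerate(tensor)
            for i, v in enumerate(row) if v > thresh]

# A strong reflector dominates range bin 0; range bin 1 holds only weak
# returns (e.g., a road curb). Per-range selection keeps both.
rt = [[9.0, 8.5, 0.1, 0.2], [0.4, 0.3, 0.1, 0.2]]
per_range = topn_per_range(rt, 2)
per_pct = global_percentile(rt, 75)
```

In this toy tensor, the global percentile keeps only the strong reflector in range bin 0, while the per-range rule also preserves the weak returns in range bin 1 — exactly the weak-reflection case (e.g., road curbs) raised by the reviewer.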
---
Rebuttal 2:
Comment: Dear reviewer v56z,
As the discussion period is approaching its end, we kindly invite you to review our detailed rebuttal. We believe it addresses the concerns you raised in your review. We also welcome any further comments you may have.
Thank you again for your time in reviewing our paper. | Rebuttal 1:
Rebuttal: Dear reviewers and ACs,
We would like to express our sincere gratitude for the careful inspection and constructive feedback from all the reviewers. We are glad to see that all of the reviewers in general hold a positive attitude towards our paper in the pre-rebuttal period. Among the positive comments: our utilization of 4D radar tensor (4DRT) data is well-motivated (reviewers kdzm, v56z, 7z3i, XRVn); our pipeline for tackling the challenges associated with 4DRT is innovative (reviewers v56z, f9yn, XRVn); our experiments are extensive and noteworthy (reviewers kdzm, 7z3i, XRVn); and our paper and demo are well-crafted (reviewers 7z3i, XRVn). We appreciate these comments and will carry them forward in our future work.
For concerns and questions, we address them one by one in the rebuttal for each individual reviewer. Here, we provide our general answers to two general questions raised by the reviewers, including:
**1. Computation complexity and requirement.**
Answer:
To evaluate the efficiency of RadarOcc, we conducted model inference on a single Nvidia RTX 3090 GPU, achieving an average inference speed of approximately 3.3 fps. Although there is still a gap to real-time operation (10 fps), our inference speed already surpasses that of many camera-based methods, as reported in Table 8 of https://arxiv.org/pdf/2303.09551.
Further improvements in inference speed can be achieved by reducing network complexity and applying precision reduction techniques, such as converting model precision from Float32 (FP32) to Float16 (FP16). To validate this, we simplified the feature encoding and aggregation modules by removing some redundant layers (e.g., reducing the number of layers in deformable attention) for efficiency, and converted the computationally intensive 3D occupancy decoding module from FP32 to FP16. These optimizations resulted in a 126% increase in inference speed, reaching approximately 7.46 fps, with only a minimal impact on performance.
Given the increasing computational power of modern embedded GPUs, such as the Nvidia Orin AGX, which can almost rival desktop GPUs like the Nvidia GTX 2090, we believe this enhanced inference speed demonstrates the potential for real-time application of our method in future vehicle systems, especially if further model quantization is applied. Please refer to Tables 1 and 2 in the global rebuttal PDF for detailed changes in performance and runtime for each module. We will include these results as part of the efficiency evaluation in our paper.
**2. Dependence on a single dataset.**
Answer: We share your concerns regarding dataset usage, but currently, this is the best available option. K-Radar is the only dataset that provides publicly available 4D radar tensor (4DRT) data, which is essential for our RadarOcc method. Our method is specifically designed for 4D radar tensor input, rather than the less informative radar point clouds, making it incompatible with other 4D radar datasets like VoD and TJ4DRadSet. However, if another 4D radar tensor dataset becomes available, our algorithm can be directly applied to it.
While our evaluation is limited to the specific radar configuration of K-Radar, using raw radar data helps mitigate the influence of different point cloud generation algorithms (e.g., CFAR) across various radar systems. This suggests that our method could generalize well to other radar sensors that provide data in the same 4DRT format. In this work, our goal is to demonstrate the potential of 4D radar tensors for this task and to raise awareness within the community and among practitioners about the value of this unique data format.
We warmly welcome any additional questions during the discussion period. Once again, we appreciate everyone's effort in reviewing.
Pdf: /pdf/ec26727ec9e29a90aa8cff094b85ed9ee7fbc9eb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents RadarOcc, a method that enhances 3D occupancy prediction for autonomous vehicles using 4D imaging radar. It directly processes the 4D radar tensor, aiming to overcome the limitations of sparsity and noise associated with conventional radar processing. The methodology introduces techniques such as Doppler bins descriptors and sidelobe-aware spatial sparsification to improve data integrity and scene detail retention. The evaluation is conducted on the K-Radar dataset, with RadarOcc demonstrating superior performance compared to traditional LiDAR and camera-based methods, particularly under adverse weather conditions.
Strengths: The utilization of 4D imaging radar data directly (instead of converting it to sparse point clouds) allows for a more comprehensive environmental perception. It is also good to consider different weather conditions.
Weaknesses: 1. The processing of dense radar tensors can be computationally expensive, potentially limiting real-time application in less powerful onboard systems.
2. The current framework processes single-frame data, which may not fully capture the dynamic nature of driving environments, possibly affecting prediction reliability in highly dynamic scenarios.
3. As this study focuses on solving real problems in autonomous driving, it is important to thoroughly justify why 4D imaging radar has advantages over LiDAR in addressing issues like occupancy prediction. Can methods related to LiDAR really not solve these problems in various environments? In fact, many autonomous vehicles not only refrain from using mm-wave radar but also do not use LiDAR, managing to solve most issues using only vision.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does the RadarOcc system perform in terms of real-time processing, and what are the computational requirements?
2. Could the method be integrated with temporal data to predict dynamic changes in the environment, and if so, what modifications would be necessary?
3. Are there specific scenarios or environments where RadarOcc's performance might be significantly reduced, such as urban canyons or areas with high radar interference?
4. How does the system handle objects with low radar cross-section, such as pedestrians or animals?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: 1. The method's high computational demand may not scale well to lower-end processors commonly used in commercial vehicles.
2. The lack of temporal modeling could limit the system's predictive capabilities in highly dynamic environments.
3. It remains unclear how well the method generalizes across different radar systems or configurations not represented in the K-Radar dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer kdzm:
We sincerely thank you for providing valuable comments and raising insightful questions about our work. We are glad that you acknowledge that the utilization of 4DRT data allows for a more comprehensive environmental perception, and our work considers different adverse conditions. Here we answer all the questions and hope they can address your concerns.
**Q1: How does the RadarOcc system perform in terms of real-time processing, and what are the computational requirements?**
A1: Thank you for raising this concern. Please refer to our general answer to **‘Computation complexity and requirement’** question in the global rebuttal.
**Q2: Could the method be integrated with temporal data to predict dynamic changes in the environment, and if so, what modifications would be necessary?**
A2: Thanks for providing this comment. In this work, we only consider the task of 3D occupancy prediction with single-frame 4DRT data. In general, there are two ways to integrate our method with temporal data to improve the prediction reliability.
One approach is to **accumulate multiple frames** captured by the radar sensor and feed them into the network together to obtain per-frame 3D occupancy predictions. To achieve this, we can incorporate a **temporal attention mechanism** after the spherical-to-Cartesian feature aggregation module. Temporal self-attention can be employed to explicitly attend to the voxel features of every frame and output **per-frame spatio-temporal voxel features**, which can be input to our 3D occupancy decoding module for sequence-based prediction.
Another way is to utilize the **information from historical frames** to benefit the 3D occupancy prediction for the current frame. To propagate previous latent information to the current frame, we can add a **temporal update module** (e.g., an LSTM or GRU) after the spherical-to-Cartesian feature aggregation module that treats the voxel features as the hidden state and updates it temporally. The **newly updated voxel features** contain temporal information and can produce more reliable predictions for the current frame.
We would consider one of the aforementioned approaches in our future work and discuss this promising direction in our revised paper.
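As an illustration of the second option, the sketch below shows a scalar GRU-style update (our own toy with made-up weights, not the paper's architecture) in which a voxel feature plays the role of the recurrent hidden state updated by each incoming frame:

```python
import math

# Minimal scalar GRU-style update: h is a carried voxel feature treated as
# the hidden state, x is the new frame's feature. Weights are toy values.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, w):
    """One GRU update of the carried voxel feature h given frame feature x."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)                 # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)                 # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde                     # blended state

w = {"wz": 1.0, "uz": 0.5, "wr": 1.0, "ur": 0.5, "wh": 1.0, "uh": 0.5}
h = 0.0
for x in [0.2, 0.5, -0.1]:   # per-frame voxel features
    h = gru_step(h, x, w)
```

In practice the gates learn how much historical voxel information to carry into the current frame's prediction; here they simply blend the old and candidate states.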
**Q3: As this study focuses on solving real problems in autonomous driving, it is important to thoroughly justify why 4D imaging radar has advantages over LiDAR in addressing issues like occupancy prediction. Can methods related to LiDAR really not solve these problems in various environments? In fact, many autonomous vehicles not only refrain from using mm-wave radar but also do not use LiDAR, managing to solve most issues using only vision.**
A3: Thank you for your thoughtful feedback. While LiDAR and cameras perform well under normal conditions, they face significant challenges in extreme weather. For example, in our study (as shown in Figure 3 and our demo videos), rain-induced glare obscured camera lenses, and LiDAR struggled to detect certain objects, occasionally missing ground points entirely. These sensor limitations underscore the difficulties autonomous vehicles encounter without mmWave radar in adverse weather. In dynamic and unpredictable settings, 4D imaging radar provides unique robustness against adverse conditions like fog, rain, and snow, making it a crucial component for enhancing the reliability and safety of autonomous driving. In practice, each sensor modality contributes to different scenarios and plays a vital role in safety-critical autonomous driving, with every piece, including our radar, being essential for long-term mobile autonomy.
**Q4: Are there specific scenarios or environments where RadarOcc's performance might be significantly reduced, such as urban canyons or areas with high radar interference?**
A4: Thank you for raising this concern. In crowded areas like urban canyons, the portion of **multipath reflections** in the raw radar measurement would increase, and **inter-object** occlusions would become severe. As a result, the performance of RadarOcc might be affected. However, several design choices in our network help mitigate these issues. For example, our proposed sidelobe-aware spatial sparsification reduces the noise level while preserving important measurements, and our feature encoding and aggregation modules allow for more robust feature extraction. We are committed to refining RadarOcc to further address these challenges in the future, ensuring robustness across various environments.
**Q5: How does the system handle objects with low radar cross-section?**
A5: Thanks for your valuable comment. In this work, we address objects with low radar cross-section (RCS) from two key perspectives:
**Input Perspective**: We utilize 4D radar tensor (4DRT) data instead of radar point clouds for 3D occupancy prediction. This approach avoids the loss of weak signal returns that can occur during the point cloud generation process, e.g., those filtered out by the Constant false alarm rate (CFAR) detection, preserving more measurements from low RCS objects compared to radar point clouds.
**Method Perspective**: Our sidelobe-aware spatial sparsification technique selects the top-$N_r$ elements for each individual range rather than the entire dense RT. As shown in Fig. 2, this method retains critical measurements scattered across different ranges, including both low and high RCS objects. This contrasts with percentile-based methods, which often concentrate on elements corresponding to high RCS objects, thereby missing important data from low RCS objects.
*An example of a detected pedestrian is shown in Figure 2 in the global rebuttal PDF.*
**Q6: It remains unclear how well the method generalizes across different radar systems or configurations not represented in the K-Radar dataset.**
A6: Thank you for highlighting this concern. Please refer to our general answer to the **Dependence on a single dataset** answer in the global rebuttal.
---
Rebuttal 2:
Title: Rebuttal follow up
Comment: Dear reviewer kdzm,
As the discussion period is approaching its end, we kindly invite you to review our detailed rebuttal. We believe it addresses the concerns you raised in your review. We also welcome any further comments you may have.
Thank you again for your time in reviewing our paper. | null | null | null | null | null | null |
DEL: Discrete Element Learner for Learning 3D Particle Dynamics with Neural Rendering | Accept (poster) | Summary: This paper presents a new method to learn 3D particle dynamics from sparse 2D observations using inverse rendering.
In contrast to previous work, it does not learn a fully unconstrained model but makes use of known physical priors.
The method learns graph network kernels to model the particle interaction forces in the DEA framework, kernels that are classically designed by domain experts.
These graph networks are trained from 2D observations using a differentiable renderer without any 3D supervision.
The method is evaluated on different scenes and using different physical materials.
It surpasses the provided baseline comparisons in the tests shown in the paper.
Strengths: The paper presents a novel combination of graph networks and classical particle-based simulation that, in conjunction with a differentiable renderer, allows it to recover 3D particle dynamics from images.
The presented method is technically sound, well explained, and easy to follow.
The evaluation is sensible and very extensive, encompassing different material configurations, and shows a significant improvement over the SoTA.
The source code and dataset will be made available to the public.
Weaknesses: There are several things that are not clear to me, especially related to scene initialization and the initial velocities. These points are detailed in the questions below. I don't think these are fundamental problems, but I nonetheless hope the authors can clarify these aspects in the rebuttal.
Technical Quality: 3
Clarity: 3
Questions for Authors: - 3.3: the $v_{ij}^{n}$ or $v_{ij}^{t}$ used in equations 14 to 16 do not match any of the definitions in appendix B.
- For training (3.4), how is the scene initialization done with the renderer? And how accurate is the estimated initial velocity based on this initialization?
- Closely related: how sensitive is your method to inaccuracies in the initial velocity?
- How long are the sequences used in your experiments?
- How long is your training (number of iterations, epochs, or variable updates)?
- Are the views shown in the figures the ones that were used for training or are they novel views?
There are several grammatical errors throughout the paper, for example:
- 291 "metrix"
- 291 "cham**b**er distance" should be "cham**f**er distance"
- 306 There seems to be a new paragraph missing before "Results in particle view"
- 311 "we claim the reason that..."
- 343 "while keep the priors regulated..."
I would encourage the authors to do a complete pass over the paper to fix any writing mistakes.
The tables and figure on page 9 are also very squeezed.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are briefly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness and Questions 2 and 3: Scene and Velocity Initialization**
We thank the reviewer for pointing out these ambiguities. We explain these details in the following and will add them to our revised manuscript for better readability.
**Scene Initialization.** As we stated in the first paragraph of *Section 3*, we adopt the Generalizable Point Field (**GPF**) [1] to initialize the scene. GPF is a point-based NeRF-like approach, which can directly convert multiview images into a point-based NeRF representation in a generalizable way because GPF is pretrained on large 3D reconstruction datasets. In detail, GPF first projects 2D images into 3D points by predicting their depth maps. Second, it hierarchically aggregates features from images to the point scaffold to obtain separate appearance and geometry features. GPF can render changeable content by moving these featured points. We slightly fine-tuned the GPF on our training set based on the authors' provided checkpoints for better reconstruction.
Moreover, we also evaluate some other point-based renderers in *Section G2, Figures 10 and 11* in the Appendix; the results illustrate that **DEL is also robust to these renderers**. We would like to note that any point-based renderer can be adopted to train DEL as long as **(1)** it can render different content as the points move and **(2)** it is fully differentiable, so gradients can be propagated.
**Velocity Initialization.** We follow the methods in PAC-NeRF [2] and PhysGaussian [3], using the first three frames of images to optimize the initial velocities. We assume that the initial velocities of all particles of a moving object are identical. Hence, we only need to run the following loop:
1. optimize the velocity,
2. move the particle set according to it,
3. render novel views based on the new position of the particles
4. compute the loss between the rendered views and the real views
5. update the velocity to align the renderings with the real views
After the error is smaller than a predefined threshold, the loop is broken.
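The loop above can be sketched as follows; this is our own toy version with a trivial stand-in "renderer" and an analytic gradient (a real differentiable renderer would supply the gradient via backpropagation), not the authors' implementation:

```python
# Toy analysis-by-synthesis loop: fit one shared initial velocity by
# gradient descent until the rendering loss drops below a threshold.

def render(positions):
    # Stand-in renderer: the "image" is just the particle positions.
    return list(positions)

def loss(img_a, img_b):
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def estimate_velocity(p0, observed_frames, dt=1.0, lr=0.05, tol=1e-8):
    """Fit one shared velocity so rendered frames match the observations."""
    v = 0.0
    for _ in range(2000):
        err, grad = 0.0, 0.0
        for t, obs in enumerate(observed_frames, start=1):
            pred = render([p + v * t * dt for p in p0])
            err += loss(pred, obs)
            # Analytic gradient of the squared error w.r.t. v; a real
            # differentiable renderer would provide this by backpropagation.
            grad += sum(2 * (a - b) * t * dt
                        for a, b in zip(pred, obs)) / len(p0)
        if err < tol:       # loop breaks once the error is small enough
            break
        v -= lr * grad
    return v

# Three particles sharing one unknown velocity; frames 1-3 are "observed".
p0 = [0.0, 1.0, 2.0]
true_v = 0.3
frames = [[p + true_v * t for p in p0] for t in (1, 2, 3)]
v_hat = estimate_velocity(p0, frames)
```

Because all particles share one velocity, a handful of frames pins it down in this toy; the real setting compares rendered novel views against the captured images instead of raw positions.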
Following the reviewer's advice, we additionally evaluate the effect of the approximated velocities. We train two models on the Plasticine scene, one using estimated velocities and another using actual velocities. Then, we test each model on two test sets, one providing the real velocities and another not. The results are shown below:
| Test/Train | w | w/o |
|-------|-------|-------|
| w | 7.37 | 7.48 |
| w/o | 7.82 | 7.54 |
This table reports the Mean Rollout Chamfer Distance for each situation. "w" refers to "with real velocities" and "w/o" refers to "without real velocities" (i.e., using estimated velocities).
From the table, we can see that the results do not exhibit significant fluctuations, which may be due to the simplicity of our initial-velocity estimation setting. How to estimate initial velocities in more complex environments remains a worthy topic for future research.
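For reference, the metric in the table above is a Chamfer distance between predicted and ground-truth particle sets; a minimal symmetric version can be sketched as follows (our own illustration — the exact normalization and averaging over rollout steps used in the paper may differ):

```python
# Minimal symmetric Chamfer distance between two point sets: for each point
# in one set, find the squared distance to its nearest neighbor in the
# other set, average, and sum both directions.

def chamfer(a, b):
    def sq(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    d_ab = sum(min(sq(p, q) for q in b) for p in a) / len(a)
    d_ba = sum(min(sq(q, p) for p in a) for q in b) / len(b)
    return d_ab + d_ba

pred = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # predicted particles
gt   = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0)]   # ground-truth particles
```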
*[1] Wang J, et al.. Learning robust generalizable radiance field with visibility and feature augmented point representation. International Conference of Learning Representations (ICLR) 2024*
*[2]Li X, Qiao Y L, Chen P Y, et al. Pac-nerf: Physics augmented continuum neural radiance fields for geometry-agnostic system identification. International Conference of Learning Representations (ICLR) 2023*
*[3]Xie T, Zong Z, Qiu Y, et al. Physgaussian: Physics-integrated 3d gaussians for generative dynamics. Conference on Computer Vision and Pattern Recognition. (CVPR) 2024*
> **Question 1: Notation**
We thank the reviewer very much for the careful check. The $v_{ij}^n$ in Equation 14 represents the relative velocity of particle $i$ with respect to $j$; since the velocity is a vector, it should be written as $\mathbf{v}_{ij}^n$ (bold font denotes a vector).
Similarly, the $v_{ij}^t$, which denotes the tangential velocity, should be written as $\mathbf{v}_{ij}^t$. We thoroughly checked all notations in the paper, corrected them all, and updated the Nomenclature in Appendix B accordingly.
> **Question 4: The length of the sequences**
For different simulation scenarios we have different sequence lengths. We report the simulation steps (i.e. the sequence lengths), which are averaged over the entire test set for each scenario, in the following Table.
| Scenario | plasticine | SandFall | Multi-Obj | FluidR | Bear | Fluids |
|-------|-------|-------|-------|-------|-------|-------|
|Avg Step Number | 86 | 94 | 132 | 155 | 124 | 138 |
To better clarify this, we add this description to Section F in our revised manuscript.
> **Question 5: Training time**
Thanks for this comment. As stated in our answer to the Weakness, we first fine-tuned the GPF with its default learning rate on our training set for 500 iterations, which takes only 12 minutes. We then train our model on a given scenario, for example Plasticine, for about 24,000 iterations on a single NVIDIA RTX 3090, by which point the loss has stabilized. The training time for a given scenario is about 3.6~4 hours. We add this training information to *Section D* of our revised paper for the convenience of readers.
> **Question 6: Test views**
Thanks for this question. We would like to clarify that all camera views and initial conditions in the test set have never been seen in the training set, as we believe this better evaluates the generalization ability of these models. We add this clarification to the first paragraph of Section 4 in our revised paper for better readability.
> **Question 7: grammar errors and layout**
We thank the reviewer very much for pointing out these typos and errors. Following these suggestions, we have corrected them in the revised version. Moreover, we carefully reviewed and improved the text's language and grammar, and optimized the layout to improve the quality of the paper. The revised version is now clearer and more readable.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for the clarifications. Trusting that these unclear parts will be clarified in a final version I'd be happy to still support an "accept" for this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for these valuable comments and encouraging feedback. We are sure that they improve the quality of our manuscript by a large margin. Thanks again for your time and attention in the review and discussion periods. | Summary: This paper proposes to incorporate the neural network with Discrete Element Analysis framework for particle-based simulation. The method adopts GPF as differentiable renderer and is trained through multi-view videos. The proposed model delivers faithful rollouts and outperforms the baselines.
Strengths: * The proposed method deeply integrates the DEA and obtains robust performance.
* The model can be trained through multi-view videos in an end-to-end manner.
Weaknesses: 1. To simulate different scenes, such as deformable objects and fluid, it seems that one has to train different models on different cases, indicating scene-specific design with limited generalisation abilities.
2. Results verifying the generalisation abilities are missing. For example, after being trained on the deformable bunny shown in Figure 5, can the model simulate objects of other shapes, such as a car, or objects composed of more particles?
3. The method seems to be capable of dealing with objects with limited particles and to struggle to simulate large numbers of particles.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. To enable the training on different scenes through the images, are different pertained GPF needed for different scenes?
2. How is the performance of the method for long-term predictions? For example, a trajectory with 150 frames.
3. At L253, the author claims to use L2 loss. However, for equation 17, it seems the loss is L1. Is this a typo?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Please refer to the weaknesses and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness 1: Scene Specific**
Thanks for this comment. The proposed approach indeed needs to be trained for modeling different materials. In fact, we prefer to describe it as **material-specific** training rather than scene-specific training. This is because DEL can implicitly encode the mechanical properties of a certain material into the graph neural operator during training, and after training, the trained graph neural operator can be directly inserted **into any unseen scenes with different initialization conditions**, such as shapes and velocities.
We also conducted additional experiments and analysis on this point in *Section E, Figure 7, Figure 13, and Figure 14* in the Appendix. It can be observed that various materials can be simulated by **simply exchanging the graph operators trained in different scenes**. We believe this also demonstrates the generalization ability of DEL. An ideal way to use this method is to train the graph kernel for a specific material on a suitable dataset in advance, and then invoke it whenever one needs to simulate that material.
> **Weakness 2: Generalization Ability**
We would like to clarify that all visualized results in our paper are from the **test set**, including the bunny in Figure 5 and the duck in Figure 22; neither initial shape was seen by the model during training. As we stated in the *Dataset* part of Section 4, for each specific material pair, e.g., the plastic and rigid pair in the **Plasticine** scenario (Figure 5), there are 128 training sequences and 12 testing sequences, all with different initial shapes, velocities, and locations. We would like to stress that **the "Plasticine" scenario does not refer to the specific "bunny" or "duck"; it refers to the plastic material they are made of**.
For example, in the training set, the plasticine might be initialized as a "sphere" or a "dog", but in the test set, it can be a "bunny" or a "duck". Therefore, we can claim that our model generalizes well to initial conditions that never appeared in the training phase, as long as the same type of material is simulated. Also, the dataset and experimental setups are identical across all models discussed in the paper. To conclude, **DEL can effectively generalize to unobserved scenarios with different initial conditions** when simulating the same materials.
Furthermore, in *Figure 8, 9 and Section G.3, G.4* in the Appendix, we evaluate the training efficiency of our model. The results show that even when trained with a limited amount of data, such as **1/8** training dataset, our model still achieves impressive performance. In contrast, other baselines that do not adopt physical priors exhibit a rapid decline in performance as the training data is reduced. This also indicates the generalization ability of our DEL.
> **Weakness 3: The Large Number of Particles**
Thanks for this comment. We indeed have not conducted experiments in particularly large scenes because representing large scenes with particles requires a substantial amount of VRAM. However, unlike methods that model the dynamics across the entire spatial domain, our approach models only the interactions between particles, analyzing each pair individually. This significantly **reduces the computational overhead** when simulating relatively large scenes and is **scale-independent**. Due to this scale-invariant property, our method should outperform others when simulating large scenes. In the future, we plan to explore hierarchical simulation techniques to reduce memory consumption, thereby improving the efficiency of simulating larger scenes.
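The pairwise, cutoff-limited structure described above can be sketched as follows (our own toy in pure Python; the linear spring merely stands in for the learned graph kernel):

```python
# Toy pairwise force accumulation: forces are computed only over neighbor
# pairs within a cutoff, so cost scales with the number of interacting
# pairs rather than the full spatial domain.

def neighbor_pairs(positions, cutoff):
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            if d2 <= cutoff ** 2:
                pairs.append((i, j))
    return pairs

def accumulate_forces(positions, cutoff, kernel):
    """kernel(pi, pj) -> force on i from j; equal-and-opposite applied to j."""
    forces = [[0.0] * len(positions[0]) for _ in positions]
    for i, j in neighbor_pairs(positions, cutoff):
        f = kernel(positions[i], positions[j])
        for k, fk in enumerate(f):
            forces[i][k] += fk
            forces[j][k] -= fk
    return forces

def spring(pi, pj, rest=1.0, k=2.0):
    # Toy linear-spring kernel standing in for the learned graph operator.
    d = [a - b for a, b in zip(pi, pj)]
    r = sum(x * x for x in d) ** 0.5
    return [k * (rest - r) * x / r for x in d]

pos = [(0.0, 0.0), (0.5, 0.0), (5.0, 0.0)]  # third particle out of range
F = accumulate_forces(pos, cutoff=1.0, kernel=spring)
```

Only particles within the cutoff contribute, and each pair is processed once with equal-and-opposite forces, so the cost grows with the number of interacting pairs rather than the extent of the scene.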
> **Question 1: GPF related**
Thanks for this question. The GPF does not need to be retrained for different scenes, because GPF is a **generalizable point-based NeRF** method: it has already been pretrained on large 3D reconstruction datasets and can directly convert multiview images into a point-based NeRF representation without backpropagation. Moreover, we slightly fine-tuned the GPF on our training set based on the authors' provided checkpoints for better reconstruction quality.
To improve the readability of this paper, we introduce the above basic preliminaries about GPF in *the first paragraph of Section 3* following this comment. Moreover, we add detailed descriptions of the point-based renderer that we use in our revised Appendix.
Besides, we also evaluate the performance of our method when different point-based renderers are adopted in *Section G2, Figures 10 and 11* in the Appendix. The results illustrate that DEL is robust to different renderers. Any renderer can be used to train DEL as long as it meets the following requirements: **1.** Point-based: it changes the rendered content with the movement of points. **2.** Differentiable: it can propagate gradients.
> **Question 2: Long-term prediction**
The average number of time steps for each scenario in the test set is shown in the following table:
| Scenario | plasticine | SandFall | Multi-Obj | FluidR | Bear | Fluids |
|-|-|-|-|-|-|-|
|Avg Step Number | 86 | 94 | 132 | 155 | 124 | 138 |
Among them, Plasticine and SandFall involve relatively short-term dynamics, while **Multi-Obj, FluidR, and Fluids provide results for long-term predictions** with sequence lengths of approximately **130 to 160** steps. In addition, all metrics reported in this paper, including *CD, EMD, PSNR, SSIM, and LPIPS*, are averaged across all time steps of all sequences in the test set. This indicates that our method outperforms these counterparts in both long-term and short-term predictions. We also plan to explore much longer sequences with these learned simulators in follow-up work.
> **Question 3: A typo in the definition of loss function**
We thank the reviewer very much for the careful check. This is a typo in *Line 253*; the loss should be the L1 loss, and we have corrected it in our revised paper. | Summary: The paper proposes a framework to learn 3D dynamics from 2D observations. DEM uses hand-designed kernel functions to model interaction forces between particles, which may vary significantly for different material types. The paper replaces these kernels with learnable GNN kernels. The time integration still follows DEM. Combined with a differentiable particle renderer, these kernels can be trained to match 2D observations.
Strengths: The framework is quite general, with force-based time integration providing a strong physics prior. The use of learnable kernels to model point interactions enables the DEM to effectively fit observations.
Numerous comparisons are conducted, and the results are promising.
Weaknesses: The evaluations are conducted only on synthetic data.
The learned particle interactions are black-boxes, which are not explainable.
Technical Quality: 3
Clarity: 3
Questions for Authors: [Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics](https://arxiv.org/abs/2304.14369) should be cited, since it solves a similar task.
Are the material types in particle attribute A pre-known? Are the initial particle distributions known? The known assumptions should be listed in the paper.
How are particle colors assigned? In reality, objects usually have textures. Manually assigning colors may be impractical.
The particle initialization step is illustrated in Fig. 1, but no technical details are discussed in the text. It seems that the initial geometry is provided in the experiments. In that case, the known-geometry assumption should be illustrated in the figure instead. However, geometry reconstruction is important for real data applications.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness 1: The evaluations are conducted only on synthetic data.**
Thanks for this comment. At present, the proposed method is indeed only evaluated on synthetic data. This is because capturing multiview videos for a dynamic process in the real world is difficult. Because it is hard to **simultaneously capture multiple dynamic videos** of the same object. Additionally, some objects, such as plastic materials, may become damaged via collision when shooting the first video. So it is **hard to be repeated**. We have also pointed out this issue in *the Limitation section* of our original paper.
In the future, we are going to investigate potential solutions for that. First, transferring the model trained on synthetic data to real-world situations might be possible. Second, we are going to explore few-shot or one-shot learning to learn dynamics with a single real-world video, which might be achieved by integrating more explicit physics and geometry priors into learning-based frameworks.
> **Weakness 2: The learned particle interactions are black-boxes, which are not explainable.**
The DEL is proposed by **integrating graph neural operators into a classic mechanics analysis framework**; we intentionally design these operators to approximate the mapping from particle deformation to particle interaction forces. Therefore, we would like to claim that this approach is **partially interpretable**, for the following reasons:
**First**, we directly adopt the predicted particle interaction forces to update the velocities and positions of all particles and achieve accurate results, which indicates that **the predicted forces are correct**, since **incorrect forces cannot result in precise deformations** under the framework of Euler integration.
**Second**, since we employ the framework of the Discrete Element Method (DEM), our model naturally satisfies **the conservation of momentum and energy**. In addition, given our assumption that particle mass remains constant and that particles neither vanish nor are created spontaneously, our method also adheres to the law of **conservation of mass**.
**Third**, we conduct additional analysis to evaluate the traits of these neural operators in *Section H and Figure 17* in the Appendix. We visually **study how the predicted forces relate to the particle deformation** of different materials. The relationships are implicitly learned by our DEL. We observe that the **force-deformation curves indeed reflect the real-world properties** of these materials.
There are two examples to illustrate this. **(1)** With plastic materials, the force grows as they stretch at the beginning, but after stretching to a certain extent they begin to yield, and further displacement does not increase the force. This behavior closely mirrors real plastic materials. **(2)** For rigid solid materials, even tiny shape changes cause extremely large forces that maintain their initial shapes, which also matches what we observe in the real world.
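To make the Euler-integration argument above concrete, here is a minimal sketch of how predicted interaction forces would drive the particle state update (our own illustrative code with hypothetical names, not the authors' implementation):

```python
import numpy as np

def euler_step(pos, vel, forces, mass, dt, gravity=np.array([0.0, -9.8, 0.0])):
    """Semi-implicit Euler update: predicted interaction forces (plus gravity)
    update velocities first, then the new velocities update positions."""
    acc = forces / mass + gravity  # per-particle acceleration, shape (N, 3)
    vel = vel + dt * acc           # velocity update
    pos = pos + dt * vel           # position update with the new velocities
    return pos, vel
```

Any error in the predicted forces propagates directly into the positions through this update, which is the basis of the "accurate deformations imply correct forces" reasoning.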
> **Question 1: Cite a relevant paper**
We appreciate this reviewer for bringing such a relevant paper to us. We cite the paper in a proper place in our revised paper.
> **Question 2 and 4: Known Assumptions and Particle Initialization**
We would like to clarify that both the material types and the initial particle distribution are **unknown** in our DEL framework. In *particle attribute A*, we only tell the DEL which particles belong to the same material, but we **do not provide the specific material type**. Only the image sequences and their corresponding camera poses are needed to train the model. The material types, including their mechanical properties, are **implicitly encoded into the graph neural operators** during training.
The initial particle distribution can be obtained by using the Generalizable Point Field (**GPF**) [1] in this work, which is a differentiable PointNeRF-like 3D reconstruction method. We introduce this step in *Section 3.1*. We would like to emphasize that our method can be seamlessly combined with other point-based renderers, as long as they can **(1)** render the deformed scene according to the point movement and **(2)** propagate gradients. The simulation starts only after the initialization of the particles is finished. Moreover, we evaluate our method with two other differentiable point-based renderers, i.e., ParticleNeRF and PhysNeRF, in *Section G2 and Figures 10 and 11* in the Appendix; the results show that our DEL also **delivers the best performance no matter which renderer is used**.
Following the reviewer's suggestion, we add detailed descriptions of the **assumption and initialization** of particles in *the first paragraph of Section 3* in our revised paper. Furthermore, to improve the readability, we add some basic background knowledge of the GPF in our revised Appendix.
*[1] Wang J, et al.. Learning robust generalizable radiance field with visibility and feature augmented point representation. International Conference of Learning Representations (ICLR) 2024*
> **Question 3: Color and Texture**
As we stated in the previous answers, the colors of these particles are faithfully reconstructed from images using GPF. These colors and textures remain unchanged during the simulation process. We agree with the reviewer that textures in reality can be varied and complicated. However, we would argue that **complex textures are actually beneficial for learning dynamics, because their color gradients in the image better reveal the direction of deformation at specific points on an object's surface**, which helps reduce uncertainty. Conversely, for an object with uniform color it is difficult to ascertain the exact direction of deformation at a point on the surface, because there are no color gradients around that point.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I do not have further questions.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer's constructive suggestions, time, and attention in the review and discussion periods, which help us a lot in polishing our manuscript and updating the revision version. | Summary: The paper considers the problem of physical modeling the dynamics of 3D objects in space using only 2D observations. The authors propose to solve this problem by viewing objects as sets of interacting points and using differentiable rendering of these point clouds. In this pipeline, neural models are used to directly predict the dynamics of 3D particles. Unlike existing methods, the authors propose to use a physically principled framework (Discrete Element Analysis) to develop a partially interpretable neural model that can be used to predict the target dynamics. This framework decomposes the forces applied to the points into gravity, potential, and viscous interaction forces (and further into normal and tangent components). The proposed model thus is constrained to predict certain components from these force decompositions resulting in a more principled, interpretable, and effective dynamics prediction.
The authors propose new more challenging synthetic datasets containing variable materials and objects, evaluate on this data, and compare the results to several existing baselines. The proposed method shows improvements over the baselines in all the scenarios. The authors also provide an ablation study showing the importance of different components.
Strengths: * The proposed model seems like an effective combination of a physically principled framework and neural model, which potentially can be further explored and improved.
* The proposed method significantly outperforms all baselines.
* The authors propose a new dataset.
Weaknesses: * The written text requires some additional polishing (citations should not be treated as nouns, sometimes the layout of figures and text is too tight, and there are some grammatical errors and not correctly formulated sentences).
Technical Quality: 4
Clarity: 2
Questions for Authors: How are CD and EMD computed? Are any of the intermediate states used to compute the metrics or are they only computed for the final state? If the proposed model is interpretable does it make sense to compare the predicted particle forces in all the intermediate steps with the ground truth forces from the simulations?
Some examples show that objects can be «torn» over the course of the considered scenario. Currently, the resulting distinct parts of the torn object will still be considered as one single object by the model, is not it? Does it cause problems? Have you considered any solutions for it?
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: The limitations are properly addressed in the text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness: Polishing Language**
We thank the reviewer for pointing this out. Following this comment, we have thoroughly revised the language and grammar to enhance the overall quality of the text, carefully checked the citations, and optimized the layout. The updated version is substantially improved in terms of academic language.
> **Question 1: Metrics Calculation**
As stated by the reviewer, our goal is to predict a sequence of particle trajectories, so there are many intermediate states. We would like to clarify that **all the metrics** in this work, including *CD, EMD, PSNR, SSIM, and LPIPS*, are derived by averaging over all time steps of all sequences in the test set. Hence the reported metrics cover **all intermediate states and all sequences**. We will clarify this point in *Section 4* of our revised manuscript following this comment.
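For concreteness, this averaging of a per-frame metric (here Chamfer Distance) over all intermediate states of a trajectory could be sketched as follows (illustrative code with hypothetical names; the paper's exact metric implementation may differ):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def sequence_metric(pred_seq, gt_seq, metric=chamfer_distance):
    """Average a per-frame metric over all intermediate states of one sequence."""
    return float(np.mean([metric(p, g) for p, g in zip(pred_seq, gt_seq)]))
```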
> **Question 2: Interpretable**
Thanks for this comment. Yes, the proposed method is partially interpretable because we predict the interaction forces between particles with our physical neural operators. The predicted forces are directly applied to update the velocities and positions of particles via Euler integration; therefore, accurate next positions can **only be obtained through correct force predictions**. However, we do not have the ground truth of the particle interaction forces. The reason is that all the synthetic data are generated by the Material Point Method (MPM). In MPM, the interaction forces are not explicitly and directly derived; instead, the dynamics are simulated by transferring momentum between particles and background grids. Therefore, it is hard to obtain the ground truth of the interaction forces.
Nevertheless, we conducted additional analysis to validate **the physical meaning of the forces** predicted by our DEL. In *Section H and Figure 17 in the Appendix*, we visualize the **relationship between the predicted forces and particle deformations** for each type of material by accessing the learned graph operators with different inputs. These curves indeed reveal the inherent mechanical properties of these different materials. For example, for plastic materials, the force initially increases with displacement. However, when the deformation reaches a certain extent, the material undergoes yielding, and the force no longer increases with further displacement or even decreases. This phenomenon closely mirrors real-world plastic materials. Moreover, for the rigid body, even very small deformations can lead to a significant increase in force, which is also in agreement with real-world situations.
> **Question 3: Torn Objects**
In fact, the proposed method recognizes particles at the **material level** instead of the object level. If an object is torn apart, **the material of the resulting parts remains unchanged**, so they have the same mechanical properties. For example, a piece of plastic, even when torn into two halves, remains plastic. This may look as if DEL recognizes them as the same object, but, as stated above, these particles simply belong to the same material with constant properties, such as mechanical parameters and colors. Hence DEL does not suffer from these object-level issues.
Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning | Accept (poster) | Summary: This paper introduces a novel framework for cross-domain few-shot learning. The input images are first decomposed into low-frequency content and high-frequency structure using FFT. Then, the PRM-Net includes three branches, low-frequency, high-frequency, and main branch. The PRM-Net includes two priors to regularize the feature embedding network. The approach shows state-of-the-art results on multiple benchmarks.
Strengths: 1. The use of frequency decomposition to address CD-FSL task is easy to understand and implement.
2. The method is thoroughly evaluated on multiple benchmarks.
3. The method achieves state-of-the-art results on multiple benchmarks.
4. The paper provides a clear and detailed description of each component of the method.
Weaknesses: 1. The method avoids additional inference costs, however, the decomposition and additional branches (high-low frequency branches) could introduce significant computational overhead. Moreover, the improvement by incorporating the reconstruction prior seems marginal according to Table 3.
2. Lack of proper citation for the datasets mentioned in section 3.1, e.g., CropDisease, CUB, ISIC.
3. The reliance on fixed decomposition strategies like FFT might not be optimal for some scenarios, e.g., the foreground objects have similar colors to the background, or the background is complex. The whole model cannot be trained end-to-end due to FFT.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do you see the value of research in the (cross-domain) few-shot learning while the foundation models can perform well in zero-shot tasks?
2. Is it possible to combine this approach with fine-tuning strategies to further enhance performance in certain domains?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer HqaU :
### Weaknesses (1)
### Response 1 : About computational overhead.
1) Thanks for the comments. The proposed method indeed introduces additional computational cost during the training phase. We will further clarify this weakness in the manuscript.
2) We present the computational costs of our method during the training and inference phases in Tables 2 and 3 (due to character limitations, please see "To Reviewer SF8W"), respectively. As shown, the backbone and the feature reconstruction network in our method are lightweight, which to some extent mitigates the computational overhead of the high- and low-frequency branches. In addition, a portion of the computational overhead of our method comes from the FFT. In future work, we will explore efficient, learnable image decomposition techniques that employ differentiable kernels to capture information from different frequency bands of the images.
3) During the inference, the proposed method does not introduce additional computational overhead, resulting in efficient inference and good generalization.
### Response 2 : About the reconstruction prior.
We have analyzed the reconstruction prior in the manuscript (please see "Effectiveness of the proposed frequency prior" in 4.3. Ablation study.) and performed experiments (please see Table 3 in the manuscript). We copy the results from the manuscript here, as shown in Table 12. It can be seen that compared with the baseline, the proposed reconstruction prior can achieve better performance. This demonstrates the contribution of the proposed reconstruction prior.
**Table 12 : Verifying the alignment and reconstruction prior under 5-way 1-shot (5-way 5-shot ) setting.**
| Method | CUB | Places | Plantae | CropDisease | Ave. |
|---|---|---|---|---|---|
| Baseline | 47.05 (67.99) | 51.09 (71.74) | 39.26 (57.82) | 70.22 (89.54) | 51.90 (71.77) |
| Ours just alignment | 50.79 (72.65) | 51.42 (73.22) | 41.05 (60.93) | 70.80 (90.11) | 53.51 ( 74.22) |
| Ours just reconstruction | 50.55 (71.39) | 51.96 (72.60) | 41.11 (60.22) | 70.04 (89.44) | 53.41 (73.41) |
| Ours (alignment + reconstruction) | 51.55 (73.61 ) | 52.06 (73.78)| 41.55 (61.39) | 71.47 (90.68)| 54.16 (74.87) |
### Weaknesses (2)
### Response :
Thanks for the suggestion. We will supplement the references for CropDisease, CUB, and ISIC in the manuscript.
### Weaknesses (3)
### Response :
Thanks for the comments. We will further clarify this weakness in the manuscript. To alleviate this problem, we will explore learnable image decomposition methods as an alternative to the FFT. Essentially, the FFT uses fixed kernels for signal decomposition, making data-adaptive decomposition difficult. We are considering designing differentiable kernels to separately extract high-frequency and low-frequency information from images. Additionally, we are considering a model that generates kernel parameters conditioned on the original image for data-adaptive image decomposition. We will validate these ideas in future work.
### Questions (1)
### Response :
We think that it is valuable to study cross-domain few-shot learning (CD-FSL) in the era of foundation models. The reasons are as follows.
1) Adapting foundation models to CD-FSL has practical value. The core elements of CD-FSL are feature initialization and rapid adaptation. Early research has concentrated on using meta-learning methods to learn a good feature initialization model in the source domain and then quickly adapting the model to the target task with a small number of labeled samples. Foundation models essentially accomplish the first step of FSL. In future research, effectively adapting large models to cross-domain few-shot tasks is valuable.
2) Exploring the adaptation of foundation models to extreme CD-FSL holds significant research value.
Previous work [1] explored applying pre-trained foundation models to extreme cross-domain few-shot tasks. For comparison, we implemented our method under the same settings, and the results are shown in Table 13. As can be seen, even with the assistance of pre-trained foundation models, these methods still perform poorly in extreme cross-domain scenarios (e.g., Chest, ISIC, etc.).
Therefore, exploring the adaptability of foundational models to extreme CD-FSL tasks is valuable.
**Table 13 : Performance using the foundation model as feature initialization.**
|Method|Backbone|Setting|CUB|Cars|Places|Plantae|Chest|ISIC|EuroSAT|CropDisease|Ave.|
|---|---|---|---|---|---|---|---|---|---|---|---|
| P>M>F [1] | ViT-S (DINO pretrain) | 1-shot | 78.13 | 37.24 | 71.11 | 53.60 | 21.73 | 30.36 | 70.74 | 80.79 | 55.46 |
| **Ours** | ViT-S (DINO pretrain) | 1-shot | 85.07 | 42.82 | 73.71 | 56.60 | 23.32 | 35.13 | 72.03 | 81.82 | 58.81 |
| P>M>F [1] | ViT-S (DINO pretrain) | 5-shot | - | - | - | - | 27.27 | 50.12 | 85.98 | 92.96 | - |
| **Ours** | ViT-S (DINO pretrain) | 5-shot | 97.50 | 74.60 | 89.24 | 81.63 | 26.31 | 51.72 | 89.95 | 95.93 | 75.86 |
[1] Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference, CVPR 2022.
### Questions (2)
### Response :
Following the suggestion, we have fine-tuned our method on the target domain. Following common fine-tuning strategies, for each few-shot task in the target domain, we fine-tune the model using the support set and then test it on the query set. Due to time constraints, we have conducted experiments on the EuroSAT and ISIC target datasets as examples, as shown in Table 14. It can be seen that fine-tuning further improves the performance of the proposed method; for example, on the EuroSAT dataset, fine-tuning improved performance by 1.55%.
**Table 14 : Performance when fine-tuning our method under 5-way 5-shot setting.**
| Method | EuroSAT | ISIC |
|---|---|---|
| Ours | 81.24 | 48.70 |
| Ours+fine-tuning | 82.79 | 49.50 | | Summary: The paper introduces an innovative framework that leverages the concept of frequency priors for cross-domain few-shot learning. The novel idea of decomposing images into high and low-frequency components and integrating these into the meta-learning process is a creative advancement in the field.
Built upon established image transformation theories such as the Fourier Transform, the proposed method is grounded in a solid theoretical foundation, which enhances its credibility and applicability across various domains. The empirical validation through extensive experiments on multiple cross-domain benchmarks further substantiates the effectiveness of the framework, showcasing its superiority over state-of-the-art methods. And the experiment, along with the supplementary materials, has been very comprehensive, addressing most of my questions.
Strengths: The paper is commendably structured, with a clear and concise presentation of ideas, and it exhibits a high degree of reproducibility, further enhanced by the open sourcing of some of the code. The abstract and introduction effectively encapsulate the motivation and key contributions of the work, providing readers with a quick yet comprehensive understanding. The methodology is explained in a step-by-step fashion, making it accessible to readers who may not be intimately familiar with meta-learning or frequency domain analysis, thereby demonstrating the paper's strength in both clarity and comprehensiveness.
Weaknesses: (1) While the paper highlights the efficiency advantages of the proposed method, it falls short of offering a detailed analysis of computational complexity. A critical evaluation extending beyond inference times to include the total floating-point operations (FLOPs) during both training and inference phases is essential. Understanding the complete computational costs provides insights into the method's practicality, scalability, and suitability for deployment across varying computational environments.
(2) The paper could be improved by including more extensive ablation studies on the choice of backbone network and loss functions. The current presentation lacks a thorough exploration of why specific architectural decisions and loss formulations were made, which is crucial for understanding the contribution of these choices to the overall performance.
(2) Although the paper demonstrates strong empirical results, there is a need for a more profound theoretical analysis underpinning the effectiveness of frequency priors in cross-domain generalization. A deeper theoretical understanding would bolster the paper's claims and offer insights into the broader applicability of the proposed method.
Technical Quality: 4
Clarity: 4
Questions for Authors: (1) Is there a hyperparameter used during the FFT process to determine the high and low-frequency information? If so, how was the threshold for decoupling these frequencies determined, and how might it influence the experimental results?
(2) How does the model perform over a longer period of training, and are there any signs of degradation or improvement in performance? Can the loss optimization curve and loss landscape be provided?
(3) The content task and structure task are measured in a decoupled manner. Are these tasks entirely independent, and theoretically, could the low-frequency information be used as the query set to predict within the raw image support set, and vice versa for high-frequency information? And so on.
(4) Besides the EMA (Exponential Moving Average) update method, what other update mechanisms have been considered or could be applicable to the model's training process?
(5) There seems to be a limited ablation study regarding the choice of loss functions. Why was MSE chosen for the Feature Reconstruction Loss, and could cosine similarity loss be a viable alternative? Similarly, why was KL divergence chosen for the Prediction Loss, and was this decision based on empirical results or theoretical analysis?
(6)The experiments were conducted using ResNet-10. Could the authors provide insights into whether the proposed method could be applied to larger models such as ResNet-101? Additionally, could the method be beneficial when applied to models like CLIP from OpenAI or the DINO model, and are there any explorations in this direction?
(7) Some of the experimental content in the supplementary materials could be incorporated into the main text, such as the sections on 'low' and 'high' in Table 4. This is just a suggestion.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have transparently addressed the limitations of their method, particularly its performance on the Chest dataset, and have responsibly considered the broader societal impacts, confirming no negative social effects. They also acknowledge the increased training time associated with their approach. Proactively, they suggest future improvements and responsibly guide how their work can contribute to the field's progression, aligning well with the checklist for addressing limitations and societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer mWuv :
### Weaknesses (1)
### Response :
We present the computational costs during the training and inference phases in Table 2 and 3 (Due to character limitations, please see "To Reviewer SF8W".), respectively. We can draw the following observations.
1) During the training, the computational overhead of our method mainly comes from the backbone and image decomposition. As shown in Table 2 (please see "To Reviewer SF8W"), the backbone and the feature reconstruction network are lightweight, which to some extent mitigates the computational overhead.
2) During inference, our method does not introduce additional computational overhead, resulting in efficient inference and good generalization.
### Weaknesses (2) & Questions (5) & Questions (6)
### Response 1: Using Resnet-101 and ViT as backbones.
We validate our method with Resnet-101 and ViT as backbones, as shown in Table 6 (Please see Table 6 in the global rebuttal). It can be seen that our method effectively scales to different backbones and demonstrates certain performance advantages.
### Response 2: Verifying cosine loss.
We validate cosine similarity as an alternative to MSE, as shown in Table 7 (Please see Table 7 in the global rebuttal). It can be observed that MSE and cosine similarity yield nearly equivalent performance.
### Response 3: About KL divergence.
In our alignment prior, we use the predictions from the main branch as an anchor, and expect the high/low-frequency predictions to align with this anchor. To achieve this, we use KL divergence as the metric. The reasons are as follows. First, since the anchor is dynamically changing during training, meaning the entropy of the anchor is not constant, cross-entropy is not suitable. Second, our alignment loss is asymmetric, so JS divergence is also not applicable. In addition, Wasserstein-1 distance could serve as an alternative to KL divergence. However, it requires computing the optimal transport between different prediction distributions, which involves quadratic programming and results in higher computational complexity.
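A minimal numpy sketch of this asymmetric alignment (function names are ours; in the real training code the anchor would additionally be detached from the gradient):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_alignment(anchor_logits, branch_logits):
    """KL(p_anchor || p_branch): aligns a high/low-frequency branch's
    prediction with the main-branch anchor; asymmetric by construction."""
    p = softmax(anchor_logits)   # dynamically changing anchor
    q = softmax(branch_logits)   # frequency-branch prediction
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())
```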
### Weaknesses (3)
### Response :
Thanks for the suggestions. In this work, we resort to an image decomposition prior that has been proven to be shared across different images regardless of their domains, i.e., each image can be decomposed into complementary low-frequency content details and high-frequency robust structural characteristics. More importantly, we specifically establish a feature reconstruction prior and a prediction consistency prior to separately encourage consistency. These collectively guide the network's meta-learning process with the aim of learning cross-domain generalizable embeddings. We will supplement the manuscript with more theoretical analysis.
### Questions (1)
### Response :
We validate the hyper-parameter in the FFT (i.e., the radius ratio), which determines the boundary between high and low frequencies. The results are shown in Table 8 (please see Table 8 in the global rebuttal). It can be observed that this hyper-parameter does not significantly affect the overall performance. We set it to 0.5 in all cases.
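A minimal sketch of such a radial-mask decomposition (our illustrative code; the exact masking in the paper may differ): the radius ratio draws the boundary between the low-frequency disk and the high-frequency remainder, and the two parts sum back to the original image.

```python
import numpy as np

def fft_decompose(img, radius_ratio=0.5):
    """Split a grayscale image into complementary low- and high-frequency
    components using a centred radial mask in the shifted Fourier spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)      # distance from spectrum centre
    mask = r <= radius_ratio * min(h, w) / 2  # low-frequency disk
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    high = img - low                          # complementary high-frequency part
    return low, high
```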
### Questions (2)
### Response 1 : About training epoch.
We validate the impact of the number of training epochs on performance. The results are shown in Table 9 (please see Table 9 in the global rebuttal). It can be observed that as the number of epochs increases, performance first rises and then declines. However, since we cannot access the test data during the training phase, we can only judge based on the training loss curve (please see Fig. 2 in Response.pdf). We observed that the loss tends to converge around 50 epochs, so we set the number of epochs to 50 in all cases.
### Response 2 : About loss curve and landscape.
We provide the loss optimization curve and loss landscape in Fig. 2. As can be seen, the baseline tends to over-fit at relatively early epochs. In contrast, with the assistance of prior regularization, our method mitigates over-fitting to some extent. For the loss landscape, we follow the implementation in [1]: first, we randomly perturb the model trained on the source domain along 2000 directions; second, we perform inference on the target domain with each perturbed model and record the loss value; finally, we visualize the loss landscape based on the recorded loss values and directions. It can be seen that, compared to the baseline, the model trained with the proposed method is more robust to unknown perturbations: its loss values remain within a certain range, whereas the baseline's loss values tend to diverge. This demonstrates the good generalization ability of our method.
[1] Visualizing the loss landscape of neural nets. NIPS, 2018.
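The three-step procedure above can be sketched as follows. This is a simplified illustration (it omits the filter normalization of [1], and the toy quadratic loss stands in for target-domain inference loss):

```python
import random

def landscape_losses(params, loss_fn, n_dirs=2000, scale=0.5, seed=0):
    """Perturb trained parameters along random unit directions and record the
    loss of each perturbed model on a stand-in 'target-domain' loss."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_dirs):
        d = [rng.gauss(0, 1) for _ in params]          # random direction
        norm = sum(x * x for x in d) ** 0.5
        perturbed = [p + scale * x / norm for p, x in zip(params, d)]
        losses.append(loss_fn(perturbed))              # record target-domain loss
    return losses

# Toy quadratic loss standing in for inference loss on the target domain.
quadratic = lambda p: sum(x * x for x in p)
losses = landscape_losses([0.0, 0.0, 0.0], quadratic, n_dirs=200)
```

The spread of the recorded losses over directions is what the landscape visualization depicts: a model whose losses stay within a narrow range is more robust to perturbations.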
### Questions (3)
### Response :
We designed a variant experiment that uses the raw support set to predict the high-frequency and low-frequency queries. The results are shown in Table 10 (please see Table 10 in the global rebuttal). This variant also achieves significant improvements over the baseline, though its performance is slightly lower than that of our original approach. This may be because independently using high-frequency or low-frequency support images to predict high-frequency or low-frequency query images exploits the diverse frequency information more explicitly.
### Questions (4)
### Response :
We supplement some experiments to verify alternative mechanisms; the results are shown in Table 11 (please see the global rebuttal). We compared three update mechanisms. Case 1: parameters are shared among the three branches. Case 2: parameters are not shared among the three branches, which are jointly trained in an end-to-end manner. Case 3 (Ours): the auxiliary branches are updated with an EMA strategy. As can be seen, our method achieves the best performance.
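A minimal sketch of such an EMA update (parameter lists and the momentum value are illustrative, not taken from the paper):

```python
def ema_update(target_params, source_params, momentum=0.99):
    """One EMA step: target <- momentum * target + (1 - momentum) * source.
    The auxiliary (high/low-frequency) branches can be updated this way from
    the main branch, instead of sharing parameters or joint training."""
    return [momentum * t + (1 - momentum) * s
            for t, s in zip(target_params, source_params)]

target = [0.0, 0.0]   # auxiliary-branch parameters
source = [1.0, 1.0]   # main-branch parameters
for _ in range(300):
    target = ema_update(target, source, momentum=0.99)
# target drifts smoothly toward the source parameters
```

The EMA branch changes slowly, which gives a more stable teacher signal than fully shared or fully independent parameters.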
### Questions (7)
### Response :
Thanks for the suggestions. We will add those results to the main text.

---

Summary: The paper introduces a novel framework called Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning, which aims to improve meta-learning's generalization by decomposing images into high- and low-frequency components. This method leverages these components to guide the feature embedding network, enhancing category prediction consistency and feature reconstruction. The framework achieves state-of-the-art results on various cross-domain few-shot learning benchmarks, demonstrating its effectiveness and efficiency.
Strengths: 1. The paper presents a novel approach by introducing the concept of exploiting cross-domain invariant frequency priors for few-shot learning. The idea of using cross-domain invariant frequency priors is interesting.
2. The proposed framework is rigorously evaluated on multiple cross-domain few-shot learning benchmarks, demonstrating its effectiveness.
3. Good writing and clear presentation.
Weaknesses: 1. The method relies heavily on the strong prior assumption that using low and high-frequency components can effectively mimic the distribution shift encountered during testing. However, this assumption may not hold true when the test and training datasets are entirely unrelated, as observed in the EuroSAT dataset where the performance of the method is not as impressive. The authors need to clarify and justify this assumption.
2. The paper does not provide an evaluation of the method's performance on within-domain few-shot learning tasks. Understanding how the proposed framework performs in a scenario where the training and testing data come from the same domain could provide a more comprehensive view of its robustness and generalizability.
3. The computational complexity of the proposed framework during the training phase is not thoroughly discussed. While the paper claims no additional inference cost, the added steps of image decomposition and feature reconstruction during training could introduce significant computational overhead. Detailed analysis and discussion on the computational requirements and efficiency would be beneficial.
4. The impact of the choice of image decomposition method (e.g., Fast Fourier Transform) on the overall performance of the framework is not thoroughly explored. Alternative decomposition techniques may yield better results or be more computationally efficient. A comparison of different decomposition methods and their influence on the performance and efficiency of the framework would provide valuable insights and strengthen the paper's contributions.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: ## To Reviewer SF8W:
### Weaknesses 1
### Response :
We compared the FFT decomposition results of natural images and EuroSAT images (please see Fig. 1 in Response.pdf). We observed that FFT decomposition of natural images yields clear low-frequency and high-frequency information, whereas the high-frequency part of EuroSAT images is almost entirely noise. There may be two reasons for this. First, during the imaging process, owing to the high spatial resolution of remote sensing scene images, the camera often compresses high-frequency information to improve imaging efficiency. Second, remote sensing scene images mainly contain low-frequency texture content and lack clear high-frequency structural information.
We will further clarify this weakness in the manuscript.
### Weaknesses 2
### Response :
Following the suggestion, we have supplemented experiments for the proposed method in the in-domain setting. All in-domain experiments follow standard settings, with results shown in Table 1. It can be seen that the proposed method achieves good results with both ResNet-10 and ViT-S backbones.
We will supplement these experimental results into the manuscript.
**Table 1 : Performance on within-domain few-shot learning tasks under 5-way 5-shot setting.**
| Method | Backbone | mini-ImageNet | CIFAR-FS |
|---|---|---|---|
| ProtoNet[1] | ResNet-12 | 80.53% | 83.5%±0.5% |
| MetaOptNet[2] | ResNet-12 | 78.63%±0.46% | 84.3%±0.5% |
| SetFeat[3] | ResNet-12 | 82.71%±0.46% | - |
| DiffKendall[4] | ResNet-12 | 80.79%±0.31% | - |
| MetaDiff[5] | ResNet-12 | 81.21%±0.56% | - |
| **Ours** | ResNet-10 | 83.30%±0.57% | 86.86%±0.63% |
| Method | Backbone | mini-ImageNet | CIFAR-FS |
|---|---|---|---|
| P>M>F[6] | ViT-S (DINO pretrain) | 98.0% | 92.5% |
| **Ours** | ViT-S (DINO pretrain) | 98.78%±0.12% | 93.75%±0.10% |
[1] Prototypical networks for few-shot learning, NIPS 2017.
[2] Meta-learning with differentiable convex optimization, CVPR 2019.
[3] Matching Feature Sets for Few-Shot Image Classification, CVPR 2022.
[4] DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation, NIPS 2023.
[5] MetaDiff: Meta-Learning with Conditional Diffusion for Few-Shot Learning, AAAI 2024.
[6] Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference, CVPR 2022.
### Weaknesses 3
### Response :
Following the suggestion, we present the computational costs of the proposed method during the training and inference phases in Tables 2 and 3, respectively. We can draw the following conclusions.
1) The backbone network and the feature reconstruction network in our method are lightweight, which to some extent mitigates the computational overhead.
2) In addition, a portion of the computational overhead of our method comes from the FFT. In future work, we will explore the use of efficient, learnable image decomposition techniques, employing differentiable convolutional kernels to capture information from different frequency bands of the images.
3) During the inference phase, the proposed method does not introduce additional computational overhead, resulting in efficient inference and good generalization.
We will supplement these analyses and discussion into the manuscript.
**Table 2 : Parameters (M), FLOPs (G), and iteration time (s) during the training phase. We take the 5-way 1-shot 15-query task as an example to compute FLOPs and training iteration time. N denotes the number of pixels in each image.**
| Items | Decomposition | Backbone | Reconstruction |
|---|---|---|---|
| Parameters | 0 | 4.9057 | 0.1313 |
| FLOPs | O(NlogN) | 71.6513 | 0.0105 |
| Iteration time | 0.6912 | 1.3859 | 0.1039 |
**Table 3 : Performance and efficiency during target domain inference phase under 5-way 1-shot (5-way 5-shot ) setting.**
| Method | Inference time | Average performance |
|---|---|---|
| Baseline | 0.06 (0.07) | 51.90 (71.77) |
| Ours | 0.06 (0.07) | 54.16 (74.87) |
### Weaknesses 4
### Response :
Following the suggestion, we compared the performance and efficiency under different decomposition methods. The results are shown in Tables 4 and 5. It can be seen that, compared to the baseline methods, the proposed method achieves better results across various decomposition methods. Additionally, using wavelet-based decomposition results in higher computational efficiency.
When we prioritize efficiency, wavelet decomposition is the optimal choice. However, compared to FFT, the performance of wavelet decomposition is slightly lower. Therefore, overall, FFT remains a better choice.
We will supplement these analyses and discussion into the manuscript.
**Table 4 : Performance on different image decomposition method under 5-way 1-shot (5-way 5-shot ) setting.**
| Method | CUB | Places | Plantae | CropDisease | Ave. |
|---|---|---|---|---|---|
| Baseline | 47.05 (67.99) | 51.09 (71.74) | 39.26 (57.82) | 70.22 (89.54) | 51.90 (71.77) |
| Haar-wavelet | 49.12 (71.12) | 52.63 (73.92) | 40.56 (60.46) | 70.82 (90.45) | 53.28 (73.98) |
| DB-wavelet | 49.52 (71.59) | 52.74 (73.65) | 40.78 (60.60) | 70.87 (90.54) | 53.47 (74.09) |
| FFT | 51.55 (73.61) | 52.06 (73.78) | 41.55 (61.39) | 71.47 (90.68) | 54.16 (74.87) |
**Table 5 : Decomposition efficiency (sec).**
| Method | One image | All images in 5-way 1-shot 15-query task |
|---|---|---|
| Haar-wavelet | 0.0015 | 0.1200 |
| DB4-wavelet | 0.0016 | 0.1280 |
| FFT | 0.0086 | 0.6912 |

---

Rebuttal 1:
Rebuttal: Author Response for "Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning"
We would like to express our gratitude to the AC and the reviewers for their valuable comments and suggestions. Over the past week, we have responded to all comments mentioned by the reviewers. The response includes:
1) evaluating the computational overhead during the training phase.
2) clarifying the prior assumptions and theoretical advantages of the proposed method.
3) supplementing the experiments in the in-domain setting.
4) validating different image decomposition methods, hyper-parameters in FFT, training epochs, the loss optimization curve and loss landscape.
5) supplementing experiments with different backbone networks, such as Resnet-101 and ViT.
6) validating different update strategies, the choice of loss functions, and variants using the original images for prediction.
7) verifying the performance of the proposed method with further fine-tuning, etc.
For detailed responses, please refer to each reviewer's rebuttal window. Additionally, since we cannot upload figures in the reviewer's rebuttal window, we have included all figures in a separate "Response.pdf" within this global rebuttal window. Besides, due to character limitations in the reviewer's window, we have placed some experimental result tables in this global rebuttal window.
**Table 6 : Performance of the proposed method on different backbone networks under 5-way setting. The results of ProtoNet[1] are derived from our reproduction, and the results of P>M>F[2] were copied from its paper.**
|Method|Backbone|Setting|CUB|Cars|Places|Plantae|Chest|ISIC|EuroSAT|CropDisease|Ave.|
|---|---|---|---|---|---|---|---|---|---|---|---|
| ProtoNet[1] | Resnet101 | 1-shot | 52.14 | 35.08 | 52.40 | 39.01 | 22.01 | 33.34 | 62.30 | 68.34 | 45.57 |
| **Ours** | Resnet101 | 1-shot | 53.98 | 37.95 | 52.06 | 42.44 | 22.32 | 36.10 | 62.54 | 71.74 | 47.39 |
| ProtoNet[1] | Resnet101 | 5-shot | 75.05 | 51.48 | 73.29 | 58.41 | 26.18 | 47.96 | 78.09 | 88.96 | 62.42 |
| **Ours** | Resnet101 | 5-shot | 76.87 | 56.80 | 74.61 | 61.64 | 26.44 | 50.20 | 80.17 | 90.57 | 64.66 |
|Method|Backbone|Setting|CUB|Cars|Places|Plantae|Chest|ISIC|EuroSAT|CropDisease|Ave.|
|---|---|---|---|---|---|---|---|---|---|---|---|
| P>M>F[2] | ViT-S (DINO pretrain) | 1-shot | 78.13 | 37.24 | 71.11 | 53.60 | 21.73 | 30.36 | 70.74 | 80.79 | 55.46 |
| **Ours** | ViT-S (DINO pretrain) | 1-shot | 85.07 | 42.82 | 73.71 | 56.6 | 23.32 | 35.13 | 72.03 | 81.82 | 58.81 |
| P>M>F[2] | ViT-S (DINO pretrain) | 5-shot | - | - | - | - | 27.27 | 50.12 | 85.98 | 92.96 | - |
| **Ours** | ViT-S (DINO pretrain) | 5-shot | 97.5 | 74.6 | 89.24 | 81.63 | 26.31 | 51.72 | 89.95 | 95.93 | 75.86 |
[1] Prototypical networks for few-shot learning, NIPS 2017.
[2] Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference, CVPR 2022.
**Table 7 : Verifying cosine loss under 5-way 1-shot (5-shot) setting.**
| Method | CUB | Places | Plantae | CropDisease | Ave. |
|---|---|---|---|---|---|
| Cosine similarity loss | 51.50 (73.54) | 52.08 (73.78) | 41.54 (61.40) | 71.49 (90.66) | 54.15 (74.84) |
| MSE loss | 51.55 (73.61 ) | 52.06 (73.78) | 41.55 (61.39) | 71.47 (90.68)| 54.16 (74.87) |
**Table 8 : Verifying the hyper-parameters in FFT under 5-way 1-shot (5-shot ) setting.**
| Radius_ratio | CUB | Places | Plantae | CropDisease |
|---|---|---|---|---|
| 0.1 | 51.14 (73.25) | 51.95 (73.76) | 41.40 (61.33) | 71.68 (90.89) |
| 0.3 | 51.27 (73.33) | 52.18 (73.81) | 41.47 (61.44) | 71.67 (90.83) |
| **0.5** | 51.55 (73.61) | 52.06 (73.78) | 41.55 (61.39) | 71.47 (90.68) |
| 0.7 | 51.30 (73.35) | 51.89 (73.79) | 41.33 (61.28) | 71.37 (90.61) |
| 0.9 | 51.23 (73.10) | 52.00 (73.85) | 41.30 (61.27) | 71.54 (90.63) |
**Table 9 : Verifying the training epoch under 5-way 1-shot (5-shot ) setting.**
| Epoch | CUB | Places | Plantae | CropDisease |
|---|---|---|---|---|
| 40 | 51.12 (72.88) | 52.53 (74.04) | 41.04 (60.71) | 71.15 (90.56) |
| **50** | 51.55 (73.61) | 52.06 (73.78) | 41.55 (61.39) | 71.47 (90.68) |
| 60 | 51.00 (73.45) | 51.60 (73.59) | 41.39 (61.06) | 71.17 (90.33) |
| 80 | 50.69 (72.80) | 51.87 (73.34) | 40.88 (60.87) | 70.75 (90.27) |
| 100 | 50.02 (72.12) | 51.44 (73.15) | 40.91 (60.98) | 70.36 (90.19) |
**Table 10 : Verifying the variant that predict within the raw image support set under 5-way 1-shot (5-shot ) setting.**
| Method | CUB | Places | Plantae | CropDisease | Ave. |
|---|---|---|---|---|---|
| Baseline | 47.05 (67.99) | 51.09 (71.74) | 39.26 (57.82) | 70.22 (89.54) | 51.90 (71.77) |
| Variant | 51.32 (73.18) | 52.12 (73.87) | 41.61 (61.10) | 70.91 (90.19) | 53.99 (74.58) |
| Ours | 51.55 (73.61 ) | 52.06 (73.78) | 41.55 (61.39) | 71.47 (90.68) | 54.16 (74.87) |
**Table 11 : Comparing alternative update mechanisms under 5-way 1-shot (5-shot) setting.**
| Method | CUB | Places | Plantae | CropDisease | Ave. |
|---|---|---|---|---|---|
| Baseline | 47.05 (67.99) | 51.09 (71.74) | 39.26 (57.82) | 70.22 (89.54) | 51.90 (71.77) |
| Case 1 | 50.76 (72.56) | 51.83 (72.76) | 40.73 (60.46) | 69.47 (89.56) | 53.19 (73.83) |
| Case 2 | 50.93 (72.59) | 52.36 (73.84) | 41.00 (60.74) | 70.65 (90.40) | 53.73 (74.39) |
| Case 3 (Ours) | 51.55 (73.61) | 52.06 (73.78) | 41.55 (61.39) | 71.47 (90.68) | 54.16 (74.87) |
Pdf: /pdf/9dab040c4dc7b7e713809058dab2585c8293a6d2.pdf

Source: NeurIPS 2024 submissions (Hugging Face)
---

Title: MambaLRP: Explaining Selective State Space Sequence Models

Paper Decision: Accept (poster)

Summary: The paper presents a method to correctly apply LRP to Mamba models. Through careful analysis, the authors demonstrate that applying LRP directly results in poor performance and propose modifications to recover the propagation rules. The obtained method outperforms the alternatives, grounded by theory, and the authors show several applications of their method, including identifying gender biases and measuring the long-range abilities of S6.
Strengths: 1) **Impact:** Mamba is an emerging architecture (~600+ citations, 10K+ stars on GitHub). Thus, providing an attribution method for these models is crucial. In this essence, correctly applying LRP is an important direction that facilitates the community in improving and understanding these models.
2) **Simplicity:** The method and modifications are very simple, with few hyper-parameters. While this might be seen as a lack of novelty, I view it as an advantage, making the method easy to use and adaptable to various applications, such as other variants of Mamba (Mamba2) and other domains (DNA, speech and more).
3) **Informative Ablation studies** and justification of decision choices are insightful. For example, Figure 3 and Tables 1, 2, 6, 9 provide comparisons with naive LRP and also present ablation studies that allow the reader to measure the contribution of each modification to the method.
4) Section 6 (use cases) shows that the method is **applicable** and allows the authors to explore the gender bias and long-range abilities of Mamba models and provide insightful analyses about Mamba models.
Weaknesses: 1) **The comparison with previous work should be improved:**
- 1.1) Metrics and Benchmarking: Can the authors highlight which results are reproduced by them and which results are taken from previous work? Moreover, as far as I understand, the method in [4] is the only method that developed an interpretability method for Mamba before (perhaps in parallel) to this work. If I understand correctly, although the two methods share and use the same pre-trained models, there is no overlap in the metrics. Am I correct? If so, is there a reason for this discrepancy? Can the authors compare results directly with previous work using previously proposed metrics to ensure the gap doesn’t arise from employing [4] incorrectly?
- 1.2) Informative Comparison: Additionally, I think the comparison with [4] can be more accurate. The method in [4] applies its method only to S6 layers (without gating and convs), while Mamba LRP is an end-to-end method (which is a strength of Mamba-LRP). However, it is still important to make some apple-to-apple comparisons. Can the authors check both methods on a model without conv and gating layers to determine which approach is better for providing an explanation to S6 layers? Alternatively, it seems that the issue with [4] is fixed in [Uni]. Would the authors be able to compare their method to [Uni]? Providing this comparison would be highly valuable to the community. (I understand that this is a very new paper, so I will not decrease my score if the results are less favorable than those of [Uni]).
2) **Insufficient empirical analysis:**
- 2.1) For the long-range experiments (Figure 6), it would be very insightful to compare the behaviour of Mamba to Transformers (Pythia pre-trained models can serve as the transformer baselines since they are trained on the same data). Additionally, analyzing the trend with larger models could yield valuable information on how increased model size enhances long-context capabilities in practice, perhaps the 7B models from [Sca] can be used (which exist in Hugging Face).
- 2.2)
> “Residual lack of conservation is due to the presence of biases in linear and convolution layers, which are typically non-attributable”
Can the authors empirically justify this claim? This can be easily validated by taking a pre-trained model, omitting the biases, fine-tuning it for several epochs, and then checking the conservation again.
3) **Novelty:** The novelty of the method is somewhat limited. One could argue that it merely involves a few applications of detach and the simple half-propagation rule (in addition to previously proposed contributions), which can be easily summarized in a few lines of code (as detailed in Algorithm 2). However, I believe this is not a significant drawback, particularly given the demand for such simple tools in the community. Additionally, the thorough evaluation, including insightful ablation studies, novel test case evaluations, and theoretical justification, is sufficiently robust.
[Uni] A Unified Implicit Attention Formulation for Gated-Linear Recurrent Sequence. Models. Zimerman et al.
[Sca] An Empirical Study of Mamba-based Language Models. Waleffe et al.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Mamba LRP uses "Detach" on the SSM recurrent matrices. Similarly, [4] uses an attention matrix determined by A, B, and C parameters. It seems that both methods ignore the influence of the input on the system matrices (which is the core of the selection mechanism of Mamba). Am I right? Could addressing this issue provide a way to improve both methods? I would be glad to hear what the authors think about it.
2. Minor: Traditionally in the SSM domain, Delta denotes the step size. I suggest the authors replace \Delta with \delta when discussing the differences between the two scores.
3. In the Needle-in-a-haystack experiment (Figure 7), is there a reason not to increase the context length? It seems that the most interesting part of the figure is missing (which could show if there are edge cases, for example, regimes where the model succeeds in finding the needle, but Mamba-LRP fails).
4. Minor: Perhaps a relevant work that is missing is “Does Transformer Interpretability Transfer to RNNs?” by Paulo et al.
5. There is a standard trend in the SSM literature to omit D (and treat it as a parameter-based skip connection). Is it used in Mamba LRP, or is it ignored (like other biases, which are typically non-attributable)? It would be better if it were written explicitly in the paper.
6. I wonder if the authors can explore the potential limitations or failure cases of the proposed Mamba-LRP. Such information can help the community improve the method in the future. Are there cases where [4] or naive LRP might be better than Mamba-LRP?
7. Half propagation: While I'm not an expert in LRP, I suspect there are more effective methods to manage the gating mechanism. For instance, instead of normalizing the scores by averaging (0.5(x + y)), it may be better to use a weighted approach such as $$(1 - a) \cdot x + a\cdot y$$
where the value of a is determined by the actual norms of x and y, with different a values for each channel. Is there something I'm missing? Can this method improve the conservation properties?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: From my perspective, the authors address most of the limitations, except those pointed out in the weaknesses section and question 6 (failure cases).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: Thanks for the valuable comments. We address them below and add more discussions as an official comment.
1.1: We use the official code of [1] to produce the results of AttnRoll and G$\times$ AttnRoll (Mamba-Attr). We evaluated MambaLRP's performance against these approaches using flipping and insertion metrics that are well-established for analyzing faithfulness [4,6] and are directly related to those used by [1]. In our metrics, $A^F_{MoRF}$/$A^F_{LeRF}$ corresponds to what [1] calls positive/negative perturbation. [4] suggests combining both metrics to derive a more resilient metric, resulting in $\Delta A^F = A^F_{LeRF} - A^F_{MoRF}$, used in [4,5]. The only difference is that [1] tracks accuracy in positive/negative perturbations, while we track changes in output logit, which is a standard practice in the XAI community [4,5,6]. Both metrics highly overlap as shown in Tab.B of the attached PDF. When using faithfulness metrics, various factors can impact the result, e.g. how the input is masked. To ensure a fair and consistent analysis across all models, tasks and XAI methods, we use a unified evaluation metric, as done in [3,5,6], and have not directly used the numbers reported in [1].
As requested, we follow the metric of [1]: we mask pixels from least-to-most relevant and vice versa (10%–90%), track the top-1 accuracy, and calculate the AUC of the prediction on the ImageNet validation set for Vim-S (Tab.C in the attached PDF). As shown, MambaLRP outperforms [1].
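To illustrate the $\Delta A^F$ faithfulness metric described above on a toy additive model (all names and numbers here are purely illustrative, not results from the paper):

```python
def flip_curve(scores, relevance, most_relevant_first):
    """Output of a toy additive model as features are progressively flipped to
    a zero baseline in relevance order; returns the output after each step."""
    order = sorted(range(len(scores)), key=lambda i: relevance[i],
                   reverse=most_relevant_first)
    remaining = list(scores)
    outputs = []
    for idx in order:
        remaining[idx] = 0.0            # "flip" the feature to the baseline
        outputs.append(sum(remaining))  # track the output logit
    return outputs

def auc(values):
    return sum(values) / len(values)    # simple mean as an AUC stand-in

scores = [3.0, 1.0, 2.0, 0.5]
faithful_rel = list(scores)             # relevance matches true contribution
a_morf = auc(flip_curve(scores, faithful_rel, most_relevant_first=True))
a_lerf = auc(flip_curve(scores, faithful_rel, most_relevant_first=False))
delta = a_lerf - a_morf                 # larger is better (more faithful)
```

A faithful attribution makes the output collapse quickly under most-relevant-first flipping (low $A^F_{MoRF}$) and slowly under least-relevant-first flipping (high $A^F_{LeRF}$), so a large $\Delta A^F$ indicates faithfulness; tracking accuracy instead of the logit yields the positive/negative perturbation variant of [1].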
1.2: [1] argues that Mamba uses implicit attention mechanisms and provides XAI tools, which aggregate attention across layers and may not capture the impact of certain layers. In contrast, MambaLRP, as you noted, considers the full model structure, highlighting its strength. Moreover, [1] introduced Mamba-Attr, which, as stated in the paper, "exploits the gradients of both the S6 mixer and the gating mechanism". In our evaluations, we compared MambaLRP with G$\times$AttnRoll (Mamba-Attr). As the methods of [1] are designed to explain the predictions of Mamba models, it is both valid and fair to compare MambaLRP with them in terms of the faithfulness of the explanations, as done in [3,5]. Despite [2] being released after the NeurIPS deadline, we compared MambaLRP's performance with the numbers reported in [2] for Vim-S in Tab.C of the PDF. This shows that MambaLRP outperforms the recent approach of [2]. We have added [2] to our related works.
2.1: As requested, we did a direct comparison to Transformers for our LRD use case (Tab.A of the attached PDF). Many of the widely-used Transformers, e.g. GPTNeoX and Pythia, unfortunately do not allow inputs longer than 2048 tokens or do not generate sensible text from long inputs. Instead, we used Llama-2 and Llama-3. We found that Llama-3 uses information from more intermediate mid-range dependencies than Mamba, though both favor tokens close to the end of the context. Given Llama-3's larger size (8B) compared to Mamba (130M) and their different training settings, the analysis supports that Mamba can effectively use long-range information. Please refer to our answer to reviewer 4ExW for details.
2.2: Please find the results in Fig.A of PDF; conservation is fully preserved for biases set to 0.
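The conservation argument can be checked on a single linear layer: with LRP's proportional redistribution, input relevances sum exactly to the output relevance when biases are zero, and some relevance is absorbed by nonzero biases. The following is a generic sketch under our assumptions (an $\epsilon$-stabilized LRP rule), not the paper's code:

```python
def lrp_linear(x, W, b, R_out, eps=1e-9):
    """LRP for a linear layer z_j = sum_i x_i * W[i][j] + b[j]: redistribute
    each output relevance R_out[j] to the inputs in proportion to their
    contributions x_i * W[i][j] / z_j (with a small stabilizer eps)."""
    n_in, n_out = len(x), len(W[0])
    z = [sum(x[i] * W[i][j] for i in range(n_in)) + b[j] for j in range(n_out)]
    R_in = [0.0] * n_in
    for j in range(n_out):
        denom = z[j] + (eps if z[j] >= 0 else -eps)
        for i in range(n_in):
            R_in[i] += x[i] * W[i][j] / denom * R_out[j]
    return R_in

x = [1.0, 2.0, -0.5]
W = [[0.5, -1.0], [0.25, 0.5], [1.0, 2.0]]
R_out = [1.0, 0.5]
R_zero_bias = lrp_linear(x, W, [0.0, 0.0], R_out)    # conserved
R_with_bias = lrp_linear(x, W, [0.5, -0.25], R_out)  # bias absorbs relevance
```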
3: As many XAI methods, including LRP, are not model-agnostic, tailored approaches must be developed for emerging classes of DNNs such as Selective State Space Sequence Models (Mamba models). Mamba models include previously unstudied components, making existing LRP rules unreliable for these models, as shown in Tab.1 of the paper. Since the explainability of Mamba models using LRP has not been addressed in the literature, we propose novel LRP rules specifically for SSM components, SiLU, and multiplicative gates. These rules are not chosen heuristically but are the result of our thorough analysis of how relevance propagates through each component and when conservation breaks. Please refer to Appendix A and B for our theoretical analysis and derived LRP rules. As ease of implementation is desirable for XAI methods, we provided straightforward implementations that bypass the need for the implementation of complex LRP rules. These highlight the strengths of our approach: 1) theoretical soundness, 2) more faithful explanations, and 3) ease of implementation. Our contributions are the theoretical analysis of relevance propagation in different Mamba components, proposing new LRP rules for those that violate conservation, providing straightforward implementations, thorough performance evaluation against other XAI approaches, and insightful use cases, showing the value of MambaLRP for other lines of XAI research.
**Questions**
1: In the forward pass, $A$, $B$, and $C$ are calculated from the input and are treated as constants in the backward pass. By detaching them, we do not consider the gradient flow through them; however, their influence is still accounted for, since they weight the input in the backward pass.
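As an illustration of this detach-style propagation, consider one scalar recurrence step $h_t = a_t h_{t-1} + b_t x_t$, where $a_t$ and $b_t$ come from the input in the forward pass but are treated as constants when relevance is redistributed. This is a simplified scalar sketch, not the actual MambaLRP implementation:

```python
def propagate_step(h_prev, x_t, a_t, b_t, R_h, eps=1e-9):
    """One SSM recurrence step h_t = a_t * h_{t-1} + b_t * x_t.
    a_t and b_t are computed from the input in the forward pass but treated
    as constants ('detached') during relevance redistribution: R_h is split
    between h_{t-1} and x_t in proportion to their weighted contributions."""
    h_t = a_t * h_prev + b_t * x_t
    denom = h_t + (eps if h_t >= 0 else -eps)
    R_prev = a_t * h_prev / denom * R_h   # relevance flowing to the state
    R_x = b_t * x_t / denom * R_h         # relevance flowing to the input
    return h_t, R_prev, R_x

h_t, R_prev, R_x = propagate_step(h_prev=2.0, x_t=1.0, a_t=0.5, b_t=3.0, R_h=1.0)
```

Even though no relevance flows through $a_t$ and $b_t$ themselves, their values still determine how $R_h$ is split, so the selection mechanism is reflected in the explanation.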
3: We initially limited the context length to 2048 to match the model's training and focus on retrieval accuracy limitations and MambaLRP solutions. At your request, we extended the experiment to 4k context length, with results in the PDF. Performance drops beyond 2560 and we found no cases where the model retrieved correctly but MambaLRP failed to explain reliably.
5: $D$ is used in MambaLRP. As it does not violate conservation, no modifications are necessary.
6: We investigated edge cases using Mamba-1.4B trained on Med-BIOS and found that MambaLRP outperforms Mamba-Attr [1] in about 95% of instances. In the most extreme edge case, both methods mostly select the same relevant tokens, but their score distributions differ slightly (see Fig.B of PDF).
7: In our work, $\alpha$ is not treated as a hyperparameter but is set to 0.5 based on theoretical considerations. Specifically, in Equation (12) of Appendix A.3, we show that $\mathcal{R}(x) = 2\mathcal{R}(y)$, which leads to the choice of 0.5.
We will include your suggestions in the paper.
---
Rebuttal Comment 1.1:
Title: Response to Reviewer dNt6: Further Clarification
Comment: Due to space constraints, we had to keep our earlier responses brief. We would now like to take this opportunity to expand on some of our answers to ensure greater clarity and convenience for you.
> Further elaboration on the experimental results of the long-range dependencies (LRD) experiment
Inspired by your comments, we conducted comparisons using two state-of-the-art Transformers: Llama-2 and Llama-3.
**Setup:**
We employed LRP propagation rules of [3] to extract explanations for the Llama models. As in the Mamba experiment, we generated 10 additional tokens from HotPotQA's input and analyzed the prediction of the generated token at each step.
**Results:**
Our findings are summarized in Table A of the attached PDF. Notably, Llama-2, which was trained with a context length of 4096 tokens, begins to produce less sensible text when given inputs exceeding this length. This behavior aligns with observations in recent studies [7,8].
In contrast, Llama-3 and Mamba are capable of generating sensible text even with longer context lengths, as shown in Table A of the PDF. Although Llama-2’s histogram suggests it uses the full context window and appears to identify more relevant long-range input tokens than Llama-3 and Mamba, closer inspection reveals that this long-range capability is based on unspecific token information. The most relevant tokens for Llama-2 are often non-semantic, such as the newline token `<0x0A>` and the beginning-of-sentence token `<s>`, which often appear at the beginning of the context paragraph. This reliance on non-semantic tokens explains the long-range information retrieval seen in the histogram, particularly in the context size exceeding 4K tokens (the shaded areas in the histogram of Llama-2).
In contrast, Llama-3 and Mamba primarily attribute relevance to semantically meaningful tokens, particularly those near the end of the input context window. Llama-3’s histogram indicates it uses more intermediate mid-range dependencies compared to Mamba. Given Llama-3’s significantly larger size (8B) compared to Mamba (130M), our initial analysis shows that Mamba effectively uses long-range information. We also find that this ability is not exclusive to SSMs but can also be achieved by larger Transformers (e.g. Llama-3).
In this initial investigation, MambaLRP has proven valuable in comparing the long-range context capabilities across models. We hope our work will facilitate further comparative studies in this area. We will include a summary of this comparison in our final paper.
----
**Questions**
3. > Is there a reason not to increase the context length in needle-in-a-haystack of Figure 7, and are there any failure cases?
We did not include context lengths beyond 2048 in the paper, as the model was trained using a context length of 2048 and the study was not designed to assess extrapolation beyond this limit. Our study highlights some limitations of the retrieval accuracy metric and introduces an explanation-aware measure. This new measure ensures that the retrieved information is not only correct but also correct for the right reasons. As requested, we extended the experiment to context lengths beyond 2048, up to 4096, and included the results in Figure C of the PDF. As can be seen, the model's retrieval performance begins to drop as the context length goes beyond 2560.
In our analysis, we did not encounter any edge cases where the model successfully retrieved the needle but MambaLRP failed to provide the correct explanations. However, we have shown cases where the model found the needle based on incorrect features, which MambaLRP was successfully able to detect. Please refer to Appendix C.7 (Figure 13).
4. > Including "Does Transformer Interpretability Transfer to RNNs?"
Thanks for your suggestion. We will add it to our related works.
5. > Is $D$ used in MambaLRP?
Skip connections are not ignored in MambaLRP. As they do not violate the conservation property, no modifications are necessary. This is why we did not mention them in the paper. However, based on your suggestion, we will include it in the paper.
If you have any further comments or questions, we are happy to address them during the discussion period.
---
Rebuttal 2:
Comment: Thank you for your valuable feedback and suggestions. We are happy to see that you have increased your score. We have included the new experiments in our paper.
> Regarding making the code accessible
We understand the importance of providing accessible and clean code to the community. Therefore, we are preparing a user-friendly GitHub repository, and we will include the link in the camera-ready version. We further plan to extend MambaLRP and our code to cover Mamba variants, including the recent Mamba2 model.
Thank you again for your support of our work, and we look forward to seeing how open access to reliable XAI for Mamba can lead to domain insights and model improvements in our community. | Summary: ### **Post-rebuttal update**
Given the authors' additional experiments and the changes to be made to the manuscript, I am raising my original score from a 5 to a 6.
### **Original review**
The authors tackle the problem of explainability in Mamba SSMs, which have been recently proposed and widely adopted. Towards this end, they leverage layer-wise relevance propagation (LRP) and derive the necessary procedures/equations to enable LRP in Mamba models, similar to those previously proposed in the Conservative Propagation paper for Transformers (Ali et al., 2022). They benchmark their procedure on both Mamba language and vision models across a range of benchmarks and against other widely used explainability methods to demonstrate the effectiveness of their approach.
Strengths: **Originality** - Given the widespread adoption of Mamba SSMs since their recent debut, explainability of these models is a very important topic. While the specific detach approach to enable LRP in LLMs is not new, the adaptation to Mamba models is new and an excellent application of this methodology.
**Quality** - The proposed approach is a good addition to the growing list of Mamba works, and will make a good tool for practitioners/researchers to explain the decisions of Mamba models going forward. The extensive benchmarks and evaluation of both NLP and vision models and tasks are also solid contributions. However, the evaluation of Mamba LLMs was somewhat limited (see weaknesses for further discussion).
**Clarity** - For the most part, the paper is well written. However, important details regarding evaluation are either missing or could be better highlighted (discussed in weaknesses).
**Significance** - As previously mentioned, Mamba models continue to gain traction, particularly with the recent release of Mamba 2. Thus, this work has the opportunity for significant impact as the need to explain the decision-making processes behind Mamba models grows.
Weaknesses: ### Quality
As previously noted, there is significant room for improvement regarding the evaluation of Mamba LLMs. In particular, only the smallest and second largest checkpoints are used for the majority of experiments (with the exception of the 2.8B checkpoint used in the needle in the haystack tests). However, for a paper dedicated to enabling explainability for this line of LLMs, this seems insufficient; why are the 130M and 1.4B checkpoints used for the majority of tests? In particular, the 130M model performs the worst among the suite of Mamba 1 models, but is used to demonstrate qualitative claims about the method (e.g., Figure 3, Figure 4, and reported runtimes). Given the 2.8B is the most accurate and, thus, most desirable for practical use, it should ideally be evaluated along with the 130M (and 1.4B) models to demonstrate the efficacy of this approach at the 2.8B parameter scale.
### Clarity
For both the presented experiments and compared methods, several important details were not clear in the paper. In particular, for Figure 6, how is position difference calculated? E.g.:
>We use the HotpotQA [70] subset from the LongBench dataset [11], designed to test long context understanding. After selecting all 127 instances, containing sequences up to 8192 tokens, we prompt the model to summarize the full paragraph by generating ten additional tokens. Fig. 6 shows the distribution of the positional difference between a relevant token and the currently generated token.
How are the relevant tokens determined? Upon first pass, I wondered whether they were somehow derived from the
supporting fact labels in the HotpotQA dataset, but it seems to be calculated as the difference between the token's generated position vs. its position in the input. Can you clarify this, as well as how (exactly) the histogram in Figure 6 is calculated? Also, the HotpotQA subset of LongBench contains 200 instances, yet the paper evaluates 127; what is the source of this discrepancy?
Most importantly, it is not clear whether the authors are comparing to Attention Rollout, which the Ali et al 2024 Mamba explainability paper was based on, or Mamba-Attribution (i.e., the new mechanism derived for Mamba models in the Ali et al 2024 paper). As this is the most relevant Mamba-specific baseline for the proposed method, this point requires clarification within the main text.
Other portions which are unclear:
- "The results without fast CUDA kernels" <- what are fast CUDA kernels in this context? The hardware-optimized selective SSM scan kernel included in Mamba (in contrast to the full PyTorch SSM kernel)? Or are these kernels the authors are contributing to speed up their described method?
- "All models were trained for a maximum of 10 epochs, with an early stopping mechanism in place." <- What is the early stopping criterion?
- Lines 192-193 should include a forward reference noting that fine-tuning details are further described in "C.1 Models and dataset." Otherwise, it is easy to miss the small excerpt on line 193 which alludes to the 130M and 1.4B models being fine-tuned for the various tasks (which could lead readers to think pretrained Mamba models are being evaluated).
Technical Quality: 3
Clarity: 2
Questions for Authors: "We use an instruction-finetuned Mamba-2.8B model" <- In order to make the paper as self-contained as possible, could the authors summarize the fine-tuning recipe for this model within the supplementary (rather than forward referencing to the huggingface page)?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Could the authors include foreseeable limitations of their work? E.g., potentially large memory utilization to enable explainability in Mamba models? Also, in the case that the compared method AttnRoll was not Mamba-Attribute (or not evaluated using the released code of the Ali et al 2024 paper), these would be important evaluation limitations to list.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed review and useful suggestions. We will address your points below.
> Evaluation of Mamba LLMs also on larger 2.8B models
With our empirical evaluations in Table 1, we aimed to cover a representative range of tasks (four different text classification tasks, image classification), models (Mamba, Vision Mamba) and model sizes (130M and 1.4B), which confirmed that our proposed approach consistently achieves the highest faithfulness scores compared to existing explanation methods---independent of the performance, architecture and domain of the specific Mamba model.
It is important to note that our experiments aim to evaluate the effectiveness of our explanation method against other methods applicable to Mamba-based models, rather than evaluating the capabilities of the Mamba architecture itself. To our knowledge, this is the most comprehensive evaluation of different attribution methods for Mamba models in the literature. Previous work [MambaAttr2024] was tested on the Mamba-130M and Vim-S model.
To further extend our evaluations and follow your suggestion, we fine-tuned the Mamba-2.8B model on the SST-2 dataset using the same protocol described in Appendix C.1. The following table presents the performance evaluation of different explanation methods on this model, confirming the superior performance of MambaLRP. We are currently running this experiment on the other NLP datasets as well and will add the results to the final version of the paper.
Mamba-2.8B results:
| Methods | $\Delta A^F$ |
|-------------------|----------------|
| Random | 0.007 |
| GI | -0.043 |
| SmoothGrad | -0.675 |
| IG | 0.322 |
| AttnRoll | 0.452 |
| G$\times$AttnRoll (Mamba-Attribution) | 0.341 |
| LRP (LN-rule) | 0.820 |
| MambaLRP (Ours) | **1.157** |
> How are the relevant tokens in Figure 6 determined and how is the histogram calculated? Why are there 127 of 200 HotpotQA samples considered?
As mentioned in Section 6, we used the top-k most relevant tokens that received positive relevance scores when applying MambaLRP and plotted the resulting histogram of relevant tokens over the positional differences between the generated and identified relevant tokens. In the paper, we present results for $k=10$. We have further validated that the shape of the histogram distribution of relevant tokens remains consistent for $k \in \{1,3,5\}$. By applying a context length cutoff of 8096 to filter HotpotQA, we obtained the 127 samples used in our experiment.
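The counting procedure described above can be sketched as follows; the relevance scores, bin edges, and value of $k$ below are hypothetical placeholders, not the paper's actual values:

```python
import numpy as np

# Hypothetical per-token relevance scores for a single generation step
# (in the paper these would come from MambaLRP).
relevance = np.array([0.02, -0.4, 0.9, 0.1, 0.0, 0.7, -0.1, 0.3])
generated_pos = len(relevance)   # position of the token being generated
k = 3

# Keep only tokens with positive relevance, then take the top-k of those.
pos_idx = np.where(relevance > 0)[0]
topk = pos_idx[np.argsort(relevance[pos_idx])[-k:]]

# Positional difference between the generated token and each relevant token;
# these differences are what the histogram in Figure 6 aggregates.
diffs = generated_pos - topk
hist, edges = np.histogram(diffs, bins=[0, 2, 4, 8])
```

Aggregating `diffs` over all generation steps and samples would then yield a histogram of how far back in the context the relevant tokens lie.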
> Clarification regarding comparison to [MambaAttr2024]
We compared our proposed method to both Attn-Rollout and Mamba-Attr introduced for the Mamba models in [MambaAttr2024]. We referred to them in our paper as AttnRoll and G$\times$AttnRoll, for consistency with the respective methods proposed for Transformers. We have modified the name G$\times$AttnRoll to Mamba-Attr in our paper.
> What are fast CUDA kernels in this context?
When we mention fast CUDA kernels, we are referring to the hardware-optimized selective SSM scan kernel included in Mamba. Our proposed explanation method does not involve any additional optimizations.
> What is the early stopping criterion?
In our experiments, we employ early stopping and end training as soon as the validation loss ceases to improve. We have now added this information to the paragraph on training details in Supplement C.1.
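As an illustration, a patience-based variant of this criterion can be sketched as follows; the patience value and the loss trajectory are assumed for illustration, and the exact criterion in Supplement C.1 may differ:

```python
# Hypothetical validation losses per epoch.
val_losses = [0.90, 0.72, 0.61, 0.60, 0.62, 0.63, 0.64]
patience = 2   # epochs to wait for an improvement (assumed value)

best, waited, stop_epoch = float("inf"), 0, None
for epoch, loss in enumerate(val_losses):
    if loss < best:
        # Validation loss improved: reset the patience counter.
        best, waited = loss, 0
    else:
        waited += 1
        if waited >= patience:
            # No improvement for `patience` epochs: stop training.
            stop_epoch = epoch
            break
```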
> Adding references to Appendix C.1 and adding details about instruction-finetuned model
Thanks for your suggestions. We have now added a reference to enhance accessibility and clarify our experimental setup in Section 5. We will also add more information about the instruction-finetuned model to the supplementary material.
> Regarding including foreseeable limitations of our work and potential memory limitations, clarification on comparison to [MambaAttr2024]
Regarding memory consumption, LRP is a backpropagation-based explanation method that requires storing activations and gradients. The memory usage depends on the model and input size. To reduce memory consumption, techniques such as gradient checkpointing can be employed. This is also true for other gradient-based methods.
Regarding the comparison with [MambaAttr2024], we did evaluate the performance of MambaLRP against Mamba-Attr (see also our answer to your previous point above). We have now clarified this potential misunderstanding in the main text and apologize for any confusion about the specific baseline method we used. For our evaluation of Mamba-Attr and Attn-Rollout, we have used the official code provided by the authors, as described in Appendix C.3, providing a fair and reproducible benchmark evaluation.
A potential limitation of gradient-based explanation methods, like MambaLRP, is that the gradient information might not be accessible, e.g., due to proprietary constraints. In these situations, one possible solution is to approximate the gradient information. We will use the additional space to add a dedicated Limitations section to outline and discuss these aspects of our method.
[MambaAttr2024] A. Ali, et al. The hidden attention of mamba models. arXiv:2403.01590, 2024.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: I thank the authors for their detailed response. My concerns have been addressed, and the authors have agreed to incorporate these changes into the next draft of the paper.
As previously stated, the submitted work is especially important as the Mamba family of models continues to be widely adopted.
> It is important to note that our experiments aim to evaluate the effectiveness of our explanation method against other methods applicable to Mamba-based models, rather than evaluating the capabilities of the Mamba architecture itself. To our knowledge, this is the most comprehensive evaluation of different attribution methods for Mamba models in the literature. Previous work [MambaAttr2024] was tested on the Mamba-130M and Vim-S model.
The breadth of evaluation is appreciated. Note that MambaAttr2024 is an unpublished manuscript, so their evaluation has not faced scrutiny for NeurIPS publication. For NeurIPS, rigor is necessary. Evaluating the proposed method on the largest checkpoint is representative of how well MambaLRP will work on the Mamba model most likely to be used by LLM practitioners; in personal experience, the behavior/performance of the 5 Mamba LLM checkpoints varies significantly. Thus, generalization across the checkpoints is not a given, and the authors' additional experiments to confirm this greatly help to address this concern. I look forward to the other NLP experiments evaluated on the 2.8B model, and I plan to raise my score from a five to a six.
As noted by another reviewer, the reproducible implementation of this method is a critical contribution, as well as the benchmarking suite. Mamba and Mamba2 have proven temperamental for features considered standard for Transformer models (e.g., unmodified use in Huggingface's SFTTrainer), so there is a large margin for error involved in researchers reproducing these results from scratch. What are the authors' plans for releasing code to reproduce results from the paper?
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful comments and for recognizing the improvements made during the short rebuttal period. We are glad to hear that your concerns have been addressed and that you are increasing your score. We are actively working on including all NLP experiments for the 2.8B model in the camera-ready version to ensure a comprehensive evaluation across different checkpoints. Additionally, the results for the SST-2 dataset, which were ready during the rebuttal period, have already been added to the paper.
> Regarding plans for releasing the code
We understand the importance of sharing our code to support ease of use and ensure reproducibility. To this end, we are preparing a user-friendly GitHub repository to provide easy access. The link to this repository will be included in the final camera-ready version of our work. Additionally, detailed instructions for replicating our results, along with demo Jupyter Notebooks, will be available in the repository. | Summary: The paper introduces an LRP framework for Mamba. The method breaks the Mamba architecture into three parts (SiLU activation, selective SSM, and multiplicative gating) and analyzes the layers using relevance scores. The evaluations on language and image tasks show that the proposed method is more precise and faithful than other explanation methods. They also show several use cases including gender bias, long-range capabilities, and the needle-in-a-haystack test.
Strengths: - The paper is easy to follow.
- The introduced LRP framework is faster and provides more faithful explanations than other explanation methods.
- The experiments show interesting interpretations of Mamba specifically for gender bias, long-context capabilities, and needle-in-a-haystack test in Mamba.
Weaknesses: - The current paper is limited to the explainability of Mamba only. I believe LRP can be applied to Transformers. It would be more interesting to compare the behavior of different methods instead of showing the behavior of one method.
- This point is related to the first point. The use case experiments are interesting, but it would be more useful if this could be compared with different methods. For instance, in the long context capability of Mamba, the paper mainly shows that Mamba can use earlier context. Instead, the paper can show that one method has better long context capability than the other in terms of accuracy, and it is verified by LRP. Also, the behavior of different methods can be compared such as 1) if the Transformer-based method is not as good as Mamba in terms of long-range modeling, what does the Transformer model focus on 2) how does the behavior change with tasks (generation vs QA) 3) how does it change with the training datasets, etc.
Overall, the paper lacks a more interesting and useful analysis of the methods using LRP as explained above. This limits the contribution of the paper.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Tables 10 and 11 show the runtime comparisons with and without using fast CUDA kernels. When using fast CUDA kernels, other methods including gradient × Input, SmoothGrad, and Integrated Gradients improve the speed a lot (22-25x faster), but the proposed MambaLRP has only 1.3x speed-up. What is the reason?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The paper briefly explained the limitation in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed feedback. We are happy to see that you appreciate the potential of our approach in providing faithful explanations. We will address your points below.
> Regarding the paper being limited to the explainability of Mamba
The class of selective state space sequence models (Mamba models) represents a significant change in model architecture compared to Transformers, necessitating tailored XAI methods to address their unique components. As LRP is not model-agnostic, new LRP rules must be developed for emerging classes of deep neural networks, as they may contain new components for which no existing LRP rules apply. For instance, **the LRP framework has already been successfully extended to include Transformers in [3,5]**.
**Mamba models, however, include new components absent in Transformers, and no prior propagation rules exist for them**. Consequently, following the work of [3,5,9,10,17] who have extended the LRP framework to new architectures (e.g. Transformers, CNNs, GNNs, regression networks), **we derived new LRP rules for Mamba models**. Proposing these rules is not trivial; it requires performing a detailed analysis of the relevance propagation process, guided by the conservation axiom. This resulted in more faithful explanations compared to naive applications of existing LRP rules not designed for Mamba and also other approaches, presented in our main paper (Table 1).
> Regarding comparing to the behavior of Transformers
While we agree that comparing the capabilities of Transformer and Mamba is an interesting line of research, our work focuses on deriving and thoroughly evaluating our proposed explanation approach for Mamba models. In this, we follow the standard approach to evaluate new explanation methods in terms of faithfulness [4, 6], which is inherently model-specific. Therefore, a comparison to transformer models is not included in the evaluation section of our paper. Instead, to show the versatility and robustness of our proposed method, we tested MambaLRP across various selective state space model architectures and sizes.
To compare the capabilities of Mamba and Transformer using their explanations, we have extended our long-range dependency experiments to include Llama-2 and Llama-3 Transformers (see the attached PDF for results). We describe our findings just below. Moreover, in our bias detection use case in Section 6 of our paper, we have already compared the performance of Mamba-130M and Mamba-1.4B to several Transformer models.
> Comparing long-range dependencies of Mamba and Transformers
Thank you for this suggestion. We also found comparing Mamba and Transformer models in terms of their long-range capabilities very interesting. Thus, we investigated this further and did a direct comparison to Transformers for our long-range dependency use case. Many of the widely used Transformers, e.g. GPT-2, GPTNeoX and Pythia, unfortunately do not allow inputs longer than 2048 tokens or do not generate sensible text from such long inputs. Instead, we use two state-of-the-art Transformers, Llama-2 and Llama-3, and extract attributions using the LRP rules of [3]. As in our Mamba experiment, we generate 10 additional tokens from the HotPotQA input and at each step explain the prediction of the generated token. Please find the results in Table A of the attached PDF.
For Llama-2, trained with a context length of 4096, the generated text becomes increasingly less sensible and repetitive for contexts longer than 4K, a limitation noted in [7,8]. When analyzing histogram distributions across models, it seems that Llama-2 uses information more uniformly across the entire context and identifies more relevant long-range dependent tokens compared to Llama-3 and Mamba. However, its output becomes nonsensical for lengths above 4K tokens, and the identified relevant tokens are typically non-semantic, such as the newline token ``<0x0A>`` and the beginning-of-sentence token ``<s>`` found at the start of the context paragraphs. For Llama-3 and Mamba, the attributions can identify meaningful relevant tokens. When directly compared, Llama-3 uses information from more mid-range dependencies than Mamba, though both favor tokens close to the end of the input as relevant. Given Llama-3's much larger size (8B) compared to Mamba (130M) and their different training settings, this analysis supports the view that Mamba can effectively use long-range information. We also find that this ability is not exclusive to SSMs but can also be achieved by Transformers.
In this first step, MambaLRP allowed us to compare long-range context capabilities across models and hope our work facilitates the generation and investigation of future comparative studies. We will add a summary of this comparison to our final paper.
> Regarding the reduced speed-up of MambaLRP compared to Gradient × Input, SmoothGrad, and Integrated Gradients.
We apologize for the confusion caused by a typo in the number. The correct runtime for MambaLRP listed in Table 11 is 0.03063. This means that, with or without fast CUDA kernels, the runtime of MambaLRP is comparable to GI.
Please let us know if you have further questions or comments. At this stage, we are still able to make revisions to the draft and actively participate in the discussion.
---
Rebuttal Comment 1.1:
Title: response to the rebuttal
Comment: Thank you for the answers. The additional analysis provided during the rebuttal is helpful. Although the rebuttal response partially addresses my concerns (limited empirical analysis), it certainly improves the quality of the paper. I also believe that the proposed method would be useful to the community. I increase my rating from 4 to 6.
Please add the full analysis between Transformers and Mamba (like Figures 6 and 7 in the paper) to the revised version.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We are glad to hear that you found our additional analyses helpful and that our proposed method is useful to the community. We are also pleased that you have raised your score. As promised, we will include the additional experiments in the paper. | Summary: The paper applies Layer-wise Relevance Propagation to Mamba layers. To maintain the relevance conservation law, the authors propose three fixes to the SiLU activation, S6 and the multiplicative gating operators respectively with the technique of gradient blocking. The proposed method improves the faithfulness of LRP explanations on Mamba significantly.
Strengths: 1. The paper is well written and easy to follow.
2. The improvement of the faithfulness is significant compared to the baselines without the fixes proposed in the paper.
Weaknesses: 1. The contribution of the work seems incremental and the techniques used are well established. The main novelty here is applying the existing LRP techniques to the Mamba architecture.
2. The explanation generated by MambaLRP is not surprising and does not bring any new insights into the behavior of architectures. It is unclear how these explanations can be used to improve the model's performance on alleviating gender bias and retrieving long-range information.
Technical Quality: 3
Clarity: 4
Questions for Authors: Can we leverage the explanations generated by MambaLRP to improve the model's behavior on the downstream gender-debiasing and long-context retrieval tasks?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: No. The authors should discuss the limitations of the usefulness of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and useful comments. We will address them in our response below.
> Regarding the main novelty of applying LRP to the Mamba
As new model architectures, such as SSMs and Mamba models, are developed, the field of XAI is challenged to develop faithful attribution methods to explain their predictions, especially given that the naive applications of existing XAI methods often fail in this regard. As LRP is not a model-agnostic framework, new LRP rules must be derived as new classes of deep neural networks emerge, since these models may include new layers or components for which no LRP rules exist. LRP was initially introduced in [9] to explain predictions for kernel-based classifiers and for multilayered neural networks. Over the years, it has been extended to accommodate new architectures. For instance, as the existing LRP rules were insufficient for explaining models such as Transformers, [3,5] introduced novel LRP rules specifically for Transformer models. Similarly, with the advent of Graph Neural Networks (GNNs), [10] proposed GNN-LRP to explain their predictions.
We would like to emphasize that for the class of selective state space sequence models (i.e., Mamba models), **there exist no propagation-based techniques such as LRP that effectively address the Mamba architecture**. A naive application of existing LRP techniques, which are not designed for Mamba, leads to far inferior and unreliable feature attributions as shown in Table 1. To overcome this shortcoming, we have (1) identified the unreliable model components and (2) derived novel propagation rules for them, which form MambaLRP. Our novel LRP rules are grounded in our theoretical analysis of the relevance propagation process through different layers/components within Mamba, which identified those that violate the conservation property. Consequently, **we proposed novel LRP rules for the SSM components, SiLU activation functions, and multiplicative gates**. The explicit LRP rules that we have proposed for Mamba models can be found in Appendix B. Given that ease of implementation is a key characteristic of XAI methods, we have further contributed by proposing straightforward implementation strategies that bypass the need for the implementation of complex LRP rules in Section 4.
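As a loose illustration of the general gradient-blocking idea (not the authors' actual Mamba rules, which are given in Appendix B of the paper), the following numpy sketch on a toy linear-plus-SiLU block shows why naive Gradient $\times$ Input violates conservation at the SiLU, and how treating the sigmoid gate as a constant (the `detach` trick, in autograd terms) restores it:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup (all values hypothetical): a linear map followed by SiLU.
x = np.array([1.0, -2.0, 0.5])
W = np.array([[0.3, -0.1, 0.8],
              [0.5,  0.2, -0.4]])
z = W @ x
y = z * sigmoid(z)                    # SiLU
f = y.sum()                           # scalar "prediction" to explain

# Naive Gradient x Input through SiLU: uses the full local derivative,
# and sum(R_gi) generally differs from f (conservation is violated).
silu_grad = sigmoid(z) + z * sigmoid(z) * (1.0 - sigmoid(z))
R_gi = x * (W.T @ silu_grad)

# Gradient blocking: treat sigmoid(z) as a constant gate, so the block
# is locally linear in z and relevance is redistributed conservatively.
R_out = y                             # relevance at the output
R_blocked = x * (W.T @ (R_out / z))   # epsilon-rule-style redistribution
assert np.isclose(R_blocked.sum(), R_out.sum())  # conservation holds
```

The `assert` checks that the blocked variant redistributes exactly the output relevance onto the inputs, which is the conservation axiom the paper's analysis is guided by.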
> Regarding MambaLRP explanations not being surprising, without new insights into the behavior of architectures.
In the field of XAI, one line of research focuses on developing reliable explanation methods for DNNs [3,5,9,10,13,14] and another complementary line of research on using these methods for in-depth model analysis [12,15] and insight [16]. XAI applications, such as model debugging, identifying biases, examining fairness, and assessing capabilities like long-range dependencies, rely on high-quality explanations. Thus, **our study specifically targets the challenge of generating high-quality explanations for the novel class of selective state space sequence models**, as their highly non-linear and complex architectures make explaining them a significant challenge.
As we have proposed a novel explanation algorithm for Mamba, our main experiments are focused on analyzing the faithfulness of the explanations generated by MambaLRP against those produced by other methods, shown in Table 1. We demonstrate that our method provides more faithful explanations compared to existing alternatives while being computationally more efficient (see Appendix C.8). We observed that this approach does allow us to identify unexpected model behaviors, as illustrated in our image classification experiment in Figure 5, where the prediction of the class 'carton' is influenced by the presence of a watermark on the image. **To bring new insights into the model's behavior, we used MambaLRP to uncover gender bias, investigate long-range capabilities of Mamba, and propose an explanation-aware measure for the Needle-in-a-haystack test, shown in Section 6**.
> Regarding leveraging MambaLRP explanations for model improvement
Please note that **our use cases (Section 6) focus on model diagnosis rather than model improvement**. Identifying a model's weaknesses is the essential first step before any improvements can be made. For example, as illustrated in Figure 5, MambaLRP can be employed to analyze the model's sensitivity to Clever-Hans features, such as watermarks in Chinese. Additionally, it can be used to detect gender biases, as discussed in Section 6. In the case of gender debiasing, as an example, if ground-truth explanations are available, the model can be fine-tuned to align the generated explanations with the ground-truth as done in [11,15] for other XAI approaches and bias types. Regarding long-context retrieval tasks, the needle-in-a-haystack test is designed to evaluate such capabilities in LLMs. In Section 6, we introduced a new metric based on the explanations generated by MambaLRP to measure the model's retrieval performance in an explanation-aware manner. This means that instead of solely checking retrieval accuracy, one can also assess if the model retrieves the right information for the right reasons. For more detailed and interesting results, please refer to Figure 13 in Appendix C.7. In that figure, we have shown a case where the model retrieved the correct information based on incorrect evidence, which MambaLRP was successfully able to detect. This failure case could not be identified by the retrieval accuracy metric typically used in the needle-in-a-haystack experiment, highlighting the value of MambaLRP in providing practical insights. Leveraging XAI methods for improving safety aspects of models is a very active line of research and crucially requires reliable explanation methods. With MambaLRP, we have proposed a faithful and robust explanation method for this purpose.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
I would appreciate if you could comment on the author's rebuttal, in light of the upcoming deadline.
Thank you,
Your AC | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments and valuable feedback. We responded to the comments and made the following changes to our submission. In particular:
- We extended our faithfulness evaluation in Table 1 (main paper) to the larger Mamba-2.8B model trained on SST-2, confirming MambaLRP consistently outperforms all other approaches. To our knowledge, this is currently the most extensive evaluation of XAI methods for Mamba in the literature.
- Reviewer 4ExW and Reviewer dNt6 expressed interest in a direct comparison to Transformers to investigate long-range dependencies. We performed this experiment for Llama-2 and Llama-3 using LRP for Transformers. Our analysis reveals that Mamba effectively uses long-range information, matched by the more sophisticated and larger Transformer model (Llama-3), and notably not Llama-2. Due to space constraints, details for this experiment are given in response to Reviewer 4ExW.
> Clarification regarding the novelty and contributions of MambaLRP
As new classes of deep neural networks are developed, such as selective state space sequence models, XAI needs to keep up with these model innovations by providing reliable explanations. We contribute to this goal via:
- A novel, thorough analysis of the relevance propagation through Mamba components, guided by the conservation axiom (see Section 4 and Appendix A).
- Proposal of new LRP rules to mitigate violation of conservation, leading to state-of-the-art explanations for Mamba (see Appendix B).
- Efficient implementation of these proposed rules (see Section 4).
- Thorough evaluation of attribution methods for Mamba (see Section 5).
- Insightful and practical use cases, addressing aspects of AI safety and transparency (see Section 6).
While we focused on Mamba, the proposed propagation rules are generally applicable to other models using these components, e.g. multiplicative gates in recent models HGRN [18], RWKV [19] and MEGA [20].
We have now summarized our contributions and methodological novelty more clearly in Sections 1, 2, and 4.
We also appreciate the positive remarks, such as:
- The improvement of the faithfulness is significant compared to the baselines. (Reviewer wgXY)
- The introduced LRP framework is faster and provides more faithful explanations than other explanation methods. (Reviewer 4ExW)
- The experiments show interesting interpretations of Mamba specifically for gender bias, long-context capabilities, and needle-in-a-haystack test in Mamba. (Reviewer 4ExW)
- Explainability of Mamba models is a very important topic. The proposed adaptation to Mamba models is new and an excellent application of this methodology. (Reviewer mtfa)
- This work has significant opportunity for significant impact as the need to explain the decision making processes behind Mamba models grows. (Reviewer mtfa)
- LRP for Mamba is an important direction that facilitates the community in improving and understanding these models. (Reviewer dNt6)
- The method is applicable and allows the authors to provide insightful analyses about Mamba models. (Reviewer dNt6)
- The paper is well written and easy to follow. (Reviewer wgXY)
In summary, additional experiments prompted during rebuttal (1) confirmed the robustness of our evaluation, (2) provided detailed descriptions addressing soundness and novel contributions of MambaLRP, and (3) extended our use cases with additional comparisons, demonstrating the reliability and practical benefits of MambaLRP to the community.
We responded to each reviewer's initial comments below and hope our answers appropriately address all their questions and concerns. We are happy to answer any remaining questions during the discussion phase.
To avoid redundancies, we added references from the individual rebuttals below.
**Rebuttal References**
[1] Ali et al. The hidden attention of mamba models. arXiv, 2024.
[2] Zimerman et al. A Unified Implicit Attention Formulation for Gated-Linear Recurrent Sequence Models. arXiv, 2024.
[3] Ali et al. XAI for transformers: Better explanations through conservative propagation. ICML, 2022.
[4] Blücher, et al. Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks. TMLR, 2024.
[5] Achtibat et al. AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers. ICML, 2024.
[6] Samek et al. Evaluating the visualization of what a Deep Neural Network has learned. IEEE TNNLS, 2017.
[7] Chen et al. Clex: Continuous length extrapolation for large language models. ICLR, 2024.
[8] Huang et al. Training-free long-context scaling of large language models. ICML, 2024.
[9] Bach et al. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 2015.
[10] Schnake et al. Higher-Order Explanations of Graph Neural Networks via Relevant Walks. PAMI, 2022.
[11] Anders et al. Finding and removing clever hans. Inf Fusion, 2022.
[12] Przemyslaw et al. Marrying Fairness and Explainability in Supervised Learning. ACM Conf on Fairness, Accountability, and Transparency, 2022.
[13] Samek et al. Explaining deep neural networks and beyond: A review of methods and applications. Proc of IEEE, 2021.
[14] Arrieta et al. Explainable Artificial Intelligence: Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf fusion, 2020.
[15] Ali et al. Explainable Artificial Intelligence: What we know and what is left to attain Trustworthy Artificial Intelligence. Inf Fusion, 2023.
[16] Roscher et al. Explainable machine learning for scientific insights and discoveries. IEEE Access, 2020.
[17] Letzgus et al. Toward explainable artificial intelligence for regression models: A methodological perspective. IEEE Signal Proc, 2022.
[18] Qin et al. Hierarchically gated recurrent neural network for sequence modeling. Neurips, 2023.
[19] Peng et al. RWKV: Reinventing RNNs for the Transformer Era. EMNLP, 2023.
[20] Ma et al. Mega: Moving average equipped gated attention. ICLR, 2023.
Pdf: /pdf/764e52b49d00620f354d0cb586131174447d9f7d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adaptive Experimentation When You Can't Experiment | Accept (poster) | Summary: This article studies the pure exploration transductive linear bandit problem in the presence of instrumental variables. The authors assume a linear structural equation model on the instrument, treatment and outcome. The proposed method attempts to estimate the parameters in the structural model while optimally designing to learn the best arm (i.e. treatment). The paper is mainly theoretical, but some simulations are provided to demonstrate the method.
Strengths: 1. The combination of instrument variables and pure exploration appears novel.
2. The proposed method is guided by finite-time confidence bounds for two-stage least squares. The lower and upper bounds of sample complexity are provided in the paper.
3. The method outperforms the standard methods UCB-OLS and UCB-IV.
Weaknesses: 1. Section 3 is very dense. The authors can consider shortening the content before Section 2.2 and use more room to explain Section 3.
2. No real-data demonstration of the proposed method.
Technical Quality: 3
Clarity: 2
Questions for Authors: N.A.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novelty of our problem setting and ideas, as well as our theoretical contributions.
Regarding your comments:
1. We agree that Section 3 is dense, as the paper packs many theoretical results into limited space. We appreciate the suggestion to shorten the earlier content. The camera-ready version allows an additional page, giving us the opportunity to make these improvements.
2. We would have loved to examine our approach on real data and compare it against alternative methods. Unfortunately, to the best of our knowledge, there is no public dataset available for this specific problem. | Summary: The paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem, which addresses the challenges of conducting adaptive experimentation in environments where direct randomization is not possible. From my understanding, the paper studies the best arm identification problem under linear structural model with non-compliance. The paper proposes the algorithms with nearly optimal sample complexity guarantee.
Strengths: 1. The paper provides a thorough theoretical analysis, including proofs of finite-time confidence intervals for the estimators and sample complexity bounds. The theoretical contributions are significant and appear to be solid.
2. The problem is practical-relevant and important.
Weaknesses: 1. The connections with two streams of the literature should be more clearly stated. The basic problem set up is very classical econometric setting where the non-compliance exists. The analysis framework and some of the tools are very standard in transductive linear bandit problem.
2. The presentation of Section 1 is not easy to follow. I feel the authors try to manage the terminology from causal inference and pure exploration. For example, in Line 78, “measurement” and ”evaluation” are new terminology of the paper and a bit hard to connect with the example proposed in introduction.
3. The authors might want to reconsider the title. The title, from my perspective, is a bit confusing and misleading.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could you please elaborate a bit more on why the confidence interval in Section 2.2 is novel? I know the traditional 2SLS always using asymptotic normality to construct CI. Is non-asymptotic or asymptotic the key difference?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our thorough theoretical analysis, noting its significance and solid foundation, as well as the strong practical value of our work.
Regarding your comments on weaknesses, we appreciate your detailed concerns:
1. As the reviewers kindly pointed out, we did take on the challenge of connecting fairly disparate settings in the literature. We addressed these aspects in the introduction, page 3 footnote, and the related work section of the appendix. We will conduct another round of editing to more clearly delineate the two streams of topics.
2. Thank you for this very insightful comment. We will link the terminology back to the example proposed in the introduction, where $Z$ refers to encouragement and $X$ refers to treatment. The terms "measurement" and "evaluation" come from the transductive linear bandit literature. We will clarify this further in the revision.
Thanks for your question:
Non-asymptotic confidence intervals are one of our key novelties. While asymptotic intervals for the 2SLS estimator are known, they are not useful for algorithm design because they do not allow for identifying and eliminating bad treatments in the non-asymptotic regime with a precise correctness guarantee.
Our second novelty addresses the limitations of the only other known non-asymptotic interval, which requires simultaneous control over the $Z$s and $X$s; this is challenging because $X$ is a random consequence of $Z$. Our confidence interval resolves this by using data splitting in two phases. This discussion is included in the paper.
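For context, here is a minimal numpy sketch of the classical (non-split) 2SLS point estimate on synthetic confounded data; the names and data-generating model are illustrative, and this is not our two-phase, data-splitting procedure:

```python
import numpy as np

def two_stage_least_squares(Z, X, Y):
    """Classical 2SLS: regress treatments X on instruments Z, then
    regress outcomes Y on the fitted treatments."""
    Gamma_hat = np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    X_hat = Z @ Gamma_hat
    return np.linalg.lstsq(X_hat, Y, rcond=None)[0]    # second stage

# Synthetic non-compliance model with an unobserved confounder u
rng = np.random.default_rng(1)
n = 5000
Gamma = np.array([[1.0, 0.3], [0.2, 0.8]])
theta = np.array([2.0, -1.0])
Z = rng.normal(size=(n, 2))                  # encouragements
u = rng.normal(size=n)                       # confounder
X = Z @ Gamma + np.outer(u, [1.0, 1.0])      # confounded treatments
Y = X @ theta + u                            # confounded outcomes
theta_hat = two_stage_least_squares(Z, X, Y)
# theta_hat approximately recovers theta; plain OLS of Y on X would be biased
```

The difficulty our interval addresses is precisely that the first-stage regressors $X$ here are random consequences of the chosen $Z$, which the split-phase construction decouples.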
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal carefully. Thanks for the clarification. I really appreciate your efforts! | Summary: This paper addresses the issue of adaptive experimentation using "encouragement" rather than "compulsion instruction," a scenario commonly encountered in industrial applications. The proposed solution integrates linear bandit algorithms with instrumental variables regressions. The authors provide rigorous theoretical guarantees for their method and demonstrate its superior performance compared to traditional A/B testing and conventional linear bandit algorithms.
Strengths: The scenarios examined by the authors are well-motivated. In practice, numerous situations exist where only encouragement can be employed to influence user decisions. Considering the shift in industry from traditional A/B testing to adaptive experimentation, the study presented here has the potential for significant industry impact.
In addition, the proposed algorithm is intuitive, and the theoretical guarantees provided are robust.
Weaknesses: The authors could enhance their study by conducting additional experiments to verify the robustness of their results.
Furthermore, there should be a more detailed discussion of the p-values and confidence intervals to strengthen the statistical analysis.
Technical Quality: 4
Clarity: 3
Questions for Authors: How to think of p-values/confidence intervals as it is important in an experimentation setting.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy to see that the reviewer likes our work and finds its strong practical value.
Regarding your comment on additional experiments:
Our work is primarily theoretical and provides an initial solution, making the surprising connection between encouragement designs and pure exploration linear bandits. This highlights an area that the adaptive experimentation/bandits community has largely overlooked. We have included experiments in the Appendix to demonstrate the impact of confounding and the effectiveness of our solution. We acknowledge that more experiments can be conducted and hope our work inspires further study in this area. We will update our conclusion to reflect this.
Regarding the question on p-values and confidence intervals in an experimentation setting:
they correspond 1-to-1. For example, we can construct a confidence interval by considering the set of $\theta$ that are not rejected under the null hypothesis for a given $p$-value. A good reference is Johari, Ramesh, et al. "Always valid inference: Continuous monitoring of a/b tests." Operations Research 70.3 (2022): 1806-1821. Thus our confidence intervals in Section 2 could be used to provide inference. We will add a small discussion of this correspondence to the final draft.
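This correspondence can be made concrete with a toy example (purely illustrative, not the intervals from Section 2): invert a two-sided z-test for a normal mean by collecting all candidate values that are not rejected at level $\alpha$.

```python
import math
import numpy as np

def ci_by_test_inversion(x, alpha=0.05):
    """Confidence interval for a normal mean (known unit variance) as the
    set of theta0 NOT rejected by a two-sided z-test at level alpha."""
    n, xbar = len(x), float(np.mean(x))
    accepted = []
    for theta0 in np.linspace(xbar - 1.0, xbar + 1.0, 20001):
        z = math.sqrt(n) * abs(xbar - theta0)
        p_value = math.erfc(z / math.sqrt(2))  # two-sided normal p-value
        if p_value > alpha:
            accepted.append(theta0)
    return accepted[0], accepted[-1]

rng = np.random.default_rng(2)
x = rng.normal(loc=0.5, size=200)
lo, hi = ci_by_test_inversion(x)
# recovers the usual z-interval: mean(x) +/- 1.96 / sqrt(200)
```

Up to the grid resolution, the inverted acceptance region coincides with the familiar closed-form interval, which is exactly the 1-to-1 correspondence described above.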
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions on the confidence interval. I will keep my score unchanged. | Summary: This paper addresses the problem of pure exploration bandits in the setting of encouragement designs. The authors describe the problem in terms of online instrumental variable regression. Toward this end, the authors derive a finite time confidence interval. Using this as the main tool, the authors then describe a pure exploration transductive bandit. Algorithms are provided for optimizing the pure exploration problem. A number of theoretical results are provided describing the entailed sample complexity of the proposed approach. Empirical results are provided which show strong performance against other baselines.
Strengths: This is a problem that has practical relevance in both industrial and social scientific settings. The task, to my knowledge, is novel in its formulation, and the authors do a nice job of delineating this work from prior art. The authors do a nice job of presenting algorithms and analysis in both the settings of a known and unknown structural models. Further, there is thorough analysis of each of the proposed procedures' properties.
Weaknesses: My concerns are largely around two things:
1. There is a fairly limited experimental evaluation here. It would be helpful if the authors provided a more thorough evaluation of the proposed approaches' behavior across a wider range of settings.
2. The text is a little meandering at times and as a result was a little hard to follow on first read. I would suggest a round of editing in order to improve the narrative and organization of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Throughout an E-optimal design is also employed. My main question is whether the choice of E-optimality is done as a matter of convenience, or if it is motivated by aspects of the problem that would favor this criterion over other choices.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novelty of our problem formulation and the thorough analysis throughout.
Concern on limited experimental evaluation:
Our work is primarily theoretical and provides an initial solution, making the surprising connection between encouragement designs and pure exploration linear bandits. This highlights an area that the adaptive experimentation/bandits community has largely overlooked. We have included experiments in the Appendix to demonstrate the impact of confounding and the effectiveness of our solution. We acknowledge that more experiments can be conducted and hope our work inspires further study in this area. We will update our conclusion to reflect this.
Concern on readability of results:
We will undertake additional rounds of editing and polishing to enhance readability. The camera-ready version will allow us to add one more page, which will help us improve clarity.
Regarding the question:
Thank you for your question. We chose the E-optimal design to ensure the covariance matrix of the collected data is well-conditioned by maximizing the minimal eigenvalue, a requirement not necessarily met by other objectives.
This well-conditioning is crucial for efficiently meeting the stopping condition of the $\Gamma$ estimation phase.
That said, we believe E-optimal design is sufficient but may not be necessary -- perhaps a more sophisticated design could be more useful, which remains a direction for future work.
A significant algorithmic analysis contribution is determining the number of samples for E-optimal design to maintain these properties without significantly increasing sample complexity.
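To illustrate what E-optimality buys (a toy sketch only, not the design procedure analyzed in the paper): allocate pulls over candidate encouragement vectors so as to greedily maximize the minimum eigenvalue of the resulting design matrix, keeping the collected covariance well-conditioned.

```python
import numpy as np

def greedy_e_optimal(Z, n_pulls):
    """Greedy surrogate for an E-optimal design: at each step pull the
    candidate vector that most increases lambda_min of the design matrix."""
    d = Z.shape[1]
    A = 1e-8 * np.eye(d)   # tiny ridge so lambda_min is defined at the start
    counts = np.zeros(len(Z), dtype=int)
    for _ in range(n_pulls):
        gains = [np.linalg.eigvalsh(A + np.outer(z, z))[0] for z in Z]
        best = int(np.argmax(gains))
        A += np.outer(Z[best], Z[best])
        counts[best] += 1
    return counts, float(np.linalg.eigvalsh(A)[0])

Z = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
counts, lam_min = greedy_e_optimal(Z, 20)
# the allocation spreads pulls so that lam_min grows roughly linearly in n_pulls
```

Other criteria (e.g. D- or G-optimality) would maximize different functionals of the design matrix and need not guarantee a large minimum eigenvalue, which is why E-optimality fits the stopping condition of the $\Gamma$ estimation phase.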
---
Rebuttal Comment 1.1:
Comment: Dear reviewer Y9pp,
As we approach the end of the rebuttal period, we sincerely appreciate this last opportunity to further engage with the reviewer and clarify any outstanding questions or concerns. We thank the reviewer again for acknowledging the novelty of our problem formulation and the thorough analysis throughout. If our rebuttal has adequately addressed the points raised, we would kindly request that the reviewer re-evaluate the score accordingly. We remain open to any additional feedback.
---
Rebuttal Comment 1.2:
Comment: Thank you for your answers and clarifications to my questions/concerns. I will leave my score unchanged, but appreciate the points raised by the authors. | Rebuttal 1:
Rebuttal: We thank the reviewers for providing insightful comments. As the strengths of our paper,
reviewers Y9pp, oap7, and 4vBb have acknowledged the practical relevance and importance of the problem we introduced, and reviewers Y9pp and byyL acknowledged that our formulation is novel.
Finally, reviewers Y9pp, oap7, and 4vBb have mentioned that our analysis is thorough and solid.
Contextual Multinomial Logit Bandits with General Value Functions | Accept (poster) | Summary: This paper considers MNL bandits with a general value function. The authors first examine the case of stochastic contexts and rewards. They suggest an epoch-based algorithm with an offline regression oracle. With uniform exploration, the algorithm achieves a regret bound, specifically $T^{2/3}$ for finite and linear classes. By utilizing better exploration, it achieves $\sqrt{T}$ for these classes. Next, they consider adversarial contexts and rewards. With uniform exploration, the algorithm achieves a regret bound of $T^{5/6}$ for finite or linear classes. With better exploration, it achieves $T^{3/4}$. Lastly, by using Thompson sampling, it achieves a $\sqrt{T}$ regret bound. Importantly, the suggested algorithms do not depend on $\kappa$, implying better dependency on $B$.
Strengths: - This paper first considers a general value function for MNL bandits.
- They propose algorithms for stochastic or adversarial context and rewards and provide regret analysis.
- The regret bound has better dependency on $B$ (without including $\kappa$) compared to previously suggested ones.
Weaknesses: 1. The regret bound has a super-linear dependency on $K$ for the $\sqrt{T}$ results in Corollaries 3.8, 4.7, and 4.8.
2. It does not provide regret lower bounds, so it is hard to assess the tightness of the achieved regret upper bounds.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there any insight about how regret bounds do not include the dependency on $\kappa$ for linear class? Or does it include $\kappa$ when it is applied to the standard contextual MNL model?
2. The Feel-good Thompson sampling algorithm seems to outperform the other algorithms in both the stochastic and adversarial cases. Is there a benefit to using epoch-based algorithms over the Thompson sampling method?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I could not find a discussion on the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Is there any insight about how regret bounds do not include the dependency on 𝜅 for linear class? Or does it include 𝜅 when it is applied to the standard contextual MNL model?**
A: The intuition on why we do not have $\kappa$ dependency is that most previous MNL works (e.g. [7,20,23]) adopt a UCB-type approach, which first approximates the true weight parameter $\theta$ via MLE and constructs a confidences set based on this approximation. The $\kappa$ dependence will unavoidably appear from this approximation step. Different from previous approaches, we reduce the learning problem to a regression problem, and show that the regret directly depends on the regression error. This key step enables us to remove the dependency on $\kappa$.
**Q2: Feel-good Thompson sampling algorithm seems to outperform other algorithms including stochastic or adversarial cases. Is there a benefit to using epoch-based algorithms over the Thompson sampling method?**
A: As shown in Table 1, while Feel-good Thompson sampling outperforms other algorithms in terms of regret upper bound, it is not computationally efficient (even for a small $K$) since it is unclear how to sample the value function at each round efficiently. On the other hand, our other algorithms, including those epoch-based algorithms, are computationally efficient (at least for small $K$).
**Q3: I could not find a discussion on the limitations of this work.**
A: We do discuss the limitations of our work in various places (as acknowledged by other reviewers), such as the assumption on the no-purchase option, the inefficiency of solving Eq. (5) when $K$ is large (Lines 214-215), and the disadvantage of the FGTS algorithm (Lines 322-326).
---
Rebuttal 2:
Title: Thank you for your response
Comment: Regarding $\kappa$, it is still not clear how this term is avoided. For regression with a linear class, ERM seems to minimize the log loss to estimate parameters, as in previous MNL models, which involves parameter approximation. Could you provide any helpful comments regarding this? I am also wondering why the $\epsilon$-covering number does not need to include $\kappa$.
---
Rebuttal Comment 2.1:
Comment: To clarify, the reason why most previous MNL works using a UCB-type approach (e.g. [7,20,23]) have this $\kappa$ dependency is that their analysis depends on the confidence width constructed via MLE (e.g. Theorem 2 in [20]). However, different from previous approaches, we show that the regret directly depends on the **regression error**. Moreover, while the ERM oracle may involve parameter approximation, importantly its regression error **does not** explicitly depend on the distance between the estimate and the true parameter, because making accurate predictions is in a sense easier than accurately estimating the parameter (an inaccurate parameter estimate could still lead to an accurate prediction).
As for the covering number, we show in Appendix B.1 (line 427) that the $\epsilon$-covering number is bounded by $\left(\frac{16B}{\epsilon}\right)^d$, with no dependence on $\kappa$. | Summary: This paper addresses the problem of contextual multinomial logit (MNL) bandits with general value functions across both stochastic and adversarial settings. The authors develop a suite of algorithms for different settings and with different computation-regret trade-offs. The application to the linear case surpasses previous works in terms of both statistical and computational efficiency.
Strengths: 1. **Novelty of the Setting**: This paper is the first to explore contextual MNL bandits with general value functions, representing a significant expansion in the scope of MNL bandit problems. The setting is both novel and interesting.
2. **Innovative Techniques**: The introduction of several new techniques to tackle the complexities introduced by general value functions is commendable. The methods may inspire the following works and be useful in other areas.
3. **Improved Efficiency**: The application of these methods to linear cases shows improvements over previous works in both statistical and computational efficiency, making this a valuable contribution to the field.
Weaknesses: 1. **Computational Inefficiency**: The Feel-Good Thompson sampling algorithm, as discussed, lacks computational efficiency, even for linear cases, which could limit its practical applicability.
2. **Lack of Experimental Validation**: The absence of empirical experiments to verify the theoretical claims weakens the paper's impact. Experimental results are crucial for validating the effectiveness and practicality of the proposed methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Zhang and Sugiyama (2023) developed a computationally efficient algorithm for MLogB bandit problem. As MLogB and MNL are similar, how might their approach be adapted to the MNL bandit problem addressed in this paper to enhance computational efficiency?
2. The authors claim that to ensure that no regret is possible, they make Assumption 1 in Line 96. Does this imply that achieving no regret is impossible in unrealizable scenarios? Could the authors provide some intuition about the reason?
Ref: Yu-Jie Zhang and Masashi Sugiyama. Online (Multinomial) Logistic Bandit: Improved Regret and Constant Computation Cost. In NeurIPS 2023.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Computational Inefficiency: The Feel-Good Thompson sampling algorithm, as discussed, lacks computational efficiency, even for linear cases, which could limit its practical applicability.**
A: We acknowledge that the Feel-Good Thompson sampling is not efficient theoretically. However, empirically, as mentioned in [30], one can apply stochastic gradient Langevin dynamic (SGLD) to approximately sample a value function.
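For readers unfamiliar with SGLD, below is a toy sketch on a Gaussian log-posterior (illustrative only; in practice the FGTS posterior over value functions is what would be sampled approximately):

```python
import numpy as np

def sgld_sample(grad_log_post, theta0, step=0.01, n_steps=2000, seed=0):
    """Stochastic gradient Langevin dynamics: noisy gradient ascent whose
    stationary distribution approximates the target posterior."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(n_steps):
        theta += 0.5 * step * grad_log_post(theta)   # drift toward high posterior
        theta += np.sqrt(step) * rng.normal(size=theta.shape)  # injected noise
    return theta

# Toy posterior N(mu, I): grad log p(theta) = -(theta - mu)
mu = np.array([1.0, -2.0])
samples = np.array([sgld_sample(lambda t: -(t - mu), np.zeros(2), seed=s)
                    for s in range(200)])
# the empirical mean of the samples is close to mu
```

In the bandit setting, one chain per round would yield an approximate posterior draw of the value function at the cost of gradient computations rather than exact sampling.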
**Q2: [Zhang and Sugiyama, 2023] developed a computationally efficient algorithm for MLogB bandit problem. As MLogB and MNL are similar, how might their approach be adapted to the MNL bandit problem addressed in this paper to enhance computational efficiency?**
A: While MLogB and MNL share some common components, their setups are in fact quite different: MNL considers the case where multiple items (decisions) are chosen at each round and one of these items is selected according to a multinomial distribution, while MLogB considers the case where a single item (decision) is chosen at each round but the outcome is one of the $K+1$ different possible outcomes (including the no-purchase outcome) following a multinomial distribution. Since the computational inefficiency for MNL usually comes from the fact that the number of possible subsets is large, we do not see how ideas from MLogB can be utilized to improve the computational efficiency of our algorithms.
**Q3: The authors claim that to ensure that no regret is possible, they make Assumption 1 in Line 96. Does this imply that achieving no regret is impossible in unrealizable scenarios? Could the authors provide some intuition about the reason?**
A: Making a realizability assumption is standard for bandit problems with function approximation [10, 11, 14, 29, 31], usually for the purpose of obtaining efficient algorithms via reduction to regression, but it does not imply that no regret is impossible without it. In fact, for contextual bandits, applying the classical (but inefficient) Exp4 algorithm [Auer et al., 2002] achieves the optimal regret even without the realizability assumption.
[Auer et al., 2002]: Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire, The Nonstochastic Multiarmed Bandit Problem, SIAM Journal on Computing, 2002.
---
Rebuttal Comment 1.1:
Title: Thank you for your rsponse.
Comment: I thank the authors for their response and have no further questions. | Summary: This paper introduces a couple of algorithms for contextual multinomial logit bandits under two different assumptions: i) stochastic contexts and rewards; ii) adversarial contexts and rewards. The theoretical analysis for algorithms for these two setups is pretty solid. Despite the contribution of this study, I believe that this paper needs more work to be done for acceptance.
Strengths: The setups for this work are pretty inclusive, representing that the contributions of the work can be significant. The literature review is also solid as well.
Weaknesses: The paper seems incomplete, possibly due to page limits. Some algorithms and results are not fully described, and there is a lack of experimental validation to support the theoretical findings. The authors should better organize and present their work, focusing on the most important results. Additionally, many terms and mathematical notations are used without proper definitions or introductions.
In spite of repeated appearances of log loss regression, its definition has not been stated.
The definitions of Err_log, Reg_log, and ERM are missing.
For the function class F, what is the definition of |F| in Lemma 3.1?
I think that many things other than these are missing.
Technical Quality: 2
Clarity: 2
Questions for Authors: What is the relationship between \pi and q_m?
I am interested in how to apply the feel-good TS [30] for this setup.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: I thank the authors for mentioning some limitations in the manuscript. I also agree that solving Eq. (5) in polynomial time is not easy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The paper seems incomplete, possibly due to page limits. Some algorithms and results are not fully described. Additionally, many terms and mathematical notations are used without proper definitions or introductions.**
A: We strongly disagree with this comment. Could the reviewer kindly point out what algorithms and results are not fully described in this paper? For your examples of “terms and mathematical notations used without proper definitions or introductions”, they are in fact all explicitly defined (see below).
**Q2: In spite of repeated appearances of log loss regression, its definition has not been stated.**
A: This is not true. The definition of log loss regression is in Assumption 2 (offline) and Assumption 3 (online), with the definition of log loss in line 77.
**Q3: The definitions of Err_log, Reg_log, and ERM are missing.**
A: $\text{Err}_{\log}$ is defined in Assumption 2 and meant to be an abstract generalization error bound of the offline regression oracle, whose concrete form depends on its three arguments: $n$ (the number of samples), $\delta$ (the failure probability), and $\mathcal{F}$ (the function class); it is instantiated clearly in Lemma 3.1 and Lemma D.1 for three examples.
Similarly, $\text{Reg}_{\log}$ is defined in Assumption 3 and meant to be an abstract regret bound of the online regression oracle. Again, its concrete form for three examples is provided in Lemma 4.1 and Lemma D.1.
We also emphasize that deriving regret bounds with a general generalization error or regret bound of the regression oracle and then instantiating them for concrete examples is very common in this line of work; see [10, 11, 14, 25, 27, 29, 31].
ERM, “empirical risk minimizer” (line 119), is explicitly defined in Lemma 3.1 (line 123).
**Q4: For the function class $\mathcal{F}$, what is the definition of $|\mathcal{F}|$ in Lemma 3.1?**
A: $|\mathcal{F}|$ represents the cardinality of the finite set $\mathcal{F}$ (which is a rather standard notation).
**Q5: What is the relationship between $\pi$ and $q_m$?**
A: $q_m: (\mathcal{X}\times r) \mapsto \Delta(\mathcal{S})$ is a stochastic policy (mapping from a context-reward pair to a distribution over subsets (line 136)) that Algorithm 1 decides in each epoch m, while $\pi: (\mathcal{X}\times r) \mapsto \mathcal{S}$ is a deterministic policy (mapping from a context-reward pair to a subset (line 143)) that appears in our analysis. We are not very certain about what specific “relationship” the reviewer is asking about.
**Q6: I am interested in how to apply the feel-good TS [30] for this setup.**
A: All details on applying feel-good TS to our setup can be found in Appendix E (as mentioned in Sec 4.2).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. I will make some changes in my evaluation based on this. | Summary: The paper presents three primary contributions for the contextual multinomial logit bandits considering both stochastic and adversarial contexts and rewards.
- a suite of proposed algorithms, each with a different computation-regret trade-off.
- advances existing regrets by removing the dependence on certain problem-dependent constants.
- extends existing works to study a general value function class (1-Lipschitz function class).
Strengths: - This seems to be the first contextual multinomial logit bandit paper considering adversarial context and reward with online regression oracle. I think the community will find this interesting.
- The paper is generally well-written and technically solid.
Weaknesses: - No experiments and simulation results are provided. But to be fair, there are other theoretical papers without experimental illustrations.
- For stochastic contexts and adversarial rewards, [20] shows there exists an efficient algorithm achieving $O(\sqrt{T})$ regret. The adversarial reward setup is more general than the stochastic reward setup, so it is fair to compare it to Corollary 3.5. Algorithm 1 has a larger regret upper bound $O(T^{\frac{2}{3}})$.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is this the first MNL bandit paper considering adversarial context and reward with online regression oracle?
- Could you please address the second point in Weakness?
- It is understandable that Algorithm 1 uses a doubling trick. But removing sampling history when feeding to the offline regression oracle (4th line in Algorithm 1) is not sensible in my opinion. Could you address this issue without affecting much of the analysis?
- Since the paper studied MNL bandit with a general value function class (1-Lipschitz function class), do you think the regret guarantee can be characterized via eluder dimension?
The next two questions are related.
- It is claimed after Corollary 3.5 that Algorithm 1 achieves a smaller regret dependence on $B$. Why is $B$ a parameter related to regret or $\textbf{Err}_{\log}$? According to the definition of linear class in Lemma 3.1, it is equivalent to assigning weight $e^{\theta^\top x_i}$ to item $i$ and $e^B$ to no-purchase. Since it is required that $\lVert x_i \rVert_2 \leq 1$ and $\lVert \theta \rVert_2 \leq B$, it seems that all $B > 0$ are equivalent.
- It is understandable the paper follows [5, 9, 18] to assume the no-purchase option is the most likely outcome. Is this a necessary assumption without which the algorithm could fail? Is it possible to relax the assumption to something like "the probability (or the weight in equation 1) of no-purchase is lower bounded by some $\Delta>0$"? I could imagine that the regrets are related to $\Delta$ (other than $B$ ) since it can be hard to learn $f$ with a large probability of the no-purchase option.
Some minor issues:
- In line 173, should "$\epsilon \rightarrow \epsilon_m$"?
- The optimal choice of $\epsilon_m$ and $\gamma_m$ are given in Appendix. I suggest including them in the Algorithm sections to make the main paper self-contained.
- If extending contextual MNL bandits to a general value function class setup is a major contribution, I would suggest to move the paragraph into the main paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: For stochastic contexts and adversarial rewards, [20] show there exists an efficient algorithm achieving $O(\sqrt{T})$ regret. The adversarial reward setup is more general than the stochastic reward. So it is fair to compare it to corollary 3.5. Algorithm 1 has a larger regret upper bound $O(T^{2/3})$.**
A: Note that we did compare to [20] in the discussion after Corollary 3.5 (Line 195 more specifically) and highlighted that the regret bound of [20] depends on $\kappa$ which can be exponential in $B$ in the worst case, while our bound only has polynomial dependence on $B$. We will add that our bound indeed has a worse dependence on $T$ as suggested.
**Q2: Is this the first MNL bandit paper considering adversarial context and reward with online regression oracle?**
A: Yes, to the best of our knowledge, our work is the first one considering the case where both the context and the reward can be adversarial (with or without regression oracles).
**Q3: It is understandable that Algorithm 1 uses a doubling trick. But removing sampling history when feeding to offline regression oracle (4th line in Algorithm 1) is not sensible in my opinion. Could you address this issue without affecting much of the analysis?**
A: While it might seem sensible to feed all the sampling history to the offline regression oracle due to the stochasticity of contexts and rewards, it does not work since the oracle requires **i.i.d. inputs of context-subset-purchase tuples** (not i.i.d. context-reward), and different epochs of our algorithm create different distributions over these tuples.
**Q4: Since the paper studied MNL bandit with a general value function class (1-Lipschitz function class), do you think the regret guarantee can be characterized via eluder dimension?**
A: Our regret bounds explicitly depend on the generalization error of ERM oracle in the stochastic environment, and the online regression regret (Algorithm 2) or the log partition function (Algorithm 3) in the adversarial environment. It is unclear to us how these quantities are directly related to the eluder dimension of a function class.
**Q5: It is claimed after Corollary 3.5 that Algorithm 1 achieves a smaller regret dependence on $B$. Why is $B$ a parameter related to regret or $\text{Err}_{\log}$? According to the definition of linear class in Lemma 3.1, it is equivalent to assigning weight $e^{\theta^\top x_i}$ to item $i$ and $e^B$ to no-purchase. Since it is required that $\|x_i\|_2\leq 1$ and $\|\theta\|_2\leq B$, it seems that all $B>0$ are equivalent.**
A: Your reparameterization is correct. However, a larger parameter $B$ makes the lowest possible weight for each item, $e^{-B}$ under your parameterization, smaller, which in turn makes the scale of the log loss larger and naturally increases the generalization error $\text{Err}_{\log}$.
**Q6: It is understandable the paper follows [5, 9, 18] to assume the no-purchase option is the most likely outcome. Is this a necessary assumption without which the algorithm could fail? Is it possible to relax the assumption to something like "the probability (or the weight in equation 1) of no-purchase is lower bounded by some $\Delta>0$"? I could imagine that the regrets are related to $\Delta$ (other than $B$) since it can be hard to learn $f$ with a large probability of the no-purchase option.**
A: If the weight for no-purchase is lower bounded by $\Delta\in(0,1)$ instead of $1$, all our regret bounds still hold with an extra factor of $1/\Delta$ (explained below). Note that this is the opposite to what the reviewer suggested — while a large no-purchase probability seemingly makes it harder to learn $f^\star$ at first glance, it in fact makes it easier because it leads to a better reverse Lipschitzness of $\mu$, and consequently the difference in value is better controlled by the difference in log loss. As an extreme example, suppose that the no-purchase weight is 0. Then, no matter what value an item has, it will always be purchased by the customer if the selected subset contains only this item, revealing no information to the learner at all.
To see why our bounds hold with an extra factor of $1/\Delta$, note that the only modification needed is Lemma B.3 (reverse Lipschitz lemma): in this case we need to show the reverse Lipschitzness for $h(a) = a/(\Delta+1^\top a)$, and via a similar calculation, it can be shown that $\Omega(\Delta^2/d^4)\|a-b\|^2 \leq \|h(a)-h(b)\|^2$. The remaining analysis remains the same.
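The effect of $\Delta$ described above can be illustrated numerically. The sketch below is ours and purely illustrative (the function `h`, the weight ranges, and the perturbation direction are our choices, not from the paper): it perturbs a weight vector along the direction $a = (1+t)b$, where the map $h$ contracts the most, and checks that a larger no-purchase weight $\Delta$ separates the resulting probability vectors better.

```python
import numpy as np

def h(a, delta):
    # MNL purchase probabilities for item weights a when the
    # no-purchase option has weight delta: h(a) = a / (delta + 1^T a).
    return a / (delta + a.sum())

rng = np.random.default_rng(0)
b = rng.uniform(0.2, 1.0, size=5)   # illustrative item weights
t = 1e-3                            # small perturbation a = (1 + t) * b

def separation(delta):
    a = (1.0 + t) * b
    return np.linalg.norm(h(a, delta) - h(b, delta)) / np.linalg.norm(a - b)

# A larger no-purchase weight gives better reverse Lipschitzness:
# nearby weight vectors map to better-separated probability vectors.
assert separation(0.9) > separation(0.1)
```

Along this direction one can compute $\|h(a)-h(b)\| = \Delta t\|b\|/\big((\Delta+\mathbf{1}^\top a)(\Delta+\mathbf{1}^\top b)\big)$, which grows with $\Delta$ for small $\Delta$, consistent with the $\Omega(\Delta^2/d^4)$ factor in the modified lemma.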
Thanks for pointing this out; we will add this discussion to the next version.
Thanks for the other suggestions and pointing out the typos. We will incorporate these in the next revision.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: We thank the authors for their response. I am maintaining my score since it is the first MNL bandit paper considering adversarial context and reward with online regression oracle. However, I hope the authors could address the following issues or give better explanation.
- Q3: An i.i.d. requirement is incorrect. There is no identical distribution. I guess the authors mean conditionally independent given the recommendation actions. So the entire history can be used in my opinion.
- Q5: I still believe $B$ should not be included in the regret upper bound and used in comparison with other algorithms. It is a free parameter, and reparameterization does not change the problem definition. A larger $B$ will not affect the log loss since $\theta$ will also scale up. If this is right, the comparison with [20] should either be removed or revised.
- Q6: This is just a suggestion. The paper could benefit from introducing our discussion on $\Delta$ into the problem definition and analysis. An intuitive explanation of how it affects regret could give readers a better understanding of the MNL bandit problem.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the further discussions and we address your questions as follows.
**Q3: An i.i.d. requirement is incorrect. There is no identical distribution. I guess the authors mean conditionally independent given the recommendation actions. So the entire history can be used in my opinion.**
A: No, we indeed mean that the tuples $(x, S, i)$ collected within epoch $m$ are i.i.d. drawn from the same distribution. This is true since the selection of $S$ follows a **fixed** conditional distribution $q_m(x,r)$ given the context $x$ and the reward $r$ within epoch $m$ (and $(x,r)$ are i.i.d. of course).
On the other hand, this i.i.d. property does not hold for tuples coming from different epochs since $q_m(x,r)$ is different for different epochs. This is why we cannot use the entire history.
**Q5: I still believe $B$ should not be included in the regret upper bound and used in comparison with other algorithms. It is a free parameter and reparameterization does not change the problem definition. A larger $B$ will not affect log loss since $\theta$ will also scale up. If this is right, the comparison with [20] should either be removed or revised.**
A: Note that we already normalize the contexts so that they are within the unit $\ell_2$ ball, so further normalizing $\theta$ is not without loss of generality and is restricting the representation power of the function class.
We also do not understand the comment “A larger $B$ will not affect log loss since $\theta$ will also scale up”. Specifically, a larger $B$ in our formulation can lead to a lower probability of item $i$ being selected since the value of this item can be as low as $e^{-2B}$, which eventually makes the scale of the log loss of order $B$ as shown between Line 425 and Line 426.
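The effect of $B$ on the log-loss scale can be seen in a one-item example (ours, purely illustrative; the normalization with no-purchase weight $1$ is our assumption): with a single offered item at the lowest value $e^{-2B}$, the purchase probability is $e^{-2B}/(1+e^{-2B})$, so the log loss of an observed purchase is roughly $2B$.

```python
import math

def worst_case_log_loss(B):
    # Single offered item with the lowest possible value exp(-2B);
    # the no-purchase option has weight 1 (illustrative normalization).
    p_item = math.exp(-2 * B) / (1.0 + math.exp(-2 * B))
    return -math.log(p_item)

# The scale of the log loss grows linearly with B.
assert worst_case_log_loss(10.0) > worst_case_log_loss(1.0)
assert abs(worst_case_log_loss(10.0) - 20.0) < 0.01
```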
**Q6: This is just a suggestion. The paper could benefit from introducing our discussion on Δ into the problem definition and analysis. An intuitive explanation of how it affects regret could give readers a better understanding of the MNL bandit problem.**
A: Thanks for your suggestion. We will incorporate this into our next revision.
---
Rebuttal 2:
Title: Response
Comment: Thanks to the authors for their response and for making these points clear. Now it is clear how $B$ affects regret and what iid means here. I hope the authors can explain the following questions in the revised paper. At least they are not very intuitive to me.
- Why is IID necessary for the algorithm? We can still compute $Err_{\log}$ given $S$. It is known that the IID assumption with fixed $q_m$ is not a requirement for Thompson sampling. So it is not very intuitive to me. What aspects of the problem make it necessary here?
- The relationship between $B$ and $\Delta$ still gives me contradictory information. Bigger $B$ and $\Delta$ result in a lower probability of item $i$ being selected. Why does the regret increase with $B$ but decrease with $\Delta$?
I hope these questions are helpful. Again, I'll maintain my rating on this paper.
---
Rebuttal 3:
Comment: We thank the reviewer for the questions again. They are indeed very helpful, and we will add more explanation to the paper. Specifically:
**Why is IID necessary for the algorithm? We can still compute $\text{Err}_{\log}$ given $S$. It is known that the IID assumption with fixed $q_m$ is not a requirement for Thompson sampling. So it is not very intuitive to me. What aspects of the problem make it necessary here?**
- It is necessary to have i.i.d. data in our algorithm design solely because we assume that our oracle requires i.i.d. data as inputs. It is completely possible that to solve the problem itself, using the entire history is a feasible solution, but note that the weaker the oracle assumption we make, the stronger our result is, so we choose to assume that the oracle only works with i.i.d. data.
**The relationship between $\Delta$ and $B$ still gives me contradictory information. Bigger $B$ and $\Delta$ result in a lower probability of item $i$ being selected. Why does the regret increase with $B$ but decrease with $\Delta$?**
- Technically, $\Delta$ and $B$ have opposite effects because a larger $\Delta$ makes it easier for the learner to identify the value difference from the realized item selection, while a larger $B$ makes the regression on the item selection probability harder. This is a great question though, and we will add more intuitive explanations to the paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Replicability in Learning: Geometric Partitions and KKM-Sperner Lemma | Accept (poster) | Summary: The authors study the list-replicable coin problem and a closely related underlying geometric problem of constructing ($k,\varepsilon$)-secluded partitions of $\mathbb{R}^d$. The authors resolve the optimal trade-off between $k,\varepsilon$, and $d$ in the latter, and as a corollary give a new set of upper bounds for the list-replicable coin problem trading off list-size and sample complexity for the former.
The coin problem asks, given coins with biases $p_1,\ldots , p_d$, to estimate the bias of each coin up to accuracy $\nu$. An algorithm for the coin problem is called *$L$-list-replicable* if, with high probability over samples, its output (for any fixed set of biases) is one of $L$ possible ($\nu$-correct) bias vectors. List replicability is closely related to other core notions of stability in learning such as differential privacy, global stability, and replicability.
Prior work established that it is possible to list-replicably solve the coin problem with list size $d+1$ and $d^2$ samples per coin. The former parameter, the list size, was known to be optimal, but it was open whether the number of samples could be improved. The main "learning" result of this paper is an upper and conditional lower bound for the list-replicable coin problem giving a trade-off between list size and samples. The upper bound states that for any list size $L \in [2^d] \setminus [d]$, there is an $L$-list-replicable algorithm using $\tilde{O}(\frac{d^2}{\nu^2\log^2(L)})$ samples per coin. The lower bound states that this trade-off is tight for any algorithm based on "secluded partitions".
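Plugging numbers into the stated upper bound makes the interpolation concrete (a sketch of ours; constants and logarithmic factors are suppressed, so the function below is only indicative):

```python
import math

def samples_per_coin(d, nu, L):
    # Indicative form of the stated upper bound, ignoring
    # logarithmic factors: O(d^2 / (nu^2 * log^2 L)).
    return d**2 / (nu**2 * math.log(L)**2)

d, nu = 50, 0.1
# At the optimal list size d + 1 this is roughly d^2 / nu^2 per coin;
# at the trivial list size 2^d the d^2 factor cancels, leaving ~1/nu^2.
assert samples_per_coin(d, nu, 2**d) < samples_per_coin(d, nu, d + 1)
```

Intermediate list sizes between $d+1$ and $2^d$ interpolate between these two extremes.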
The true main result of the paper is a near-tight bound on the construction of ($k,\varepsilon$)-secluded partitions of $\mathbb{R}^d$. A ($k,\varepsilon$)-secluded partition is a partition of $\mathbb{R}^d$ such that for any point $p \in \mathbb{R}^d$, the $\ell_\infty$ $\varepsilon$-ball around $p$ touches at most $k$ members of the partition. It is known that any ($k,\varepsilon$)-secluded partition leads to a $k$-list replicable algorithm with roughly $1/\varepsilon^2$ blow-up in samples per coin over the non-list-replicable coin problem.
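As a toy illustration of the definition (ours, not from the paper): for the standard division of $\mathbb{R}^d$ into unit grid cubes, the number of cells an $\ell_\infty$ $\varepsilon$-ball touches factors across coordinates, so for $\varepsilon < 1/2$ the grid behaves like a $(2^d, \varepsilon)$-secluded partition, with the worst case attained at grid corners.

```python
import math

def cells_touched(point, eps):
    # Number of closed unit-grid cells [i, i+1]^d intersected by the
    # l_inf ball of radius eps around `point`; the count is a product
    # of per-coordinate counts.
    count = 1
    for x in point:
        lo = math.ceil(x - eps - 1)  # smallest intersected cell index
        hi = math.floor(x + eps)     # largest intersected cell index
        count *= hi - lo + 1
    return count

# Worst case: a ball centered at a grid corner touches 2^d cells.
assert cells_touched([0.0, 0.0, 0.0], 0.25) == 2**3
# Generic interior point: a small ball stays inside a single cell.
assert cells_touched([0.5, 0.5, 0.5], 0.25) == 1
```

The prior constructions mentioned below achieve $k = d+1$ instead of $2^d$, at the cost of a much smaller tolerance $\varepsilon = \frac{1}{2d}$.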
Prior works established the existence of $(d+1,\frac{1}{2d})$-secluded partitions. The authors roughly show for any $\varepsilon$ one can construct a $((1+2\varepsilon)^d,\varepsilon)$-secluded partition and that this is tight.
The authors also establish a related "neighborhood" variant of the KKM/Sperner lemma, stating that any coloring of $[0,1]^d$ where no color touches opposite faces must have a point $p$ such that the $\ell_\infty$ $\varepsilon$-ball around $p$ touches $(1+O(\varepsilon))^d$ many colors. This is in some sense a "quantified/robust" version of KKM, which states there is a point where every $\varepsilon$-ball sees at least $d+1$ colors, though it is actually incomparable.
Strengths: Algorithmic stability is a natural and recently fruitful topic in learning theory, closely related to core notions including differential privacy. Understanding the "cost" trade-off between samples and strength of stability (in this case, list size) is a very natural problem.
The coin problem is a fundamental learning problem that has also shown up as a subroutine in analysis of replicable tasks, e.g. reinforcement learning in ("Replicability in Reinforcement Learning") and hypothesis testing in ("Replicability in High Dimensional Statistics"). The authors prove a new upper bound for this problem that interpolates from (essentially) no overhead at the "trivial" list size of $2^d$ to $d^2$ overhead at the optimal list size $d+1$. The authors show this is tight assuming the use of secluded partitions.
The problem of secluded partitions itself, while formally a purely geometric result, is a natural problem and does have a similar flavor to recent techniques used for lower bounds in algorithmic stability. I would not be surprised if the authors’ new neighborhood lemma sees future use in learning.
Finally, the paper (or at least the introduction) is very well-written, with good proof intuition clearly laid out in the introduction and a good overview of prior literature.
Weaknesses: This work’s only substantial weakness lies in its scope for NeurIPS: while the geometric results proved are indeed fundamental, the resulting implications within replicable learning are, at the moment, a bit lack-luster.
In particular, while the upper bound result itself mentioned above is nice, it is fairly close to being a corollary of the previous work “List and Certificate Complexities in Replicable Learning". All that is needed over the original work is the new secluded partition construction, which is just a product of the construction used in the original paper. Based on the definition itself (namely working in $\ell_\infty$), it is elementary to see such a construction will work, and while not formally stated in the original paper (which just focused on achieving $d+1$ list complexity), it was already pretty clear this was possible.
Thus in terms of novelty, the core contribution of this paper is really the *lower bound* on secluded partitions. This is a beautiful result geometrically, but because we do not know secluded partitions are necessary for list-replicable learning, it does not actually give a lower bound on the coin problem, nor do the authors yet have another use for the bound or related neighborhood KKM Lemma. I believe these notions will indeed prove useful, which is why I am still recommending this paper be accepted, but at the moment the learning results are a bit too weak for a stronger recommendation.
Technical Quality: 4
Clarity: 4
Questions for Authors: N/A
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have appropriately discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Weaknesses Comment:
The concern regarding the scope for NeurIPS is addressed in the General Response as well as in the response to Reviewer W1GL. We agree that the main novelty and technicality lie in establishing the lower bound. However, the upper bound is critical to completing the picture. Thank you for the optimistic remark that these notions have the potential to be useful in the future.
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: We acknowledge the authors' rebuttal. We agree with the authors that the material arose in the context of replicability, has some implications therein, and may be of use in algorithmic stability in the future. I think the results are important, and will be of interest to a specialized subset of the NeurIPS community (including, e.g., myself). Nevertheless, my general opinion as stated in the review stands: the paper should be accepted, but is held back from a stronger recommendation by its scope with respect to learning.
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing the rebuttal and your positive response. | Summary: This paper studies connections between (i) list-replicability, a well-studied relaxation of the standard replicable learning framework, and (ii) the design of partitions of the Euclidean space, in particular $(k,\epsilon)$-secluded partitions. A partition $P$ will be called $(k, \epsilon)$-secluded if for any $d$-dimensional vector $x$, a box centered at $x$ with side-length $2\epsilon$ intersects with at most $k$ members of $P$. Crucially, the list size of a list-replicable learner is closely related to $k$.
The main results of the paper are near-optimal trade-offs between $k$ and $\epsilon$ for secluded partitions. These imply some limitations on the design of list-replicable algorithms based on such partitions of the space.
Strengths: Since there is a $(k, \epsilon) = (d+1, 1/(2d))$-secluded partition of the $d$-dimensional Euclidean space, one natural question is whether one can improve on $k$ or $\epsilon$. For $k$, we knew that this is tight. The main result of the paper states that essentially the dependence on $\epsilon$ is also tight: for any $k \leq 2^d$, it must be the case that $\epsilon \in \tilde{O}(1/d)$. Hence, even by relaxing the number of intersections to be $k = \mathrm{poly}(d),$ one cannot get $\epsilon = \omega(1/d)$. I find this result interesting since it raises new questions on how to design improved list-replicable learners.
The second result of the paper is a construction of a $(k, \epsilon)$-secluded partition for (roughly) any $k \in [2^d]$. Each choice of $k$ gives a related value for $\epsilon$.
I find the connections raised in this paper quite nice. I feel that the geometric insight is very useful and could inspire future work in the area of replicable algorithms. The results of the paper are clear and I enjoyed reading the paper. Finally, I think that the paper is technically interesting and the arguments are nice.
Weaknesses: Some suggestions in terms of presentation:
1) Between line 91-95, I would add some more discussion about Sperner's lemma and the terminology used.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) Do you have any intuition on what would happen if we replaced the $L_\infty$ norm with some other norm in terms of list-replicability?
2) There is a series of works that establish connections between replicability and approximate DP. Is there some similar connection to list-replicability?
3) Returning to the geometric connections to list replicability, is there some similar geometric intuition for DP?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Suggestion:
Thank you for the suggestion. We will add the discussion along the lines you propose.
Response to Questions:
Q1: The unit ball in the $\ell_p$ norm is a subset of the unit ball in the $\ell_\infty$ norm. Thus secluded partitions with respect to the $\ell_\infty$ norm are also secluded with respect to the $\ell_p$ norm. Based on this, we can get a $(d+1)$-list-replicable algorithm for the $d$-biased coin problem (where the approximation is in the $\ell_p$ norm), using the known relation between secluded partitions and list-replicable algorithms. In the present work, we also establish a generic version of Theorem 3.1 that applies to other norms. This result appears in the appendix as Theorem A.8. From this theorem, for example for the $\ell_2$ norm, we obtain that $k > (1+\varepsilon\sqrt{2\pi e/d})^d$.
Q2 and Q3: We know from the work reported in [11] that the global stability parameter is inverse to the list size. Prior work established the relationship between DP and stability [9]. In particular, from these works it follows that, for a concept class $\mathcal{C}$, a learning algorithm with polynomial list size and polynomially many samples implies that the concept class is DP-learnable with polynomial sample complexity. Admittedly, we are not experts on DP, and our intuition there is limited.
We thank the reviewer for very encouraging remarks about the usefulness of our work in the future. | Summary: This work studies secluded partitions, which appear in the context of list-replicable learning (among other geometric/TCS applications). The complement known bounds on list complexity, the authors present new upper-bounds on the tolerance parameter as a function of the list complexity k and the ambient dimension d. They show a construction of a secluded partition that roughly achieves these bounds, showing the optimality of their bounds. The secluded partition results are then applied to give a "neighborhood" version of Sperner's lemma, for $\ell_\infty$ neighborhoods.
Strengths: This work provides novel results with potentially broad applications in geometry and computer science. Sperner's lemma is widely useful across disciplines, and this neighborhood variant may prove similarly broadly useful.
Weaknesses: As written, I don’t think this work is a good fit for NeurIPS. This work studies secluded partitions, which have connections to list-replicable algorithms for, e.g., the d-coin bias estimation problem as mentioned by the authors. In this sense the results have applications to replicable learning, but the actual application of the secluded partition results gives somewhat marginal replicable learning results. Theorem 3.1 essentially shows that one cannot meaningfully trade off list complexity for better secluded partition tolerance, and therefore cannot hope to improve sample complexity for the d-coin bias problem via the secluded partition approach, but says nothing of other approaches or other problems.
If I try to read this as a learning paper where the focus is improving sample complexity of list-replicable d-coin bias estimation algorithms, it seems like there should be more attention paid in related work to replicable (in the sense of ILPS22) algorithms for the d-coin bias estimation problem as well, as this has been studied in [1], [2], and recently (posted after NeurIPS submission deadline) [3].
[1] “Stability is Stable” Bun, Gaboardi, Hopkins, Impagliazzo, Lei, Pitassi, Sivakumar, Sorrell
[2] “Replicability in Reinforcement Learning” Karbasi, Velegkas, Yang, Zhou ’23
[3] “Replicability in High Dimensional Statistics” Hopkins, Impagliazzo, Kane, Liu, Ye ‘24
Typos/suggested edits:
Abstract
“We show that for each d” -> “we show for each d”
Section 1 pg 2
Brower -> Brouwer
should be seen as a fundamental endeavor. .
Section 2 page 2
“belongs to a list L consisting of almost k good hypotheses”
“over which a family of distribution” -> “over which a family of distributions”
page 3
“the partitions considered in this work have a non-zero” -> “the partitions considered in this work have non-zero
Theorem 2.4 could be written a little more clearly. It would be good to define the mapping implicit in the term “coloring” and clarify that opposite faces mean faces of the hypercube
Section 3
“in general a $(k, \varepsilon)$-secluded partitions”
“Can we improve this and construct a (d+1, \omega(1/d)-secluded partition?”
“is the following result upper bound result”
“Till this work we know”
“There exist a k-list replicable algorithm”
“Spener/KKM Lemma”
Section 4
“A learning algorithm A to be $(n, \eta)$-globally stable”
Section 5.1
Need a period before Thus/ what we do is to “replace”
I found the proof sketch for Theorem 3.1 had a few seemingly unnecessary detours that were a bit distracting. For instance, deriving the lower bound of 1 before deriving the desired bound. The footnote could be more precise (what does “becomes the wrong inequality” mean?)
Need a period before “Now that we have dealt with both issues” on page 6.
“So because there is a ceiling involved” could be made more precise, as could “by our change of perspective.”
Section 5.2
There’s a $d_n$ that should be $d_i$ in Definition 5.2
Section 6
“We also constructed secluded partitions for a wide all $k$”
The second paragraph of Section 6 is very vague, and doesn’t mention that the similar upper bounds are for general lp norms.
Technical Quality: 3
Clarity: 2
Questions for Authors: What direct implications do these results have for list-replicable learning?
How do these implications compare to what is known for $\rho$-replicable learning for similar problems?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes, the authors have addressed all limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Weaknesses Comment:
This weakness regarding the scope is addressed in the general response, and we reiterate it here. Our work is motivated by the connection of geometric/topological tools to list replicability (secluded partitions and Sperner/KKM Lemmas). Driven by this connection, the current work undertakes an in-depth investigation of secluded partitions, a tool used to establish list replicability results. We believe that understanding the power and limitations of tools employed in replicability research is a significant contribution to the field and falls within the scope of NeurIPS.
Our investigation into secluded partitions not only led us to a comprehensive understanding of secluded partitions but also to a new neighborhood version of the Sperner/KKM lemma. The Sperner/KKM lemma and its variants (such as Poincaré-Miranda) have proven to be critical tools in replicable learning and have applications in broader areas of computer science. We believe that this new neighborhood version of the Sperner/KKM lemma will have significant applications in learning. Other reviewers (D2ky, 1env) have expressed similar sentiments.
Regarding related works, thank you for the suggested references. We will make the related work section more comprehensive in the final version, taking into consideration all the references, including the ones that appeared after the NeurIPS deadline.
Thank you for carefully reading and pointing out the typos and suggestions for improvements. We will fix the typos and will incorporate your suggestions in the final version.
Response to Questions:
Q1: Our secluded partition construction (Theorem 3.2) leads to Corollary 3.3, which gives a tradeoff between list size and sample complexity. For example, if we allow the list size to be $2^{\sqrt{d}}$, then we can get a sample complexity of $\tilde O(d/\nu^2)$ per coin, which is a new result. Theorem 3.1 shows the limitations of secluded partitions as a tool.
Q2: It is possible to go from list-replicable algorithms to $\rho$-replicable algorithms (using ideas from Goldreich [25]). In $\rho$-replicability, one is allowed to choose a random string. Once a random string is fixed, the rest of the computation is perfectly replicable (i.e., with list size 1). Goldreich’s construction implies that an $\ell$-list replicable algorithm leads to a $\rho$-replicable algorithm where the length of the random string is of the order of $O(\log \ell)$ (with an additional dependency on $\rho^2$). However, such a transformation leads to a sample-complexity blow-up. This was already observed in [16].
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for addressing my concerns regarding fit and for committing to expanding the related work section. I still believe that this paper would be better-appreciated in another venue, but given that I enjoyed the work and the overall positive scores of other reviewers, it seems like I might be wrong and there will be enough of an interested audience at NeurIPS that I will revise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing the rebuttal, for your response, and for your positive view of the work. | Summary: In this work the authors study various connections between geometric constructions in $R^d$, in particular various $(k,\varepsilon)$-secluded partitions, and their applications to list-replicable learnability. This connection was originally observed in prior work, and the authors provide stronger quantitative results. More specifically, their first main result shows an upper bound on the $\varepsilon$ for every (non-trivial) choice of the parameter $k$ of the secluded partition. This implies a lower bound on the number of samples needed to design a $k$-list-replicable algorithm for the problem of coin estimation through secluded partitions, and can be interpreted as a negative result: if $k = poly(d)$, then one needs $d^2$ samples per coin to design a $k$-list-replicable algorithm (through secluded partitions). This improves upon prior work, which shows an upper bound on $\varepsilon$ only for $k = d+1$.
Their next main result provides a construction of $(k,\varepsilon)$-secluded partitions for all (non-trivial) values of $k$ that is optimal w.r.t. $\varepsilon$ up to polylogarithmic factors. Due to the previous connection, this implies the existence of $k$-list-replicable algorithms whose sample complexity is determined by $\varepsilon$.
Lastly, the authors use the techniques they have developed to prove a "neighborhood" version of the cubical Sperner's/KKM lemma.
Strengths: -The constructions that the authors present are mathematically elegant.
-The paper is written clearly and is easy to follow.
-Their applications to list-replicable learning, which has gained some attention recently, are spelled out in the paper.
-Some of the results, like the extension of Sperner's lemma, can have further applications.
Weaknesses: I am a bit worried that the results are a bit incremental. In particular, the connection between secluded partitions and list-replicability was already observed in prior work. Moreover, I view the result regarding secluded partitions as a negative result, in the sense that in order to get reasonable list complexity through secluded partitions, one needs a significant blow-up in the sample complexity, and this cannot be improved using this approach. Had the lower bound on the sample complexity been against all algorithms (or at least a broader class) I would have found the result more interesting. Similarly, the extension of cubical Sperner's lemma to its neighborhood variant is interesting on its own, but if the main focus of the paper is list-replicability, I am not too sure how much it adds to the story.
Some minor points/typos:
-Line 53: double full-stop.
-Line 56: "the replicable algorithm" -> replicable algorithms.
-Theorem 2.4: it would be better for a broader audience to define face/closure.
-Line 109: the degree parameter -> it.
-Line 339: know -> known.
-In many places: lowerbound -> lower bound
-Line 156: global stability.
-Line 172: extend -> extended.
-Line 112: a ) is missing.
-Line 114: drop result.
Technical Quality: 4
Clarity: 3
Questions for Authors: -Do you have a conjecture about the lower bound on the sample complexity of $k$-list replicable learning for algorithms that are not based on secluded partitions?
-Can you explain a bit more about the adaptation of Theorem 5.1 you talk about in Line 231?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to the Weaknesses Comment:
We agree with the reviewer that the connection between secluded partitions and list-replicability was already observed. Indeed, that is our main motivation to further investigate the possibility of constructing secluded partitions better than those previously known. However, our result is negative and shows that the known constructions cannot be improved by much. We believe negative results are a significant contribution in the sense that they guide research directions. Moreover, the techniques used to establish the negative result led to the discovery of the neighborhood Sperner/KKM lemma. We disagree with the reviewer that the results are incremental, because our results provide a comprehensive picture of secluded partitions (both constructions and impossibility results), and establishing the lower bounds requires quite different techniques (from measure theory and geometry) than those used in the literature.
Thank you for pointing out the typos. We will fix them in the final version.
Response to Questions:
Answer to Question:
Q1. We conjecture that any $(d+1)$-list replicable algorithm for the $d$-bias coin problem requires $\Omega(d^2/\nu^2)$ samples per coin.
Q2. Adaptation of Theorem 5.1: This is only a slight technicality: our lower bound theorem works for partitions whose members have bounded outer measure (they need not be measurable). However, for Theorem 5.1 to be applied, we need the sets $A, B$, and $A+B$ to be measurable. So an adaptation of Theorem 5.1 is needed and is given as Lemma A.6, which allows us to work with outer measures.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their thorough response. Just a clarification on my comment; I didn't imply that the technical contribution of the lower bound is incremental, I do think it's an elegant construction. I meant to say that the result on its own is a bit incremental, since it implies a hardness only for a particular technique/approach for constructing list-replicable algorithms. I also agree that the new variant of the Sperner/KKM lemma is interesting, it just feels a bit disconnected from the main message/story of the rest of the paper.
I agree with some of the points the other reviewers have mentioned, but I am still on the positive side about the paper.
---
Rebuttal 2:
Comment: Thank you for reviewing the rebuttal and your positive response. | Rebuttal 1:
Rebuttal: General Response:
We sincerely thank all the reviewers for their careful reading and thoughtful comments and suggestions.
A common concern raised by the reviewers is the scope and fit of the present work for NeurIPS, which we address here. Our work is motivated by the fundamental connections between geometry/topology and the fast-emerging topic of replicable learning. This connection continues to be revealed. For instance, the works reported in [29, 16] use geometric partitions to design replicable algorithms, while those in [16,12,11] utilize Sperner-KKM-like fixed point theorems from geometry/topology to establish lower bounds (references as appear in the submission). Therefore, it is natural and fundamental to further investigate geometric partitions and Sperner/KKM-type theorems in relation to replicability. We believe that undertaking such investigations is a significant endeavor and should be of interest to the NeurIPS community.
Our work is a comprehensive study of geometric partitions, specifically secluded partitions, which have direct applications to list replicability [16]. Additionally, the work establishes a neighborhood variant of the Sperner/KKM lemma. As the majority of reviewers have pointed out, this neighborhood version is a fundamental contribution with strong potential for applications in replicability research.
Thus, while our motivation to investigate the partition problem is inspired by replicability, our study focuses on geometric partitions and Sperner/KKM-type results. We believe such a study is within the scope of the conference. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Beyond task diversity: provable representation transfer for sequential multitask linear bandits | Accept (poster) | Summary: This paper extends the existing work on multi-task linear bandits where the task parameters lie in a low-rank subspace. Specifically, this paper assumes $N$ $d$-dimensional linear bandits whose parameters lie in an $m$-dimensional subspace. For such a setting, classical approaches yield a regret linear in $N$ and $d$. This paper then proposes a new algorithm that improves the regret under mild assumptions of well-conditioned ellipsoid action sets and bounded task parameters. The algorithm is based on a reduction to a bandit online subspace selection problem. The key idea of the proposed approach is bi-level. At the lower level, the algorithm runs either a meta-exploration or a meta-exploitation algorithm for each task. At the upper level, the learner either chooses exploration/exploitation or chooses a subspace to use for exploitation. The effectiveness of the proposed algorithm is verified empirically in synthetic adversarial settings.
Strengths: **Significance**: This paper proposed a new algorithm for multi-task linear bandits with provably low regret without making strong assumptions on task parameters. The sequential setting studied is more challenging than parallel settings due to the slow revelation of the underlying subspace. Compared with a few related existing works, this paper does not need assumptions such as task diversity.
**Quality**: This paper is in general of good quality. The algorithm is well described.
The theoretical part of the work is clearly formulated. Though the experiments are synthetic, the results still verify the effectiveness of the proposed algorithm.
**Clarity**: The design of the algorithm and the intuition behind the design are clearly explained.
Weaknesses: There is no major weakness in the paper. I do have some questions regarding the following:
**1**. This paper assumes knowledge of the parameters corresponding to the task number, the dimension, and the number of rounds, which is not ideal.
**2**. [Originality] The algorithmic design and the guarantee are closely related to a few works, including PEGE by [Rusmevichientong and Tsitsiklis 2010] (related to Lemma 3 in this paper), the analysis in Yang et al. [2020] (related to Lemma 4), and the EWA algorithm (related to Lemma 5). Therefore, the technical novelty of the proposed algorithm and analysis is not very obvious to me.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Also, I am curious about the following:
1. This paper assumes knowledge of a few hyper-parameters. Among these, I wonder whether the assumption on prior knowledge of the $\tau$ parameter can be relaxed with a certain adaptive design. Could the authors elaborate on why this knowledge is necessary for the current algorithm to work?
2. Is there a corollary or simple extension that can readily extend the current result to the parallel setting?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Originality: The algorithmic design and the guarantee are closely related to a few works, including PEGE by [Rusmevichientong and Tsitsiklis 2010] (related to Lemma 3 in this paper), the analysis in Yang et al. [2020] (related to Lemma 4), and the EWA algorithm (related to Lemma 5). Therefore, the technical novelty of the proposed algorithm and analysis is not very obvious to me.
$=>$ As discussed in the Introduction and Tables 1 and 2, our work fills the gap of sequential representation transfer for linear bandits without the Task Diversity assumption. Before our work, no algorithms could provably beat the naive individual single-task baseline. The absence of the task diversity assumption makes meta-learning the task representation matrix $B$ nontrivial.
Our proposal designs a bi-level approach for this problem and shows that, under mild assumptions, we can efficiently learn the underlying subspace in an online sense and maintain the correct uncertainty over possible experts when the environment acts adversarially (i.e. not revealing the full underlying subspace till the end). We believe our reduction to online subspace selection is new to sequential representation transfer and may benefit other meta-learning applications such as in RL or supervised learning.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their efforts in the rebuttal. I will take all the rebuttals into consideration during the discussion period between reviewers. | Summary: The paper studies lifelong learning in linear bandits and designs a two-level algorithm with provably low regret without task diversity assumptions. The paper assumes the tasks share a low-rank representation and provides a regret upper bound.
Strengths: The paper uses a two-level approach to solve this problem, which is novel. A theoretical guarantee is given with numerical experiments performed.
Weaknesses: There are some statements in the theorems that seem wrong, and the upper bound provided does not seem tight. See Questions for details.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1) In line 109, for a matrix $U$, isn't $U_{\perp}$ not unique? This should be defined as a set or stated as one of the matrices.
2) The Assumption 2 is stated as linear bandits. I didn't get the reason for this title.
3) In line 3 in Algorithm 1, it should be $\mu=\frac{\tau_1}{d}$
4) The statement in Lemma 4 seems problematic. As stated in 1), the matrix $\hat B_{n,\perp}$ is not unique, hence the RHS is a matrix-dependent bound. Thus, the condition $||\hat B_{n,\perp} \theta_{n}||_2\leq 2\alpha$ is also dependent on the choice of the matrix. This discussion is missing in the paper. This will also affect the definitions in (2) and (3), and the following results.
5) In line 173, the definition of the expert set $\mathcal{E}^{\epsilon}$ is in the Appendix, which makes the paper not self-contained.
6) What does the EWA algorithm do in Algorithm 3? Which part of the results is directly from the EWA algorithm?
7) Overall, the paper provides a regret bound of order $\tilde{\mathcal{O}} (N m\sqrt{\tau} + N^{\frac{2}{3}} \tau^{\frac{2}{3}} d m^{\frac{1}{3}} + \tau m d)$ in equation (5). Compared to the baseline, which is $\tilde{\mathcal{O}} (N d \sqrt{\tau})$, I didn't see a clear improvement in the regret. Firstly, the regret scales linearly with $\tau$ for fixed $N$, which is known to be suboptimal in linear bandits. At least some truncation can be done in the first $md$ tasks to get rid of the linear term. This term cannot be hidden from the abstract from my point of view. Secondly, to achieve similar upper bounds, one needs $N\geq md$ and $d^2\leq\tau\leq (\frac{N}{m})^2$, which means the algorithm can only learn for a finite time; a tightness discussion is missing, and the paragraph "Comparison with lower bounds" offers only a conjecture.
8) The authors answer Yes to the question about open access to data and code in the checklist, but the code is not released with the submission. Also, I am curious how Figure 1B is generated.
9) The labels on the y-axis in the figures need to be fixed.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The regret upper bound has a huge gap to the lower bound.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - ``In line 109, for a matrix $U$ , isn’t $U_{\perp}$ not unique? This should be defined as a set or stated as one of the matrices.''
**We thank the reviewer for pointing out this impreciseness.** Indeed, the choice of $U_\perp$ is not unique according to our writing in the submitted version.
To make the definition well-defined, it suffices to fix any orthonormal basis $U_\perp$; one may break ties in dictionary order. We will clarify this in the final version of the paper.
Indeed, the specific choice of $U_\perp$ does not affect the correctness of our proof -- e.g., $\| \hat{B}_{n, \perp}^{\top} \theta_n \|_2$ is always the same regardless of the specific choice of $\hat{B}_{n, \perp}$, as long as its columns form an orthonormal basis of $\text{span}(\hat{B}_n)^\perp$: for any two choices of $\hat{B}_{n, \perp}$ (denoted by $\hat{B}_{\perp, 1}$ and $\hat{B}_{\perp, 2}$, respectively), there exists an orthogonal matrix $V \in \mathbb{R}^{(d-m) \times (d-m)}$ such that $\hat{B}_{\perp, 1} = \hat{B}_{\perp, 2} V$. Therefore, $\|\hat{B}_{\perp, 1}^{\top} \theta_n\|_2 = \|V^{\top}\hat{B}_{\perp, 2}^{\top} \theta_n\|_2 = \|\hat{B}_{\perp, 2}^{\top} \theta_n\|_2$.
To show that all valid choices of $\hat{B}_{n, \perp}$ are equivalent up to a $(d-m) \times (d-m)$ orthogonal transformation, we have:
Let $W$ be a $k$-dimensional subspace of $\mathbb{R}^d$. Let $B, \hat{B} \in \mathbb{R}^{d \times k}$ be matrices whose columns form an orthonormal basis of $W$. Then, there exists an orthogonal matrix $V \in \mathbb{R}^{k \times k}$ such that $\hat{B} = B V$.
**Proof:** Since the columns of $B$ form a basis of $W$, there exists some $V$ such that $\hat{B} = BV$. Since $V^\top V = V^\top (B^\top B) V = (BV)^\top (BV) = \hat{B}^\top \hat{B} = I$, $V$ is an orthogonal matrix.
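As an aside, this invariance is easy to check numerically. The following sketch (not from the paper; the dimensions and random seed are arbitrary) builds two orthonormal bases of the same subspace via QR and verifies that the projection norms of a vector agree:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 2  # ambient dimension and complement-subspace dimension (arbitrary)

# One orthonormal basis of a random k-dimensional subspace, via QR factorization.
B1, _ = np.linalg.qr(rng.standard_normal((d, k)))

# A second basis of the SAME subspace: rotate by a random k x k orthogonal V.
V, _ = np.linalg.qr(rng.standard_normal((k, k)))
B2 = B1 @ V

theta = rng.standard_normal(d)

# The projection norms agree, as the lemma asserts.
n1 = np.linalg.norm(B1.T @ theta)
n2 = np.linalg.norm(B2.T @ theta)
assert np.isclose(n1, n2)
```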
- ``The statement in Lemma 4 seems problematic. As stated in 1), the matrix $\hat{B}_{n,\perp}$ is not unique, hence the RHS is a matrix-dependent bound.
Thus, the condition $||\hat{B}_{n,\perp} \theta_n||_2 \leq 2 \alpha$ is also dependent on the choice of the matrix. This discussion is missing in the paper. This will also affect the definitions in (2) and (3), and the following results.''
$=>$ As we mention in our previous item, the value of $|| \hat{B}_{n, \perp}^\top \theta_n ||$ is invariant to the specific choice of matrix
$\hat{B}_{n, \perp}$. Still, we agree that this can be made clearer in the main paper, so we will make the necessary edits in the final version.
- ``What does the EWA algorithm do in Algorithm 3? Which part of the results is directly from the EWA algorithm?''
$=>$ The algorithm uses the Exponentially Weighted Average (EWA) algorithm in line 8 of Algorithm 3. The intuition behind using EWA is to choose subspaces $\hat{B}_n$ online such that their linear spans capture the $\theta_n$'s over tasks $n=1,..., N$. The idea is to use the feedback from the meta-exploration tasks to update the weights of all possible experts from the expert set. Since this expert set $\alpha$-covers the true $B$, the EWA guarantee ensures that BOSS efficiently learns the expert closest to the true $B$ while maintaining the correct uncertainty over all possible experts when the subspace dimensions are not fully revealed, thus neatly dealing with the non-Task-Diversity setting.
Specifically speaking, the EWA's guarantee is used in the proof of Lemma 5, line 455. We will provide a full description of the algorithm with weight updates in the Appendix in the final version.
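For readers unfamiliar with EWA, here is a generic sketch of the weight update (illustrative only, not the authors' implementation; the three toy experts and their fixed losses stand in for the subspace candidates in $\mathcal{E}^\epsilon$ and the meta-exploration feedback):

```python
import numpy as np

def ewa_update(weights, losses, eta):
    """One EWA step: multiply each expert's weight by exp(-eta * loss), then renormalize."""
    w = weights * np.exp(-eta * np.asarray(losses, dtype=float))
    return w / w.sum()

# Toy run with 3 experts; expert index 1 consistently incurs the smallest loss,
# so the weight vector concentrates on it over rounds.
w = np.ones(3) / 3
for _ in range(50):
    w = ewa_update(w, losses=[0.9, 0.1, 0.8], eta=0.5)
best = int(np.argmax(w))  # -> 1
```

In BOSS the losses would come from bandit feedback on meta-exploration tasks rather than being fixed, but the multiplicative update and its regret guarantee are the same.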
- ``Overall, the paper provides the regret bounds of order $\tilde{O} (N m\sqrt{\tau} + N^{2/3} \tau^{2/3} dm^{1/3} + \tau md)$ in equation (5). Compared to the baseline, which is $\tilde{O}(N d\sqrt{\tau} )$, I didn't see a clear improvement in the regret. ... the regret scales linearly with $\tau$ when fixing $N$, which is known to be suboptimal in linear bandits. At least some truncation can be done in the first $md$ tasks to get rid of the linear term. This term cannot be hidden from the abstract from my point of view''
**We agree that the $\tau m d$ term cannot be ignored and will add it to the abstract.** Note that we are studying the multi-task problem; thus, the interesting regime is when the number of tasks $N$ is large enough to allow useful transfer learning across tasks. The $\tau md$ part of the regret can be understood as the learner ``sacrificing'' a small number of tasks to learn transferable knowledge. This last term is of lower order than $N^{2/3} \tau^{2/3} d m^{1/3}$ when $\tau \ll (\frac{N}{m})^2$.
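For concreteness, the stated condition follows from one line of algebra: divide both sides of the comparison by $\tau^{2/3} d m^{1/3}$ and cube.

```latex
\tau m d \ll N^{2/3} \tau^{2/3} d m^{1/3}
\iff \tau^{1/3} m^{2/3} \ll N^{2/3}
\iff \tau m^{2} \ll N^{2}
\iff \tau \ll \left(\tfrac{N}{m}\right)^{2}.
```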
- ``The definition of the expert set $\mathcal{E}^{\varepsilon}$ is in the Appendix, which makes the paper not self-contained''
**We agree.** At first, we moved these definitions to the appendix to not distract the reader. We realized that they may be required for the main paper to be self-contained, so we will make the necessary edits.
- ``... the code is not released during the submission. And I am curious how Figure 1B is generated.''
**Correct**. We want to add some additional experiment results. The code is in https://anonymous.4open.science/r/Serena-C5F1.
For Fig 1b, since we use a simulated environment, we have access to $B_n$, the subspace spanned by $\left ( \theta_i \right )_{i=1}^n$ thus far, and the estimated subspace $\hat{B}_n$ of the learners for each task. Thus, we plot this figure using this information.
- ``The Assumption 2 is stated as linear bandits. I didn’t get the reason for this title.''
$=>$ We will change it to ``linear bandits with ellipsoid action sets'' in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying some of the points I over-interpreted. I would like to raise the score to 5 but reduce the confidence to 3. The main weakness in my opinion is still the linear term in $\tau$. Consider any real-world task (such as search systems or recommender systems like movie recommendation): there will be a large volume of queries and $\tau$ will be arbitrarily larger than poly$(N)$. | Summary: The paper proposes a strategy for the multi-task linear bandit problem, where the tasks are assumed to share a low-rank representation, which is to be learnt via a new meta-learning strategy in which meta-exploration and exploitation need to be balanced. The low-rank representation is learnt via optimizing a newly constructed loss function over an epsilon-net. The tasks appear sequentially to the learner, and the authors explicitly do not make any assumption on the task diversity of their setting.
The authors provide a meta regret bound for their algorithms and synthetic data results.
Strengths: The paper is well written and easy to follow
Original idea for solving the multi-task/meta learning setting with low rank representation
The assumption on the action set is weaker than in comparable works (constant action set vs iid action set)
Weaknesses: The EWA algorithm for the meta-learning procedure should not just be referenced but should actually appear in the paper (or at least in the supplementary material)
The presentation of the actual meta-learning could be made clearer. For that matter, Definitions 7 and 8 should be moved to the main paper so it becomes clearer what the uniform distributions $D_n$ are supposed to be.
The experiments would benefit from another baseline to compare to (Cella et al. or Bilaj et al., as they appear in the related work section)
The plot labels of Figure 1 b) and c) should use LaTeX-style formatting
Technical Quality: 3
Clarity: 2
Questions for Authors: In Figure 1a) it looks like the BOSS algorithm performs worse after a new dimension for the subspace is incremented at $n=2501$; does that mean that the dimension of the subspace was falsely estimated?
At $n=1$ the low rank should be estimated as $m=1$ as well, which makes it hard to minimize $B^T_{n+1,\perp}\theta_{n+1}$ (since the $(n+1)$th task parameter might be outside the subspace selected for the first $n$ tasks). Intuitively, the algorithm should work best if at least $m$ task parameters are already estimated without bias for an $m$-dimensional low-rank structure. Could you explain how you mitigate this issue?
Shouldn't a bound on the term $||B_{\perp}-\hat{B}_{n+1,\perp}||$, or a term that showcases the estimation error of the representation, appear in the final regret?
Does $p$ need to be constant? After enough tasks are explored, $p$ could gradually vanish. I see in the final bound that $p \sim 1/\sqrt[3]{N^2}$, which is still constant.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Aside from the assumptions on their setting made, I did not see any limitations explicitly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - ``Shouldn’t a bound on the term $||B_{\perp} -\hat{B}_{n+1,\perp}||$ or a term that showcases the estimation error on the representation, appear in the final regret?''
**Yes.** Indeed, our regret bound has an implicit dependence on $|| \hat{B}_{n, \perp}^T \theta_n ||$ - see Eq. (2).
This is a key quantity like the $||B_{\perp} -\hat{B}_{n+1,\perp}||$ you mentioned.
- ``At n=1 the low rank should be estimated as m=1 as well, which makes it hard to minimize $||B^T_{n+1,\perp} \theta_{n+1}||$ (since the (n+1)th task parameter might be outside the subspace selected for the first n tasks). Intuitively, the algorithm should work best if at least m task parameters are already estimated without bias for an m-dimensional low-rank structure. Could you explain how you mitigate this issue?''
$=>$ If we understand correctly, the reviewer is pointing out that the algorithm needs to overcome additional challenges without the task diversity assumption. Without task diversity, we cannot hope to obtain estimates $\hat{B}_n$ that converge to the underlying $B$. Instead, we use online learning with randomized meta-exploration (using the learned $\hat{\theta}_n$'s to estimate the underlying representation $B$) to ensure that our learned $\hat{B}_n$'s can capture the $\theta_n$'s for \emph{most} tasks $n$, in an average sense.
- ``In Figure 1a) it looks like the Boss algorithm performs worse after a new dimension for the subspace is incremented at n= 2501, does that mean that the dimension of the subspace was falsely estimated?''
$=>$ Our estimator $\hat{B}_n$ is always in $\mathbb{R}^{d \times m}$, so the subspace dimension estimate is always $m$. If the reviewer meant ``the newly revealed direction of the subspace was falsely estimated'', we agree.
BOSS uses the estimated $\hat{B}$ to efficiently estimate $\hat{\theta}_n$. When a new dimension is revealed at $n=2501$, it takes a while for BOSS to obtain an accurate estimate $\hat{B}_n$ of $B_n$, the subspace spanned by all $\{\theta_i: i=1,..,n\}$ shown so far. Figs. 1b and 1c show that if the expert set covers the underlying $B$ (green plot), BOSS can adapt and learn very fast.
- ``The presentation of the actual meta learning could be made clearer. The EWA algorithm for the meta learning procedure should not just be referenced but actually appear in the paper''
$=>$ Thank you for your suggestion. We will include the full EWA algorithm in the appendix in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed answers to my questions and concerns. I will take them into consideration for the reviewer discussion period. | Summary: This paper addresses the problem of representation transfer in a sequential multi-task linear bandit problem. The main objective in the paper is to remove the task diversity assumption made in Qin et al. (2022), which places a constraint on any subsequence of tasks observed. The paper proposes an algorithm that for each task performs exploration with a constant probability, and uses the exponentially weighted average algorithm to choose the exploration probability over a large set of arm vectors. The paper proves an upper bound on the cumulative regret without making the task diversity assumption and demonstrates the performance in an experiment.
Strengths: * The paper presents the results in an engaging and clear manner and the algorithms proposed are intuitive.
* Removal of the task diversity assumption appears to be significant advance.
Weaknesses: * The problem setting emphasizes the sequential nature of the tasks. Specifically, the algorithm needs to collect all samples from $\theta_1$ task before moving on to $\theta_2$, i.e., the tasks cannot be interleaved. It would be helpful to outline some situations where this aspect of the problem setup is important.
* The algorithm needs to know the value of $m, N, \tau$ to choose the exploration probability. The proposed algorithm performs better than the "individual task baseline" only under a specific parameter regimes (see line 242).
* The theoretically valid size of the set of experts is extremely large for reasonable parameter values (lines 408 and 284). Is that one of the reasons why BOSS only performs better when we have a very large number of tasks?
* There is a gap to the known lower bound, which is suggested to be due to the task diversity assumption not being met.
Technical Quality: 3
Clarity: 4
Questions for Authors: * If possible, it would be nice to demonstrate the impact of the task diversity assumption. Specifically, could we construct a simulated environment where the assumption is not met by a significantly large fraction of subsequences, and the performance of BOSS is better than SeqRepL Qin et al (2022) ? Maybe your experiment already demonstrates that but I missed it. Could you clarify?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - ``There is a gap to the known lower bound, which is suggested to be due to the task diversity assumption not being met.''
**Before our work, no regret bounds that are $o(N d \sqrt{\tau})$ were known for sequential representation-transfer multi-task linear bandits without the Task Diversity assumption.**
In addition, we don't know if our upper bound can be improved. Even with task diversity, the upper and lower bounds of Qin et al. do not exactly match (see Table 1)
- ``The theoretically valid size of the set of experts is extremely large for reasonable parameter values, is that one of the reasons why BOSS only performs better when we have very large number of tasks?''
**Yes.** $|\mathcal{E}^\epsilon|$ directly affects the $\frac{\tau \ln|\mathcal{E}^\epsilon|}{p} = \tilde{O}( \frac{\tau d m } p )$ term in the regret bound, and subsequently affects the $N^{2/3} \tau^{2/3} d m^{1/3}$ term. If we can make this term smaller, we can broaden the parameter regimes in which our regret bound is better than the $N d \sqrt{\tau}$ baseline. We also plan to develop a more computationally efficient method in future work.
The reason why we need large $N$ is explained in the global answer above.
- ``It would be nice to demonstrate the impact of the task diversity assumption''
$=>$ Our experiments in the submission are designed without the Task Diversity assumption, as shown in the caption of Fig 1 (a new dimension of the underlying subspace $B$ is revealed at tasks 1, 2500, and 3500). In the attached PDF file of the global answer, we also add extra experimental results highlighting how BOSS outperforms SeqRepL when the parameter norms are larger ($0.8 \leq \|\theta_n\|_2 \leq 1$). Furthermore, we add another set of experiments in which the Task Diversity assumption is satisfied, where SeqRepL outperforms BOSS as expected.
The first figure shows the experimental results without the Task Diversity assumption and the second with it. Unlike in the main paper, we ensure that $0.8 \leq \|\theta_n\|_2 \leq 1$, which highlights how badly SeqRepL performs when the Task Diversity assumption is violated. We also made some small changes to the hyper-parameters, as shown in the figures' captions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, I have increased my score.
---
Rebuttal 2:
Comment: As the author-reviewer discussion period is coming to a close, we kindly ask for your review of our response. This ensures that any additional questions or feedback you may have can be addressed before the discussion period ends. Thank you for your time! | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful feedback.
In this work, we introduce the first provable representation transfer algorithm for sequential linear bandits without relying on the task diversity assumption. Unlike in parallel settings or sequential settings that assume task diversity, the learner now has to address the $\textit{unique challenges}$ in meta-exploration, which involves carefully deciding when to acquire more information on the low-dimensional representation.
We are encouraged that the reviewers found our work clear (R1, R2, R4), intuitive (R1, R4), and novel (R2, R3).
We are also glad that the reviewers recognized the significance of removing the task diversity assumption (R1, R2, R4) in exchange for some mild assumptions (R2) within the more challenging sequential setting (R4).
In addition, our theoretical guarantee and experiment are also appreciated (R1, R3, R4). We address some common questions below and the specific questions in each reviewer's section.
- ``The proposed algorithm performs better than the `individual task baseline' only under specific parameter regimes (R1, R4), $N \geq md$ and $d^2 \leq \tau \leq (N/m)^2$, which means the algorithm can only learn for a finite time (R3). [Does] BOSS only perform better [than the individual single-task baseline] when we have a very large number of tasks? (R1)''
$ \textbf{We wish to point out that our work gives the first nontrivial result towards sequential multi-task linear bandit without task diversity assumption.}$
Before our work, no regret bounds better than individual single-task baseline (i.e. $o(N d \sqrt{\tau})$) were known for this setting.
Our main motivation for this paper was to show that sequential representation transfer is indeed possible without the task diversity assumption. Proving this opens the floodgates for developing more sample-efficient algorithms.
We can rephrase the parameter regime where our regret bound is better than the individual single-task baseline as:
$\tau \gg d^2$ and $N \gg m \sqrt{\tau}$. So, indeed, BOSS performs better when $N$ is large. As mentioned in the paper, we think that broadening the parameter regime for improvement is an important problem.
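For concreteness, assuming the dominant term in our bound is the $N^{2/3}\tau^{2/3} d m^{1/3}$ term discussed earlier (an assumption of this sketch), the comparison with the $N d \sqrt{\tau}$ baseline reduces to a one-line calculation:

```latex
% Sketch: when does the N^{2/3} tau^{2/3} d m^{1/3} term beat the baseline?
\[
N^{2/3}\tau^{2/3}\, d\, m^{1/3} \le N d \sqrt{\tau}
\;\Longleftrightarrow\;
\tau^{1/6}\, m^{1/3} \le N^{1/3}
\;\Longleftrightarrow\;
N \ge m\sqrt{\tau},
\]
```

which recovers the $N \gg m\sqrt{\tau}$ condition stated above; the $\tau \gg d^2$ condition then comes from the remaining terms of the bound.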
If we interpret R3's concern correctly, $\tau$ being capped at $(N/m)^2$ is a limitation of our paper. We agree -- this is due to the existence of the burn-in term $\tau m d$ (when $\tau > (N/m)^2$, this is greater than $N d \sqrt{\tau}$). Removing this burn-in term is an interesting open problem.
- ``The algorithm needs to know the value of $m, d, N, \tau$. Can this be avoided?'' (R1, R4)
$\textbf{Maybe}$. The requirement for knowledge of $m$ can be relaxed to knowing an upper bound on $m$. Removing this knowledge entirely requires a change of approach, such as low-rank matrix optimization, as in Cella et al. 2023, or additional assumptions, as in Bilaj et al. 2024. Cella et al. 2023 is in the parallel setting, which is not applicable here, and Bilaj et al. 2024's guarantee can be as large as $O(Nd\sqrt{\tau})$, as discussed in line 91.
The requirement for $\tau$ and $d$ is not too bad since the action space is given to us ahead of time, so $d$ is known, and $\tau$ can be known after finishing the first task.
Using an adaptive design to not be dependent on $N$ is likely to be possible with the doubling-trick. The details are in the next question.
- ``Can we use an adaptive design (R4)? Can $p$ gradually vanish (R2)?''
$\textbf{Yes}$. We think achieving adaptivity in $N$ can be done using a doubling trick.
Specifically, in phase $i$, run our algorithm with the assumption that there are $2^i$ total tasks in this phase. The modified algorithm has a meta-regret guarantee that is within a constant of the algorithm that knows $N$. This implicitly gives an adaptive setting of meta-exploration probability $p$ that is decaying over time.
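The standard doubling-trick calculation behind this claim can be sketched as follows, assuming (for this sketch) that the meta-regret of the algorithm that knows the horizon scales as $R(N) = C N^{\alpha}$ for some $\alpha \in (0,1]$:

```latex
% Phases i = 0, 1, ..., ceil(log2 N), with 2^i tasks in phase i.
% Summing the per-phase guarantees and bounding the geometric sum:
\[
\sum_{i=0}^{\lceil \log_2 N \rceil} R(2^i)
= C \sum_{i=0}^{\lceil \log_2 N \rceil} 2^{\alpha i}
\le C\, \frac{2^{\alpha(\lceil \log_2 N \rceil + 1)}}{2^{\alpha} - 1}
\le \frac{2^{2\alpha}}{2^{\alpha} - 1}\, C N^{\alpha}
= O\!\left(R(N)\right).
\]
```

So the adaptive algorithm pays only a constant factor over the algorithm that knows $N$, as stated above.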
- ``Can our results be translated to the interleaved (R1) or parallel (R4) setting?''
$=>$ Our algorithm is designed to tackle the unique challenges in sequential transfer without the task diversity assumption, and so applying it to the parallel setting would result in losing the intended benefits.
We don't think that our result can be easily adapted to the parallel setting and achieve a better guarantee than Hu et al. 2021.
The interleaved setting can be seen as a unified model for both the Sequential and Parallel settings. Since these two are fundamentally different, investigating the interleaved setting is left for future work.
- ``The labels on the y-axis in the figures need to be fixed (R2, R3)''
$\textbf{Yes.}$ We will correct it as you suggest.
Pdf: /pdf/e01a33af077a5a835fe55d2fb76987bbce86f047.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-language Diversity Benefits Autoformalization | Accept (poster) | Summary: This paper introduces MMA, a large multi-language dataset for autoformalizing theorem statements. MMA employs a back-translation approach using GPT-4 to convert two formal corpora (AFP for Isabelle and mathlib for Lean4) into informal-formal pairs. Experiments demonstrate that LLMs such as LLaMA and Mistral, fine-tuned on MMA, achieve significant improvements over base models (from 0% to 29%-32% with no or minimal correction), outperforming their single-language counterparts in autoformalization on the miniF2F and ProofNet benchmarks.
Strengths: * The paper is well-written and well-organized.
* The proposed MMA dataset is one of the largest collections of multi-language informal-formal data pairs, addressing the notable data scarcity in autoformalization. This dataset could be beneficial for future research in this field.
* Experiments demonstrate that MMA is effective for fine-tuning LLMs for autoformalization, showing that multi-language data can enhance performance for single-language tasks.
* I appreciate the manual evaluation and the detailed discussion provided in this paper.
Weaknesses: * I think the main weakness of this paper is that MMA is constructed solely based on zero-shot prompting for informalization by GPT-4, resulting in a dataset that is not perfectly aligned. Table 2 shows that the formalization accuracy is around 75%, indicating significant room for improvement. While the choice of zero-shot prompting may be due to cost considerations given the large scale of the dataset, there are potential ways to improve quality. For example, filtering out obvious errors in informalized statements or using more advanced prompting techniques (e.g., few-shot, self-consistency, or prompting to refine misunderstood concepts) could help construct a smaller but higher-quality dataset. I am also curious about the effectiveness of fine-tuning with a small high-quality dataset compared to a large but noisy dataset for autoformalization.
* Minor: Missing references: There are some other works that leverage autoformalization to generate informal-formal data [1, 2]. [3] also surveys methods for autoformalization and theorem proving.
[1] FIMO: A Challenge Formal Dataset for Automated Theorem Proving, arXiv preprint 2023
[2] MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data, ICLR 2024
[3] A Survey on Deep Learning for Theorem Proving, arXiv preprint 2024
Technical Quality: 3
Clarity: 4
Questions for Authors: * In the discussion, the author mentions plans to expand to other languages like Coq. However, I believe that a significant portion of Coq projects focuses on software verification, which is often challenging for human readability. One success of informalization by GPT-4 is that the formalized concepts are easily understood by GPT-4. Do you think your approach could still be effective for such Coq projects?
* (optional) If applicable, I am curious about the performance of DeepSeekMath or Llemma as the base model to be fine-tuned on MMA. These models have been pre-trained on large-scale informal and formal datasets and exhibit some basic autoformalization abilities.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: I appreciate the detailed statement of the limitations in the paper. Moreover, I believe the most significant limitation is the presence of noisy aligned informal-formal pairs, which could be improved further.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and the time they invested in reviewing our paper. We address specific points raised by the reviewer:
> **I think the main weakness of this paper is that MMA is constructed solely based on zero-shot prompting for informalization by GPT-4, resulting in a dataset that is not perfectly aligned. Table 2 shows that the formalization accuracy is around 75%, indicating significant room for improvement. While the choice of zero-shot prompting may be due to cost considerations given the large scale of the dataset, there are potential ways to improve quality. For example, filtering out obvious errors in informalized statements or using more advanced prompting techniques (e.g., few-shot, self-consistency, or prompt to refine the misunderstanding concept) could help construct a smaller but higher-quality dataset. I am also curious about the effectiveness of fine-tuning with a small high-quality dataset compared to a large but noisy dataset for autoformalization.**
- We thank the reviewer for suggesting the advanced prompting techniques and filtering methods for improving the quality of the MMA dataset. In the paper, we explain that we did not use naive few-shot prompting because the formal data comes from many diverse domains, while few-shot prompting helps the most when the in-context examples are in the same domain as the input (line 156-157). We acknowledge that potentially one could write representative examples in each major category and dynamically retrieve them accordingly. However, considering that there are many diverse mathematical domains present in AFP and mathlib4, we think this approach is very labour-intensive and seems infeasible given our budget and access to formal mathematical expertise.
- For filtering, as we explain in the informalisation taxonomy section, the informal outputs (even the wrong ones) make sense at first sight and require significant expertise to spot the mistakes. We cannot think of good ways to filter out major mistakes in the dataset at the moment.
- We gladly accept that there can be other methods to improve the alignment. As this is the first informalisation attempt at such a big scale, we are unsure how good the 75% accuracy is and hope for follow-up works to improve on it. The main reason for using standard zero-shot prompting is that it provides the simplest methodology, and as we are creating the dataset for the first time, it seemed the most obvious choice. Improvements are welcome and can surely be made.
- We would like to stress that the scope of this paper is to introduce a methodology and a dataset at a scale that has not been done before. More improvements are always possible, and this work will hopefully inspire and spark them. Based on these, we think the contributions made are significant enough.
> **Minor: Missing references: There are some other works that leverage autoformalization to generate informal-formal data [1, 2]. [3] also surveys methods for autoformalization and theorem proving.
[1] FIMO: A Challenge Formal Dataset for Automated Theorem Proving, arXiv preprint 2023
[2] MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data, ICLR 2024
[3] A Survey on Deep Learning for Theorem Proving, arXiv preprint 2024**
We thank the reviewer for pointing out these works. Indeed, the autoformalization-with-feedback approach in FIMO and the joint theorem-proof autoformalization approach in MUSTARD are very nice additions for our context. We will update the paper by citing them.
> **In the discussion, the author mentions plans to expand to other languages like Coq. However, I believe that a significant portion of Coq projects focuses on software verification, which is often challenging for human readability. One success of informalization by GPT-4 is that the formalized concepts are easily understood by GPT-4. Do you think your approach could still be effective for such Coq projects?**
We think that the approach can still be effective for Coq projects. Our reason is that software verification often revolves around a few important fixed themes, such as loop invariants, which allows for easier optimisation than general mathematics. Furthermore, there are software and hardware verification theories in the AFP, which, based on the informalisation taxonomy experiments, we did not find more difficult to informalise than mathematical theories.
> **(optional) If applicable, I am curious about the performance of DeepSeekMath or Llemma as the base model to be fine-tuned on MMA. These models have been pre-trained on large-scale informal and formal datasets and exhibit some basic autoformalization abilities.**
We thank the reviewer for pointing this out. Indeed DeepSeekMath, Llemma, or the recent Mathstral (continuous pretraining and fine-tuning based on Mistral 7B, so it can be directly compared to Mistral 7B in the paper) could be very interesting subjects of study. As these models were not available when we started the paper, we did not venture to use them. We will update the paper to note that this additional training might make them better candidates for autoformalization models.
We would like to thank the reviewer for their valuable feedback, which has greatly helped us improve our paper. Given the improvements we've made, we kindly request the reviewer to reconsider their score, or give indication for further improvements.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I acknowledge that the MMA is the first large-scale approach to building an aligned informal and formal corpora, and I appreciate the manual efforts involved in the experiments. Your clarification has addressed my concerns, and I have revised my score positively in light of this.
---
Reply to Comment 1.1.1:
Comment: We want to thank the reviewer again for their valuable time and input. It has greatly helped us improve the paper. Thank you! | Summary: The paper presents a dataset MMA and a model trained on the same. The dataset comprises informal-formal pairs of theorem which are generated using LLM. The experiment shows decent results on the autoformalization tasks of miniF2F and ProofNet benchmarks. The authors also claim that autoformalization performance improves when trained on multi-lingual data. These characteristics generalize over different sizes of models, supporting the hypothesis that multi-lingual training is good for autoformalization.
Strengths: 1. The authors present a strong case for how and why multi-lingual training is beneficial for autoformalization. This indicates the need for further studies on how transfer learning helps in training across different formal languages. We need to understand if it is only applicable to Lean and Isabelle or is applicable to other languages like Coq as well.
2. Adequate human studies have been conducted to test different aspects of the training data generated. The authors also try to thoroughly test the statistical significance of their findings which is great.
3. I appreciate the effort put in to manually verify a set of autoformalizations generated during the evaluations.
Weaknesses: 1. Even though the authors manually sample from the data, the sample size is often small (which is understandable). However, I want to see how the numbers change with varying sample sizes. For example, instead of just one experiment with a sample size of 100, the same experiment with varying sample sizes of 50, 75, and 100 showing similar trends would be appreciated.
2. The authors make a key observation that informalization is much easier than formalization; however, I don't find enough justification for this in the paper. Some studies are mentioned on Page 4, but these observations may not generalize well to the dataset in question.
3. The authors accept that the GPT-4 informalizations should be considered as a noisy approximation. This actually is a big limitation. Since not every informalization is verified, there is a potential chance of leakage of the test data itself. Since informalization is generated by GPT-4, it might as well generate informalization for theorems other than the formal statement in question (maybe something which was similar to the formal statement). If such a similar formal statement is present in the test data, then this potentially can help the model perform better on test data (this can be considered a type of leak in the training data). I can understand that this may not be a common case. However, some case studies about the resemblance between the train and test data should be conducted.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you separate Table 2 (Right) into various Error Types per language? Right now all the 200 MMA formalizations from the samples are presented together without the distinction between Lean vs Isabelle.
2. The authors talk about few-shot prompting vs instruction prompting on Page 4. I understand the budget reasons, however, I would like to see some case studies showing the qualitative differences between the two strategies (for a small subset of data) in this domain.
3. In Figure 2 (top), the bar plot for "Llama Autoformalization Isabelle" shows almost no correct formalizations for the Isabelle-only model. I would like to see examples of those autoformalizations that were possible only because of the Isabelle + Lean model. A qualitative analysis of when this type of transfer is effective would be appreciated.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors do try to discuss some limitations, however, I would like to see more. I'm interested in seeing the resemblance of the test and train data since it was generated by GPT-4, and there is a potential chance of data leakage. Since the generation of informalization cannot be controlled, this risk always exists and hence just a comparison with the baseline is not sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and the time they invested in reviewing our paper. We address specific points raised by the reviewer:
> **Even though, authors try manually to sample from the data, the sample size is often small (which is understandable). However, I want to see how the numbers change with varying sample sizes. For example, instead of just one experiment with 100 sample size, the same experiment with varying sample sizes of 50, 75, and 100 showing similar trends will be appreciated.**
- We appreciate the reviewer for raising this point. We want to first clarify that each model is evaluated on 100 randomly selected examples, with 50 from miniF2F and 50 from ProofNet. Hence, we can randomly select 50, and 75 from these 100, to perform the required analysis. We do exactly this below:
- We examine the autoformalization quality of the Mistral model fine-tuned on both Isabelle and Lean4, on 50, 75, and 100 random examples respectively. The plots are in Figures 1 and 2 in the general response. We can see that for both languages, the autoformalization quality proportions change very little as we go from 50 to 75 to 100 examples. This validates the sufficiency of having 100 examples as we can project that with more examples (since they are all randomly chosen), the autoformalization quality will not change dramatically, and conclusions drawn based on 100 samples will generalise.
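One standard way to quantify how much such proportion estimates can move with sample size is the confidence interval of a binomial proportion. A quick sketch (the 75% figure here is illustrative, borrowed from the paper's Table 2 accuracy; the interval formula is the standard Wald approximation, not part of the original rebuttal analysis):

```python
import math

def wald_interval(p_hat, n, z=1.96):
    """Approximate 95% Wald confidence interval for a binomial proportion."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half_width, p_hat + half_width)

# Interval half-widths shrink only slowly as the sample grows from 50 to 100,
# consistent with the observed stability of the quality proportions.
for n in (50, 75, 100):
    lo, hi = wald_interval(0.75, n)
    print(f"n={n}: ({lo:.3f}, {hi:.3f})")
```

Under this rough model, a proportion estimated on 100 samples carries an uncertainty of roughly ±8-9 percentage points, which is why stable trends across 50, 75, and 100 samples are reassuring rather than conclusive.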
> **Authors make a key observation that informalization is much easier than formalization, however, I don't find enough justification for this in the paper. Some studies are mentioned on Page 4, but these observations may not generalize well on the dataset in question.**
Analytically, informalisation does not have as stringent syntactical requirements as formalisation, so is more easily accepted. Empirically, Wu et al. and Azerbayev et al. both experimentally observed that informalisation is easier than formalisation on two datasets of different difficulty levels. Therefore, we have reasons to believe that the observation generalises to our dataset.
> **The authors accept that the GPT-4 informalizations should be considered as a noisy approximation. This actually is a big limitation. Since not every informalization is verified, there is a potential ...**
When we conducted the taxonomy of the informalisation, we noticed that GPT-4 always closely followed the input it is given and never generated something that is unrelated. In the 200 randomly chosen examples, GPT-4 always closely followed the formal statement. Since the MMA dataset is gathered from AFP and mathlib4, they are disjoint from the miniF2F and ProofNet benchmarks. This is because miniF2F and ProofNet benchmarks are deliberately prevented from becoming entries in AFP and mathlib4 to prevent contamination. Hence, we see no reason to particularly worry about benchmark contamination in our work. We will update the paper by attaching the 200 randomly chosen informalisation examples for the reader to examine as well.
> **Can you separate Table 2 (Right) into various Error Types per language? Right now all the 200 MMA formalizations from the samples are presented together without the distinction between Lean vs Isabelle.**
We appreciate the reviewer’s suggestion and present the error types per language below:
| Error type | Isabelle | Lean4 |
| -------- | ------- | ------- |
| None | 81 | 67 |
| Hallucination | 2 | 6 |
| Misunderstanding concept | 11 | 18 |
| Incorrect assumption | 2 | 9 |
| Incorrect conclusion | 2 | 6 |
| Incorrect type | 4 | 8 |
We will update the paper to use the per-language breakdown.
> **The authors talk about few-shot prompting vs instruction prompting on Page 4. I understand the budget reasons, however, I would like to see some case studies showing the qualitative differences between the two strategies (for a small subset of data) in this domain.**
We would like to clarify that the reason behind going for instruction prompting instead of few-shot prompting is not cost. As we explain on Page 4, there are a large number of mathematical domains in AFP and mathlib4, making it impractical to manually come up with few-shot prompts for every single domain. Hence, we consider the former strategy not eligible for constructing the MMA dataset. We can further conduct the qualitative case studies if the reviewer deems it necessary.
> **In Figure 2 (top), the bar plot for "Llama Autoformalization Isabelle" shows almost no correct formalization for the Isabelle-only model. I would like to the examples of those autoformalization that were possible because of the Isabelle + Lean model. A qualitative analysis of when this type of transfer is effective will be appreciated.**
- We thank the reviewer for this suggestion. We will have one case study in the rebuttal and give a further analysis of the autoformalizations only doable by jointly trained models in the updated paper.
- Case study 1.
- Informal statement: “Find the remainder when $91145 + 91146 + 91147 + 91148$ is divided by 4. Show that it is 2.”
- Ground truth: theorem mathd_numbertheory_640: "(91145+91146+91147+91148) mod 4 = (2::nat)"
- Jointly trained model autoformalization: lemma "(91145 + 91146 + 91147 + 91148) mod 4 = (2::nat)"
- Isabelle-only model autoformalization: lemma assumes "x^4 + 34578*x^3 + 1996*x^2 + 688*x + 2 = 0" shows "(9114*x^6 + 334575*x^5 + 496*x^8 + 353*x^7 + 4*x + 2) / 4 = 2"
- One can see that the Isabelle-only model gives a degenerate solution while the jointly trained model manages to correctly formalise it.
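The arithmetic underlying this case study is easy to verify directly: $91144 = 4 \times 22786$, so the four terms are congruent to $1, 2, 3, 0$ modulo 4, summing to $6 \equiv 2 \pmod 4$. A quick check:

```python
# Verify the ground-truth statement: (91145 + 91146 + 91147 + 91148) mod 4 = 2
terms = [91145, 91146, 91147, 91148]

# Direct computation
assert sum(terms) % 4 == 2

# Term-by-term reduction mod 4: 91144 is divisible by 4,
# so the residues are 1, 2, 3, 0, summing to 6 ≡ 2 (mod 4)
residues = [t % 4 for t in terms]
assert residues == [1, 2, 3, 0]
print(sum(terms) % 4)  # prints 2
```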
We would like to thank the reviewer for their valuable feedback, which has greatly helped us improve our paper. Given the improvements we've made, we kindly request the reviewer to reconsider their score, or give indication for further improvements.
---
Rebuttal Comment 1.1:
Comment: I'm satisfied with the author's responses and their experiments. Hence, I have increased my score. I hope to see more nuanced examples showing the effectiveness of the transfer between languages.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for such a swift response and appreciate their reconsidering the score! | Summary: The authors use backtranslation to create a large (~332k samples) dataset of formal-informal statement pairs in the Lean4 and Isabelle formal proving languages: they take formal samples from a Lean4 and an Isabelle proof library, and ask GPT4 to restate them informally. They use this dataset to finetune two open source LLMs (Llama 33B and Mistral 7B) on the task of autoformalisation: converting informal statements to formal ones, in Lean4 and in Isabelle, comparing the result of this finetuning to the initial state of the LLMs and ablating on training with both languages as opposed to just the target languages.
The authors invest time and manually evaluate the performance of their trained models on 50 samples from each of two small benchmarks, proofnet and miniF2F (total 100 per evaluated model), to draw their conclusions.
For both languages, the fine tuned models outperform the base models on autoformalisation. For Isabelle, they find that training with the combined dataset (containing both Lean4 and Isabelle samples) is better than the relevant monolingual, for Lean4 the difference is slightly less clear.
I note that the mixed dataset contains mostly Isabelle statements (244k/332k), and I personally wonder if this relates to the found results - the influence of this ratio is not explored. This omission is understandable, however: evaluating the correctness of autoformalised statements is non-trivial, and easily automated metrics such as training loss, or per-token accuracy in a teacher-forcing setting, are not immediately indicative of correct formalisation.
Strengths: 1. Provide dataset that will be useful for fine tuning work on autoformalisation, and greater than previously available datasets
2. Method for generating dataset is straightforward: can be extended in future with additional languages and samples, or remade with better models (for better backtranslation)
3. Demonstrate the positive value of dataset for finetuning models for autoformalisation (and carry out the non trivial evaluation needed to do this)
4. Some investigation of whether it is better to use multi- or mono-lingual data for fine tuning autoformalisation models
5. Clearly written
6. While this is in the appendix - autoformalisation case studies are interesting for understanding what is happening. Similarly, in main paper, the analysis of 200 samples from the generated dataset (table 2) helps get a sense of its quality
Weaknesses: 1. Dataset is automatically generated by an LLM, and no filters were applied or considered. Naturally, some of the LLM's informalisations will be incorrect (manually inspecting 200 samples from the dataset, the authors find 52 with errors (table 2)). I wonder if further manual inspection would have raised any obvious failure cases that could be automatically flagged and filtered from the full dataset.
2. I would have liked an investigation of the impact of the [ratio of different languages in the training set] on the [performance of the fine-tuned model]. Specifically, because the models fine-tuned on the full dataset did better on Isabelle than when trained only on Isabelle, whereas for Lean this wasn't as clear, despite Isabelle having the larger dataset, I wonder if the issue for Lean was that it was 'drowned out' by the amount of Isabelle samples in the data.
3. I don't know how open NeurIPS specifically is to dataset rather than method contributions, and this seems mostly a dataset contribution. I do think it is a valuable one though.
4. The dataset is based on GPT4 backtranslations, and the evaluations are of open source models before and after fine tuning with these backtranslations. I wonder, is this actually just a distillation of GPT4 capacities into open models, or does this dataset actually advance autoformalisation? A comparison with GPT4 autoformalisation ability (eg just on proofnet and minif2f) would help settle this question.
5. The majority of the improvement in token accuracy and loss of the models happens in the first few k steps (fig 1). It would have been nice to do an evaluation of the models (i.e. on the benchmarks) at that stage, to see if the remainder of the steps is necessary.
6. *Important!* Missing baselines/discussion: the question of whether fine tuning (as done in this work) is at all the correct approach for autoformalisation is not raised. Given that there is existing work on using few-shot learning for autoformalisation, the question is relevant and cannot be ignored. The few shot alternative should be properly discussed. At minimum, the existing reported results (Azerbayev at al, Wu et al) should be recalled and compared to in this paper (to the extent the comparison is possible, see next point). (Ideally, the models would have been evaluated in a few shot setting on the same benchmark-subset as used in the paper, though I recognise this is a labour intensive request.)
7. *Important!* Nonstandard benchmarking: The models in this paper are evaluated on the (mixed) subsets of two different benchmarks, proofnet and minif2f. This makes it hard to compare the results to those reported in other autoformalisation works, which use the benchmarks in whole and separately from each other. I hope the authors can at least update the report to list the disaggregated results on the two benchmarks. Independently, please list exactly which samples were taken from the two datasets for the combined one, to allow accurate comparisons to these results in the future. The use of only a small subset of the main benchmarks (50 samples each) also weakens the significance of the results.
Technical Quality: 3
Clarity: 3
Questions for Authors: Clarifications, nits, comments:
1. Are the MMA statements disjoint from those of ProofNet and MiniF2F?
2. line 62 "strong" autoformalisation: subjective
3. section 2 "rule based informalisation" vs "symbolic informalisation" it is not clear to me if these are the same or different. If the same, choose one term, if different, clarify difference.
4. "Manual alignment" also unclear
5. line 86: appreciate the reference to specific case study, but would appreciate some lines on what "syntactically incorrect segments" are directly in this paper for completeness.
6. line 93: some lines on how there is a setting where there is a target-to-source model, but not the other way around, would be appreciated. Speaking of which: given all this dataset is based on GPT4 backtranslations: how does GPT4 perform at autoformalisation?
7. line 95: "usually, the back translation process is..." - would appreciate a/some sources on this!
8. line 99 what language is Azerbayev et al 2023 on?
9. line 116 "high quality": subjective, not clear by what measure this is said
10. line 119 "satisfies both of the criteria": did you check for diversity of the informal statements? how did you measure it?
11. line 142 do you mean "generation" (not curation) cost?
12. nice to have: line 148 does gpt4 add further licenses? what is the license of the combined dataset (is it just "both", or do they merge)?
13. table 2: what's that massive isabelle statement (24331 characters)? is it an anomaly? what happened there?
14. line 178: what is misunderstanding a concept - can you give a concrete example? similarly line 181 with "type".
15. line 191 vs lines 168 and 126: are formal expressions complete and precise or inherently ambiguous? soften/clarify statements to resolve this apparent contradiction
16. line 205: "train" -> "finetune"
17. line 208: what languages are miniF2F and ProofNet in? comes up again in line 250
18. nice to have: would be nice to see metrics on lean when fine-tuned on just isabelle, and vice versa. correctness evaluation probably too labour-intensive, but loss/token-accuracy curves maybe. but maybe they'll just be zero. I recognise this data may not be obtainable now that models have already been trained (don't know how hard/expensive it is to train the models)
19. line 227: number of epochs doesn't seem to quite line up with equalising number of samples seen/steps made, maybe because you are giving rounded numbers? doesn't particularly matter but strange
20. line 235: figure shows loss and per-token accuracy, which are not necessarily indicative of autoformalization capability, rephrase.
21. fig 1: not so clear what the per-token accuracy of the models is in the first 2k train steps, would appreciate knowing - maybe can have a line on this in the body of the paper if you don't want to distort the images
22. line 243: it has seen lean material 4x more, but it has not seen 4x more lean material (i.e. it is seeing same material)
23. line 242-243: unconvincing/invalid interpretation: if this were the explanation, we would see a similar (but weaker) drop for Isabelle, but instead, Isabelle sees a gain.
24. lines 245-246: lean4 results don't support this
25. line 258: "... generate 36% and 30% of Isabelle statements that compile..." unclear, rephrase
26. lines 256-269: seems compilation rate is discussed for only one model (Llama?). Would prefer to have information on both models. If not, at least clarify which model is being discussed here.
27. lines 262-265: need elaboration - I imagine you have support for this claim, but need to see it. Are you saying that a lot of the generated Isabelle statements type checked, but were incorrect? Give numbers
28. lines 265-267: more mixed results regarding whether it is better to train with mixed or monolingual data. I wish the paper was more up front on this inconclusivity, it appears to have a slight "joint-is-better" narrative, that is not critical to its contribution.
29. line 296: "mdoel"
30. line 308 again insufficiently up front about inconclusive results regarding mono/multi lingual data for lean4 (choosing to focus only on the comparison to not fine tuning at all)
31. line 309-310 don't particularly like this statement when the confidence intervals overlap (table 3), can be up front
32. need more of a gap under table 3
33. line 320: did fine tuning surface any memorised info? would have been interesting to check
34. line 340: call to "sample efficiency" - define and clarify how this relates to findings?
35. last 2 paragraphs of conclusion belong more in "limitations" in my opinion
36. what are the (math) domains of the data used to create MMA (within AFP and mathlib4), and the domains of miniF2F and proofnet, and how do they relate to each other? I.e., are miniF2F and proofnet in- or out-of- distribution for the MMA training set, with respect to type of math considered? What about specifically the subsets of miniF2F and proofnet that were used for the evaluations in this work?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and the time they invested in reviewing our paper. We address specific points raised by the reviewer:
> **Dataset is automatically ...**
- We thank the reviewer for pointing out that there could be potential improvements to the dataset filtering process. We will include the complete informalisation evaluation results in the updated version for completeness.
- We did not find any obvious failure cases that can be easily filtered out. Table 2 Right gives a breakdown of the errors made by GPT4 during the informalisation process. It can be seen that the most frequent errors are caused by the misunderstanding of formal concepts, e.g., not correctly understanding the context of inverse operations. We find that in general, spotting these errors requires a fair amount of domain expertise in mathematics and formal languages, and believe it to be difficult to find an automatic filter that does the same job.
- We will update the paper to say this explicitly and let the readers judge after seeing the informalisations made.
> **I would have liked an investigation of impact of ...**
We agree with the reviewer that this is a valuable experiment to run. However, we are constrained by our computational budget and unable to run these experiments. To note, our main experiment with Llama 30B took 14 hours on a TPUv4-64 to run, which by [TPU pricing](https://cloud.google.com/tpu/pricing) costs $2885. We are unable to do a sweep experiment that controls the Isabelle:Lean4 ratio and evaluates the models accordingly. We recognise this as a limitation of the work and will update the paper to say so.
> **I don't know how open ...**
We think the methodology (back-translation + fine-tuning) can be expanded to other formal languages and datasets than Isabelle AFP and Lean4 mathlib4. The finding that multi-language diversity can make the resulting models stronger and more robust is a contribution in addition to the methodology. Hence we think the paper has more value beyond the dataset.
> **The dataset is based on GPT4 backtranslations, ...**
We think a direct comparison of our fine-tuned models with GPT4 has a confounding factor: model representation capability. The GPT4 model is rumoured to have 1.76T parameters while the models we fine-tuned have 30B and 7B parameters respectively. There is a 60-250x difference in model size and GPT4 thus has a significant advantage in terms of its representation capabilities.
We think the fairest comparison would be to fine-tune a GPT4-level model on its own backtranslations, and see the model performance before and after the fine-tuning. It is impossible to do so for GPT4, but becomes more realistic as new open-weight models start to catch up and surpass GPT4’s performance. We leave this for future work as it still requires a significant amount of computational resources to adapt an open-weight model like Mistral Large 2 or Llama 3.1 405B for the autoformalization task.
> **Majority of improvement ...**
We performed some preliminary experiments on the intermediate checkpoints without performing the full suite of evaluation because it is expensive to do so. We find that the model’s autoformalization quality continues to improve as training progresses. We will update the paper to say so explicitly.
> **Important! Missing baselines/discussion: ...**
We agree with the reviewer that few-shot prompting should be compared as a competing technique. We will compare the results from Azerbayev et al. and Wu et al. below:
- Azerbayev et al. (in Lean3): Codex with few-shot prompting can correctly autoformalize 13-16% of ProofNet theorems. ProofGPT 1.3B and 6.7B achieve 0% accuracy with few-shot prompting on the same dataset. ProofGPT-1.3B can correctly autoformalize 3.2% of ProofNet theorems after distilled backtranslation (similar to our methodology).
- Wu et al. (in Isabelle): Codex with few-shot prompting can correctly autoformalize 38 out of 150 problems (25.3%) from MATH, which is of much lower difficulty than miniF2F and ProofNet.
- For our best models without few-shot prompting, we can achieve 22% correctness on miniF2F, which is similar to the Wu et al. result but on a much harder dataset. We can achieve 12% correctness on ProofNet theorems, which is similar to the Azerbayev et al. result but with a far smaller model (Mistral 7B instead of Codex).
- In view of the above comparisons, we think fine-tuning presents a very promising approach, which is compatible with few-shot prompting as well. But for simplicity, in this paper, we only consider the vanilla case of zero-shot fine-tuning. We will update the paper with these comparisons.
> **Important! Nonstandard benchmarking: ...**
- We note that only ProofNet (Azerbayev et al.) uses the complete dataset for comparison. The earlier work examining autoformalization (Wu et al.) used a subset of 150 MATH problems for evaluating autoformalization quality.
- We will provide the disaggregated results on the two benchmarks. We already include the exact statements and evaluation results of the two subsets in the supplementary materials.
We thank the reviewer for the very careful reading and feedback! Due to space limitations, we answer some of the questions here; for the remaining points, we will modify the updated paper accordingly.
1. Yes
3. They are the same. We will update both to “symbolic informalisation”.
8. Lean3
10. Since the formal statements come from diverse mathematical fields, we expect the corresponding informal statements to also be diverse. We examined the informal statements and found this to be the case.
12. GPT4 outputs are not constrained by licenses, only subject to Terms of Service by OpenAI.
We would like to thank the reviewer for their valuable feedback, which has greatly helped us improve our paper. Given the improvements we've made, we kindly request the reviewer to reconsider their score, or to indicate what further improvements they would like to see.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the updates! I will indeed be glad to see this paper updated with a discussion of, and comparison to (to the extent that this is reasonably achievable), few-shot prompting, and with disaggregated results on the benchmarks.
Ideally, I would also like results on the full benchmarks and not just subsets - regardless of whether other papers have also strayed in this way!
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response! We will do our best in making the comparisons and evaluations more compelling in the updated paper. | Summary: This paper introduces MMA, a large-scale dataset consisting of informal-formal pairs of mathematical statements (i.e., parallel autoformalization data) in two types of formal languages, Isabelle and Lean4. The dataset encompasses statements from multiple domains and exhibits high quality. It was constructed using a back-translation method, converting from a formal language to an informal language, to improve data quality.
Strengths: 1. The paper constructs a large-scale, high-quality, and diverse dataset for autoformalization.
2. The paper analyzes the benefit of training on multiple formal languages to improve the performance of single-language autoformalization.
3. The motivation for the method and the explanation for the data quality are written clearly.
Weaknesses: The experiments and the corresponding analysis are not robust enough.
1. The metrics “loss” and “token accuracy” used on the validation set are limited, as there could be multiple correct autoformalization results beyond the given reference. One piece of evidence for the limitation of these metrics is that in Figure 1, the model fine-tuned on Lean4 alone shows a higher loss, suggesting worse performance, yet its token accuracy is actually better.
2. In line 242-243, the paper claims, “This 0.7% accuracy drop is likely because the single-language fine-tuning has seen 4 times more Lean4 material than the joint fine-tuning”. However, another potential explanation could be that the joint fine-tuning training data contains more Isabelle than Lean4, potentially leading to a deterioration in Lean4 performance. The authors should investigate this possibility.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Is it fair to control the number of training steps across different set of training data? Why not control for the size of the training dataset and the number of training epochs instead?
2. In line 262-265, "it does not fully capture how good/useful the formalisations are", why are the formalizations considered not good/useful as long as they lack type annotations? What proportion of the statements are of this type? If these are removed, what is the proportion of syntactically correct statements generated by models that have been fine-tuned singly or jointly?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The metrics “loss” and “token accuracy” on the validation set are limited, as stated above. This could affect the correctness of all analyses and the corresponding conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and the time they invested in reviewing our paper. We address specific points raised by the reviewer:
> **The experiments and the corresponding analysis are not robust enough.**
> **1. The metrics “loss” and “token accuracy” used on the validation set are limited, as there could be multiple correct autoformalization results beyond the given reference. One piece of evidence for the limitation of these metrics is that in Figure 1, the model fine-tuned on Lean4 alone shows a higher loss, suggesting worse performance, yet its token accuracy is actually better.**
We fully agree with the reviewer that the “loss” and “token accuracy” metrics do not show the full picture. We point out in lines 270-271 that the final and most important metric for the task of autoformalization is the formalisation quality. We measured this metric via human evaluation of the difficulty/effort it takes to correct the machine-generated formalisations.
We still wish to include the “loss” and “token accuracy” metrics for two reasons: (1) These two metrics are fairly standard in language model pre-training and fine-tuning; (2) We think by showing the discrepancy between these two metrics with the “ground truth” formalisation quality metric, we highlight their unreliability and emphasise the importance of the human evaluation. We will make the latter point more pronounced in the updated version of the paper.
> **2. In line 242-243, the paper claims, “This 0.7% accuracy drop is likely because the single-language fine-tuning has seen 4 times more Lean4 material than the joint fine-tuning”. However, another potential explanation could be that the joint fine-tuning training data contains more Isabelle than Lean4, potentially leading to a deterioration in Lean4 performance. The authors should investigate this possibility.**
We fully acknowledge and apologise for the lack of precision in our statement in line 242-243. We will update the paper to show both possible hypotheses: that the two fine-tuning runs have very different Lean4 data quantity, or that the two fine-tuning runs have very different Isabelle-Lean4 data proportions. Due to the limitation on computational resources, we are unfortunately unable to run a sweep of data proportion to confirm/refute the hypothesis, but have to leave it as future work.
> **Is it fair to control the number of training steps across different set of training data? Why not control for the size of the training dataset and the number of training epochs instead?**
This is a very good question! Controlling the total number of training steps and controlling the number of training epochs each have their advantages: controlling the number of training steps guarantees that the trained models go through an equal number of updates, while controlling the number of training epochs guarantees that the datapoints have been seen by the models an equal number of times.
In our preliminary experiments, models keep getting stronger as training progresses. In our current experiments, the joint fine-tuning consists of 3.3 epochs for both Isabelle and Lean4, while the Isabelle-only and the Lean4-only fine-tuning consist of 4.4 epochs and 13.2 epochs of the respective datasets (line 226-227). If we control the number of training epochs for the individual fine-tuning experiments, we will end up with training runs lasting 75% and 25% as long as current ones respectively. This will result in two significantly weaker models. Since the stronger models fine-tuned on single languages are worse than the jointly fine-tuned model, we have reasons to believe that their weaker versions are even worse. Therefore, this will not change our conclusion that multi-language training considerably benefits autoformalization capabilities.
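As a sanity check, the epoch counts above are mutually consistent with the equal-total-step-count setup; a back-of-envelope verification of ours (assuming the roughly 3:1 Isabelle:Lean4 data ratio that these numbers imply, not a figure stated in the paper):

```python
# With equal total training steps and batch size, epochs scale inversely with
# dataset size. Assume Isabelle is 3/4 of the joint data and Lean4 is 1/4
# (the ratio implied by the reported epoch counts).
joint_epochs = 3.3
isabelle_frac, lean4_frac = 3 / 4, 1 / 4

isabelle_only_epochs = joint_epochs / isabelle_frac  # same steps, smaller dataset
lean4_only_epochs = joint_epochs / lean4_frac

assert abs(isabelle_only_epochs - 4.4) < 0.01   # matches lines 226-227
assert abs(lean4_only_epochs - 13.2) < 0.01
# Lean4-only training sees each Lean4 datapoint 13.2 / 3.3 = 4x more often,
# consistent with the "4 times more Lean4 material" statement.
```

This also addresses the reviewer's earlier nit that the epoch numbers "don't quite line up": they do, up to rounding.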
> **In line 262-265, "it does not fully capture how good/useful the formalisations are", why are the formalizations considered not good/useful as long as they lack type annotations? What proportion of the statements are of this type? If these are removed, what is the proportion of syntactically correct statements generated by models that have been fine-tuned singly or jointly?**
As a design choice, open variables in theorem statements in Isabelle are implicitly universally quantified: when `properdivisor_sum` is not a built-in definition, `lemma "properdivisor_sum 18 = 21"` is interpreted as `lemma "!!properdivisor_sum. properdivisor_sum 18 = 21"`, where `!!` is the meta-level forall. In contrast, in Lean4, variables need to be explicitly quantified within the statement (or declared earlier in the theory file). Due to this design discrepancy, the Isabelle syntax checker is more tolerant, allowing more false statements, whereas Lean's syntax checker is stricter (at the cost of slightly more verbose statements).
On a rough inspection (because inspecting very carefully takes as much time as re-doing the evaluations), we find that 19% of the Isabelle formalisations by the Llama model have this issue. So having this removed will reduce the syntactic correctness of the model fine-tuned on Isabelle alone to 14%, while the jointly fine-tuned model rarely has this issue and maintains a 21% syntactic correctness. In the updated version of the paper, we will do a closer inspection and report the corresponding figures.
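To illustrate the discrepancy concretely, here is a hypothetical Lean 4 rendering of the same statement (our sketch, not from the paper; `overquantified` is an invented name, and the statement is deliberately unprovable):

```lean
-- Isabelle accepts   lemma "properdivisor_sum 18 = 21"   when `properdivisor_sum`
-- is undefined, implicitly reading it as
--   ⋀properdivisor_sum. properdivisor_sum 18 = 21
-- A Lean 4 rendering must bind the free variable explicitly, which exposes the
-- over-quantification: the statement claims the equation for *every* function
-- and is therefore false (e.g., take `fun _ => 0`).
theorem overquantified (properdivisor_sum : Nat → Nat) :
    properdivisor_sum 18 = 21 := by
  sorry -- unprovable as stated
```

The explicit binder is what makes Lean's checker reject such statements in practice, while the Isabelle version happily type-checks.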
> **The metrics “loss” and “token accuracy” on the validation set are limited, as stated above. This could affect the correctness of all analyses and the corresponding conclusions.**
We thank the reviewer for pointing this out! We will make it clearer that these two metrics have limited correlation with the model quality in the updated version, as discussed above.
We would like to thank the reviewer for their valuable feedback, which has greatly helped us improve our paper. Given the improvements we've made, we kindly request the reviewer to reconsider their score, or to indicate what further improvements they would like to see.
Rebuttal: We want to thank all reviewers for their insightful input, which has significantly raised the quality of this paper. You will find a one-page attachment containing two plots that we use to demonstrate our points.
Below, we'd like to address a few common points:
1. "Is the evaluation dataset size (100 per language) enough?" (reviewers zhuU and ByiA)
- In the pdf attachment, Figures 1 and 2 show the distribution of correction difficulty for the jointly fine-tuned Mistral model, computed with 50, 75, and 100 samples. We can see that the distribution barely changes as the number of samples is increased.
- This demonstrates empirically that the variance present in the 100 samples is small enough, and that adding more samples will not change the conclusion.
2. "Does advanced prompting work better?" (reviewers ByiA and 8WKp)
- We did not use naive few-shot prompting because the formal data comes from many diverse domains, while few-shot prompting helps the most when the in-context examples are in the same domain as the input (lines 156-157). Given the many diverse mathematical domains present in AFP and mathlib4, constructing domain-matched few-shot examples would be very labour-intensive and seems infeasible given our budget and access to formal mathematical expertise.
- We constructed MMA at this scale as a first step towards unlocking better autoformalization capabilities through fine-tuning. We want MMA to inspire and encourage further improvements, e.g., through better prompting techniques built on top of our methodology.
- We believe that our contributions are beyond the dataset itself, and warrant a publication.
Given the improvements and explanations we've made, we hope we can convince the reviewers to reconsider their scores, or to indicate what further improvements they would like to see.
Pdf: /pdf/4027e608d3205a44943d9db4768b97b37fed5b66.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Quantitative Convergences of Lie Group Momentum Optimizers | Accept (poster) | Summary: The authors design and analyze momentum-based algorithms on Lie groups. They first study ODEs and provide convergence rates for them. Then they discretize the ODEs (in two different ways -- Lie Heavy Ball and Lie NAG-SC), and show that the second discretization (Lie NAG-SC) has a *locally* accelerated convergence rate.
Strengths: The paper is decently written (but not exceptional). The most interesting feature of this paper is that one can use the group structure of the Lie group to design optimization algorithms, and avoid parallel transport or the log map usually present in the more general Riemannian setting.
Weaknesses: (1) The requirement that the manifold is a Lie group is very restrictive. There are many manifolds used in practice which are not Lie groups but still have very nice structure. Perhaps the prime example is the Stiefel manifolds (which include spheres). In many applications (especially those related to low rank problems), it is desirable to work with rectangular matrices like Stiefel matrices because storage required for them is much smaller than square matrices (like in SO(n)). Can the authors provide examples of Lie groups used in practice which are not SO(n), U(n), or products of those?
(2) The paper fails to cite *many* important prior works, most notably (in this order):
* "Accelerated gradient methods for geodesically convex optimization: Tractable algorithms and convergence analysis" by Jungbin Kim and Insoon Yang 2022. This is the first paper to truly achieve "acceleration" on Riemannian manifolds, specifically having *global* complexity guarantees which scale like O(sqrt{condition number}) or O(sqrt{1/epsilon}). All prior work only achieved acceleration locally, which is *arguably* not too interesting (see below).
* The updated version of "Global Riemannian Acceleration in Hyperbolic and Spherical Spaces" by Martinez-Rubio also provides global acceleration rates (albeit only on spaces of constant curvature). [Previous version had exponential dependence on the curvature/radius; this was reduced to a polynomial dependence in the updated version.]
* Lower bounds and obstructions for acceleration on Riemannian manifolds: (a) "No-go Theorem for Acceleration in the Hyperbolic Plane" by Hamilton and Moitra, (b) "Negative curvature obstructs acceleration for strongly geodesically convex optimization, even with exact first-order oracles" by Criscitiello and Boumal, (c) "Curvature and complexity: Better lower bounds for geodesically convex optimization" by Criscitiello and Boumal
* Acceleration in the nonconvex case: "An accelerated first-order method for non-convex optimization on manifolds" by Criscitiello and Boumal
(3) The paper only provides local acceleration, which is *arguably* not too interesting (at least from a query-complexity viewpoint) because locally manifolds look like Euclidean spaces. This can be made rigorous: there is a generally method for converting Euclidean algorithms to Riemannian ones which locally have the same convergence guarantees: see for example Appendix D of "Curvature and complexity: Better lower bounds for geodesically convex optimization" by Criscitiello and Boumal.
(4) I am not 100% convinced that the Lie Heavy Ball does not have a locally accelerated rate. My impression is that, with the right choice of parameters, Heavy Ball is *locally* accelerated (essentially because locally a strongly convex cost function looks like a quadratic): see for example "Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Łojasiewicz Functions when the Non-Convexity is Averaged-Out" by Wang, Lin, Wibisono, Hu. On the other hand, the provided numerical experiments seem to indicate that Lie Heavy Ball is not accelerated (but maybe a different choice of parameters would improve convergence?).
(5) For SO(n) or U(n), can the authors please explain why the log map and parallel transport are prohibitively costly (in comparison to the exponential)?
(6) Is it necessary to use the exponential map as the retraction? From the reviewer's experience, for the manifold SO(n), using the matrix exponential is more expensive than using the QR-retraction.
(7) The experiments performed (the eigen decomposition problem) are limited. The authors say the eigen decomposition problem "is a hard non-convex problem on a manifold". The reviewer somewhat disagrees: in some sense, it is one of the easiest nonconvex optimization problems as all second order critical points (where gradient = 0, Hessian is PSD) are global minima.
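To make point (4) concrete: on a strongly convex quadratic (the local model of a strongly convex cost), Heavy Ball with the classical parameter choice converges at the accelerated rate of roughly sqrt(beta) per step. A quick numerical sketch of mine (Euclidean case only, not the paper's Lie-group algorithm):

```python
import numpy as np

# Quadratic f(x) = 0.5 * x^T A x with condition number kappa = L/mu = 100.
A = np.diag([1.0, 100.0])
mu, L = 1.0, 100.0
kappa = L / mu

# Plain gradient descent with the classical step size 2/(L+mu):
# the error contracts by (kappa-1)/(kappa+1) ~ 0.98 per step.
x_gd = np.array([1.0, 1.0])
for _ in range(200):
    x_gd = x_gd - (2.0 / (L + mu)) * (A @ x_gd)

# Polyak Heavy Ball with the classical quadratic-optimal parameters:
# alpha = 4/(sqrt(L)+sqrt(mu))^2, beta = ((sqrt(kappa)-1)/(sqrt(kappa)+1))^2,
# giving the accelerated contraction factor sqrt(beta) ~ 0.82 per step.
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)) ** 2
x_prev = np.array([1.0, 1.0])
x_hb = x_prev.copy()
for _ in range(200):
    x_hb, x_prev = x_hb - alpha * (A @ x_hb) + beta * (x_hb - x_prev), x_hb

# Heavy Ball reaches near machine-precision error while GD is still far away.
assert np.linalg.norm(x_hb) < 1e-10 < np.linalg.norm(x_gd)
```

With kappa = 100, Heavy Ball contracts by about 0.82 per step versus about 0.98 for gradient descent, which is why I suspect a different parameter choice might also improve the Lie Heavy Ball experiments.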
Technical Quality: 3
Clarity: 2
Questions for Authors: Questions and comments (some of which were already asked in the "weaknesses" section):
* Explicitly, what do your algorithms look like when the Lie group is a torus (product of circles)? Does it reduce to standard NAG?
* Can the authors provide examples of Lie groups used in practice which are not SO(n), U(n), or products of those?
* I am not 100% convinced that the Lie Heavy Ball does not have a locally accelerated rate. My impression is that, with the right choice of parameters, Heavy Ball is *locally* accelerated (essentially because locally a strongly convex cost function looks like a quadratic): see for example "Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Łojasiewicz Functions when the Non-Convexity is Averaged-Out" by Wang, Lin, Wibisono, Hu. On the other hand, the provided numerical experiments seem to indicate that Lie Heavy Ball is not accelerated (but maybe a different choice of parameters would improve convergence?).
* For SO(n) or U(n), can the authors please explain why the log map and parallel transport are prohibitively costly (in comparison to the exponential)?
* Is it necessary to use the exponential map as the retraction? From the reviewer's experience, for the manifold SO(n), using the matrix exponential is more expensive than using the QR-retraction.
* Line 277 typo: decreaing -> decreasing
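For reference, the QR-retraction I have in mind is the standard one from the Riemannian optimization literature (e.g., Absil, Mahony & Sepulchre); a minimal numpy sketch of mine (not code from the paper), which retracts X + ξ back onto SO(n) via a sign-corrected Q factor instead of a matrix exponential:

```python
import numpy as np

def qr_retraction(X, xi):
    """Map X + xi (X in SO(n), xi a tangent perturbation) back onto SO(n)
    using the Q factor of a QR decomposition, sign-corrected so the
    factorization is unique (diag(R) > 0, the 'qf' map)."""
    Q, R = np.linalg.qr(X + xi)
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0
    return Q * d  # flips column signs where diag(R) was negative

# Example on SO(3): move from the identity along a skew-symmetric direction.
Omega = np.array([[0.0, -0.3, 0.1],
                  [0.3, 0.0, -0.2],
                  [-0.1, 0.2, 0.0]])
X = np.eye(3)
Y = qr_retraction(X, X @ Omega)
assert np.allclose(Y.T @ Y, np.eye(3))      # orthogonal
assert abs(np.linalg.det(Y) - 1.0) < 1e-8   # stays in SO(3) for small steps
```

A QR factorization costs O(n^3) with a small constant and is typically cheaper in practice than a dense matrix exponential of the same size.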
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: No potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: The requirement that the manifold is a Lie group is very restrictive. There are many manifolds used in practice which are not Lie groups but still have very nice structure. Perhaps the prime example is the Stiefel manifolds. In many applications (especially those related to low rank problems), it is desirable to work with rectangular matrices like Stiefel matrices because storage required for them is much smaller than square matrices (like in SO(n)).
> Q2: Can the authors provide examples of Lie groups used in practice which are not SO(n), U(n), or products of those?
Thank you for the opportunity to explain.
1) *Stiefel versus SO(n).* We agree that Stiefel is a very important and useful manifold, but this does not mean SO(n) is unimportant. Even simple Lie groups like SO(2) and SO(3) are extremely important in applications. For example, machine-learning-based design and generation of molecules (or more precisely, molecular configurations) has already become a big enterprise and a billion-dollar industry. In the majority of research in this direction (e.g., [R5,R6]), side chains are modeled by torsional angles (each in SO(2)), and the backbone is modeled by rotational angles (each in SO(3)).
2) *Lie groups in practice that are not SO(n), U(n), or their products.* The Heisenberg group plays an important role in quantum mechanics. The Lorentz group plays an important role in general relativity. The spin group is necessary for describing fermions. The symplectic group is used not only in mechanics but also in quantum computing and even cryptography. The projective linear group finds abundant applications in graphics and vision.
> **W2**: The paper fails to cite many important prior works, most notably (in this order):
We appreciate all the references and will cite them in a revision. However, our paper focuses on a different algorithm under a different problem setting, and the existence of these papers does not weaken our novelty. Kindly see the details below:
> "Accelerated gradient methods for geodesically convex optimization: Tractable algorithms and convergence analysis" by Jungbin Kim and Insoon Yang 2022. This is the first paper to truly achieve "acceleration" on Riemannian manifolds, specifically having global complexity guarantees which scale like O(sqrt{condition number}) or O(sqrt{1/epsilon}). All prior work only achieved acceleration locally, which is arguably not too interesting (see below).
This is an interesting paper; however, the proposed algorithms (Algos. 1 & 2 in their paper) require the logarithm map (which may not be uniquely defined globally on compact manifolds) and the expensive parallel transport. What's more, its global acceleration requires global convexity, which is too strong an assumption in our case.
> The updated version of "Global Riemannian Acceleration in Hyperbolic and Spherical Spaces" by Martinez-Rubio also provides global acceleration rates (albeit only on spaces of constant curvature). [Previous version had exponential dependence on the curvature/radius; this was reduced to a polynomial dependence in the updated version.]
Their algorithm (Algo. 1) requires a line search in step 6, which can be hard to perform for complex (large-scale and non-convex) problems. In contrast, our algorithm can be easily applied to machine learning problems, e.g., the vision transformer experiment in our general reply to all reviewers.
> Lower bounds and obstructions for acceleration on Riemannian manifolds: (a) "No-go Theorem for Acceleration in the Hyperbolic Plane" by Hamilton and Moitra, (b) "Negative curvature obstructs acceleration for strongly geodesically convex optimization, even with exact first-order oracles" by Criscitiello and Boumal, (c) "Curvature and complexity: Better lower bounds for geodesically convex optimization" by Criscitiello and Boumal
These interesting papers will be cited. In fact, these negative results further illustrate the nontriviality of our result, which is a positive one, and there is no contradiction due to different setups.
> Acceleration in the nonconvex case: "An accelerated first-order method for non-convex optimization on manifolds" by Criscitiello and Boumal
This paper is interesting because it uses the idea of optimizing on the tangent space and utilizing the accelerated algorithms in Euclidean spaces. However, due to this reason, many of its conditions (e.g., A2 on page 14) depend on the local chart and can be hard to check. Instead, for our algorithm, the condition of the objective function, the choice of hyperparameters and the convergence rate are all intrinsic.
> **W3**: The paper only provides local acceleration, which is arguably not too interesting (at least from a query-complexity viewpoint) because locally manifolds look like Euclidean spaces. This can be made rigorous: there is a general method for converting Euclidean algorithms to Riemannian ones which locally have the same convergence guarantees: see for example Appendix D of "Curvature and complexity: Better lower bounds for geodesically convex optimization" by Criscitiello and Boumal.
Obtaining a uniform, nonasymptotic bound for the global convergence of the optimization of general nonconvex functions is, as far as we know, still an open question even in Euclidean space. One often needs to work with specific objective function(s) and even consider special initialization (e.g., [R4]) in order to get accurate rates; note that a high-accuracy bound is needed as we are investigating whether there is acceleration in this paper. We agree that for convex problems, such as those studied in the nice paper "Accelerated gradient methods for geodesically convex optimization: Tractable algorithms and convergence analysis" by Jungbin Kim and Insoon Yang 2022, it is possible to get a global result (which is remarkable), but the setup considered in that paper is very different, as we consider accelerated optimization of general **non**convex functions in **curved** space. We are not aware of any result that gives a global rate in this case.
---
Rebuttal 2:
Title: Rebuttal to zKf2 by Authors (Part 2/2)
Comment: > **W4, Q3**: I am not 100% convinced that the Lie Heavy Ball does not have a locally accelerated rate. My impression is that, with the right choice of parameters, Heavy Ball is locally accelerated: see for example [Wang, Lin, Wibisono, Hu]. On the other hand, the provided numerical experiments seem to indicate that Lie Heavy Ball is not accelerated (but maybe a different choice of parameters would improve convergence?).
We are sorry for the confusion; our acceleration means acceleration under the strong convexity assumption. [Wang, Lin, Wibisono, Hu], although establishing a similar convergence rate $1-1/\sqrt{\kappa}$, focuses on different assumptions, some of which are strong, e.g., the diagonal Hessian in Thm. 1 of their paper. In fact, they state in that paper that HB fails to accelerate under the strong convexity assumption: 'In convex optimization, it is known that Nesterov’s momentum and HB share the same ODE in continuous time (Shi et al., 2021). Yet, the acceleration disappears when one discretizes the dynamic of HB and bounds the discretization error.'
In fact, for the heavy-ball method, although there exist other choices of hyperparameters that lead to acceleration on quadratic functions, they cannot be applied to a general strongly convex function. A summary can be found in [R7].
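As a purely illustrative toy (our own sketch, not from the paper or [R7]), the classical Polyak tuning $\alpha=4/(\sqrt{L}+\sqrt{\mu})^2$, $\beta=((\sqrt{\kappa}-1)/(\sqrt{\kappa}+1))^2$ indeed accelerates heavy-ball on a quadratic, even though no such guarantee holds for general strongly convex functions:

```python
import numpy as np

# Quadratic f(x) = 0.5 * x @ (D * x) with eigenvalues mu and L; kappa = L/mu = 100
mu, L = 0.1, 10.0
D = np.array([mu, L])
f = lambda x: 0.5 * x @ (D * x)
grad = lambda x: D * x

# Classical Polyak heavy-ball parameters, tuned for quadratics
kappa = L / mu
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)) ** 2

x_gd = np.array([1.0, 1.0])          # gradient descent baseline, step 1/L
x_prev = x = np.array([1.0, 1.0])
for _ in range(200):
    x_gd = x_gd - (1.0 / L) * grad(x_gd)
    # heavy-ball: gradient step plus momentum on the previous displacement
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x

print(f(x) < f(x_gd))  # heavy-ball is far ahead on this ill-conditioned quadratic
```

On this problem the heavy-ball iterates contract roughly like $(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$ per step versus $1-1/\kappa$ for gradient descent.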
> **W5, Q4**: For SO(n) or U(n), can the authors please explain why the log map and parallel transport are prohibitively costly (in comparison to the exponential)?
Sorry for the confusion. They are costly because they are **additional** operations that are avoided by our method. All accelerated 1st-order manifold optimizers that we are aware of use at least one exponential map per iteration, and that is the main total cost of our algorithm. All other operations, i.e., logarithms, parallel transports, and extra exponentials, can therefore be understood as 'expensive', even if their computational complexity is not asymptotically worse. As an empirical estimate of the cost, the NAG-SC in [R1] takes around 5 times more time than our algorithm.
In addition, the logarithm may cause trouble when implementing the algorithm due to the lack of unique geodesics on most Lie groups. Regarding parallel transport, similar to the general manifold case, parallel transport on Lie groups is also defined by an ODE and requires costly numerical integration (an expression for parallel transport on Lie groups with a left-invariant metric can be found in Thm. 1 of [R8]).
> **W6, Q5**: Is it necessary to use the exponential map as the retraction? From the reviewer's experience, for the manifold SO(n), using the matrix exponential is more expensive than using the QR-retraction.
We appreciate this suggestion of using cheaper retractions, e.g., the Cayley map and the QR retraction. However, even though we may be able to provide some empirical results, we are unsure whether the theoretical results will be optimistic. This is because acceleration is fragile: NAG-SC and HB differ only in an $O(h^2)$ term, yet they lead to a totally different dependence on the condition number. Retractions, however, introduce numerical errors whose effect is unknown and hard to analyze.
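As a purely illustrative sanity check of the two retractions under discussion (our own sketch, not the paper's code; the truncated Taylor series for the matrix exponential and the QR sign convention are our assumptions), one can compare the exponential-map retraction with the QR retraction on SO(3) for a small tangent step:

```python
import numpy as np

def mat_exp(A, terms=20):
    # Truncated Taylor series for the matrix exponential; accurate since ||A|| is small here
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms + 1):
        term = term @ A / k
        out = out + term
    return out

def qr_retract(M):
    # QR retraction: take the Q factor, fixing signs so that diag(R) > 0
    Q, R = np.linalg.qr(M)
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
t = 1e-2
S = rng.standard_normal((3, 3))
Omega = t * (S - S.T)                        # small element of so(3) (skew-symmetric)

X = np.eye(3)                                # base point on SO(3)
X_exp = X @ mat_exp(Omega)                   # exponential-map retraction
X_qr = qr_retract(X @ (np.eye(3) + Omega))   # cheaper first-order QR retraction

# Both stay (numerically) on SO(3); they agree only up to O(||Omega||^2)
print(np.linalg.norm(X_exp - X_qr))
```

The $O(\|\Omega\|^2)$ discrepancy between the two retractions is exactly the kind of perturbation whose effect on acceleration is hard to analyze.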
> **W7**: The experiments performed (the eigen decomposition problem) are limited. The authors say the eigen decomposition problem "is a hard non-convex problem on a manifold". The reviewer somewhat disagrees: in some sense, it is one of the easiest nonconvex optimization problems as all second order critical points (where gradient = 0, Hessian is PSD) are global minima.
Thank you for pointing this out; we will correct it in the next version. We admit eigendecomposition is not a good example for avoiding local minima; however, we are mostly focusing on the rate of convergence, and we will add a discussion of all local minima being global minima in the parts focusing on global convergence. We have additional numerical experiments including other algorithms and more complicated problems; please see the general rebuttal to all.
> **Q1**: Explicitly, what do your algorithms look like when the Lie group is a torus (product of circles)? Does it reduce to standard NAG?
Yes, it degenerates to Euclidean NAG-SC if the d-torus is unrolled and represented as $\mathbb{R}^d$.
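For readers unfamiliar with NAG-SC, here is a minimal sketch (our own toy, not the authors' code, using the standard textbook step size $1/L$ and momentum $(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$) of the Euclidean iteration that the method degenerates to in this flat case:

```python
import numpy as np

# Strongly convex quadratic f(x) = 0.5 * x @ (D * x); condition number kappa = 100
mu, L = 0.1, 10.0
D = np.array([mu, L])
f = lambda x: 0.5 * x @ (D * x)
grad = lambda x: D * x

kappa = L / mu
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # NAG-SC momentum coefficient

x_gd = np.array([1.0, 1.0])              # plain gradient descent baseline
x = y = np.array([1.0, 1.0])
for _ in range(100):
    x_gd = x_gd - (1.0 / L) * grad(x_gd)
    x_next = y - (1.0 / L) * grad(y)     # gradient step at the lookahead point
    y = x_next + beta * (x_next - x)     # momentum extrapolation
    x = x_next

print(f(x) < f(x_gd))  # accelerated: roughly a (1 - 1/sqrt(kappa)) contraction per step
```

The contrast with plain gradient descent (contraction $1-1/\kappa$ per step) is exactly what "acceleration" refers to throughout this discussion.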
> **Q6**: Line 277 typo: decreaing -> decreasing
We appreciate the careful review and are sorry for the embarrassing typo.
[R4] Ward and Kolda. Convergence of alternating gradient descent for matrix factorization. NeurIPS'23
[R5] Watson et al. De novo design of protein structure and function with RFdiffusion. Nature 2023
[R6] Stark et al. Harmonic Self-Conditioned Flow Matching for Multi-Ligand Docking and Binding Site Design. ICML 2024
[R7] Lessard, Laurent, Benjamin Recht, and Andrew Packard. "Analysis and design of optimization algorithms via integral quadratic constraints."
[R8] Nicolas Guigui and Xavier Pennec. A reduced parallel transport equation on lie groups with a left-invariant metric
---
Rebuttal Comment 2.1:
Title: Reviewer response to authors (1)
Comment: I appreciate the authors’ detailed response. However, given the overall contribution and novelty of this paper, I will maintain my score. Comments:
* The authors satisfactorily addressed several of my questions, for example, explaining why the (Euclidean or Lie) Heavy Ball does not have a locally accelerated rate.
* However, other concerns remain: for example, the requirement that the manifold is a Lie group is very restrictive (and hard to avoid given the paper's scope).
---
Reply to Comment 2.1.1:
Comment: > The authors satisfactorily addressed several of my questions, for example, explaining why the (Euclidean or Lie) Heavy Ball does not have a locally accelerated rate.
We sincerely thank the reviewer for discussing with us and acknowledging the validity of our statements.
> However, other concerns remain: for example, the requirement that the manifold is a Lie group is very restrictive (and hard to avoid given the paper's scope).
We completely agree that this paper is specifically about Lie groups, not general manifolds. However, we very much hope the reviewer could agree with us that it is about **quantitative**, **accelerated**, **nonconvex** optimization. Each word alone already carries a lot of weight and has been traditionally well appreciated by the machine learning theory community; for example, [R10] (published in NeurIPS'18) focuses on acceleration for convex functions under Runge-Kutta discretization; [R11] (published in COLT'18) focuses on local acceleration on curved spaces; [R12] (published in ICLR'19) focuses on designing an adaptive learning-rate method with provable convergence under global convexity. More importantly, at this moment, we are unaware of any result that can do all of them at once for general manifolds. It is widely accepted by the community that focusing on a subclass of problems with more structure is still insightful; for example, the nice reference [R9] suggested by the reviewer him/herself focuses on (no) acceleration in a very specific case (globally convex functions, and the 2-dim hyperbolic plane **only**), but it is a great work in our opinion.
Moreover, for general Riemannian manifold optimization, one can (and typically does) assume geodesic convexity or a relaxation of it such as convexity outside a ball, but these assumptions cannot be made for the compact Lie groups we consider (see Rmk. 1 on page 3 of our original submission). Therefore, restricting to Lie groups is not only making the problem easier, but also making it harder at the same time.
But perhaps the reviewer's remaining concern mainly lies in why Lie group optimization is important. Here are some answers:
1) There are already many important machine learning applications in the literature. For example, [R13] demonstrates that imposing an SO(n) constraint on weight matrices is beneficial for deep networks; [R14] proves orthogonality benefits deep CNNs; [R15] shows artificial orthogonal constraints in RNNs improve long-term dependencies; [R16] finds that rotating activation or weight matrices helps remove outliers and benefits quantization of large language models; and [R17] shows artificial orthogonal constraints improve robustness. All of these amount to optimization on Lie groups.
2) In the rebuttal supplementary pdf, we already provided an additional application of Lie group optimization. It showcased how high-dimensional accelerated Lie group optimization can significantly improve the vanilla attention mechanism: e.g., Lie group optimization boosts the performance of ViT, with CIFAR 10 error improving from 9.75% (by Euclidean optimization; see [R2]) to 8.89% (by our proposed Lie NAG-SC) / 9.46% (by Lie HB, which has no acceleration), and CIFAR 100 error improving from 32.61% (by Euclidean optimization; see [R2]) to 31.11% (by our proposed Lie NAG-SC) / 31.72% (by Lie HB, which has no acceleration). Numbers for Lie HB and Lie NAG-SC are from the rebuttal supplementary pdf.
3) We also provided additional examples in the first round of rebuttal [R5, R6] that demonstrate how Lie group generative modeling is creating a big industry, but we now realize that we under-explained (our apologies) the connection between (Lie group) generative modeling and optimization, so please allow us to do it again. Very briefly put: to use a diffusion model for generative modeling on a Lie group and enable important applications, one needs a forward dynamics that pushes the data distribution forward to an easy-to-sample distribution. One can do so by first finding a good Lie group optimizer with momentum, then adding noise to turn it into a sampler, and finally using this sampler as the forward dynamics of a diffusion generative model; see [R18] for details.
We truly hope this clarifies our overall contribution and novelty, and we kindly plea the reviewer to reconsider.
(Please see the references in our next comment due to the character limit) | Summary: This work first analyzes the convergence rate of the Lie group momentum optimizer by applying the techniques from optimization theory over manifolds to optimization over Lie groups and extending the Lyapunov analysis to Lie group settings. They also provide the convergence analysis of the discrete version of the above dynamics by constructing new energy function and a new Lyapunov function. Besides, they extend an acceleration technique from Euclidean case to Lie group case and prove its performance.
Strengths: * The authors apply theories from optimization over general manifolds to optimization over Lie groups and carefully analyze the structure of Lie groups so that the general theories admit more analytical and tractable formulas. Moreover, they extend techniques from the Euclidean case, like the Lyapunov analysis and the Heavy-Ball algorithm, to the Lie group case, which is both intuitive and rigorous.
* The constructions of the discrete version of the energy function and the Lyapunov function from the continuous version of these functions are insightful and it may be helpful for us to consider a similar problem related to a discrete dynamic system.
* The authors provide intuitive explanations for almost every theory, from which I can see the motivations and it is helpful for understanding the whole picture behind the technical details.
Weaknesses: * In line $280$ and the proof in the appendix, the authors mentioned the ''curvature'' but the article does not provide much information about that. I would like to know if this ''curvature'' is an intuitive term related to second-order information or a rigorous term related to the curvature information of the Riemannian structure of $\mathtt{G}$.
Technical Quality: 3
Clarity: 3
Questions for Authors: * In the article, saying a function $U$ is convex means it is convex in the Euclidean sense, even though it is defined on the Lie group $\mathtt{G}$. The convexity means $U$ is convex on $\mathtt{G}$, where $\mathtt{G} \subset \mathbb{R}^N$ is embedded in Euclidean space. Is that right?
* For equation $(16)$, does $d_{\xi} \log g$ means $(d \log)_{g}(\xi)$, i.e. the differential of the logarithm at point $g$ mapping $\xi$ ?
* There may be a typo in equation $(18)$. Is the following statement right? $A := \max_{\left\lVert X \right\lVert = 1} \sigma(\operatorname{ad}_X)$, where $\sigma(\cdot)$ is the set of all eigenvalues.
* About the **Assumption 2**, does Lie group $\mathtt{G}$ need to be connected?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1**: In line 280 and the proof in the appendix, the authors mentioned the ''curvature'' but the article does not provide much information about that. I would like to know if this ''curvature'' is an intuitive term related to the second-order information or a rigorous term related the curvature information of the Riemannian structure of $G$.
It is a rigorous term related to the curvature information of the Riemannian structure of $G$. $p(a)$ is explicitly defined in Eq. 17, and $a$ is defined in line 272 in Thm 14, whose definition depends on $A$ (defined in Eq. 18). Under the inner product making $\operatorname{ad}$ skew-adjoint in Lemma 3, $A=\sqrt{\text{max sectional curvature}}/4$ (Please see Sec. D.2 in [R3] in the reference for more details.)
> **Q1**: In the article, a function $U$ is convex means it is convex in the meaning of Euclidean case, even it is defined on Lie group $G$. The convexity means $U$ is convex on $G$, where $G\subset\mathbb{R}^n$ is embedded in Euclidean space. Is that right?
The short answer is no. The convexity under discussion is geodesic convexity, which is different from convexity in the ambient Euclidean space after embedding. To illustrate the drastic difference: convex functions on Euclidean spaces must be discussed on a convex set; however, we can still have convex functions on a manifold even if its Euclidean embedding is a non-convex set.
> **Q2**: For equation (16), does $d_\xi\log g$ mean $(d \log)_g(\xi)$, i.e., the differential of the logarithm at point $g$ mapping $\xi$?
Yes, the expert reviewer is totally correct.
> **Q3**: There may be a typo in equation (18). Is the following statement right? $A:=\max_{\|X\|=1}\sigma(\operatorname{ad}_X)$, where $\sigma(\cdot)$ is the set of all eigenvalues.
We appreciate the careful review and will correct the embarrassing typo.
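As a purely illustrative sanity check of a quantity of this type (our own toy, interpreting $\sigma(\operatorname{ad}_X)$ as the spectral norm and using the Frobenius inner product, which may differ from the paper's normalization), one can compute $\max_{\|X\|=1}\|\operatorname{ad}_X\|$ numerically on $\mathfrak{so}(3)$:

```python
import numpy as np

def hat(v):
    # so(3) element (skew-symmetric matrix) built from a 3-vector
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Orthonormal basis of so(3) under the Frobenius inner product <A,B> = tr(A^T B)
basis = [hat(e) / np.sqrt(2) for e in np.eye(3)]

def ad_norm(X):
    # Matrix of ad_X : Y -> [X, Y] in the chosen basis, then its spectral norm
    M = np.array([[np.trace(Bi.T @ (X @ Bj - Bj @ X)) for Bj in basis]
                  for Bi in basis])
    return np.linalg.norm(M, 2)

rng = np.random.default_rng(0)
vals = []
for _ in range(50):
    v = rng.standard_normal(3)
    X = hat(v) / np.linalg.norm(hat(v))   # normalize to unit Frobenius norm
    vals.append(ad_norm(X))

print(max(vals))  # ≈ 1/sqrt(2) ≈ 0.7071 for so(3) with this normalization
```

With these conventions $\operatorname{ad}_{\hat{x}}\hat{y}=(x\times y)^{\wedge}$, so every unit-Frobenius $X$ gives $\|\operatorname{ad}_X\|=1/\sqrt{2}$; the actual constant in the paper's $A$ depends on its chosen inner product.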
> **Q4**: About the Assumption 2, does Lie group $G$ need to be connected?
Thanks for a great question. We understand the rationale behind this question, but we don't actually require connectedness. The reason is: to prove global convergence, we prove and leverage the monotonicity of an energy function, whose definition does not require connectedness. As for the convergence rate, it is only quantified asymptotically, and connectedness does not make a difference there.
[R3] Lingkai Kong and Molei Tao. Convergence of kinetic langevin monte carlo on lie groups. COLT, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for clarifying my concerns about the curvature information. It would be better to include this content in the appendix and provide more information.
---
Reply to Comment 1.1.1:
Title: Thanks for "Official Comment by Reviewer JEvs"
Comment: We thank the expert again for recognizing our contributions and helping us further improve the quality of our paper.
Yes, we will certainly include this content in the appendix with more information. | Summary: The authors analyze the momentum method and Nesterov accelerated gradient descent method on the Lie group. With some knowledge of Riemannian geometry, they discuss the computational cost.
Strengths: I have found no strengths.
Weaknesses: I do not know the author's motivation to write this manuscript.
Could you show me a reasonable example for application in practice, or close to practice?
There are so many high-level mathematical terminologies, but no essential problems are solved.
Technical Quality: 2
Clarity: 1
Questions for Authors: I think this manuscript is nonsense, so I do not ask any questions.
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 1
Limitations: The content is nonsense.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 1
Code Of Conduct: Yes | null | Summary: This paper explores the optimization of functions defined on Lie groups using momentum-based dynamics. The authors propose two discretization methods, Lie Heavy-Ball and Lie NAG-SC, and analyze their convergence rates.
The main contributions are as follows:
1. Provide the first quantitative analysis of Lie group momentum optimizers.
2. Theoretically show that an intuitively constructed momentum optimizer, namely Lie Heavy-Ball, may not yield accelerated convergence.
3. Generalize a technique from Euclidean optimization to propose a Lie group optimizer that provably has acceleration.
Strengths: 1. The authors provide the first quantitative analysis of Lie group momentum optimizers which is significant, since there is no nontrivial convex functions on many Lie groups.
2. Theoretically show that an intuitively constructed momentum optimizer, namely Lie Heavy-Ball, may not yield accelerated convergence.
3. Generalize a technique from Euclidean optimization to propose a Lie group optimizer, Lie NAG-SC, that provably has acceleration.
4. Compared to other optimizers designed for general manifolds, the proposed approach bypasses the need for costly operations.
Weaknesses: 1. The idea of the paper is natural. I think it can be seen as a straightforward extension of the results in [1], so it may not be novel.
2. More empirical results are needed, I think. It would also be better to show a comparison with the results in [1].
[1]Tao, Molei, and Tomoki Ohsawa. "Variational optimization on lie groups, with examples of leading (generalized) eigenvalue problems." International Conference on Artificial Intelligence and Statistics. PMLR, 2020.
Technical Quality: 3
Clarity: 3
Questions for Authors: As I mentioned in the weaknesses, the motivation should be clearer. I want to know the novelty of this work compared with [1].
Is it possible to show more experimental results on other problems?
A comparison with the experimental results in [1]?
[1]Tao, Molei, and Tomoki Ohsawa. "Variational optimization on lie groups, with examples of leading (generalized) eigenvalue problems." International Conference on Artificial Intelligence and Statistics. PMLR, 2020.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: The idea of the paper is natural. I think it can be seen as a straightforward extension of the results in [Tao & Ohsawa], so it may not be novel.
> Q1: I want to know the novelty of this work compared with [Tao & Ohsawa]
[Tao & Ohsawa] is definitely an inspiration for this work. However, their algorithm is essentially the same as our Lie Heavy-Ball (Rmk. 28) and may not have accelerated convergence. The novelties of our paper are
1) quantification of the convergence rate of Lie Heavy-Ball (not enough acceleration),
2) a solution, which is a new algorithm Lie NAG-SC,
3) quantification of the convergence rate of the new algorithm (true acceleration).
In addition, there is a strong technical contribution, namely that all the convergence analyses have to be done for fully non-convex functions, because there are in general no non-constant convex functions on compact Lie groups. Rigorous analysis of nonconvex optimization is always challenging, let alone on a manifold as well.
> W2. More empirical results are needed, I think. It would also be better to show a comparison with the results in [Tao & Ohsawa].
> Q2. Is it possible to show more experimental results on other problems?
Thank you for the suggestion; we have performed more numerical experiments. Please kindly refer to the general rebuttal to all and the rebuttal pdf supplement.
> Q3: Comparision with the experiment results in [Tao & Ohsawa]?
Thank you for this suggestion. The Lie Heavy-Ball mentioned in our paper is essentially the algorithm in [Tao & Ohsawa] (we will clarify this in a revision). In our Section 6.2, [Tao & Ohsawa]/Lie Heavy-Ball is already experimentally compared with our newly proposed Lie NAG-SC.
The general rebuttal to all and Fig. 2 in the rebuttal pdf supplement contain more experiments, where Lie HB and Lie NAG-SC are applied to (and compared on) vision transformers on the Cifar dataset.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications and additional materials. It makes me understand it clearly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable time and kind consideration! | Rebuttal 1:
Rebuttal: **General rebuttal to all**
> **Q1**: More experimental results
A1: We perform more numerical experiments in the rebuttal PDF supplement.
- We add Riemannian GD and Riemannian NAG-SC [R1] to the comparison (Fig. 1 in the attached pdf). Of course, our proposed method converges faster than Riemannian GD, which has neither momentum nor acceleration. But we also see even faster convergence than Riemannian NAG-SC, which should already have acceleration (albeit with higher computational cost per step), possibly because our method is specially designed for Lie groups.
- We apply (and compare) the newly proposed Lie NAG-SC method and the existing Lie Heavy-Ball method to a practical machine learning application, namely improving the Transformer by requiring attention heads to be orthogonal so that attentions are not redundant [R2].
The training of this modified Transformer amounts to a Lie group optimization problem. We used Lie NAG-SC and Lie HB to train the orthogonal parameters and momentum SGD for the unconstrained parameters. We trained a vanilla Vision Transformer from scratch on Cifar for a fixed number of epochs, and observed improved performance in terms of validation error when Lie HB is replaced by the accelerated method Lie NAG-SC (Cifar 10: 9.46% $\to$ 8.89%, Cifar 100: 31.72% $\to$ 31.11%).
[R1] Kwangjun Ahn and Suvrit Sra. From nesterov’s estimate sequence to riemannian acceleration. In Conference on Learning Theory, pages 84–118. PMLR, 2020.
[R2] Lingkai Kong, Yuqing Wang, and Molei Tao. Momentum stiefel optimizer, with applications to suitably-orthogonal attention, and optimal transport. ICLR, 2023.
Pdf: /pdf/1ea9fec7e604bd268f8aaaaf907fc7e14533f70b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes a new algorithm (Lie NAG-SC) that converges at accelerated rates on Lie groups when initiated close to the true optimum (local convergence). Theoretical analysis and experimental verification shows good performance.
Strengths: The theoretical analysis is quite advanced, and as far as we are aware, this is the first method to achieve local accelerated rates for Lie groups.
Weaknesses: Both the theory and the experimental evaluation only considered local convergence, i.e. cases when the optimizer is initiated near the global optimum. This is a limitation as the relative behaviour of the algorithms further away from the optimum may be completely different.
The impact of the parameter p(a) on the rates is not sufficiently clearly discussed. This is related to the curvature of the manifold, as well as how closely it is initiated to the true optimum.
Technical Quality: 3
Clarity: 3
Questions for Authors: Would you be able to numerically compare the different methods when started from the same random initialisation (i.e. same initial position), far away from the global optimum?
Could you discuss the impact of the parameter p(a) on the rates, and explain the intuition for this parameter in the introduction?
You should not claim acceleration without addressing the reduction in rates due to this parameter.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been clearly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1**: Both the theory and the experimental evaluation only considered local convergence, i.e. cases when the optimizer is initiated near the global optimum. This is a limitation as the relative behaviour of the algorithms further away from the optimum may be completely different.
We agree with the expert's opinion that global convergence is interesting. However, due to the lack of global convexity, it is hard to obtain a global convergence rate. Even in Euclidean space, as far as we know, obtaining a uniform, nonasymptotic bound for the global convergence of the optimization of general nonconvex functions is still an open question.
> **W2**: The impact of the parameter p(a) on the rates is not sufficiently clearly discussed. This is related to the curvature of the manifold, as well as how closely it is initiated to the true optimum.
> **Q2**: Could you discuss the impact of the parameter p(a) on the rates, and explain the intuition for this parameter in the introduction? You should not claim acceleration without addressing the reduction in rates due to this parameter.
As shown in Table 1, $p(a)$ only shows up in the convergence analysis of NAG-SC. The details are in lines 280 to 284. In short, $p(a)$ quantifies the loss of convergence induced by the curved space. When close to the minimum, the space is approximately flat. The negative effect of $p(a)$ disappears when $2p(a)<\sqrt{2L}$ (Table 1 and Thm. 14), and the convergence rate is then the same as in Euclidean space. This is the reason we claim acceleration. Similar curvature-dependence of convergence rates can also be found in [1] and [27] in the paper. See also Rmk. 35.
> **Q1**: Would you be able to numerically compare the different methods when started from the same random initialisation (i.e. same initial position), far away from the global optimum?
Thank you for the advice; we agree that more numerical results would be helpful. Please see the general rebuttal to all. We added an optimizer from Riemannian optimization to the comparison and also applied our optimizer to vision transformers on the Cifar dataset.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: We thank the authors for addressing our comments.
The new experiments are much more convincing about the usefulness of the algorithm, please include them in the final version of the paper. I am increasing my score to 7.
---
Reply to Comment 1.1.1:
Title: Thank you for "Acknowledgement of rebuttal"
Comment: We sincerely thank the expert reviewer for helping us greatly improve the quality of our paper and your time! | null | null | null | null | null | null |
OneActor: Consistent Subject Generation via Cluster-Conditioned Guidance | Accept (poster) | Summary: The authors present a generation paradigm called “OneActor” to generate consistent subjects in text-to-image generation tasks. The core of this algorithm is called the “cluster-guided score function”, which is based on the concept of a score function and is designed to maintain the consistency of generated images. Additionally, the superiority of this method in consistency performance and faster tuning is shown quantitatively and qualitatively through various experiments.
Strengths: 1. The authors creatively present the insight that samples of different subjects form different clusters, and analyze it in detail, which is the inspiration for their method.
2. The derivation of the formulas is complete and detailed without errors, and code is provided.
3. The experiments are sufficient and solidly conducted, including both qualitative and quantitative comparisons. The ablation studies are sensible and include user studies as well.
Weaknesses: The overall quality of the paper is quite good, but some problems still exist:
1. The core of this method seems to be splitting the scores of CFG into a target and an auxiliary part to customize generation and maintain a consistent subject, which is also derived in detail in Appendix F. The novelty is straightforward, and it would be better to compare the results with research on multi-concept text-to-image tasks, for example:
Kumari, Nupur, et al. "Multi-concept customization of text-to-image diffusion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
2. There are some typos. For instance, as shown in line 119, the conditional and unconditional scores have the same notation $\epsilon_\theta(x_t, t, c_\emptyset)$.
3. Some experimental results are not good enough. For example, the beard of the old man shown in Figure 4 is not consistent. Additionally, the hobbit generated by DB looks better than that of OneActor.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors present a framework to maintain the consistency property. How can the insights shown in this paper be connected with score-SDE? In other words, can we use score matching to understand the clusters?
2. As shown in Eq. (5), since it is an expectation, why is the expression not as follows?
$$p(\boldsymbol{x}) \cdot \frac{p(\mathcal{S}^{tar} \mid \boldsymbol{x})}{\prod_{i=1}^{N-1} p(\mathcal{S}_i^{aux} \mid \boldsymbol{x}) + p(\mathcal{S}^{tar} \mid \boldsymbol{x})}$$
3. Since the average condition indicates the center of all the auxiliary sub-clusters, is there a strategy to ensure a notion like a “radius” for the clusters?
4. What are the advantages over methods based on the attention layer of neural networks?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. As shown in the weaknesses, the method cannot capture all the details of a given target image.
2. The subject-centric prompts shown in this paper can be described by a single word, so it would be meaningful to further explore more complex and diverse cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your insightful review of our work. We respond to each weakness and question individually below.
***
## Weakness 1: Comparison with multi-concept customization pipeline
We provide a qualitative comparison with FreeCustom[1], the current state-of-the-art multi-concept customization method, in Figure 22 of the global rebuttal pdf. Note that FreeCustom is officially implemented on StableDiffusion V1.5 and thus exhibits lower image quality on average. In terms of subject consistency and prompt conformity, which are backbone-irrelevant, our method still outperforms FreeCustom. This reaffirms the effectiveness of the proposed cluster guidance method. Due to the time limit, we are not able to perform more comprehensive experiments on multi-concept customization methods. We will add these methods to the Related Work section and add more experiments in the new version of our paper.
[1] FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition, CVPR 2024
## Weakness 2: Typo
We will carefully review our paper sentence-by-sentence and correct every typo. Thanks for your meticulous review!
## Weakness 3: Imperfect consistency
For the "hobbit" comparison between DB and our method, solely from the perspective of subject consistency, DB indeed beats our method. Yet our method achieves a more balanced performance across subject consistency, background, and layout diversity, which we believe is more meaningful in the consistent subject generation task. Due to the non-ideal distribution of the clusters, there can be some inconsistent minor details such as the beard or clothing decorations. Addressing this challenge will be the focus of our future work.
## Question 1: Connection with score-SDE
In our opinion, the basic logic of our method is the same as that of score-SDE. The methodology of score-SDE [2] is to transform the log-likelihood into the predicted score of the denoising network in a score-matching manner. Inspired by this, we employ the transformation in the derivation from Equation (5) to Equation (6) in our submission. The training of $\epsilon_{\boldsymbol \theta,\boldsymbol \phi^*}(\boldsymbol z_t,t,\boldsymbol c^{sub},c_\Delta^{tar})$ and $\epsilon_{\boldsymbol \theta,\boldsymbol \phi^*}(\boldsymbol z_t,t,\boldsymbol c^{sub},c_\Delta^{aver})$ are essentially two score-matching processes for the target sub-cluster score $\nabla_{\boldsymbol{x}}\log p(\mathcal{S}^{tar}\mid\boldsymbol{x})$ and the auxiliary sub-cluster score $\sum_{i=1}^{N-1}\nabla_{\boldsymbol{x}}\log p(\mathcal{S}_i^{aux}\mid\boldsymbol{x})$.
[2] Score-based generative modeling through stochastic differential equations
## Question 2: Formula expression
There are many different ways to express the objective of the task, including $p(\boldsymbol{x})\cdot\frac{p(\mathcal{S}^{tar}\mid\boldsymbol{x})}{\prod_{i=1}^{N-1}p(\mathcal{S}_i^{aux}\mid\boldsymbol{x})+p(\mathcal{S}^{tar}\mid\boldsymbol{x})}$. We eventually adopted the simplified Equation (5) for the following reasons:
- Retaining only the multiplication and division terms simplifies the subsequent formula transformation.
- $p(\boldsymbol{x})\cdot\frac{p(\mathcal{S}^{tar}\mid\boldsymbol{x})}{\prod_{i=1}^{N-1}p(\mathcal{S}_i^{aux}\mid\boldsymbol{x})+p(\mathcal{S}^{tar}\mid\boldsymbol{x})}$ can be transformed into
$p(\boldsymbol x)\cdot\frac{1}{\prod_{i=1}^{N-1}p(\mathcal{S}_i^{aux}\mid\boldsymbol x)/p(\mathcal{S}^{tar}\mid\boldsymbol x)+1}$.
Thus, the core objective is likewise to increase $\frac{p(\mathcal{S}^{tar}\mid\boldsymbol x)}{\prod_{i=1}^{N-1}p(\mathcal{S}_i^{aux}\mid\boldsymbol x)}$, as Equation (5) does.
- During implementation, the target and auxiliary samples all contribute to the average condition, which completes the center of the whole cluster. So the simplification is more like a theoretical trick.
## Question 3: Radius concept
We believe the concept of a *radius* exists in the latent space, yet it is hard to determine it precisely: if we consider $(\boldsymbol z_t,t)$ as a whole, the latent space is complicated and might not be Euclidean. The *radius* might appear as a geometric manifold that extends continuously over $t$. Developing a strategy to determine the *radius* will be an interesting yet challenging future topic.
## Question 4: Advantages over attention manipulation methods
Compared with the training-free pipelines based on attention manipulation, our tuning-based pipeline has the following advantages:
- Our method can be naturally utilized to pre-train a consistent subject generation network from scratch, which brings this research task closer to practical applications.
- Based on the stable tuning of an auxiliary network rather than the U-Net backbone, our method demonstrates more robust performance across different settings. By contrast, the attention manipulation of the U-Net backbone used by training-free baselines exhibits occasional degradation of image quality (see the first and fourth images of the ‘*man*’ case generated by StoryDiffusion in Figure 21 of the global rebuttal pdf; also see the first image of the ‘*girl-cat*’ case generated by FreeCustom in Figure 22).
- Though our method takes about 5 minutes on average to tune, it doesn't increase the inference time as training-free baselines do. Thus, as the generation number increases, our method outperforms the training-free methods in terms of total generation time (see Table 3 in the global rebuttal pdf).
***
We believe the rebuttal stage is highly meaningful, and we will add the rebuttal discussion to the new version of our paper as much as possible. If there is any omission in our response or if you have any additional question, please feel free to let us know. Thanks again for your great efforts!
---
Rebuttal Comment 1.1:
Title: comment
Comment: I think the responses of the authors have addressed my concerns, and the submitted experimental results reflect the effectiveness of the method. I will not reduce my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer BPQh,
We greatly appreciate your satisfaction with our responses and will make revisions to our work based on your comments in order to further improve the quality of our work.
Thanks again for your valuable suggestions and comments. We enjoy communicating with you and appreciate your efforts! | Summary: This study proposes a one-shot tuning paradigm for efficient and consistent subject generation. The authors introduce two inference strategies to mitigate overfitting, significantly improving generation quality. The semantic space of the diffusion model trained under the proposed method has the same interpolation property as the latent space, offering enhanced control over generation. Comprehensive quantitative and qualitative experiments prove that the method is effective.
Strengths: 1. The motivation and the methods proposed in the entire article are justifiable. The illustrations are exquisite, and the presented results effectively prove the efficacy of their method.
2. For practitioners in the field of AIGC, I indeed believe it holds practical value.
Weaknesses: Some of the related techniques aren't as novel as the authors argue, for example: "We are the first to prove that the semantic space of the diffusion model has the same interpolation property as the latent space does." As far as I know, it is quite common to do identity mixing [1,2,3,4] in the semantic space in human photo customization. [3,4] also use two inference strategies, classifier-free guidance and semantic interpolation, which are the same as the methods in this paper:
1. FaceStudio: Put Your Face Everywhere in Seconds
2. PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding
3. InstantID: Zero-shot Identity-Preserving Generation in Seconds
4. FlashFace: Human Image Personalization with High-fidelity Identity Preservation
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The cases provided in the paper are relatively simple. Can the algorithm handle more complex multi-entity scenes, such as those involving more than one person or cat?
2. How does this compare to a seemingly simpler approach [1]?
[1] StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable reviews. We respond to each weakness and question individually below.
***
## Weakness: Novelty of semantic interpolation
We believe our discovery is fundamentally different from previous works. An ordinary diffusion process can be denoted as $p(\boldsymbol{z}\mid \boldsymbol{c},s)$, where $\boldsymbol{c}$ is the semantic condition and $s$ is the weight of the classifier-free guidance score. Previous works [1,2,3,4] show that the semantic interpolation $\boldsymbol c'=\boldsymbol c_1+\alpha\cdot(\boldsymbol c_2-\boldsymbol c_1)$ between two conditional embeddings $\boldsymbol c_1$ and $\boldsymbol c_2$ produces a mixed visual effect and a gradual visual change, which can be written as $p(\boldsymbol z\mid\boldsymbol c',s)\approx p(\boldsymbol z\mid\boldsymbol c_1,s)\cdot p(\boldsymbol z\mid\boldsymbol c_2,s)$. This discovery is utilized to mix different input conditions into a hybrid output image. By contrast, our contribution lies in proving that the semantic interpolation $\boldsymbol c'=\boldsymbol c_\emptyset+\alpha\cdot(\boldsymbol c_1-\boldsymbol c_\emptyset)$ between the unconditional embedding $\boldsymbol c_\emptyset$ and the conditional embedding $\boldsymbol c_1$ has the same effect as the guidance interpolation $s'=0+\beta\cdot(s_1-0)$ between no guidance $s=0$ and the standard guidance scale $s=s_1$. This conclusion can be written as $p(\boldsymbol z\mid \boldsymbol c',s_1)\approx p(\boldsymbol z\mid \boldsymbol c_1,s')$. It indicates that semantic manipulation has the same effect as manipulating the sampling guidance strength, which we believe is essentially different from the conclusions of previous works. We apologize for the confusion caused by our inappropriate wording and will polish our phrasing to be more precise and objective. We will also add the above-mentioned works to the Related Work section in the new version of our paper. Thanks so much for your nuanced review!
[1] FaceStudio: Put Your Face Everywhere in Seconds
[2] PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding
[3] InstantID: Zero-shot Identity-Preserving Generation in Seconds
[4] FlashFace: Human Image Personalization with High-fidelity Identity Preservation
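To make the claimed equivalence concrete, below is a toy numpy sketch (our own illustration, not the paper's implementation) that assumes a denoiser whose predicted noise is linear in the condition embedding — a simplifying assumption. In that idealized case, guiding with the interpolated embedding $\boldsymbol c'$ at full scale $s_1$ coincides exactly with guiding with $\boldsymbol c_1$ at the reduced scale $s'=\alpha\cdot s_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))  # toy stand-in "denoiser": linear in the condition

def eps(z, c):
    # condition-dependent part of the predicted noise (z unused in this toy model)
    return W @ c

def cfg(z, c_cond, c_null, s):
    # classifier-free guidance combination: eps_null + s * (eps_cond - eps_null)
    return eps(z, c_null) + s * (eps(z, c_cond) - eps(z, c_null))

z = rng.standard_normal(4)
c_null = rng.standard_normal(4)   # unconditional embedding
c1 = rng.standard_normal(4)       # conditional embedding
alpha, s1 = 0.3, 7.5

# semantic interpolation c' = c_null + alpha * (c1 - c_null), full scale s1
lhs = cfg(z, c_null + alpha * (c1 - c_null), c_null, s1)
# guidance interpolation s' = alpha * s1 with the original embedding c1
rhs = cfg(z, c1, c_null, alpha * s1)

assert np.allclose(lhs, rhs)  # exact under the linearity assumption
```

With a real nonlinear denoiser the match is only approximate, which is why the relations above are stated with $\approx$.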
## Question 1: Multi-entity scenes
We provided illustrations of double-subject and triple-subject generation in Figures 17 and 18 in Appendix C of the submission. The results show that our method is able to handle the complex 2-entity and 3-entity scenarios. The storybook generation in Appendix B.2 also proves that our method is capable of complex sequential generation. Yet when dealing with more subjects (usually >4), as discussed in Appendix D, some subjects might be neglected. We also observe this defect in the original SDXL model, and addressing this foundational limitation will be the focus of our future work.
## Question 2: Comparison with StoryDiffusion
We carried out a qualitative comparison with the contemporaneous work StoryDiffusion [5] in Figures 20 and 21 in the global rebuttal pdf. Generally, our method performs better in terms of subject consistency and background diversity. In a broader context, compared with training-free pipelines like StoryDiffusion, our tuning-based pipeline has the following advantages:
- Our method can be naturally utilized to pre-train a consistent subject generation network from scratch, which brings this research task closer to practical applications.
- Based on the stable tuning of an auxiliary network rather than the U-Net backbone, our method demonstrates more robust performance across different settings. By contrast, the attention manipulation of the U-Net backbone used by training-free baselines exhibits occasional degradation of image quality (see the first and fourth images of the ‘*man*’ case generated by StoryDiffusion in Figure 21 of the global rebuttal pdf).
- Though our method takes about 5 minutes on average to tune, it doesn't increase the inference time as training-free baselines do. Thus, as the generation number increases, our method outperforms the training-free methods in terms of total generation time (see Table 3 in the global rebuttal pdf).
[5] StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
***
We believe the rebuttal stage is highly meaningful, and we will add the rebuttal discussion to the new version of our paper as much as possible. If there is any omission in our response or if you have any additional question, please feel free to let us know. Thanks again for your great efforts!
---
Rebuttal 2:
Comment: Dear Reviewer 7cPE,
We sincerely thank you for your valuable reviews. We have provided detailed responses to each of your questions in the rebuttal stage, which we hope address your concerns. If you have any further questions or there is any omission in our responses, please feel free to let us know. We would be extremely grateful if you could consider increasing the rating after reviewing our responses.
Thanks again for your great efforts! | Summary: This paper proposes OneActor, a one-shot tuning paradigm for consistent subject generation in text-to-image diffusion models, driven solely by prompts and utilizing learned semantic guidance to bypass extensive backbone tuning.
A cluster-conditioned model is introduced to formalize consistent subject generation from a clustering perspective. To mitigate overfitting, the tuning process is augmented with auxiliary samples and employs two inference strategies: semantic interpolation and cluster guidance, significantly enhancing generation quality.
Comprehensive experiments demonstrate that this method outperforms various baselines, providing superior subject consistency, prompt conformity, and high image quality. OneActor supports multi-subject generation, is compatible with popular diffusion extensions, offers faster tuning speeds, and can avoid increased inference time. Additionally, this paper proves that the semantic space of the diffusion model has the same interpolation properties as the latent space, presenting a promising tool for fine generation control.
Strengths: - The problem of changing the text prompt while preserving the ID is important.
- Learning semantic guidance to harness its internal potential is insightful.
- The auxiliary augmentation in one-shot tuning is novel in text-to-image applications.
- The code is provided.
- Quantitative results are clearly presented in Figure 7.
- According to Figure 6, this paper also considers multiple subjects.
Weaknesses: - All demos are in cartoon or artistic styles; no photorealistic results are shown. The objects are limited, raising questions about the fairness of the quantitative results.
- The CLIP score may be insufficient to measure ID similarity.
- Figure 2 is somewhat cluttered. Please focus on highlighting the most important technical contribution.
- No qualitative ablation study is provided.
- Efficiency is a concern. Few quantitative results on efficiency are provided. A processing time of 3-6 minutes may be too long for a text-to-image pipeline.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable reviews. We respond to each weakness and question individually below.
***
## Weakness 1: Limited styles and objects in the main part of the paper
The 9-page main part of the submission is limited in space, so we placed the most crucial comparisons with the baselines there and put more visual results in the Appendix. Because the baselines (TheChosenOne and ConsiStory) have not yet open-sourced their code, we can only compare to the results in their written materials, which accidentally limits the variety of styles and central subjects in the main part. We show more results of photorealistic styles and objects in Appendices B and C. Additionally, we provide more visual results concentrating on objects and photorealistic style in Figure 21 of the global rebuttal pdf. The results demonstrate that our method performs well in this setting.
## Weakness 2: CLIP score is insufficient
The CLIP-I score is the metric that the baselines utilize, so we followed them and reported it in the main part of our submission. Yet we completely agree that the CLIP-I score is insufficient to measure subject ID. Thus, we provided a more comprehensive quantitative evaluation in Appendix A.5, where we reported DINO and LPIPS scores as well. Besides these common metrics, we also notice that some human-aligned benchmarks have recently been proposed specifically to evaluate image consistency, such as DreamBench++ [1]. Due to the time limit, we are not able to perform comprehensive experiments on such benchmarks in the rebuttal stage, but we will try to add such experiments in the new version of our paper.
[1] DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation, arXiv2406.16855
## Weakness 3: Cluttered figure
We will revise the Figure 2 to highlight the projector and the cluster guidance methodology. Thanks for your valuable advice!
## Weakness 4: Qualitative ablation study
Since there is no space in the main text, we provided the illustration of the component ablation study in Appendix A.3 and the parameter analysis in Appendix A.4. Please refer to them for further review.
## Weakness 5: Quantitative efficiency results
We provided an efficiency analysis in Appendix A.2 and added efficiency statistics in Table 3 of the rebuttal pdf. Though our method takes about 5 minutes on average to tune, it doesn't increase the inference time as training-free baselines do. From the user's perspective, when generating <20 images for one subject from scratch, ConsiStory is the fastest, followed by our method and then TheChosenOne. However, considering the application scenarios, the user is very likely to demand large-quantity generation (>50 images). In this case, our method demonstrates definite advantages. For example, when generating 100 images, ConsiStory takes about 35 minutes while our method only needs about 23 minutes in total. If we set $\eta_2=0$, with an affordable compromise in generation details, our method reduces the total time to about 18 minutes. As the generation number increases, our advantage continues to grow.
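As a rough back-of-envelope sketch of the crossover point (the per-image rates below are derived from the 100-image totals above; they are estimates for illustration, not measured values):

```python
# back-of-envelope crossover estimate from the 100-image totals above
# (per-image rates below are derived assumptions, not measured values)
tune_min = 5.0                     # our one-time tuning cost, minutes
ours_per_img = (23.0 - 5.0) / 100  # ~0.18 min/image after tuning
consistory_per_img = 35.0 / 100    # ~0.35 min/image, no tuning stage

def total_ours(n):
    return tune_min + ours_per_img * n

def total_consistory(n):
    return consistory_per_img * n

# first generation count at which our pipeline becomes faster overall
crossover = next(n for n in range(1, 1000) if total_ours(n) < total_consistory(n))
```

Under these crude assumed rates the crossover lands at roughly 30 images; the exact point depends on the measured per-image times.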
***
We believe the rebuttal stage is highly meaningful, and we will add the rebuttal discussion to the new version of our paper as much as possible. If there is any omission in our response or if you have any additional question, please feel free to let us know. Thanks again for your great efforts!
---
Rebuttal 2:
Comment: Dear Reviewer SGyo,
We sincerely thank you for your valuable reviews. We have provided detailed responses to each of your questions in the rebuttal stage, which we hope address your concerns. If you have any further questions or there is any omission in our responses, please feel free to let us know. We would be extremely grateful if you could consider increasing the rating after reviewing our responses.
Thanks again for your great efforts!
---
Rebuttal Comment 2.1:
Comment: I continue to observe a data bias in this method, as most examples are in cartoon or artistic styles. The new examples provided by the author have intensified my concerns. Although the newly added examples feature real objects, they still exhibit a strong artistic influence (coffee, dog). This application is important, especially in real-world scenarios, but I believe the current presentation of the paper remains somewhat skewed. The author's explanation of limited space doesn't convince me. If a method is truly general, it should ideally demonstrate its effectiveness across a wider range of domains. I will keep my score unchanged temporarily, but I think we need to discuss the practical significance of this paper.
---
Reply to Comment 2.1.1:
Comment: Thank you so much for your further review and we would like to offer some clarifications regarding your concern.
Our method is a **one-shot tuning** pipeline based on the pre-trained text-to-image generation model StableDiffusionXL (SDXL). The whole procedure involves:
1) generating an **initial target image using SDXL**;
2) performing our one-shot tuning pipeline to **produce consistent images according to the target image**.
In each set of examples shown in our submission and rebuttal, the first image is the target generated by SDXL. For instance, in the 'dog' example in the rebuttal pdf (Fig. 21 middle top), the image of 'playing in a park' is generated by the base model SDXL. Our method then performs one-shot tuning based on this image to produce the subsequent images of 'standing by a fence', 'hopping in a meadow', and 'resting on a sofa'.
**What our method achieves is to maintain excellent consistency with the target image. The artistic influence you observed in images like the 'dog' and 'coffee cup' is due to the training bias of the SDXL model, rather than a limitation of our method.** During our experiments, we also observed that SDXL often produces animation-like images even when we use prompts like 'photorealistic', 'a photo of', and 'photography'.
In fact, when SDXL does produce truly photorealistic images like the ***'ring in a packing box', 'wallet on the silk fabric',*** and ***'man reading newspaper'*** in the rebuttal pdf (Fig. 21), **our method effectively maintains this photorealistic style**. According to the rebuttal rules, we are not able to provide additional illustrations, but we do believe the current qualitative examples sufficiently demonstrate both the theoretical contributions and practical values of our method.
We sincerely hope this response helps address your concerns. Thank you again for your thoughtful feedback and your efforts! | Summary: This paper proposes to formalize the consistent content generation problem from a clustering perspective. By designing a one-shot tuning paradigm with a cluster-conditioned model, the proposed pipeline OneActor can achieve faster tuning while maintaining superior subject consistency. Extensive experiments show that the semantic interpolation and cluster guidance can contribute to the high quality of consistent subject generation.
Strengths: 1. The formalization of consistent subject generation from a clustering perspective is novel and elegant.
2. This one-shot tuning paradigm with cluster guidance is creative and could pave a new path of fine control with diffusion models.
3. Experiments are extensive and solid, showing the superior performance compared to baseline methods.
4. This paper is well organized and of high quality.
Weaknesses: There is a typo in line 119 for the unconditional manner being repeated twice.
Technical Quality: 4
Clarity: 4
Questions for Authors: How do you design the projector network? What's the motivation behind the current design?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitation and social impact in Appendix D and E, respectively.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your commendation of our work. It is without doubt a tremendous encouragement for us. We respond to the weakness and question individually below.
***
## Weakness: Typo
We will carefully review our paper sentence-by-sentence and correct every typo. Thanks for your meticulous review!
## Question: The design of the projector network
This is a very meaningful question which we've discussed a lot ourselves. In our pipeline, the role of the projector network is to transform a token embedding into another, adjusted token embedding with respect to the visual feature. Its essence is a transformation inside one modality (semantic) conditioned on another modality (visual). This is similar to the style transfer task, which transfers one image style to another with respect to the image elements. Inspired by this, we utilize a projector based on AdaIN, a popular transformation layer in style transfer. The ResNet and linear layers are then naturally chosen to extract the visual feature and output the $\beta$ and $\gamma$ of the AdaIN layer. According to the experiments, our design functions well in the one-shot tuning setting. Furthermore, our method has the potential to be used to pre-train a consistent subject generation model from scratch. In this setting, we do believe there might be a better network design for the projector.
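As a minimal numpy sketch of this AdaIN-based projection (the dimensions and the linear heads below are hypothetical placeholders; the actual projector uses a ResNet feature extractor and learned layers):

```python
import numpy as np

rng = np.random.default_rng(1)
d_tok, d_vis = 8, 16  # hypothetical token-embedding / visual-feature dimensions

# hypothetical linear heads standing in for the ResNet + linear layers
W_gamma = 0.1 * rng.standard_normal((d_tok, d_vis))
W_beta = 0.1 * rng.standard_normal((d_tok, d_vis))

def adain_project(token, vis_feat, eps=1e-5):
    # predict the AdaIN scale/shift from the visual feature,
    # then normalize the token embedding and re-style it
    gamma = W_gamma @ vis_feat
    beta = W_beta @ vis_feat
    normed = (token - token.mean()) / (token.std() + eps)
    return gamma * normed + beta

token = rng.standard_normal(d_tok)
adjusted = adain_project(token, rng.standard_normal(d_vis))
assert adjusted.shape == (d_tok,)
```

The design choice mirrors style transfer: the normalization strips the token's own "style", and the visually-conditioned $\gamma$ and $\beta$ re-impose a subject-specific one.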
***
We believe the rebuttal stage is highly meaningful, and we will add the rebuttal discussion to the new version of our paper as much as possible. If there is any omission in our response or if you have any additional question, please feel free to let us know. Thanks again for your great efforts!
---
Rebuttal Comment 1.1:
Comment: I have read all the reviewers' comments and the authors' responses. My concerns have been addressed by the authors. I will not reduce my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer iFKj,
We greatly appreciate your thoughtful comments and are pleased that you found our responses satisfactory. We will incorporate these important discussions to revise the final manuscript.
Thank you once again for your valuable suggestions and feedback. We really enjoy discussing with you and sincerely appreciate your efforts. | Rebuttal 1:
Rebuttal: We thank all the reviewers, ACs, SACs and PCs for reviewing so many papers and providing insightful and objective opinions.
We are so grateful that:
- __All the reviewers give our work positive ratings (7556)__, which is a tremendous encouragement for us.
- Reviewer iFKj highly recommends our work for the novel and elegant formalization, the creative methodology and the solid experiments.
- Reviewer SGyo confirms the importance of our work, the insightful guidance strategy and the novel augmentation approach.
- Reviewer 7cPE appreciates the justifiable motivation, the exquisite illustrations and the practical value of our work.
- Reviewer BPQh highlights the creative insights, the detailed derivation and the comprehensive experiments of our work.
Meanwhile, all the reviewers offered valuable questions and suggestions. We have responded to every reviewer individually in each separate rebuttal, accompanied by __a global rebuttal pdf__ of figures and tables below. Please refer to the pdf as well.
We believe the rebuttal stage is highly meaningful, and we will add the rebuttal discussion to the new version of our paper as much as possible. If there is any omission in our response or if you have any additional question, please feel free to let us know. Thanks again for your great efforts!
Pdf: /pdf/3eebd6e6de593332b26e348c931ff102f3b3c712.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unleashing the Denoising Capability of Diffusion Prior for Solving Inverse Problems | Accept (poster) | Summary: Authors propose ProjDiff which reframes noisy inverse problems with diffusion models as a two-variable constrained optimization by introducing an auxiliary optimization variable. Authors derive a two-variable ELBO as a proxy for the log-prior and solve the optimization problem via projection gradient descent. Authors conduct comprehensive experiments on several image restoration tasks (super-resolution, inpainting and deblurring), source separation, and partial generations tasks.
Strengths: * Overall, paper is easy to follow and structured nicely.
* I appreciate extensive numerical results that are presented in the paper.
* Numbers for source separation look significantly better than the baselines (I have to note that I'm not an expert in that area)
Weaknesses: * Prior work section is not comprehensive.
* The gains in performance compared to baseline methods (especially for noisy restoration tasks) is not convincing. Moreover, hyperparameter search for baseline models were not conducted and wrong baseline method was used (DDNM for measurements with noise).
* See the questions below.
Technical Quality: 3
Clarity: 2
Questions for Authors: * I have several questions regarding DDNM numbers. First of all, DDNM [1] is designed for noise-free image reconstruction problems. I believe that is the reason why DDNM on Gaussian deblurring performs bad (as in Table 1: 7db PSNR, 0.03 SSIM, etc.). I would suggest the authors to switch to the DDNM+ variant as described in [1] for noisy inverse problems. If authors are already using DDNM+, could you clarify why the performance is bad on Gaussian deblurring?
* line 37-39: "However, it’s worth noting that, since diffusion models are inherently effective at denoising, considering the observation noise in the likelihood term fails to fully leverage diffusion models’ denoising capability." Could the authors clarify what it means to not fully leverage diffusion models here?
* I would recommend the authors to include $\Pi$GDM [2] in their comparisons especially since it performs much better than DDRM.
* Some other missing citations on solving inverse problems with diffusion models: CCDF [3], latent diffusion models: PSLD [4]
* line 663-665: "Since [23] did not conduct experiments on CelebA, we use the parameters on FFHQ for the CelebA dataset as both FFHQ and CelebA are datasets of faces.". I don't think this is a good practice. In my experience, even though both CelebA and FFHQ are face datasets, DPS is not robust to the choice of step size. I believe some hyper-parameter search on a small set of images is necessary for fair comparison.
* Do you think ProjDiff can be extended for using latent diffusion models as a prior?
---
References:
[1] Wang, Yinhuai, Jiwen Yu, and Jian Zhang. "Zero-shot image restoration using denoising diffusion null-space model." arXiv preprint arXiv:2212.00490 (2022).
[2] Song, Jiaming et al. “Pseudoinverse-Guided Diffusion Models for Inverse Problems.” International Conference on Learning Representations (2023).
[3] Chung, Hyungjin, Byeongsu Sim, and Jong Chul Ye. "Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[4] Rout, Litu, et al. "Solving linear inverse problems provably via posterior sampling with latent diffusion models." Advances in Neural Information Processing Systems 36 (2024).
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are adequately addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's affirmation of the effectiveness of our method, especially the outstanding performance in music separation and partial generation tasks. Below are our responses.
1. DDNM+ numbers for noisy Deblurring tasks
We are very grateful to the reviewer for pointing out this issue. We used the official code of DDNM in the noisy linear inverse problem. We double-checked the official code and confirmed that it indeed calls the DDNM+ algorithm. However, we found that the implementation of Gaussian Deblurring in the official code seems to be incomplete, which is the reason for the abnormal results in our paper.
We re-implemented the Gaussian Deblurring component and tested it on both ImageNet and CelebA. The corrected results are shown in Table 5 of the attached PDF, where DDNM+ achieves reasonable results. We also checked Super-resolution and Inpainting and confirmed that their implementations are correct. We will correct the experimental results in our paper and note that we used the DDNM+ algorithm for the noisy tasks. We thank the reviewer once again for helping us correct this error.
2. ''What it means to not fully leverage diffusion models''
Our observation is that diffusion models not only model the distribution of clean data $p(x_0)$, but they also implicitly model the distribution of noisy data with different variances of Gaussian noise $p(x_t)$ within the diffusion process. This means that compared to other types of data priors, diffusion models can provide more usable information for solving inverse problems with Gaussian noisy observations. Therefore, if solved within a conventional MAP framework, only the clean data prior modeled by the diffusion model would be utilized while the noisy data prior within the diffusion model is neglected.
In contrast, we introduce an auxiliary variable to transform the noisy observation into an equivalent noise-free observation of the noisy sample, thereby utilizing both the clean data prior and the noise data prior. The process of recovering clean data from the noisy auxiliary variable can be regarded as a denoising process, hence our method actually utilizes both the data prior modeled by the diffusion model and its denoising capability.
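As a minimal sketch of this reparametrization (our illustration only, in VE-style notation and for the special case of an identity observation operator; the paper's general construction handles arbitrary linear operators):

```latex
% Special case \mathcal{A} = I, VE-style notation (illustration only):
y = x_0 + \sigma n, \qquad n \sim \mathcal{N}(0, I).
% The VE forward process gives x_t = x_0 + \sigma_t \varepsilon, so choosing the
% auxiliary step t_a with \sigma_{t_a} = \sigma makes y a sample of x_{t_a}:
y \overset{d}{=} x_{t_a}, \qquad \sigma_{t_a} = \sigma.
% The noisy observation is thus a noise-free observation of the auxiliary
% variable x_{t_a}, and recovering x_0 from x_{t_a} is a denoising task.
```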
3. More comparative algorithms and related works
Thanks for the reviewer's suggestion. We supplement a comparison of ProjDiff with $\\Pi$GDM, together with ReSample, DiffPIR, and DMPS on the CelebA dataset. The results are shown in Table 1 & 2 of the attached PDF. ProjDiff still demonstrates competitive performance.
For $\\Pi$GDM, we carefully conducted hyperparameter searches for each task using the first eight images with the average PSNR as the metric. Except for the noisy Gaussian Deblurring task, we avoided using other algorithms to initialize $\\Pi$GDM to ensure a fair comparison. Specifically, we first use the forward transition to map the degraded image to the noise level corresponding to the 500th step, and then use 100 steps of $\\Pi$GDM to solve the inverse problem. We find that this approach is much better than starting directly from white noise (i.e., the 1000th step). For the noisy Gaussian Deblurring task, $\\Pi$GDM without initialization from other algorithms struggled to achieve satisfactory results, so we use DDNM+ to obtain the sample at the 500th step and then switch to $\\Pi$GDM to complete the solution.
We notice that $\\Pi$GDM does not always outperform our reproduced DDRM. This may be because we report the performance of DDRM using 100 steps, which is much better than its default setting (20 steps).
We will further refine the explanation of related work according to the reviewer's suggestions.
4. DPS on CelebA
Thanks for the reviewer's suggestion. We have re-adjusted the learning rate hyperparameter $\\eta$ for DPS on the CelebA dataset task by task. Using the average PSNR of the first eight images as the criterion (consistent with the method we used to adjust the hyperparameter of ProjDiff), we carefully scanned the learning rate hyperparameter with a precision of 0.1. The latest results are shown in Table 1&2 of the attached PDF (denoted as DPS-s). We will update these results in the paper and include an explanation.
5. ProjDiff for latent diffusion
Yes, we believe ProjDiff can be applied in latent diffusion. The main challenge in solving inverse problems with latent diffusion priors is the high nonlinearity of the observation equation introduced by the neural encoder-decoder. ProjDiff has the capability to handle nonlinear observations. Firstly, for noise-free observations, ProjDiff can approximate the projection operation by taking $x_0$ as the initial point and minimizing $||y-\\mathcal{A}(\\mathcal{D}(x))||^2$ (where $\\mathcal{D}$ represents the neural Decoder). Secondly, for noisy observations, ProjDiff also needs to set an equivalent noise level, which can be left as an adjustable hyperparameter. Our supplemental experiments in Table 7 of the attached PDF have verified that ProjDiff is robust to some perturbations in the noise variance (i.e. equivalent noise level), making manual adjustment of the equivalent noise level feasible. We believe this is a promising direction for future work.
We would like to express our gratitude once again for the reviewer's suggestions for our work. Should there be any further questions, we welcome continued discussion.
---
Rebuttal 2:
Comment: I would like to thank the time and effort authors put into their rebuttal. I appreciate that they have incorporated all the feedback into providing a fairer comparison (fixing DDNM+, tuning DPS step size, etc.) against baseline methods especially in a short window of time.
Most of my concerns are alleviated and my questions are answered. I've also read the comments of other reviewers (and authors' response to them). I believe the results are more convincing with the updated numbers and the proposed changes. Therefore, I'm happily updating my score to $6$ and increasing soundness to $3$.
---
Rebuttal Comment 2.1:
Comment: We would like to express our heartfelt gratitude for your recognition and raising the score! Thank you once again for your kind consideration! | Summary: This paper proposed a new sampling strategy for solving noisy inverse problems using diffusion models. The proposed method is called ProjDiff, which is based on two-step minimization of log-posterior using gradient descent. The sampling procedure is derived by (1) Reparametrization of the noisy measurement as a noiseless measurement of an intermediate noisy sample in the diffusion process, and (2) obtaining a proxy objective from the variational lower bound.
Strengths: + The paper's approach to addressing the noisy inverse problem using the reparametrization of the measurement is interesting.
+ Deriving the sampling procedure by directly minimizing the evidence lower bound is innovative.
+ The performance is competitive with the SOTA methods in solving inverse problems.
Weaknesses: + The paper is not well-written and it is hard to follow. Justification:
1. Theoretical analysis results lack organization. Example: Lemma 1 and Lemma 2 should be separated from the main proof. The proofs for the propositions should also be separated and distinctly expressed.
2. Theoretical results should be explicitly included. The proofs for Lemma 1 and Lemma 2 are not included in the text. If taken from a source, then the reference should be mentioned, otherwise, the proofs should be included.
3. Abstract includes some concerning statements.
>Since inverse problems inherently entail maximum a posteriori estimation, previous works have endeavored to integrate diffusion priors into the optimization frameworks.
Inverse problems could be solved using MMSE estimation. It is unclear what kind of optimization algorithm the authors refer to.
> by introducing an auxiliary optimization variable. By employing gradient truncation, the projection gradient descent method is efficiently utilized to solve the corresponding optimization problem.
The abstract does not reflect the method used. The statement is vague and could be inferred in many different ways.
Revising the abstract to better reflect the proposed method can strengthen the paper.
4. The second and third paragraphs of the introduction need to be revised. Instead of briefly mentioning what each existing method does, the authors could state how their method differs from the literature and how it contributes to it.
* The truncation of the gradient is based on an assumption and leads to an approximation in the solutions. Thus, the reduced computational cost should not be mentioned as a contribution of the paper, especially since the authors have not included any experimental results regarding this.
> 44: Through gradient truncation, we obtain an approximation of the stochastic gradient of the objective, effectively sidestepping significant computational overhead.
Overall, the weaknesses of the paper are mostly related to its presentation and blurred message, not the proposed method or its effectiveness.
Minor issues:
> Line 19: Their remarkable ability to capture data priors enables effective guidance for solving inverse problems [6], which are widely exploited in image restoration.
Guidance in inverse problems usually refers to enforcing data consistency with the measurement and relies on the forward model.
Technical Quality: 3
Clarity: 2
Questions for Authors: What does the author mean by weak observations? (Line 214)
How does the algorithm look when used for nonlinear inverse problems? Can equation (8) be written for nonlinear measurements?
How do the authors set the $\eta_1$, $\eta_2$, and equivalent noise level in the algorithms?
Are the solutions sensitive to these parameters?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Partially addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's recognition of the innovation and effectiveness of our work. Below are our responses.
1. Reorganizing the theoretical analysis and the proofs of the lemmas.
Thanks for the reviewer's suggestion. We will move Lemma 1 and 2 ahead of the proofs of the Propositions, and we will present the proofs of the two Propositions in a clear and independent manner. The proofs of Lemma 1 and 2 are based on Bayes' theorem and mathematical induction, and we will supplement the proofs of this part.
2. Abstract includes some concerning statements.
Thanks for the reviewer's feedback. We will revise the abstract accordingly.
The first sentence will be revised as:
''Previous works have endeavored to integrate diffusion priors into the maximum a posteriori estimation (MAP) framework and design optimization methods to solve the inverse problem.''
The second sentence will be revised as:
''... by introducing an auxiliary optimization variable that represents a 'noisy' sample at an equivalent denoising step. The projection gradient descent method is efficiently utilized to solve the corresponding optimization problem by truncating the gradient through the $\\mu$-predictor.''
We will further refine the abstract to make it more fluent and reflective of our approach.
3. The authors could state how their method is different from the literature and how it contributes to the literature.
Compared to previous works that apply diffusion in the MAP framework for solving inverse problems, our core observation is that diffusion models not only model the distribution of clean data but also implicitly model the distribution of noisy data. This allows us to leverage both the clean data prior and the noisy data prior to solve noisy inverse problems (in other words, both the prior modeled by the diffusion model and its denoising capabilities). Thanks for the reviewer's suggestions, and we will further improve the expression of paragraph 3 and paragraph 4.
4. The reduced computational cost
Thanks for the reviewer's comments. We did not intend to present the reduction of complexity as a contribution of this paper. Line 44 will be revised to:
''We obtain a more practical approximation of the stochastic gradient of the objective through gradient truncation.''
Additionally, we have added an experiment comparing the use of our gradient approximation with not using it, as shown in Table 6 of the attached PDF. It can be observed that using the approximation results in a certain loss of performance, but the efficiency is nearly tripled. Considering that the method using this approximation has already achieved satisfactory performance, we accept a certain performance loss in exchange for efficiency.
5. Line 19.
Thanks for the reviewer's suggestions. Line 19 will be revised to
''Their remarkable ability to capture data priors provides promising avenues for solving inverse problems [6]...''
6. The weak observations problems.
We refer to inverse problems where the constraints provided by the observation are relatively loose as ''weak observation problems''. For instance, the partial generation task in this paper aims to generate tracks for other instruments given certain tracks (e.g., generating drum, bass, and guitar tracks given a piano track). This problem is highly flexible, as a composer can create many different tracks for the other instruments from the same piano track while ensuring harmony; it therefore belongs to the ''weak observation problems''. In contrast, tasks such as super-resolution, inpainting, and deblurring in image restoration are cases where the observation strongly constrains the original data. For example, given a low-resolution image, its corresponding high-resolution image is almost unique, with lower degrees of freedom, and thus it does not belong to the ''weak observation problems''.
7. Regarding nonlinear measurements
The algorithm for nonlinear measurements is almost identical, except that the projection operations in equations (19) and (22) are replaced with the projection operations corresponding to nonlinear measurements, or by minimizing $||y-\\mathcal{A}(x)||^2$ from the initial point to approximate the projection operation.
For general nonlinear measurements, equation (8) cannot be strictly written. However, we can still apply a similar idea to handle nonlinear measurements, which is to simply find an equivalent noise level to deal with noisy observations. This equivalent noise level can either be calculated using rules or retained as a hyperparameter to adjust. Our Phase Retrieval experiments and HDR experiments have verified the effectiveness of our method.
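The approximate-projection idea above can be sketched in a few lines. This is a toy illustration under our own assumptions, not the paper's implementation: the helper names (`approx_projection`, `vjp_A`), the step count, the step size, and the elementwise-squaring observation operator are all invented for the example.

```python
import numpy as np

def approx_projection(x0, y, A, vjp_A, steps=200, lr=0.1):
    """Approximate the projection of x0 onto {x : A(x) = y} by gradient
    descent on 0.5 * ||y - A(x)||^2, starting from x0.
    vjp_A(x, r) should return J_A(x)^T @ r, the vector-Jacobian product of A.
    All names and defaults here are illustrative, not from the paper."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        r = A(x) - y              # residual against the observation
        x = x - lr * vjp_A(x, r)  # gradient step on the data-fidelity term
    return x

# Toy nonlinear observation: elementwise squaring (a magnitude-style operator).
A = lambda x: x ** 2
vjp = lambda x, r: 2.0 * x * r    # J_A(x)^T r for A(x) = x^2

x_true = np.array([1.0, 1.0, -1.0])
y = A(x_true)                     # noise-free nonlinear observation
x0 = np.array([0.9, 1.2, -0.8])   # starting point, e.g. a denoised estimate

x_proj = approx_projection(x0, y, A, vjp)
print(np.allclose(A(x_proj), y, atol=1e-3))
```

Starting the descent from the current sample (rather than a random point) is what makes the result a reasonable stand-in for a projection: the iterate stays close to the initial point while being driven onto the observation manifold.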
8. The hyperparameters
All of the details of our hyperparameters are provided in Appendix E and F. Regarding the sensitivity of hyperparameters, we supplement a new experiment in Table 7 of the attached PDF. It can be observed that our algorithm exhibits robustness to the step size and the noise variance.
We would like to express our gratitude once again for the reviewer's suggestions for our work. Should there be any further questions, we welcome continued discussion.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thank you for providing the rebuttal.
After reading the reviews from all the reviewers and rebuttals, I would update my score to 6.
---
Reply to Comment 1.1.1:
Comment: We would like to extend our sincerest gratitude for raising the score! We are greatly encouraged! | Summary: The authors present a new way of solving inverse problems using a diffusion model based prior.
The key idea is that the forward process, which is to be undone, usually involves Gaussian noise.
The authors utilize this by viewing the noisy observation as a random variable at an intermediate time step t of the diffusion forward process.
This allows them to formulate the inverse problem as a joint optimisation problem for x_0 and x_t given the noisy observation.
As a by-product, their method can also solve inverse problems that do not involve noise.
The authors evaluate their method and achieve superior results compared to various baselines.
Strengths: * I believe this is a principled new perspective on diffusion models for inverse problems and I am sure it can be potentially useful for many practical applications.
* The method achieves remarkable results.
* I very much appreciate the fact that the method can be applied to noise-free inverse problems, without having to artificially include non-existent Gaussian noise in the model.
Weaknesses: * The motivation, comparing the method to the more established MAP approach, could be a bit more clear.
The authors write:
"In this work, we introduce a novel two-variable ELBO by constructing an auxiliary variable that accounts for the observation noise, thereby utilizing both the prior information and the denoising capability in diffusion models simultaneously."
Why exactly is it desirable to utilize the denoising capability of the model? Is there something wrong with the normal way of including the log of the forward model in the optimisation task? Why can we expect an improvement? Is it a problem of optimisation? Surely the task itself, as normally formulated (e.g. [28]), is correct, or not?
* Some aspects of the method were difficult for me to follow. This may be due to my inexperience with some of the involved mathematics.
In particular, I wonder if the SVD based decoupling could be explained or illustrated differently to facilitate understanding.
Technical Quality: 3
Clarity: 3
Questions for Authors: Regarding the SVD decomposition, I believe I am missing an important aspect about how the noisy observation can be considered as a step in the diffusion process when it involves A.
Let's say A corresponds to a convolution with a blur kernel. Would this not mean that we are considering a blurred noisy image to be part of the diffusion process? This would be incorrect, because the blurring would not be part of the training data statistics, which would only include clean images, right?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: My understanding is that the method is limited to image degradation processes that either involve Gaussian noise or are noise-free.
Is this the case?
If yes, this would be a significant limitation with respect to practical applicability, e.g. in settings with Poisson shot noise.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's recognition of the innovation, effectiveness, and practicality of our work. Below are our responses to the reviewer's questions.
1. Clarification on the motivation and comparison to the established MAP approach.
The core motivation of this paper stems from the fact that diffusion models not only model the distribution of clean data $p(x_0)$, but also simultaneously model the distributions of noisy data with varying variances of additive Gaussian noise $p(x_t)$. When observing degraded data with Gaussian noise, the noisy data distribution provides more usable information for solving the inverse problem. Therefore, compared to using only the prior information modeled by the diffusion model for $x_0$, we choose to use the prior information of both $x_0$ and $x_{t_a}$. Our algorithm can be viewed as two steps: first, recover the noisy sample $x_{t_a}$, and then use the consistency constraints of $x_{t_a}$ and the prior of $x_0$ to recover $x_0$ (which can be viewed as a denoising process). In other words, we utilize both the prior modeled by the diffusion model and its denoising capabilities.
The conventional approach of including the logarithm of the forward model in the optimization task is also feasible, as seen in [24, 28] in the main paper, and we have compared with it in our experiments (RED-diff). The improvement of our algorithm compared to these methods can be expected because our method utilizes more information modeled by the diffusion model, i.e., it uses both the prior and denoising capabilities (or both the clean data prior and the noisy data prior). We believe this is the essential difference between our algorithm and the methods that include the log of the forward model and $\\log p(x_0)$ modeled by diffusion models in the MAP framework.
2. Further explanation on SVD-based decoupling.
Here we provide additional clarification regarding the description in Section 2.3. Firstly, since $V$ is orthogonal, knowing the prior of $x_0$ is equivalent to knowing the prior of $\\overline x_0=V^T x_0$. Secondly, since the covariance matrices of both the forward transition and a single-step backward transition of the diffusion model are diagonal, each coordinate can be processed individually. Equation (7) presents the observation equation in a coordinate-wise form: for each coordinate index $i$, the $i$-th coordinate of the noisy observation $\\overline y_i$ can be considered as a noise-free observation of the $i$-th coordinate of the sample $\\overline x_{t_i}$ at step $t_i$. In other words, each coordinate of the noisy observation (after spectral decomposition) can be considered as a noise-free observation of the corresponding coordinate at a certain step in the diffusion process.
Regarding the reviewer's concern about a convolution with a blur kernel, it is not about viewing the noisy blurred image as a step in the diffusion process, but rather, after SVD, each coordinate is treated as the noise-free observation of the coordinate corresponding to a certain step in the diffusion model. We hope the above explanation could describe our method more clearly. Thanks for the reviewer's suggestions, and we will make further modifications and explanations regarding the SVD part of the paper.
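To make the coordinate-wise argument concrete, here is a sketch of the standard SVD manipulation in VE-style notation (our illustration only; the exact constants in equation (7) of the paper may differ):

```latex
% SVD of the observation operator; Gaussian measurement noise:
A = U \Sigma V^T, \qquad y = A x_0 + \sigma n, \quad n \sim \mathcal{N}(0, I).
% Left-multiplying by \Sigma^{\dagger} U^T (for singular values s_i > 0):
\bar{y} := \Sigma^{\dagger} U^T y = \bar{x}_0 + \sigma \Sigma^{\dagger} \bar{n},
\qquad \bar{x}_0 := V^T x_0, \quad \bar{n} := U^T n.
% Coordinate-wise: \bar{y}_i = \bar{x}_{0,i} + (\sigma / s_i)\, \bar{n}_i, which
% matches the VE forward marginal at the step t_i chosen so that
% \sigma_{t_i} = \sigma / s_i. Hence each coordinate \bar{y}_i is a noise-free
% observation of the i-th coordinate of the diffusion iterate at step t_i.
```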
3. Regarding the limitations.
Yes, the limitation of this method is that it can only handle noise-free inverse problems or inverse problems with additive Gaussian noise. Addressing other types of noise, such as Poisson noise or multiplicative noise, is a promising direction for further research. In practical applications, noise-free inverse problems or inverse problems with additive Gaussian noise are widespread, hence there is a series of work in the field of solving inverse problems with diffusion models that assumes Gaussian noise (including but not limited to [18, 20, 21, 22, 26, 27] in the main paper). Therefore, our work holds practical significance.
We would like to express our gratitude once again for the reviewer's suggestions for our work. Should there be any further questions, we welcome continued discussion.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal.
Comment: Thank you for the clarifications!
**Regarding the motivation:**
*"The conventional approach of including the logarithm of the forward model in the optimization task is also feasible, as seen in [24, 28] in the main paper, and we have compared with it in our experiments (RED-diff). The improvement of our algorithm compared to these methods can be expected because our method utilizes more information modeled by the diffusion model, i.e., it uses both the prior and denoising capabilities (or both the clean data prior and the noisy data prior). We believe this is the essential difference between our algorithm and the methods that include the log of the forward model and modeled by diffusion models in the MAP framework."*
I am afraid I still can't completely follow here. Surely the correct prior together with the correct forward model should provide (or be proportional to) the correct posterior. Maximising the posterior will provide the true MAP estimate. How can *"additional information"* beyond this be useful?
Are you saying your method is better at finding the MAP estimate (compared to the conventional approach) or is your method finding something better than the MAP estimate?
---
Reply to Comment 1.1.1:
Title: Thanks for the reply.
Comment: We greatly appreciate the reviewers' careful consideration of our rebuttal! The issue can be explained from two perspectives:
On one hand, ideally, if we are given perfectly accurate priors and likelihoods, and have sufficient computational capability, we could obtain the exact posterior and also its gradients. Within the MAP framework, this should lead to the ideal results. However, in practice, the priors provided by diffusion models are not entirely accurate, and the weight coefficients between the priors and the likelihood cannot always be perfectly set. Moreover, dealing with the priors in diffusion models relies on stochastic optimization methods. These factors imply that the MAP framework often fails to achieve the desired solution. In such cases, further exploring the information or capabilities within the diffusion model can provide substantial assistance in solving inverse problems and enhance performance. This is one of the reasons why our algorithm has a performance advantage over the original MAP framework.
On the other hand, MAP methods utilize gradient descent to leverage the information from the observation, while ProjDiff transforms this into a projection operation by introducing noisy auxiliary variables. This approach offers numerous advantages: there is no need to consider the step size of gradient descent (at least for the likelihood term); there is no need to consider the weight coefficients between the likelihood and prior terms; and it ensures consistency between noisy samples and observations (the role of this consistency has also been confirmed in the noise-free scenario). Moreover, the introduction of this auxiliary variable indicates that ProjDiff can actually be viewed as simultaneously recovering clean data and the noise added to the observations, which can yield more accurate results than merely characterizing the noise prior with a Gaussian distribution. These transformations are all thanks to ProjDiff's utilization of both the clean prior and noisy prior modeled by diffusion models (i.e., the prior modeled by the diffusion model and its denoising capability). This is why we claim that we "have utilized more information from the diffusion model than the original MAP framework".
We would like to express our gratitude once again to the reviewer for the response! We hope that these additional answers will make our motivation and the reasons for the performance advantages of our algorithm clearer. | Summary: This paper proposes ProjDiff for solving inverse problems with pre-trained diffusion models. By deriving a two-variable ELBO as a proxy for the log-prior, this paper reframes the inverse problems as constrained optimization tasks and address them via the projection gradient method.
Strengths: 1. The paper writing is clear and easy to follow.
2. The derivation of some formulas in this paper is solid.
Weaknesses: 1. The assumption this method makes about the measurement noise in Section 3.1 is questionable. (1) As the authors acknowledge in the limitations section, assuming Gaussian noise limits its applicability to other noise types such as Poisson or multiplicative noise; (2) The whole method design relies heavily on knowing the exact standard deviation $\sigma$ of the Gaussian noise, which is impractical and can lead to robustness issues when facing an unknown noise level. In fact, noise estimation itself is a challenging problem, particularly on the degraded measurement $A(x)$. Please check the related works [1,2]. In general, this method is built on an impractical scenario, so I am worried about its practical usage.
2. The approximation for implementing the update rules just after Proposition 2 is not convincing. (1) One of the reasons given for this approximation is that "the μ-predictor of the diffusion model should be resilient to small perturbations in the input", but this is usually not the case, particularly when t is large, i.e., at the beginning of the reverse sampling procedure; (2) There is not much evidence for this important approximation, whether theoretical or empirical.
3. The proposed method is too "delicate", as shown in Algorithm 1. (1) Again, this method needs to know the measurement noise level; (2) this method contains several hyper-parameters. In Table 17, it is clear that the hyper-parameters are heavily tuned, and there is no specific guidance for hyper-parameter tuning.
4. No ablation studies for the hyper-parameters. For this "delicate" method, a systematical ablation study is needed. For example, it would be good to report the results for different combinations of some important hyper-parameters.
5. The extension to nonlinear inverse problems is not convincing. This paper proposes to use min_x || y - A(x) ||^2 to approximate the projection operator. However, it is almost impossible to solve nonlinear inverse problems using this formulation because it can easily get stuck in local minimizers. Also, if this formulation could solve nonlinear inverse problems well, then there would be no need to write this paper.
6. The experimental settings for phase retrieval are questionable. (1) I highly suspect that this paper downplays DPS in some implicit way. I ran DPS for phase retrieval on FFHQ before, and I remember DPS could achieve over 30 dB PSNR after trying different initializations. In the original DPS paper, the authors state that DPS needs different initializations, but this paper seems to omit this; (2) there is no comparison with the gold-standard method for phase retrieval, i.e., HIO+ER.
7. Some recent SOTA methods are missing for comparison, including ReSample[3] (ICLR'24 spotlight), DiffPIR[4] (CVPR'23), DMPS[5]. Please check the recent survey for more related works [6].
[1] Liu, X., Tanaka, M. and Okutomi, M., 2013. Single-image noise level estimation for blind denoising. IEEE transactions on image processing, 22(12), pp.5226-5237.
[2] Li, F., Fang, F., Li, Z. and Zeng, T., 2023. Single image noise level estimation by artificial noise. Signal Processing, 213, p.109215.
[3] Song, B., Kwon, S.M., Zhang, Z., Hu, X., Qu, Q. and Shen, L., 2023. Solving inverse problems with latent diffusion models via hard data consistency. arXiv preprint arXiv:2307.08123.
[4] Zhu, Y., Zhang, K., Liang, J., Cao, J., Wen, B., Timofte, R. and Van Gool, L., 2023. Denoising diffusion models for plug-and-play image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1219-1229).
[5] Meng, X. and Kabashima, Y., 2022. Diffusion model based posterior sampling for noisy linear inverse problems. arXiv preprint arXiv:2211.12343.
[6] Li, X., Ren, Y., Jin, X., Lan, C., Wang, X., Zeng, W., Wang, X. and Chen, Z., 2023. Diffusion Models for Image Restoration and Enhancement--A Comprehensive Survey. arXiv preprint arXiv:2308.09388.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's suggestions, and here are our responses.
1. ''The assumption for the measurement noise is doubtable.''
We respectfully disagree. Firstly, in practical applications, noise-free inverse problems or inverse problems with Gaussian noise are ubiquitous, and the standard deviation of the Gaussian noise can be estimated using the method mentioned by the reviewer or retained as an adjustable hyperparameter. Secondly, a series of works (including but not limited to [18, 20, 21, 22, 26, 27] in the main paper) that use diffusion models to solve the inverse problems assume additive Gaussian noise and known variance (DMPS mentioned by the reviewer also assumes additive Gaussian noise with known variance, and the DiffPIR mentioned by the reviewer also assumes the variance is known). We believe that these works, as well as ours, have practical value. Lastly, as shown in the ablation study in response to point 4, ProjDiff is robust to some perturbation of the standard deviation. Therefore, our work holds practical significance.
2. ''The approximation after Proposition 2 is not convincing.''
We respectfully disagree. Firstly, this approximation is acceptable when $t$ is small, as the $\\mu$-predictor generally learns well to recover the clean image from a slightly noisy image. Secondly, for large $t$, in the case of VP diffusion, note that the coefficient $\\overline\\alpha_t$ in front of $x_0$ or $x_{t_a}$ is small, thus the gradients corresponding to $x_0$ and $x_{t_a}$ are also small. For VE diffusion, similarly, the dominant term in $\\mu$ will also be the noise term when $t$ is large, making it insensitive to the input $x_0$. Our experimental results have demonstrated the effectiveness of this approximation. Furthermore, we supplement a new experiment that uses the stochastic gradient without this approximation, as shown in Table 6 of the attached PDF (denoted as ProjDiff-FG). It can be observed that using this approximation results in a certain loss of performance, but the efficiency is nearly tripled. Considering that applying this approximation has already achieved satisfactory performance, we are willing to accept a certain performance loss in exchange for efficiency.
3. ''The proposed method is too 'delicate'.''
We respectfully disagree. Once again, assuming knowledge of the variance of Gaussian noise is a default setting in many works within the field. Secondly, as we explain in Appendix F, we fix $\eta_2=1.0$, thus only $\eta_1$ needs to be tuned (as well as the equivalent noise level $\overline\alpha_{t_a}$ if we leave it as a hyperparameter to be tuned). We conduct hyperparameter tuning on the first eight images for each task. Given that the complexity of ProjDiff is not too high, tuning hyperparameters is not time-consuming. In the ablation study in response to point 4, we also demonstrate that ProjDiff is robust to some perturbation of the step size and the standard deviation (i.e. the equivalent noise level). To the best of our knowledge, many related works have at least one hyperparameter that needs to be adjusted, including the algorithms mentioned by the reviewer.
4. Ablation studies for hyper-parameters
We supplement the ablation study concerning $\eta_1$ and $\sigma_0$ in Table 7 of the attached PDF. ProjDiff exhibits robustness to some perturbation of the step size and the noise standard deviation.
5. ''The extension to nonlinear inverse problems is not convincing''.
The reviewer has misunderstood our approach. One of the core aspects of ProjDiff is to project a sample $x_0$ onto the manifold corresponding to $y=\mathcal{A}(x)$. When $\mathcal{A}$ is nonlinear and its projection operator is unavailable, one can instead find a solution that is close to $x_0$ and close to the observation manifold by solving $\min_x||y-\mathcal{A}(x)||^2$ with gradient descent starting from $x_0$. This solution serves as an approximation of the projection operation for a nonlinear observation equation. We also supplement an experiment to validate the effectiveness of this method on the Phase Retrieval task (ProjDiff-GD in Table 4 of the attached PDF).
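To make this idea concrete, here is a minimal sketch of the gradient-descent approximation to the projection, using a toy magnitude observation $\mathcal{A}(x)=|x|$ as a 1-D stand-in for phase retrieval. This is our illustration, not the authors' implementation; the step size, iteration count, and observation operator are all illustrative assumptions.

```python
import numpy as np

def approx_projection(x0, y, grad_loss, lr=0.05, n_steps=500):
    # Approximate the projection of x0 onto {x : A(x) = y} by running
    # gradient descent on ||y - A(x)||^2 starting from x0; starting at
    # x0 keeps the solution close to it while moving onto the manifold.
    x = x0.copy()
    for _ in range(n_steps):
        x = x - lr * grad_loss(x, y)
    return x

# Toy nonlinear observation A(x) = |x| (a 1-D stand-in for the
# magnitude measurements in phase retrieval).
def grad_loss(x, y):
    # gradient of ||y - |x|||^2 with respect to x
    return -2.0 * (y - np.abs(x)) * np.sign(x)

rng = np.random.default_rng(0)
x_true = rng.normal(size=8)
y = np.abs(x_true)                      # observation
x0 = x_true + 0.1 * rng.normal(size=8)  # sample to be projected
x_proj = approx_projection(x0, y, grad_loss)
# x_proj now satisfies |x_proj| ~= y while staying close to x0.
```

Because the descent starts from $x_0$ and the loss only penalizes deviation from the observation, the iterate tends to stop at a point on (or near) the manifold that remains close to $x_0$, which is the role the projection plays in this setting.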
6. ''The experiment settings for phase retrieval are doubtable.''
We respectfully disagree. Our experimental settings for phase retrieval are correct, fair, and reasonable. All of our experimental settings are detailed in Appendix E. We generate one sample per image for each algorithm. The good performance of ProjDiff under this setup further demonstrates its robustness. We also supplement the results of allowing ProjDiff and DPS to generate 4 samples for each image, as shown in Table 3 of the PDF (denoted as ProjDiff-4trials and DPS-4trials). Both DPS and ProjDiff show performance improvements, with ProjDiff in particular exhibiting better performance, reaching a PSNR of 41.58 in the noise-free case. The performance of HIO, ER, and OSS is also presented in Table 3 and the global response, where we report the results with both one repetition and four repetitions.
Note that DPS does not achieve the PSNR of 30 as mentioned by the reviewer. To dispel further doubts about the correctness of our reproduction, we independently run the official code of DPS four times without any modifications (i.e., noise variance set to 0.05, with oversampling consistent with our experiments) and take the best result per sample, obtaining a PSNR of 17.45, indicating that our reproduction is also correct.
7. Comparison with the latest SOTA.
We supplement comparisons of ProjDiff with ReSample, DiffPIR, DMPS, and $\Pi$GDM on the CelebA dataset, as shown in Tables 1 & 2 in the attached PDF. The hyperparameters are tuned task by task for fairness. ReSample is designed for Latent Diffusion, thus in our experiments, we consider the encoder-decoder as identity mappings. ProjDiff remains highly competitive when compared with these algorithms.
---
Rebuttal Comment 1.1:
Title: Looking forward to your reply!
Comment: Dear Reviewer g7gD:
We would like to express our sincere gratitude once again for your valuable comments and suggestions.
The discussion phase is due to conclude in 30 hours, and we have already received responses from the other three reviewers. We are eagerly awaiting your feedback. Should there be any further questions, we would be more than happy to engage in continued discussion if time permits.
Best regards. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Please refer to the attached one-page PDF for the supplementary experimental results.
We appreciate the valuable feedback provided by all the reviewers on our paper, which has helped us to further refine our work. We are particularly encouraged by the following comments from the reviewers:
(1) ''This is a principled new perspective on diffusion models for inverse problems'' and ''it can be potentially useful for many practical applications'' (Reviewer Wp65)
(2) ''The performance is competitive'' (Reviewer dXRm)
(3) ''extensive numerical results'' and ''Numbers for source separation look significantly better than the baselines'' (Reviewer krnF)
Based on the reviewers' suggestions and advice, we have diligently conducted a substantial number of additional experiments. Please refer to the attached one-page PDF for the supplementary experimental results. Below, we provide a summary and explanation of the supplementary results.
Supplementary experiments include (all LPIPS metrics have been multiplied by 100 for clarity):
(1) New comparisons with $\Pi$GDM, Resample, DMPS, and DiffPIR on the CelebA dataset (Tables 1 & 2), as suggested by reviewers g7gD and krnF. We have carefully adjusted the parameters task by task for the four algorithms to ensure a fair comparison. The results indicate that ProjDiff is highly competitive compared to these algorithms.
(2) Results of DPS with a task-by-task parameter tuning on the CelebA dataset (DPS-s in Tables 1 & 2). We are grateful for the suggestion from reviewer krnF. We have conducted task-by-task parameter tuning for the learning rate in DPS on the CelebA dataset and reported the results, ensuring a fair comparison. The results show that ProjDiff still holds an advantage over DPS after parameter tuning.
(3) The results for DPS and ProjDiff in the Phase Retrieval task by taking the best one from 4 independent trials for each image (DPS-4trials and ProjDiff-4trials in Table 3), in response to the question raised by reviewer g7gD. With 4 repetitions, ProjDiff shows a very significant improvement in performance, achieves a PSNR of 41.58 in the noise-free scenario, and still holds a considerable advantage over DPS.
(4) Comparisons with ER, HIO, and OSS algorithms on the Phase Retrieval task (Table 3), as suggested by reviewer g7gD. Due to space constraints, only the results with four repetitions are reported in Table 3. We have also tested the results of these three algorithms with one repetition, as shown in the following table.
| | Phase Retrieval ($\sigma_0=0$) | Phase Retrieval ($\sigma_0=0.1$) |
| :--------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| | PSNR $\uparrow$ SSIM $\uparrow$ LPIPS $\downarrow$ FID $\downarrow$ | PSNR $\uparrow$ SSIM $\uparrow$ LPIPS $\downarrow$ FID $\downarrow$ |
| ER-1trial | 11.15 / 0.19 / 84.48 / 409.91 | 11.16 / 0.19 / 84.43 / 412.59 |
| HIO-1trial | 11.97 / 0.25 / 82.06 / 342.52 | 11.87 / 0.24 / 82.18 / 339.13 |
| OSS-1trial | 12.57 / 0.35 / 81.08 / 360.98 | 12.55 / 0.25 / 81.20 / 364.52 |
(5) The performance of ProjDiff in the Phase Retrieval task using gradient descent to approximate the projection operation (ProjDiff-GD in Table 4), in response to the question from reviewer g7gD. The results indicate that approximating the projection of $x_0$ onto the observation manifold by minimizing $||y-\mathcal{A}(x)||^2$ starting from $x_0$ can also achieve satisfactory performance, which suggests that ProjDiff generalizes to nonlinear inverse problems.
(6) Corrected metrics for the DDNM+ algorithm on the Noisy Gaussian Deblurring task (Table 5). We appreciate reviewer krnF for pointing out the issue with the metrics. We have re-implemented the DDNM+ algorithm on the Noisy Gaussian Deblurring task and report the corrected performance.
(7) Ablation study on the gradient truncation method used in ProjDiff (Table 6), in response to the questions from reviewers g7gD and dXRm. ProjDiff without gradient truncation is denoted as ProjDiff-FG. The results show that there is some performance loss when using gradient truncation compared to not using it, while the efficiency increases by nearly three times. Considering that using gradient truncation has already achieved satisfactory performance, we accept the trade-off of some performance loss for the gain in efficiency.
(8) Sensitivity tests for the hyperparameters $\eta_1$ and noise level $\sigma_0$ in ProjDiff (Table 7), as suggested by reviewers g7gD and dXRm. '$\times a$' denotes that we perturb the input standard deviation of ProjDiff by multiplying it by $a$ (this affects the equivalent noise level). The results indicate that ProjDiff exhibits a certain degree of robustness to the step size and noise level. This also demonstrates that retaining the equivalent noise level as an adjustable hyperparameter is feasible when it cannot be directly calculated.
We would like to express our gratitude to all the reviewers once again, and we believe that these additional experiments would significantly contribute to the improvement of our manuscript.
Pdf: /pdf/22469f47aae9801b8f2d5f8d09895b55435f28d3.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images | Accept (poster) | Summary: This paper presents Gaussian Graphs to construct the relations among different Gaussian groups and introduces a Gaussian Graph Network to process them.
Strengths: Experimental results are sufficient and convincing. This work is easy to follow.
Weaknesses: 1. As far as I can see, the main work of this paper includes the construction of Gaussian Graphs and the Gaussian Graph Network. But they are very similar to the common GNN and its workflow. The main innovation of this paper may be the combination of GNN and GS. However, many works that combine GNN and GS have been proposed, such as SAGS and Hyper-3DG. So I think this paper is of low innovation.
2. The authors provide only a few experimental results.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. As far as I can see, the main work of this paper includes the construction of Gaussian Graphs and the Gaussian Graph Network. But they are very similar to the common GNN and its workflow. What's the difference between this paper and other works that combine GNN and GS, such as SAGS and Hyper-3DG?
2. Could this work be compared with NeRF-based generalizable architectures in the experiments?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: I think this paper is of low innovation, as methods have combined GS and GNN. The construction of Gaussian Graphs and Gaussian Graph Network are very common.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Below we address specific questions.
**1. The difference between this paper and other works that combine GNN and GS**
Thank you for pointing out these relevant works SAGS [1] and Hyper-3DG [2]. We would like to highlight our key differences with these works.
One of the key differences is the construction of Gaussian graphs. SAGS sets each point as a node and constructs graphs for KNN points, while Hyper-3DG first obtains patch features by aggregating gaussian features within each patch, and then sets each patch as a node with its feature. However, SAGS is still an optimization-based method, where the Gaussian-level graph is less efficient due to the large number of Gaussian points, while the patch-level graph in Hyper-3DG fails to support message passing to exploit Gaussian-level relations.
In our work, we propose a hierarchical method to construct Gaussian Graphs - we formulate each view as a node but also fully preserve the Gaussian-level structures in each node. We fully exploit the important view-related information, i.e. (1) the group of pixel-aligned Gaussians belonging to the same view and (2) the projection relations between different views. This information enables us to capture accurate and sparse cross-view Gaussian-level relations in an efficient manner with the edge matrix $E^{i\rightarrow j}$.
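The rebuttal does not spell out how $E^{i\rightarrow j}$ is built, so the sketch below is only one plausible realization of "projection relations between different views" under a pinhole camera model: each Gaussian of view $i$ is linked to the pixel-aligned Gaussian of view $j$ whose pixel its center projects onto. The function name, nearest-pixel rounding, and camera conventions are our assumptions, not the paper's actual construction.

```python
import numpy as np

def cross_view_edges(means_i, K_j, R_j, t_j, H, W):
    """Build a boolean edge matrix E^{i->j}: Gaussian n of view i is
    linked to the pixel-aligned Gaussian of view j at the pixel that
    the center of n projects onto (pinhole camera model)."""
    cam = R_j @ means_i.T + t_j[:, None]                 # (3, N) camera coords
    in_front = cam[2] > 1e-6
    uv = (K_j @ cam)[:2] / np.clip(cam[2], 1e-6, None)   # (2, N) pixel coords
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    visible = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    E = np.zeros((means_i.shape[0], H * W), dtype=bool)
    E[np.flatnonzero(visible), v[visible] * W + u[visible]] = True
    return E

# Tiny example: one Gaussian center on the optical axis of view j.
K = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 1.0]])
E = cross_view_edges(np.array([[0.0, 0.0, 1.0]]),
                     K, np.eye(3), np.zeros(3), H=4, W=4)
# The point projects to pixel (u, v) = (2, 2), i.e. flat index 10.
```

Because each Gaussian lands on at most one pixel of the target view, the resulting edge matrix is extremely sparse, which is consistent with the efficiency claim above.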
Based on our carefully-designed hierarchical structure, we specifically design linear layers for feature learning by extending the scalar weight of a graph edge to a matrix. Furthermore, we propose a new pooling strategy that fuses Gaussians based on their spatial positions to avoid the redundancy of Gaussians in pixelSplat and MVSplat, while SAGS and Hyper-3DG do not introduce pooling layers in their graph networks.
[1] Ververas, E., Potamias, R. A., Song, J., Deng, J., & Zafeiriou, S. (2024). SAGS: Structure-Aware 3D Gaussian Splatting. arXiv preprint arXiv:2404.19149.
[2] Di, D., Yang, J., Luo, C., Xue, Z., Chen, W., Yang, X., & Gao, Y. (2024). Hyper-3DG: Text-to-3D Gaussian Generation via Hypergraph. arXiv preprint arXiv:2403.09236.
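The pooling strategy is likewise described only at a high level; as a hedged illustration (not the paper's actual layer), one simple way to fuse Gaussians "based on their spatial positions" is voxel pooling, where Gaussians whose centers fall in the same voxel are averaged. The function name and voxel size are illustrative assumptions.

```python
import numpy as np

def spatial_pool(means, feats, voxel=0.05):
    # Merge Gaussians whose centers fall into the same voxel by
    # averaging their positions and features (one plausible form of
    # position-based pooling; the voxel size is an illustrative choice).
    keys = np.floor(means / voxel).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    inv = inv.reshape(-1)                      # flatten for indexed add
    pooled_means = np.zeros((len(counts), means.shape[1]))
    pooled_feats = np.zeros((len(counts), feats.shape[1]))
    np.add.at(pooled_means, inv, means)        # sum per voxel
    np.add.at(pooled_feats, inv, feats)
    return pooled_means / counts[:, None], pooled_feats / counts[:, None]

means = np.array([[0.01, 0.0, 0.0],
                  [0.02, 0.0, 0.0],   # shares a voxel with the first
                  [0.30, 0.0, 0.0]])  # lands in a different voxel
feats = np.eye(3)
m, f = spatial_pool(means, feats)
# Three Gaussians are reduced to two after pooling.
```

The number of output Gaussians then depends on the occupied volume rather than on the number of input views, which is one way to avoid the per-view duplication seen in pixelSplat and MVSplat.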
**2. Comparison with NeRF-based generalizable methods**
We reported the comparison with NeRF-based methods in the 2-view setting in Table 2 of our paper. We also conduct further experiments with different numbers of input views, as shown in the following table. Experimental results show that our method has better rendering quality and inference speed, benefiting from our efficient and generalizable Gaussian representation learning.
Table 1: Comparison with NeRF-based methods on RealEstate10K with different numbers of input views.
| | 4 views | | 8 views | | 16 views | |
|-----------|--------|--------------------|--------|-----------------|---------|----------------|
| | PSNR | Inference Time (s) | PSNR | Inference Time (s) | PSNR | Inference Time (s) |
| pixelNeRF [3] | 21.82 | 5.38 | 21.84 | 5.77 | 21.85 | 6.48 |
| MuRF [4] | 24.30 | 0.36 | 24.78 | 0.55 | 25.56 | 1.79 |
| Ours | 24.76 | 0.14 | 25.15 | 0.44 | 26.18 | 1.40 |
[3] Yu, A., Ye, V., Tancik, M., & Kanazawa, A. (2021). pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4578-4587).
[4] Xu, H., Chen, A., Chen, Y., Sakaridis, C., Zhang, Y., Pollefeys, M., ... & Yu, F. (2024). Murf: Multi-baseline radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20041-20050).
---
Rebuttal Comment 1.1:
Comment: Thank you for the new experimental results. You have addressed my concerns, and I have improved the final rating. | Summary: This paper introduces the Gaussian Graph Network (GGN), a novel approach for generalizable 3D Gaussian Splatting (3GDS) reconstruction. The authors identify a problem with previous generalizable 3DGS work: they regress pixel-aligned Gaussians and combine Gaussians from different views directly, resulting in an excessive number of Gaussians when the model processes multi-view inputs. To address this issue, they propose using a graph network that identifies the relationships between the Gaussians generated from different views and merges them if they are similar in positions and features. Thus, GGN can reduce the number of Gaussians for better efficiency and reconstruction quality. Experiments show that GGN outperforms baselines in both efficiency and quality and scales well when the input views increase.
Strengths: 1. The idea of merging redundant Gaussians in generalizable 3DGS is novel and practical. Identifying the relationships between Gaussians using a graph network to merge them is also a novel idea.
2. The results are satisfactory. Compared to the baselines, GGN achieves better reconstruction results with fewer Gaussians and scales well when the input views increase.
3. The paper is well-written and the experiments are complete.
Weaknesses: 1. The inference efficiency of GGN is not shown in the experiments. Since the number of Gaussians is quite large and increases with more input views, the efficiency of GGN is a concern. While the authors provide the results of rendering time and the number of merged Gaussians, these data do not reflect the inference efficiency of the model.
Overall, this paper introduces a novel and practical idea for generalizable 3DGS reconstruction. I think this paper pushes the boundaries of generalizable 3D reconstruction and should be accepted.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is the inference efficiency compared to the baselines?
2. How is the inference efficiency when the number of input views increases?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We agree with you that the analysis of training and inference latency of our proposed method is important. We discuss this topic in our general rebuttal. We will add this discussion on training and inference latency in Section 4.2 (line 190) as you suggested. | Summary: This paper proposes a Graph Neural Network architecture to model the relations between multi-view 3D Gaussians predicted by generalizable 3DGS methods. The proposed method works particularly well for larger numbers (e.g., 4, 8, 16) of input images, compared with previous methods like pixelSplat and MVSplat that simply combine per-view pixel-aligned Gaussians. The proposed method effectively reduces the redundancy of 3D Gaussians when more input images are given. Experiments are conducted on the standard RealEstate10K and ACID datasets, and the proposed method performs significantly better than previous methods for large numbers of input views.
Strengths: This paper tackles a common problem in existing generalizable 3DGS models like pixelSplat and MVSplat, where they simply combine per-view 3D Gaussians. This leads to redundancy for more input images. The proposed solution could be applicable to different models and improve their performance.
The presentation is clear and the experiments are well-designed and extensive. The proposed method achieves strong results on standard benchmarks.
Weaknesses: A few implementation details are not completely clear. For example, how are the input views selected for different numbers of input views? Are they sampled more densely within some predefined frame range, or are they sampled to cover larger regions of the scene? This also raises another question: could the proposed method work on large-scale scenes, now that the model can handle many frames? For instance, if the inputs are 10-20 images capturing a room, could the full room be reconstructed reasonably? I am also wondering if the authors plan to release the source code, which I think would benefit reproducibility for the research community.
For the results with different numbers of input views in Table 1 and Figure 4, are they obtained with a single model, or did the authors train a specific model for each number of input views?
For efficiency analysis, this paper reported the FPS metric. However, I assume the FPS is measured only for the rendering part right? How about the network inference time and memory consumption, especially for more input images? Since the proposed method is a feed-forward model, where the network forward time should also be considered for benchmarking efficiency.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, the limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Below we address specific questions.
**1. The selection of input views**
As the number of input views increases, they are sampled from larger regions of the scene. Thank you for pointing out this question. We will clarify this setting in Section 4.1 (line 157) in our final version.
**2. Large-scale scenes**
We further evaluate our method on longer videos sampled from large-scale scenes in RealEstate10K. The visualization results are shown in Figure 1 in the one-page pdf. Our method can reconstruct large-scale outdoor scenes and render better novel views than previous methods, benefiting from multiple input views. We also validate our method on indoor full-room scenes. With 16 images capturing a room, we reconstruct the full room reasonably well, while pixelSplat and MVSplat still suffer from Gaussian redundancy, since large-scale scenes require more input views. We report the quantitative results of each scene in the following table for detailed comparison.
Table 1: Quantitative comparison (PSNR) of different methods on large-scale scenes.
| Scene ID | pixelSplat | MVSplat | Ours |
|:--------:|:----------:|:-------:|:----:|
| Re10k: fea544b472e9abd1 | 12.66 | 12.88 | 18.88 |
| Re10k: 21e794f71e31becb | 15.04 | 14.76 | 20.62 |
| Re10k: de45926738229f67 | 17.13 | 20.21 | 25.07 |
| ACID: d4453ce709bd53e1 | 11.64 | 13.14 | 20.44 |
| ACID: ee09e048af8deba6 | 17.77 | 16.86 | 26.55 |
| ACID: 3fcb7b6b398b4064 | 22.75 | 24.55 | 32.03 |
**3. Release of the source code**
We will release our paper on arXiv and the source code on github if our paper is accepted.
**4. Questions about the different number of input views in Table 1 and Figure 4**
The results for different numbers of input views are obtained with a single model. In our opinion, it is convenient and flexible for users to use one model to process different numbers of input views. The results also demonstrate that our model has the capability to reconstruct scenes with varying numbers of input views.
**5. Efficiency analysis**
We agree with you that the analysis of training and inference latency of our proposed method is important. We discuss this topic in our general rebuttal. We will add this discussion on training and inference latency in Section 4.2 (line 190) as you suggested.
---
Rebuttal Comment 1.1:
Comment: I have read all the reviews and the rebuttal. First, I would like to thank the authors for taking time in preparing the rebuttal and providing additional clarifications in the discussion. Second, as the questions or concerns I had in my earlier review have all been answered or addressed, I keep my initial rating (Weak Accept). However, I would suggest to add the following missing information in the final version:
- Details of view selection and evaluation for different number of input views
- Video results of more input views, which would make it easier to perceive the global consistency of the reconstruction. Although a few rendered images are provided in the rebuttal, it's less clear (compared to videos) how the model works for more input views.
- Time and memory consumption of different number of views, as shown in the global response.
---
Reply to Comment 1.1.1:
Title: Thanks for your valuable feedback
Comment: Thanks for your valuable suggestions. We will clarify the details of view selection and evaluation for different numbers of input views in Section 4.1. We will prepare a web page showing multiple video results and release it with our final paper. | Summary: The paper presents an incremental design built upon existing generalizable GS reconstruction frameworks (e.g., PixelSplat, MVSplat) to fuse overlapping pixel-aligned Gaussian points from multiple images with a graph-based operator and pooling. This additional design yields faster and better rendering than PixelSplat and MVSplat.
Strengths: 1. Overall, I agree that the proposed framework is a good choice for fusing pixel-aligned Gaussian points from frameworks like PixelSplat or MVSplat. Grouping and fusing these points in 3D space with a graph-based strategy sounds reasonable and effective in solving the artifact issue caused by PixelSplat.
2. The visualization and comparison do present a better novel view synthesis effect with fewer Gaussian points when compared with previous efforts like PixelSplat and MVSplat, which results in faster rendering speed compared with these two works.
Weaknesses: ***[Efficiency and Scalability]***
Although the paper presents a better rendering speed (a natural outcome of reducing Gaussian points), it lacks a discussion regarding learning efficiency: the training latency of the proposed design. We know the number of pixel-aligned Gaussian points is actually quite large, especially when increasing the number and resolution of images. However, classical graph-based methods are usually not efficient in handling a large number of points. Further, the limitation on training efficiency might further restrict the stability of the proposed design. Consequently, I hope the authors can consider adding a discussion and benchmarking on a breakdown of the training and inference latency of the additional graph-based network. It is fine if the current proposed design is not so efficient, but it is definitely worth a discussion.
On the other hand, the proposed graph network is fundamentally seeking a way to process and fuse points in 3D space. The point cloud processing community already has years of accumulated experience in handling this kind of data efficiently and effectively. One suggestion is to further enhance performance with the latest efforts in point cloud processing, such as Point Transformer V3.
Technical Quality: 3
Clarity: 4
Questions for Authors: Already discussed in weaknesses regard to the training efficiency and scalability.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors also mentioned the limitation in model efficiency, especially when increasing the number and resolution of images. Yet this part is actually more important and worth some number to measure this issue. Detailed discussion in weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Below we address specific questions.
**1. Training and inference latency**
We agree with you that the analysis of training and inference latency of our proposed method is important. We discuss this topic in our general rebuttal. We will add this discussion on training and inference latency in Section 4.2 (line 190) as you suggested.
**2. Inspiration from point cloud processing community**
Your suggestion does inspire us a lot. We further add a point cloud processing module from Point Transformer V3 to extract Gaussian features (Model A). As shown in Table 1, this design improves performance. Due to the limited rebuttal time, we adopt a simple design, but it still boosts performance. We argue that this is a promising direction, and we will consider this suggestion as one of our future works to further benefit from progress in the point cloud processing area.
Table 1: PSNR results on the RealEstate10K benchmark.
| | Ours | Model A |
|:--------:|:-----:|:-------:|
| 4 views | 24.76 | 24.83 |
| 8 views | 25.15 | 25.26 |
| 16 views | 26.18 | 26.30 |
---
Rebuttal Comment 1.1:
Title: Efficiency comparison with Point Transformer
Comment: Thanks to the authors for the response. The comparison with Point Transformer looks interesting (Table 2). I am wondering how the proposed GGN compares with Point Transformer in terms of inference time and memory consumption with respect to different numbers of views? It would be great if the authors could also report the time and memory comparisons in Table 2 as well, thanks.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments. We report the inference time (ms) as well as the memory cost (GB) in the following table. Due to the limited rebuttal time, we only add a simple module to our model, which leads to a small increase in inference time and memory cost. We will consider how to fully utilize efficient point cloud processing methods, rather than such a simple combination, as a future topic to further promote our research. Thanks for your inspiration again.
| Views | Ours PSNR | Ours Inference Time (ms) | Ours Memory (GB) | Model A PSNR | Model A Inference Time (ms) | Model A Memory (GB) |
|:-----:|:---------:|:------------------------:|:----------------:|:------------:|:---------------------------:|:-------------------:|
| 4 | 24.76 | 148.1 | 4.8 | 24.83 | 157.3 | 5.4 |
| 8 | 25.15 | 388.8 | 8.6 | 25.26 | 410.4 | 10.1 |
| 16 | 26.18 | 1267.5 | 21.4 | 26.30 | 1334.6 | 26.0 |
---
Rebuttal 2:
Comment: Thanks for the rebuttal. My concerns are well addressed. Thus, I keep my original rate to accept this paper.
BTW: I am also considering the possibility of combining point cloud processing with Gaussian splatting. We have spent many years exploring how to handle unstructured data effectively and efficiently, and it's encouraging to see that authors and reviewer LnYN are also interested in this potential.
---
Rebuttal 3:
Comment: Thank you, Reviewer sL12, for your insightful comments. I completely agree with your points and would like to echo your thoughts. Indeed, I also feel that combining point cloud processing with Gaussian splatting has strong potential, and I am excited to see what our community will develop in this direction :)
Best Regards,
Reviewer LnYN | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful feedback. We address all the questions from each reviewer in the rebuttal sections below.
Here we discuss the **training and inference efficiency** as mentioned by Reviewers sL12, LnYN, and zafq. As the pixel-aligned Gaussian points are quite numerous, if we set each Gaussian point as a node in the graph, we do face the problem of inefficiency. In this paper, we formulate each view as a node but also fully preserve the Gaussian-level structures in each node. We exploit the important view-related information, i.e. (1) the group of pixel-aligned Gaussians belonging to the same view and (2) the projection relations between different views. This information enables us to capture accurate and sparse cross-view Gaussian-level relations in an efficient manner. Based on this, our specifically designed layers for the Gaussian Graph operate on such Gaussian groups for information fusion.
We further conduct experiments to analyze the training and inference latency on RealEstate10K. All the experiments are conducted on a single NVIDIA A6000 GPU. Since the inference of pixelSplat with 16 input views cannot be done in parallel on a 48GB A6000, we split the views into batches to get Gaussians.
As shown in Table 1 and Table 2, our method uses less training and inference time compared with pixelSplat, benefiting from faster encoding and rendering speed. Due to the additional construction of the Gaussian Graph and application of the Gaussian Graph Network, our method uses a little more training and inference time than MVSplat. We also report the time consumption of each part of our method in Table 3, where the Gaussian Graph Network dominates the time consumption and the rendering time increases slowly with more input views. As illustrated in Table 4, the inference time of the three methods grows quadratically with image resolution due to pixel-aligned Gaussians.
For memory analysis, as shown in Table 5, our method reduces training memory cost by 80% compared with pixelSplat. The comparison of inference memory cost between our method and pixelSplat leads to a similar conclusion. The structure of the Gaussian Graph leads to a small increase in memory cost compared to MVSplat, which becomes negligible as the number of views increases.
Table 1: Training time (h) of different methods.
| Methods | pixelSplat | MVSplat | Ours |
|:-------------:|:----------:|:-------:|:----:|
| Training time | 183 | 78 | 111 |
Table 2: Inference time (ms) of different methods with different number of input views.
| | pixelSplat | MVSplat | Ours |
|:--------:|:----------:|:-------:|:-------:|
| 2 views | 137.3 | 60.6 | 75.6 |
| 4 views | 298.8 | 126.4 | 148.1 |
| 8 views | 846.5 | 363.2 | 388.8 |
| 16 views | 2938.9 | 1239.8 | 1267.5 |
Table 3: Inference time (ms) of different parts in our method.
| | Image Encoder | GGN | Parameter Predictor | Rendering |
|:--------:|:-------------:|:-------:|:-------------------:|:---------:|
| 2 views | 8.37 | 28.21 | 34.51 | 4.47 |
| 4 views | 10.01 | 88.89 | 44.56 | 4.64 |
| 8 views | 14.09 | 311.24 | 58.75 | 4.75 |
| 16 views | 21.15 | 1165.65 | 75.88 | 4.84 |
Table 4: Inference time (ms) of different methods with different view resolutions.
| | pixelSplat | MVSplat | Ours |
|:--------:|:----------:|:-------:|:-------:|
| 256$\times$256 | 137.3 | 60.6 | 75.6 |
| 512$\times$512 | 574.2 | 235.3 | 319.5 |
| 1024$\times$1024 | 2445.4 | 943.3 | 1329.9 |
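The quadratic trend claimed above can be verified from Table 4 itself: doubling the resolution quadruples the pixel count (and hence the number of pixel-aligned Gaussians), so each step up in resolution should roughly quadruple the time. A quick check on the "Ours" column:

```python
# Inference times (ms) for "Ours" from Table 4 at 256, 512, and 1024 resolution.
resolutions = [256, 512, 1024]
ours_ms = [75.6, 319.5, 1329.9]

# Ratio between successive resolutions; values near 4 for each 2x resolution
# increase are consistent with quadratic growth in the pixel count.
ratios = [ours_ms[i + 1] / ours_ms[i] for i in range(len(ours_ms) - 1)]
print(ratios)  # each ratio is close to 4
```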
Table 5: Training memory cost (GB) of different methods.
| Methods | pixelSplat | MVSplat | Ours |
|:--------------------:|:----------:|:-------:|:----:|
| Training Memory Cost | 37.5 | 5.9 | 7.7 |
Table 6: Inference memory cost (GB) of different methods with different number of input views.
| | pixelSplat | MVSplat | Ours |
|:--------:|:----------:|:-------:|:----:|
| 2 views | 6.1 | 1.6 | 3.2 |
| 4 views | 11.8 | 3.2 | 4.8 |
| 8 views | 29.0 | 8.0 | 8.6 |
| 16 views | 73.7 | 20.0 | 21.4 |
Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we are ready to provide any additional information that could be helpful.
Pdf: /pdf/2ec9b54797ff5139761c01a8205cc04945df4cef.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs | Accept (poster) | Summary: The paper introduces Chain-of-Preference Optimization (CPO) to improve reasoning in LLMs. CPO utilizes preference data generated during ToT to fine-tune models. This approach improves reasoning without increasing inference complexity.
Strengths: 1. The idea of CPO is interesting.
2. CPO achieves enhanced performance without increasing inference complexity.
3. The paper is well-written and easy to follow.
Weaknesses: 1. While CPO is effective across several tasks, its performance on more complex reasoning benchmarks, such as the MATH benchmark, remains unexplored.
2. Fine-tuning with reasoning paths generated by ToT may reduce the diversity of generated paths. A systematic investigation into how the generated reasoning paths change before and after fine-tuning is needed.
3. The effectiveness of CPO is heavily dependent on the quality of reasoning paths generated by ToT. If ToT produces suboptimal paths, the preference data might not be as beneficial.
4. The paper does not adequately address the applicability of CPO to more advanced models, such as LLaMA3.
5. While CPO enhances performance, the paper lacks a deep analysis of the interpretability of the optimized reasoning paths. Understanding why certain paths are preferred could provide additional insights and improve trust in the model’s decisions.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and questions. Below, we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***.
---
***W1: CPO performance on MATH benchmark.***
Following your feedback, we included the performance of both CoT and CPO using the LLaMa3-8b-base model in $\\textrm{\\color{blue}Table B}$ of the Rebuttal PDF. For the MATH benchmark, we tested on a sample of 300 instances, as discussed in our response to ***W1*** of **Reviewer VaVd**. As indicated in the table, CPO enhances performance by 1.1% on the MATH benchmark relative to CoT, demonstrating its effectiveness on more complex reasoning tasks.
---
***W2: CPO's influence on the diversity of generated paths.***
In response to your insightful suggestion, we systematically evaluated the diversity of generated reasoning paths before and after fine-tuning, as measured by distinct-N [H]. The results are summarized in the table below:
| | LLaMA2-7b | | | | LLaMA2-13b | | | | Mistral-7b | | | |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| | distinct-1 | distinct-2 | distinct-3 | distinct-4 | distinct-1 | distinct-2 | distinct-3 | distinct-4 | distinct-1 | distinct-2 | distinct-3 | distinct-4 |
| Before | 0.12 | 0.408 | 0.568 | 0.632 | 0.035 | 0.167 | 0.288 | 0.366 | 0.460 | 0.199 | 0.329 | 0.407 |
| After | 0.039 | 0.148 | 0.229 | 0.287 | 0.032 | 0.144 | 0.272 | 0.35 | 0.037 | 0.134 | 0.215 | 0.271 |
The results indicate a decrease in distinct-N values after fine-tuning with CPO, suggesting a reduction in diversity. This phenomenon may stem from our preference optimization algorithm (i.e., DPO), which optimizes the reverse KL divergence and inherently exhibits mode-seeking behavior, thus reducing diversity in generation [I]. We will incorporate this evaluation and analysis in the revision.
[H] Li et al. A diversity-promoting objective function for neural conversation models, NAACL 2016
[I] Wang et al. Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints, arXiv:2309.16240
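For reference, distinct-N [H] is conventionally computed as the ratio of unique n-grams to total n-grams over a set of generated texts. A minimal sketch (function name and whitespace tokenization are our own illustrative choices):

```python
from typing import List

def distinct_n(texts: List[str], n: int) -> float:
    """Ratio of unique n-grams to total n-grams across a corpus of texts."""
    total = 0
    unique = set()
    for text in texts:
        tokens = text.split()  # whitespace tokenization for illustration
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Repetitive outputs score low; varied outputs score high.
print(distinct_n(["the cat sat", "the cat sat"], 1))  # 0.5
print(distinct_n(["the cat sat", "a dog ran"], 1))    # 1.0
```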
---
***W3: CPO's effectiveness is heavily dependent on ToT's performance.***
We acknowledge the concerns raised regarding the dependence of CPO on the quality of reasoning paths generated by ToT. While it is true that suboptimal paths from ToT could affect the overall performance, our results indicate that CPO has the capability to occasionally surpass the performance upper bounds of ToT. This is evidenced by the results presented in Table 1 of our paper.
Additionally, CPO's design is not inherently restricted to using ToT-generated paths. Our framework is adaptable and can integrate with other tree search methods that might yield higher quality reasoning paths, which we leave as our future work.
---
***W4: CPO's performance on LLaMA3.***
The release of LLaMA3 on April 18, 2024, just 28 days prior to our submission deadline, presented significant challenges in conducting comprehensive experiments across all seven tasks with this new model.
In response to your feedback, we have included additional results for the LLaMA3-8b-base model’s performance using both CoT and CPO on the ProofWriter and MATH benchmarks in Table B of the Rebuttal PDF. These results demonstrate that CPO enhances performance by 1.1% on both datasets when compared to CoT, affirming its applicability even on more advanced models.
---
***W5: The interpretability of the optimized reasoning paths.***
We appreciate your emphasis on the importance of interpretability in the reasoning paths optimized by CPO. To address this, we have included an analysis using two specific examples in $\\textrm{\\color{blue}Table E}$ of the Rebuttal PDF. These examples illustrate that the paths preferred by CPO closely resemble those selected by ToT rather than CoT, suggesting a relatively higher quality of reasoning in the paths chosen by CPO. A thorough analysis of these patterns and their implications will be provided in the revision to enhance the interpretability and trustworthiness of the model's decision-making process. | Summary: This paper introduces Chain of Preference Optimization (CPO), a method to enhance mathematical reasoning in large language models by feeding step-level pairs rather than response-level ones into DPO objectives. CPO leverages non-optimal reasoning thoughts from tree-search processes to construct paired preference data at each reasoning step, where both responses share the same content until a certain step and then start varying. Experiments across seven datasets show CPO significantly improves performance compared to Chain-of-Thought approaches, achieving comparable results to Tree-of-Thought methods while being substantially faster during inference. The authors provide a comprehensive analysis of CPO's components and demonstrate its effectiveness in utilizing both preferred and dispreferred thoughts in the reasoning process.
Strengths: - The experiments demonstrate that CPO is effective across different tasks including question answering, fact verification, and arithmetic.
- This paper conducts an extensive analysis of several factors that can influence the model performance. The insights on scaling effects, data mixture, and dispreferred information can be useful.
- The paper is well-written and easy to follow.
Weaknesses: - The major concern is that despite being effective, the method is basically a new use of the DPO algorithm with different inputs (converting response-level pairs to chain-level) and thus it's fundamentally still DPO.
- In section 6, the authors have explained why chain-level optimization is important. However, according to the explanation, token-level DPO is more natural in locating errors and avoiding LCP gradient cancellation. Then according to your theory, why not directly use token-level optimization, which is exactly what PPO does, but an intermediate chain-level? I suppose there exists an efficiency-effectiveness trade-off in these settings.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and questions. Below, we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***.
---
***W1: CPO is fundamentally still DPO.***
Thank you for raising this point. We would like to clarify that while our approach adopts the DPO algorithm to fine-tune language models, our core contribution lies in different aspects compared to DPO.
DPO is an algorithm designed to align models (typically LLMs) with a target reward directly through pair preference data, avoiding the need for a surrogate reward model, and is widely used in RLHF for LLMs. The algorithm itself does not specify how to obtain the preference data, which can be sourced from human annotators [26,41,5,40], external reward models [43,1], or the model being fine-tuned itself [14].
Our work falls into a specific methodology for constructing preference data from the model itself. We find that time-consuming inference schemes such as ToT naturally produce thought-level preference data that are helpful for learning more time-efficient inference methods like CoT. This construction, although cost-effective (as it requires no external reward models or human annotation), has not been explored in previous research. Please also refer to our response to ***W1*** of **Reviewer WuBE** for a discussion and comparison with closely related works.
Moreover, the DPO algorithm in our approach is not the only choice. Advanced preference optimization algorithms (such as SimPO [F]) could potentially yield better results, which we will leave as future work. Please also refer to our response to ***Q1*** of **Reviewer WuBE**.
[F] Meng et al. SimPO: Simple Preference Optimization with a Reference-Free Reward, arXiv:2405.14734.
---
***W2: Why not directly use token-level optimization.***
We opted not to use token-level optimization primarily due to the following reasons:
1. Challenges in Reward Acquisition: CPO relies on constructing preference data from the model’s self-evaluation capabilities. Our preliminary experiments suggested significant difficulties in generating reliable token-level rewards using prompt-based LLM methods, which is crucial for token-level preference data construction.
2. High Computational Cost: The computational demands and associated costs of token-level ToT are substantially greater than those for thought-level ToT.
Considering these factors, chain-level optimization presents a balanced approach, mitigating both the computational overhead and the LCP gradient cancellation problem. | Summary: This paper presents a method called Chain of Preference Optimization (CPO) that fine-tunes large language models (LLMs) using the tree-of-thought (ToT) method to improve the performance of chain-of-thought (CoT) decoding. CPO aligns each step of CoT reasoning paths with those of ToT by leveraging preference information in the tree-search process, resulting in similar or better performance than CoT alone while avoiding the significant inference burden. Experimental results demonstrate its effectiveness.
Strengths: Innovation: The method of constructing preference dataset with ToT has a certain novelty.
Writing quality: Your paper is well-written, with clear and precise language and a smooth flow of ideas. The structure is reasonable, and the logic is sound, making it very enjoyable to read.
Experimental analysis: This paper has made a full experimental demonstration.
Weaknesses: 1. The experiment lacked a stronger baseline comparison. In my opinion, the author's method belongs to the family of self-improvement methods, so it should be compared with such typical methods, for example the ReST and self-rewarding approaches mentioned by the author in related work.
2. The ToT approach to dataset construction appears to be computationally expensive.
3. The author does not seem to provide task-specific prompt templates (although the general guideline prompt is mentioned in line 141). This is not strictly necessary, but I think the quality of the constructed preference dataset can be seriously affected by the evaluation prompt template.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why use DPO instead of other preference optimization algorithms?
2. Explain why the experiment lacks different self-improvement baseline algorithms.
3. There may be a large number of negative samples in the preference dataset constructed by ToT making the dataset unbalanced. May I ask whether this will have a negative impact on the training stage?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of the study and the possible negative social impact have been well documented by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and questions. Below, we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***.
---
***W1: Comparison with ReST and self-rewarding baselines***
We would like to clarify that the settings of ReST and self-rewarding are different from ours, as discussed in L66-L78 of Section 2. These methods rely on either external reward models (ReST) or labeled data (self-rewarding), which makes them not directly comparable to our approach.
However, following your suggestion, we have added a comparison with these two baselines in the table below. To ensure a fair comparison with our CPO method, we prompted the LLM itself in the same way as our CPO to serve as the reward model for ReST and as the labeled data annotator for self-rewarding, respectively. The results indicate that, on average, our CPO method surpasses both ReST and self-rewarding under this fair comparison setting.
We will include the two baselines in the revision.
|||Bam.|2wiki.|HotpotQA|Fever|Feverous|VitaminC|SVAMP|Average|
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
|LLaMA2-7b|ReST|30.4|24.0|22.3|45.5|43.9|51.7|42.3|37.2|
||SelfR|31.2|25.3|21.0|48.8|44.7|51.3|43.0|37.5|
||CPO|**32.0**|**29.7**|**24.0**|**53.2**|**49.0**|**52.7**|**46.0**|**40.9**|
|LLaMA2-13b|ReST|48.0|28.3|28.7|46.8|48.0|50.2|44.3|42.0|
||SelfR|48.0|29.0|30.0|47.8|48.5|51.0|45.3|42.8|
||CPO|**52.0**|**30.3**|**30.3**|**49.2**|**50.7**|**54.0**|**50.0**|**45.2**|
|Mistral-7b|ReST|43.2|26.7|27.4|59.5|48.3|49.7|63.3|45.4|
||SelfR|44.0|28.0|28.1|58.2|48.0|50.0|65.0|45.9|
||CPO|**45.6**|**31.7**|**29.4**|**59.9**|**54.0**|**53.7**|**69.3**|**49.1**|
---
***W2: Dataset construction using ToT is computationally expensive.***
The computational expense introduced by ToT is limited to the training stage. A key motivation of our approach is to transfer the computational burden from the inference to the training phase. This allows CPO to directly produce answers through greedy decoding during inference, significantly enhancing efficiency.
Moreover, by using only 200 samples to generate preference pairs, we were able to achieve performance improvements efficiently (Please also refer to our response to ***Q3*** of **Reviewer VaVd**).
---
***W3: Task-specific evaluation prompt templates***
We apologize for any confusion. To clarify, the prompt template used in our evaluation consists of two parts: (1) the general guidelines, and (2) task-specific demonstration examples (L135-L137).
For example, here is one of the demonstration examples we used for QA datasets:
Question: When did the last king from Britain's House of Hanover die?
Thought: Step 1, when did the last king from Britain's House of Hanover born?
Evaluation Process:
The thought process focuses on the birth date of the last king from Britain's House of Hanover. However, knowing the birth date does not directly help in determining the date of death, which is the actual question. The lifespan of an individual can vary widely and cannot be accurately inferred from their birth date alone. Therefore, this thought process is unlikely to lead to the correct answer without additional information.
So the evaluation result is: this thought is impossible to help partially or directly answer the question.
Evaluation Results:
Impossible
We will clarify this and provide task-specific demonstration examples in the revision.
---
***Q1: Choice of DPO instead of other preference optimization algorithms.***
Our contribution primarily focuses on the construction of preference data (Please also refer to our response to ***W1*** of **Reviewer ADqw**). We chose to experiment with DPO because it is one of the most typical and widely used preference optimization algorithms. While we acknowledge that DPO is not the sole option available, advanced algorithms like SimPO [F] could potentially yield superior results, which we will leave as future work.
[F] Meng et al. SimPO: Simple Preference Optimization with a Reference-Free Reward, arXiv:2405.14734.
---
***Q2: Lacking self-improvement baselines.***
Please see our response to ***W1***.
---
***Q3: Unbalanced preference dataset constructed by ToT.***
In the constructed preference data, each positive sample is structurally paired with a corresponding negative sample, maintaining a one-to-one mapping at the level of individual pairs, thus achieving balance structurally. However, since ToT selects more paths than it rejects, the overall dataset exhibits an imbalanced distribution with more positive than negative samples.
Despite this numerical discrepancy, our method oversamples positive instances by design, which could mitigate the potential negative impact of an unbalanced dataset on CPO training. This strategy is also adopted by [G]. Figure 3(a) of our paper further substantiates this hypothesis by exploring various methods for selecting negative samples, which lead to different quantities of negative examples. The results in the figure suggest that the number of negative samples has minimal impact on the training outcomes.
[G] Pattnaik et al. Curry-DPO: Enhancing Alignment using Curriculum Learning & Ranked Preferences, arXiv:2403.07230.
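The one-to-one pairing described above can be illustrated with a small sketch. Note this is our own simplification of the data construction, not the paper's actual implementation; all names and structures here are hypothetical:

```python
from typing import List, Tuple

def build_preference_pairs(
    prefix: List[str],       # thoughts shared by both branches up to this step
    selected: List[str],     # thoughts the tree search kept at this step
    rejected: List[str],     # sibling thoughts the tree search pruned
) -> List[Tuple[List[str], List[str]]]:
    """Pair each pruned thought with a kept thought at the same step.

    Each pair holds exactly one preferred and one dispreferred path, so pairs
    are balanced structurally even though kept thoughts may be reused
    (oversampled) across multiple pairs.
    """
    pairs = []
    for bad in rejected:
        for good in selected:
            pairs.append((prefix + [good], prefix + [bad]))
    return pairs

pairs = build_preference_pairs(["step1"], ["good_step2"], ["bad_step2a", "bad_step2b"])
print(len(pairs))  # 2: the single kept thought is paired with each pruned one
```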
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for solving my confusion. The experiments added by the author have well solved my worries, so I raised my score from 4 to 6.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your valuable feedback and the score improvement. We will further polish the paper and incorporate the rebuttal discussions into the final revision. Thank you! | Summary: The paper presents a novel method to enhance the performance of LLMs in complex problem-solving tasks using Chain of Preference Optimization (CPO). The authors propose fine-tuning LLMs using the search tree constructed by tree-of-thought (ToT), allowing CoT to achieve similar or better results without the heavy tree-searching during inference. The CPO method aligns each step of the CoT reasoning paths with those of ToT, leveraging the inherent preference information in the tree-search process. The authors provide experimental evidence of the effectiveness of CPO, showing significant improvements in LLM performance in tasks such as question answering, fact verification, and arithmetic reasoning.
Strengths: The paper presents a novel method, Chain of Preference Optimization (CPO), that significantly enhances the CoT reasoning ability of LLMs without increasing the inference load. It offers a convincing comparison with the ToT method, demonstrating that CPO substantially reduces inference time, as evidenced by the decrease in latency from 1749s/ins to 38s/ins on Llama2-7B. Furthermore, the paper shows that CPO not only matches but surpasses the ToT baseline in terms of accuracy across multiple datasets. This indicates that CPO effectively fine-tunes LLMs to generate more optimal reasoning paths, thereby improving their performance in complex problem-solving tasks. The research is significant as it provides a practical solution to the challenge of balancing inference complexity and reasoning quality in LLMs.
Weaknesses: The paper presents a promising approach to fine-tuning LLMs using CPO. However, there is a significant gap in the evaluation of the potential impacts of this fine-tuning process on other abilities of the pretrained LLMs. This omission could leave readers questioning the broader implications and versatility of the proposed method. Future work should consider a comprehensive evaluation of the fine-tuning process to fully understand its effects on the LLMs' overall performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: LN35, what is the advantage of CPO compared to existing MCTS methods? And why?
LN117, have you considered online RL that regenerates the chain-of-preferences using the updated LLM?
Table1, I'm wondering how does the method perform on common reasoning tasks like GSM8K and ProofWriter?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and questions. Below, we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***.
---
***W1: Evaluation of the potential impacts of fine-tuning process on LLM's other abilities***
In response to your suggestion, we conducted additional experiments using the LLaMA2-7b-base model to assess how fine-tuning on one specific task influences performance on other tasks. The results are presented in the table below:
|Training|Test|||||||
| ---- | ---- | ---- | ---- | ----| ---- | ----| ---- |
||Bam.|2wiki.|hot.|Fever|Feverous|Vitaminc|SVAMP|
|-|29.6|26.3|21.0|45.8|44.3|47.3|37.7|
|Bamboogle|**32.0**|23.3|19.7|45.5|44.3|47.1|41.7|
|Fever|28.8|25.4|22.3|**53.2**|47.0|49.3|40.3|
In the table, '-' represents the base model without any fine-tuning. **Bold** indicates testing on the same task as the model was fine-tuned on.
The results reveal that while fine-tuning on a specific task often improves performance on that task, it can lead to decreased performance on other tasks. To address this, we are considering strategies such as using a more diverse mixture of data for fine-tuning and incorporating additional regularization techniques. Interestingly, the improved reasoning ability may also generalize to other domains (e.g., the improved performance on the SVAMP dataset after fine-tuning).
We will include discussions in the revised version. Thanks for your suggestions.
---
***Q1: LN35, what is the advantage of CPO compared to existing MCTS methods?***
Our CPO utilizes computationally expensive ToT to construct preference data for optimization during the training phase, and can efficiently generate answers through greedy decoding during testing.
ToT is a method that augments the reasoning capabilities of LLMs by using tree search algorithms (i.e., BFS and DFS) to guide multi-step reasoning. While existing methods also explore using MCTS (Monte Carlo Tree Search) to guide LLM reasoning, our choice of ToT offers several advantages:
1. Self-Evaluation: ToT leverages the LLM itself to evaluate its generated reasoning paths, eliminating the need for labeled data to train an additional value function or reward model, which MCTS requires.
2. Performance: According to our preliminary exploration, MCTS guided LLM decoding does not consistently outperform ToT, which is also observed by [14].
3. Efficiency: MCTS is generally more resource-intensive than the search algorithm used in ToT (i.e., BFS and DFS), especially in terms of computational resources and time, because it involves a large number of random simulations. Moreover, ToT, with its pruning mechanisms, offers a more efficient tree search process by reducing unnecessary explorations.
---
***Q2: LN117, considering online RL that regenerates the data using the updated LLM.***
We chose not to employ online RL due to the significant computational overhead it incurs, as each update step involves sampling from the current policy model.
Instead, we opted for an iterative approach as a compromise between online and offline settings. This method reduces computational demands while still allowing for periodic updates from the policy model. Details on our iterative setting can be found in Appendix C of our paper.
---
***Q3: Table1, CPO performance on GSM8K and ProofWriter.***
Following your suggestions, we have included our results on GSM8K and ProofWriter in $\\textrm{\\color{blue}Table B}$ of the Rebuttal PDF. For the ProofWriter dataset, we sampled 300 instances for testing, as justified in our response to ***W1*** of **Reviewer VaVd**. For the GSM8k dataset, we evaluated the models using the full test set. As shown in the Table, CPO enhances performance by 1.5% on GSM8K and 1.1% on ProofWriter compared to CoT, demonstrating its effectiveness on these common reasoning tasks.
---
Rebuttal Comment 1.1:
Comment: Thanks the author for the update on the potential impacts of fine-tuning process, which confirms my concerns. I'd like to keep my current score.
---
Reply to Comment 1.1.1:
Comment: We appreciate your detailed comments and suggestions. In the revision, we will include the new results and the discussion on the potential impacts of fine-tuning process. Thank you! | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive feedback, and we have responded to each reviewer individually. We have also uploaded a Rebuttal PDF that includes:
- $\\textrm{\\color{blue}Figure A}$: Effect of the number of instances in generating paired thoughts;
- $\\textrm{\\color{blue}Figure B}$: Effect of dispreferred thoughts in optimization;
- $\\textrm{\\color{blue}Figure C}$: Component-wise Evaluations and Analysis;
- $\\textrm{\\color{blue}Table A}$: Evaluation using entire test sets;
- $\\textrm{\\color{blue}Table B}$: Experiment results: GSM8K on LLaMA2-7B; others on LLaMA3-8B;
- $\\textrm{\\color{blue}Table C}$: Sensitivity of data mixture;
- $\\textrm{\\color{blue}Table D}$: F1 scores on QA datasets;
- $\\textrm{\\color{blue}Table E}$: Illustrative examples of the reasoning paths preferred by CPO.
Pdf: /pdf/1ea65488bb2b97f7d1c3c5faea75bcdc004f963b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper presents Chain-of-preference optimization (CPO) a self-supervised learning extension of Tree of Thought (ToT). Rather than use ToT during test time, which takes exponentially longer than end-to-end sampling, this paper proposes to use ToT at training time to annotate data for DPO fine-tuning and then use the DPO-tuned model end-to-end at test time. Notably, the DPO annotations are performed by the untuned model itself being prompted to label inferences as either useful or not. This differs from other approaches that are trained on only full successful reasoning paths.
For certain QA and reasoning tasks, the paper presents experimental evidence that CPO yields stronger models than either (1) fine-tuning the same LM on positive examples (TS-SFT) as done in previous papers or (2) running regular chain-of-thought prompting.
Strengths: 1. The approach is an intuitive and straightforward improvement to multi-hop reasoning that that is much faster than tree-based inference procedures.
2. The approach is appealing as it doesn't rely on correct labels nor external LLMs other than the LLM being fine-tuned on its own preference data.
3. The methodology is well explained and easily reproducible.
Weaknesses: 1. While the authors run evaluation over several datasets, they only consider 300 questions per dataset which is quite small. A stronger evaluation would use the whole test sets.
2. Details about the baseline implementation are unclear. The authors refer to both "TS-SFT" and "TS-LLM" (l.206) which in the original paper refer to different things-- SFT refers to the model fine-tuned on training examples with reasoning traces pulled from both a gold-annotated dataset and/or reasoning traces from the model that led to correct answers. However, TS-LLM in the original paper refers to the result of an iterative refinement process.
3. The choice of datasets is somewhat odd. E.g. HotpotQA is meant to be evaluated against support documents, which might explain the very low scores. They also compare against approaches (ToT, TS-SFT) that were not evaluated on any of the QA or Fact Verification tasks under consideration. This paper did _not_ consider the Game of 24 or GSM datasets, which both ToT and TS-SFT did evaluate.
Missing references:
1. Khalifa et al 2023: [GRACE: Discriminator-Guided Chain-of-Thought Reasoning](https://aclanthology.org/2023.findings-emnlp.1022.pdf)
2. Li et al 2023: [Learning Math Reasoning from Self-Sampled Correct and Partially-Correct Solutions](https://openreview.net/pdf?id=4D4TSJE6-K)
Technical Quality: 3
Clarity: 4
Questions for Authors: * What metrics did you use for the QA datasets? many of them use more than exact match, e.g. HotpotQA uses F1.
* The discussion around ablations in 5.3 is unclear. Are the trends consistent across different models and datasets?
* It is odd to have only considered up to 200 instances for constructing preference pairs. Why is this experiment limited to such a small maximum? The difference between e.g. 160 and 200 seems much less important than the difference between 200 and 1000 or 1000 and 5000.
* How many total preference pairs do you end up training on? It would be helpful to include this number in the main body of the paper.
* How stable is this approach to different prompts for state evaluation? Did you experiment with other prompts?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and questions. Below, we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***.
---
***W1: Evaluation using entire test sets***
Thank you for your suggestion. We selected evaluation sets of 300 questions per dataset to manage the high computational demands of evaluating ToT (more than 230,000 A100 GPU hours across seven full test sets and three base models). Sampling a relatively small subset is consistent with common practices in the field; for instance, ToT by Yao et al. [46], Self ask by Press et al. [29], Verify-and-edit by Zhao et al. [50], and Contrastive CoT by Chia et al. [A] also utilize small evaluation sets or sampled subsets for similar reasons. We will release the subset we used to make our results reproducible.
We also acknowledge that using small evaluation sets could introduce variance in the results. Following your suggestion, we have included evaluations on the entire test sets in $\\textrm{\\color{blue}Table A}$ of the Rebuttal PDF. Currently, we are only able to provide results for CPO and two baselines (i.e., CoT and TS-SFT), as the computational cost for evaluating ToT is substantial.
Our finding is that the relative improvement of CPO over the baselines is consistent with our previous results. We will incorporate these expanded evaluations into our revised paper.
---
***W2: Details about the baseline implementation & terminology***
We apologize for the confusion. Throughout our paper, TS-SFT refers to the model fine-tuned on training examples with reasoning paths discovered by ToT, as described in L176-178. Note that these reasoning paths do not necessarily lead to correct answers, as we do not assume access to ground truth in our setting. The term TS-LLM in L206 was a typo and will be corrected to TS-SFT.
We will clarify these in the revision and consistently use the term TS-SFT to avoid misunderstandings. Thank you for your attention to detail.
---
***W3: Choice of datasets and comparison approaches***
We understand the concerns about our selections and the comparability of our results with established benchmarks.
1. HotpotQA is typically evaluated against Wikipedia pages. We selected this dataset because LLMs are generally pre-trained on Wikipedia [B]. This aligns with prior work, e.g., [C], that tests LLMs on HotpotQA by relying solely on the information encoded in the model's parameters.
2. Our choice of tasks was driven by the performance of the models using the ToT method, which showed improvements on QA and Fact Verification.
3. Based on your suggestion, we have added performance for the LLaMa2-7B base model on the full GSM8k test set in $\\textrm{\\color{blue}Table B}$ of the Rebuttal PDF. As shown in the table, on LLaMA2-7b-base, CPO improves performance by 1.5% on GSM8K compared to CoT. Regarding Game24, we find ToT achieves less than 3% accuracy with LLaMA3-8b-base (also observed by [D]). Our method, which relies solely on the inherent capabilities of the LLM for self-improvement without assuming access to human-annotated data like [14], faces significant challenges in improving upon these figures. In addition, while ToT was tested on Game24 using GPT-4 in the original paper, GPT-4 does not currently offer an interface that supports DPO fine-tuning, precluding our ability to apply our method in that context.
---
***Q1: Metrics for QA datasets***
We primarily reported accuracy as our metric (line 171) following [29]. Additionally, we added F1 scores for the three QA tasks in $\\textrm{\\color{blue}Table D}$ of the Rebuttal PDF. As shown in the table, the performance in terms of F1 scores is consistent with its corresponding accuracy.
---
***Q2: Ablations in 5.3: Are the trends consistent across different models and datasets?***
Following your suggestion, we have included ablations and analysis across different models and datasets in $\\textrm{\\color{blue}Figure A, B, C(a) and C(b)}$ and $\\textrm{\\color{blue}Table C}$ of the Rebuttal PDF. We find the trends are generally consistent across different models and datasets.
---
***Q3: Limitation of 200 instances for constructing preference pairs***
We construct preference pairs with up to 200 instances for the following reasons:
1. In our experiments, approximately 200 samples (e.g., questions in the QA task) on average could generate about 6,531 preference pairs, suggesting that our CPO requires only a small number of samples by design.
2. Constructing preference data is a time-intensive process. The choice of 200 samples represents a practical trade-off between efficiency and effectiveness, allowing us to manage resources effectively while still achieving noticeable improvements.
Following your suggestions, we added an analysis of performance trends from 0 to 3,000 samples in $\\textrm{\\color{blue}Figure C(c)}$ in the Rebuttal PDF. We observed that while performance increases with more samples, the preference data constructed from 200 samples already provides a significant boost in model performance.
---
***Q4: Total number of preference pairs used in training***
On average, we trained on 6,531 preference pairs across all three LLMs and seven datasets. We appreciate your suggestion and will ensure to include this specific number in the main body of the paper for clarity and completeness.
---
***Q5: Stability of approach to different prompts for state evaluation***
In our prior experiments, we tried different prompting templates for evaluation. We slightly changed the prompts by 1) adding more suggestive information, such as "a good annotator will classify it as"; and 2) replacing prompts with synonymous sentences. We found that the reward score returned by the LLM is robust to such changes. Balancing performance against the length of the different templates, we chose the current prompting template. We will include this discussion in the revision.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: Thank you to the authors for their careful rebuttal. I still have concerns about the evaluation given the authors have been selective about which results were shown in the rebuttal PDF.
> we have included evaluations on the entire test sets in Table A
This is greatly appreciated, though this only includes 3 of the 7 considered datasets in the paper. Were the results on the other 4 datasets not positive findings? Running on those datasets is clearly within scope because you included SVAMP and Fever in Table C.
> added performance for the LLaMa2-7B base model on the full GSM8k test set in Table B
Thank you as well for this result. Table B is a good start, but these numbers do not constitute full results that could be added to the paper. Why is it missing the other baselines? I appreciate the improvement of about a point over CoT. However, TS-SFT (and ToT) consistently improve upon CoT. For the other datasets you were outperforming CoT by 3-7 points pretty consistently, but for GSM it is only 1 point. Are the TS-SFT and ToT baselines somewhere inside this 1-point spread (raising questions about statistical significance), or are they stronger than CPO?
Moreover, others have found (e.g., in the [Gemma report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf)) that Llama-2-7B matches your 14.6% performance on GSM using just few-shot prompting. This suggests that CPO is not providing any benefits.
> [Ablations]
Thank you for these results, they have somewhat addressed my concern. Which datasets are the Figure C plots computed over?
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thank you for your detailed feedback. We appreciate your thorough review and would like to address each of your comments below.
---
>Were the results on the other 4 datasets not positive findings?
The full test sets for Fever, Feverous, and Vitaminc are indeed quite large, with approximately 10,000 instances each for Fever and Feverous, and over 50,000 for Vitaminc. Due to time constraints during the rebuttal period, we initially prioritized reporting performance within the QA domain. However, we have since conducted additional experiments, and we now present the results on all seven datasets:
| Model | Method | Bam | 2wiki | hotpotqa | fever | feverous | vitaminc | svamp | Avg. |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| LLaMa2-7b | CoT | 29.6 | 23.8 | 20.7 | 38.6 | 51.2 | 50.5 | 37.7 | 36.0 |
| | TS-SFT | 30.4 | 24.1 | 22.7 | 40.3 | 53.0 | 53.0 | 43.1 | 38.1 |
| | CPO | **32.0** | **26.1** | **24.0** | **45.1** | **56.2** | **55.2** | **46.0** | **40.7** |
| LLaMa2-13b | CoT | 48.0 | 28.4 | 27.0 | 47.4 | 49.9 | 50.8 | 40.3 | 41.7 |
| | TS-SFT | 50.8 | 29.0 | 28.1 | 48.0 | 48.8 | 53.8 | 44.6 | 43.3 |
| | CPO | **52.0** | **30.3** | **28.5** | **49.7** | **50.7** | **59.6** | **50.0** | **45.8** |
| Mistral-7b | CoT | 41.6 | 27.5 | 24.8 | 53.6 | 54.0 | 53.8 | 65.3 | 45.8 |
| | TS-SFT | 41.6 | 30.3 | 25.1 | **56.0** | 55.0 | 57.1 | 59.0 | 46.3 |
| | CPO | **45.6** | **31.3** | **25.9** | 55.7 | **60.1** | **60.0** | **69.3** | **49.7** |
Our findings indicate that the relative improvement of CPO over the baselines is consistent with our previously reported results. We will clarify this and incorporate these expanded evaluations into the revised paper.
---
> Why is Table B missing the other baselines?
Due to time constraints during the rebuttal period, we prioritized demonstrating CPO's potential to improve performance on new tasks, including GSM8k, rather than emphasizing its superiority over other methods. However, as per your suggestion, we have now included the experimental results for ToT and TS-SFT on GSM8k. Since some settings need to be clarified before presenting the results, please refer to our response to the following comments for detailed results and conclusions.
---
>Moreover, others have found (e.g. in the Gemma report that Llama-2-7B matches your 14.6% performance on GSM using just few-shot prompting. This suggests that CPO is not providing any benefits.
The performance of 8-shot prompting on GSM8k using LLaMA-2-7B, as reported in the Gemma report, is comparable to our CPO using 4-shot prompting (see lines 169-170). We extended our evaluation to 8-shot prompting here, consistent with the Gemma report settings:
||ToT|CoT|TS-SFT|CPO|
| ---- | ---- | ---- | ---- |---- |
|GSM8k|16.2|14.2|14.8|15.3|
We found that our CoT approach achieves performance similar to that reported in the Gemma report, with CPO further improving the result to 15.3%.
---
> Are the TS-SFT and ToT worse than CPO (raising questions about statistical significance), or are they stronger than CPO?
As shown in the table provided in our previous response, CPO outperforms TS-SFT, as confirmed by a bootstrap significance test, which yielded a p-value of 0.0011 (p < 0.05), indicating a statistically significant difference. However, ToT is superior to CPO. This result is expected, as ToT serves as the teacher method for our CPO method, guiding its improvements.
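For reference, a paired bootstrap test of the kind mentioned above can be sketched as follows. This is an illustrative sketch only, not the authors' evaluation code; the per-example correctness vectors and the resample count are made-up assumptions for exposition.

```python
import random

def paired_bootstrap_pvalue(wins_a, wins_b, n_boot=10_000, seed=0):
    """One-sided paired bootstrap p-value for "system A is better than B".

    wins_a / wins_b are 0/1 correctness indicators for the two systems on the
    same test instances; the p-value is the fraction of resamples in which A
    fails to beat B."""
    assert len(wins_a) == len(wins_b)
    rng = random.Random(seed)
    n = len(wins_a)
    worse = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample instances with replacement
        if sum(wins_a[i] for i in idx) <= sum(wins_b[i] for i in idx):
            worse += 1
    return worse / n_boot

# Hypothetical data: A correct on 70/100 items, B on 50 of the same 100 items.
a = [1] * 70 + [0] * 30
b = [1] * 50 + [0] * 50
print(paired_bootstrap_pvalue(a, b))  # small p-value => significant difference
```

Resampling test instances (rather than resampling the two systems independently) preserves the pairing, which is what makes the test sensitive to per-example agreement between the systems.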
---
>Which datasets are the Figure C plots computed over?
For $\\textrm{\\color{blue}Figure C (a)}$ and $\\textrm{\\color{blue}(b)}$, as labeled on the x-axis, the results are reported for the Bamboogle and Fever datasets, respectively. For $\\textrm{\\color{blue}Figure C (c)}$, we present the results on the Bamboogle dataset. We will ensure clarity when we include these figures in the final revision. The reason for reporting results on only one dataset is the computational cost associated with using ToT for dataset construction (e.g., over 1740 A100 GPU hours for LLaMA 2-7B on the Fever dataset). However, we are continuing to conduct this ablation and will include the results in the revised version.
---
We hope these efforts address your concerns. If you have any further feedback, we will do our best to respond.
---
Rebuttal 2:
Comment: **References**
[A] Chia et al. Contrastive Chain-of-Thought Prompting, arXiv:2407.03600
[B] Shi et al. Detecting Pretraining Data from Large Language Models, ICLR 2024.
[C] Wang et al. Causal-driven Large Language Models with Faithful Reasoning for Knowledge Question Answering, MM2024
[D] Yang et al. Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models, arXiv:2406.04271
[E] Zhou et al. LIMA: Less Is More for Alignment, arXiv:2305.11206 | null | null | null | null | null | null |
Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer | Accept (poster) | Summary: This paper aims to address the issue of over-optimization in RLHF. The authors introduce a method named RPO which concurrently minimizes the maximum likelihood estimation of the loss alongside a reward penalty term. Not only do the authors demonstrate that the proposed method is sample-efficient, but they also outline a straightforward yet effective implementation strategy. Experimental results underscore the efficiency of the method.
Strengths: 1. The research introduces a novel RLHF method specifically designed to tackle the issue of over-optimization.
2. Despite its simplicity in implementation, the method proves to be highly effective.
3. The authors support their proposed method with thorough theoretical analysis, convincingly demonstrating that it benefits from finite-sample convergence guarantees.
Weaknesses: 1. Additional experiments across a broader range of scenarios are required to more comprehensively demonstrate the method's efficiency.
2. The current experiments are limited to evaluations using GPT and log probability, which do not offer intuitive insights into the over-optimization problem. In other words, it remains unclear whether the observed improvements in performance truly indicate a mitigation of the over-optimization issue. A more detailed analysis, perhaps focusing on rewards, could provide the necessary clarity.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The SFT loss appears similar to the commonly used PTX loss in [1]. Could you please elucidate their relationships and distinctions?
2. How is the true reward (human reward) depicted in Figure 1 on the left derived?
3. How does the proposed method address scenarios involving multiple reward models?
[1] Stanford alpaca: An instruction-following llama model.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Additional experiments are required to more comprehensively demonstrate the method's efficiency**
**A1:** Please refer to the **General Response**.
**Q2: The experiments are limited to evaluations using GPT and log probability. It remains unclear whether the observed improvements in performance truly indicate a mitigation of the over-optimization issue.**
**A2:** Thanks for the question! First, we would like to point out that the actions and their chosen probabilities **can be** interpreted as a proxy for analyzing the underlying (estimated) reward model $\widehat{r}$ [1], due to the representation $\pi_{\widehat{r}}(a|x)\propto\pi^{\mathrm{ref}}(a|x)\exp(\beta^{-1}\widehat{r}(x,a))$. Analyzing the (log) probabilities of the actions can therefore detect the mitigation of over-optimization: according to the representation, an overestimated reward for a poor action results in a higher probability of choosing that action, and also causes a decay in the probability of choosing other, better actions (since the probabilities are normalized to $1$).
To further showcase the ability of RPO to address overoptimization (through the lens of probabilities), consider the following theoretical example with only three actions [2], where we can track everything clearly. There are three actions $\{a, b, c\}$ with $R^\star(a) = 1$, $R^\star(b)=0.5$, $R^\star(c)=0$. The reference policy is $\pi^{\mathrm{ref}}(a)=\pi^{\mathrm{ref}}(b)=0.4$, $\pi^{\mathrm{ref}}(c)=0.1$, and the dataset consists of one data point $\mathcal{D} = (a,b,1)$ (meaning action $a$ is preferred in the data). Then any policy $\pi_{\mathrm{DPO}}$ with $\pi_{\mathrm{DPO}}(b)=0$ ideally solves the DPO objective, and the value of $\pi_{\mathrm{DPO}}(a)$ can be chosen arbitrarily in $[0,1]$. Thus a possible solution to DPO is $\pi_{\mathrm{DPO}}(a)=0.5$, $\pi_{\mathrm{DPO}}(b)=0$, and by the normalizing condition $\pi_{\mathrm{DPO}}(c)=0.5$, which is undesirable since action $c$ has reward $R^{\star}(c)=0$. In contrast, solving the RPO objective additionally requires the maximization of $\pi_{\mathrm{RPO}}(a)$ due to the SFT regularization term, and thus the solution is shifted towards $\pi_{\mathrm{RPO}}(a)=1$, $\pi_{\mathrm{RPO}}(b)=\pi_{\mathrm{RPO}}(c)=0$, which is better than the DPO policy. Thus, RPO prevents overoptimization towards poor actions that are less covered by the dataset (action $c$ here), resulting in a better policy.
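This tie-breaking effect can be checked numerically. Below is a minimal sketch (not the paper's implementation) that evaluates a single-pair DPO loss and its SFT-regularized counterpart at the two candidate policies from the example; the values of $\beta$, the SFT weight, and the small $\epsilon$ guarding $\log 0$ are illustrative choices.

```python
import math

beta, eta, eps = 0.1, 1.0, 1e-9  # illustrative hyperparameters

# Reference policy and the two candidate solutions from the example above.
pi_ref = {"a": 0.4, "b": 0.4, "c": 0.1}
pi_bad = {"a": 0.5, "b": 0.0, "c": 0.5}   # a DPO optimum that wastes mass on c
pi_good = {"a": 1.0, "b": 0.0, "c": 0.0}  # the policy the SFT term selects

def dpo_loss(pi, chosen="a", rejected="b"):
    # DPO loss on the single preference pair (a preferred over b).
    margin = beta * (math.log((pi[chosen] + eps) / pi_ref[chosen])
                     - math.log((pi[rejected] + eps) / pi_ref[rejected]))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def rpo_loss(pi):
    # RPO-style objective: DPO loss plus an SFT term (NLL of the chosen action).
    return dpo_loss(pi) + eta * (-math.log(pi["a"] + eps))

# As eps -> 0, both candidates drive the DPO loss to its minimum,
# but the SFT regularizer strictly prefers the policy with pi(a) = 1.
print(dpo_loss(pi_bad), dpo_loss(pi_good))
print(rpo_loss(pi_bad), rpo_loss(pi_good))
```

The two DPO losses are nearly identical (both pairs send the preference margin toward its maximum), while the regularized losses differ sharply, matching the argument that the SFT term shifts mass away from the uncovered action $c$.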
**Q3: Could you please elucidate the relationships and distinctions between PTX loss in [2] and your SFT loss?**
**A3:** Thanks for the question! The original PTX loss is an imitation loss calculated on the pretraining data. In contrast, the SFT loss in the RPO objective is an imitation loss calculated on the RLHF dataset. More specifically, our experiments use this SFT loss to imitate the chosen responses in the RLHF dataset. The relationship is that both are imitation losses that aim to mimic a certain data distribution; the distinction is that they are calculated on different data sources. Moreover, the SFT loss in the RPO objective arises naturally from our theoretical algorithm and provably serves as an important regularization term to mitigate overoptimization in offline RLHF. We will make this comparison clearer in the revision.
**Q4: How is the true reward (human reward) depicted in Figure 1 on the left derived?**
**A4:** Thanks for pointing this out! Figure 1 (left) is an illustrative figure to showcase the mechanism behind overoptimization as a consequence of distributional shift. The rewards therein do not correspond to actual human rewards and are plotted for illustrative purposes. Meanwhile, as we demonstrated in the answer to **Q2**, RPO can effectively address the overoptimization depicted in Figure 1 (left), where data coverage is insufficient. We will make this clearer in the revision.
**Q5: How does the proposed method address scenarios with multiple reward models?**
**A5:** Thanks for raising this interesting question! How our method addresses the overoptimization issue in this scenario depends on the specific learning target in the face of multiple objectives; see, e.g., [4, 5, 6] and references therein.
For instance, when the goal is to find the optimal policy that maximizes a linearly scalarized reward model [4, 6], the idea of RPO suggests using the linearization of the multiple reward models to be learned as the regularizer in the objective (3.2), which roughly gives the objective
$$\max_{\pi}\min_{r^1\in\mathcal{R}^1,\cdots,r^m\in\mathcal{R}^m}\eta\mathbb{E}_{a^1\sim\pi,a^0\sim\pi^{\mathrm{base}}}[\mathbf{w}^\top\mathbf{r}(x,a^1)-\mathbf{w}^\top\mathbf{r}(x,a^0)]-\beta\mathrm{KL}(\pi\\|\pi^{\mathrm{ref}}) + \mathcal{L}\_{\mathcal{D}^1,\cdots,\mathcal{D}^m}(\mathbf{r}).$$
Inspired by our theory, such an algorithm can find the optimal policy as long as each of the responses in the data can cover the target policy in terms of the linearized reward (see Assumption 5.2, where the test function $r$ is replaced by the scalarization of the multiple rewards), thus overcoming the issue of overoptimization in offline RLHF. We leave the study of RPO for such multiple reward models as our future work.
**References:**
[1] Rafailov, Rafael, et al. "Direct preference optimization: Your language model is secretly a reward model." NeurIPS 36 (2024).
[2] Xu, Shusheng, et al. "Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study." 41th ICML.
[3] Stanford alpaca: An instruction-following llama model.
[4] Zhou, Zhanhui, et al. "Beyond one-preference-for-all: Multi-objective direct preference optimization." ArXiv:2310.03708 (2023).
[5] Chakraborty, Souradip, et al. "MaxMin-RLHF: Towards equitable alignment of large language models with diverse human preferences." ArXiv:2402.08925 (2024).
[6] Yang, Rui, et al. "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment." 41th ICML.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will keep my score positive.
---
Reply to Comment 1.1.1:
Title: Reply to the Official Comment by Reviewer 1DUS
Comment: Dear Reviewer 1DUS,
Thank you for your review and support. We will incorporate your valuable suggestions into our paper as we revise it based on the feedback from all reviewers. Your comments greatly assist us in strengthening the overall quality of our work.
Best regards, Authors | Summary: The paper introduces the concept of RPO, which combines DPO loss with SFT loss. This approach aims to align the policy with human preferences while simultaneously imitating a baseline distribution, effectively mitigating overoptimization. Empirical results from experiments with LLMs demonstrate that RPO outperforms traditional DPO methods, showcasing the practical applicability of the proposed algorithm.
Strengths: The paper provides a robust theoretical framework for addressing overoptimization in RLHF. By identifying the source of misalignment as distributional shift and uncertainty, it offers a principled approach to the problem
The algorithm includes a reward penalty term to prevent the policy from exploiting spurious high proxy rewards, resulting in provable sample efficiency under partial coverage conditions
The paper provides empirical evidence demonstrating that RPO improves performance compared to DPO baselines in aligning LLMs. This practical validation strengthens the theoretical claims made in the study
Weaknesses: The theoretical guarantees provided by the algorithm rely on specific conditions, such as partial coverage. These conditions might not always hold in practical scenarios, potentially limiting the generalizability of the results.
The SFT loss + DPO seems very intuitive.
Technical Quality: 3
Clarity: 3
Questions for Authors: It is commonly believed that math problems are not well suited to vanilla DPO. Some practitioners have found that similar algorithms help performance. Have you evaluated the proposed algorithm on reasoning benchmarks?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The theoretical guarantees provided by the algorithm rely on specific conditions such as *partial coverage*, which might not always hold in practical scenarios, potentially limiting the generalizability of the results.**
**A1:** Thanks for raising the question. We would like to comment that all the assumptions we imposed to obtain the theoretical guarantees are quite standard in the RL theory literature. Actually, our theory features a *minimal assumption* on the data distribution (the partial coverage assumption) thanks to our algorithm design. Similar kinds of data assumptions also appear in recent theoretical works on RLHF, e.g., [1, 2, 3].
To explain more, the partial coverage assumption (Assumption 5.2) only requires the dataset to cover the policy $\pi$ to compete. As is shown by [4], the partial-coverage-style data assumption is the minimal assumption such that provably sample-efficient offline RL is possible. That being said, this assumption is actually a weak condition in terms of the data distribution, especially in comparison with the stronger notion of uniform coverage [5, 6, 7] where the offline dataset needs to cover all possible policies. Moreover, our theory works in the regime of general function approximation (instead of linear regimes [1, 3]), which also exhibits its generality.
But still, when going from theory to practice, the implementation of our algorithm RPO itself does not require the knowledge of these assumptions. It can be directly applied to handle overoptimization in RLHF for real-world problems. This is demonstrated by the effectiveness of RPO in LLM fine-tuning shown in the paper.
**Q2: The proposed algorithm (SFT loss + DPO loss) seems very intuitive.**
**A2:** Yes! The resulting algorithm does look very intuitive, which is actually an *advantage* of our algorithm design. Typically, to *provably* address overoptimization in offline RLHF with general function approximation, a theoretical algorithm relies on solving complicated non-convex constrained optimizations over certain confidence regions [1, 2], which is prohibitive to scale to practice such as LLMs without modifications or adaptations.
In contrast, our proposed theoretical objective (Algorithm 1) naturally induces the simple but equivalent form of RPO (Algorithm 2) after delicate mathematical deductions (see Section 4). That is, it suffices to add an SFT loss to the preference optimization loss to implement the theoretical algorithm in an equivalent manner. Therefore, the simple form of RPO, as well as the theoretical guarantees it enjoys, is one of our main contributions. Our experimental results also demonstrate its effectiveness despite its simple form.
Finally, we notice that adding an SFT-style loss as a regularizer in the RLHF objective is becoming more and more popular, and it has been adopted in the fine-tuning of Llama 3.1 [8] (see Section 4.1.4). Given that, our work also serves as a theoretical foundation for such an effective practice in large-scale RLHF.
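For concreteness, the per-example form of such a regularized objective can be sketched as follows. This is a hedged sketch in plain Python, not the authors' code: the argument names, $\beta$, and the SFT weight are illustrative assumptions, and in practice the log-probabilities would be token-level sums computed by the policy and the frozen reference model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rpo_example_loss(logp_chosen, logp_rejected,
                     ref_logp_chosen, ref_logp_rejected,
                     beta=0.1, sft_weight=1.0):
    """DPO loss plus an SFT regularizer on the chosen response.

    Each argument is the summed log-probability of a response under the
    policy or the reference model; names and defaults are illustrative,
    not the paper's exact hyperparameters."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    dpo = -math.log(sigmoid(margin))
    # The SFT term reuses the chosen log-probability already computed for
    # the DPO term, so it adds no extra forward passes.
    sft = -logp_chosen
    return dpo + sft_weight * sft

# Example: the policy slightly prefers the chosen response over the reference.
loss = rpo_example_loss(-10.0, -12.0, -11.0, -11.5, beta=0.1, sft_weight=0.1)
```

Raising the chosen response's log-probability lowers both terms at once, which is the one-line change relative to plain DPO.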
**Q3: About the reasoning benchmarks of the proposed algorithm**
**A3:** Please refer to the **General Response**.
**References:**
[1] Zhu, B., Jordan, M., & Jiao, J. (2023, July). Principled reinforcement learning with human feedback from pairwise or k-wise comparisons. In International Conference on Machine Learning (pp. 43037-43067). PMLR.
[2] Zhan, Wenhao, et al. "Provable Offline Preference-Based Reinforcement Learning." The Twelfth International Conference on Learning Representations.
[3] Xiong, W., Dong, H., Ye, C., Wang, Z., Zhong, H., Ji, H., ... & Zhang, T. (2024). Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint. In Forty-first International Conference on Machine Learning.
[4] Jin, Ying, Zhuoran Yang, and Zhaoran Wang. "Is pessimism provably efficient for offline rl?." International Conference on Machine Learning. PMLR, 2021.
[5] Munos, Rémi. "Error bounds for approximate policy iteration." ICML. Vol. 3. 2003.
[6] Chen, Jinglin, and Nan Jiang. "Information-theoretic considerations in batch reinforcement learning." International Conference on Machine Learning. PMLR, 2019.
[7] Xie, Tengyang, and Nan Jiang. "Q* approximation schemes for batch reinforcement learning: A theoretical comparison." Conference on Uncertainty in Artificial Intelligence. PMLR, 2020.
[8] Dubey, Abhimanyu, et al. "The Llama 3 Herd of Models." arXiv preprint arXiv:2407.21783 (2024).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have revised the score.
---
Reply to Comment 1.1.1:
Title: Reply to the Official Comment by Reviewer 9VCJ
Comment: Dear Reviewer 9VCJ,
Thank you for your review and support. We will incorporate your valuable suggestions into our paper as we revise it based on the feedback from all reviewers. Your comments greatly assist us in strengthening the overall quality of our work.
Best regards, Authors | Summary: The paper "Provably Mitigating Overoptimization in RLHF" addresses the issue of overoptimization in aligning large language models (LLMs) with human preferences using reinforcement learning from human feedback (RLHF).
The main contributions include:
1. Identification of Overoptimization Source: The paper identifies the source of reward overoptimization as a distributional shift and uncertainty in learning human preferences.
2. Theoretical Algorithm Proposal: It proposes a theoretical algorithm that minimizes the maximum likelihood estimation of the loss and a reward penalty term to mitigate overoptimization, ensuring provable sample efficiency.
3. Practical Implementation: The algorithm is reformulated into an easy-to-implement objective combining a preference optimization loss and a supervised learning loss, named Regularized Preference Optimization (RPO), demonstrating improved performance in aligning LLMs compared to existing methods.
Strengths: This paper not only provides rigorous analysis but also has solid experiments to solve the overoptimization problem for DPO.
Weaknesses: The partial coverage condition lacks discussion, since it is now over a pair $(\pi,\pi^{base})$, which differs from the traditional coverage condition. For example, in the linear case, $C_{\mu_D}$ would approximately become
$$
\mathbb{E}_{x,a^1\sim\pi^*,a^0\sim\pi^{pref}}\sqrt{(\phi(x,a^1) - \phi(x,a^0))^{\top} \Sigma_D^{-1} (\phi(x,a^1) - \phi(x,a^0))},
$$
where $\pi^{pref}$ denotes the distribution of the chosen samples. We know that $\Sigma_D$ is composed of pairs of chosen and unpreferred samples, but $(\phi(x,a^1) - \phi(x,a^0))$ is the pair of the optimal policy and the policy representing the chosen samples. Hence, if we want to compete with a policy $\pi$ better than $\pi^{chosen}$, how can the direction of $(\pi,\pi^{chosen})$ be covered by $(\pi^{unpreferred},\pi^{chosen})$?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I just wonder about the extension to online RLHF. For online RL, based on the optimism principle, it seems that then the objective should be subtracted from the SFT loss, which obliviates the wish to avoid overoptimization. So how to balance the exploration and avoiding overoptimization for the online setting?
2. What is the additional computational complexity brought by the gradient of SFT loss? Besides, the author doesn't mention how to approximate the gradient of SFT loss since there are expectations.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The partial coverage condition lacks discussions since now it's a pair over $(\pi,\pi^{base})$, which is different from the traditional coverage condition. Hence, if we want to compete with a policy $\pi$ better than $\pi^{\mathrm{chosen}}$, how can the direction of $(\pi, \pi^{\mathrm{chosen}})$ be covered by $(\pi^{\mathrm{unchosen}},\pi^{\mathrm{chosen}})$?**
**A1:** Thanks for raising this question! and we appreciate your suggestion to include more detailed discussions and explanations on this partial coverage coefficient. Here, we briefly explain this condition and address your concern on its rationality.
Essentially, a sufficient condition to make this partial coverage condition (Assumption 5.2) hold is that the distribution of the offline dataset, which is $\mu_\mathcal{D}$, can well cover the joint distribution of $(a^1, a^0)\sim (\pi, \pi^{\mathrm{base}})$. We can focus on $\pi^{\mathrm{base}} = \pi^{\mathrm{chosen}}$ as we adopted in the paper.
First, we would like to clarify that the offline dataset distribution $\mu_\mathcal{D}$ is not simply $(a^1, a^0)\sim(\pi^{\mathrm{unchosen}}, \pi^{\mathrm{chosen}})$ as understood by the reviewer, since according to our definition (see Section 2) whether $a^1$ or $a^0$ is chosen is random and is determined by $y \in\{0, 1\}$ obeying the BT model. Thus, $(a^1, a^0)\sim \mu_\mathcal{D}$ can be interpreted as a mixture of $(\pi^{\mathrm{unchosen}}, \pi^{\mathrm{chosen}})$ and $(\pi^{\mathrm{chosen}}, \pi^{\mathrm{unchosen}})$. The mixture probability would not be too small as long as the quality of $(a^1, a^0)$ does not vary too much, i.e., both are possible to be chosen, which is the case in practice. As a result, under the offline data distribution $(a^1,a^0)\sim \mu_{\mathcal{D}}$, both $a^1$ and $a^0$ partly come from the chosen distribution $\pi^{\mathrm{chosen}}$.
Then, in order for $\mu_\mathcal{D}$ to cover the joint distribution of $(a^1, a^0)\sim (\pi, \pi^{\mathrm{base}})$, it suffices to argue that $\pi^{\mathrm{chosen}}$ covers the target policy $\pi$, which reduces back to the traditional coverage condition. Thus our assumption essentially requires only that $\pi^{\mathrm{chosen}}$ covers the target policy $\pi$. This coincides with the spirit of the minimal data assumption in offline RL theory, i.e., the so-called partial coverage condition.
We will make this clearer in the revision of our paper.
**Q2: For online RL, based on the optimism principle, it seems that then the objective should be subtracted from the SFT loss, which obliviates the wish to avoid overoptimization. So how to balance the exploration and avoiding overoptimization for the online setting?**
**A2:** Thank you for pointing this out! Online RLHF is actually a different theoretical setup from offline RLHF. The goal of *regret minimization* in online learning does not face the problem of overoptimization, because the data are not precollected but are collected and updated interactively. This in turn requires exploration and needs the algorithm to be optimistic.
When our technique is applied to online RLHF, it does induce a similar SFT loss subtracted from the preference optimization loss, but the baseline policy $\pi^{\mathrm{base}}$ in the SFT loss needs to be chosen carefully and is not necessarily $\pi^{\mathrm{chosen}}$ as in the offline setup. A possible candidate for the baseline policy is the LLM at the previous iteration (serving as the reference policy for the current iteration). In this way, the data distribution of the actions (responses generated by the currently learned LLM) can be gradually shifted towards that of the optimal actions.
Thus, theoretically, we only need to subtract a properly designed SFT loss for online RLHF in terms of regret minimization. For practical situations where one might still need to handle the overoptimization issue after online data collection, we conjecture that the optimal approach is to use optimism during the online data collection stage (subtract a properly designed SFT loss) and to apply pessimism after all the data have been collected (add the SFT loss, as in RPO). Still, online RLHF is beyond the scope of this paper. We leave this interesting question of addressing overoptimization in online RLHF to future work.
**Q3: About the computational complexity and the implementations of the SFT loss gradient**
**A3:** According to the paragraph **Practical implementation** in Section 6, RPO adds an additional SFT loss (the log probability of the chosen labels in the preference dataset) to the original DPO loss, where the SFT loss is an intermediate quantity in the calculation of the DPO loss. Hence, our proposed method does not incur any additional computational overhead compared with vanilla DPO. As for the justification of the approximation of the SFT loss, we use the linearity of expectation to show that the population form of the RPO loss $\mathcal{L}_ {\text{RPO}}$ can be rewritten as
$$
\mathcal{L}_ {\text{RPO}}(\theta) = \mathbb{E}_ {(x,a_{\text{cho}},a_{\text{rej}})\sim \mu_ {\mathcal{D}}}\Bigl[-\log \pi_\theta(a_ {\text{cho}}\mid x) -\log\sigma \bigl(\hat r_ \theta(x,a_ {\text{cho}}) - \hat r_ \theta(x,a_ {\text{rej}})\bigr)\Bigr],
$$
where we denote by $\hat r_\theta(x,a)=\beta\cdot\log\bigl(\pi_\theta(a\mid x)/\pi_{\text{ref}}(a\mid x)\bigr)$ the implicit reward and by $\mu_{\mathcal{D}}$ the population distribution of the preference dataset $\mathcal{D}$. This suggests that we only need to sample a mini-batch $\mathcal{D}_ {\text{mini}}$ from $\mu_{\mathcal{D}}$ (or, equivalently, from $\mathcal{D}$) and calculate the gradient w.r.t. $\theta$ of
$$
\mathbb{E}_ {(x,a_{\text{cho}},a_{\text{rej}})\sim {\mathcal{D}_ {\text{mini}}}}\Bigl[-\log \pi_\theta(a_ {\text{cho}}\mid x) -\log\sigma \bigl(\hat r_ \theta(x,a_ {\text{cho}}) - \hat r_ \theta(x,a_ {\text{rej}})\bigr)\Bigr],
$$
which approximates the gradient $\nabla_\theta\mathcal{L}_ {\text{RPO}}(\theta)$.
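As a purely illustrative numerical sketch of this mini-batch objective, the snippet below evaluates a per-example RPO loss on a toy batch. The field names (`logp_cho`, `ref_logp_cho`, etc.) and the value of $\beta$ are our own assumptions; the preference term uses the standard DPO form $-\log\sigma(\hat r_{\text{cho}} - \hat r_{\text{rej}})$ with the implicit reward $\hat r = \beta\log(\pi_\theta/\pi_{\text{ref}})$.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rpo_minibatch_loss(batch, beta=0.1):
    """Toy RPO mini-batch loss: SFT term plus DPO preference term.

    Each example stores log-probs of the chosen/rejected responses under
    the current policy and under the reference policy (names illustrative).
    """
    total = 0.0
    for ex in batch:
        r_cho = beta * (ex["logp_cho"] - ex["ref_logp_cho"])  # implicit reward, chosen
        r_rej = beta * (ex["logp_rej"] - ex["ref_logp_rej"])  # implicit reward, rejected
        dpo_term = -math.log(sigmoid(r_cho - r_rej))          # DPO preference loss
        sft_term = -ex["logp_cho"]                            # additional SFT loss of RPO
        total += sft_term + dpo_term
    return total / len(batch)
```

Averaging this quantity over a sampled mini-batch and differentiating w.r.t. the policy parameters gives a stochastic estimate of the population gradient.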
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my question, and I choose to maintain my score.
---
Reply to Comment 1.1.1:
Title: Reply to the Official Comment by Reviewer JyGr
Comment: Dear Reviewer JyGr,
Thank you for your review and support. We will incorporate your valuable suggestions into our paper as we revise it based on the feedback from all reviewers. Your comments greatly assist us in strengthening the overall quality of our work.
Best regards, Authors | null | null | Rebuttal 1:
Rebuttal: **General Response:**
We thank all the reviewers for their time and effort in reviewing our paper, and we appreciate your support of our work! We have responded to each of you in detail.
Here we provide a general response to **Q3** of **Reviewer 9VCJ** and **Q1** of **Reviewer 1DUS** about more experimental evaluations of the proposed algorithm RPO. To this end, we use extra benchmarks on the math, reasoning, and coding tasks to showcase the effectiveness of our method. Please refer to the PDF document attached to this response for detailed results. Thank you!
Pdf: /pdf/d29799dd7f64193fcb2db39406d5b089051ebeac.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Weight Diffusion for Future: Learn to Generalize in Non-Stationary Environments | Accept (poster) | Summary: This manuscript aims to tackle the evolving domain generalization (EDG) issue, namely the domain gradually evolves in an underlying continuous structure. The paper introduces the idea of Weight Diffusion (W-Diff), a conditional diffusion model in the parameter space to learn the evolving pattern of classifiers. Combining such types of classifier with weight ensembling and a domain-shared feature space allows robust prediction. The effectiveness of the proposed method is examined on two text classification datasets, three image classification datasets, and two multi-variate classification datasets.
Strengths: * The evolving domain generalization is interesting yet important to the research community.
* The manuscript is well-structured. It explains its methodology design clearly and intuitively.
* The manuscript introduces the idea of model weight generation through the diffusion model to the area of evolving domain generalization. The concept itself is interesting. However, the authors still need to discuss the related work carefully.
* The proposed method is examined on two text classification datasets, three image classification datasets, and two multi-variate classification datasets.
Weaknesses: * Unclear novelty. The idea of using a diffusion model to generate model weights/heads appeared in [1, 2]. However, the manuscript did not cite and discuss these two papers (and maybe their follow-up works), making the exact contribution unclear. From the reviewer's perspective, idea 1 of using the conditional diffusion model to model parameter evolution patterns is interesting and intuitive; idea 2 of learning domain-shared feature encoder is standard. Combining idea 1 and idea 2 and applying it to the area of evolving domain generalization is ok, but still needs a careful discussion and ablation study.
* The performance gain over different datasets looks marginal.
* The manuscript needs more ablation studies. E.g., What if instead of generating the classifiers on the fly for the unseen domain, we just leverage the past classifiers for the ensemble prediction, or test-time adaptive classifier ensembling as in [3]?
### Reference
[1] Learning to Learn with Generative Models of Neural Network Checkpoints. https://arxiv.org/abs/2209.12892
[2] Diffusion-based Neural Network Weights Generation. https://arxiv.org/abs/2402.18153
[3] Adaptive Test-Time Personalization for Federated Learning. https://arxiv.org/abs/2310.18816
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Sincerely thanks for your efforts in reviewing the paper. Below, we respond to your questions in detail.
> **Q1: Discussion with the two related papers [1, 2].**
Thanks. Our method differs from [1, 2] as follows:
**Focused problem**: G.pt [1] focuses on supervised learning and reinforcement learning, while our W-Diff addresses the evolving domain generalization (EDG) in the domain-incremental setting. In EDG, distribution shifts often hinder models that are trained with full supervision on in-distribution (ID) data from generalizing to out-of-distribution data. Hence, our goal is to address the distribution shift, instead of generating diverse high-performance parameters for ID data.
D2NWG [2] focuses on transfer learning to provide better model parameter initialization for faster fine-tuning convergence on new datasets. However, target domains are unlabeled in EDG, preventing supervised fine-tuning. This makes D2NWG unsuitable for domain generalization. In contrast, our method generates model parameters **applicable directly to the unlabeled target domain without fine-tuning**.
**Condition design for diffusion model**: G.pt [1] collects the loss/error/return of task model checkpoints during training as the condition for the diffusion model. It is designed for a single dataset to which the training data belongs, thus struggling with distribution shifts.
D2NWG [2] uses CLIP to extract features for each sample and Set Transformer to generate dataset encoding from these features. The dataset encoding is used as the condition for diffusion model, while training set samples of a new dataset are required to obtain the dataset encoding. This is infeasible in unlabeled target domains.
Different from [1, 2], we use the classifier weights of a historical domain (referred to as the reference point) and the prototypes of the current domain as the condition of the diffusion model, which generates the difference in classifier weights between the reference point and the anchor point (i.e., the classifier weights of the current domain). The considerations behind this design are: 1) the difference between the reference point and the anchor point captures the evolution of parameters from the historical to the current domain, which **helps model the crucial evolving pattern across domains in EDG**; 2) the reference point provides initialization-like information, while the current prototypes offer information about the desired decision boundary, **helping to explore the relationship between the generated parameters and the given domain**.
Moreover, we compare W-Diff with G.pt [1] in the following datasets. G.pt shows worse generalization on target domain, due to the limitations discussed above.
----Error rate comparison (%)---
|Method|2-Moons|ONP|
|-|-|-|
|G.pt|4.5$\pm$1.2|35.1$\pm$0.8|
|**W-Diff**|**1.5$\pm$1.0**|**32.9$\pm$0.5**|
> **Q2: The idea is ok, but still needs a careful discussion and ablation study.**
Thanks. We have provided the results of ablation study in **Table 4** of the PDF file. Concretely, we explore the following variants:
* Variant A ablates the consistency loss $\mathcal{L}^t_{con}$ for learning domain-shared feature encoder.
The performance drop of Variant A suggests that learning domain-invariant feature representations is necessary for EDG in the domain-incremental setting. Otherwise, the feature encoder could easily overfit to the current domain, preventing the task model from generalizing.
* Variant B ablates the conditional diffusion model and directly uses the incrementally trained classifier for inference.
* Variant C directly uses the historical classifier weights in the reference point queue $Q_r$ to construct the average weight ensemble $\bar{\mathbf{W}}^{test}$ for inference: $\bar{\mathbf{W}}^{test}=\frac{1}{|Q_r|}\sum_{\ddot{\mathbf{W}}^{t^{\prime}}\in Q_r}\ddot{\mathbf{W}}^{t^{\prime}}$.
The inferior results of variant B and C indicate that W-Diff benefits from generating meaningful and customized classifier weights via controlling the condition of diffusion model.
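For reference, the average weight ensemble used by Variant C (and by the inference-time ensemble of generated classifiers) is simply an element-wise mean over classifier weight matrices. A minimal sketch, with plain nested lists standing in for the weight matrices:

```python
def average_weight_ensemble(weight_list):
    """Element-wise average of equally shaped classifier weight matrices."""
    n = len(weight_list)
    rows, cols = len(weight_list[0]), len(weight_list[0][0])
    return [[sum(W[i][j] for W in weight_list) / n for j in range(cols)]
            for i in range(rows)]
```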
> **Q3: The performance gain over different datasets looks marginal.**
Thanks. To comprehensively evaluate the effectiveness of W-Diff, we have conducted a significance test (t-test) on different datasets. Concretely, a significance level of 0.05 is applied: if the p-value is less than 0.05, the accuracy difference between EvoS [8] and W-Diff is statistically significant. For clarity, the -log(p) of each p-value is shown as a red line. In **Fig. 1(b)** of the PDF file, the majority of the -log(p) values for the performance comparison between EvoS and W-Diff are larger than -log(0.05), which means W-Diff is statistically superior to EvoS on most datasets.
Besides, we also extend W-Diff to regression tasks, where the regression datasets and results are from DRAIN [4]. In **Table 3** of the PDF file, W-Diff still works well. Overall, given the generalization performance on regression and text/image/multi-variate classification datasets, W-Diff is highly versatile.
> **Q4: More ablation studies, e.g., just leveraging the past classifiers for the ensemble prediction, or test-time adaptive classifier ensemble as in [3]?**
Thanks. As you suggested, we have added more ablation study results in **Table 1** of the PDF file.
* Variant C in the reply to **Q2** denotes leveraging the past classifiers for the ensemble prediction.
* Variant D denotes using the APT-batch manner in [3], which first conducts unsupervised adaptation of classifier by back-propagating the gradient of entropy loss to source-trained classifier and then makes independent predictions on each batch.
* Variant E denotes using the APT-online manner in [3], which first updates source-trained classifier with the cumulative moving average of update directions from the first target data batch to current batch, and then makes predictions on current batch.
In **Table 1** of the PDF file, W-Diff outperforms these variants, because they ignore the evolving pattern which is crucial for EDG.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: The reviewer thanks the authors for the detailed responses and additional results. The reviewer decided to raise the score to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for upgrading score
Comment: We really appreciate your feedback and are pleased that our rebuttal has addressed your concerns, leading to a raised rating! In future revisions, we will update the manuscript according to your suggestions. | Summary: The paper proposes a novel method called Weight Diffusion (W-Diff), which employs a conditional diffusion model in the parameter space to learn the evolving patterns of classifiers during domain-incremental training. During inference, the proposed method uses an ensemble of classifiers tailored to the target domain for robust predictions.
Strengths: 1. This work is pioneering in applying diffusion models for generating parameters in a practical context.
2. The paper is well-written with detailed descriptions of each component of the algorithm.
3. The paper demonstrates the effectiveness of W-Diff through comprehensive experiments on both synthetic and real-world datasets.
4. The use of an ensemble of classifiers enhances prediction robustness.
Weaknesses: 1. The experiments are conducted on relatively small networks, and the diffusion model is used solely to generate the classifier. It remains unclear whether this method can scale to larger networks.
2. The paper should include a comparison with Variational Autoencoders (VAEs) or other generative models. Given that diffusion models are more complex and challenging to train, it is important to demonstrate the necessity of using them.
3. The paper should include a comparison of training time with state-of-the-art algorithms.
4. It would be good to include a notation table in the appendix, as there are numerous notations used throughout the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation of the paper is that it only considers generating the classifier. It would be interesting to explore whether it can generate the entire network.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing the paper and the constructive comments. Below, we have tried our best to address your concerns in detail.
> **Q1: Can this method scale to larger networks and generate the entire network?**
Thanks. Firstly, we have tried larger networks on the fMoW dataset by replacing the DenseNet-121 with DenseNet-161, DenseNet-169, DenseNet-201, respectively. The results are provided in **Table 2** of the PDF file, where further performance improvements are obtained.
Secondly, our method can be extended to generate more parameters, but some modifications are needed to handle the different shapes of parameters across layers and to preserve the training efficiency of diffusion models when the number of parameters to be generated is large. One possible solution to the shape difference is to flatten the parameters of each layer and concatenate all the flattened parameters into a single vector; the generated parameter vector is finally reshaped back into the original per-layer shapes. As for training efficiency, we can additionally train a VAE to encode the high-dimensional parameter vector into a low-dimensional embedding and decode the embedding to reconstruct the parameter vector. The diffusion model is then trained in the low-dimensional embedding space, and the VAE decoder is used at inference time to recover parameters from the embedding generated by the diffusion model. As stated in the Limitations section, we leave this for future work due to time constraints.
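The flatten-and-concatenate idea can be sketched as follows; the helper names and the nested-list representation of layer weights are illustrative assumptions, not the paper's implementation:

```python
def flatten_params(layers):
    """Flatten differently shaped layer matrices into one vector, recording shapes."""
    shapes = [(len(m), len(m[0])) for m in layers]
    vec = [v for m in layers for row in m for v in row]
    return vec, shapes

def unflatten_params(vec, shapes):
    """Reshape a flat (possibly generated) parameter vector back into per-layer matrices."""
    layers, k = [], 0
    for r, c in shapes:
        layers.append([vec[k + i * c : k + (i + 1) * c] for i in range(r)])
        k += r * c
    return layers
```

A diffusion model would operate on `vec` (or on a VAE embedding of it), and `unflatten_params` would restore the generated vector to usable layer weights.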
Thirdly, as mentioned above, generating the entire network is possible, but it is not a good choice for evolving domain generalization (EDG).
1) The massive parameters of the entire network require a large number of sequential source domains to accurately model the evolving pattern of the entire network.
2) Not all parameters are unshared. Shallow layers of the network usually extract general knowledge that can be shared, while deep layers are specific to tasks or datasets [5].
3) Previous EDG work [7] has provided theoretical evidence, showing that solely learning dynamic features is insufficient. [6] and [7] both use a static variational encoding network as the feature encoder to extract invariant features and train a domain-adaptive classifier. Hence, based on the experience and conclusion of previous works, we choose to only generate the parameters of task head, e.g., the classifier for classification tasks.
4) Moreover, in the paper, we compare our W-Diff with DRAIN [4] which leverages LSTM to learn the evolving pattern of the entire network. Experimentally, on 2-Moons and ONP datasets, the error rate of target domain is **3.2% (DRAIN) vs. 1.5% (W-Diff), 38.3% (DRAIN) vs. 32.9% (W-Diff)**. These results partially show that generating entire network is less effective for EDG.
> **Q2: Comparison with VAEs.**
Thanks for your advice. We have added the results obtained when replacing the diffusion model with a VAE to generate classifier weights, conditioned on the reference point and prototypes. Please refer to **variant F** in **Table 4** of the PDF file. The better generalization performance when using the diffusion model to learn the evolving pattern of classifiers shows its superiority in modeling complex distributions and generating high-quality data, which has also been demonstrated by many diffusion-based generation works. Besides, to reduce the difficulty of training diffusion models, we only generate the parameters of the classifier. The number of parameters (MB) of the conditional diffusion model is given in **Table 6** of the PDF file, from which we see that the diffusion model is small and easy to train.
> **Q3: Comparison of training time with SOTA method.**
Thanks. The training time complexity of our method mainly comes from the diffusion model. For simplicity, we take a U-Net with all convolutional layers as an example. Assuming that there are $L$ convolutional layers, the size of the feature map in the $i$-th layer is $H_i \times W_i \times C_i$, and the kernel size is $k_i\times k_i$, the time complexity of one forward pass is $\mathcal{O}(\sum_{i=1}^L H_i \times W_i \times C_i \times C_{i-1} \times k_i^2)$. Let $S$ denote the total number of time steps in the diffusion model. Then the time complexity of training the diffusion model for $I$ iterations can be approximated as $\mathcal{O}(I \times S \times \sum_{i=1}^L H_i\times W_i \times C_i\times C_{i-1}\times k_i^2)$.
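The complexity expression above can be turned directly into a rough multiply-accumulate estimator; the layer specification in the test is made up purely for illustration:

```python
def unet_train_macs(layer_specs, S, I):
    """Approximate training cost: I * S * sum_i(H_i * W_i * C_i * C_{i-1} * k_i^2).

    layer_specs: list of (H, W, C_in, C_out, k) tuples, one per conv layer,
    where C_in plays the role of C_{i-1} and C_out the role of C_i.
    """
    per_pass = sum(H * W * C_out * C_in * k * k
                   for (H, W, C_in, C_out, k) in layer_specs)
    return I * S * per_pass
```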
Moreover, in **Table 5** of the PDF file, we compare the training time and GPU memory of our W-Diff, DRAIN [4], EvoS [8] and GI [11] on the RMNIST and Huffpost datasets. We acknowledge that our method has no significant advantage in terms of training time, due to the diffusion model. But this is not a limitation unique to our approach and most methods based on diffusion models have this limitation. As part of future work, we will investigate and try to address the limitation to further enhance the training efficiency.
In addition, it is worth mentioning that GI and DRAIN require huge computational resources during the training process, when they are applied on relatively large networks. Specifically, DRAIN needs to generate the entire network parameters and the fine-tuning stage of GI requires second-order gradients. On the Huffpost dataset with the backbone of DistilBERT-base and a batch size of 64, GI and DRAIN encounter the issue of **GPU memory explosion**. By contrast, our method utilizes diffusion model to generate only classifier weights, and as shown in **Table 6** of the PDF file, the diffusion model is small to train, without the explosion of GPU memory when using the same batch size of 64. Overall, the main contribution of our work is providing a new perspective to address EDG in the domain-incremental setting via delicately tailoring the conditional diffusion model.
> **Q4: Include a notation table in the appendix.**
Thanks for your advice. We will add a notation table in the revision to ease the burden of readers.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed reply. My concerns are addressed and I will keep my score towards acceptance.
---
Reply to Comment 1.1.1:
Title: Thanks for positive feedback
Comment: Thank you so much for your positive feedback and dedicated time to review our paper! We are glad to know that our rebuttal and new experiments have addressed your concerns. | Summary: This paper presents Weight Diffusion (W-Diff), a framework for domain generalization in non-stationary environments. W-Diff leverages a conditional diffusion model in the parameter space to learn the evolving pattern of classifiers during domain-incremental training. Experiments on synthetic and real-world datasets demonstrate its superior generalization performance on unseen future domains
Strengths: **S1.**
This paper introduces a novel approach for capturing evolving patterns at the parameter level using diffusion models.
**S2.**
The proposed method demonstrates good performance in generalizing to unseen domains on diverse datasets.
**S3.**
This paper addresses the practical challenge of sequentially arriving non-static source domains, mimicking real-world scenarios.
Weaknesses: **W1.**
The paper does not sufficiently justify why a diffusion model is chosen for the domain generalization (DG) problem. While diffusion models have shown excellent performance in generation tasks, the specific advantages they offer for DG are not clearly articulated.
**W2.**
The computational complexity of the diffusion model training might be a barrier for very large datasets or real-time applications. A thorough analyses on the computational complexity espeically for training is needed.
**W3.**
The evaluation is limited to classification tasks; applicability to other types of tasks, such as regression, remains unexplored.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to my weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors discussed some limitations of this work in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your efforts in reviewing the paper as well as your constructive comments. Below, we do our utmost to address your concerns.
> **Q1: The specific advantages that diffusion models offer for DG.**
Thanks for your comment. Firstly, instead of learning a deterministic classifier like previous EDG methods [4, 8], we treat the classifier as a distribution, which helps improve robustness and reduce miscalibration. Yet, the prior knowledge of the distribution type is unknown, and we only have observed data (i.e., saved classifier checkpoints) from this unknown distribution. Fortunately, diffusion models are quite powerful at modeling complicated, unknown distributions from observed data. Moreover, we have also tried other generative models, e.g., a VAE, to generate the classifier weights. The results on the RMNIST dataset are shown in the following table. The generalization performance is worse when using the VAE, which demonstrates the superiority of the diffusion model in modeling complicated classifier distributions.
-----Accuracy (%) on RMNIST (K = 3)-----
|method|generative model|$\mathcal{D}^{T+1}$|OOD avg.|OOD worst|
|:---:|:---:|:---:|:---:|:---:|
|W-Diff|VAE| 98.66|93.83|87.16|
|W-Diff|diffusion model|**98.70**|**94.12**|**87.36**|
Secondly, in the paper, we focus on the problem of evolving domain generalization (EDG), where the domain gradually evolves over time in an underlying pattern. In addition to invariant feature learning, it is also critical to excavate the underlying evolving pattern across domains to better predict the model status on future domains. Inspired by the fact that conditional diffusion models excel at generating specific images given additional information, we carefully design the condition, which includes both the classifier weights of a historical domain and the prototypes of the current domain, and train the conditional diffusion model to generate the discrepancy in classifier weights between the historical domain and the current domain. The discrepancy represents the evolution of the classifier across domains. In this way, we convey information about how the classifier evolves from past to present to the conditional diffusion model.
Thirdly, during inference, we can inject the target data information through the prototypes in the condition to generate target-customized classifiers. Besides, since the diffusion model is stochastic in nature, multiple noise samplings can generate different classifiers, forming an ensemble, which leads to better robustness for generalizing on new domains.
> **Q2: The computational complexity of the diffusion model training.**
Thanks for your advice. For simplicity, we take a U-Net with all convolutional layers as an example. Assuming that there are $L$ convolutional layers, the size of the feature map in the $i$-th layer is $H_i \times W_i \times C_i$, and the kernel size is $k_i\times k_i$, the time complexity of one forward pass is $\mathcal{O}(\sum_{i=1}^L H_i \times W_i \times C_i \times C_{i-1} \times k_i^2)$. Let $S$ denote the total number of time steps in the diffusion model. Then the time complexity of training the diffusion model for $I$ iterations can be approximated as $\mathcal{O}(I \times S \times \sum_{i=1}^L H_i \times W_i \times C_i \times C_{i-1} \times k_i^2)$.
Moreover, in **Table 5** of the PDF file, we compare the training time and GPU memory of our W-Diff, DRAIN [4], EvoS [8] and GI [11] on the RMNIST and Huffpost datasets. We acknowledge that our method has no significant advantage in terms of training time, due to the diffusion model. But this is not a limitation unique to our approach and most methods based on diffusion models have this limitation. As part of future work, we will investigate and try to address the limitation to further enhance the training efficiency.
In addition, it is worth mentioning that GI and DRAIN require huge computational resources during the training process, when they are applied on relatively large networks. Specifically, DRAIN needs to generate the entire network parameters and the fine-tuning stage of GI requires second-order gradients. On the Huffpost dataset with the backbone of DistilBERT-base and a batch size of 64, GI and DRAIN encounter the issue of **GPU memory explosion**. By contrast, our method utilizes diffusion model to generate only classifier weights, and as shown in the following table, the diffusion model is small to train, without the explosion of GPU memory when using the same batch size of 64. Overall, the main contribution of our work is providing a new perspective to address EDG in the domain-incremental setting via delicately tailoring the conditional diffusion model.
-----------------Number of parameters (MB) of conditional diffusion model $\mathcal{E}_{\boldsymbol{\theta}}$----------------
|Yearbook|RMNIST|fMoW|Huffpost|Arxiv|2-Moons|ONP|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|2.31|41.6|27.23|41.62|20.01|10.52|15.51|
> **Q3: The evaluation is limited to classification tasks; applicability to other types of tasks, such as regression, remains unexplored.**
Thanks for your advice. We have extended our method to the two regression datasets (i.e., House and Appliance) in DRAIN [4]. For regression tasks, the prototype is calculated at the domain level, i.e., the average of features in a single domain. And the cross-entropy loss is replaced with the mean squared error (MSE) loss and the Kullback-Leibler divergence in the consistency loss is replaced with MSE. Specifically, the results are provided in the following table, where our method still achieves better generalization performance.
-----------------Mean absolute error (MAE) for regression tasks----------------
| Method | House | Appliance |
|:---: | :---: | :---:|
|Offline |11.0$\pm$0.36|10.2$\pm$1.1|
|IncFinetune |9.7$\pm$0.01|8.9$\pm$0.5|
|CIDA |9.7$\pm$0.06|8.7$\pm$0.2|
|GI |9.6$\pm$0.02|8.2$\pm$0.6 |
|DRAIN |9.3$\pm$0.14|6.4$\pm$0.4 |
|**W-Diff** |**9.1$\pm$0.15** |**4.9$\pm$0.3**|
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's responses, which addressed most of my previous concerns, though the computational cost associated with the use of the diffusion model remains a consideration. I will maintain my current score of 5 and take the rebuttal into account in the next phase of discussion.
---
Reply to Comment 1.1.1:
Title: Further response to concerns about the computational cost
Comment: Thanks for your feedback and we are happy to know that our rebuttal has addressed most of your concerns. As for the computational cost during the training process, we would like to provide further clarifications.
**Firstly**, the problem we focus on is the evolving domain generalization (EDG) in the domain-incremental setting, which is an under-explored area, compared with previous EDG. The consideration of domain-incremental setting mimics the dynamics of training domains in the real-world, which is more practical yet challenging. The resulting benefit is that once a new training domain is available, we can incrementally train the model only on the new domain, instead of training from scratch with all old domains and the new domain using previous non-incremental EDG methods. The latter would be inefficient when new domains continually emerge with the passage of time.
**Secondly**, the training time reported in our previous response to **Q2** is the total training time when the number of source domains is $T$. In the following table, we further compare the average training time per domain and the performance improvement over the baseline GI. Though our W-Diff indeed requires more training time per domain, we believe the increase in training time is acceptable, considering the significant performance improvement over GI. Besides, if a new source domain of similar dataset size arrives, W-Diff requires roughly the average training time to train the model on the new domain.
-------------------------------Computational cost and performance comparison on **RMNIST** dataset (T=6, K=3)-------------------------------
| Method | Total training time (h) | Average training time per domain (h) | GPU memory (GB) | $\mathcal{D}^{T+1}$ accuracy (%) | OOD avg. accuracy (%) | OOD worst accuracy (%) |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| GI | 2.40 | 0.40 | 1.9 | 97.78 | 91.00 | 82.46 |
| W-Diff | 4.61 | 0.77 | 4.0 | 98.70 | 94.12 | 87.36 |
| Increment $\Delta$ of W-Diff over GI| 2.21 | 0.37 | 2.1 | **0.92** | **3.12** | **4.90** |
-------------------------------Computational cost and performance comparison on **Huffpost** dataset (T=4, K=3)-------------------------------
| Method | Total training time (h) | Average training time per domain (h)| GPU memory (GB) | $\mathcal{D}^{T+1}$ accuracy (%) | OOD avg. accuracy (%) | OOD worst accuracy (%) |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| GI$^\clubsuit$ | 10.04 | 2.51 | 17.3 | 64.96 | 63.11 | 60.15 |
| W-Diff | 12.61 | 3.15 | 15.6 | 73.91 | 72.29 | 70.40 |
| Increment $\Delta$ of W-Diff over GI | 2.57 | 0.64 | **-1.7** | **8.95** | **9.18** | **10.25** |
*(GI$^\clubsuit$ denotes that a much smaller batch size is used due to **GPU memory explosion** when running the method on the Huffpost dataset with the DistilBERT-base backbone.)*
**Thirdly**, it is noteworthy that GI requires a pre-training stage on all domains and then sequentially finetunes the pre-trained model on each domain. That is, if a new source domain comes (i.e., $T$ changes), it needs to re-execute the pre-training stage from scratch. By contrast, our W-Diff incrementally trains the previously trained model on the new source domain, rather than training from the first source domain every time. Indeed, for a fixed $T$, our W-Diff is at a disadvantage in terms of training time. But in practice, **in the long run, our method is more advantageous in training time**. To verify this, we record the total training time of GI and W-Diff on the RMNIST dataset as $T$ increases through the sequence $2, 3, 4, 5, 6$. Concretely, the result is **`6.74h (GI) vs 4.61h (W-Diff)`**. Here, the training time of W-Diff is the same as in the above table, because the training procedure of W-Diff already simulates the increase of $T$.
Certainly, further improving the training efficiency of W-Diff is a nice direction for future work. We hope that our response could mitigate your concerns about computational cost. Looking forward to your feedback. Thanks so much! | Summary: This paper deals with the problem of evolving domain generalization in non-stationary environments, where dynamically changing source domains arrive sequentially, but we only have access to data samples from the current domain (and not the past ones). The main idea is to learn a conditional diffusion model that predicts the evolving pattern of the classifiers as the source domains keep changing. In a nutshell, the conditional diffusion model learns how to go from past classifier weights (reference points) to the weights of the current classifier (anchor point), conditioned on prototypes of the current domain. One challenge concerns the shared feature encoder: if the encoder is updated using the latest source domain, then it will overfit to it. To avoid this, the authors suggest enforcing prediction consistency among multiple classifiers, so that all classifiers from past and present domains give the same prediction given the same feature representation. This implies that the past reference points do not become obsolete. During inference, the goal is to predict the classifier weights given the current domain prototypes. For this purpose, the framework uses the conditional diffusion model to cheaply generate a large number of classifiers, which it then averages for improved robustness. Extensive experiments on synthetic and real datasets show strong performance for the proposed framework.
Strengths: - Overall, the framework is interesting. The use of conditional diffusion models makes sense in the context of non-stationary adaptation, as such models can capture the dynamically changing source domains. Furthermore, it improves robustness because it is possible to generate multiple weight predictions and then take their average.
- The consistency constraint is a nice way to enforce a shared representation that remains valid for old classifiers, even as the source domain keeps changing. The use of reference points makes it possible to completely get rid of old data (except the data related to the current source domain), and is efficient. The diffusion models are able to capture even complex non-stationarities, which gives the model significant expressive power.
- The experimental study is quite extensive and shows strong performance across a variety of benchmarks for the proposed weight-diffusion framework.
Weaknesses: - In the inference phase, the authors assume they know the dataset (without the labels) for the future timesteps $\{T+1,\dots,T+K\}$. Of course, this cannot work when the datasets are not given. In such a case, the model must also be in a position to infer the future data distribution of the points $x_i$, in order to then estimate the prototypes $f_{test}$. The current framework does not seem able to deal with that. I think it would help if the authors explain what the limitations of their framework are, and why, for instance, it is very difficult to predict the future data distribution in non-stationary environments. The current weight-diffusion model assumes that we have access to future data but not their labels during inference, but this may not always be the case.
- I like the fact that the authors condition on the context. But it was not clear to me why the context only consists of the historical weights and the current prototypes. Why not also include the timestep difference between the reference point and the current timestep? For instance, assume we are given the context (reference point + prototype). If the reference point is from 5 timesteps ago, then the predicted weights might be different compared to the case where the reference point is from 10 timesteps ago, even if the reference point value is the same. If we are not given how far into the past the reference point lies, then I was thinking that predicting the current weights is harder. On the other hand, perhaps this is already taken into account implicitly in the framework, but this is not immediately clear.
- The point above also points to the limitation of the current framework, which is a lack of formal theory. It works very well experimentally, but some aspects are not very clear.
- There are numerous typos and bad grammar throughout the paper. The authors should do a very careful proofreading and fix all the errors.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Why is it not necessary to condition on the timestep difference between the reference point and the current anchor? Intuitively, one might expect that knowing how far into the past the reference point is could lead to better predictions. Did the authors try this?
- I was under the impression that U-Net still needs the parameters $\beta_s$ in Equations (3) and (4). If so, how did the authors set these hyperparameters? Were they fixed throughout training?
- I am not sure I can understand Figure 3a. $\tilde{W}^{7\mid 5}$ is concentrated in the upper left, whereas $W^7$ is at the very bottom. And, yet, both of them perform very well in Figure (3b). What does that exactly say about the computed weights? Maybe it means that the visualization does not really tell us anything about test performance?
- In the ablation study in Section 5.3, what is Variant B?
- Can the authors be more clear about the limitations of their framework? Could their framework be modified to also predict the future data distribution (e.g., how the $x_i$ will be distributed in the next timestep)?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your efforts in reviewing the paper. Below, we respond to your questions in detail.
> **Q1: Concerns about the assumption of dataset accessibility in the inference phase.**
Thanks. Firstly, we would like to make it clear that for discriminative tasks, test data (i.e., target data in our paper) must already be available when the model is used for testing. Thus, our method doesn't need to infer future data distributions: we can estimate the prototype matrix directly by feeding test data into the model to get their features and class probability predictions.
Secondly, previous evolving domain generalization (EDG) works [4, 6, 7, 11] are only evaluated on the next target domain. Following [8, 10], we evaluate on the next $K$ target domains, hoping models can also generalize well to target domains in the farther future. When $K=1$, this aligns with previous standard evaluations. Besides, the evaluation on each domain is conducted independently.
Finally, given that the whole data of a target domain may not come at once, we also offer a way for estimating the prototype matrix in a batch-data stream scenario, where the prototype matrix for the $j$-th target data batch is estimated using the cumulative moving average. In **Fig. 1(a)** of the PDF file, we provide the results on RMNIST and Huffpost when evaluating W-Diff in the batch-data stream scenario.
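As a sketch of that cumulative moving average (the function and variable names here are hypothetical, not from the paper), a prototype can be updated batch by batch without storing past data:

```python
import numpy as np

def update_prototype(proto, count, batch_feats):
    """Cumulative moving average of features over a batch-data stream.

    proto: running mean feature vector (d,); count: samples seen so far;
    batch_feats: (n, d) features of the newly arrived target-data batch.
    """
    n = batch_feats.shape[0]
    new_count = count + n
    # Equivalent to (count * proto + batch_feats.sum(0)) / new_count
    new_proto = proto + (batch_feats.sum(axis=0) - n * proto) / new_count
    return new_proto, new_count
```

Streaming over all batches this way yields exactly the prototype that would be computed from the full target set at once.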
> **Q2: Why not include timestamp difference into the condition?**
Thanks. In our method, timestamp difference information is implicitly considered, as domains evolve over time and arrive in chronological order during training. Besides, the difference between classifier weights of current and historical domains also contains implicit timestamp difference information. Moreover, due to different optimization starting points and stochastic gradient descent, it is unlikely that classifiers respectively trained for two domains are exactly the same. Hence, we do not explicitly include timestamp difference in the condition.
Nevertheless, as you suggested, we have also tried to explicitly include the timestamp difference $\Delta t$. Results on RMNIST are shown in **Table 1** of the PDF file, where no obvious improvement is obtained, possibly due to reasons above.
> **Q3: The framework works very well experimentally but lacks formal theory.**
Thanks. The gradient descent algorithm (GDA) is known to be theoretically supported, and we find the denoising process of diffusion models can be seen as a learnable GDA with momentum update and small noise perturbation. Let us consider the following gradient descent algorithm for optimizing the model with parameters $\Phi$ on dataset $D$ via minimizing loss $L$:
$\Phi_{i+1}=\Phi_i-\lambda \cdot \nabla_{\Phi}L(\Phi_i, D),\quad i=0,\ldots,T-1,$
where $\lambda$ controls the update step size.
Alternatively, we generate classifier weights via conditional diffusion model $\mathcal{E}_{\theta}$ with the following denoising process:
$W^t\_{s-1}=\frac{1}{\sqrt{\alpha_s}}\left(W^t\_s - \frac{\beta\_s}{\sqrt{1-{\bar{\alpha}}\_s}} \mathcal{E}\_\theta(W^t\_s,s,\ddot W^{t^{\prime}}, \mu^t)\right) + \sigma\_s \epsilon, \quad \epsilon \sim N(\mathbf 0,\mathbf I),$
where $s$ is diffusion step, $W^t_0$ is the overfitted classifier weights on the $t$-th domain $D^t$, $\mu^t$ is the prototype matrix of $D^t$, and $\ddot W^{t^{\prime}}$ is the saved overfitted classifier weights on historical domain $D^{t^{\prime}}, t^{\prime} < t$.
Then, we can reformulate the denoising process as
$W^t\_{s-1}=W^t\_s-\underbrace{\frac{\beta\_s}{\sqrt{\alpha\_s}\sqrt{1-{\bar{\alpha}}\_s}}}\_{\lambda} \mathcal{E}\_{\theta}(W^t\_s,s, \ddot W^{t^{\prime}}, \mu^t)+\underbrace{(\frac{1}{\sqrt{\alpha\_s}}-1)W^t\_s}\_{\text{momentum update}}+\underbrace{\sigma\_s \epsilon}\_{\text{noise perturbation}}.$
Compared with GDA, the conditional diffusion model can be viewed as learning a descent path of $\nabla_W L(W^t_s, D^t|\ddot W^{t^{\prime}}, \mu^t)$, so that conditioned on historical classifier weights and current prototype matrix, the diffusion model can directly generate classifier weights suitable for current domain.
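The algebraic equivalence of the two forms of the denoising step above can be checked numerically; here is a small sketch with arbitrary scalar schedule values (not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Weights, predicted noise, and fresh Gaussian noise (arbitrary vectors)
W, eps_pred, noise = rng.normal(size=(3, 8))
alpha, alpha_bar, beta, sigma = 0.98, 0.51, 0.02, 0.1

# Original denoising step
orig = (W - beta / np.sqrt(1 - alpha_bar) * eps_pred) / np.sqrt(alpha) + sigma * noise

# Reformulated step: gradient-like term + momentum-like term + noise perturbation
lam = beta / (np.sqrt(alpha) * np.sqrt(1 - alpha_bar))
reform = W - lam * eps_pred + (1 / np.sqrt(alpha) - 1) * W + sigma * noise

assert np.allclose(orig, reform)
```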
> **Q4: Typos and grammar errors.**
Thanks. We have proofread the paper carefully and corrected the errors.
> **Q5: Settings of $\beta_s$ in U-Net.**
Thanks. Eq. (3) is obtained by using $\alpha_s=1-\beta_s$. $\beta_s$ is set via the following code, and `betas` is fixed throughout training.
```python
import torch

# Variance schedule for the 1000 diffusion steps: interpolate linearly
# in sqrt space from sqrt(1e-4) to sqrt(2e-2), then square back.
betas = torch.linspace(1e-4 ** 0.5, 2e-2 ** 0.5, 1000) ** 2
```
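For reference, the quantities derived from this schedule via $\alpha_s=1-\beta_s$ can be reproduced in NumPy (a numerically identical sketch of the schedule; the cumulative product is the standard $\bar{\alpha}_s$ used in the denoising process):

```python
import numpy as np

# Same sqrt-space linear interpolation as the torch snippet above
betas = np.linspace(1e-4 ** 0.5, 2e-2 ** 0.5, 1000) ** 2
alphas = 1.0 - betas                 # alpha_s = 1 - beta_s
alpha_bars = np.cumprod(alphas)      # \bar{alpha}_s
```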
> **Q6: Explanation of Figure 3a.**
Thanks. What we want to convey is that our generated classifier weights are diverse and generally high-performing. $W^7$ is obtained by fine-tuning the classifier on domain $\mathcal{D}^7$ and may be trapped in a local optimum; the parameter space is too big to fully explore during fine-tuning, so this does not mean that only the area around $W^7$ is good. $\hat W^{7|5}$ generally performs well, meaning that our generated classifier weights are diverse and potentially cover better areas left unexplored by the fine-tuning process.
> **Q7: What is Variant B in ablation study?**
Thanks. Variant B ablates the conditional diffusion model and directly uses the incrementally trained classifier along with the learned domain-shared feature encoder for inference.
> **Q8: Could it be modified to predict future data distribution?**
Thanks. Predicting the future data distribution is possible by saving partial historical data to learn the evolving pattern at the data level, but it is not wise for discriminative tasks (e.g., classification), since it requires further fine-tuning on generated instances to generalize discriminative models to the target domain. Moreover, due to varied data types (e.g., image, text, and multivariate data in our experiments), data-level diffusion is cumbersome: it requires quite different architectures and lacks universality. By contrast, our weight diffusion is more general for discriminative tasks on different data types.
---
Rebuttal Comment 1.1:
Title: thank you for response
Comment: I thank the authors for their rebuttal. I will keep my original score for now (which anyway leans towards acceptance), but may revise my score upward in the next phase.
---
Reply to Comment 1.1.1:
Title: Thanks for positive feedback
Comment: We sincerely appreciate your valuable reviews and positive feedback. We hope the idea and approach presented in this work can inspire more studies in this direction. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their efforts in reviewing our paper and providing constructive suggestions. We are greatly encouraged that the reviewers find that
* our framework/idea is **interesting** (*Reviewer EUGW and BuMk*), **novel** (*Reviewer tBNz*), and **pioneering** in applying diffusion models for generating parameters in a **practical** context (*Reviewer Ve2z*);
* the considered problem of evolving domain generalization is **practical** (*Reviewer tBNz*), **interesting yet important** (*Reviewer BuMk*);
* our paper is **well-written** with detailed descriptions (*Reviewer Ve2z*) and **well-structured** (*Reviewer BuMk*);
* the experimental study is **extensive** (*Reviewer EUGW*) and **comprehensive** (*Reviewer Ve2z*);
* and the performance is **strong** across a variety of benchmarks (*Reviewer EUGW*), **good** in generalizing to unseen domains on **diverse** datasets (*Reviewer tBNz*).
As for the concerns and suggestions raised by each reviewer, we have done our best to address them thoroughly and have provided detailed responses to each of them. Below are references that we used in our replies to each reviewer. Additionally, a `one-page PDF` file that includes the relevant figures and tables referenced in our replies has been uploaded. Please refer to this PDF file for detailed results, if needed.
Refs:
[1] Learning to Learn with Generative Models of Neural Network Checkpoints. arXiv:2209.12892, 2022.
[2] Diffusion-based Neural Network Weights Generation. arXiv:2402.18153, 2024.
[3] Adaptive Test-Time Personalization for Federated Learning. In NeurIPS, 2023.
[4] Temporal domain generalization with drift-aware dynamic neural networks. In ICLR, 2023.
[5] How transferable are features in deep neural networks? In NeurIPS 2014.
[6] Generalizing to evolving domains with latent structure-aware sequential autoencoder. In ICML, 2022.
[7] Enhancing Evolving Domain Generalization through Dynamic Latent Representations. In AAAI, 2024.
[8] Evolving standardization for continual domain generalization over temporal drift. In NeurIPS, 2023.
[9] High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
[10] Wild-time: A benchmark of in-the-wild distribution shift over time. In NeurIPS, 2022.
[11] Training for the future: A simple gradient interpolation loss to generalize along time. In NeurIPS, 2021.
Pdf: /pdf/ecd857db7e2dd5a1cdbacc6f02f46e661d1261af.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
IR-CM: The Fast and General-purpose Image Restoration Method Based on Consistency Model | Accept (poster) | Summary: This paper presents IR-CM, a fast and universal image restoration method leveraging consistency models. The key innovations include a novel linear-nonlinear decoupling training strategy and an origin-estimated consistency function to enhance training effectiveness and inference performance. The proposed method is evaluated across several image restoration tasks, including deraining, deblurring, denoising, and low-light image enhancement, showing competitive results with minimal inference steps.
Strengths: 1. The proposed origin-estimated consistency function (OECF) provides a more stable initial state and reduces the solution space.
2. The origin-guided loss stabilizes training and prevents pattern collapse.
3. Extensive experiments on four restoration tasks are conducted to validate the effectiveness of the proposed method.
Weaknesses: 1. There is an error in Eq. (8), $v_t$ should be $\sqrt{v_t}$. Please have a check.
2. There is an inconsistency in Theorem 1, ``$c_{out}(\eta)=0, c_{out}(\eta)=1$''. Please have a check. I suggest the author carefully check the writing of this paper and avoid such errors or typos.
3. Even though this work contains rich experiments on multiple tasks, more recent diffusion-based methods should be considered as a comparison, such as DiffIR.
4. All the experiments for deraining and denoising are based on synthetic training and testing data. To better validate the practical value in real applications, more real-world datasets should be considered.
5. It would be more comprehensive to provide a comparison of model complexity and runtime.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It would be better to analyze the limitations or some failed cases of the proposed methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition and support of our work. We will carefully address the issues you raised and make thoughtful revisions.
For Weakness 1 & 2.
answer:
Thank you for your thorough reading and careful review. We will diligently address the issues you identified and meticulously re-examine our submission.
For Weakness 3.
answer:
Thank you for your useful suggestions. We will include and analyze the comparison results with DiffIR[1] in the revised version.
reference:
[1] "DiffIR: Efficient Diffusion Model for Image Restoration", 2023.
For Weakness 4.
answer:
Thank you for your meaningful suggestions. We will add comparison results on real datasets. Currently, you can refer to Table 2 and Figure 1 in the provided rebuttal PDF materials to see the additional comparison results on the Raindrop[2] dataset. The comparison results show that our method is also competitive for image restoration tasks in real-world scenarios. These experimental results will be included in the main text in the subsequent revised version.
reference:
[2] "Attentive Generative Adversarial Network for Raindrop Removal from a Single Image", 2018.
For Weakness 5.
answer:
Thank you again for your meaningful suggestions. We have added a comparison of inference speeds, which you can view in Table 1 of the provided rebuttal PDF materials. The experimental results show that our method is competitive not only in performance but also in inference speed, making it suitable for applications with high real-time requirements. Similarly, this comparison result will be included in the subsequent revised version.
For Limitation:
answer:
Thank you again for your meaningful suggestions. We will analyze the limitations of our method in the experimental and conclusion sections of the subsequent revised version.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for your response. The response has addressed my concerns. I keep my rating.
---
Reply to Comment 1.1.1:
Comment: We are glad that our response has resolved your issues, and we sincerely thank you for your recognition and support of our work. Your suggestions are crucial for improving the quality of our paper. If our paper is fortunate enough to be accepted, we will make careful revisions in the subsequent version. | Summary: This paper modifies the consistency model from image generation to image restoration with three modifications. Each module has robust theoretical proof and shows great performance improvement. The linear-nonlinear decoupling training strategy will motivate future work on using consistency models in image restoration. The additional results show the power of the paper in both inference speed and performance.
Strengths: 1. The modification of the consistency model from image generation to image restoration is terrific. Each module shows robust theoretical proof. 2. The extensive experiments show the great power of IR-CM in inference speed and performance.
Weaknesses: 1. The comparison with other methods seems to be unfair. For example, the result of Retinexformer in low-light enhancement is tested at full resolution with Y-channel calculation. It seems that all of the results of this paper are tested at 512×512 resolution, which I infer from the unexpectedly high SSIM metrics (Table 4 with comparable SSIM but significantly lower PSNR, Table 5 with over 30 PSNR and 0.95 SSIM, etc.). In my view, this is impossible. $\textbf{Please show the result at full resolution}$ (i.e., results at 512 resolution will change the result largely and are thus meaningless) with uniform metrics (like Y-channel calculation), rather than directly reporting the results of other papers.
2. The phenomenon of first-stage training is interesting: why does setting $x_0$ equal to $\mu$ benefit learning image restoration tasks, when it is then just a denoising task? I understand that this eases training, but that is still not enough to explain the phenomenon.
3. Mistaking the meaning of the setting. Universal image restoration means using one model trained on different degradation types at once, not a framework that can be used to train different tasks (the authors can freely search on arXiv).
4. Typos. Line 143 should be changed to "the training process", as in the inference process neither strategy uses a pre-trained model. Line 152: it should be $c_{out}(T)=1$, not $c_{out}(\eta)=1$.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of our work. We will carefully address your questions and resolve your concerns.
For Weakness 1.
Answer:
We apologize for any misunderstandings. Our method and the baseline methods were all tested at the original resolution of each dataset (e.g., 600×400 for the LOLv2 dataset); the result images shown in the paper were adjusted to square shapes purely for ease of presentation, so that as many comparative results as possible could be displayed. We will make this clear in the revised version.
Our metric settings are the same as those of all the mentioned baseline methods (Retinexformer, LLFlow, DiffLL, GlobalDiff, IR-SDE, etc.).
For the PSNR metric: we perform the calculation in the luminance space (Y channel) for low-light enhancement using the following formula:
$$
\mathrm{PSNR}=10\log_{10}\left(\frac{MAX^2}{MSE}\right).
$$
For the SSIM metric: it refers to the paper "Image Quality Assessment: From Error Visibility to Structural Similarity", 2004.
For the LPIPS metric: it refers to the paper "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric", 2018.
We will clarify this information in the subsequent revisions.
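To make the metric setting concrete, here is a minimal sketch of Y-channel PSNR following the formula above; the function name is ours, and the BT.601 luma coefficients are an assumption (the paper does not state which luma conversion is used):

```python
import numpy as np

def psnr_y(ref, out, max_val=255.0):
    """PSNR computed on the luminance (Y) channel.

    ref, out: uint8 RGB images of shape (H, W, 3).
    Assumes BT.601 luma coefficients for the RGB-to-Y conversion.
    """
    def to_y(img):
        img = img.astype(np.float64)
        return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    mse = np.mean((to_y(ref) - to_y(out)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For a uniform gray image offset by 10 gray levels, the Y-channel MSE is 100, giving PSNR = 10·log10(255²/100) ≈ 28.13 dB.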
We understand your skepticism regarding the low-light enhancement results, as they significantly surpass previous SOTA methods. The result is genuine, and we were similarly surprised by it. Note that with single-step sampling inference, the model's performance is not significantly different from the baseline methods. With two-step sampling inference, however, the model shows a considerable improvement, and the performance can be further enhanced with three-step sampling (see Appendix C, Table 8) in terms of PSNR and SSIM, although this may lead to a decrease in the LPIPS metric.
Upon analysis, we believe this remarkable performance is primarily due to the nonlinear fitting stage. At this stage, the model learns to map various intermediate degraded images to high-quality images. This is more meaningful for low-light enhancement than for other image restoration tasks, because each intermediate degraded image corresponds to a specific luminance scenario. The model thus becomes robust to a range of luminance conditions, leading to better performance.
For Weakness 2.
Answer:
Recall the proposed OECF formula (16) here:
$$
f_\phi(x_t,t) = c_{skip}(t)\mu + c_{skip}(t)\left[(x(0)-\mu)e^{-\bar{\theta}_t}\right] + c_{skip}(t)\sqrt{v_t} + c_{out}(t)\,\hat{x}_0(x,t;\phi)
$$
As described in Section 3.3, during the linear fitting stage, setting $x_0$ to the LQ image $\mu$ turns the forward degradation process into adding different levels of noise to the LQ images, and the model learns to fit the first term (linear part) and the third term (noise part) of the formula above. Intuitively, at this stage the model is not only learning to denoise but also to directly map LQ images to HQ images (linear fitting). Hence, once the linear fitting stage is completed, the model can already perform single-step sampling inference effectively (as discussed in Appendix C). However, at this point, as described in Section 3.3, the model cannot perform multi-step sampling inference, so we still need the nonlinear fitting stage, which enables the model to fit the second term of the formula above and thereby perform multi-step sampling inference.
If we train the model for the same number of training iterations directly using nonlinear fitting from the beginning, it achieves suboptimal performance (as shown in the ablation study, Table 6). We claim that the linear fitting phase allows the model to focus more on the mapping distribution where $t$ is large throughout the training process. In fact, for image restoration tasks, the mappings at larger $t$ values are more important because they correspond to more difficult restorations. If only nonlinear fitting is performed, the model attends evenly to all mappings within $[0,t]$. We believe that with a very large number of training iterations, the presence or absence of a linear fitting phase would not affect the model's final performance; however, this is impractical because training time is often limited. Therefore, within limited training iterations, our proposed two-stage training strategy achieves better performance than directly training with nonlinear fitting.
For Weakness 3.
answer: Thank you for pointing out our inappropriate wording; "universal" is indeed ambiguous. In our subsequent revisions, we will use "general-purpose" for clarification.
For Weakness 4.
answer: We greatly appreciate your careful reading, and we will thoroughly correct these typos.
---
Rebuttal Comment 1.1:
Comment: The response has addressed my concerns. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our response has addressed your concerns, and we appreciate your willingness to raise our score. The issues you highlighted have indeed helped improve the quality of our work. If our paper is fortunate enough to be accepted, we will incorporate these improvements in the subsequent version of our submission. | Summary: The paper introduces a method called IR-CM (Image Restoration Consistency Model) for fast and general-purpose image restoration.
The key idea is to achieve few-step or one-step inference by employing consistency training on specific mean-reverting stochastic differential equations (SDEs).
A novel linear-nonlinear decoupling training strategy is proposed to enhance training effectiveness and inference performance without relying on pre-trained models.
An origin-guided loss is introduced to avoid trivial solutions and stabilize model training.
Experiments conducted on tasks such as image deraining, denoising, deblurring, and low-light image enhancement demonstrate highly competitive results with one-step inference and state-of-the-art performance in low-light image enhancement with two-step inference.
Strengths: - The paper is well-structured and easy to follow.
- The method is designed for fast inference, which is crucial for real-time applications.
- It is a general-purpose model that can be applied to various image restoration tasks without domain-specific prior knowledge.
- The proposed linear-nonlinear decoupling strategy and origin-estimated consistency function enhance training and inference performance.
- IR-CM does not depend on any pre-trained checkpoints, making it an independent model.
- The method achieves competitive or even state-of-the-art results in various image restoration tasks with minimal inference steps.
Weaknesses: 1. While the method is designed to be universal, its performance across different types of image degradations and real-world scenarios might vary.
2. The introduction of a novel training strategy and loss function may complicate the training process compared to more straightforward models.
3. The method's performance may be sensitive to the choice of hyperparameters, such as the weight of the origin-guided loss.
4. Without sufficient data or proper regularization, the model might overfit to the training data, which could reduce its effectiveness on unseen data.
Technical Quality: 3
Clarity: 3
Questions for Authors: Besides the above Weaknesses, I still have some questions:
The one-step inference model cannot perform multi-step sampling, which might limit its application in certain scenarios where more refinement is needed.
How does the method handle different levels of image degradation beyond the tested conditions?
What is the computational complexity of the method, and how does it scale with the size of the image or video?
Can the method be extended to handle video restoration tasks, and if so, what would be the challenges?
How does the method compare to other state-of-the-art methods in terms of computational resources and efficiency?
Are there any specific use cases or applications where the method excels or falls short?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The societal impacts are discussed in Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition of our work. We will seriously consider your suggestions and address your concerns.
For Weakness 1.
answer:
Our method aims to construct a general-purpose architecture that does not require any prior information: by simply replacing the dataset, it can accomplish different image restoration and enhancement tasks. Our method is not intended to solve multiple image restoration problems with a single checkpoint. We apologize for the ambiguity caused by the term "universal"; a more appropriate term is "general-purpose," which we will use in subsequent revisions to eliminate any confusion.
For Weakness 2.
answer:
In fact, within the same number of training iterations, our proposed two-stage training strategy and OG loss make the training process more stable and efficient, without introducing obvious additional complexity. The proposed linear-nonlinear decoupling training strategy only introduces a simple assignment operation, i.e., replacing the variable $x_0$ from the low-quality image $\mu$ with the high-quality image. The OG loss merely adds a similarity comparison term to the original loss function. We believe that the introduction of these minimal operations is entirely worthwhile, as they significantly enhance the model's performance (see ablation study Table 6).
For Weakness 3.
answer:
Our hyperparameter choices are derived from extensive experimental testing to achieve optimal results that can adapt to various image restoration tasks. Specifically, we discuss the impact of the OG loss weight $\lambda_{OG}$ on model performance in Appendix B. Empirically, we found that the model performs best when $\lambda_{OG}$ is set to 0.8.
For Weakness 4.
answer:
To prevent overfitting, we employed regularization techniques with a decay rate set to 0.01, as described in the experimental setup section of Appendix A.
For Question 1.
answer:
Our method, like other SDE-based methods, is capable of multi-step sampling inference. Most of our optimal results are achieved with 2-step inference (see Tables 1-5 in Section 4.1). We also discuss the impact of sampling steps on model performance in Appendix C, where we conclude that 2-step sampling provides the best trade-off between performance and efficiency. The pseudocode for multi-step sampling inference is also included in Appendix C for your reference.
For Question 2.
answer:
We believe that our method has better generalization ability across different levels of degradation compared to other universal image restoration methods (e.g., MAXIM, Restormer, etc.), because our training process considers not only the end-to-end mapping from LQ images to HQ images but also the mappings from various intermediate states between LQ and HQ images to HQ images. This makes our method robust to different levels of degradation.
For Question 3.
answer:
Our backbone network is a conditional Unet, and its computational complexity is proportional to the number of convolutional layers and the size of the image. The complexity can be calculated as follows:
$$
O\left(S \cdot \sum_{i=0}^{L-1} \frac{H}{2^i} \cdot \frac{W}{2^i} \cdot C_{in,i} \cdot C_{out,i} \cdot K^2\right)
$$
where $L$ is the number of convolutional layers, $H, W$ are the height and width of the image, $C_{in,i}, C_{out,i}$ represent the numbers of input and output channels of the $i$-th convolutional layer respectively, $K$ is the size of the convolutional kernel, and $S$ denotes the number of sampling steps. From the above formula, it can be seen that our model's complexity is linear in the height and width of the input image, and it is also linear in the number of sampling steps. This is advantageous for applying our model to larger images. Additionally, by adjusting the number of sampling steps, a trade-off between real-time performance and model performance can be achieved. We reported the inference speed of our method at different image sizes, which can be found in Table 1 of the rebuttal PDF materials. Our method is highly competitive not only in terms of performance but also in inference speed. We will include this content in the revised version of the paper.
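As an illustration, the per-image cost implied by the complexity formula can be sketched in a few lines of Python; the layer widths below are hypothetical, chosen only to show the linear scaling in image area and in the number of sampling steps $S$:

```python
def unet_flops(H, W, channels, K=3, S=2):
    """Rough multiply-accumulate count for an L-level conv stack where
    level i operates at resolution (H/2^i, W/2^i), per the formula above."""
    total = 0
    for i, (c_in, c_out) in enumerate(channels):
        h, w = H // (2 ** i), W // (2 ** i)
        total += h * w * c_in * c_out * K ** 2
    return S * total

# Hypothetical channel widths for a 3-level encoder.
layers = [(3, 64), (64, 128), (128, 256)]

# Doubling H and W quadruples the cost; doubling S doubles it.
base = unet_flops(256, 256, layers)
assert unet_flops(512, 512, layers) == 4 * base
assert unet_flops(256, 256, layers, S=4) == 2 * base
```

This is only a back-of-the-envelope sketch of the scaling behavior, not the authors' profiling code; real cost also depends on attention blocks, normalization layers, and hardware utilization.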
For Question 4.
answer:
Yes, unlike other diffusion-based methods, our method is well-suited for real-time video processing tasks, because it does not require a large number of sampling steps and can achieve good inference results with just one or two sampling steps. However, some challenges remain: for larger video frames and applications requiring strict real-time performance, our method currently struggles to handle these scenarios effectively.
For Question 5.
answer:
We reported the inference speed of our method at different image sizes, which can be found in Table 1 of the rebuttal PDF materials; you can also refer to the provided supplementary materials. The results demonstrate the superiority of our method in terms of computational efficiency.
For Question 6.
answer:
Our method is suitable for most image-to-image translation tasks with paired training data, as it leverages the data-fitting characteristics of diffusion models and overcomes the issue of image generation randomness present in most diffusion-based methods. However, it still struggles with tasks where obtaining paired training data is challenging (e.g., glare removal). Our future research direction will focus on extending IR-CM to train with unpaired data.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response.
Thanks for your positive response to my question. You have done a good job of addressing my question and sparked my interest in using consistency models for fast restoration tasks. Therefore, I will increase my previous rating.
Best regards.
---
Reply to Comment 1.1.1:
Comment: We are glad to have addressed your concerns, and we sincerely appreciate your recognition and support of our work.
We are deeply grateful for your willingness to raise our score, which is very encouraging. If our work is fortunate enough to be accepted, we will further refine and improve it.
Best regards.
---
Rebuttal 2:
Comment: Dear Reviewer,
I noticed that the discussion phase is about to end, and I would greatly appreciate your feedback or any further questions you might have regarding my response. Your insights are invaluable to me. Thank you for your time and effort. | Summary: This paper proposes an SDE-based two-stage network to tackle image restoration tasks, including deraining, deblurring, denoising, and low-light image enhancement. Based on existing stochastic differential equation works, the proposed method focuses on training efficiency and inference speed. It proposes a linear and non-linear decoupling training strategy to enhance training effectiveness and to surpass consistency distillation at inference. Extensive experiments on multiple image degradation datasets show that
the proposed method is able to outperform baseline methods.
Strengths:
1. The proposed method is a fast universal method that is able to solve multiple image degradation problems, including deraining, denoising, low-light enhancement, etc.
2. Extensive experiments demonstrate that the proposed method outperforms the baseline dedicated methods on multiple benchmark datasets.
3. The proposed training schema may bring impact and benefits to the development of SDE study.
Weaknesses:
1. What does NFE in the quantitative comparison tables (Table 1,2,3,4) stand for?
2. As the paper is claiming the proposed method is a fast and universal method at inference time, it is suggested to add the inferencing time comparison with respect to baseline methods.
3. There is no conclusion in the submitted paper.
4. All the experiments on deraining topic use synthetic rain images. It is more useful and applicable to compare methods on real rain images.
Minors
1. All the equations lack punctuation.
Technical Quality: 2
Clarity: 3
Questions for Authors: The authors are suggested to address the issues raised in the weakness section during rebuttal period.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: There is no limitation mentioned in the paper or the supplementary material.
Flag For Ethics Review: ['Ethics review needed: Safety and security']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your recognition of our work, and we will carefully consider the improvement suggestions you have proposed.
For Weakness 1.
answer:
Thank you for your careful review. We indeed overlooked the introduction of NFE. The Number of Function Evaluations (NFE) refers to the number of function evaluations required to generate an image or data sample, i.e., the number of evaluations needed across the steps of the diffusion process. A lower NFE usually indicates a more efficient model, as it requires fewer computational resources. In diffusion-model-related work, it often refers to the number of sampling steps during model inference. This introduction will be added to the subsequent revision.
For Weakness 2.
answer:
Thank you for your useful suggestion. We will add the comparison of inference time between methods in the subsequent revision. The comparison results are shown in Table 1 of the rebuttal PDF material. The results demonstrate the superiority of our method in terms of computational efficiency.
For Weakness 3.
answer:
Thank you for pointing out our issue. Due to page limitations, we omitted the conclusion section in the previous submission version. In the subsequent revised version, we will add the following text to the conclusion section:
In this paper, we propose a fast, general-purpose image restoration method based on a Consistency Model and Stochastic Differential Equations (SDE), capable of inferring high-quality images in one or a few sampling steps. Specifically, we introduce a novel Consistency Function OECF, and theoretical and experimental results demonstrate its performance enhancement for Consistency Models. To stabilize the consistency training process and improve training efficiency, we propose the OG loss, which effectively avoids trivial solutions and accelerates model convergence. Furthermore, to further improve the model's performance within a limited number of training iterations, we introduce a novel linear-nonlinear decoupling training strategy for our proposed model, with theoretical analysis and experimental results proving its effectiveness. In comparative experiments on tasks such as image de-raining, denoising, and deblurring, our model shows highly competitive performance, even achieving state-of-the-art result in low-light image enhancement tasks with two-step sampling inference. Finally, extensive ablation experiments demonstrate the effectiveness of each component we propose.
For Weaknesses 4.
answer:
Thank you for your useful suggestion. We have supplemented the comparison experiment results on the Raindrop [1] dataset, which contains 1,119 pairs of real-world rainy/non-rainy images; they can be viewed in Table 2 and Figure 1 of the rebuttal PDF materials and will be added to the subsequent revised version. Experimental results indicate that our method is also highly competitive for rain removal tasks in real-world scenarios.
Reference:
[1] "Attentive Generative Adversarial Network for Raindrop Removal from a Single Image", 2018
---
Rebuttal 2:
Comment: Dear Reviewer,
I noticed that the discussion phase is about to end, and I would greatly appreciate your feedback or any further questions you might have regarding my response. Your insights are invaluable to me. Thank you for your time and effort. | Rebuttal 1:
Rebuttal: We would like to thank the PCs, SACs, ACs, and all anonymous reviewers for their patient responses and valuable suggestions. Your contributions have been invaluable in improving the quality of our work.
We have carefully responded to each reviewer's questions and suggestions. Additionally, we have submitted a rebuttal PDF containing supplementary experimental results, which reviewers can refer to at their convenience.
Pdf: /pdf/eb9fb9c3fae8cccd32e3a027a98ac5424d7ba188.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exclusively Penalized Q-learning for Offline Reinforcement Learning | Accept (spotlight) | Summary: This paper investigates an important problem in offline reinforcement learning (RL), say mitigating unnecessary conservatism in value function. The authors achieve this by selectively penalizing states that are prone to inducing estimation errors, i.e., $f$. The core idea is to train an exclusive penalty $P_\tau=f_\tau^{\pi,\hat{\beta}}(s) \left(\dfrac{\pi}{\hat{\beta}}-1\right)$, where $f_\tau^{\pi,\hat{\beta}}(s)$ is the penalty adaptation factor. $f_\tau^{\pi,\hat{\beta}}(s)$ assigns smaller weights on in-distribution transitions and a larger weight on out-of-distribution (OOD) transitions. Furthermore, the authors propose the *Prioritized Dataset* (PD) trick to reduce unnecessary bias. Based on PD, the authors derive the final optimization objective of their EPQ approach. The authors conduct experiments on numerous D4RL datasets and show that EPQ can outperform previous methods on many datasets.
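A minimal numeric sketch of the exclusive penalty as written above; the values of $f_\tau$, $\pi$, and $\hat{\beta}$ below are hypothetical, chosen only to illustrate how the penalty vanishes for in-distribution actions and grows for OOD-leaning ones:

```python
def exclusive_penalty(f_tau, pi, beta_hat):
    """P_tau = f_tau(s) * (pi(a|s)/beta_hat(a|s) - 1): near zero when the
    policy matches the empirical behavior policy, large when it does not."""
    return f_tau * (pi / beta_hat - 1.0)

# In-distribution action (pi ~ beta_hat): near-zero penalty.
assert abs(exclusive_penalty(0.1, 0.5, 0.5)) < 1e-12
# OOD-leaning action (pi >> beta_hat) with a larger adaptation factor.
assert exclusive_penalty(1.0, 0.9, 0.1) == 8.0
```

The definition of the adaptation factor $f_\tau^{\pi,\hat{\beta}}(s)$ itself is given in the paper; here it is treated as a given scalar input.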
Strengths: I appreciate that the authors set their focus on value-based offline RL methods and propose an interesting weighting method for mitigating the over-conservatism phenomenon in CQL, akin to mildly conservative Q-learning (MCQ) algorithm. Many recent offline RL algorithms study policy regularization approaches and somewhat neglect the advances of learning offline policies purely from a value function optimization perspective. The overall method is, as far as the reviewer can tell, novel. Though the proposed EPQ method can be seen as one variant of CQL, the proposed method is interesting and can address the unnecessary conservatism on in-distribution data.
The presentation of this paper is very nice and I personally quite like it. The authors include many toy examples (e.g., Figure 3, Figure 4) and illustrations of their method (Figure 2). This is of great help in aiding the readers to quickly capture the key points that the authors would like to convey. The authors also compare their method against numerous strong offline RL value-based and policy regularization algorithms and demonstrate that EPQ exhibits quite strong performance on many datasets.
Weaknesses: This paper has the following potential drawbacks
- EPQ introduces many hyperparameters, and one needs to manually find the optimal ones on a new dataset. This can impede the practical application of EPQ in real-world problems
- The ablation study part is insufficient. The authors only conduct experiments on one single environment in the main text and the appendix (see Figure 6 and Figure 7). This ought to be evaluated on wider datasets to thoroughly examine the hyperparameter sensitivity and how different components/hyperparameters affect the performance of EPQ. The influence of the PD trick also should be investigated on wider datasets. Based on Figure 6, it turns out that EPQ with PD and EPQ without PD exhibit similar performance.
- Equation 3 depicts that the target value is *corrected* by the introduced exclusive penalty term $P_\tau$. The final objective (Equation 4) is also derived based on this. Equation 3 reminds me of the RND series in offline RL, e.g., SAC-RND [1] and SAC-DRND [2]. Typically, they also subtract a penalty term from the target value to pursue conservatism and mitigate overestimation. I think $P_\tau$ can be viewed as playing a similar role. Any comments here? Are there any advantages of the introduced penalty term $P_\tau$ over the penalty term given by RND? How can these methods be connected? What if we directly optimize Equation 3 and tune $\alpha$?
[1] Anti-Exploration by Random Network Distillation. ICML 2023
[2] Exploration and Anti-Exploration with Distributional Random Network Distillation. ICML 2024.
Technical Quality: 2
Clarity: 3
Questions for Authors: I have the following questions
- Is there a reason that you do not report the standard deviation of EPQ on halfcheetah-expert dataset in Table 1?
- it seems CQL($\alpha=0.0$) in Figure 5(a), middle, stops early; is there a reason for this?
- how do you expect EPQ to be applied in real-world tasks? Any suggestions or instructions on how to tune the introduced hyperparameters?
Some minor points:
- I noticed that one cannot be directed to the corresponding images/tables by clicking, e.g. Fig 2. This problem hinders the smooth reading and understanding of the content and should be fixed in the revision
- The referenced figures/tables should be distinguished, e.g., Line 90 *Fig. 1(a) and (b)* should be *Fig. 1(a) and 1(b)*. Please check the manuscript and fix all such types of issues.
- Equation 5, $Q\_( s,a)$ ==> $Q(s,a)$
- sometimes the authors just write *adroit* instead of *Adroit* (e.g., Line 232)
- the captions in Figure 5 and Figure 6 are too simple
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors include an honest discussion part on the limitations of their work in the appendix. I personally agree with that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We have addressed the feedback regarding Fig. 2 and conducted additional ablation studies to answer the reviewer's concerns related to the proposed prioritized dataset (PD) and the penalty control threshold $\tau$, as detailed in our global response. In response to other concerns raised by the reviewers, we provide the following author response.
**Practical application of EPQ**
We understand the reviewer's concern about the practical application of EPQ, given that tuning many hyperparameters for each environment can be costly. However, as the experimental results in Appendix D suggest, EPQ demonstrates plausible performance even with non-optimal parameters. Additionally, as the reviewer suggested, we conducted additional experiments to test the sensitivity of EPQ to our main parameter $\tau$ in the Halfcheetah-medium and Hopper-random environments, and reported these findings in Fig. R.3. As explained in the global response, while the optimal $\tau$ varies depending on the variety of possible actions in each task, EPQ showed consistent performance within a certain range of $\tau$ values. Notably, in the Hopper-random environment, EPQ achieved superior performance across the range $\tau/\rho \in [0.2, 0.5, 1.0, 2.0]$. This suggests that precise tuning of all parameters may not be necessary, as EPQ performs well with a reasonable range of parameter settings.
**Distinction to intrinsic reward based offline RL**
As the reviewer pointed out, many penalty-based offline RL methods apply penalties to the target values to reduce the estimation error of policy actions not present in the dataset [1, 2, 3]. Unlike SAC-RND [1], which uses random network distillation as an uncertainty estimator for penalizing uncertain actions, our approach adjusts the penalty based on whether the actions being evaluated are sufficiently represented in the dataset. This allows for a theoretical analysis of the causes of estimation bias and the appropriate degree of penalty required to mitigate it. Additionally, we observed that directly optimizing Eq. (3) in the paper, which imposes the penalty as an intrinsic reward, can lead to unstable learning due to the large penalty continuously fluctuating the target values. In contrast, our approach ensures stable learning by directly applying the penalties to the $Q$-values, thereby enabling the superior performance of EPQ.
**Clarity issues for figures in the paper**
**Fig. 2:** We have regarded the reviewer's concerns on the clarity of Fig. 2 in our global response. Please refer to it for our detailed response.
**Table 1:** The exclusion of the standard deviation of EPQ on Halfcheetah-expert dataset is a typo. The experimental results showed $107.2 \pm 0.2$. Thank you for pointing it out.
**Early stopped result in Fig. 5:** In the experiment shown in Fig. 5(a), the estimation bias in CQL with $\alpha=0$ became excessively large, causing the gradients to explode and resulting in forced termination of the training. In fact, Fig. 5(a) shows that the estimation bias reached up to $10^{13}$. Therefore, we reported the results in Fig. 5 up to the point where the experiment was terminated.
**Minor issues**
We will address the reviewer's feedback by correcting typos, distinguishing figure references, and enhancing captions to improve the overall presentation of the paper.
[1] Nikulin, Alexander, et al. "Anti-exploration by random network distillation." International Conference on Machine Learning. PMLR, 2023.
[2] Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in Neural Information Processing Systems 33, 2020.
[3] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline reinforcement learning with implicit q-learning." arXiv:2110.06169, 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. It would be good if the authors could incorporate my suggestions (e.g., ablation study, early stop issue) when preparing the camera-ready manuscript. The main drawback of this paper is that EPQ introduces many hyperparameters, and one needs to manually find the optimal ones on a new dataset, as commented. However, since existing methods like ReBARC also require significant hyperparameter search to achieve good performance, I think it is okay and believe this paper can be accepted. I would vote for acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for the prompt response. We are glad to hear that our answers have helped in understanding the paper. If you have any additional questions or need further information, please let us know. | Summary: This paper showed that the popular conservative Q-learning introduces bias into the Q function by its restrictive penalty. It proposed to enforce an adaptive penalty term based on the dataset to avoid bias when the dataset support disagrees with the learned policy. Experimental results supported the authors' claims that the proposed method can effectively reduce estimation bias and outperform many baselines.
Strengths: Conservatism is common in offline RL methods. CQL is one of the most popular algorithms that enforces the conservatism by penalizing its Q value, and hence it runs the risk of introducing bias. This paper nicely illustrated a possible source of bias originating from the difference between dataset actions and policy actions, and proposed to correct the bias by the penalty adaptation factor. The authors justified the proposed penalty with both pedagogical examples and more challenging problems.
Weaknesses: While the paper is well-written in general, I believe the presentation can be further improved by adding making the exlanation more accesible, especially Figure 1 and 2. The two figures served a pivotal role in illustrating the downside of CQL and the core idea of the proposed EPQ. Figure 1 would benefit from separating the information into two parts: (1) overlaps of the dataset/policy actions and (2) estimation bias of CQL under different $\alpha$.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Figure 2 the authors suggested the relationship
> let $\tau_2 < \tau_1$ when $N_1 < N_2$\
However, it seems there is chance that if $\tau_2$ is only slightly smaller than $\tau_1$, then $\tau_2$ would only be penalizing $\pi_3$ and not covering $\pi_1, \pi_2$. More generally, this relates to choosing an appropriate threshold. In the experiments the authors swept $(c\cdot \rho, c\in[0, 10])$, but what was the observation or rule of thumb here to make sure the threshold is meaningful?
2. Another question concerns the implementation. It seems the empirical behavior policy $\hat{\beta}(a|s)$ plays a crucial role in the adaptive penalty. In continuous domains the offline datasets often comprise only a single copy for any action, making the estimation of $\hat{\beta}(a|s) = \frac{N(s,a)}{N(s)}$ rather hard and inaccurate. Therefore, existing work like [1, 2] often use sophisticated models to estimate $\hat{\beta}(a|s)$ with the hope that it would generalize better. By contrast, it is surprising to see that Algorithm 1 used only simple BC and that was sufficient to fuel the superior performance of EPQ. What could be the reason?
References:\
[1] Mildly Conservative Q-Learning for Offline Reinforcement Learning
[2] Supported Policy Optimization for Offline Reinforcement Learning
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Potential negative societal impact does not apply and the authors have discussed some of the limitations in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We have addressed the feedback regarding Fig. 2 and conducted additional ablation studies to answer the reviewer's concerns related to the proposed prioritized dataset (PD) and the penalty control threshold $\tau$, as detailed in our global response. In response to other concerns raised by the reviewers, we provide the following author response.
**Clarity issues for figures in the paper**
**Fig. 1:** Although we could not include the revision of Fig. 1 due to limited space, we will improve the clarity of the paper by separating the illustration of CQL's estimation bias according to various penalizing constant $\alpha$ and the distributions of $\pi$ and $\hat{\beta}$, as suggested by the reviewer.
**Fig. 2:** We have regarded the reviewer's concerns on the clarity of Fig. 2 in our global response. Please refer to it for our detailed response.
**Choice of the penalty control threshold $\tau$**
As noted by the reviewer, it is crucial to select an appropriate $\tau$ to avoid the situation described by the reviewer. Fig. 2 and R.1 suggest that, "with a fixed data distribution", as the number of datasets $N$ increases, actions are sampled more diversely, which implies that $\tau$ should be lower. However, in actual experimental environments, since the data distribution varies across environments and states, these factors need to be additionally considered for choosing the optimal $\tau$.
To address this issue, we provide additional ablation studies regarding $\tau$, and the rule for selecting the threshold $\tau$ is detailed in the global response. Based on the experimental results provided in the global response, the overestimation error arises from samples that have not been visited, so the optimal $\tau$ tends to be determined by how diversely actions are sampled in the dataset for each environment. According to our analysis related to $\tau$, choosing a low threshold $\tau$ is effective when the dataset contains sufficiently diverse state-action pairs, whereas a higher threshold is preferable when the dataset is less comprehensive.
**Behavior cloning**
As the reviewer pointed out, predicting the empirical behavior policy $\hat{\beta}$ significantly impacts the performance of EPQ, especially since our penalty adaptation factor $f^{\pi,\hat{\beta}}_\tau$ is calculated using samples $D$ generated from the predicted behavior policy. Predicting the behavior policy in continuous space is challenging, and various models have been employed for this task, as the reviewer mentions. Instead of using simple behavior cloning (BC), we utilize a prediction model based on the variational lower bound [1], one of the most representative variational inference methods. As described in Appendix B.3, assuming independence among all data samples, the variational lower bound for the likelihood of $\beta$ can be derived as $\log\beta(a|s) \geq \mathbb{E}\_{z\sim p\_\psi(\cdot|s,a)}[\log q\_\psi(a|z,s)]- D\_{KL}(p\_\psi(z|s,a)||p(z)),~\forall s,a \in D$, where $p\_\psi (z|s,a)$ is an encoder model and $q\_\psi(a|z,s)$ a decoder model parameterized by $\psi$, and $z$ is the latent variable whose prior distribution $p(z)$ follows the multivariate normal distribution, i.e., $p(z)\sim N(0,I)$. Please refer to Appendix B.3 for a detailed explanation. By employing this variational approach, we obtained a more precise estimate of the behavior policy $\hat{\beta}$ compared to using simple BC.
[1] Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv:1312.6114, 2013.
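For concreteness, the KL term in the variational lower bound above has a closed form when the encoder $p_\psi(z|s,a)$ outputs a diagonal Gaussian and the prior is $N(0, I)$; a minimal sketch (not the authors' implementation, which uses learned neural encoder/decoder networks):

```python
import math

def kl_diag_gaussian_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims:
    0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1). This is the regularization
    term subtracted in the variational lower bound."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, log_var))

# A standard-normal encoder output incurs zero KL penalty.
assert kl_diag_gaussian_to_standard_normal([0.0, 0.0], [0.0, 0.0]) == 0.0
```

In training, this term is added to the reconstruction loss $-\log q_\psi(a|z,s)$ and minimized jointly over the encoder and decoder parameters $\psi$.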
---
Rebuttal Comment 1.1:
Title: thanks for the response
Comment: Thanks for the detailed response. I think it makes sense for the new version to include the description posted here for choosing a suitable $\tau$. The authors have addressed my questions satisfactorily. I have raised my score to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for the prompt response and valuable feedback. We're glad that our answers have been helpful for clarification. Please let us know if you have any additional questions or need further information. | Summary: The paper studies the problem of value estimation bias mitigation in offline reinforcement learning. Specifically, the paper takes the well-known Conservative Q-Learning algorithm as a starting point and improves its penalization scheme with an alternative one that applies provably less value underestimation bias. The paper reports results on the well-known D4RL benchmark and compares against alternative approaches.
Strengths: * The paper points to a key problem in model-free offline reinforcement learning, i.e. model underfit caused by overconservatism due to excessive penalty application.
* The paper follows a solid methodology to the problem that starts from illustrative data-driven toy results, continues with theoretical analysis, and ends up with a well-understood and well-justified improvement to an established algorithm
* The shown results are particularly comprehensive and strong.
* The presentation of the paper is at a stellar level. It is extremely easy to follow the story line, even though it is technically quite dense.
* The paper addresses the related literature well.
Weaknesses: The paper builds its whole problem statement and solution on a big assumption: the environment dynamics will not be learned. In other terms, the problems it highlights and addresses are specific to model-free offline RL approaches, although learning an environment model is not such a major limitation in the offline setting, where data and compute resources are assumed to be generous and training time is not an issue, unlike in the online MBRL setup. The paper seems to miss this key positioning element in its problem formulation. Its presentation would improve if this choice were made more explicit and the impact of the proposed solution were presented accordingly.
Minor: I found Figure 2 extremely difficult to grasp. The authors may consider simplifying it a little and extending the caption with an explanation of it.
Technical Quality: 4
Clarity: 4
Questions for Authors: * Is there a particular reason why the EDAC algorithm of An et al. [40] is not in the comparison list? It is also model free, reports results on the same benchmarks and it performs better than MISA on Mujoco Tasks, i.e. 85.2*18 = 1533.6.
* Section 4.2 does not specify whether the estimation bias is compared between the predicted Q-value and the "discounted" observed return. Can the authors confirm that discounting has been applied in the results shown in Fig 6? By bare eye the bias looked to me too high.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The paper does not address its own limitations and does not specify the potential negative societal impact of the presented work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We have addressed the feedback regarding Fig. 2 and conducted additional ablation studies related to the proposed prioritized dataset (PD) and the penalty control threshold $\tau$, as detailed in our global response. In response to other concerns raised by the reviewers, we provide the following author response.
**Problem formulation in model-free offline RL**
Model-free offline RL focuses on measuring and addressing overestimation bias in policies $\pi$ and solving distributional shift issues based on this measure, whereas model-based offline RL concentrates on how to mitigate distributional shift when learning from samples generated using a dynamics model. As a result, these two domains have different focuses and are both areas of active research. In our work, we considered the model-free setting to focus on how CQL, a model-free offline RL method, induces underestimation bias and how to address this issue. However, as noted by the reviewer, since model-based RL can lead to more efficient learning, extending our method to the model-based setting could be a valuable area for future work. We appreciate the reviewer’s suggestion and will actively consider it in our future research.
**Comparison with EDAC**
There are various approaches addressing the estimation bias of policy actions that may not be present in the dataset in offline RL setups. Our work specifically focuses on directly penalizing $Q$-functions to reduce overestimation, based on the analysis of estimation bias in $Q$-functions. Consequently, we prioritized comparisons with methods like CQL [1] and IQL [2], which share a similar focus on mitigating overestimation. On the other hand, EDAC [3] employs a clipped $Q$-learning method based on the confidence of $Q$-value predictions, presenting a different approach to addressing overestimation. Given that EDAC's methodology differs from ours, we opted to compare offline RL methods that reduce overestimation through penalty or constraint mechanisms, by examining the distributions of $\pi$ and $\beta$ from a methodological perspective rather than solely focusing on performance.
However, we acknowledge the significance of EDAC in the offline RL domain, as highlighted in our Related Works section. In response to the reviewer’s comments, we will directly compare the performance of our proposed EPQ method with EDAC based on reported results for commonly considered environments in both EDAC and our study. For the Mujoco tasks, the average scores across all tasks were similar, with EPQ scoring 85.4 and EDAC scoring 85.2. In the Adroit tasks, however, EPQ demonstrated superior performance with an average score of 27.7 compared to EDAC’s average score of 17.7, showing nearly a twofold improvement. The Adroit task is particularly challenging due to its sparse rewards and limited variety of state-action pairs in the dataset, which makes learning difficult. Thus, EPQ effectively demonstrates its superiority over EDAC in such challenging scenarios.
**Clarity issues for figures in the paper**
**Fig. 2:** We have regarded the reviewer's concerns on the clarity of Fig. 2 in our global response. Please refer to it for our detailed response.
**Large estimation bias in Fig. 5(a):** In Section 4.2, we conducted experiments to analyze the impact of our overestimation reduction method across various penalizing constants and reported the results in Fig. 5(a). As the reviewer points out, the estimation bias may look quite large to the bare eye since we plotted the squared value of the estimation bias to accurately represent the effects of overestimation and underestimation, as mentioned in line 242. Regarding the reviewer's concern about the results shown in Fig. 6, we can confirm that discounting has been applied. We will provide additional explanation for this in the paper.
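For clarity, the discounting applied to the observed returns is the standard discounted Monte-Carlo return, as in the following minimal sketch (illustrative code, not our actual implementation):

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted Monte-Carlo return G_0 = sum_t gamma^t * r_t,
    accumulated backwards over the trajectory for convenience."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Three rewards of 1.0 with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1.75
```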
[1] Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in Neural Information Processing Systems 33, 2020.
[2] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline reinforcement learning with implicit q-learning." arXiv:2110.06169 2021.
[3] An, Gaon, et al. "Uncertainty-based offline reinforcement learning with diversified q-ensemble." Advances in neural information processing systems 34, 2021.
---
Rebuttal Comment 1.1:
Title: Keep score
Comment: Thanks for your response, which answered my questions satisfactorily. I keep my tendency towards an accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for the prompt response and valuable feedback. We appreciate that our response has helped clarify the issue. Please let us know if you have any additional questions or need further information. | Summary: This paper introduces a novel approach to handling distribution shift in
off-policy RL by the means of Q-function regularization. This is
accomplished by modulating a penalty term that is overly conservative in
CQL. The authors argue that CQL overcompensates for the distribution
shift in cases where state-action pairs exist with non-negligible
density in the dataset, but are still less likely under the reference
policy than the learned policy. Based on this insight, the authors
propose to modulate the CQL value function penalty by a decreasing term
in the density of state-action pairs under the reference policy, which
activates as soon as this density crosses a threshold. Moreover, the
authors propose a prioritized sampling method to further reduce the
underestimation bias. The resulting algorithm, called EPQ, achieves
superior performance relative to its competitors across many familiar
offline RL benchmarks.
Strengths: The motivation for this work was fairly strong, and the authors identified an
interesting shortcoming with the overestimation correction in CQL. Figure 1
followed by Figure 4 do a nice job of depicting the influence of this
shortcoming, and how the proposed method corrects it.
Moreover, the EPQ algorithm is deployed on a large suite of benchmarks, and
outperforms all competitors with remarkable consistency. To complement these
results, the authors conducted experiments to verify that EPQ does in fact
reduce value estimation bias, shown by comparing EPQ value estimates with
Monte-Carlo estimates, as well as predictions from CQL. The experiments are
conducted over four random seeds. While this is relatively few seeds, I think
this is acceptable given the range of tasks that were tested.
Weaknesses: The presentation of the paper (e.g., writing, figures) can be improved.
In particular, many of the figures were difficult to read and/or
interpret. Both facets of Figure 2 took substantial effort for me to
understand (in fact, I think I would have more easily understood the paper
without having seen these figures; see Questions below).
The statement of Theorem 3.1 is not precise enough, particularly for the
latter claim. Moreover, I suspect there are some technical assumptions
missing; see Questions below.
Furthermore, I do not entirely understand the motivation for the
prioritized dataset. Particularly, it is not clear to me that Theorem
3.1 (the theoretical justification for EPQ) actually applies with the
prioritized dataset, since the prioritzation depends on the estimated
Q-function being updated. Beyond that, the ablation of this feature is
not very convincing.
While there is a wealth of empirical results, confidence intervals from
the baselines are largely lacking—this is especially relevant in Table 1.
See for example the `door-cloned` row: EPQ is identified as the best,
but its confidence region definitely overlaps that of CQL, and probably
those of many of the baselines as well. The same goes for
`relocate-cloned`, as well as (I'd suspect) many of the AntMaze tasks.
That said, the results do suggest that EPQ frequently outperforms its
competitors, and rarely does substantially worse.
Finally, it would have been nice to see stronger heuristics for choosing
the threshold parameter $\tau$. Figure 2a suggests that $\tau$ should be
a function of the amount of data in the dataset, but this is not
actually discussed anywhere. Rather, the authors claim to have found a
choice for $\tau$ that is inversely proportional to the volume of the
action space, but results for this choice are only given on one
environment, precluding any conclusion that this choice/trend is good in
general. The proof of Theorem 3.1 also gives a condition for determining
when $\alpha$ is large enough, which is roughly inversely proportional
to the lowest density under the reference policy over all states and
actions – therefore, choosing $\alpha$ to be inversely proportional to
the volume of the action space is only theoretically justified when the
reference policy is uniform.
## Minor Issues
The notation/definition of the Bellman operator $\mathcal{B}^\pi$ is not
exactly correct. In its definition on line 61, $\mathcal{B}^\pi$
averages over all next states $s'$. Then, in the expression on line 63,
you are taking an expectation over all state transitions $(s, a, s')$ in
the dataset with $\mathcal{B}^\pi$ evaluated in this expectation. Since
$s'$ isn't used anywhere in this expression explicitly, my assumption is
that you're using this state as the state to bootstrap from in the
application of $\mathcal{B}^\pi$; but then $\mathcal{B}^\pi$ on line 63
is not the same as its definition on line 61.
On line 83, you refer to the "actual average return $G_0$", but $G_0$
was defined as the random return (not averaged) on line 56.
Formatting of equation (2) is not nice – it almost looks like it's
depicting two separate formulas. It may read easier if you instead
colored the two factors and described their influence in the text below.
In Figure 1 and Figure 4, it would be very helpful to see where $0$ lies
on the y-axis on the estimation bias side.
Figure 2a is very busy and difficult to interpret. Firstly, I think it
would be better to highlight the magnitude of the penalty itself, as
opposed to the penalty reduction (which implicitly depends on some
initial penalty, I'm guessing from CQL). Moreover, the relationship
between the amount of data and $\tau$ should be discussed before this
figure, even if superficially (e.g., with a sentence that says that
$\tau$ decreases as you collect more data). Then, the figure would have
a much more clear interpretation: increase the amount of data, and the
penalty will be relaxed more aggressively.
There appears to be a formatting error on line 140, "Proof) Proof…".
There is a formatting error in equation (5): $Q_(s, a)$ should be $Q(s, a)$.
On line 243, $200k$ should be $200\mathrm{k}$.
In Table 1, the "total" rows are not good indications of performance,
firstly because the returns for the different environments are not
normalized. That said, the results for EPQ still look good if you
neglect the "total" rows.
In Figure 5, it would help to use a different line style to emphasize
which curve corresponds to EPQ. I found it difficult to distinguish EPQ
from CQL ($\alpha=0.0$) – fortunately these curves generally occupied
disjoint regions in the graphs.
Technical Quality: 3
Clarity: 2
Questions for Authors: In Figure 1, what is the relationship between $\tau_i$ and $N_i$
($i=1,2$)? Such a relationship has not been discussed up to this point.
In the proof of Theorem 3.1, I believe some assumptions are missing. If
$\pi(\cdot\mid
s)$ is ever supported on an action $a'$ such that
$\hat{\beta}(a'\mid s) = 0$, then $\Delta^\pi_{EPQ}\to\infty$.
Therefore, there would be no $\alpha$ large enough to underestimate the
value function for your argument on line 465 as long as there exists a
single $(s, a)$ in the dataset for which $\xi^\delta(s, a) > 0$. Since
$\hat{\beta}$ was defined to be the empirical conditional distribution
over actions from the dataset, this result actually suggests that
$\alpha$ must be infinite whenever your dataset does not fully cover the
action space (which is always the case in the experiments, where the
action space is continuous).
In table 1, why aren't confidence intervals given for the baseline
methods?
Why are there no confidence regions shown in Figure 6? Particularly, it
would have been helpful to see these in Figure 6a. As it stands, the
effect of the prioritized dataset depicted in this figure is a little
underwhelming.
With regard to the analysis of the penalty threshold, why should we
scale $\tau$ linearly with the density of the uniform distribution over
$\mathcal{A}$ (that is, inversely proportional to the volume of the
action space)? Figure 6b does not indicate whether the choice of
$\tau = 0.2\rho$ is actually a good choice across environments, so
indeed it could have just been that $\tau=0.2\rho$ happens to work well
in `hopper-medium` by chance. You could have investigated (or at least
presented results) showing how this form of scaling with the
action space performs.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are mostly discussed, except for potentially missing assumptions in
Theorem 3.1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We have addressed the feedback regarding Fig. 2 and conducted additional ablation studies to answer the reviewer's concerns related to the proposed prioritized dataset (PD) and the penalty control threshold $\tau$, as detailed in our global response. In addition, we have included confidence intervals for all results in the author-provided PDF.
**Scaling and selection of the threshold $\tau$**
As mentioned in the global response, the penalty control threshold $\tau$ is proportional to the log-density of $\mathrm{Unif}(\mathcal{A})$. However, we did not intend for the threshold $\tau$ to scale proportionally to the action volume. Instead, since $\tau$ is compared to the log-density of $\hat{\beta}$, a distribution over the action space $\mathcal{A}$, we designed the setup so that similar scale factors $\tau/\rho \in [0.2,...,10]$ yield consistent thresholding effects across different action dimensions, without being overly sensitive to changes in the action space.
Based on the experimental results provided for the global response, the overestimation error arises from samples that have not been visited, so the optimal $\tau$ tends to be determined by how diversely actions are sampled in the dataset for each environment. Fig. 2 and R.1 suggest that, "with a fixed data distribution", as the dataset size $N$ increases, actions are sampled more diversely, which implies that $\tau$ should be lower. In actual experimental environments, however, since the data distribution varies across environments and states, these factors need to be additionally considered for choosing the optimal $\tau$. According to our analysis related to $\tau$, a low threshold is effective when the dataset contains sufficiently diverse state-action pairs, whereas a higher threshold is preferable when the dataset is less comprehensive.
**Confidence interval of the experimental results**
In offline setups, where performance measurement methods are generally similar, it is common to cite reported results for certain algorithms or experiments [1, 2, 3]. As outlined in Section 4.1, we utilized reported results from the recent MISA paper [1], which did not include standard deviations. For the missing results, we conducted our own experiments to provide the necessary performance data. However, for the experiments we reproduced, such as the modified CQL comparisons in Table 6 for Adroit tasks, we have included confidence intervals as requested by the reviewer. Even when considering these confidence intervals, our proposed EPQ algorithm consistently demonstrates superior performance compared to the CQL baseline. We believe that the proposed EPQ's superiority is well-supported, given its significant enhancement in average performance relative to other baselines.
**Theoretical soundness of Theorem 3.1**
**Penalizing constant $\alpha$:** Theorem 3.1 states that when the penalizing constant $\alpha$ is sufficiently large, the $Q$-values $\hat{Q}^\pi$ learned based on the EPQ penalty $\mathcal{P}\_\tau$ will underestimate the true $Q$-values. The reviewer inquired whether, according to Theorem 3.1, $\alpha$ needs to be infinite for $\hat{\beta}$ approaching zero to allow for underestimation. We observed that this confusion arises from a typo in the proof of Theorem 3.1 in Appendix A.1. Currently, the proof incorrectly states that $\alpha$ must satisfy $\alpha \geq \max_{s,a \in D}[\xi^\delta(s,a)] \cdot \max_{s \in D} \Delta_{EPQ}^\pi(s)$. However, the correct condition should be $\alpha \geq \max_{s,a \in D}[\xi^\delta(s,a)] \cdot \max_{s \in D} (\Delta_{EPQ}^\pi(s))^{-1}$. As $\hat{\beta}$ approaches zero, $\Delta_{EPQ}^\pi(s)$ increases towards infinity, allowing the condition to still be satisfied with a relatively small $\alpha$, which is an intuitively natural result. This behavior is empirically supported by our experiments, as shown in Fig. 4(c), where we observe significant underestimation of the $Q$-values when $\hat{\beta}$ is very small within the support of $\pi$. Thank you for pointing out this issue, and we will correct the typo in the proof to clarify this matter.
**Theorem 3.1 with PD:** As the reviewer noted, Theorem 3.1 assumes a scenario without the PD and provides a proof under that assumption. When considering PD, $\hat{\beta}^Q$ continuously changes with $Q$, which makes it challenging for Theorem 3.1 to hold directly. However, if we assume a fixed $Q$ for PD, Theorem 3.1 can be applied to a fixed $\hat{\beta}^Q$ in a similar manner. To address the reviewer's concern in the presence of PD, we use the actual returns from the dataset to obtain $\hat{\beta}^Q$ rather than the learned $Q$ during training, as discussed in Appendix B.4. We will add a more detailed explanation of this part in the main paper.
**Minor issues**
We agree that our notation of the Bellman operator $\mathcal{B}^\pi$ and the return $G_0$ should be changed as the reviewer suggests. For $\mathcal{B}^\pi$, since we are not using the next state $s'$, the expectation should be over $(s,a)$ rather than over all state transitions $(s,a,s')$. For $G_0$, we will address the issue of overlapping notation between the "random return" defined in line 56 and the "actual average return" referenced in line 83, as well as correct any typos. Additionally, we will incorporate the reviewer's suggestions on formatting to improve the clarity of the paper. We appreciate your valuable feedback and will work to enhance the paper based on the reviewer's suggestions.
[1] Ma, Xiao, et al. "Mutual information regularized offline reinforcement learning." Advances in Neural Information Processing Systems 36, 2024.
[2] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline reinforcement learning with implicit q-learning." arXiv preprint arXiv:2110.06169, 2021.
[3] An, Gaon, et al. "Uncertainty-based offline reinforcement learning with diversified q-ensemble." Advances in neural information processing systems 34, 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response.
**Choice of $\tau$**: Thanks a lot for the discussion and the additional experimental data. This is helpful, and this (in my opinion) is a more useful argument / advice for choosing this parameter than what was originally stated in the submission.
**Confidence intervals**: Thanks again for the additional experimental data, I am satisfied with these results now.
**Penalizing constant $\alpha$**: I think I see what you mean, though it would be helpful maybe if you could write out this deduction explicitly. For example, explicitly show why $\alpha \geq \mathsf{corrected bound}$ leads to underestimation.
---
Reply to Comment 1.1.1:
Comment: Thank you for the reviewer's prompt response. To provide a more intuitive understanding of the penalizing constant $\alpha$, as requested by the reviewer, we explicitly explain below how the value function of EPQ underestimates the true value.
**Explicit derivation of Theorem 3.1 to show underestimation**
We start from line 462 in Appendix A.1, which states that when $V_{k+1}$ converges to $V_\infty$, then the converged value function $V_\infty$ of EPQ satisfies
$V_\infty(s) = V^\pi(s) + (I-\gamma P^\pi)^{-1}\cdot\lbrace - \alpha\Delta_{EPQ}^{\pi}(s) + \mathbb{E}_{a\sim\pi}[\xi^\delta(s,a)]\rbrace$,
where $\xi^\delta(s,a)$ and $\Delta_{EPQ}^{\pi}(s)$ are positive $\forall~ s,a$, assuming $\pi\neq\hat{\beta}$. (if $\pi=\hat{\beta}$, then there will be no overestimation error.)
If we choose the penalizing constant $\alpha$ that satisfies $\alpha \geq \max_{s,a\in D}[\xi^\delta(s,a)]\cdot\max_{s\in D} (\Delta_{EPQ}^\pi(s))^{-1}$, then
$- \alpha\cdot\Delta\_{EPQ}^{\pi}(s) + \mathbb{E}\_{a\sim\pi}[\xi^\delta(s,a)] $
$\leq- \max\_{s,a\in D}[\xi^\delta(s,a)]\cdot \underbrace{\max\_{s\in D} (\Delta\_{EPQ}^\pi(s))^{-1} \cdot \Delta\_{EPQ}^{\pi}(s)}_{\geq 1} + \mathbb{E}\_{a\sim\pi}[\xi^\delta(s,a)]$
$\leq- \max\_{s,a\in D}[\xi^\delta(s,a)] + \mathbb{E}\_{a\sim\pi}[\xi^\delta(s,a)] \leq 0,~~~~\forall s,$
Since $I-\gamma P^\pi$ is a non-singular $M$-matrix and the inverse of a non-singular $M$-matrix is non-negative (please see "M-matrix" on Wikipedia), i.e., all elements of $(I - \gamma P^\pi)^{-1}$ are non-negative, we obtain
$V_\infty(s) = V^\pi(s) + (I-\gamma P^\pi)^{-1}\cdot\lbrace - \alpha\Delta\_{EPQ}^{\pi}(s) + \mathbb{E}\_{a\sim\pi}[\xi^\delta(s,a)]\rbrace \leq V^\pi(s),~\forall s$.
Thus, we can conclude that $V_\infty$ of EPQ underestimates the true value $V^\pi$. We hope this explanation helps clarify the concept of Theorem 3.1. | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. Based on the reviewers' comments, many expressed difficulties in understanding the figures presented in the paper and suggested that additional ablation studies would be beneficial. Therefore, we provide the following detailed responses to address each of these points:
**Clarity issues for figures in the paper**
In particular, there seems to be difficulty in understanding Fig. 2, which illustrates the motivation behind EPQ, as it lacks sufficient detail. To provide a clearer illustration of our methods, we have revised Fig. 2(a) and Fig. 2(b) into Fig. R.1 and Fig. R.2, respectively, which are included in the author-provided one-page PDF.
**Fig. R.1:** As detailed in Section 3.2, our exclusive penalty is designed to minimize unnecessary bias in the $Q$-function by imposing penalties only when the policy actions are insufficiently represented in the dataset. To illustrate the rationale behind our exclusive penalty, Fig. R.1(a) depicts the log-probability of $\hat{\beta}$ and the thresholds $\tau$ used for penalty adaptation, with $N$ representing the number of data points. In Fig. R.1(a), if the log-probability $\log\hat{\beta}$ of an action $a \in \mathcal{A}$ exceeds the threshold $\tau$, this indicates that the action $a$ is sufficiently represented in the dataset, thus, we reduce the penalty for such actions. Furthermore, as shown in Fig. R.1, when the number of actions increases from $N_1$ to $N_2$, the threshold for determining "enough data" decreases from $\tau_1$ to $\tau_2$, even if the data distribution remains unchanged.
Fig. R.1(b) illustrates the proposed penalty adaptation factor $f\_\tau^{\pi,\hat{\beta}} = \mathbb{E}\_{\pi}[x\_\tau^{\hat{\beta}}]$ for a given $\hat{\beta}$ and policy $\pi$. Here, $x_\tau^{\hat{\beta}} = \min(1.0, \exp(-(\log \hat{\beta} - \tau)))$ represents the amount of adaptive penalty that is reduced as $\log \hat{\beta}$ exceeds the threshold $\tau$. In Fig. R.1(b), $x_{\tau_1}^{\hat{\beta}}$ is larger than $x_{\tau_2}^{\hat{\beta}}$ because $\tau_1 > \tau_2$. Thus, the adaptation factor $f_\tau^{\pi,\hat{\beta}}$ indicates the average penalty that policy actions should receive. As illustrated in Fig. R.1(b), the adaptation factors for different policies vary with their position. Specifically, for threshold $\tau_1$, we have $f_{\tau_1}^{\pi_1,\hat{\beta}} = f_{\tau_1}^{\pi_2,\hat{\beta}} = 1$ and $f_{\tau_1}^{\pi_3,\hat{\beta}} < 1$. For threshold $\tau_2$, $f_{\tau_2}^{\pi_3,\hat{\beta}} < f_{\tau_2}^{\pi_1,\hat{\beta}} < f_{\tau_2}^{\pi_2,\hat{\beta}} < 1$, as depicted in Fig. R.1(b).
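To make the adaptation factor concrete, the following minimal NumPy sketch (illustrative names, not our actual implementation) evaluates $x_\tau^{\hat{\beta}}$ and estimates $f_\tau^{\pi,\hat{\beta}}$ by averaging over sampled policy actions:

```python
import numpy as np

def x_tau(log_beta_hat, tau):
    """Per-action penalty amount: x_tau = min(1, exp(-(log beta_hat - tau))).
    Full penalty (x = 1) when log beta_hat <= tau; reduced when it exceeds tau."""
    return np.minimum(1.0, np.exp(-(np.asarray(log_beta_hat) - tau)))

def adaptation_factor(log_beta_hat_at_policy_actions, tau):
    """f_tau = E_pi[x_tau], estimated by averaging over sampled policy actions."""
    return float(np.mean(x_tau(log_beta_hat_at_policy_actions, tau)))

# Policy actions all below the threshold -> the full penalty is kept (f = 1).
print(adaptation_factor([-3.0, -2.5], tau=-2.0))  # 1.0
# Policy actions well represented in the data -> the penalty is reduced (f < 1).
print(adaptation_factor([-1.0, -0.5], tau=-2.0))  # < 1
```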
**Fig. R.2:** As explained in Section 3.3, we introduce the prioritized dataset (PD) to further reduce the penalty when the policy is highly concentrated on actions that maximize the $Q$-function. To illustrate this, Fig. R.2(a) shows the difference between the original data distribution $\hat{\beta}$ and the modified data distribution $\hat{\beta}^Q$ after applying PD, and Fig. R.2(b) depicts the corresponding penalty graphs. As shown in Fig. R.2(a), when the policy $\pi$ focuses on specific actions, the penalty $\frac{\pi}{\hat{\beta}} - 1$ increases significantly in Fig. R.2(b). In contrast, by applying PD, $\hat{\beta}$ is adjusted to approach $\hat{\beta}^Q \propto \beta \exp(Q)$, aligning the data distribution more closely with the policy $\pi$. Consequently, the penalty is substantially reduced, as depicted in Fig. R.2(b). We believe that Fig. R.1 and Fig. R.2 will provide a clearer understanding of our proposed methods.
**Additional ablation studies**
In response to the feedback from reviewers 36me and Hh4A, we have conducted additional ablation studies for the Hopper-random, Hopper-medium, and HalfCheetah-medium tasks. These tasks demonstrate a significant performance improvement of our method compared to the baseline CQL, as discussed in Section 4.2. Figure R.3(a) provides a component evaluation to illustrate the impact of the proposed PD, while Figure R.3(b) examines performance across various penalty control thresholds $\tau \in [0.2\rho, 0.5\rho, 1.0\rho, 2.0\rho, 5.0\rho, 10.0\rho]$, where $\rho$ represents the log-density of $\textrm{Unif}(\mathcal{A})$. (There is a typo in the paper. We will fix it.) Note that $\rho$ is negative, so $\tau = 10\rho$ is the lowest threshold and $\tau = 0.2\rho$ is the highest.
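For a box-shaped continuous action space, the log-density $\rho$ of $\textrm{Unif}(\mathcal{A})$ is constant and can be computed as in this minimal sketch (illustrative code, assuming $\mathcal{A}$ is a product of intervals):

```python
import math

def uniform_log_density(low, high):
    """Log-density rho of Unif(A) over a box A = prod_i [low_i, high_i]:
    rho = -log(volume) = -sum_i log(high_i - low_i)."""
    return -sum(math.log(h - l) for l, h in zip(low, high))

# A 3-dimensional action space with each coordinate in [-1, 1]:
rho = uniform_log_density([-1.0] * 3, [1.0] * 3)
print(rho)  # -3 * log(2), negative as noted above
```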
**The effectiveness of PD:** For the component evaluation, reviewers expressed concerns about the effectiveness of the proposed PD, particularly noting its minimal impact on performance in the Hopper-medium task. However, as shown in Fig. R.3(a), while PD has a limited effect on the Hopper-random and Hopper-medium tasks, it significantly improves performance in the HalfCheetah-medium task, thereby validating the effectiveness of PD.
**The analysis of threshold $\tau$:** Additionally, reviewers inquired about the selection and impact of the proposed hyperparameter $\tau$. Fig. R.3(b) provides insights into the optimal $\tau$ values for each task. The results indicate that in tasks like Hopper-medium, where a variety of actions are not sufficiently sampled, a higher threshold performs better. Conversely, in tasks like Hopper-random, where a broad range of actions is sampled, a lower threshold is more effective. An exception is the HalfCheetah-medium task, which, despite having fewer action variations, visits a diverse range of states. This results in lower overestimation errors for out-of-distribution actions, benefiting from a lower threshold. Additionally, the Adroit task performs well with an extremely high threshold due to minimal noise in the dataset, leading to a limited variety of state-action pairs.
We believe that the additional experiments and analyses, as suggested by the reviewers, robustly validate the effectiveness of the proposed components and significantly enhance the quality of the paper. We again appreciate the reviewers' valuable feedback.
Pdf: /pdf/8787b305023fced677040b1acfe77669c21af685.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
When Is Inductive Inference Possible? | Accept (spotlight) | Summary: The authors *characterize* possible inductive inference by connecting it to online learning.
I find their work extremely interesting!
Strengths: The choice of topic is excellent, delivery is strong.
Weaknesses: See questions.
Technical Quality: 3
Clarity: 4
Questions for Authors: Page 1:
Can you please say a few words about the connection of inductive inference to computational complexity and theory of computation?
I have a feeling that some of the papers of Shuichi Hirahara and Mikito Nanashima should be cited.
Page 2:
Can you please elaborate on the comparison of two pseudocode segments?
Page 3:
Line 111:
Notation in math display is confusing :)
Page 4:
Line 124:
Please avoid contractions.
I do not understand Lines 140 -- 147.
Page 5:
Can you please elaborate on Lines 179 -- 181?
Page 6:
Why do you choose $x = x^{n(h) + 1}$?
Page 7:
Can you please elaborate on your notion of Regret?
Page 8:
I am confused by Lines 268 -- 269.
Page 9:
I like the philosophical interpretations!
Please elaborate on future work.
How is your work connected to computational complexity and computational learning theory?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We will address your concerns here.
**Connection to computational complexity (page 1)**: good point! Inductive inference (especially Solomonoff's method) is not only linked to learning theory but also to the theory of computation. We will add some discussion on this connection (e.g. the paper "Learning in Pessiland via Inductive Inference" by the authors you mentioned).
**Comparison between pseudocodes (page 2)**: we list them side by side for a straightforward comparison. Only lines 2,6,9 are different. Here lines 2,6 correspond to the difference in protocols: classic online learning allows a changing ground-truth (line 6), while inductive inference requires the ground-truth to be fixed in advance (line 2). Line 9 corresponds to the difference in criteria: classic online learning requires uniform error bounds, while inductive inference allows the bounds to depend on the ground-truth. We will add a caption to briefly explain the differences.
**Lines 140-147 (page 4)**: this paragraph conveys two messages: (1) our framework considers general $\mathcal{H}$, while previous work on inductive inference only considers $\mathcal{H}$ with a countable size; (2) our framework incorporates different settings on how Nature chooses $x_t$ (adversarial or stochastic), while most previous work only considers a certain rule of nature's choice on $x_t$. By these two, we reach the conclusion that our framework is a general framework of inductive inference, subsuming the previously considered settings. As an example, we cast the problem of learning computable functions under our framework. If you have further questions, please let us know.
**Lines 179-181 (page 5)**: once $A_n$ makes more than $d_k+k$ errors at some time step $t_0$, then for any $t> t_0$, $e(t,n)\ge d_k+k$, and $e(t,n)+n> d_k+k$ since $n\ge 1$. By the definition of how we pick the index $J_t$ (line 5 in Algorithm 1), index $k$ will always be strictly better than index $n$ for any $t> t_0$ because $e(t,k)+k\le d_k+k$, then our algorithm will never pick $J_t=n$ after $t_0$. Combining it with the fact that only indexes no larger than $d_k+k$ can be possibly invoked, we make at most $(d_k+k)^2$ errors.
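The selection rule in line 5 of Algorithm 1 can be sketched as follows (a minimal illustration assuming the error counts $e(t,n)$ are tracked externally; names are ours):

```python
def pick_index(error_counts):
    """Line 5 of Algorithm 1: pick J_t = argmin_n (e(t, n) + n),
    where error_counts[n-1] = e(t, n), the errors of algorithm A_n so far.
    Ties are broken toward the smaller index."""
    best_n, best_score = None, float("inf")
    for i, e in enumerate(error_counts):
        n = i + 1  # indices are 1-based
        if e + n < best_score:
            best_n, best_score = n, e + n
    return best_n

# Once e(t, n) + n exceeds e(t, k) + k for a good index k, index n is
# never picked again, which underlies the (d_k + k)^2 error bound.
print(pick_index([5, 0, 4]))  # index 2 (score 0 + 2 = 2)
```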
**Choice of $x^{n(h)+1}$ (page 6)**: $n(h)+1$ is the smallest number $n$ that guarantees a contradiction, for which we want $n>n(h)$.
**Notion of regret (page 7)**: the regret notion in Theorem 12 is from Definition 6. It's a natural extension of the regret notion in agnostic online learning to the non-uniform setting: we require a uniform dependence on $T$ ($r(T)$ is the same across different $h$), while the overhead $m(h)$ can vary with different $h$.
**Lines 268-269 (page 8)**: Figure 1 can serve as an intuitive explanation. Roughly speaking, we use the claim from lines 261-262 multiple times, to find a sub-tree which is a binary tree with depth $m^*+2$, each node being the root of an $\aleph_1$-size tree. In Figure 1, $v_{(1,1)}$ is the ancestor of the sub-tree, with $v_{(2,1)}$ and $v_{(2,2)}$ being its children (though $v_{(2,1)}$ is not its child in the original tree).
**Future work (page 9)**: as we briefly mentioned, our learning algorithms are inevitably intractable, and we leave a computationally tractable approximate algorithm for future work.
We hope our response has addressed your concerns. If you have further questions, please let us know. Thank you again for your valuable time and insights!
---
Rebuttal Comment 1.1:
Comment: Thank you! :) | Summary: This paper establishes a novel link between inductive inference and online learning theory. It introduces a novel non-uniform online learning framework and proves a very interesting result on hypothesis class characterization: the authors show inductive inference is possible if and only if the hypothesis class is a countable union of online learnable classes, irrespective of the observations' distribution.
Strengths: I found this paper to be a thorough, beautifully written piece of work that guides the reader intuitively through its series of theoretical results. The necessary and sufficient characterization of the hypothesis classes for which inductive inference is possible is philosophically interesting and would be an interesting addition to the field. The authors present a holistic analysis, contextualizing the current work with past literature: from comparison with the classic online learning setting, to analysis of the agnostic setting, to comparison of non-uniformity with consistency. I find this paper would be a valuable addition to the research community.
Weaknesses: The main weakness of this paper is the unclear practical implications of the paper's main result (Theorem 1) on downstream tasks. For example, given that one can characterize the hypothesis class as a countable union of online learnable classes, what would this mean for the general machine learning community? E.g., can it offer guidance towards algorithm design? Could the authors elaborate more on the potential practical impact of their work?
Technical Quality: 4
Clarity: 4
Questions for Authors: (See above)
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We will address your concern here.
**Practical implications**: this work is devoted to a conceptual link between philosophy and learning theory, therefore practical implication is not the primary objective and is beyond the scope of the current work. We believe the new conceptual finding itself is valuable because understanding inductive reasoning is a fundamental problem.
For concrete practical applications, our algorithm has the potential to be made more practical by designing approximate algorithms (as we briefly mentioned in future directions). The original form of Solomonoff induction is also known to be intractable, but later works built approximate versions of Solomonoff induction.
Some recent works study the practical use of inductive inference in large language models (for example, "Learning Universal Predictors" by Grau-Moya et al. 2024), and we believe our algorithms could be useful in practice via a similar approach, for understanding whether large language models implicitly learn such Bayesian algorithms during inference. We leave these as future research directions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! | Summary: This paper studies the non-uniform online learning problem, where error bounds can depend on the hypothesis rather than being uniform across hypotheses. In particular, the paper derives theoretical results on (i) conditions for non-uniform online learnability; (ii) regret bounds when the true hypothesis lies outside of the hypothesis class; and (iii) necessary condition for consistency, where error bounds can additionally depend on the data sequence.
Strengths: The paper comprehensively studies the non-uniform online learning problem. The theoretical results seem sound, drawing upon existing uniform learnability results. I would be interested in the opinion of a learning theory expert on the significance of the results.
Weaknesses: Given that a major claimed contribution of the work is the conceptual link between inductive inference and non-uniform online learning, I would have expected a more formal description of the equivalence (e.g. side-by-side mathematical definitions, with references), and a more detailed contextualization of the existing works in both areas and how they are subsumed by the proposed framework. For example, what are the ‘different rules of observations’? It seems to me that the informal statement of inductive inference (finite number of errors given hypothesis) is naturally formulated as Definition 4.
In terms of presentation, I feel that some examples of hypothesis classes and learning problems in the main paper would be helpful in making the motivation of the paper more accessible; along the lines of the Examples in the Appendix but perhaps more diverse. See Qns below for some specifics
Technical Quality: 3
Clarity: 2
Questions for Authors: - Do the authors have an example of a hypothesis class which is a countable union of Littlestone classes but not countable? Examples 21 and 22 are both countable so don’t illustrate the value of the new result.
- Similarly, for the consistency definition, what is an example of a class that is not non-uniform learnable but consistent?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I do not have sufficient expertise in this area to assess the limitations of the theoretical work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We will address your concerns here.
**Significance of the results**: we briefly summarize our contributions here: (1) we give a sufficient and necessary condition for inductive inference, a fundamental problem in philosophy, while previous works only provided sufficient conditions; (2) to solve this problem, we introduce a new framework called non-uniform online learning, which is closely connected to other learnability notions and can be of independent interest. We hope this addresses your concern about the significance of our results, and we are happy to answer further questions if any.
**More explanation on equivalence**: the equivalence is discussed in lines 140-147. Since our framework is a strictly more general form than previously considered inductive inference, removing constraints on $\mathcal{H}$ and $x_t$, we believe it's unnecessary to have another set of side-by-side mathematical definitions (as opposed to line 67). We give a representative example of how the problem of learning computable functions is subsumed by our framework in lines 143-147.
**Different rules of observations**: it refers to different Nature's choices of observations $x_t$. Here Definition 4 formulates the adversarial case while Definition 5 handles the stochastic case. By Theorem 9 and Theorem 11, we prove that the two cases share the same sufficient and necessary condition, which implies this condition is also sufficient and necessary for any "rule of observation" between the adversarial and stochastic cases. This corresponds to "different rules of observations" in line 141.
If our explanation on the equivalence still feels unclear to you, we are happy to hear from your further suggestions and revise our writing correspondingly.
**More examples**: we put the examples in the appendix due to space limit, we will add some key examples back to the main paper for readability as you suggested. Below we answer your questions.
**A countable union of Littlestone classes but not countable**: we give a stronger example which is a Littlestone class with an uncountable size. Consider the set of indicator functions on $[0,1]$, i.e. $\{f_c| f_c(x)=1_{x=c}, c\in [0,1]\}$. The size of this hypothesis class is uncountable because $[0,1]$ has the same cardinality as $\mathbb{R}$. This class itself has Littlestone dimension 1. A naive algorithm that makes at most one error on this class is the following: always predict $y_t=0$ until making the first error on some $x_t=c$, then the ground-truth $h^*$ is identified as $f_c$ and we predict w.r.t. $f_c$ afterwards.
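As an illustration, the one-mistake strategy described above can be sketched in a few lines of Python (class and variable names are our own, not from the paper):

```python
# One-mistake online learner for the uncountable Littlestone class
# {f_c : f_c(x) = 1 iff x = c, c in [0, 1]} described above.
class IndicatorLearner:
    def __init__(self):
        self.c = None  # the ground-truth c, unknown until the first mistake

    def predict(self, x):
        # predict 0 until c is identified, then predict according to f_c
        if self.c is None:
            return 0
        return 1 if x == self.c else 0

    def update(self, x, y):
        # the only possible mistake is predicting 0 when y = 1;
        # that single mistake reveals c = x
        if y == 1:
            self.c = x
```

Against any sequence labeled by some $f_c$, this learner errs at most once, namely the first time it observes $x_t = c$ with label 1.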
**Not non-uniform learnable but consistent**: this is left as an open question, as shown in Table 1. Currently we only know that non-uniform learnability is a subset of consistency, and we obtained a new necessary condition for consistency (Theorem 18). It's unclear whether consistency can be separated from non-uniform learnability or not.
If our response has addressed your concerns, please consider reevaluating our paper. If you have further questions, please let us know. Thank you again for your valuable time and insights!
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying that the non-uniform online framework is a novel contribution. I've updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thank you for your feedback and appreciation of our results!
We would like to emphasize that the main contribution of this work is answering the philosophical question "when is inductive inference possible?". The new non-uniform online learning framework we introduced only serves as a tool to solve this question; we therefore consider it a technical contribution. We hope this message is clearly conveyed to you.
**Main contribution**: we study inductive inference, a basic problem in philosophy, which is not only crucial to understanding human reasoning, but also inspired pioneering works in learning theory (e.g. "Occam's Razor" [1]). Different from previous works, which only considered countable-sized hypothesis classes and thereby provided only sufficient conditions, we provide a **necessary and sufficient** condition for inductive inference. This condition is proven tight across various settings (adversarial/stochastic, realizable/agnostic).
**Technical contribution**: our results are proven via the introduction of a **new learning framework**, non-uniform online learning. This framework is not only a strictly more general form of (previously considered) inductive inference [2], but also a natural combination between classic online learning [3] and non-uniform PAC learning [4]. It's closely connected to other learnability notions (e.g. [5],[6]) and can be of independent interest for future research.
In conclusion, we believe our results make a solid contribution to both the fields of philosophy and learning theory. We are more than happy to provide further clarification or explanation promptly if required.
[1] Occam’s Razor, Blumer et al, Information processing letters 1987
[2] Language identification in the limit, EM Gold, Information and control 1967
[3] Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm, Nick Littlestone, Machine learning 1988
[4] Nonuniform learnability, Benedek and Itai, Automata, Languages and Programming 1988
[5] A theory of universal learning, Bousquet et al, STOC 2021
[6] Non-uniform consistency of online learning with random sampling, Wu and Santhanam, ALT 2021 | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Aligning Individual and Collective Objectives in Multi-Agent Cooperation | Accept (poster) | Summary: The paper lies at the intersection of multi-agent learning and game theory, dealing in particular with "mixed-motive cooperative games". These are games in which the maximization of individual rewards hampers the maximization of collective rewards: when agents seek to maximize their individual reward/utility and disregard the collective utility, they are worse off as a collective. These types of problems are usually called "social dilemmas" in game theory. The objective is to achieve the 'social optimum', this is, the maximum collective reward a group of agents can obtain, beyond the individual reward.
The methodology proposed in the paper is to align the individual rewards towards collective reward maximization via "gradient shaping". This is, move the gradients of the individual reward-maximization policy towards the collective reward. In order to shift the individual rewards, the algorithm includes an "alignment" parameter that aligns two optimization problems (individual maximization vs. collective optimization) that would otherwise pull against each other.
Since the collective optimization point is an equilibrium point, the algorithm shows convergence.
Strengths: 1. Simplicity: The algorithm is simple and intuitive. It can be easily implemented in the current major multi-agent algorithms.
2. Testing in well selected benchmark environments: The environments selected to test the algorithm are very standard in the field. Results show superiority against the selected benchmarks.
3. Ablation studies. The paper presents ablation studies for the relevant parameters. In particular, the "lambda" term controlling the alignment.
4. Presentation: the narrative is well written and makes the reading very straightforward. The paper provides intuition as well as rigor on the methodology proposed.
Weaknesses: See below.
Technical Quality: 3
Clarity: 3
Questions for Authors: I don't think this is fair or true. The mentioned papers do provide substantial theoretical analysis. This is not the way to place your paper.
"On the other hand, several studies focus on auto-matically modifying rewards by learning additional weights to adjust the original objectives [Gemp et al., 2020, Kwon et al., 2023]. However, these studies often suffer from a lack of interpretability and insufficient theoretical analysis regarding the alignment of individual and collective objectives."
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors have addressed the limitations of the paper on the Conclusions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1:** I don't think this is fair or true. The mentioned papers do provide substantial theoretical analysis. This is not the way to place your paper...
**Response:** We apologize for the misunderstanding and appreciate your careful review. We have revised the statement as follows to make it more clear:
“On the other hand, several studies focus on automatically modifying rewards by learning additional weights to adjust the original objectives [1,2]. Most of these methods leverage Nash equilibria or related concepts from game theory, such as the price of anarchy. The challenge of finding Nash equilibria in nonconvex games is more difficult than finding minima in neural networks [3], and it is not intuitive to explain how individual incentives align with the collective incentives during the optimization process."
[1] I. Gemp, K. R.McKee, R. Everett, E. A. Duéñez-Guzmán, Y. Bachrach, D. Balduzzi, and A. Tacchetti. D3c: Reducing the price of anarchy in multi-agent learning. arXiv preprint arXiv:2010.00575, 2020.
[2] M. Kwon, J. P. Agapiou, E. A. Duéñez-Guzmán, R. Elie, G. Piliouras, K. Bullard, and I. Gemp. Auto-aligning multiagent incentives with global objectives. In ICML Workshop on Localized Learning (LLW), 2023.
[3] Letcher, A., Balduzzi, D., Racaniere, S., Martens, J., Foerster, J., Tuyls, K. and Graepel, T., 2019. Differentiable game mechanics. Journal of Machine Learning Research, 20(84), pp.1-40.
---
Rebuttal Comment 1.1:
Title: Thank you to the authors for their response.
Comment: Thank you for replying to the rebuttal and including my suggestion in your work. My rating stands. Good luck with your paper!
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Dear Reviewer,
Thank you for your encouraging message and for your valuable suggestions, which have significantly strengthened our work. Thanks~
Best,
All Authors | Summary: The paper proposes Altruistic Gradient Adjustment (AgA), which adjusts the gradients of individual and collective losses to align individual and collective objectives. The authors also prove that AgA effectively attracts gradients to stable fixed points of the collective objective while sacrificing individual interests less. Experiments and example illustrations demonstrate the effectiveness of AgA.
Strengths: 1. The paper is well-written and well-organized.
2. This topic is important but has received relatively little attention in the MARL community.
3. The experiments are performed on several environments and some example illustrations are given to help understand the proposed technique.
4. The most important contribution of this paper is Corollary 4.3. This algorithm tries to seek stable fixed points for the collective objective while previous works aim to seek stable fixed points for individual objectives. This will possibly benefit the collective welfare as well as the individual interest at the same time, which would be useful as demonstrated in the experiments.
Weaknesses: 1. AgA introduces an additional adjustment term, which needs to be tuned case by case.
2. AgA introduces additional computation complexity to compute Hessian-vector products.
3. It is not clear whether AgA also has the ability to always keep all individual objectives best. For example, in Figure 1(c), Simul-Co could achieve a better reward for Player 2 than AgA while Simul-Co also achieves a higher collective reward than AgA.
4. There is only one map of SMAC to be tested.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Could the authors compare the running time of AgA with other baselines?
2. Could the authors provide more discussion about AgA and Simul-Co? For example, if Simul-Co could achieve a higher collective reward than AgA like in Figure 1, one can redistribute the collective reward for each agent so that the redistributed individual rewards for each agent can be better than AgA.
3. In Algorithm 1, are the parameters w=[w_1,…,w_n] configured manually or determined by the task itself?
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness 1:** AgA introduces an additional adjustment term, which needs to be tuned case by case.
**Response:** Thank you for your feedback. Our AgA method indeed includes a gradient adjustment component; however, **the additional adjustment term is derived automatically based on our theoretical framework**, as outlined in Proposition 4.2 and Corollary 4.3, and is implemented in our derivation code. While the adjustment term itself does not require manual intervention, the parameter $\lambda$ does need to be tuned for different tasks to ensure optimal performance.
**Weakness 3:** It is not clear whether AgA also has the ability to always keep all individual objectives best. For example, in Figure 1(c), Simul-Co could achieve a better reward for Player 2 than AgA while Simul-Co also achieves a higher collective reward than AgA.
**Response:** In mixed-motive settings, the group's incentives can sometimes align and sometimes conflict [1][2]. Therefore, **it is almost impossible to consistently optimize all individual objectives simultaneously. If all individual objectives could always be optimal, the problem would reduce to a fully cooperative scenario where all incentives are perfectly aligned.** In Section 4.1, we emphasize the mixed-motive nature of our proposed differentiable mixed-motive game: "minimization of individual losses can result in a conflict between individuals or between individual and collective objectives (e.g., maximizing individual stats and winning the game often conflict in basketball matches)."
[1] McKee, Kevin R., Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, and Joel Z. Leibo. "Social diversity and social preferences in mixed-motive reinforcement learning." arXiv preprint arXiv:2002.02325 (2020).
[2]Du, Yali, Joel Z. Leibo, Usman Islam, Richard Willis, and Peter Sunehag. "A review of cooperation in multi-agent learning." arXiv preprint arXiv:2312.05162 (2023).
**Question 1:** Could the authors compare the running time of AgA with other baselines?
**Response:** Please see common response.
**Question 2:** Could the authors provide more discussion about AgA and Simul-Co? For example, if Simul-Co could achieve a higher collective reward than AgA like in Figure 1, one can redistribute the collective reward for each agent so that the redistributed individual rewards for each agent can be better than AgA.
**Response:** Thank you for the insightful question regarding the comparison between AgA and Simul-Co. While Simul-Co focuses on maximizing collective rewards, it often overlooks individual agent incentives, potentially leading to dissatisfaction among agents with lower individual rewards. Redistributing collective rewards is also a potential method to address the problem. However, redistribution often encounters the credit assignment problem, one of the most challenging issues in the MARL field. It complicates the redistribution process, typically requiring expert manual design and substantial engineering effort.
In this work, our goal is to propose an automatic method that modifies the gradient to seamlessly align individual and collective interests. Unlike traditional reward-shaping techniques, our approach doesn’t rely on manual intervention but instead automatically adjusts the gradient to ensure a balance between individual incentives and overall social welfare.
**Question 3:** In Algorithm 1, are the parameters w=[w_1,…,w_n] configured manually or determined by the task itself?
**Response:** The parameters $w = [w_1, \dots, w_n]$ are initially defined in Definition 3 as follows: "The parameter set $w=[w_i]^n\in \mathbb{R}^d$ is defined, each with $w_i\in \mathbb{R}^{d_i}$ and $d = \sum_{i=1}^n d_i$. ... Each player $i\in N$ is equipped with a policy, parameterized by $w_i$, aiming to minimize its loss $\ell_i$." In practice, the parameters $w = [w_1, \dots, w_n]$ in Algorithm 1 refer to the neural network parameters, and our method is based on the popular PPO architecture. These parameters are learned and optimized during the training process rather than being manually configured.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply. Most of my questions and concerns are addressed. I would like to maintain my score.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Dear Reviewer,
Thank you for your feedback and for carefully reviewing our detailed reply.
Best,
All authors | Summary: This paper introduces a novel optimization method called AGA that employs gradient adjustments to progressively align individual and collective objectives. They prove that this method attracts gradients to stable fixed points of the collective objective while considering individual interests. Their method is empirically validated on sequential social dilemma games, Cleanup and Harvest, and a high-dimensional environment, StarCraft II.
Strengths: - Preliminaries are clear
- Proofs are given graphs to explain the intuition
- Good comparison to baselines
- Varied experimental environments (grid game + modified SMAC)
- Strong experimental results
Weaknesses: - No clear weaknesses
Technical Quality: 4
Clarity: 4
Questions for Authors: - Figure 1. It’s not clear how to interpret the reward contour graphs. Can you clarify what the x-axis and y-axis represent? How are the contours formed? They appear to be integers corresponding to the players’ actions, but where is the action space defined?
- What is the significance of AGA moving along the summit?
- In Proposition 4.1, what is the gradient without the subscript referring to?
- L292, how is the collective loss equation determined?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1:** Figure 1. It’s not clear how to interpret the reward contours graphs...
**Response:** Thank you for your advice to improve the clarity of the graph description. The x-axis and y-axis represent the actions of the two players, i.e., for player $i \in \{1, 2\}$, $a_i \in \mathbb{R}$, where $a_i$ is the action of player $i$. In our future version, we will include the definition of the action space in Example 4.1. The contours are formed by sampling the actions of the two players at regular intervals and calculating the corresponding reward for each action pair; points that yield the same reward are then connected to form the contour lines, representing levels of equal reward achievable by the players based on their actions. Thank you again for your valuable advice. We will incorporate this explanation into the manuscript to make the graph clearer.
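For concreteness, a contour grid of this kind can be produced by evaluating the reward on a regular lattice of action pairs. The reward function below is a hypothetical stand-in with a single peak; the actual rewards come from Example 4.1 of the paper and are not reproduced here:

```python
import math

def reward_p1(a1, a2):
    # Hypothetical smooth reward for Player 1, peaked at (1.0, 0.5) with
    # maximum value 1; NOT the actual reward function from Example 4.1.
    return math.exp(-(a1 - 1.0) ** 2 - 0.5 * (a2 - 0.5) ** 2)

# Sample both players' actions at regular intervals over [-2, 2].
grid = [-2.0 + 0.04 * k for k in range(101)]
rewards = [[reward_p1(a1, a2) for a1 in grid] for a2 in grid]
# A plotting routine (e.g. matplotlib's contour) then connects grid points
# with equal reward, producing level curves like those in Figure 1.
```

Each cell of `rewards` holds the reward for one sampled action pair; the contour lines are the loci of equal values in this grid.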
**Question 2:** What is the significance of AGA moving along the summit?
**Response:** According to the definition of Example 4.1, the reward function for Player 1 has a maximum value of 1. Thus, moving along the summit represents the algorithm's ability to maximize the player's reward during updates, ensuring that the player's interests are not overlooked. The Simul-Co method, on the other hand, focuses solely on improving collective rewards and neglects individual player interests.
**Question 3:** In Proposition 4.1, what is the gradient without the subscript referring to?
**Response:** The gradient without the subscript is initially defined in Section 3.1 as follows: "We write the simultaneous gradient $\xi(w)$ of a differential game as $\xi(w) = (\nabla_{w_1}\ell_1, \dots, \nabla_{w_n}\ell_n) \in \mathbb{R}^d,$ which represents the gradient of the losses with respect to the parameters of the respective players." Since this definition is located far from the Proposition, we will reiterate it in our future version to provide clarity.
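To make the definition concrete, here is a small numerical sketch of $\xi(w)$ for a hypothetical two-player game with scalar parameters. The loss functions are illustrative choices of ours, not the paper's:

```python
# Hypothetical two-player differentiable game with scalar parameters:
# player 1 controls w1 and minimizes l1; player 2 controls w2 and minimizes l2.
def l1(w1, w2):
    return (w1 - w2) ** 2

def l2(w1, w2):
    return (w1 + w2) ** 2

def simultaneous_gradient(w1, w2, eps=1e-6):
    # xi(w) = (d l1 / d w1, d l2 / d w2): each player differentiates only
    # its OWN loss w.r.t. its OWN parameter (central finite differences).
    g1 = (l1(w1 + eps, w2) - l1(w1 - eps, w2)) / (2 * eps)
    g2 = (l2(w1, w2 + eps) - l2(w1, w2 - eps)) / (2 * eps)
    return g1, g2
```

At $(w_1, w_2) = (1.0, 0.5)$ this returns approximately $(1.0, 3.0)$, matching the analytic gradients $2(w_1 - w_2)$ and $2(w_1 + w_2)$.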
**Question 4:** L292, how is the collective loss equation determined?
**Response:** The collective loss function is determined experimentally within the context of mixed-motive problems and is inspired by the SVO method. Our motivation is to achieve equality while maximizing social welfare, ensuring that the loss function encourages fair contributions among participants.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my comments. I have read your rebuttal and will provide further comments soon, as needed.
---
Rebuttal 2:
Title: Thanks
Comment: Dear Reviewer,
Thanks for taking the time to review our response and your valuable input. We appreciate your continued engagement with our work.
Best,
All Authors
---
Rebuttal Comment 2.1:
Comment: I have no further comments and would like to maintain my score. Best of luck with the paper. | Summary: The paper investigates the topic of cooperation in a mixed-motive multi-agent setting. They first propose the formulation of a mixed motive game as a differentiable game. Leveraging the structure of the latter, they propose a gradient-based optimization algorithm, AgA. The paper both discusses the theoretical guarantees of AgA, and its effectiveness in an empirical setting. Finally, the authors introduce a modification of the MMM2 map in the StarCraft II game, to further test the empirical performances of AgA.
Strengths: - The paper is theoretically sound. The AgA algorithm is simple, but at the same time well justified.
- The experiment are broad and well designed. The Selfish-MMM2 environment addresses some of the limitations of the other ones, and I think is a principled way of showing the performance of the algorithm in larger and more complex environments
Weaknesses: - There are some typos in the paper, and I feel the presentation can be improved. For example, in the Related Work section in page 2, there are some sentences which are incomplete or non-sensical. I am specifically referring to 'While PED-DQN enables agents to incrementally adjust their reward functions for enhanced collaborative action through inter-agent evaluative signal exchanges [Hostallero et al., 2020], Gifting directly reward other agents as part of action space [Lupu and Precup, 2020].'
- I think the authors should discuss more in depth the complexity of this method. How does it scale with respect to the other algorithms proposed? There is no experiment that highlights this in the paper
- There is no discussion of how AgA satisfied individual incentives
Technical Quality: 3
Clarity: 1
Questions for Authors: - How much slower is AgA compared to the other algorithms? Do you have any empirical experiments which can shows this in practice?
- Do you have any results which show the individual metrics achieved by the agents when using AgA vs other algorithms? It could be valuable to see some plot which shows both the cooperative and individual rewards, and how these vary depending on the value the hyperparameter $\lambda$ is set to
- Did you run the experiments for Figure 3 for more seeds? Do the results still hold?
Confidence: 2
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness 1: ... I am specifically referring to ' While PED-DQN enables age ...**
**Response:** Thank you for your thorough and detailed review of our paper. We apologize for the unclear sentence and have rewritten it to improve clarity:
*To further promote cooperation, the gifting mechanism—a crucial strategy in mixed-motive cooperation[2]—allows agents to influence each other’s reward functions through peer rewarding.
Besides, PED-DQN[3] introduces an automatic reward-shaping MARL method that gradually adjusts rewards to shift agents’ actions from their perceived equilibrium towards more cooperative outcomes.*
**Weakness 2 & Question 1: ...the complexity of this method... & How much slower is AgA compared to the other algorithm...**
**Response:** Please see Common Response.
**Weakness 3 & Question 2:There is no discussion of how AgA satisfied individual incentives & Do you have any results which show the individual metrics**
**Response:** Thank you for your valuable advice regarding individual metrics. Your suggestions are helpful and will contribute to improving the quality of our experiments and the overall paper. To evaluate individual incentives, we introduce the **Gini coefficient** [1], a commonly used measure of income equality, to assess reward equality in cooperative AI settings [4]. We utilized an expedited method for calculating the Gini coefficient and derived an associated equality metric $E$, defined as $E := 1 - G$. The Gini coefficient $G$ is computed from the ranked payoff vector $p$, which arranges each individual's rewards in ascending order, as $G = \frac{2}{n^2 \bar{p}} \sum_{i=1}^n i(p_i - \bar{p})$, where $\bar{p}$ is the mean of the ranked payoff vector $p$ and $n$ is the total number of players. *A higher $E$ value indicates greater equality.*
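As a sanity check, the equality metric above can be computed directly from this formula (a minimal sketch; the function names are ours):

```python
def gini(payoffs):
    # G = 2 / (n^2 * mean(p)) * sum_i i * (p_i - mean(p)),
    # with p the payoff vector sorted in ascending order and i 1-based.
    p = sorted(payoffs)
    n = len(p)
    mean = sum(p) / n
    return (2.0 / (n * n * mean)) * sum(
        (i + 1) * (x - mean) for i, x in enumerate(p)
    )

def equality(payoffs):
    # E = 1 - G: values closer to 1 indicate more equal rewards.
    return 1.0 - gini(payoffs)
```

Identical payoffs give $E = 1$, while the maximally unequal two-player split $[0, 1]$ gives $G = 0.5$ and thus $E = 0.5$.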
**Table 1** presents the comparison of the mean and standard deviation of the equality metric achieved by different methods in the Harvest and Cleanup environments (note that the selfish-MMM2 environment involves heterogeneous agents, making it challenging to directly compare rewards achieved by different types of agents). A value closer to 1 indicates more equal rewards among agents. **As shown in Table 1, our proposed AgA method outperforms the baselines, demonstrating that AgA can effectively consider all interests within the team. Furthermore, as illustrated in Figure 3 of our manuscript, AgA achieves the highest collective rewards. Therefore, our AgA methods adequately address both individual and collective interests.**
*Table 1: The comparison of the equality metric.*
| **Envs** | **Simul-Ind** | **Simul-Co** | **SVO** | **CGA** | **SL** | **AgA ($\lambda = 0.1$)** | **AgA ($\lambda = 1$)** | **AgA ($\lambda = 100$)** | **AgA ($\lambda = 1000$)** |
|-------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|------------------------------|-------------------------|---------------------------|-----------------------------|
| **Harvest** | 0.973 ± 0.005 | 0.975 ± 0.006 | 0.974 ± 0.007 | 0.950 ± 0.051 | 0.972 ± 0.005 | 0.981 ± 0.006 | **0.988 ± 0.003** | 0.982 ± 0.012 | 0.980 ± 0.006 |
| **Cleanup** | 0.841 ± 0.071 | 0.948 ± 0.013 | 0.902 ± 0.019 | 0.903 ± 0.034 | 0.946 ± 0.016 | 0.940 ± 0.017 | 0.956 ± 0.007 | **0.959 ± 0.011** | 0.905 ± 0.022 |
**Question 3:** Did you run the experiments for Figure 3 for more seeds? Do the results still hold?
**Response:** We ran all algorithms with three seeds and report the mean and variance with a 95\% confidence interval. The results across different environments, including a toy game, a two-player public goods game, Harvest, Cleanup, and our developed selfish-MMM2, show that our method, AgA, consistently outperforms the baselines. Therefore, we believe that the results would remain consistent with more seeds.
**Reference:**
[1] David, H. A. Gini’s mean difference rediscovered. Biometrika, 55(3):573–575, 1968. ISSN 00063444. URL http://www.jstor.org/stable/2334264.
[2] Lupu, A. and Precup, D. Gifting in multi-agent reinforcement learning. In Proceedings of the 19th International Conference on autonomous agents and multiagent systems, pp. 789–797, 2020.
[3] D. E. Hostallero, D. Kim, S. Moon, K. Son, W. J. Kang, and Y. Yi. Inducing cooperation through reward reshaping based on peer evaluations in deep multi-agent reinforcement learning. In A. E. F. Seghrouchni, G. Sukthankar, B. An, and N. Yorke-Smith, editors, Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems, AAMAS ’20, Auckland, New Zealand, May 9-13, 2020, pages 520–528. International Foundation for Autonomous Agents and Multiagent Systems, 2020. doi: 10.5555/3398761.3398825. URL https://dl.acm.org/doi/10.5555/3398761.3398825.
[4] Du, Yali, Joel Z. Leibo, Usman Islam, Richard Willis, and Peter Sunehag. "A review of cooperation in multi-agent learning." arXiv preprint arXiv:2312.05162 (2023).
---
Rebuttal Comment 1.1:
Title: Main concerns addressed
Comment: Dear Authors,
Thanks for the answer and for the additional experiments provided. I suggest the authors include both the discussion and results provided here on the individual incentive and on the computational complexity of AgA in the final version of the manuscript.
Since the main concerns I raised were addressed, I'll increase my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Dear Reviewer,
Thank you very much for your positive feedback and for the time you invested in reviewing our manuscript. We are pleased to hear that the additional experiments and the revisions we provided addressed your main concerns. **We will make sure to incorporate these elements to further enhance the clarity and completeness of our work.**
Best,
All Authors | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for your diligent efforts and invaluable feedback. Your comments will greatly contribute to enhancing the quality of our paper. We hope that we have addressed your concerns in our response. If you have any further questions, please don't hesitate to engage in discussion with us, and we will respond promptly.
**Common Response Regarding the Running Time of AgA:**
Thank you for the valuable comments regarding the running time and complexity of our proposed AgA algorithm. In this response, we first address the additional complexity introduced by AgA and then provide an analysis of its practical running time. First, the AgA method is a type of gradient-adjustment method, where the modified gradient is given by $\xi_{c} + \lambda \left( \xi + H_c^T\xi_c \right)$ (see Proposition 4.2 in the paper). As discussed in the paper, it is not necessary to compute the Hessian matrix $H_c$ directly. Instead, we compute the Hessian-vector product $H_c^T\xi_c$ for the modified gradient, which has a computational cost of $\mathcal{O}(n)$ for $n$ weights [1]. Thus, compared to most current methods in mixed-motive MARL, the additional complexity mainly arises from the Hessian-vector products. **As a result, the running time of AgA is generally expected to be more than twice as long as that of standard gradient-based methods. The practical running-time analysis also supports this conclusion.** Table 1 presents the average running time of baseline methods and AgA in the two-player public goods game (see Sec. 5.1 in the paper). The table includes the total duration, the total number of timesteps over 50 runs, the time per step, and the time-per-step ratio relative to our AgA method.
The experiments are conducted on a MacBook Pro with an Apple M1 Pro chip.
The values in the ratio row are calculated by dividing the time per step of each method by the time per step of the AgA method, which facilitates an easy comparison of running times.
Simul-Ind, Simul-Co, and SL are standard gradient-based methods, while CGA and SGA are gradient-modification methods. **Our findings indicate that the AgA method takes approximately 2-3 times longer per step than standard gradient-based methods. Additionally, AgA is slightly slower than other gradient-modification methods due to the more complex operations involved in sign judgment.** Despite having the highest per-step running time, AgA is the most efficient method overall, requiring only 1389 steps for 50 runs, compared to around 4000 steps for the baselines.
Our experiments in Harvest, Cleanup, and Selfish-MMM2 were conducted on different servers, such as A100, V100, and 3090. Therefore, we cannot directly compare the running times across these setups. However, we have selected some experiments run on the same servers (A100 with 80 GB) to provide a fair comparison of the running times. In the Harvest environment, AgA takes approximately 100.33 minutes on average for training, compared to 47.33 minutes for Simul-Co. This difference, which is about a factor of two, is due to the acceleration provided by the A100 and PyTorch.
*We will incorporate the discussion into the future version. Thank you again to all reviewers for your valuable advice.*
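For readers unfamiliar with Hessian-vector products, the idea of computing $H_c^T \xi_c$ without ever materializing $H_c$ can be illustrated with a finite-difference stand-in for Pearlmutter's exact autodiff trick [1]; the quadratic objective and all names below are hypothetical:

```python
import numpy as np

# Hypothetical smooth objective f(x) = 0.5 * x^T A x, so grad f(x) = A x
# and the true Hessian is A. Illustrative only, not the paper's loss.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T  # symmetric Hessian of the toy objective

def grad(x):
    return A @ x

def hessian_vector_product(grad_fn, x, v, eps=1e-6):
    """Approximate H v via a directional difference of the gradient:
    (grad(x + eps*v) - grad(x)) / eps. Costs two gradient calls and
    O(n) memory -- the n x n Hessian is never formed."""
    return (grad_fn(x + eps * v) - grad_fn(x)) / eps

x = rng.standard_normal(n)
v = rng.standard_normal(n)
hv = hessian_vector_product(grad, x, v)
assert np.allclose(hv, A @ v, atol=1e-4)  # matches the explicit product
```

Pearlmutter's method computes the same product exactly with automatic differentiation at the same asymptotic cost, which is what makes the modified gradient only a small constant factor more expensive than a plain gradient step.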
*Table 1: Comparison of the running time between AgA and baseline methods.*
| Metrics | **Simul-Ind** | **Simul-Co** | **SL** | **CGA** | **SGA** | **AgA** |
|-------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|------------------------------|
| Total Duration (ms) | 1165.79 | 910.15 | 1149.97 | 3041.84 | 3007.77 | 1034.69 |
| Total Steps | 4272 | 3252 | 3887 | 4478 | 4179 | 1389 |
| Time Per Step (ms) | 0.27 | 0.28 | 0.30 | 0.68 | 0.72 | 0.74 |
| Ratio | 0.37 | 0.38 | 0.40 | 0.91 | 0.97 | 1.00 |
[1] B. A. Pearlmutter. Fast Exact Multiplication by the Hessian. Neural Computation, 6(1):147–160, 01 1994. ISSN 0899-7667. doi:10.1162/neco.1994.6.1.147. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model | Accept (poster) | Summary: This paper introduces a new method to make a strong connector between a language model and a vision encoder, in order to build a vision-language model. The method is derived from the Perceiver Resampler, but uses a clever initialization for the queries. The authors validate their approach with a series of ablations, comparing the other connectors to theirs.
Strengths: - This work meets a demand in the field of vision-language models, by creating a stronger connector between a language model and a vision encoder.
- The authors obtained better scores with their connector than with the most used and best ones researchers currently use, while being efficient too.
Weaknesses: - The Anchor Selector is to me the meat of the paper. The AcFormer is a Perceiver Resampler with queries benefiting from a better initialization that depends on the input image, instead of the same queries for all images. However, I find that the description of the Anchor Selector algorithm and the justification for why it could work could be better explained in the main paper (the full algorithm is in the appendix). Providing a pseudocode instead of the code could also help.
Technical Quality: 2
Clarity: 2
Questions for Authors: - What is the effect of N, the depth of the AcFormer, on the performance on the benchmarks? Which depth do you recommend taking in practice?
- In the paper, you mention that the strategy of the Anchor Selector is to take the attention map already computed to avoid having to re-compute something, and use it after modification for the initialization to the queries in your AcFormer. Have you tried other approaches where you are not focusing on efficiency and you are allowed to do additional operations, to see if it performs better?
- In Figure 2, we see the effect of the layer on the attention map. Which layer are you considering for the final attention map that you are using for your algorithm? Have you tried different layers to see if it performs better?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your comments. We will address your concerns below.
**Q1:** About the anchor selection algorithm.
**R1:** Thanks for your suggestion. We will provide you with the pseudocode below.
Assuming the Visual Feature Map is $V \in \mathbb{R}^{B \times N \times D}$, the Visual Attention Map is $A \in \mathbb{R}^{B \times H \times N \times N}$, and the desired Anchor Token Number is $T$.
1. **Initialize Variables**:
- `result` is set to empty list `[]`.
- Calculate `Per_head_num` as $(T - 1) / H$, where `[CLS]` is chosen by default.
2. **Iterate Over Batches**:
- For each batch $i$ in the range $B$:
- Initialize `max_indices` with `[0]` (the index of the `[CLS]` token).
- **Iterate Over Heads**:
- For each head $j$ in the range $H$:
- Extract and sort attention scores for the `[CLS]` token, excluding the `[CLS]` token itself.
- Select top `Per_head_num` indices and add to `max_indices`.
- If duplicates occur, select additional indices to ensure the required number of unique tokens.
- **Update Selected Anchors**:
- Fetch `selected_anchor` according to `max_indices` and append it to `result`
3. **Return**:
- Return `result`.
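The pseudocode above can be turned into a runnable sketch as follows (our reading of the described procedure, not the authors' released code; shapes follow the stated assumptions, with the `[CLS]` token at index 0):

```python
import numpy as np

def select_anchors(V, A, T):
    """Select T anchor tokens per batch element.
    V: (B, N, D) visual feature map; A: (B, H, N, N) attention map;
    the [CLS] token is assumed to sit at index 0 and is always kept."""
    B, H, N, _ = A.shape
    per_head = (T - 1) // H  # tokens contributed by each head
    result = []
    for i in range(B):
        max_indices = [0]  # start with the [CLS] token
        for j in range(H):
            # [CLS] attention over all tokens, excluding [CLS] itself
            scores = A[i, j, 0, 1:]
            order = np.argsort(scores)[::-1] + 1  # descending, absolute indices
            picked = 0
            for idx in order:
                if idx not in max_indices:  # skip duplicates across heads
                    max_indices.append(int(idx))
                    picked += 1
                if picked == per_head:
                    break
        result.append(V[i, sorted(max_indices)])
    return np.stack(result)  # (B, T, D) with T unique tokens per image
```

For example, with `T = 145` and `H = 16` heads, each head contributes 9 unique tokens on top of `[CLS]`, matching the `(T - 1) / H` rule in the pseudocode.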
**Q2:** The effect of the depth of the Anchor Former.
**R2:** Thanks for your concern. We present an ablation experiment below, conducted using the Vicuna-7b model, in which we select a total of 145 tokens.
| Depth | TextVQA($\uparrow$) | GQA($\uparrow$) | MMB($\uparrow$) | MME($\uparrow$) | Para Num (M) |
| :--------- | :----------: | :----------: | :----------: | :----------: | :----------: |
| 3 |57.7 |60.9 |68.1 |1816.1 |32.1 |
| 6 |58.0 |61.3 |68.4 |1846.1 |63.1 |
| 9 |58.2 |61.2 |68.3 |1856.1 |125.9 |
From the table above, it is evident that using a depth of 6 with the training data for LLaVA-1.5 is sufficient. Further increasing the depth does not yield significant gains.
**Q3:** About other approaches for anchor selection.
**R3:** Thanks for your question. As mentioned in our paper, a **PCA** operation on the hidden states can also be employed for anchor selection. Assuming the output of the Vision Transformer is $H \in \mathbb{R}^{N \times D}$, we apply PCA and select the first 3 components to obtain $H_d \in \mathbb{R}^{N \times 3}$. We then sum these three components and sort the results to identify visual anchors without relying on the attention map. Experimental results show similar outcomes. Additionally, we calculate the Jaccard similarity between the indices of the anchors selected by the attention-based method and by the PCA-based method, which reaches 70%, indicating a high degree of consistency between the two approaches. However, due to the computational expense of PCA, we were not able to fully train the model with this method. In future work, we will further explore other methods to extract these tokens.
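A hedged sketch of this PCA-based selection, reconstructed from the description above (the exact PCA routine and the handling of component signs are our assumptions):

```python
import numpy as np

def pca_anchor_indices(H, M, k=3):
    """Score each token by the sum of its first k PCA components,
    then return the indices of the top-M tokens.
    H: (N, D) Vision Transformer output."""
    Hc = H - H.mean(axis=0, keepdims=True)        # center features
    # principal directions via SVD of the centered feature matrix
    _, _, Vt = np.linalg.svd(Hc, full_matrices=False)
    H_d = Hc @ Vt[:k].T                            # (N, k) projections
    scores = H_d.sum(axis=1)                       # sum the k components
    return np.argsort(scores)[::-1][:M]            # top-M token indices
```

Note that PCA component signs are arbitrary, so in practice the scoring may need a sign convention; this is left as an assumption here.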
**Q4:** About the layer choice.
**R4:** Thanks for your concern. In our experiments, we choose the penultimate layer. We observe that before layer 12, the [CLS] token's attention is primarily focused on the object, gradually integrating with other visual anchors. We tested the attention maps of the last 10 layers for anchor selection and computed the Jaccard similarity of the selected token indices. Our findings indicate that the selected indices exhibit at least 90% overlap across different attention maps. Therefore, to ensure consistency with feature selection, we choose the penultimate layer in our configuration.
---
Rebuttal Comment 1.1:
Title: Answer to authors
Comment: Thank you for providing additional work and answering the questions.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer Ld45
Comment: Thank you for your acknowledgment of our additional work and responses. We appreciate your constructive feedback that has helped refine our research. Please feel free to reach out if you have further queries or need additional clarification on our work. | Summary: This paper proposes AcFormer, a novel vision-language connector for MLLMs.
AcFormer is motivated by the visual anchors observed in the vision transformer: both the PCA of the feature map and the attention map of the [CLS] token show high values at the same locations.
The authors use the attention values of the [CLS] token to select the visual anchors in a progressive way; the selected visual anchors are then used as information-aggregation tokens through cross attention.
Through extensive experiments, the authors demonstrate the effectiveness and efficiency of AcFormer compared with other multimodal connectors.
Strengths: * As shown in the main experiment results, AcFormer achieves about 2.3/1.7 times faster pretraining speed while retaining overall performance.
* Extensive ablation results are provided to further demonstrate the effectiveness of AcFormer.
* The writing and experimental setup are clear.
Weaknesses: * It seems to the reviewer that the main advantage of AcFormer over the original LLaVA-1.5 is its efficiency. However, only the training time is reported in the main table. Since all runs take less than 20 hours, as shown in Table 6, training time is apparently not the main bottleneck of developing a LLaVA-1.5 model. To further justify the efficiency of AcFormer, the authors are encouraged to report the inference time per token of the resulting models.
* The motivation is not clear enough. The visual anchors discovered by the authors are a recently observed common phenomenon in vision transformers; for example, [1] finds outliers in ViTs and shows that the outliers emerge because the ViT needs some additional tokens to preserve global information. In this paper, more experiments should be done to unveil the reason behind the emergence of visual anchors; otherwise it is not convincing enough to directly use the anchors to aggregate information, especially given that the performance improvement is limited compared with the original model.
* AcFormer selects the visual tokens without considering the actual question, which means the token reduction could be harmful when the question is not about the major subject of the image.
[1] Darcet, T., Oquab, M., Mairal, J., & Bojanowski, P. (2023). Vision transformers need registers. arXiv preprint arXiv:2309.16588.
Technical Quality: 2
Clarity: 3
Questions for Authors: * In line 186-195, what is the difference between AcFormer on $i-1$ th layer and selecting some features from $i$ th layer?
* In line 203-214, how are the 6 layers selected and how are the features fed before the projection layer? This part is not clear enough.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your review. We will address your concerns below.
**Q1:** About the inference time.
**R1:** Thanks for your concern about efficiency. We show the **inference time** and **training memory** required below. We report the inference time for each benchmark using the prompt "Answer the question using a single word or phrase." (the maximum generation length is set to 3). For inference, we use 8 A100 GPUs, each with a batch size of 1. To test the training memory, we use a batch size of 4 to avoid out-of-memory issues.
| Connector | Resolution | TextVQA($\downarrow$) | DocVQA($\downarrow$) | ChartQA($\downarrow$) | GPU Mem($\downarrow$) |
| :--------- | :----------: | :----------: | :----------: | :----------: |:----------: |
| MLP |336 |125s|198s|64s|31.24g|
| Anchor Former |336 |97s |115s|36s|22.04g|
| MLP |672 |571s|803s|276s|71.58g|
| Anchor Former |672 |384s|470s|141s|32.95g|
| MLP |1008 |-|-|-|OOM|
| Anchor Former |1008 |505s|653s|223s|50.76g|
From the above results, it is evident that our model accelerates not only the training stage but also inference. It also greatly reduces the training memory, which enables us to handle higher-resolution input. (For comparison, LLaVA-Next supports a maximum resolution of 672 $\times$ 672.)
**Q2:** About the motivation.
**R2:** Thank you for your suggestion. We appreciate you bringing the paper "Vision Transformers Need Registers" to our attention. However, it was not formally published before the NeurIPS 2024 submission deadline, so we had not reviewed it earlier. After examining the paper, we acknowledge that while both our work and this paper identify similar phenomena, our motivation and methodology are distinct.
**Motivation**. The paper "Vision Transformers Need Registers" argues that these special tokens can be harmful, and proposes using registers (extra tokens) to eliminate them and thus enhance the vision transformer backbone. In contrast, our work delves deeper into the utility of these tokens, utilizing them to build a Multimodal Large Language Model (MLLM).
**Analysis of the effectiveness of AcFormer**. We analyze the feature-transformation process in Vision Transformers through both the attention map and the feature map. Our experiments reveal that information tends to aggregate around these anchors. In AcFormer, we use these anchors as queries and all of the visual tokens as keys and values. In this way, the anchors are trained to extract information from the whole image. This differs from Q-Former or the Perceiver Resampler, which use learnable queries to aggregate visual information.
Through extensive experiments, we evaluate our method. The results demonstrate comparable performance while significantly reducing costs, as shown above in Q1.
**Q3:** About the token reduction's harm on different question.
**R3:** Thanks for your concern. In our work, the Anchor Former consists of a stack of attention and feed-forward layers. Instead of using the anchors directly as image features, we use them as queries for the Anchor Former, with all image tokens serving as keys and values. During training, the model optimizes the Anchor Former parameters to implicitly extract visual features sufficient to answer different questions, covering both fine-grained and coarse-grained information.
This is also evident from our experiments on DocVQA and ChartQA. These two benchmarks concern document question answering, which relies heavily on fine-grained visual understanding. From the results below, it can be seen that our method can also handle fine-grained visual question answering: though it sacrifices a little performance, it greatly accelerates the inference process.
| Connector | Resolution | DocVQA($\uparrow$) | ChartQA($\uparrow$) |Inference Speed($\uparrow$) |
| :--------- |:--------- | :----------: | :----------: | :----------: |
| MLP |672 |65.4|57.2|1 $\times$|
| Anchor Former |672 |64.9|56.7|1.66 $\times$|
| MLP |1008 |OOM |OOM |-|
| Anchor Former |1008 |68.8|58.1|1.21 $\times$|
**Q4:** About the difference between AcFormer on i-1 th layer and selecting some features from i th layer?
**R4:** Thank you for your question. AcFormer is a stack of cross-attention layers, so within AcFormer we do not directly select specific features. The anchor selection occurs only after the Vision Transformer stage. We use the selected anchors as queries and all visual tokens as keys and values to perform cross-attention. Assuming the anchors are $IA \in \mathbb{R}^{M\times D}$ and the vision tokens are $V \in \mathbb{R}^{N\times D}$, the computation of the $i$-th layer in AcFormer can be written as below.
$IA_i' = IA_i + \mathrm{Attn}_i(\text{query}=IA_i,\ \text{key}=\text{value}=V)$
$IA_{i+1} = IA_i' + \mathrm{FFN}_i(IA_i')$
$IA_0$ denotes the selected anchors after the vision transformer, and $V$ denotes all of the visual tokens.
The final vision representation is $IA_{6}$ from the last layer (the Anchor Former has 6 layers). We will revise the relevant sections of the article to achieve a clearer description.
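The layer updates above can be sketched as follows (a single-head simplification with random weights and no layer normalization, purely to illustrate the data flow; not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def acformer(IA0, V, layers):
    """Stack of residual cross-attention + FFN blocks: the anchors IA are
    the queries and all visual tokens V are the keys/values."""
    IA = IA0
    for Wq, Wk, Wv, W1, W2 in layers:
        scores = (IA @ Wq) @ (V @ Wk).T / np.sqrt(Wq.shape[1])
        IA = IA + softmax(scores) @ (V @ Wv)      # residual cross-attention
        IA = IA + np.maximum(IA @ W1, 0.0) @ W2   # residual ReLU FFN
    return IA

# Hypothetical sizes: 5 anchors, 30 visual tokens, dim 8, 6 layers
rng = np.random.default_rng(0)
D, Dh = 8, 16
layers = [tuple(0.1 * rng.standard_normal(s)
                for s in [(D, D), (D, D), (D, D), (D, Dh), (Dh, D)])
          for _ in range(6)]
IA6 = acformer(rng.standard_normal((5, D)), rng.standard_normal((30, D)), layers)
```

The output `IA6` keeps the anchors' shape (number of anchors by feature dimension), which is then projected by an MLP to the LLM hidden size as described in R5.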
**Q5:** how are the 6 layers selected and how are the features fed before the projection layer?
**R5:** Thanks for your question.
There are six layers in AcFormer, each containing an attention module and a feed-forward module, so there is no layer selection within AcFormer. There is only one anchor-selection process, between the Vision Transformer and AcFormer. The selected anchors are used as queries while all the visual tokens are used as keys and values; they are integrated via the cross-attention mechanism in AcFormer, as illustrated in Q4 above. AcFormer is trained during all stages (pretraining and IFT).
Following the Anchor Former, the output features have a shape of number_of_anchors $\times$ dimensions. We then use an MLP to project these dimensions to match the LLM's hidden size.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal.
I appreciate the additional report on inference speed and the additional report on performance, and thanks for the clarification of my questions. These experiments address part of my concern and I would like to increase my rating.
A minor correction: the paper 'Vision Transformers Need Registers' is published in ICLR 2024 on May 7th to May 10th 2024, which is earlier than NeurIPS 2024 deadline, May 22nd 2024. Not to mention this paper was submitted to arxiv on Sep 23rd 2023.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer r5Ke
Comment: Dear reviewer,
We sincerely thank you for your reply and correction. We will include a discussion with ‘Vision Transformers Need Registers’ in the next version of our paper.
Sincerely, The Authors of Submission 4497 | Summary: This paper introduces a novel vision-language connector, Anchor Former (AcFormer), designed to enhance the efficiency and accuracy of multimodal models. By identifying visual anchors within Vision Transformers and utilizing a cost-effective progressive search algorithm, AcFormer leverages these anchors to aggregate information more effectively.
Extensive experiments demonstrate that AcFormer significantly reduces computational costs while maintaining or improving performance across various vision-language tasks compared to existing methods like Q-Former and Perceiver Resampler.
Strengths: 1. The motivation of this paper is strong, attempting to improve a fundamental building block in recent multimodal models.
2. The paper is well written and easy to follow. Tables and figures are helpful for readers to quickly understand.
3. Experiments are extensive and solid. They cover various types of connectors (linear projection (LLaVA), Q-Former (BLIP-2), and Perceiver Resampler) and datasets (POPE, MME, MMB, MM-Vet, TextVQA, GQA, VQAv2, VisWiz and SQAimg).
4. This paper is insightful and can be useful to the community.
Weaknesses: 1. The proposed method is not very effective. The key result comparison for this paper is AcFormer vs LLaVA-1.5 (linear), but in Table 1 and 2 the accuracy results are mixed (slightly in favor of AcFormer). The proposed method is not significantly better than the linear baseline.
2. Linear connector results are missing in Table 3.
3. Minor: The term Multimodal Large Language Model (MLLM) is self-conflicting. I suggest authors use Large Multimodal Model (LMM).
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Besides the slight accuracy gain and training-time speedup, are there other benefits of the proposed method over the baseline connectors?
2. Only pretraining time results are provided. Is the proposed method faster at inference time as well?
3. It seems that the proposed method is more efficient in terms of the number of visual tokens. Will this make a big accuracy difference on tasks with high-resolution images (documents, for example)?
I will raise my rating if the authors can convince me the significance of the key technical contribution of this paper.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The authors discussed them in the limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your review, and we will address your concerns below.
**Q1:** About the effectiveness.
**R1:** Thank you for highlighting this concern.
Our primary motivation in this work is to enhance the efficiency of Large Multimodal Models (LMMs). While various token-reduction methods exist, they commonly suffer from performance degradation. Our ablation study consistently indicates that our method outperforms other token-reduction techniques, such as C-Abstractor and the Perceiver Resampler.
Moreover, in Tables 1 and 2, while the results between LLaVA (linear projection) and our method are mixed, our method shows an overall improvement when averaged across the benchmarks. Although there are slight performance drops on some benchmarks (e.g., MM-Vet ~-0.3, VQAv2 ~-0.1), the speed is greatly improved. Overall, our method reduces computation cost while maintaining comparable performance.
| Connector | TextVQA($\uparrow$) | GQA($\uparrow$) | MMB($\uparrow$) |MME($\uparrow$) |
| :--------- |:--------- | :----------: | :----------: | :----------: |
| C-Abstractor |53.4 |60.2 |67.8 |1775.4|
| Perceiver Resampler |52.1 |56.4 |65.4 |1720.8|
| Anchor Former |58.0 |61.3|68.4|1846.1|
**Q2:** About missing the linear connector result in Table 3.
**R2:** Thanks for pointing out this issue. We have listed the result of linear connector (i.e. the result of LLaVA that uses linear connector) in Table 1 and Table 2. We will add them to Table 3 to make it more completed.
**Q3:** About the name.
**R3:** Thanks for your suggestion. We will use Large Multimodal Model to replace MLLM.
**Q4:** About other benefits. (Inferencing time, Training cost)
**R4:** Thanks for your concern. Our proposed method not only improves accuracy and reduces training time but also significantly decreases training expenses (GPU memory) and accelerates inference. This is particularly important for building high-resolution large multimodal models.
We show the **inference time** and **training memory** needed below. We report the inference time for each benchmark using the prompt "Answer the question using a single word or phrase." For inference, we use 8 A100 GPUs, each with a batch size of 1. To test the training memory, we use a batch size of 4.
| Connector | Resolution | TextVQA($\downarrow$) | DocVQA($\downarrow$) | ChartQA($\downarrow$) | GPU Mem($\downarrow$) |
| :--------- | :----------: | :----------: | :----------: | :----------: |:----------: |
| MLP |336 |125s|198s|64s|31.24g|
| Anchor Former |336 |97s |115s|36s|22.04g|
| MLP |672 |571s|803s|276s|71.58g|
| Anchor Former |672 |384s|470s|141s|32.95g|
| MLP |1008 |-|-|-|OOM|
| Anchor Former |1008 |505s|653s|223s|50.76g|
From the table, it is evident that our proposed method significantly reduces both inference and training costs without significant performance loss (refer to Q6 for a more detailed analysis). In addition, this reduction allows us to develop higher-resolution models with limited resources. For example, with 8 A100 GPUs, our method can train a model with an input resolution of 1008 $\times$ 1008, while LLaVA-Next may suffer from an out-of-memory issue.
**Q5:** About the inference time.
**R5:** We are grateful for your concern. We list the inference time in the table in Q4. The results demonstrate our method's effectiveness on accelerating the inference.
**Q6:** About performance on high resolution image.
**R6:** We present the performance of the MLP and our method on DocVQA and ChartQA below. For pretraining, we use LLaVA-558k. As LLaVA-Next's dataset is not publicly available, we augment LLaVA-665k with the DocVQA and ChartQA document datasets for Instruction-Finetuning (IFT).
| Connector | Resolution | DocVQA($\uparrow$) | ChartQA($\uparrow$) |Inference Speed($\uparrow$) |
| :--------- |:--------- | :----------: | :----------: | :----------: |
| MLP |672 |65.4|57.2|1 $\times$|
| Anchor Former |672 |64.9|56.7|1.66 $\times$|
| MLP |1008 |OOM |OOM |-|
| Anchor Former |1008 |68.8|58.1|1.21 $\times$|
From the table above, it is evident that our method incurs only a slight performance drop compared to the baseline. Additionally, our method can handle input resolutions up to 1008 $\times$ 1008 with less inference time than an MLP processing 672 $\times$ 672 resolutions. Notably, at higher input resolutions, the slight performance drops are compensated for and even turn into significant improvements. For example, compared to the MLP, our method may have a slight performance drop at 672 $\times$ 672, but it operates faster; by using higher resolutions like 1008 $\times$ 1008, we achieve better accuracy at still faster speeds, compensating for any initial performance loss.
---
Rebuttal Comment 1.1:
Title: Answer to Authors
Comment: I've read all reviews and rebuttal responses. Thank you for providing detailed responses.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer i5LS
Comment: Dear reviewer i5LS,
Thanks for your reply! If there are any other concerns, feel free to tell us. We are grateful for your response.
Sincerely, The Authors of Submission 4497 | Summary: This paper proposes a way to select visual tokens, e.g. token pruning, by using attention map scores. This reduces the number of tokens needed in the network which saves compute costs. The method is evaluated on multiple datasets and shows reduced compute while maintaining performance.
Strengths: The paper is fairly well written and the experiments are good and show the benefit from the approach.
Weaknesses: The approach isn't especially novel, many works have explored token pruning, token learning, token reduction before. While this approach is different from the previous ones, the differences are small.
Technical Quality: 3
Clarity: 3
Questions for Authors: The compute cost overall seems to go down, but I'm curious about the implementation of the anchor selection. How long does that part take? How optimized is that algorithm?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your hard work in reviewing! We will address your concerns below.
**Q1:** About the novelty.
**R1:**
Thank you for highlighting this issue. Though there are indeed methods for token reduction or token pruning, such as C-Abstractor and the Perceiver Resampler, ours differs from them in several aspects.
1. Motivation. C-Abstractor mainly focuses on maintaining locality information, and the Perceiver Resampler uses learnable queries to aggregate visual information. In our work, by contrast, we propose image-specific information aggregation.
2. Method. In C-Abstractor, the output tokens of the vision transformer are aggregated with conv pooling. In the Perceiver Resampler, they are aggregated with learnable queries (all images share the same queries). In AcFormer, we use the anchors as queries and all of the visual tokens as keys and values to carry out cross attention. In this way, important information is better maintained.
3. Performance. Thanks to the visual anchors, we can aggregate visual information with nearly no performance loss, as is evident in the table below (part of Table 3 from our paper).
| Connector | TextVQA($\uparrow$) | GQA($\uparrow$) | MMB($\uparrow$) |MME($\uparrow$) |
| :--------- |:--------- | :----------: | :----------: | :----------: |
| C-Abstractor |53.4 |60.2 |67.8 |1775.4|
| Perceiver Resampler |52.1 |56.4 |65.4 |1720.8|
| Anchor Former |58.0 |61.3|68.4|1846.1|
Based on the analysis above, we believe this work provides **valuable insights into aligning visual and language modalities, and it offers a foundation for further exploration into the interpretability of the vision-language integration process**.
**Q2:** About the implementation of anchor selection.
**R2:** Thanks for pointing out this point of confusion; we clarify the implementation below. We employ a top-$p$ method for anchor selection. Specifically, we consider the attention map of the [CLS] token in the corresponding layer of the Vision Transformer (the penultimate layer in our implementation), denoted as $A \in \mathbb{R}^{H \times 1 \times N}$, where $H$ is the number of heads and $N$ the number of visual tokens (excluding the [CLS] token itself). Our goal is to select $M$ tokens. To achieve this, we calculate the number of tokens each head should contribute as $p = (M-1)/H$ (we include the [CLS] token by default, so we subtract one). For each head, we sort the token indices based on $A$ and select the top-$p$ indices. In cases of duplication, we iteratively extend to the top-$(p+1)$ indices until the desired number of unique tokens is achieved. In this way, we finally obtain the chosen visual anchors.
For example, suppose we aim to select 145 tokens and are using the OpenAI-CLIP-L model, which has 16 attention heads. We first calculate each head's token contribution as (145-1)/16 = 9. We then initialize our selected-token index list with the [CLS] token, res = [0]. For head 0, assuming the token indices sorted by the attention map are [1,2,4,6,21,64,33,78,24,98,23,99,...], we append the top-9 indices to the result list -> res = [0,1,2,4,6,21,64,33,78,24]. We then process head 1; assuming its sorted token indices are [1,24,2,4,6,98,23,99,45,32,75,34,38,70,...], since "1,24,2,4,6" are already in res, to avoid duplication we instead append [98,23,99,45,32,75,34,38,70]. After processing each head, we have a list of 145 unique token indices. Finally, we sort this list and fetch the anchors according to the indices.
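The worked example above can be sketched in a few lines of Python. This is our own illustrative reconstruction of the described procedure, not the authors' code; the function name, array shapes, and head-walking details are assumptions.

```python
# Illustrative sketch of the top-p anchor selection described above.
# attn has shape (H, N): per-head [CLS]->token attention weights.
# Token indices here run 1..N; index 0 is reserved for the [CLS] token.
import numpy as np

def select_anchors(attn, num_anchors):
    H, N = attn.shape
    p = (num_anchors - 1) // H       # tokens each head contributes ([CLS] included by default)
    res = [0]                        # start with the [CLS] token
    for h in range(H):
        # token indices sorted by this head's attention, highest first
        order = list(np.argsort(-attn[h]) + 1)
        picked, k = 0, 0
        # on duplicates, keep walking further down the ranking
        # (the "top-(p+1)" step in the description above)
        while picked < p and k < N:
            idx = order[k]
            k += 1
            if idx not in res:
                res.append(idx)
                picked += 1
    return sorted(res[:num_anchors])
```

With 16 heads and `num_anchors = 145`, each head contributes `(145-1)//16 = 9` unique indices, matching the example in the rebuttal.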
**Q3:** About the cost of anchor selection.
**R3:**
Thanks for your concern about the cost of anchor selection. In our experiments, we employ the OpenAI CLIP-L-336 model, which features 16 attention heads in its Vision Transformer and processes 576 tokens (excluding the [CLS] token). The anchor selection primarily involves a single list sort of length 576. After evaluating this anchor selection process over 1000 iterations, we observe an average execution time of 7.5 ms (against 800 ms for one token generation). The baseline method takes 1300 ms to generate one token. Although anchor selection adds an extra 7.5 ms, it reduces attention computation time by nearly 500 ms, leading to a net decrease in total computation time and highlighting the method's efficiency and lightweight nature.
| Connector |One token generation time|Anchor selection time|
| :--------- | :----------: | :----------: |
| MLP |1300ms |NA|
| Anchor Former |800ms (-500ms) |7.5ms | | NeurIPS_2024_submissions_huggingface | 2,024 |
Dendritic Integration Inspired Artificial Neural Networks Capture Data Correlation | Accept (poster) | Summary: This study explores incorporating channel-wise quadratic neurons, inspired by dendritic nonlinearity, into artificial neural networks to improve model performance. These models show competitive performance on datasets like CIFAR and ImageNet-1K while maintaining simplicity and efficiency. The theoretical and experimental results highlight the potential of quadratic neurons in enhancing model performance.
Strengths: The paper presents a sound and well-supported exploration of quadratic neurons in ANNs. The theoretical analysis gives a good intuitive demonstration of the reason behind the benefit of quadratic neurons. The experimental verification is clear and shows a clear advantage conferred by their architecture.
Overall the paper is well-written and clearly explained.
Weaknesses: The theoretical explanation is limited to a highly simplified setting.
It is not clear if the empirical analysis on the ImageNet part is a fair one, given that quite sophisticated data augmentation is used for their models.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
- "The theoretical explanation is limited to a highly simplified setting."
Thank you for your comment. Our theoretical analysis aims to emphasize that quadratic neurons inherently capture second-order information from training samples, which enhances their generalization capabilities compared to conventional neurons. In more complex scenarios, such as CIFAR-10 and ImageNet with deep CNNs, deriving theoretical results becomes challenging. However, the theoretical analysis under a simplified setting provides insight into why quadratic neurons perform better. We therefore conduct an ablation study in Section 4.3.1 showing that quadratic neurons indeed capture second-order information for classification in the cases where they perform better, further supporting our conclusions.
- "It is not clear if the empirical analysis on the ImageNet part is a fair one, given that quite sophisticated data augmentation is used for their models."
Thank you for your valuable feedback. The comparison is fair from two perspectives. Firstly, our Dit-ConvNeXt employed the same data augmentation techniques as the original ConvNeXt, which indicates that our model does not require significant modifications to achieve notable improvements. Secondly, the state-of-the-art (SOTA) models we compare against in Table 3 of the main text also utilize extensive data augmentation and tuning to achieve their results. | Summary: The paper introduces a new biologically inspired neural network architecture. Rather than using linear layers followed by a nonlinear activation function, the authors propose a quadratic model in the inputs instead. This form is said to explicitly model the covariance between input features, offering better accuracy and generalization.
Strengths: The authors present improved accuracy with fewer parameters than existing state of the art models by simply replacing a few layers.
The authors have conducted a thorough series of experiments
Weaknesses: The relation to biological plausibility is unclear and the link is tenuous.
Technical Quality: 3
Clarity: 2
Questions for Authors: Have the authors looked into the structure learned by the quadratic term?
How do the CNN filters differ from those previously learned?
Did the authors compare to other smoother activation functions such as gelu?
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are not outlined in the main text but are in the appendix instead.
It would be beneficial to include these in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
Thank you for the valuable comments, we have provided a biological interpretation of our Dit-CNN in global rebuttal.
## Questions
- "Have the authors look into the structured learned by the quadratic term?"
Thank you for your valuable question. Following your suggestion, we have examined the distribution of the trained quadratic coefficients, as shown in Figure 3.B of the rebuttal materials. We find that most of the coefficients are close to zero, suggesting the potential to further reduce the computational cost of our model. Additionally, the distribution is symmetric and confined to a reasonable range, without extremely large or small values.
- "How do the CNN filters differ from those previously learned?"
Thank you for your question. Following your suggestion, we visualize the CNN filters of the original ConvNeXt and our proposed Dit-ConvNeXt, as shown in Figure 2.B of the rebuttal materials. This visualization highlights the differences in training results between the two models, indicating that our Dit-CNN is not merely a small modification of the original CNNs.
- "Did the authors compare to other smoother activation functions such as gelu?"
Thank you for your helpful question. Following your suggestion, we compared our quadratic neuron with the smoother activation function GeLU on binary classification and the MNIST dataset, as shown in Figure 3.A of the rebuttal materials. Our results demonstrate that while GeLU activation performs better than ReLU in the binary classification task, it still does not achieve the optimal results seen with quadratic neurons. Additionally, GeLU activation performs even worse on the MNIST dataset. For more complex datasets, ConvNeXt already utilizes GeLU activation, and the results for this case are presented in Table 3 of the main text. These results are consistent with our theoretical analysis: the computational advantage of quadratic neurons arises from their direct second-order interactions between inputs, enabling them to better capture data correlations, rather than from the smoothness of the quadratic function.
## Limitations
Thank you for your advice. We will move the limitations to the main text in a later revision.
---
Rebuttal 2:
Comment: I have read the rebuttals. I have updated my score.
---
Rebuttal Comment 2.1:
Comment: We are delighted to see that our clarification and rebuttal were well-received, and we appreciate the increase in the score. Thank you once again for your careful review and valuable recommendations. | Summary: This is an interesting paper looking at quadratic neurons and how they impact the performance and/or learning rate of ANNs. The quadratic integration is loosely linked to dendritic integration properties of pyramidal neurons in cortex (though it is unclear whether any real resemblance should be granted). This increase in the degree of nonlinearity, when implemented in CNNs (called Dit-CNNs), works to their advantage on benchmarks like ImageNet while also retaining the simplicity of conventional CNNs. Of interest and importance are the elegant analytical solutions exemplified in Figure 1 as well as the ablation study shown in Table 4. Overall this is an interesting and timely study.
Strengths: This is an interesting study looking at the impact of relatively recent findings in neurobiology. Even though these neurobiological findings are only loosely connected to the quadratic model, the model and the associated networks are characterized thoroughly and insightfully. The ablation part, as well as the work on how to integrate the quadratic neurons with minimal overhead, are nice contributions that further substantiate the authors' decision to account for higher-order terms that could potentially be linked to dendritic integration.
Weaknesses: The authors use the term "dendritic integration" loosely without exploring various ways that such integration can occur (sub- vs. supra-linear). It would be interesting to consider such cases.
Technical Quality: 3
Clarity: 3
Questions for Authors: could a potentially more nuanced or bio-plausible implementation of dendritic integration further enhance performance? How would that affect the quadratic integration?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: it is left somewhat unclear how the dendritic integration along different compartments of a cell and its result in the neuron's output is reflected in quadratic neurons. It would be interesting to look at sub- vs. supra-linear integration effects along the same lines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
Thank you for your suggestion; it is indeed an interesting idea. Previous work has shown that the dendritic bilinear integration rule can account for both sub-linear and supra-linear cases, depending on the sign of the quadratic coefficient. In our approach, we directly incorporate this rule into artificial neural networks (ANNs) without restricting the integration to only sub-linear or supra-linear cases: specifically, we do not fix the sign of the quadratic coefficients during training, allowing both sub-linear and supra-linear integration to occur. An additional observation is that, after training, the quadratic coefficients exhibit both positive and negative components, as illustrated in Figure 3.B of the rebuttal materials. This indicates that our models capture both sub-linear and supra-linear effects.
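The role of the coefficient sign can be seen in a toy calculation. This sketch is ours, not from the paper; it assumes a bilinear integration rule of the form r(x1, x2) = r1 + r2 + k * r1 * r2.

```python
# Toy illustration: the sign of the quadratic coefficient k in a bilinear
# integration rule decides whether two simultaneous inputs are summed
# supra-linearly (k > 0) or sub-linearly (k < 0).
def integrate(r1, r2, k):
    # response to joint input under the assumed bilinear rule
    return r1 + r2 + k * r1 * r2

linear_sum = 1.0 + 1.0                 # purely linear summation: 2.0
supra = integrate(1.0, 1.0, k=+0.2)    # 2.2 > 2.0: supra-linear
sub = integrate(1.0, 1.0, k=-0.2)      # 1.8 < 2.0: sub-linear
```

Leaving k free during training, as the rebuttal describes, lets the model land on either regime per coefficient.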
## Questions
Thank you for your insightful question. The dendritic bilinear integration rule is primarily restricted to the subthreshold regime before the neuron generates spikes. However, there are other properties of dendritic integration, such as dendritic spikes [1], that warrant further exploration. Due to time constraints, we have not fully investigated these properties. However, examining how to incorporate these features into artificial neural networks (ANNs), and how to combine them with the bilinear integration rule, could potentially enhance performance. This is an area that deserves future research.
[1] Gidon, Albert, et al. "Dendritic action potentials and computation in human layer 2/3 cortical neurons." Science 367.6473 (2020): 83-87.
## Limitations
Thank you for your valuable suggestion. The dendritic bilinear integration rule already captures the somatic response when synaptic inputs are integrated from different compartments of a neuron. Therefore, we can model quadratic neurons without explicitly considering sub- and supra-linear integration along these compartments. However, we agree that exploring the integration effects in dendritic compartments could deepen our understanding of neuronal computation, and this area deserves further investigation. | Summary: This paper explores the computational benefits of quadratic neurons, which are inspired by the quadratic integration rules of dendrites. The authors first present the theoretical analysis on binary classification for normal distributions, showing the existence and uniqueness of the solution with a single quadratic neuron. Then, a few-shot learning experiment on MNIST and Arabic MNIST is conducted to show the better performance of quadratic neurons under few-shot training samples. Finally, this paper integrates quadratic neurons into CNNs along the channel dimension, and demonstrates the superior performance of the model on several datasets including CIFAR and ImageNet.
Strengths: 1. This paper considers the quadratic integration rule inspired by biological dendrites and successfully applies it to advanced artificial neural networks such as ConvNeXt.
2. Experiments show promising performance on relatively large-scale datasets, e.g., ImageNet.
Weaknesses: 1. The presentation and organization of the paper are poor and loose. There are also many informal claims without rigorous justification.
(1.1) The link between the theoretical analysis, the so-called “enhanced generalization capabilities”, the few-shot learning advantage, and the integration into CNNs is poor. I do not see that the theoretical analysis of the existence and uniqueness of the solution of a single quadratic neuron under a simplified setting provides effective insight or explanation for the following practice. There is no analysis of deep networks. There is no explanation for the connection between the practice of CNNs and theories.
(1.2) In the discussion of Theorem 1 and 2, it is said using the gradient descent algorithm. However, there is no analysis for training dynamics or learnability in current Theorems.
(1.3) There is no formal definition or rigorous justification for the so-called “superior generalization capability over traditional neuron”.
(1.4) It is claimed that the integration of quadratic neurons into CNNs along the channel dimension is “biologically plausible”. However, I do not see any detailed justification. Indeed, only considering quadratic integration along the channel dimension does not correspond to synaptic connections between neurons.
2. The novelty and significance of this paper are not clear enough. Quadratic neurons have been studied in many previous works, both from theoretical and practical perspectives, e.g., [1] has shown strong results. While this paper interprets it as inspiration from biological neurons, the implementation does not correspond to the biological computation form. So there should be more differentiation with existing quadratic neuron works. Considering Table 1, the biological interpretation in this paper is not strong enough, and there are also theories in previous works [1]. Only changing how the quadratic operation is used makes little contribution.
3. There is no formal definition and analysis showing “enhance data correlation” in the title.
[1] Fan, F. L., Dong, H. C., Wu, Z., Ruan, L., Zeng, T., Cui, Y., & Liao, J. X. (2023). One neuron saved is one neuron earned: On parametric efficiency of quadratic networks. arXiv preprint arXiv:2303.06316.
Technical Quality: 2
Clarity: 2
Questions for Authors: Will incorporating quadratic neurons into multiple layers continually increase the performance? How is the suitable layer candidate identified?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors discussed limitations in Appendix D.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
1. Thank you for your insightful feedback, and we apologize for any lack of clarity in our presentation.
(1.1) Our main analysis aims to emphasize that quadratic neurons inherently capture second-order information from training samples, which enhances their generalization capabilities compared to conventional neurons.
To support this, we provide a theoretical proof demonstrating how quadratic neurons capture second-order information to achieve the optimal result in binary classification between two normal distributions characterized by their first- and second-order moments. Our numerical experiments show that quadratic neurons indeed capture this information effectively while conventional neurons cannot.
We generalize the theorem to multi-class classification tasks (Theorem 5). We also perform numerical experiments on MNIST to verify that quadratic neurons capture second-order information in this case (Appendix B, Figure 5).
In more complex scenarios, such as CIFAR-10 and ImageNet with deep CNNs, deriving theoretical results becomes challenging. However, the previous theoretical analyses provide insight into why quadratic neurons perform better. We therefore conduct an ablation study in Section 4.3.1 showing that quadratic neurons indeed capture second-order information for classification in the cases where they perform better, further supporting our conclusions.
(1.2) Thank you for your observation. Theorem 1 elucidates a possible analytical solution (existence of a critical point) for the gradient flow algorithm, and Theorem 2 establishes the uniqueness of that critical point under certain assumptions. Under these conditions, it can be theoretically proven that if the gradient flow algorithm eventually converges, it will drive the parameters to this unique critical point [1][2]. We will clarify this in the updated version. The numerical results presented in Section 3.1 (Figure 1, with sufficient training samples) further confirm the correspondence between the theoretically identified critical point and the numerically converging one.
[1] Quarteroni, Alfio, Riccardo Sacco, and Fausto Saleri. Numerical mathematics. Vol. 37. Springer Science & Business Media, 2006.
[2] Ambrosio, Luigi, Nicola Gigli, and Giuseppe Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2008.
(1.3) The superior generalization capability means the ability of models incorporating quadratic neurons to achieve lower generalization error. For instance, we demonstrate this through the smaller generalization error observed in binary classification tasks where the target function is known, as well as higher test accuracy on the MNIST, CIFAR-10, and ImageNet-1 datasets. We will clarify the meaning of “superior generalization capability over traditional neuron” in the updated version.
(1.4) Thank you for the valuable comments, we have provided a biological interpretation of our Dit-CNN in global rebuttal.
2. Thank you for your feedback. We clarify the novelty and significance as follows:
- **Biological Interpretation:** Our channel-wise quadratic model offers a comprehensive biological interpretation, as explained in global rebuttal, whereas other quadratic methods lack such evidence.
- **Efficiency:** Our channel-wise quadratic form demonstrates significantly higher efficiency, utilizing only about one-third of the number of trainable parameters compared to other quadratic methods. This is detailed in Table 2 of the main text, where we also include a comparison with the model referenced in your review ([12] in the main text).
- **Scaling Property:** We have added a visualization of the test accuracy comparison in Figure 2.A of the rebuttal materials. This shows that our model consistently improves in accuracy as the model size increases, while other quadratic methods tend to saturate, indicating superior scaling property for our model.
- **Focus on Generalization:** Our analysis does not concentrate on the universal approximation properties of networks, as seen in the theorems of the paper you cited. Instead, we aim to explain the advantages of quadratic neurons from the perspective of generalization capability, emphasizing their ability to capture second-order (correlation) information from data distributions. We believe these insights differentiate our work from existing studies on quadratic neurons and contribute to a broader understanding of their practical benefits.
3. Thank you for your helpful feedback. In this paper, "enhanced data correlation" refers to the model's ability to capture second-order information (correlation) effectively, which is verified through our numerical experiments on MNIST, CIFAR, and ImageNet. To clarify, we will change “enhance data correlation” to “capture data correlation” in the updated version.
## Questions
Thank you for your insightful questions.
- "Will incorporating quadratic neurons into multiple layers continually increase the performance?"
Due to limited time and resources, we have only experimented with incorporating quadratic neurons into multiple layers on CIFAR-10, which resulted in a gradual increase in performance. However, our observations suggested that replacing only one layer is often the best choice for balancing computational cost and performance.
- "How is the suitable layer candidate identified?"
We discuss this in Section 4.3.2 of the main text. Suitable layer candidates are identified based on the trade-off between computational cost and performance improvement. In our practice, we initially incorporate quadratic neurons into each layer of a fixed architecture, such as ResNet, to evaluate performance in smaller models like ResNet-20. Once a suitable layer for incorporating quadratic neurons is identified, the same replacement is applied to deeper models, such as ResNet-110.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed responses. I acknowledge that the empirical results (efficiency, scaling property, etc.) are good, which I have listed as strengths, while my concerns remain that the theoretical presentation and claims for biological plausibility are not satisfying. Considering the theoretical part, the authors emphasize that they focus on generalization. However, there is a large gap between "capture second-order information (correlation)" and generalization in general settings, and there is no formal analysis for generalization error. Considering biological plausibility, the quadratic term in Dit-CNN is element-wise spatially rather than considering convolution with receptive field, which means some synapses are used for quadratic integration while some are only summed. This discrepancy is still not explained. I understand that these parts may not be actually necessary for brain-inspired algorithms with good performance, but if the authors claim this as an important part of the contribution, I think the current presentation is not complete and satisfying. So I keep my score for the current version of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful feedback and for taking the time to review our rebuttal. We would like to further clarify your concerns.
Regarding the analysis, our theoretical proof demonstrates a clear computational advantage of quadratic neurons: they capture second-order information, which distinguishes them from traditional neurons. Having observed a significant improvement in test accuracy in more complex scenarios, we confirmed that quadratic neurons effectively capture second-order information in these contexts as well. The capability to capture second-order information therefore likely plays an important role in the small generalization errors observed in our models.
Regarding biological plausibility, we had considered quadratic interactions between convolutions and encountered training challenges such as gradient explosion. Alternatives, such as considering both spatial-wise and channel-wise quadratic interactions, may significantly increase the computational cost (e.g., a $3\times 3$ convolution with both spatial-wise and channel-wise quadratic interactions leads to an 81-fold increase in FLOPs compared to our Dit-CNN). Therefore, we proposed Dit-CNN, which achieves good performance while maintaining practical computational costs. We believe we are the very first to explore quadratic methods based on the dendritic bilinear integration rule. Our Dit-CNN is inspired by the visual pathways in the neural system, as illustrated in Figure 1.C of the rebuttal materials (a modification with good performance and low computational cost).
We hope these explanations address your concerns. Once again, thank you for your patience and for raising such constructive points. | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for their high-quality reviews. To address the reviewers' concerns, we have added more experiments and provided additional details about the model. We hope that our responses effectively address the reviewers' feedback.
We have provided a comprehensive biological interpretation of our Dit-CNN, as illustrated in Figure 1.C of the rebuttal materials. Our Dit-CNN is inspired by neural networks in the visual system. For example, different types of cone cells encode various color (channel) information, and retinal ganglion cells receive inputs from multiple types of cone cells [1], the responses can be modeled as having receptive fields (convolutional kernels) related to different color channels ($w_1 *x_1, w_2 *x_2, w_3 *x_3 $). When multiple channel inputs are present, traditional CNNs simply linearly sum the corresponding responses. In contrast, neurons integrate these inputs with an additional quadratic term based on the dendritic bilinear integration rule. This approach leads to the formulation of our Dit-CNN after simplification. We believe this integration reflects a more biologically plausible mechanism compared to conventional methods.
[1] Kandel, Eric R., et al., eds. Principles of neural science. Vol. 4. New York: McGraw-hill, 2000.
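Our reading of the channel-wise integration described above can be sketched numerically. This is an illustrative NumPy sketch, not the authors' implementation; the function name, shapes, and the pairwise form of the quadratic term are assumptions based on the description of per-channel responses $w_c \cdot x_c$ being combined with an additional quadratic term.

```python
# Sketch of channel-wise quadratic integration: per-channel responses
# r_c = w_c * x_c are summed linearly (as a conventional neuron would),
# plus second-order cross-channel interactions r^T Q r from the assumed
# bilinear rule. With Q = 0 this reduces to ordinary linear summation.
import numpy as np

def dit_integrate(x, w, q):
    """x: (C,) channel inputs; w: (C,) per-channel weights; q: (C, C) quadratic coefficients."""
    r = w * x                  # per-channel responses
    linear = r.sum()           # conventional linear summation
    quadratic = r @ q @ r      # second-order interactions across channels
    return linear + quadratic
```

Setting `q` to zero recovers the traditional CNN-style summation, which is why the ablation of the quadratic term in the paper isolates the contribution of second-order information.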
Pdf: /pdf/5c2c68fc5567838635444c1a705ac99553887afe.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper theoretically demonstrates that quadratic neurons inspired by dendritic computing inherently capture correlation within structured data. The quadratic rules are integrated with convolutional networks to establish the so-called Dit-CNNs. Experiments on the CIFAR and ImageNet datasets demonstrate competitive performance of the proposed model.
Strengths: The paper provides a theoretical demonstration of the existence of critical points for the parameters of the quadratic neuron. An illustrative binary classification experiment is constructed. The quadratic rules are further verified with an MLP network on a few-shot MNIST learning task and with convolutional networks on CIFAR and ImageNet classification tasks. These experiments demonstrate competitive performance of the proposed model relative to other related methods.
Weaknesses: 1. The paper demonstrates that critical points exist for the parameters of the quadratic neuron; however, it lacks experimental proof that the training process actually drives these parameters towards the critical points. It would be more convincing if the authors could provide such evidence for datasets of different scales.
2. Compared to previous quadratic dendritic methods, the improvement in accuracy of the proposed model is not so significant, I suggest that the author provide comparisons on computation cost to further demonstrate the advantage of the proposed model.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In the ablation study 4.3.1, table 4, why performance dropping level are significantly different for CIFAR and ImageNet?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: See weaknesses and questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
1. Thank you for your suggestion. We have indeed demonstrated the uniqueness of critical points under certain assumptions (Theorem 2 in the main text). Under these conditions, it can also be theoretically proven that if the gradient flow algorithm converges, it will drive the parameters to this unique critical point [1][2]. We will clarify this in the updated version. The numerical results presented in Section 3.1 (Figure 1 with sufficient training samples) further confirm the correspondence between the theoretically identified critical point and the numerically converging one.
For more general cases, such as MNIST, Figure 5 in Appendix B shows that the post-training results are consistent with the theoretically identified critical point established in Theorem 5. For deeper networks and more complex datasets like CIFAR-10 and ImageNet, computing the critical points analytically becomes challenging.
2. Thank you for your valuable feedback. In Table 2 of the main text, we compare our Dit-CNNs with other quadratic methods. Our models show improvements in accuracy while utilizing only about one-third of the number of trainable parameters, highlighting its efficiency.
Additionally, following your suggestion, we visualized the test accuracy in Figure 2.A of the rebuttal materials, which demonstrates that our model consistently improves in accuracy as the model size increases, while other quadratic methods tend to saturate. This indicates superior scaling properties for our approach.
[1] Quarteroni, Alfio, Riccardo Sacco, and Fausto Saleri. Numerical mathematics. Vol. 37. Springer Science & Business Media, 2006.
[2] Ambrosio, Luigi, Nicola Gigli, and Giuseppe Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2008.
## Questions
Thank you for your insightful question. There are two possible reasons accounting for the difference: architecture and dataset (i.e., ResNet for CIFAR and ConvNeXt for ImageNet). To investigate the reason, we have trained Dit-ResNet on ImageNet-1K. After training, when we omitted the quadratic terms, the test accuracy dropped to 0.1%, which aligns with the Dit-ResNet results on CIFAR-10. So the key factor contributing to the differing performance drop levels in the ablation study is the use of entirely different network architectures for CIFAR and ImageNet. | null | null | null | null | null | null |
Partial Transportability for Domain Generalization | Accept (poster) | Summary: This paper tackles the problem of transportability in domain generalization. Multiple datasets are provided together with graphical information stipulating which causal mechanisms are shared across domains and which mechanisms will remain invariant in the target domain. The goal is then to estimate the error committed by a given classifier in the target domain. More specifically, the authors propose a way to estimate the worst-case error of the classifier given the observed distributions as well as the graphical information. A first result shows how the problem can be simplified using canonical SCMs, and a second result shows how Neural Causal Models (NCMs) can be used to estimate this bound. Furthermore, an approach named Causal Robust Optimization (CRO) is proposed to learn a classifier that minimizes the worst-case risk. The experiments evaluate the NCM approach for estimating the upper bound on the risk of various classifiers, but do not cover the CRO approach for learning a robust classifier.
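To fix intuition about the quantity being bounded, here is a toy sketch (ours, not the paper's method) of the worst-case error of a fixed classifier over a small, explicitly enumerated set of candidate target distributions; `risk`, `clf`, and the candidate lists are hypothetical names introduced for illustration only.

```python
# Toy worst-case risk: given a set of candidate target distributions
# (here, discrete lists of (probability, x, y) triples) consistent with
# some constraints, the worst-case error of a fixed classifier is the
# maximum misclassification probability over the set.
def risk(classifier, dist):
    """dist: list of (prob, x, y); returns P[classifier(x) != y]."""
    return sum(p for p, x, y in dist if classifier(x) != y)

clf = lambda x: int(x > 0)  # a fixed threshold classifier

candidates = [
    [(0.5, -1, 0), (0.5, 1, 1)],                # benign target: 0 error
    [(0.3, -1, 0), (0.3, 1, 1), (0.4, 1, 0)],   # shifted target: 0.4 error
]
worst_case = max(risk(clf, d) for d in candidates)  # 0.4
```

The paper's contribution, as summarized above, is to compute such a supremum not over an explicit list but over all SCMs compatible with the observed distributions and graphical assumptions.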
Strengths: - The paper reads nicely (even for a fairly technical paper) and gives a good background on SCMs and graph terminology for the non-expert. The choice of notation is good and remains consistent throughout the paper.
- The problem appears to be very important, and the approach to tackle it seems principled.
- The approach appears to be novel and interesting. However, I don't have a good knowledge of the literature on transportability and domain generalization and thus my judgment's credibility is limited.
- Many examples are provided to make the discussion more concrete, I enjoyed that.
Weaknesses: - Some theoretical results, in their current form, seem unlikely to be true. Specifically in Theorem 2, I was surprised to see that this result does not mention anything about the expressive power of the NCM: what if the neural networks do not have enough capacity to express the causal mechanisms? Clearly what you would end up with wouldn’t be an upper bound, right? I was also surprised to see that the optimization problem is formulated in a finite-sample fashion, but the conclusion concerns the actual expectation (i.e., not finite-sample); this sounds unlikely, no? Don’t you need to take a limit as the dataset grows for this bound to hold?
- Confusion surrounding canonical SCM: At line 181, it is mentioned that [41] showed that every SCM M can be written in a canonical form (with discrete exogenous variables)... However, after quickly skimming through the paper, I could not find something resembling that statement. What am I missing? Overall I thought the discussion on canonical SCM was confusing. I’m left unsure as to which type of SCM can be cast to a canonical form (definition 5) and which paper showed this fact.
- Some sections were hard to parse. Mainly lines 270 to 289, including Algorithm 1. I'm not sure I got the point. I was following up to that point. I give some minor suggestions to improve clarity below.
- Algorithm 2 (Causal Robust Optimization) is interesting, but it looks quite costly. Do the authors believe it could be made applicable? I was a bit disappointed to see it was not implemented in the experimental section. It would be nice to have experiments for CRO. Could this approach be applied to the Coloured MNIST dataset?
- The background is very well written, but the end of the paper felt a bit less polished. Since these are the most important part, I believe they deserve more space. For instance, I had a hard time understanding the experiments with colored MNIST.
Minor:
- Example 1: Why using arrows for assignment? The symbol “:=” was used earlier…
- Definition 3: Should S_{ij} be in fact S_i here? (line 125)
- Example 2: Long sequences of “if”s would benefit from some punctuation. The “circled plus sign” is an XOR right?
- Definition 4: Might be helpful to spell out \mathbb{M}_0 completely in (4), since \mathcal{M}^*_0 appears in the expectation but is not defined (the reader has to deduce that this is the target SCM belonging to \mathbb{M}_0)
- Definition 5: might be useful to end with “where h_V^{(r_V)} is a function from supp_{pa_V} to supp_V”. I’m actually not certain the very last sentence adds something to the definition, since it is already obvious that f_V(pa_V, r_V) is a function of pa_V for all r_V… I’m also a bit confused by this sentence: “The set of endogenous variables V is discrete.” What does it mean for a set to be discrete? Do you mean the endogenous variables are discrete? Also, I’m a bit confused by the way the word “support” is used here. Usually it means the set of values for which the random variable has positive probability (at least for discrete variables that’s the definition), but here I believe this is not how “support” is used, correct?
- Theorem 1: I believe the first constraint should not include the case i = *, no? I thought P* was not observed. In fact, you don’t have the case i=* on line 248 and in Theorem 2
- Figure 1 is unclear, what does the color mean?
- Line 353: Bayesian inference procedure?
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I didn't find any discussion of the limitations of the approach in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and effort spent reviewing our paper and the positive assessment that our paper “reads nicely”, that “the problem appears important” and that our approach is “principled”. Please find below our response to the review, and let us know if you have any further concerns.
> **Q 1.** *”Theorem 2, expressive power of the NCM. Don’t you need to take a limit as the dataset grows for this bound to hold?”*
Thank you for raising this point. The claim of Theorem 2 holds in the limit of data from the source domains, and increasing expressivity of the feed-forward Neural Networks that parameterize the NCMs. This fact was more clearly stated in the proof of Theorem 2 (line 610 and 615) and we will correct the statement in the main body of the paper, thank you for pointing this out.
As an additional observation, we could note that with finite data (provided that the Neural Networks are expressive enough) we would achieve a valid upper bound, since the set of SCMs compatible with finite data is a superset of the set of SCMs compatible with the underlying distributions. The important distinction, however, is that with finite data the bound will not be tight in general.
> **Q. 2.** *”Confusion surrounding canonical SCM: At line 181, it is mentioned that [41] showed that every SCM M can be written in a canonical form (with discrete exogenous variables)... However, after quickly skimming through the paper, I could not find something resembling that statement. What am I missing? Overall I thought the discussion on canonical SCM was confusing. I’m left unsure as to which type of SCM can be cast to a canonical form (definition 5) and which paper showed this fact.”*
Thank you for your careful reading. The reference [41] should be labeled as [42] referring to Zhang et al. “Partial counterfactual identification from observational and experimental data”. Specifically, Theorem 2.4 shows that for any SCM defined over a set of discretely-valued (and finite) endogenous variables, there exists a canonical SCM (as defined in our Def. 5) that is equally expressive, i.e., it generates the same observational, interventional, and counterfactual distributions.
To answer your question more directly: any SCM over discretely-valued (and finite) endogenous variables can be cast as a canonical model without loss of generality. Having said that, and since this is a fundamental part of the background, we will add a new appendix introducing these results to make the presentation more self-contained.
> **Q 3.** *”Some section were hard to parse. Mainly lines 270 to 289, including Algorithm 1. I'm not sure I got the point. I was following up to that point. I give some minor suggestions to improve clarity below.”*
Lines 270 to 289 refer to Example 4, which we use to convey the fact that joint distributions $P^*(\boldsymbol V)$ may be factorized into several terms (as in line 217). Given the assumptions encoded by the selection diagram and the source data, some of those terms may be point identifiable and evaluated uniquely through the source data, i.e., no bounding necessary, while others may not be evaluated uniquely. More broadly, the point of Example 4 is to show that the parameter space can be cleverly decoupled and the computational cost of the optimization problem can be significantly reduced, since only a subset of the conditional distributions need to be parameterized and optimized. This component of our work is a contribution over the existing work by Xia et al. 2021 ([35] in the paper), where entire SCMs are parameterized by neural networks rather than only the non-identifiable components.
This observation motivates Alg. 1, which (1) decomposes the query, (2) computes the identifiable components, and (3) parameterizes the components that are not point identifiable, and then proceeds with NCM optimization. We have added a more thorough discussion and illustration of Alg. 1 in the Appendix.
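To make the decomposition idea concrete, here is a minimal sketch of step (1). The `factors` list and the `identifiable` predicate are hypothetical stand-ins (in the actual algorithm, identifiability would be determined from the selection diagram and source data); this is not the paper's implementation of Alg. 1:

```python
def decompose_query(factors, identifiable):
    """Split a factorization of P*(V) into conditional terms that can be
    evaluated directly from source data and terms that must be
    parameterized by the NCM and optimized."""
    fixed = [f for f in factors if identifiable(f)]      # point identifiable: no bounding needed
    free = [f for f in factors if not identifiable(f)]   # parameterize with neural networks
    return fixed, free

# Hypothetical query in which only P(Y|X) is non-identifiable:
fixed, free = decompose_query(["P(X)", "P(Y|X)", "P(Z|Y)"],
                              identifiable=lambda f: f != "P(Y|X)")
```

This is the sense in which the parameter space is decoupled: only the `free` terms receive neural parameters, in contrast to parameterizing the entire SCM.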
> **Q 4.** *”Algorithm 2 (Causal Robust Optimization) is interesting, but it looks quite costly. Do the authors believe it could be made applicable? I was a bit disappointed to see it was not implemented in the experimental section. It would be nice to have experiments for CRO. Could this approach be applied to the Coloured MNIST dataset?”*
We appreciate the suggestion and we have now implemented CRO and conducted experiments on synthetic examples (Example 2 and 3) and Colored MNIST. Please find all performance details in the global rebuttal and attached pdf. We have observed CRO to terminate in less than 4 iterations in all experiments (3 iterations for Colored MNIST) which we believe makes it very practical even in higher-dimensional problems such as image classification.
Indeed, we did not provide any guarantees for the number of iterations it takes for CRO to terminate; the concern about efficiency is quite valid from the theoretical perspective and a deeper analysis would be interesting.
> **Q 5.** *”The background is very well written, but the end of the paper felt a bit less polished. Since these are the most important part, I believe they deserve more space. For instance, I had a hard time understanding the experiments with colored MNIST.”*
We appreciate the comment. With the additional experiments on CRO and additional space in the camera-ready version of the paper, we will better describe the setting and results of our experiments. Please find a recap of CRO and the colored-MNIST example in the global rebuttal.
> **Q 6.** *”Minor comments.”*
Thank you for pointing these out. We will make sure to clarify / correct as required.
> **Q 7.** *"I didn't find any discussion of the limitations of the approach in the paper."*
Broader impact and limitations are discussed in Appendix B.
---
Rebuttal Comment 1.1:
Comment: With the discussion period coming to its end, we were wondering whether you had a chance to check our rebuttal. We hope to have answered all concerns to your satisfaction. If not, please don't hesitate to get in touch if there is any concern we could still help to clarify.
Thank you again for your time and attention. | Summary: The paper extends the formulation of canonical models to encode the constraints for the transportability tasks. Then it adapts Neural Causal Models for the transportability task and introduces an iterative method Causal Robust Optimization to find a predictor with the best worst-case risk.
Strengths: - The paper presents a unified framework for addressing transportability problems using canonical models and neural causal models. These transportability problems are broad and highly relevant in the field of machine learning.
- I appreciate the examples illustrated throughout the main text, as they greatly help the reader understand the definitions and intuitions.
- The theorems are solid and the experiments align well with the theories.
Weaknesses: Disclaimer: I am not familiar with this field, so my observations and suggestions are based on my general understanding on the paper.
- Some definitions are not provided, making it difficult for readers to understand certain parts of the paper. For example line 133, 134 $\bigoplus$.
Minor:
1. Page 1 line 9, “such as”.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What does $\bigoplus$ mean?
2. Could you provide some experimental results for CRO and also compare it with other algorithms?
Confidence: 1
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive assessment of our work, thank you. The following response answers each question sequentially. We would be happy to expand on it if needed.
> **Q 1.** *”Some definitions are not provided, making it difficult for readers to understand certain parts of the paper. For example line 133, 134 (xor operator)”*
The $\bigoplus$ denotes the xor operator: $A\bigoplus B$ evaluates to 1 if $A\neq B$ and evaluates to 0 if $A=B$.
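In code, this is the standard bitwise XOR, e.g., Python's `^` operator on binary values:

```python
# Truth table for A xor B on binary values: the result is 1 exactly when A != B
for a in (0, 1):
    for b in (0, 1):
        print(a, b, a ^ b)
```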
> **Q 2.** *”Could you provide some experimental results for CRO and also compare it with other algorithms?”*
Yes, thank you for this suggestion. We have added experimental results for CRO for all experiments, including for the synthetic simulations (Examples 2 and 3) and for Colored MNIST. This analysis is provided in more detail in the global rebuttal and attached pdf, with figures describing the learning process of CRO as well as explicit comparisons with baseline algorithms. As a summary, recall that the CRO algorithm uses Neural-TR as a subroutine to iteratively enhance a classifier and arrive at the best worst-case classifier. In our experiments, CRO converges in 4 iterations or less. In both the simulated and the Colored MNIST experiment we verify in every case that CRO achieves the best possible worst-case performance (known in these examples by construction), as suggested by Thm. 3.
We also note that the poor performance of the baseline algorithms should not be directly compared to that of CRO, since CRO has access to background information that cannot be communicated to the baseline algorithms. In this sense, CRO can be viewed as a meta-algorithm that operates with a broad range of assumptions encoded in a certain format (i.e., the selection diagram), while the baseline algorithms lack this capacity, and therefore, CRO is able to find the theoretically optimal classifier for domain generalization while the baseline algorithms fail to achieve that and perform poorly in the worst-case scenario.
---
Rebuttal Comment 1.1:
Title: After Rebuttal
Comment: Thanks to the authors for their reply. I appreciate the experiments they add. I will keep my score. | Summary: This paper studies the problem of domain generalization through the lens of partial transportability, and introduces some new results for bounding the value of a functional of the target distribution, given data from source domains and assumptions about the data generating mechanisms. Authors adapt existing parameterization schemes such as Neural Causal Models to encode the structural constraints necessary for cross-population inference. Some experiments and examples are also provided.
Strengths: - Introducing partial transportability and canonical SCM to domain generalization is novel.
- The theoretic results are sound and have complete proofs. They are potentially useful.
Weaknesses: - The paper organization and presentation seem complicated to convey the idea of the paper (see below).
- It is not clear how this bound can in turn help improve general DG problems.
Technical Quality: 3
Clarity: 2
Questions for Authors: Based on my understanding, authors consider bounding the queries of generalization errors in non-transportable settings, by obtaining a bound for the worst-case losses. Authors use the formulation of canonical models as a way of encoding constraints to derive such a bound.
1. I believe there are existing works that assume some prior information regarding target domain and then derive bounds for the target loss in DG problems. If my understanding is correct, then I would like to treat the current work as **an alternative way** of encoding information about the target domain. Then the current way of presentation and organization indeed make it complicated to understand the paper. In particular, I feel that examples 1 and 2 somehow distract reader's attention, while it is better to directly formulate the central problem of the paper.
2. Other questions:
1. line 125: what does $j$ indicate in the notation $S_{ij}$?
2. definition 4: is there any constraint on $q_{max}$ ? there may exist trivial but useless upper bound for some loss functions.
3. Suggest give more introductions about canonical SCM, to make the paper more self-contained.
3. about Colored MNIST experiment: I don't get what purpose this experiment serves. Do you want to show the proposed bound is correct? However, there is no mention of the value of $R_{P*}(h)$ here.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review, we appreciate the positive feedback. In the following response we address comments and concerns pointed out by the reviewer. Please let us know if we can help clarify any part of it.
> **Q 1.1.** *The paper organization and presentation seem complicated to convey the idea of the paper (see below). I believe there are existing works that assume some prior information regarding target domain and then derive bounds for the target loss in DG problems. If my understanding is correct, then I would like to treat the current work as an alternative way of encoding information about the target domain.*
The current work (and the field of causal transportability as a whole) can be interpreted as a more general version of the transfer learning problem that operates using qualitative information about the commonalities and differences among the causal mechanisms of different domains (instances are [3,4,9,16,22] in the paper). As the reviewer suggests, other lines of research make different types of assumptions. One alternative is invariance learning methods, mentioned in the introduction, that exploit statistical invariances to induce robustness guarantees. We make sure to include more details in the related work section of the Appendix.
> **Q 1.2.** *Then the current way of presentation and organization indeed make it complicated to understand the paper. In particular, I feel that examples 1 and 2 somehow distract reader's attention, while it is better to directly formulate the central problem of the paper.*
Please notice that introducing Def. 4 earlier in the manuscript is challenging since it involves notions of transportability and partial identification. We believe that Examples 1 and 2 are useful for contextualizing the problem for the general reader and for illustrating the definitions of domain discrepancies, selection diagrams, etc., setting a common ground for discussing the challenge of the domain generalization problem.
> **Q 1.3** *”It is not clear how this bound can in turn help improve general DG problems.”*
Thank you for this question.
Interpreting your question literally: “How does the bound returned by Neural-TR in Alg. 1 help improve the task of learning a predictor with robustness guarantees?” The bound improves general DG in the sense that Neural-TR is used explicitly by CRO in Alg. 2 that provably returns a predictor with optimal worst-case performance. In other words, finding the worst-case performance of predictors gives us a basis for comparing them, and eventually finding the one with the minimum worst-case risk, through CRO.
Interpreting your question at a higher level: “How practical is CRO and its assumptions in real-world problems of domain generalization?” We could start by noting that to solve any instance of the domain generalization problem, some notion of relevance between the target domain and the source domains is required, since if domains are allowed to be arbitrarily different, transfer learning would be impossible. The advantage of CRO in practical problems is that it is completely non-parametric, making no assumptions on the distributional family of the variables involved. Moreover, selection diagrams only require the qualitative specification of commonalities and discrepancies across domains without needing to specify the underlying functional form of the domains.
In case we didn’t fully grasp the intended meaning of “general DG”, could you please rephrase your question?
> **Q 2.** *”Other questions: line 125: what does j indicate in the Sij notation? definition 4: is there any constraint on q_max? there may exist trivial but useless upper bound for some loss functions. Suggest give more introductions about canonical SCM, to make the paper more self-contained.”*
$S_{ij}$ should read $S_i$, apologies for the typo. Def. 4 does not constrain $q_{max}$ a priori. Indeed, trivial upper bounds exist for all problems, e.g., fix $q_{max}$ to be the upper end-point of the range of the loss function. Note that $q_{max}$ obtained by the Neural-TR algorithm is guaranteed to be a tight bound in the limit, meaning that there exists no better (valid) upper bound, as shown in Thm. 2.
> **Q 3.** *“about Colored MNIST experiment: I don't get how this experiments serves. Do you want to show the proposed bound is correct? However, there is no mention of the value of R_P* here.”*
The motivation behind the Colored MNIST example is to compute the worst-case risk of different classifiers, as characterized by the source data and the selection diagram; these classifiers in our experiments are ERM and IRM, but the same analysis applies to any arbitrary classifier. The upper-bound for the risk (i.e., the worst-case risk) that is obtained with Neural-TR procedure, as suggested by Thm. 2, is asymptotically tight.
In our analysis, we report in line 343 the value $R_{P^*}(h)$ of the classifiers $h:=\\{h_{ERM},h_{IRM}\\}$ on the worst-case $P^*$ found by Neural-TR. In addition, we supplement this analysis with an evaluation of CRO's classifier on all datasets. Those experiments are described and reported in the global rebuttal. In short, our experiments on evaluating CRO for simulated examples and on the Colored MNIST examples highlight the fact that CRO's output has the best worst-case risk among all classifiers, and its performance is contrasted to ERM and IRM.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response that mostly addresses my concerns.
- regarding "general DG problem": thanks for pointing this out and I should not use "general" here. To be accurate, I mean these problems or datasets that appear in many deep learning papers, e.g., ColoredMNIST, VLCS, etc. (or check the paper: In Search of Lost Domain Generalization, ICLR 2021). How does the result of this paper improve empirical performance on these tasks?
- regarding $q_{max}$ in Def. 4: if there is no further constraint, then there are many functions (e.g., every bounded function for some loss functions) that are partially transportable. Does this matter here?
---
Rebuttal 2:
Comment: > "regarding "general DG problem": thanks for pointing this out and I should not use "general" here. To be accurate, I mean these problems or datasets that appear in many deep learning papers, e.g., ColoredMNIST, VLCS, etc. (or check the paper: In Search of Lost Domain Generalization, ICLR 2021). How does the result of this paper improve empirical performance on these tasks?"
Thank you for the clarification. We can give a concrete answer for the Colored MNIST dataset, where CRO entails a classifier with optimal worst-case generalization performance subject to variation in the correlation between the image color and label. The global rebuttal provides a description of that experiment (first bullet point and attached PDF), with figures describing the learning process of CRO as well as explicit comparisons with baseline algorithms.
More generally, recall that CRO can be viewed as a meta-algorithm that is designed to exploit background information on the discrepancies and invariances between source and target domains, and guarantees optimal classifiers subject to these constraints. In this sense, if, for a particular DG task, background information can be specified (in the form of selection diagrams), CRO can be applied and will deliver a classifier that achieves best worst-case performance, as in the Colored MNIST experiment. Note, however, that background information might not necessarily be available for an arbitrary task defined in "In Search of Lost Domain Generalization, ICLR 2021". Out-of-the-box comparisons with more data-driven baselines, such as ERM and IRM, should be considered with care as these algorithms exploit different sets of assumptions that may not be appropriate in all settings.
> "regarding $q_{max}$ in Def. 4: if there is no further constraint, then there are many functions (e.g., every bounded function for some loss functions) that are partially transportable. Does this matter here?"
Our objective is to infer the tightest upper bound $q_{max}$ that satisfies the constraints (observational equivalence and structural invariance) following the definition of partial transportability. This notion is useful as it characterizes statements such as "the query is partially transportable with upper bound $q_{max}$" even though bounded functions are, by definition, already bounded from above. Considering the bounds in the context of worst-case performance of classifiers for the DG task, what you mention is correct since, with symmetric 0-1 loss, a trivial upper bound for the risk of all classifiers is $R_{P^*}(h) \leq 1$; however, this bound is not informative or useful for classification in the DG problem. The bounds achieved by the Neural-TR procedure are the tightest valid bounds considering the source data and the domain knowledge, and provide a basis for comparing the DG performance of candidate classifiers.
---
Rebuttal Comment 2.1:
Comment: Thanks for further clarifications. I maintain my score and am happy to see this paper accepted. | null | null | Rebuttal 1:
Rebuttal: In this global rebuttal, we take the opportunity to discuss experimental results of CRO using the figures in the attached pdf as a support.
**Summary.** In the domain generalization task, the source domains $\mathcal{M}^1,\mathcal{M}^2,\dots,\mathcal{M}^K$, and the target domain $\mathcal{M}^*$ must be related for any learning to take place. The relatedness of the domains is expressed via a causal graph called a selection diagram $\mathcal{G}^\Delta$ that encodes assumptions about the causal structure within each domain, as well as the match/mismatch of mechanisms across the source and target domains. The source data $\mathbb{P} = \\{P^1,P^2,\dots,P^K\\}$ together with the selection diagram $\mathcal{G}^\Delta$ characterize the set of SCMs that are compatible with the target domain. Through canonical (Thm. 1) and neural (Thm. 2) parameterization, we implement procedures that obtain an upper bound w.r.t. $\mathbb{P},\mathcal{G}^\Delta$ for an arbitrary target quantity; a special case of our interest is the risk of a given classifier $h$ under the target distribution, denoted by $R_{P^*}(h)$. This risk upper bound is tight in the limit of data and expressivity of the parameterization, and represents the worst-case performance of the classifier at hand w.r.t. $\mathbb{P},\mathcal{G}^\Delta$. Neural-TR enhances the optimization procedure by cleverly decomposing the optimization objective and computing the terms that are readily available from the source data, to reduce parameter size and increase sample efficiency. Further, we introduce the CRO algorithm that uses Neural-TR as a subroutine and searches for a classifier with small worst-case risk. We prove that CRO terminates and outputs a classifier ideal for the domain generalization task, which has the smallest worst-case risk across all classifiers. Below, we describe in detail how CRO operates.
**CRO.** We start with a random classifier; one might start with an ERM warm-start or any classifier of choice. At the first iteration, we use Neural-TR to compute the worst-case performance of the classifier at hand, witnessed by an NCM that entails the source data $\mathbb{P}$ and induces the selection diagram $\mathcal{G}^\Delta$. Next, we draw samples $D^{*1}$ from this NCM, and add them to a collection of datasets $\mathbb{D}$ that we maintain through the CRO procedure. Finally, we update the classifier at hand to be the minimizer of the maximum [empirical] risk over the datasets in $\mathbb{D}$. Since $\mathbb{D} = \\{D^{*1}\\}$ has only one dataset, the classifier updates to be the ERM of $D^{*1}$. We repeat the above process; at the second iteration, we use Neural-TR again to find yet another NCM that is compatible with $\mathbb{P}, \mathcal{G}^\Delta$ while it incurs the worst risk on the classifier at hand. We draw samples $D^{*2}$ from this NCM, and add them to a collection of datasets $\mathbb{D}$. Finally, we update our classifier to be the minimizer of the maximum [empirical] risk over the datasets $\mathbb{D} = \\{D^{*1}, D^{*2}\\}$. The process continues util the maximum risk witnessed by datasets in $\mathbb{D}$ converges to the true worst-case risk witnessed by Neural-TR.
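The loop above can be sketched as a cutting-plane style procedure. In the toy sketch below (our illustration, not the paper's code), a finite set `thetas` stands in for the SCMs compatible with $\mathbb{P}, \mathcal{G}^\Delta$, the inner maximization stands in for Neural-TR, and collecting the worst-case `theta` replaces sampling a dataset from the witnessing NCM:

```python
def cro(candidates, thetas, risk, max_iter=20, tol=1e-9):
    """Toy stand-in for the CRO loop: iteratively add the worst-case
    scenario for the current classifier, then re-fit a min-max classifier
    over all scenarios collected so far."""
    h = candidates[0]  # arbitrary initial classifier
    collected = []     # plays the role of the dataset collection D
    for _ in range(max_iter):
        worst = max(thetas, key=lambda t: risk(h, t))  # Neural-TR stand-in
        collected.append(worst)                        # "draw D^{*t}" from the witness
        # min-max update over the scenarios collected so far
        h = min(candidates, key=lambda c: max(risk(c, t) for t in collected))
        # terminate once the collected max risk matches the true worst case
        if max(risk(h, t) for t in collected) >= max(risk(h, t) for t in thetas) - tol:
            break
    return h
```

For instance, with candidate classifiers $\{0, 1, X, \neg X\}$ and target distributions indexed by $p = P(Y = X)$ ranging symmetrically around 0.5, the loop settles on a constant classifier with worst-case risk 0.5 after two iterations.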
In the attached pdf, you can find:
- **Figure 1: CRO on Colored-MNIST.** In section 5.2 we discussed Colored-MNIST, an instance of the domain generalization task where the learner must classify the digits based on their colored image. The relationship between the color and the label is prone to change across the domains, and classifiers that rely on the color feature fail to generalize. In the two source domains, we intentionally set a high correlation between the color and the digit, thus classifiers that seek to optimize risk in the source domains inevitably pick color as a determinant feature. We use this example to illustrate both the training process of CRO as well as the performance of the final classifier $h_{\mathrm{CRO}}$. We find that CRO converges in three iterations to an optimal predictor (in the worst-case sense) that ignores the color of the digit and instead makes a prediction based on the shape of the digit. CRO achieves an error of approximately 0.25, as shown in Figure 1d, which is theoretically optimal in this experiment. For comparison, the worst-case errors of ERM and IRM were evaluated to be 0.9 and 0.6 respectively. We also note that the poor performance of the baseline algorithms should not be directly compared to that of CRO, since CRO has access to background information that cannot be communicated to the baseline algorithms. In this sense, CRO can be viewed as a meta-algorithm that operates with a broad range of assumptions encoded in a certain format (i.e., the selection diagram), while the baseline algorithms lack this capacity, and therefore, CRO is able to find the theoretically optimal classifier for domain generalization while the baseline algorithms fail to achieve that and perform poorly in the worst-case scenario.
- **Figure 2: CRO on Examples 2 & 3.** Examples 2 and 3 are synthetic instances of the domain generalization task that are small enough to be analyzed in fine granularity. In Example 2, three classifiers are considered (Table 1); total feature set, causal feature set, and non-causal feature set. Surprisingly enough, the non-causal feature set yields the best risk in the held-out domain. In Example 3, we considered a simple example with binary $X$ and $Y$. In this example, the considered classifiers are $0,1,X,\neg X$. Using Neural-TR, we compute the worst-case risk of considered classifiers in both examples, as reported in Figures 4a \& 4b in the original manuscript, and 2b \& 2d in the attached pdf. We also run CRO in both examples, and find that it discovers the best worst-case classifier in both examples; non-causal feature set in Example 2 and $\neg X$ in Example 3. Figures 2a \& 2c in the attached pdf demonstrate runs of Neural-TR for the classifier generated by CRO.
Pdf: /pdf/cc1d468f7befcc104160e7d2b03ade470fbfbe9e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Automating Data Annotation under Strategic Human Agents: Risks and Potential Solutions | Accept (poster) | Summary: This paper investigates an intriguing problem regarding how machine learning models evolve during dynamic retraining using model-annotated samples, incorporating strategic human responses. The authors discover that it becomes increasingly likely for individuals to receive positive decisions as the model undergoes retraining, although the proportion of individuals with positive labels may decrease over time. To stabilize the dynamics, the authors propose a refined retraining process. They also examine how these retraining processes can impact algorithmic fairness and find that enforcing common fairness constraints in every retraining round may not benefit the disadvantaged groups in the long term. Experiments conducted on both synthetic and real-world data validate the findings of this study.
Strengths: 1. The paper thoroughly analyzes how humans, acting as strategic agents, adapt their behavior in response to ML systems and how this behavior impacts the retraining process of ML systems. By formalizing these interactions and analyzing their long-term dynamics, the paper provides a theoretical foundation for understanding and predicting these complex interactions. This in-depth analysis helps uncover potential systemic issues and offers concrete theoretical support for improving models.
2. The paper not only identifies potential risks of retraining ML models with model-annotated data but also proposes an improved retraining method using a probabilistic sampler to enhance the quality of model-annotated samples. This method aims to stabilize the dynamics of acceptance and qualification rates, reducing classifier bias. The proposed solution is innovative and practical, helping to mitigate negative impacts in real-world applications.
3. The paper combines theoretical analysis with experiments on semi-synthetic and real data to validate the findings. The experimental results, which show consistent dynamics with theoretical predictions, enhance the credibility and applicability of the research. This approach ensures the reliability of the research findings by providing both theoretical and empirical support.
Weaknesses: 1. While the conclusions of the paper help understand the impact of human strategic behavior on ML systems, many of these conclusions are somewhat intuitive and straightforward. For instance, the increase in acceptance rates and the potential decrease in qualification rates over time are logically reasonable but do not provide particularly new insights. Delving deeper into underlying mechanisms or revealing more complex interactions could make the research more innovative and impactful.
2. The scale of the datasets used in experiments is relatively small, which may not fully capture the complexity of system dynamics in large-scale data environments.
3. The paper primarily focuses on linear models and specific distributions. The research conclusions might not fully apply to non-linear models and complex data distributions.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. We address your questions point by point as follows.
> While the conclusions of the paper help understand the impact of human strategic behavior on ML systems, many of these conclusions are somewhat intuitive and straightforward.
We believe the theoretical results are not intuitive. In section 3, we show that although the decision-maker keeps retraining the classifier, the increasing $a_t$ and decreasing $q_t$ demonstrate that the model becomes more biased (Theorems 3.3 - 3.5). In section 4, we not only present the fairness dynamics of the retraining process but also study the influence of fairness interventions. In particular, Theorem 4.2 reveals that fairness interventions can even prohibit the disadvantaged group from becoming advantaged, which is counter-intuitive.
> The paper primarily focuses on linear models and specific distributions. The research conclusions might not fully apply to non-linear models and complex data distributions.
- Note that the linearity of the decision model is a standard setting in strategic classification (e.g., [1,6,7,11,12,17,25,31,35] in the main paper). In practice, one can first use a non-linear feature extractor to learn embeddings of agents' preliminary features and then apply a linear model to the newly generated embedded features. Given knowledge of both the feature extractor and the model, agents will best respond with respect to the new features, and all our results still hold. The generalization to non-linear settings is also discussed in [13,28] in the main paper and in [Levanon and Rosenfeld, 2021].
- To further validate the above arguments, we provide an additional case study using ACSIncome data [Ding et al., 2021], which contains information on more than $150K$ agents. In this study, the goal is to predict whether a person has an annual income $> 50000$ based on $53$ features such as education level and working hours per week. Specifically, we consider a decision-maker who first learns 2-D embeddings from the $53$ original features using a neural network and then regards the embedding as the new feature. A linear decision model is trained and used on these new features to make predictions. We divide the agents into 2 groups based on their ages. Similar to the credit approval data, we then fit Beta distributions to the 2 groups and verify the monotonic likelihood assumption (Figures 1 and 2 in the attached pdf). We then plot the dynamics of $a_t, q_t, \Delta_t$ for both groups when the systematic bias is either positive or negative. The results show that similar trends still hold for this large dataset (Figure 3 in the attached pdf).
- Finally, it is an interesting direction to extend our setting into non-linear decision policy while the agents can only respond to the classifier with the original features (not the embedding). This setting can be highly intricate because both the decision boundary and the agent best response will vary. We hope our paper can be a starting point and a solid foundation of more future work.
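The two-stage setup described above (a non-linear feature extractor followed by a linear decision model on the embeddings) can be sketched as follows; the weights, dimensions, and threshold here are illustrative placeholders, not the extractor trained in the case study:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2))   # stand-in for a trained non-linear extractor's weights

def embed(X):
    # toy non-linear feature extractor: preliminary features R^5 -> embeddings R^2
    return np.tanh(X @ W)

theta = np.array([1.0, -1.0])     # linear decision model applied to the embeddings

def predict(X):
    # agents who know both embed and theta best respond in the embedding space
    return (embed(X) @ theta >= 0).astype(int)

X = rng.standard_normal((4, 5))   # four agents' preliminary features
labels = predict(X)               # binary decisions
```

Because the extractor is fixed and known to the agents, the best-response analysis carries over to the embedded features unchanged.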
> The scale of the datasets used in experiments is relatively small, which may not fully capture the complexity of system dynamics in large-scale data environments.
- Our paper focuses on strategic classification settings [1,6,7,11,12,17,25,31,35]. According to the previous literature, human strategic behaviors are prevalent in high-stakes domains such as hiring, lending, and college admission, where humans have more incentives to improve their features for favorable outcomes. We believe the datasets adopted in our experiments are sufficiently representative and cover well-known data widely used in strategic classification literature.
- The additional case study illustrated in the above answer uses a larger dataset.
### References
Ding, F., Hardt, M., Miller, J., & Schmidt, L. (2021). Retiring adult: New datasets for fair machine learning. Advances in neural information processing systems, 34, 6478-6490. | Summary: This paper addresses the dynamics of machine learning systems when retrained with model-annotated and human-annotated samples in the presence of strategic human agents. It explores how these dynamics affect various welfare aspects, including those of applicants, decision-makers, and the broader social context. The work emphasizes the potential risks and unintended consequences of retraining classifiers in strategic settings, highlighting the complex interplay between agent strategies and ML system updates.
Strengths: The paper addresses an underexplored aspect of machine learning: the interaction of strategic human agents with ML systems over iterative retraining cycles. The incorporation of both model-annotated and human-annotated data into the retraining process, alongside the strategic adaptations of agents, presents a novel problem formulation.
Weaknesses: 1. The theoretical results depend on assumptions that may not hold in more complex or varied real-world scenarios, and the Gaussian, German Credit, and Credit Approval datasets in the experiment violate some conditions in the theoretical analysis. This limitation may affect the generalizability of the results.
2. It would be beneficial to organize the experiments section of the main paper, providing a more detailed display of results related to the refined retraining process, which is a main contribution of this work.
3. The study primarily examines binary classification scenarios with only two outcome classes (e.g., admitted/not admitted). It would be beneficial if the authors could explore how their findings and methodologies might be applicable to more complex machine learning tasks, such as multi-class classification or ranking problems, which are more common in real-world settings.
Technical Quality: 4
Clarity: 2
Questions for Authors: Please see the weaknesses part.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. We address your questions point by point as follows.
> Assumptions of the theoretical results and the generalizability
- Our paper focuses on strategic classification settings [1,6,7,11,12,17,25,31,35]. According to the previous literature, human strategic behaviors are prevalent in high-stakes domains such as hiring, lending, and college admission, where humans have more incentives to improve their features for favorable outcomes. We believe the datasets adopted in our experiments are sufficiently representative and cover well-known data widely used in strategic classification literature. While the Gaussian, German Credit, and Credit Approval datasets violate some conditions, the experimental results still show similar trends as the theoretical results, demonstrating the robustness and practical value of the theory.
- The linearity assumption of the decision model is a standard setting in strategic classification. In practice, one can first use a non-linear feature extractor to learn embeddings of agents' preliminary features and then apply a linear model to the newly generated embedded features. Given knowledge of both the feature extractor and the model, agents will best respond with respect to the new features, and all our results still hold. The generalization to non-linear settings is also discussed in [13,28] in the main paper and in [Levanon and Rosenfeld, 2021].
- To further validate the above arguments, we provide an additional case study using ACSIncome data [Ding et al., 2021], which contains information on more than $150K$ agents. In this study, the goal is to predict whether a person has an annual income $> 50000$ based on $53$ features such as education level and working hours per week. Specifically, we consider a decision-maker who first learns 2-D embeddings from the $53$ original features using a neural network and then regards the embedding as the new feature. A linear decision model is trained and used on these new features to make predictions. We divide the agents into 2 groups based on their ages. Similar to the credit approval data, we then fit Beta distributions to the 2 groups and verify the monotonic likelihood assumption (Figures 1 and 2 in the attached pdf). We then plot the dynamics of $a_t, q_t, \Delta_t$ for both groups when the systematic bias is either positive or negative. The results show that similar trends still hold for this large dataset (Figure 3 in the attached pdf).
> Organize the experiments section of the main paper
Thanks for the suggestion. We will move the content of App F.3 to the main paper.
> The study primarily examines binary classification scenarios with only two outcome classes (e.g., admitted/not admitted).
- Similar to most studies in strategic classification [1,6,7,11,12,17,25,31,35], we focus on binary classification which is a standard setting in strategic classification and has many real applications (e.g., hiring, admission, loan approval) that involve binary decision-making on humans. Compared to existing works, ours is the first that examines the long-term impact of automating data annotation under strategic human agents.
- Although the theoretical modeling and analysis under binary classification are already non-trivial, we may extend the analytical framework and insights to multi-class classification. Specifically, consider strategic agents with categorical labels $Y\in \mathcal{Y}$. Instead of improving features toward one specific favorable outcome (e.g., acceptance in lending/hiring, as given in Eq. (1)), each agent may have its own target $y^*\in \mathcal{Y}$ and best respond via $x_t = \arg\max_z \{\Pr(f_{t-1}(z)=y^*)-c(z,x)\}$. For the decision-maker, the model retraining process remains the same, i.e., it augments training data with model-annotated samples and human-annotated samples at every round. Under this broader setting, we may consider two scenarios: (i) all agents have the same target $y^*$; (ii) agents have different targets $y^*$. For case (i), because agents move toward one class, the result is similar to the binary setting: instead of having the acceptance rate (resp. qualification rate) increasing (resp. decreasing) over time, the probability $P(f_t(X)=y^*)$ (resp. $P(Y=y^*)$) increases (resp. decreases) under certain conditions. For case (ii), the results will be more complicated because different targets could induce highly diverse agent behavior that disrupts the monotonic change of the distributions $P(f_t(X))$ and $P(Y)$. This is an interesting future direction.
- Last, we agree with the reviewer that it is interesting to extend our work to multi-class classification or ranking, which we hope to study in the future. Indeed, such extension is non-trivial and has been an ongoing effort of the community. For example, [Liu et al., 2022] consider competition among strategic agents and is the first work studying strategic ranking to the best of our knowledge. We hope our paper can provide insights and shed light on future works.
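A discretized version of the best response above can be sketched as follows; the grid, threshold classifier, and quadratic cost are illustrative assumptions, not the paper's model:

```python
def best_response(x, target_prob, cost, grid):
    # pick the feature z maximizing Pr(f(z) = y*) - c(z, x); staying put is always an option
    candidates = list(grid) + [x]
    return max(candidates, key=lambda z: target_prob(z) - cost(z, x))

# toy 1-d instance: deterministic threshold classifier and quadratic moving cost
grid = [i / 10 for i in range(11)]
prob = lambda z: 1.0 if z >= 0.5 else 0.0   # Pr(f(z) = y*)
cost = lambda z, x: (z - x) ** 2

z_near = best_response(0.3, prob, cost, grid)   # a nearby agent moves to the boundary
z_far = best_response(-1.0, prob, cost, grid)   # a distant agent stays put: gaming is too costly
```

The same search over a candidate set extends to the multi-class case by choosing the target class $y^*$ inside `target_prob`.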
### References
Ding, F., Hardt, M., Miller, J., & Schmidt, L. (2021). Retiring adult: New datasets for fair machine learning. Advances in neural information processing systems, 34, 6478-6490.
Liu, Lydia T., Nikhil Garg, and Christian Borgs. "Strategic ranking." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
Levanon, S., & Rosenfeld, N. (2021, July). Strategic classification made practical. In International Conference on Machine Learning (pp. 6243-6253). PMLR.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and the efforts made to address the initial concerns. We appreciate the clarifications and the additional experiments you provided. However, some concerns remain regarding the alignment between the theoretical assumptions and the empirical data, especially with datasets that do not fit these assumptions. While the trends appear consistent, this misalignment could limit the practical applicability of the theoretical outcomes. Additionally, we would benefit from a more detailed exploration of the specific challenges encountered when adapting your theoretical results to more complex class distributions to fully appreciate the breadth and impact of your contributions. Therefore, I will maintain my current score. | Summary: The paper explores a scenario where a machine learning system retrains itself over time by collecting both model-annotated and human-annotated data, while allowing the distribution of training samples to evolve over time. This evolution is influenced by the strategic behavior of agents who adapt so that their samples may be marked as positive by the system. Based on this model, the paper investigates the dynamics of several metrics, including the acceptance rate, qualification rate, and classifier bias.
Strengths: 1. The problem is well-motivated.
2. The high-level model construction is interesting. For instance, the model considers the sequential aspects of the problem and analyzes how certain metrics change over time, which is more realistic than previous works.
3. Additionally, the author incorporates 'systematic bias' into the modeling and discusses numerous real-world examples related to algorithmic fairness, an important topic for the community.
Weaknesses: 1. Some aspects of the results seem trivial or straightforward given the assumptions in the model. (Please see my questions below.)
2. The writing could be improved, as some statements in the paper are confusing. (Please see my questions below.)
3. The figure size, legend, and overall organization could be improved. For example, in Figure 2, the blue and red dots in the legend are too small, making the figure difficult to interpret. Additionally, the y-axis legend in Figure 2 uses different scales, complicating comparisons.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Assuming $N \gg K$ and that strategic agents, using the output of the previous classifier, adapt to the classifier's outcome, it seems straightforward that the acceptance rate should increase. Could you clarify what the non-trivial or surprising aspect of Theorem 3.3 is?
2. To what extent does the conclusion/results in Theorem 3.5 hold for classifiers beyond linear ones?
3. In what situations can the convexity assumption in Theorem 3.5 be validated in practice?
4. What does the term *cumulative* density function exactly refer to in the statement of Theorem 3.5? The domain of a standard cumulative function should be $\mathbb{R}$ based on the usual definition. How, then, can it be restricted to a half-space $J \subset \mathbb{R}^d$ in Theorem 3.5? Is this a typo, or am I missing something?
5. Line 99: What does the continuity of $P(Y | X)$ mean here when $Y$ takes values in $\{0, 1\}$?
6. Lines 187-188: Why does $\mathcal{S}_{m,t-1}$ have a higher qualification rate than $\mathcal{S}_{t-1}$? Shouldn't that be the acceptance rate? Or am I missing something?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. We address your questions point by point as follows.
> Theorem 3.3
The result in Theorem 3.3 (i.e., $a_t > a_{t-1}$) does **not** require $N \gg K$ but only needs $N > 0$. The condition $N \gg K$ is only used in Proposition 3.4, under which the acceptance rate converges to $1$ as $t \to \infty$ (i.e., all agents are admitted in the long run). For general cases with arbitrary positive $N$, we believe the increasing trend of the acceptance rate (Theorem 3.3) under the complex dynamics between the retrained model and strategic agents is not straightforward, and rigorously proving it requires a non-trivial induction argument (see the detailed proof in the Appendix).
> Theorem 3.5 beyond linear classifiers?
- Note that the linearity of the decision model is a standard setting in strategic classification (e.g., [1,6,7,11,12,17,25,31,35]). In practice, one can first use a non-linear feature extractor to learn embeddings of agents' preliminary features and then apply a linear model to the newly generated embedded features. Given knowledge of both the feature extractor and the model, agents will best respond with respect to the new features, and all our results still hold. The generalization to non-linear settings is also discussed in [13,28] in the main paper and in [Levanon and Rosenfeld, 2021].
- To further validate the above arguments, we provide an additional case study using ACSIncome data [Ding et al., 2021] with information on more than $150K$ agents. The goal is to predict whether a person has an annual income $> 50000$ based on $53$ features such as education level and working hours. Specifically, we consider a decision-maker who first learns 2-D embeddings from the $53$ original features using a neural network and then regards the embedding as the new feature. A linear decision model is trained and used on these new features to make predictions. We divide the agents into 2 groups based on their ages. Similar to the credit approval data, we then fit Beta distributions to the 2 groups and verify the monotonic likelihood assumption (Figures 1 and 2 in the attached pdf). We then plot the dynamics of $a_t, q_t, \Delta_t$ for both groups when the systematic bias is either positive or negative. The results show that similar trends still hold for this large dataset (Figure 3 in the attached pdf).
> In what situations can the convexity assumption in Theorem 3.5 be validated in practice?
- In lines 236-238 we discussed situations where the conditions of Theorem 3.5 hold theoretically (e.g., when $P_X$ is a Uniform distribution, a Beta distribution, or a Gaussian distribution with a first decision policy admitting 50\% or more agents). In practice, the decision-maker can fit the empirical distribution of the real data to get $F_X$ and then verify the convexity. In particular, if the CDF does not have an analytical form, we can verify the convexity empirically by sampling points from the domain and checking the convexity inequality $\forall x_1, x_2 \in J, \lambda \in (0,1), f(\lambda x_1 + (1 - \lambda) x_2) \leq \lambda f(x_1) + (1 - \lambda) f(x_2)$.
- The assumptions are sufficient but not necessary, i.e., Theorem 3.5 may still hold even when the assumptions are violated. This is mentioned in lines 238-239 and validated in experiments. Among all datasets adopted in the experiments, only the "uniform-linear" dataset satisfies all assumptions of Theorem 3.5, while the Gaussian dataset satisfies them only if the initial qualification rate of agents is larger than 0.5. The German Credit and Credit Approval datasets satisfy only the linear classifier assumption and the monotonic likelihood assumption. However, the empirical results on all datasets demonstrate the validity of Theorem 3.5. Thus, we believe the phenomenon in Theorem 3.5 should exist in reality even when some assumptions are not strictly satisfied.
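The empirical sampling check described above can be sketched as follows; the Gaussian CDF and the intervals are illustrative stand-ins for a fitted $F_X$ and the half-space $J$:

```python
import math
import random

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    # closed-form 1-d Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def looks_convex(F, a, b, trials=5000, tol=1e-9, seed=0):
    # sample random pairs and check F(l*x1 + (1-l)*x2) <= l*F(x1) + (1-l)*F(x2)
    rng = random.Random(seed)
    for _ in range(trials):
        x1, x2 = rng.uniform(a, b), rng.uniform(a, b)
        lam = rng.random()
        chord = lam * F(x1) + (1.0 - lam) * F(x2)
        if F(lam * x1 + (1.0 - lam) * x2) > chord + tol:
            return False
    return True

below_mean = looks_convex(gaussian_cdf, -5.0, 0.0)   # convex below the mean
above_mean = looks_convex(gaussian_cdf, 0.1, 5.0)    # concave above the mean
```

A sampling test of this kind can only refute convexity, not prove it, but it is cheap enough to run on any fitted CDF without an analytical form.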
> Cumulative density function
$F_X$ can have a large domain such as $\mathbb{R}$, but Theorem 3.5 only needs it to be convex on a **subset** $J$ of its domain. Formally, $\forall x_1, x_2 \in J, \lambda \in (0,1), f(\lambda x_1 + (1 - \lambda) x_2) \leq \lambda f(x_1) + (1 - \lambda) f(x_2)$. As an example, for a 1-d Gaussian distribution $X \sim N(\mu, \sigma^2)$, the CDF is convex on $J = (-\infty, \mu)$.
> Continuity of $P(Y|X)$
$P_{Y|X}(y|x)$ (where $y \in \{0,1\}$) is the conditional probability for an agent with feature $x$ to have label $y$. This is a function of $x$, and we assume it is continuous in $x$. For example, the logistic function $P_{Y|X}(1|x) = \frac{1}{1+\exp(-\beta^T x)}$ is continuous for $x \in \mathbb{R}^d$.
> $S_{m, t-1}$ in lines 187-188
Yes, your understanding is correct. Because $S_{m, t-1}$ includes the model-annotated samples at $t-1$, the actual qualification of each sample is unobserved and is annotated using the pseudo-label generated by the model. In this sense, the "qualification rate" of $S_{m, t-1}$ is in fact the "acceptance rate." The reason we used "qualification rate" is that from the decision-maker's perspective, when updating its model using empirical risk minimization, it doesn't distinguish model-annotated samples from human-annotated ones and regards the pseudo-labels as the actual qualifications. In other words, the dataset the decision-maker uses to update the model contains a growing fraction of people with positive labels (regardless of whether the labels are acquired from humans or the model), i.e., the qualification rate of $S_{t}$ (augmented by $S_{m, t-1}$) is higher than that of $S_{t-1}$. We will add more clarification to the paper.
### References
Levanon, S., & Rosenfeld, N. (2021, July). Strategic classification made practical. In International Conference on Machine Learning (pp. 6243-6253). PMLR.
Ding, F., Hardt, M., Miller, J., & Schmidt, L. (2021). Retiring adult: New datasets for fair machine learning. Advances in neural information processing systems, 34, 6478-6490.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for your response. I am currently reviewing your rebuttal and will have another reply to it. In the meantime, I am still unclear about the statement and conditions of Theorem 3.5. Could you please clarify the definition of "cumulative density function" as used in Theorem 3.5? It appears that the domain of this function is $\mathbb{R}^{d}$, given that it is assumed its restriction to a half-space in $\mathbb{R}^{d}$ is convex. If this is the case, could you also clarify what you mean by a function in $\mathbb{R}^{d}$ being decreasing, as mentioned in the proof of Theorem 3.5?
Thank you,
Reviewer
---
Reply to Comment 1.1.1:
Comment: Thanks for your timely reply!
You are correct that "non-decreasing" in lines 957-958 means that for each dimension $x_i$ of a $d$-dimensional feature $x \in J$, $F_X$ and $P_X$ are non-decreasing. We will revise this sentence in the proof to make it clear. $F_X$ itself is non-decreasing in each dimension because it is a CDF. Note that $P_X$ is the derivative of $F_X$, so the convexity itself ensures $P_X$ is non-decreasing in each dimension when $x \in J$.
Thanks again for your careful review and suggestions.
---
Rebuttal Comment 1.2:
Comment: Dear Authors,
Thank you for your response. Based on your answer, I assume that when you work with the cumulative distribution function (not "cumulative density function" as mentioned in line 228 of your draft), you compute the CDF coordinate-wise. Also, when you say a function on $\mathbb{R}^{d}$ is non-decreasing, you mean it is non-decreasing in each coordinate. Please clearly state these conventions in your work, as they are not unique or standard assumptions about functions or distributions on $\mathbb{R}^{d}$. On the same line, regarding the assumption that $P_{Y|X}$ is continuous (mentioned in line 99), please specify that you mean $P_{Y|X}(1 \text{ or } 0 \mid x)$ is continuous over $x$. Otherwise, the way you have it there may confuse the readers. Additionally, in line 140, the domain of $h_{t}$ should be $\mathbb{R}^{d}$ (not $\mathbb{R}$ as you have it)? Please clarify.
Assuming the above, I still have some clarifying questions regarding the conditions in Theorem 3.5. As I understand it, you are working with linear models, so $\mathcal{J}$ would be a subspace of $\mathbb{R}^{d}$ (as it is the decision boundary of a linear model). In your rebuttal above, you mentioned that $\mathcal{J}$ could be $(-\infty, \mu)$. However, it seems to me that $(-\infty, \mu)$ cannot arise in the context of Theorem 3.5, as $(-\infty, \mu)$ is not a subspace of $\mathbb{R}$. Am I missing something here?
With that in mind, could you please elaborate on why Gaussian distributions or Beta distributions satisfy the convexity conditions in Theorem 3.5 (as $\mathcal{J}$ could be any subspace of $\mathbb{R}^{d}$)?
Thanks,
Reviewer
---
Reply to Comment 1.2.1:
Comment: Thanks for your follow-up. We address your concerns as follows.
- We agree with your points in paragraph 1 and will be happy to modify our draft as you suggested. Also, you are correct on the definition of CDF and the domain of $h_t$, and we will correct the typo.
- We kindly clarify that $J$ is not the **decision boundary** itself. In line 101, we define $f_t$ as the classifier from $\mathbb{R}^d \rightarrow \{0,1\}$ (not $[0,1]$) which directly maps the feature to the label. Therefore, $J = \{x \mid f_0(x) = 0\}$ is actually the **half-space** (not a subspace) in $\mathbb{R}^d$ separated by the decision boundary, as we write in Theorem 3.5.
- With these clarifications, Gaussian and Beta distributions can satisfy Theorem 3.5, and we were using $1$-d cases as examples: (i) the Gaussian example says that when $f_0$ sets an admission threshold smaller than or equal to $\mu$, the convexity assumption is satisfied because the Gaussian CDF is convex on $(-\infty, \mu)$ (its density $\frac{1}{\sqrt{2\pi}\sigma}\exp(-\frac{(x-\mu)^2}{2\sigma^2})$ is increasing there) and $J \subseteq (-\infty, \mu)$; (ii) similarly, for the CDF of the Beta distribution parameterized by $\alpha, \beta$, we can derive that the sign of its second-order derivative is the same as $(\alpha + \beta - 2) \cdot (\frac{\alpha-1}{\alpha+\beta-2} - 1)$. Then if $\beta = 1$ and $\alpha \ge \beta$, the second-order derivative is non-negative, ensuring convexity on its domain. Note that we do not claim that any Gaussian/Beta CDF satisfies Theorem 3.5; rather, these are examples of distributions that may satisfy it.
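The Beta case with $\beta = 1$ can be checked numerically, since Beta($\alpha$, 1) has the closed-form CDF $F(x) = x^\alpha$ on $[0, 1]$; the finite-difference check below is a rough numerical sketch, not the paper's derivation:

```python
def second_derivative(F, x, h=1e-4):
    # central finite-difference approximation of F''(x)
    return (F(x + h) - 2.0 * F(x) + F(x - h)) / (h * h)

def beta1_cdf(alpha):
    # Beta(alpha, beta=1) CDF on [0, 1] has the closed form x**alpha
    return lambda x: x ** alpha

# alpha >= beta = 1: second derivative non-negative, so the CDF is convex
convex = all(second_derivative(beta1_cdf(2.0), x) >= 0.0 for x in (0.2, 0.5, 0.8))
# alpha < 1: the CDF is concave instead
concave = all(second_derivative(beta1_cdf(0.5), x) < 0.0 for x in (0.2, 0.5, 0.8))
```

The same finite-difference probe works for any CDF with an evaluable form, even when its second derivative has no clean analytical sign expression.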
Based on our discussion, we are happy to emphasize that $f$ is the classifier again in Theorem 3.5 to avoid confusion and elaborate more on the examples (e.g., discuss the convexity of Gaussian/Beta CDF in more detail) in the Appendix.
---
Rebuttal 2:
Comment: Thanks for your reply. We apologize that our previous response regarding the high-dimensional case contained some inaccurate statements. Below we present a precise analysis covering all situations, from the 1-dimensional to the high-dimensional ones, and answer your questions.
> Theorem 3.5
1. All our previous statements are precise for the **1-dimensional** setting where Gaussian, Beta, and Uniform distributions satisfy the conditions in Theorem 3.5;
2. Theorem 3.5 itself is precise for the high-dimensional settings. However, you are correct that the convexity of CDF for $X$ and $Y$ does not ensure the convexity of the joint CDF $(X, Y)$. Consider the simplest independent 2-dimensional distribution setting where $F_X, F_Y$ are convex in 1-d space, we can derive the Hessian of the joint CDF as:
$\begin{bmatrix} F''_X \cdot F_Y & F'_X \cdot F'_Y \\ F'_X \cdot F'_Y & F_X \cdot F''_Y \end{bmatrix}$
The Hessian is PSD if $F_X''(x) \cdot F_X(x) \ge F_X'(x)^2$ and $F_Y''(y) \cdot F_Y(y) \ge F_Y'(y)^2$ hold for any $(x,y) \in J$, which is equivalent to saying that $\log(F_X)$ and $\log(F_Y)$ are convex functions. Thus, features with log-convex CDFs will satisfy the conditions in Theorem 3.5, such as the Uniform CDF and any CDF of the form $F(x) = \lambda e^{\lambda x}$ with $\lambda \ge 1$.
3. However, as we stated, Theorem 3.5 gives a sufficient condition. In the proof of Theorem 3.5 under high-dimensional distribution settings with feature $(X_1, ..., X_d)$, we can derive the same results if the joint CDF $F$ is convex w.r.t. each coordinate $X_i$ on the full domain of $X_i$. This means a multivariate feature distribution will result in a decreasing qualification rate if the features are independent and each has a convex CDF. That is, if each variable follows a Beta distribution with $\alpha \ge \beta, \beta = 1$, the results in Theorem 3.5 still hold.
4. Regarding the situation where multiple features are dependent, the analysis becomes considerably more challenging. However, in practice, the decision-maker often uses domain expertise and dimension-reduction methods (e.g., PCA) for feature selection first, preventing severe violations of independence.
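The PSD condition in point 2 can be verified numerically for a log-convex example; here $F(x) = e^x$ on $x \le 0$ (log-linear, hence log-convex) is an illustrative stand-in for both marginal CDFs:

```python
import math

def hessian_joint(Fx, dFx, d2Fx, Fy, dFy, d2Fy, x, y):
    # Hessian of the product CDF F(x, y) = Fx(x) * Fy(y) for independent features
    return [[d2Fx(x) * Fy(y), dFx(x) * dFy(y)],
            [dFx(x) * dFy(y), Fx(x) * d2Fy(y)]]

def is_psd_2x2(H, tol=1e-12):
    # a symmetric 2x2 matrix is PSD iff its diagonal entries and determinant are non-negative
    a, b, d = H[0][0], H[0][1], H[1][1]
    return a >= -tol and d >= -tol and a * d - b * b >= -tol

F = math.exp   # F = F' = F'' for e^x, and log F(x) = x is linear, hence log-convex
psd_everywhere = all(
    is_psd_2x2(hessian_joint(F, F, F, F, F, F, x, y))
    for x in (-2.0, -1.0, -0.1) for y in (-2.0, -1.0, -0.1)
)
```

For this boundary case the determinant is exactly zero, matching the equality case $F'' \cdot F = (F')^2$ of the log-convexity condition.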
> The FICO score example
Theorems 3.3 and 3.5 in the FICO score example can be interpreted as follows: the decision-maker sets a score threshold to grant loans, while qualification refers to whether applicants can repay the loans, which is hidden from the decision-maker. Theorem 3.3 illustrates that the decision-maker will set a lower score threshold as retraining goes on (e.g., from 700 to 600), while the qualification rate (i.e., the proportion of agents who will repay the loans) decreases. "An applicant's score is above the threshold but they are not qualified for the loan" happens because the decision-maker lowers the acceptance threshold due to the misleading information it receives during retraining. This phenomenon is one of our main discoveries and has not been studied in previous literature.
> Proposed modifications of the manuscript
Taking all the discussions together, we are happy to make the following modifications to the content related to Theorem 3.5: (i) add the convexity condition w.r.t each coordinate $X_i$ as mentioned above; (ii) add the FICO example in the appendix to illustrate the use cases in practice; (iii) Add more detailed discussions on whether different distributions in high-dimensional space satisfy Theorem 3.5.
---
Rebuttal Comment 2.1:
Comment: Dear Authors,
Thank you for your response. Please address the points mentioned earlier in our discussion in the revised draft to clarify the scope and applicability of the model for the readers.
Despite some limitations in the theoretical results that are important to note (as previously discussed), I find the model to have some interesting features, particularly its dynamic approach to strategic classification and the incorporation of model-annotated samples. The topic of strategic classification and the deployment of ML models in social domains is indeed important, and I believe the work has the potential to benefit the community.
Please clearly discuss the limitations and applicability of the model, so that these can be addressed/improved in future research. Consequently, I am raising my score from 4 to 5 and also increasing my confidence score.
Thank you for your detailed and thorough responses during the discussion period, which helped me to better understand your work.
Sincerely,
Reviewer
---
Reply to Comment 2.1.1:
Comment: Thanks for your endorsement and for increasing the score. We appreciate your suggestions and will revise the draft as highlighted in our discussion. | Summary: This paper studies the effects of retraining machine learning (ML) models with data annotated by both humans and the models themselves, especially in social domains where human behavior is influenced by the ML systems. The authors explore how strategic human agents, who adapt their behaviors to receive favorable outcomes from ML models, can create feedback loops that affect the model's performance and fairness over time.
The paper begins by formalizing the interactions between strategic agents and ML models. It shows that as agents adapt to the decision policies of the model, they increasingly receive positive outcomes. However, this becomes problematic when the proportion of agents with truly positive labels decreases over time. To address this issue, the authors propose a refined retraining process designed to stabilize these dynamics.
The authors analyze how fairness constraints imposed during each round of model retraining might impact disadvantaged groups in the long run. They find that enforcing common fairness constraints can sometimes fail to benefit these groups, highlighting the complexity of maintaining fairness in dynamic environments. The empirical section of the paper includes experiments on both semi-synthetic and real datasets. These experiments validate the theoretical findings, demonstrating that the proposed retraining process can mitigate some of the adverse effects of strategic behavior. The results show that while acceptance rates of agents tend to increase, the actual qualification rates may decrease under certain conditions, leading to a growing discrepancy between perceived and actual qualifications.
Strengths: - The paper addresses a critical and relatively unexplored issue in ML – the long-term impacts of retraining models in the presence of strategic human behavior.
- The formalization of interactions between strategic agents and ML models provides a robust foundation for analyzing these dynamics.
- The investigation into how retraining processes affect algorithmic fairness is an important contribution of this work.
- The use of semi-synthetic and real data to validate the theoretical findings strengthens the claims of the study.
Weaknesses: - The theoretical analysis depends on several assumptions, such as the monotone likelihood ratio property and the availability of perfect information about agent distributions, which may not hold true in all real-world scenarios.
- The mathematical models and proofs are complex, potentially making it challenging for practitioners to implement the findings directly.
- While the experiments support the theoretical findings, the datasets used might not fully represent the diversity and complexity found in real-world applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you provide more empirical evidence or case studies to support the validity of the assumptions made in your theoretical analysis?
- Have you considered using more diverse and larger datasets to validate your findings? How might the results change with different types of data?
- How do you envision your findings being applied to other domains beyond college admissions and hiring? Are there specific adjustments needed for other applications?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The study mainly focuses on specific social domains like college admissions and hiring, and its applicability to other fields remains uncertain.
- The proposed solutions, including the refined retraining process and fairness interventions, may not always be effective in practice. Real-world conditions can vary significantly from controlled experimental settings, which might affect the outcomes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. We address your questions point by point as follows.
> Empirical evidence or case studies to support the validity of the assumptions
- The monotonic likelihood ratio property (MLRP):
This property is rather standard and has been widely used in literature (e.g., [11,12] in the main paper; Jung et al., 2020; Khalili et al., 2021; Barman et al., 2020); it means that each feature dimension is a good indicator of the target variable and an individual is more likely to be qualified when his/her feature value increases.
Regarding the empirical evidence, we demonstrate the distribution of each of the two features in the Credit Approval dataset in Figure 10, where the fitted Beta distributions satisfy the monotonic likelihood ratio property. Moreover, [11] referred to in the main paper used FICO credit data and verified that the monotonic likelihood assumption holds for people's FICO scores in Figure 15 of [11]. FICO has been widely used in the United States to assess people's creditworthiness and is a good practical example. In addition to the lending domain, MLRP has been studied in many other contexts; e.g., research has shown that higher wealth levels are often associated with higher likelihoods of certain favorable economic behaviors (like investment and consumption patterns), and empirical data on insurance claims shows that higher levels of risk factors (like age and smoking status) are monotonically associated with higher likelihoods of claims.
- The perfect information about agent distributions
We want to clarify that the knowledge of agent distributions is only used when deriving theorems. In all the experiments, the decision-maker updates models by running empirical risk minimization on training data without knowing agent distribution $P_X$. To verify whether the conditions/assumptions in theorems hold, the decision-maker in practice may fit $P_X$ using the initial training set (e.g., Figure 10 for credit approval data).
Last, we emphasize that the assumptions made in the theoretical analysis are sufficient but not necessary, i.e., the results may still hold even when the assumptions are violated. This is validated in our experiments, where only the "uniform-linear" data satisfies all assumptions in the paper, yet all results are consistent with the theorems.
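For concreteness, a minimal numerical MLRP check for two fitted Beta densities can be sketched as follows (our own illustration with hypothetical Beta parameters, not the fitted values from Figure 10): the property holds iff the likelihood ratio of the qualified over the unqualified density is non-decreasing in the feature value.

```python
import math

def beta_pdf(x, a, b):
    """Density of Beta(a, b) at x in (0, 1)."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

def mlrp_holds(a1, b1, a0, b0, grid=None):
    """Check on a grid that the likelihood ratio f1(x)/f0(x) is
    non-decreasing, i.e., the monotone likelihood ratio property (MLRP)."""
    if grid is None:
        grid = [i / 100 for i in range(1, 100)]
    ratios = [beta_pdf(x, a1, b1) / beta_pdf(x, a0, b0) for x in grid]
    return all(r2 >= r1 - 1e-12 for r1, r2 in zip(ratios, ratios[1:]))

# Hypothetical parameters: qualified ~ Beta(4, 2), unqualified ~ Beta(2, 4).
# The ratio is proportional to (x / (1 - x))^2, which is increasing.
print(mlrp_holds(4, 2, 2, 4))  # True
# Counterexample: Beta(2, 2) vs Beta(5, 5) gives a U-shaped ratio.
print(mlrp_holds(2, 2, 5, 5))  # False
```

In practice one would substitute the Beta parameters actually fitted to each feature of the training data.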
> Diverse/larger datasets and other domains
- Our paper focuses on strategic classification settings [1,6,7,11,12,17,25,31,35]. According to the previous literature, human strategic behaviors are prevalent in high-stakes domains such as hiring, lending, and college admission, where humans have more incentives to improve their features for favorable outcomes. We believe the datasets adopted in our experiments are sufficiently representative and cover well-known data widely used in strategic classification literature.
- We want to emphasize that our theoretical findings do not have any restriction on the dataset and are broadly applicable to other domains such as recommendation systems, fraud detection, etc., as long as they involve human strategic behavior and the decision-maker uses a linear model to make decisions. Note that the linearity of the decision model is a standard setting in strategic classification. In practice, one can first use a non-linear feature extractor to learn the embeddings of agents' preliminary features and then apply the linear model to the newly generated embedded features. Given the knowledge of both feature extractor and model, agents will best respond with respect to new features, and all our results still hold. The generalization to non-linear settings is also discussed in [13,28] in the main paper and [Levanon and Rosenfeld, 2021].
- To further validate the above arguments, we provide an additional case study using ACSIncome data [Ding et al., 2021], which contains information on more than $150K$ agents. In this study, the goal is to predict whether a person has an annual income $> 50000$ based on $53$ features such as education level and working hours per week. Specifically, we consider a decision-maker who first learns 2-D embeddings from the $53$ original features using a neural network and then regards the embedding as the new feature. A linear decision model is trained and used on these new features to make predictions. We divide the agents into 2 groups based on their ages. Similar to the credit approval data, we then fit Beta distributions on the 2 groups and verify the monotonic likelihood assumption (Figures 1 and 2 in the attached pdf). We then plot the dynamics of $a_t, q_t, \Delta_t$ for both groups when the systematic bias is either positive or negative. The results show that similar trends still hold for this large dataset (Figure 3 in the attached pdf).
### Conclusion
While our theoretical analysis relies on some assumptions and the results are mainly evaluated in domains of lending and admission, the assumptions and datasets are mostly standard and commonly used in strategic classification literature. Nonetheless, we want to emphasize that our results are broadly applicable to other datasets and applications that involve human strategic behavior.
### References
Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh Pai, Aaron Roth, and Rakesh Vohra. Fair prediction with endogenous behavior. In Proceedings of the 21st ACM Conference on Economics and Computation, pages 677–678, 2020.
Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, and Somayeh Sojoudi. Improving fairness and privacy in selection problems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 8092–8100, 2021.
Siddharth Barman and Nidhi Rathi. Fair cake division under monotone likelihood ratios. In Proceedings of the 21st ACM Conference on Economics and Computation, pages 401–437, 2020.
Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. Advances in Neural Information Processing Systems, 34, pages 6478–6490, 2021.
Rebuttal: # Global Rebuttal
We thank the reviewers and AC for reviewing our paper. Here we present a global response to the questions shared by multiple reviewers.
## Assumptions and Applicability of Our Model
- The monotonic likelihood ratio property (MLRP)
This property is rather standard and has been widely used in literature (e.g., [11,12] in the main paper; Jung et al., 2020; Khalili et al., 2021; Barman et al., 2020); it means that each feature dimension is a good indicator of the target variable and an individual is more likely to be qualified when his/her feature value increases.
Regarding the empirical evidence, we demonstrate the distribution of each of the two features in the Credit Approval dataset in Figure 10, where the fitted Beta distributions satisfy the monotonic likelihood ratio property. Moreover, [11] referred to in the main paper used FICO credit data and verified that the monotonic likelihood assumption holds for people's FICO scores in Figure 15 of [11]. FICO has been widely used in the United States to assess people's creditworthiness and is a good practical example. In addition to the lending domain, MLRP has been studied in many other contexts; e.g., research has shown that higher wealth levels are often associated with higher likelihoods of certain favorable economic behaviors (like investment and consumption patterns), and empirical data on insurance claims shows that higher levels of risk factors (like age and smoking status) are monotonically associated with higher likelihoods of claims.
- The linearity assumption
The linearity of the decision model is a standard setting in strategic classification (e.g., [1,6,7,11,12,17,25,31,35]). In practice, one can first use a non-linear feature extractor to learn the embeddings of agents' preliminary features and then apply a linear model to the newly generated embedded features. Given the knowledge of both feature extractor and model, agents will best respond with respect to new features, and all our results still hold. The generalization to non-linear settings is also discussed in [13,28] in the main paper and [Levanon and Rosenfeld, 2021].
Finally, all assumptions made in our theorems are sufficient but not necessary, i.e., Theorem 3.5 may still hold even when the assumptions are violated. This is mentioned in lines 238-239 and validated in experiments. Among all datasets adopted in the experiments, only the "uniform-linear" dataset satisfies all assumptions in Theorem 3.5, while the Gaussian dataset satisfies Theorem 3.5 only if the initial qualification rate of agents is larger than 0.5. The German Credit and Credit Approval datasets only satisfy the linear classifier assumption and the monotonic likelihood assumption. However, the empirical results on all datasets demonstrate the validity of Theorem 3.5. Thus, we believe the phenomenon in Theorem 3.5 should exist in reality even when some assumptions are not strictly satisfied.
## Empirical Evidence of Our Model
- The datasets: our paper focuses on strategic classification settings [1,6,7,11,12,17,25,31,35]. According to the previous literature, human strategic behaviors are prevalent in high-stakes domains such as hiring, lending, and college admission, where humans have more incentives to improve their features for favorable outcomes. We believe the datasets adopted in our experiments are sufficiently representative and cover well-known data widely used in strategic classification literature.
- **The additional case study**: To further validate the above arguments, we provide an additional case study using ACSIncome data [Ding et al., 2021], which contains information on more than $150K$ agents. In this study, the goal is to predict whether a person has an annual income $> 50000$ based on $53$ features such as education level and working hours per week. Specifically, we consider a decision-maker who first learns 2-D embeddings from the $53$ original features using a neural network and then regards the embedding as the new feature. A linear decision model is trained and used on these new features to make predictions. We divide the agents into 2 groups based on their ages. Similar to the credit approval data, we then fit Beta distributions on the 2 groups and verify the monotonic likelihood assumption (Figures 1 and 2 in the attached pdf). We then plot the dynamics of $a_t, q_t, \Delta_t$ for both groups when the systematic bias is either positive or negative. The results show that similar trends still hold for this large dataset (Figure 3 in the attached pdf).
### References
Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh Pai, Aaron Roth, and Rakesh Vohra. Fair prediction with endogenous behavior. In Proceedings of the 21st ACM Conference on Economics and Computation, pages 677–678, 2020.
Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, and Somayeh Sojoudi. Improving fairness and privacy in selection problems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 8092–8100, 2021.
Siddharth Barman and Nidhi Rathi. Fair cake division under monotone likelihood ratios. In Proceedings of the 21st ACM Conference on Economics and Computation, pages 401–437, 2020.
Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. Advances in Neural Information Processing Systems, 34, pages 6478–6490, 2021.
Pdf: /pdf/8decd1538bdf5284115b9390bac4f1c1d12ade3d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Overcoming Common Flaws in the Evaluation of Selective Classification Systems | Accept (spotlight) | Summary: The authors introduce a new metric, AUGRC, for evaluating classifiers under the Selective Classification framework, in which the classifier has the option to reject low-confidence predictions. The authors introduce desirable properties for evaluating Selective Classifiers and show that the new metric has all those properties, unlike any of the currently used metrics. Experiments show that AUGRC produces significantly different rankings of Confidence Scoring Functions compared with currently used metrics.
Strengths: The authors address a relevant question and make a useful contribution. The paper is well written, clear, and easy to read. The experimental analyses are, to my judgement, sound. The code is provided.
Weaknesses: Fig. 4: the risk vs. coverage curves for AURC vs. AUGRC do not look that different. Sure, there is non-monotonicity in AURC, but that is in the low-coverage region, which presumably is not of practical interest. The rest looks monotonic and fairly similar. It is also true that the two metrics suggest different CSFs, but again, they aren't that different. To be clear, I still think AUGRC is favorable, just saying that the difference appears modest.
I think the key conclusion, which is the recommendation to adopt AUGRC, is valid, however the language in the section is a bit too strong (there is "substantial" twice and "significant limitations"). I recommend to tone-it-down.
Technical Quality: 3
Clarity: 4
Questions for Authors: The authors claim that Selective Classification is a crucial components for clinical diagnostics ML systems (among other application domains). Can you provide examples of classifiers in clinical use that incorporate Selective Classification?
What is AUROC_f? It is introduced on line 172, however the description is unclear and there are no references.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: There is no limitations section. I don't have a good idea regarding limitations of this manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for your valuable comments, and for taking the time to read our general reply, as well as considering our point-by-point comments here:
---
W1. “Fig. 4: the risk vs. coverage curves for AURC vs. AUGRC do not look that different. Sure, there is non-monotonicity in AURC, but that is in the low-coverage region, which presumably is not of practical interest. The rest looks monotonic and fairly similar. It is also true that the two metrics suggest different CSFs, but again, they aren't that different. To be clear, I still think AUGRC is favorable, just saying that the difference appears modest.”
* While the overall shape of the Selective Risk-Coverage is indeed similar to that of the Generalized Risk-Coverage curve, we demonstrate in our experiments that even smaller visual changes in the curves lead to alterations in method rankings and are thus highly relevant.
* Further, we want to point out that the low-coverage region may also be relevant in practice, for instance when considering a clinical deferral system where a large fraction of cases may be deferred to ensure very high accuracy on the accepted cases. When evaluating specific applications for which certain risk or coverage intervals are known to be irrelevant, adaptations such as a partial AUGRC (analogous to the partial AUROC) may be considered. In response to your feedback, we added the note on partial evaluation of the AUGRC to the conclusion section.
---
W2. “I think the key conclusion, which is the recommendation to adopt AUGRC, is valid, however the language in the section is a bit too strong (there is "substantial" twice and "significant limitations"). I recommend to tone-it-down.”
* Thank you for this helpful comment. In the updated manuscript, we toned down the conclusion, as you proposed, removing the word “significant” in “the current metrics have significant limitations” (line 276) and “substantial” in “substantial deviations from intended and intuitive performance assessment behaviors” (line 282-283).
---
Q1. “The authors claim that Selective Classification is a crucial components for clinical diagnostics ML systems (among other application domains). Can you provide examples of classifiers in clinical use that incorporate Selective Classification?”
* Three interesting examples would be the following:
* Dvijotham et al., “Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians”: The authors introduce the SC-based CoDoC deferral system for breast cancer screening and TB triaging, showing “that combined AI-clinician performance using CoDoC exceeds that currently possible through either AI or clinicians alone”.
* Leibig et al., “Combining the strengths of radiologists and AI for breast cancer screening: a retrospective analysis”: The authors propose an SC-based decision-referral approach for breast-cancer screening based on mammography data.
* Bungert et al., “Understanding Silent Failures in Medical Image Classification”: The authors comprehensively evaluate SC in the biomedical field and develop an interactive tool that facilitates identifying silent failures.
---
Q2. “What is AUROC_f? It is introduced on line 172, however the description is unclear and there are no references.”
* The “failure AUROC” AUROC$_f$ is the standard AUROC but computed on the binary failure labels (correctly classified vs. failure, denoted $Y_f$ in this paper) and the confidence scores. We added a corresponding reference in Section 2.4.
---
Thank you for your constructive feedback. As we believe in having resolved your comments, please let us know in case there are remaining concerns that would prevent you from recommending a stronger acceptance of our work.
---
Rebuttal Comment 1.1:
Comment: I appreciate your responses. I'll raise my rating to 7.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you very much for taking the time to read our response and re-considering your assessment based on the provided updates. | Summary: The paper presents 5 requirements for multi-threshold metric for Selective Classification and a novel metric to evaluate selective classifiers called AUGRC. The proposed metric satisfies 5 requirements that are not met by current approaches. The proposed metric changes the rankings on 5 out of 6 datasets considered by the authors.
Strengths: The paper's main strengths are:
1. the paper aims to tackle a long-standing concern in the abstaining classifiers literature, i.e., how to evaluate these classifiers with a single measure
2. the theoretical derivations seem sound
3. the contribution is well framed within current literature
Weaknesses: The main concerns of the paper are:
1. The empirical evaluation can be improved
2. The interpretability of the proposed measure is not that straightforward
3. The paper presentation can be improved.
Regarding the empirical evaluation, I have a few remarks.
* there are some contradictory lines, which should be double-checked: for instance, in lines 268-269, it is not clear to me why you claim that AURC erroneously favors DG-RES over MCD-PE, as DG-RES's performance seems to be better than MCD-PE (accuracy-wise and ranking-wise). Similarly, it is not clear to me why in Figure 4 the authors state DG-RES is favored "despite a lower classification performance and ranking quality", while in Figure 4.a is reported a higher accuracy and higher $AUROC_f$ for DG-RES;
* second, the authors never specify whether they correct for multiple outcomes testing (e.g. using Bonferroni correction). I think since the authors are performing multiple pairwise ranking tests, they also have to account for this.
Regarding the interpretability requirement, the authors claim (correctly) that the AUROC can be interpreted as "the probability of a positive sample having a higher score than a negative one". I did not fully grasp what the straightforward interpretation of the AUGRC score is in this context (lines 206-208).
Regarding paper presentation, the paper heavily relies on acronyms. I would personally reconsider this choice to improve the overall readability.
Technical Quality: 2
Clarity: 2
Questions for Authors: I have a few questions for the authors:
* [Q1] can the authors clarify my concerns regarding the empirical evaluation?
* [Q2] can the authors discuss further my concern regarding interpretability? Is there a straightforward interpretation of the AUGRC score?
* [Q3] when considering a degenerate abstaining classifier that always abstains, i.e., whose confidence score is always zero, is the AUGRC 0 (as the one obtained by a perfect classifier)? If so, is AUGRC favoring classifiers that tend to be underconfident and abstain too much? Is there a way to avoid such a behaviour?
* [Q4] if I correctly understand how AUGRC works, why do the authors not consider the _classification quality_ by [Condessa et al, 2017] instead of the Generalized Risk Score? Such a metric also considers the failures that are correctly rejected and could avoid favoring abstaining classifiers that over-reject.
Typos:
line 128: schore $\to$ score
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors do not directly discuss limitations of their proposed approach in the main paper. For instance, I think a brief discussion regarding the computational time required to compute AUGRC should be included in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for your valuable comments, and for taking the time to read our general reply, as well as considering our point-by-point comments here:
---
W1. “The empirical evaluation can be improved.”
W1.1. “there are some contradictory lines [...]”
* Thank you for pointing out the mistakes in Section 4.2. It should read “erroneously favors MCD-PE over DG-RES” in line 268-269, and it should read “favoring MCD-PE [...] compared to DG-Res” in the caption of Figure 4. We corrected the sentences in the manuscript.
W1.2. “the authors never specify whether they correct for multiple outcomes testing [...]”
* Thank you for pointing this out. Please find our detailed response in the general response to all reviewers above.
---
W2. “[...] straightforward interpretation of the AUGRC score in this context (lines 206-208).”
* Arguably most intuitively, the AUGRC corresponds to the rate of silent failures averaged over working points. In the suggested context, the interpretation is: On drawing two random samples, it corresponds to the probability that either both are failure cases or that one is a failure and has a higher confidence score than the non-failure. This can be read from Equation 7:
$$\mathrm{AUGRC} = (1 - \mathrm{AUROC}_f) \cdot \mathrm{acc} \cdot (1 - \mathrm{acc}) + (1 - \mathrm{acc})^2 / 2$$
The second term is the probability of drawing two failures. The first term is the probability that one is a failure times the probability that the failure is ranked higher (1 − AUROC$_f$). As an example, a CSF that outputs random scores yields AUROC$_f=0.5$, and hence AUGRC $=(1-acc)/2$.
* In response to your feedback, we changed “expected risk of silent failures across working points” in lines 201-202 to “rate of silent failures averaged over working points”. Further, we extended the explanation on Equation 7 to include the AUROC-analogous interpretation as well as the example case of a random CSF.
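To make this two-sample reading concrete, the following self-contained sketch (our own illustration, not the code released with the paper) compares Equation 7 against a direct trapezoidal integration of the empirical Generalized Risk-Coverage curve; with distinct confidence scores the two agree to floating-point precision:

```python
import random

def augrc_empirical(confidence, failure):
    """AUGRC as the trapezoidal area under the Generalized Risk-Coverage
    curve, where generalized risk at coverage k/n is
    (#failures among the k highest-confidence samples) / n."""
    n = len(confidence)
    order = sorted(range(n), key=lambda i: -confidence[i])
    risks = [0.0]
    cum_failures = 0
    for i in order:
        cum_failures += failure[i]
        risks.append(cum_failures / n)
    return sum((a + b) / 2 for a, b in zip(risks, risks[1:])) / n

def augrc_closed_form(confidence, failure):
    """Equation 7: AUGRC = (1 - AUROC_f)*acc*(1 - acc) + (1 - acc)^2 / 2,
    with AUROC_f computed brute-force over all (correct, failure) pairs."""
    n = len(confidence)
    acc = 1 - sum(failure) / n
    correct = [c for c, f in zip(confidence, failure) if not f]
    failed = [c for c, f in zip(confidence, failure) if f]
    if not correct or not failed:
        auroc_f = 0.5
    else:
        wins = sum((p > q) + 0.5 * (p == q) for p in correct for q in failed)
        auroc_f = wins / (len(correct) * len(failed))
    return (1 - auroc_f) * acc * (1 - acc) + (1 - acc) ** 2 / 2

random.seed(0)
conf = [random.random() for _ in range(200)]        # distinct scores
fail = [random.random() < 0.3 for _ in range(200)]  # ~30% failures
print(abs(augrc_empirical(conf, fail) - augrc_closed_form(conf, fail)) < 1e-9)  # True
```

The brute-force AUROC$_f$ here is quadratic only for clarity; in practice a single sort of the confidence scores suffices, which is also why the AUGRC has the same O(n log n) cost as the AURC.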
---
W3. “[...] the paper heavily relies on acronyms”
* Thank you for raising this point. In response to your feedback, we reintroduced the abbreviations at important places in the paper (e.g. beginning of Section 2) or used the full term instead. We further updated Section 2.4 to introduce all acronyms with the full term for improved readability.
---
W4. “[...] a brief discussion regarding the computational time required to compute AUGRC should be included in the main paper.”
* Thank you for bringing up this point. The main bottleneck of the AUGRC computation is the sort operation on the confidence scores (O(n log n)). This is the same as for the AURC and there is no computational overhead as compared to the AURC.
* Benchmarking the computational time for evaluating the AURC, AUGRC, and AUROC$_f$ for 1k random scores and failure labels, we obtain the following results:\
AUROC$_f$: 730μs ± 13μs (sklearn implementation)\
AURC: 564μs ± 2μs (ours)\
AUGRC: 562μs ± 2μs (ours)
* In response to your feedback, we added a note on the AURC computation time in the beginning of Section 4, and details and benchmark results to a new section in the Appendix: “A.2.3 Computation Time”.
---
Q1. & Q2.
Addressed in W1-W3.
---
Q3. “[...] confidence score is always zero, is the AUGRC 0 [...]? If so, is AUGRC favoring classifiers that tend to be underconfident and abstain too much? [...]”
* To clarify, whether a SC model abstains given a predicted confidence score depends on a threshold which is fixed on application. However, a multi-threshold evaluation, like in AUGRC, does not involve a fixed threshold but aggregates across all working points. More precisely, no explicit abstention decision is made in the AUGRC, the same as no classification decision is made in the classification AUROC.
* Illustrating the latter with an example: A binary classifier that always outputs $P(Y=1)=0$ will yield an AUROC of ½ being equivalent to the random classifier. Analogously, if the confidence scores are always zero, the AUROC$_f$ is ½ and following Equation 7 the AUGRC is $(1-acc)/2$. Thus, the AUGRC does not favor underconfident classifiers.
* We extended the explanation on Equation 7 (see our response to W2).
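As a tiny worked example of the last point (with a hypothetical accuracy value, our own illustration): under a constant confidence score every (failure, non-failure) pair is a tie, so AUROC$_f = 0.5$ by convention, and Equation 7 collapses to $(1 - \mathrm{acc})/2$, the same value as a random CSF.

```python
# Constant confidence scores: all failure/non-failure pairs tie,
# so AUROC_f = 0.5 and Equation 7 reduces to (1 - acc) / 2.
acc = 0.8        # hypothetical classifier accuracy
auroc_f = 0.5    # ties only -> 0.5 by convention
augrc = (1 - auroc_f) * acc * (1 - acc) + (1 - acc) ** 2 / 2
print(round(augrc, 12) == round((1 - acc) / 2, 12))  # True
```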
---
Q4. “[...] why do not the authors consider the classification quality [...]? [...]”
* [2] define the Classification quality (Q) as the fraction of rejected failures plus the fraction of non-rejected non-failures.
* Using 1-Q as a replacement of Generalized Risk in the AUGRC deviates from the intention of evaluating the silent failure risk, as this assigns a loss of 1 to rejected but correctly classified samples. As an example of resulting unintended behavior, two classifiers with accuracy 0 and 1, respectively, both yield a Q-based AUGRC of ½. This breaks requirement R2.
* Please note that instead of a holistic evaluation suitable for method benchmarking, [2] derives “performance measures [...] [that] correspond to a reference operating point”. Thus, Q may be used for working point selection but is not suitable for replacing Generalized Risk in the AUGRC.
* We added to Section 2.1: “Aside from the selective risk, other performance measurements for working point selection include for example Classification quality and Rejection quality [2]”.
[2] Condessa et al., “Performance measures for classification systems with rejection”
---
Thank you for your constructive feedback. As we believe to have resolved your comments, please let us know if there are any remaining concerns that would hinder a recommendation for acceptance.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I thank the authors for their clarifications and the new experiments.
A few comments:
regarding Q1, I thank the reviewers for implementing my suggested changes.
regarding Q2,
> the interpretation is: On drawing two random samples, it corresponds to the probability that either both are failure cases or that one is a failure and has a higher confidence score than the non-failure. This can be read from Equation 7:
I think clarifying the interpretation of AUGRC (meaning adding these lines to the paper) is very important and can help the reader improving its understanding.
Regarding Q3 and Q4, I see your points, and I have no further questions.
Finally, I thank the authors for providing a small analysis regarding the required time for computing AUGRC.
Hence, after the authors' rebuttal and clarifications, I will increase my score to a weak accept.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you very much for taking the time to read our response and re-considering your assessment based on the provided updates. | Summary: The authors propose 5 requirements that should be satisfied by selective classification (SC) metrics such that they can be successfully used to rank SC models for a task. They then propose a new metric, called AUGRC, which is shown to satisfy all 5 requirements. Finally, the authors show empirically that their metric produces better rankings than previous metrics.
Strengths: 1. The paper is well-motivated, clearly written, and has an excellent coverage of previous works.
2. The 5 requirements were well-chosen and a metric that satisfies them is likely to be a good choice for SC ranking.
3. The metric is simple, yet effective, and the maths seems sound.
4. Results are convincing and statistically analysed; the toy dataset was didactic.
5. There's enough information in the main body of the paper to fully understand the experiments and corresponding results, i.e. it's not necessary to read the appendix, even if further details are available there.
Weaknesses: 1. Limited discussion of future works.
2. NLL and Brier Score are listed among the multi-threshold metrics, but there's no thresholding involved in their evaluation. In fact, it should be possible to use them as risk measures, which would cover more than just 0/1 loss, especially where probabilities are important for decision making.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. A similar task to SC is classification with rejection of OOD samples (sometimes called open-set recognition). In such cases, there isn't a ground truth to evaluate the risk on (in practice). Can the authors envision a way to adapt their metric to these scenarios?
2. The metrics were scaled by ×1000 in Figure 2. Were they also rescaled for all the other results?
3. Regarding interpretability, the authors say that "The optimal AUGRC is given by the second term in Equation 7". What does the optimal AUGRC mean? What would be the AUGRC of the Bayes-optimal model for a task? Is there an AUGRC value associated to a random classifier, as in AUROC?
4. Any intuitions about why the rankings of AURC and AUGRC were the same for iWildCam?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for your valuable comments, and for taking the time to read our general reply, as well as considering our point-by-point comments here:
---
W1. “Limited discussion of future works.”
* Thank you for this helpful comment. In response to your feedback, we extended the Conclusion Section to point out directions of future work: Our proposed evaluation framework provides a solid basis for future work in Selective Classification, including developing novel methods as well as analyzing the properties of individual methods.
---
W2. “NLL and Brier Score are listed among the multi-threshold metrics, but there's no thresholding involved in their evaluation. In fact, it should be possible to use them as risk measures, which would cover more than just 0/1 loss, especially where probabilities are important for decision making.”
* Thank you for raising this point. Indeed, the NLL and Brier Score do not involve thresholding of confidence scores. In response to your feedback, we rephrased line 167 to: “Importantly, proper scoring rules such as the Negative-Log-Likelihood and the Brier Score are technically not multi-threshold metrics. Yet, we include them here as they also aim for a holistic performance assessment, i.e. assessment beyond individual working points.”
* Regarding your proposal to use NLL or Brier Score as risk measure: NLL and BS both assess the calibration of confidence scores in addition to their ranking. Generally, the calibration task can be viewed as orthogonal to that of Selective Classification, where rejection is solely based on the ranking of scores. For a detailed discussion on the relation between calibration and confidence ranking we refer to Appendix C in [1]. Further investigating the relation between confidence ranking and calibration, and evaluating SC in settings where calibrated scores are of interest, is an interesting direction of future work, which we added to the conclusion section in the updated manuscript.
[1] Jäger et al., “A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification”
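To make the calibration-vs-ranking distinction above concrete, here is a small, hedged illustration (the helper functions and toy scores are our own, not from the paper): a purely ranking-based evaluation is invariant to monotone transformations of the confidence score, while a proper scoring rule such as the Brier score is not.

```python
import numpy as np

def auroc_failure(conf, failure):
    """AUROC of the confidence score for separating failures (1) from non-failures (0):
    the probability that a random non-failure outranks a random failure."""
    pos = conf[failure == 0]  # correctly classified samples
    neg = conf[failure == 1]  # failure cases
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def brier(conf, failure):
    """Brier score, treating the confidence as an estimate of P(correct)."""
    return np.mean((conf - (1 - failure)) ** 2)

failure = np.array([0, 0, 1, 0, 1])
conf = np.array([0.9, 0.8, 0.3, 0.7, 0.1])
recalibrated = conf ** 3  # monotone transform: same ranking, different calibration

# Ranking-based evaluation is unchanged; the proper scoring rule is not.
print(auroc_failure(conf, failure) == auroc_failure(recalibrated, failure))  # True
print(brier(conf, failure), brier(recalibrated, failure))
```

This is the sense in which calibration is orthogonal to Selective Classification: the transform changes the Brier score but leaves every ranking-based quantity untouched.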
---
Q1. “A similar task to SC is classification with rejection of OOD samples (sometimes called open-set recognition). In such cases, there isn't a ground truth to evaluate the risk on (in practice). Can the authors envision a way to adapt their metric to these scenarios?”
* We understand you are referring to settings where the SC model is validated on data containing classes that were not part of the training data. The AUGRC metric based on the binary failure label is directly applicable to these scenarios, as any prediction on unknown classes is by definition a misclassification (for a visualization, see Figure 5 in [1]). In Table 3, we report AUGRC values evaluated on various distribution shifts, also containing semantic and non-semantic new-class shifts.
* In case you are referring to a different kind of open-set recognition, we would ask for a more concrete task formulation for us to comment on.
---
Q2. “The metrics were scaled by ×1000 in Figure 2. Were they also rescaled for all the other results?”
* Thank you for pointing out the missing information. We rescaled all results apart from Figure 5. More precisely, we scaled the AURC and AUGRC values by 1000 in Figure 4 and Table 3. For the color-coded AUGRC in Figure 5, no rescaling was done. In response to your feedback, we updated the caption of Figure 4 to clearly state which values were rescaled.
---
Q3. “What does the optimal AUGRC mean? What would be the AUGRC of the Bayes-optimal model for a task? Is there an AUGRC value associated to a random classifier, as in AUROC?”
* The optimal AUGRC refers to the AUGRC for an optimal CSF, i.e. one that ranks all failure cases lower than correctly classified cases (AUROC$_f = 1$), but is conditioned on the performance of the underlying classifier. Please note that the CSF is not directly linked to the classifier output, but can be any (external) function that provides continuous confidence scores per classified sample. As such, in the general formulation, classifier and CSF are two independent entities and AUGRC assesses the performance of both jointly.
* The Bayes-optimal classifier by itself does not directly define a CSF and hence does not correspond to a specific AUGRC value. We believe that investigating the relation between Bayes-optimality and SC is an interesting direction of future work.
* On the level of detecting failure cases, i.e. given a fixed classifier with accuracy $acc$, assigning random confidence scores yields an AUROC$_f=0.5$. Following Equation 8, this corresponds to AUGRC $=(1-acc)/2$.
The same holds if the classification itself is random. There, however, the accuracy depends on the number of classes $K$, i.e. $acc=1/K$.
* In response to your questions, we re-formulated line 206 to: “The AUGRC for an optimal CSF (AUROC$_f=1$) is given by the second term in Equation 7”. We also added the AUGRC of the random classifier to the same section.
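The random-CSF value quoted above can also be checked numerically. The sketch below is a hedged, discrete approximation (our own helper, not the paper's code): it takes the generalized risk at coverage $k/n$ to be the fraction of all samples that are both among the $k$ most-confident and failures, and averages over coverage levels.

```python
import numpy as np

def augrc(conf, failure):
    """Discrete sketch of AUGRC: average, over coverage levels k/n, of the
    empirical generalized risk P(accepted AND failure) when the k most-confident
    samples are accepted. (Assumed finite-sample reduction of Equation 7.)"""
    order = np.argsort(-conf)            # accept most-confident samples first
    cum_failures = np.cumsum(failure[order])
    n = len(conf)
    return np.mean(cum_failures / n)     # average generalized risk over k = 1..n

rng = np.random.default_rng(0)
acc = 0.8
n = 200_000
failure = (rng.random(n) > acc).astype(float)

# Random confidence scores: AUGRC approaches (1 - acc) / 2 (Equation 8 with AUROC_f = 0.5).
augrc_random = augrc(rng.random(n), failure)
# Optimal CSF (all failures ranked last): AUGRC approaches (1 - acc)^2 / 2.
augrc_optimal = augrc(1.0 - failure, failure)
print(augrc_random, augrc_optimal)  # ~0.10 and ~0.02
```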
---
Q4. “Any intuitions about why the rankings of AURC and AUGRC were the same for iWildCam?”
* For the iWildCam dataset, we observe very robust method rankings for both AURC and AUGRC which don’t change between the two metrics. Overall, the volatility of method rankings varies from dataset to dataset (Figure 3).
* We agree that the investigation of specific methods and datasets is an interesting direction of future work, which we added to the conclusion section in the updated manuscript.
---
Thank you once more for your constructive feedback. As we believe we have resolved your comments, please let us know in case you have remaining suggestions to further increase the quality of our work. | Summary: The paper tackles the problem of Selective Classification (SC). The authors show problems with the existing metrics used in evaluating SC and propose to use the Area under the Generalized Risk Coverage curve (AUGRC). Empirical results provide useful insights about the effect of using this metric and show how it changes the relative ordering of state-of-the-art methods.
Strengths: - The problem tackled is very important, especially for the safe deployment of ML models, which requires knowing when not to trust the model.
- The proposed metric is well-motivated.
- The experiments are extensive.
- It is important for the community to know about this work as this changes the perception of SC and the best method to use.
Weaknesses: - It is important to add the details of the methods in the main paper. For example, DeepGamblers (DG) is referred to as DG in the main paper and this abbreviation is only explained in the appendix. Similarly for other baselines.
- Although the experiments are extensive, it would be good to include recent state-of-the-art SC methods such as Feng et al. (2023) to understand which method is the best to use.
References:
Feng et al: Towards Better Selective Classification
Technical Quality: 3
Clarity: 3
Questions for Authors: See the previous section
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for your valuable comments, and for taking the time to read our general reply, as well as considering our point-by-point comments here:
---
W1. “It is important to add the details of the methods in the main paper. For example, DeepGamblers (DG) is referred to as DG in the main paper and this abbreviation is only explained in the appendix. Similarly for other baselines.”
* Thank you for pointing out that the abbreviations DG and MCD-PE were used in the main text without reference to the explanation in the appendix. This was particularly the case in Section 4.2, where we explicitly compare these two confidence scoring functions. In response to your feedback, we adapted Section 4.2 to properly introduce the abbreviations DG and MCD-PE.
* While the main paper is now fully self-contained and all abbreviations are introduced, we would like to comment on the general decision to provide the details of utilized confidence scoring functions in the appendix. This is because the focus of our work is not on the confidence scoring methods, as in other studies, but on the metric, which is why we find it important to allocate sufficient space in the main paper to metric-related descriptions.
---
W2. “Although the experiments are extensive, it would be good to include recent state-of-the-art SC methods such as Feng et al. (2023) to understand which method is the best to use.”
* Please note that the focus of our experiments is to show the relevance of the proposed metric rather than the performance of individual methods. We argue that this metric relevance can be demonstrated with a diverse and representative set of prevalent confidence scoring functions, but it is not necessary to include all existing methods.
* The main conclusion in [2] is that “[...] selecting according to classification scores is the SOTA selection mechanism for comparison” (based on experiments with three SC methods, including DG). We want to point out that with DG-MCD-MSR, DG-PE, and DG-TEMP-MLS, our empirical evaluation already includes multiple CSFs that are trained with DG loss attenuation but are based on the classifier’s logits, aligning with the recommendation in [2].
* We agree that testing and comparing further novel methods with our proposed AUGRC metric is an interesting direction for future work. However, running all experiments required for this method within one week is not feasible for us.
In response to your feedback, we extended the first paragraph in Section 4 to include the method of Feng et al. in the discussion about future work.
[2] Feng et al., “Towards better Selective Classification”
---
Thank you once more for your constructive feedback. As we believe we have resolved your comments, please let us know in case you have remaining suggestions to further increase the quality of our work. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their valuable comments. The reviewers generally agreed on the added value of our work, noting that “The paper is well written, clear and easy to read” (QM33), “The metric is simple, yet effective” (xFt7), “The experiments are extensive” (BBe8), and “It is important for the community to know about this work as this changes the perception of SC and the best method to use” (BBe8).
However, one reviewer (izgv) did not yet recommend acceptance of the paper. Thus, in addition to the point-by-point responses below, we would like to address izgv’s main point of criticism here:
---
**Empirical evaluation: Missing correction for multiple testing in Section 4.1.** Reviewer izgv noted that a correction for multiple testing may be required when performing multiple pairwise ranking tests.
* We thank reviewer izgv for bringing up this point. We agree that when deducing overall stability of method rankings based on multiple pairwise tests, a correction for multiple testing such as the Bonferroni [1] or Holm [2] method is necessary.
* To address this concern, we conducted the empirical evaluation with both the Holm correction and the more conservative Bonferroni correction for multiple testing.
* Please find the detailed **results in the additional PDF**. Across the whole study, 913/936 pairwise differences were significant at alpha=0.05 without correction, 906/936 with the Holm correction, and 906/936 with the Bonferroni correction. In summary, the corrections did not alter our original statement that the results of the pairwise signed-rank tests “indicate stable method rankings for both AURC and AUGRC”.
* In response to the feedback we updated Figure 3 in the manuscript to include the Holm correction for multiple testing, following [3]. Further, we added the results without correction and with the Bonferroni correction to the appendix in section A.3 Additional Results.
[1] Dunn, “Multiple comparisons among means”\
[2] Holm, “A simple sequentially rejective multiple test procedure”\
[3] Wiesenfarth et al., “Methods and open‑source toolkit for analyzing and visualizing challenge results”
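For readers unfamiliar with the two corrections, here is a minimal sketch of both procedures on toy p-values (the p-values are illustrative, not the study's): Bonferroni compares every p-value against alpha/m, while Holm's step-down method relaxes the threshold as hypotheses are rejected, which makes it uniformly at least as powerful.

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i iff p_i <= alpha / m."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Holm's step-down procedure: compare the i-th smallest p-value against
    alpha / (m - i); stop rejecting at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

pvals = [0.001, 0.011, 0.02, 0.5]
print(bonferroni(pvals))  # [True, True, False, False]  (threshold alpha/m = 0.0125)
print(holm(pvals))        # [True, True, True, False]   (Holm rejects one more)
```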
---
We have thoroughly revised our manuscript to address the provided feedback. The list of changes we performed includes:
* We improved the explanations on the interpretability of the AUGRC in Section 3 based on the comments and questions by the reviewers xFt7 and izgv.
* We corrected two mistakes in the main text in Section 4.2, pointed out by izgv.
* We updated the significance maps displayed in Figure 3 to include the Holm correction for multiple testing. (izgv)
* We toned down strong formulations in the conclusion. (QM33)
* We renamed “multi-threshold metrics” to “holistic metrics” in several parts of the manuscript. (xFt7)
* We extended the Conclusion Section to point out directions of future work. (xFt7)
* We added details on the AUGRC computation time in the Appendix and Section 4. (izgv)
* We reduced the amount of acronyms used to improve clarity and readability. (izgv & BBe8)
---
We believe that these updates resolve the stated concerns of all reviewers. Please find our point-by-point answers in the respective reviewer sections.
Pdf: /pdf/9ca60360aa2465b964a947021fe21bd24724192c.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
DiffuserLite: Towards Real-time Diffusion Planning | Accept (poster) | Summary: The paper introduces a lightweight framework employing a Plan Refinement Process (PRP) for generating trajectories from coarse to fine-grained levels. This approach reduces redundant information modeling, significantly enhancing planning efficiency. DiffuserLite achieves an impressive decision-making frequency of 122.2Hz, making it 112.7 times faster than existing frameworks and suitable for real-time applications. Additionally, it can be integrated as a flexible plugin to accelerate other diffusion planning algorithms. Experiments demonstrate state-of-the-art performance on multiple benchmarks, highlighting DiffuserLite's robustness, efficiency, and practical applicability across various domains.
Strengths: - DiffuserLite achieves an impressive decision-making frequency of 122.2Hz, making it 112.7 times faster than existing frameworks, which is crucial for real-time applications.
- The PRP generates trajectories from coarse to fine-grained levels, reducing redundant information modeling and significantly enhancing planning efficiency.
- DiffuserLite can be integrated as a flexible plugin to accelerate other diffusion planning algorithms, demonstrating its adaptability and potential for widespread use across various domains.
Weaknesses: The paper is well-written and easy to follow; I do not see any obvious weaknesses.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The proposed DiffuserLite is a classifier-based conditional diffusion model. How does it provide gradients?
- There may exist some physical constraints for the generated trajectory, which are not differentiable and cannot provide gradients. How does DiffuserLite work in that case?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your effort in reviewing and acknowledging our work. Based on the questions you have raised, it seems you are particularly interested in the implementation details of DiffuserLite and its potential applications. We have already released the initial version of the code on [diffuserlite/diffuserlite.github.io](https://github.com/diffuserlite/diffuserlite.github.io/tree/main) to help you understand the algorithm better. And below is our response to your questions in the hope of addressing your concerns:
---
- **Q1:** Your question offers a different perspective on the planning procedure of DiffuserLite, namely treating the value critic as a classifier, which is indeed correct. This classifier plays two key roles in the planning framework: (a) guiding the diffusion model to generate high-performance trajectories, which can be achieved either through gradient calculations (referred to as CG, **commonly utilizing automatic differentiation in DL software packages**) or by using the value as input to a neural network (referred to as CFG); and (b) evaluating candidate plans so that the best one can be selected at each decision step. Although CG and CFG are mathematically equivalent, we chose CFG in DiffuserLite because the high computational cost of gradient calculations significantly reduces decision frequency. Another reason for choosing CFG is that our alternative generative backbone, Rectified flow, cannot use CG for guidance. To our knowledge, DiffuserLite is the first to validate Rectified flow's effectiveness in decision-making domains, and its reflow procedure further enhances decision frequency. To demonstrate the impact of using CG on decision frequency, **we tested DiffuserLite-D on MuJoCo benchmarks, resulting in an average decision frequency of 27.2Hz (CG) versus 68.2Hz (CFG), suggesting that CFG should be preferred when high decision frequency is crucial.**
- **Q2:** This question is highly relevant as real-world applications often encounter non-differentiable physical constraints. As a diffusion planning algorithm, **DiffuserLite offers various ways to address this issue**: (a) Explicitly filtering out trajectories that do not meet physical constraints during planning: DiffuserLite generates multiple candidate trajectories at each decision step and evaluates their quality to select the optimal one. Similar to search methods, this procedure allows for the explicit filtering of candidates that do not adhere to constraints. (b) Incorporating additional guidance: While many physical constraints are non-differentiable, we can soften these constraints and approximate them using a neural network classifier to guide the generation of trajectories that best satisfy these constraints. Previous research has shown significant improvements in the physical plausibility of generated trajectories by utilizing gradient guidance from the environment dynamic model [1]. Given DiffuserLite's flexible, plugin-based framework, this method can be incorporated with DiffuserLite to ensure compliance with physical constraints. In conclusion, **we believe that a combined approach using (a) and (b) can lead to favorable performance in most real-world applications**. As demonstrated in the paper, Robomimic benchmark offers a real-world robot arm dataset with inherent physical constraints. With method (a), DiffuserLite has already achieved very good performance on this benchmark.
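The CG/CFG distinction in our answer to Q1 can be sketched in a few lines. The combination rule below is the standard classifier-free-guidance update; the toy denoiser, shapes, and guidance scale are illustrative assumptions, not DiffuserLite's actual network.

```python
import numpy as np

def cfg_step(denoiser, x_t, t, cond, w):
    """One classifier-free-guidance prediction: combine conditional and
    unconditional outputs without any gradient computation.
        eps = eps_uncond + w * (eps_cond - eps_uncond)
    """
    eps_uncond = denoiser(x_t, t, cond=None)  # condition dropped
    eps_cond = denoiser(x_t, t, cond=cond)    # condition (e.g. target value) fed as input
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy linear "denoiser" standing in for a trained network.
def toy_denoiser(x_t, t, cond):
    shift = 0.0 if cond is None else cond
    return x_t * 0.5 + shift

x_t = np.ones(3)
eps = cfg_step(toy_denoiser, x_t, t=10, cond=2.0, w=1.5)
print(eps)  # w=1 recovers the conditional prediction; w>1 extrapolates past it
```

Note that, unlike CG, no backward pass through a critic is needed, which is where the decision-frequency gain comes from.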
---
I hope our response addresses your concerns. Please feel free to reach out if you need further clarification. Thank you once again for your insightful review.
---
[1] Ni, et al, "MetaDiffuser: diffusion model as conditional planner for offline meta-RL," in *Proceedings of the 40th International Conference on Machine Learning*, 2023. | Summary: The paper introduces a method to accelerate diffusion model-based planning called the Plan Refinement Process (PRP). This method divides the planning of the entire trajectory into several temporal spans, focusing on the most recent spans in each planning stage, thereby discarding redundant distant information. The paper also employs techniques such as critic design and rectified flow to ensure both the frequency and accuracy of planning. Experimental results show that the proposed DiffuserLite, which incorporates PRP, achieves a high decision-making frequency, which is significantly faster than predominant frameworks. This significantly enhances the real-time applicability of diffusion planning in various benchmarks. Additionally, DiffuserLite's flexible design allows it to serve as a plugin to improve other diffusion planning algorithms.
Strengths: One of the advantages of the paper is the combination of a coarse-to-fine architecture with diffusion model based planning. This method avoids redundant information in long-term planning and significantly improves planning efficiency. The authors also integrate classifier-free guidance (CFG), introduce a new critic, and use techniques such as rectified flow. As a result, the proposed DiffuserLite possesses the capability for fast and successful planning. Another advantage of the paper is the ability to flexibly apply the proposed method to other diffusion planning frameworks. This approach significantly increases the speed of planning inference with only a slight sacrifice in performance. The comprehensive comparisons in the paper demonstrate the potential of the proposed PRP in diffusion planning applications.
Weaknesses: One of the primary innovations of the paper is the introduction of the Plan Refinement Process (PRP). However, this hierarchical approach is not entirely original, as similar concepts have been introduced in prior works such as HDMI and HD-DA, mentioned in Section 6 (Related Works). Additionally, many of the other improvements, such as the use of rectified flow, rely on existing methods. As a result, the paper's novelty is somewhat limited.
Technical Quality: 3
Clarity: 2
Questions for Authors: In the Abstract, the website provided by the authors does not contain updated details and results. Please remember to update it.
In line 126, the authors mention that agents often struggle to reach distant states proposed by the planning process. Could you provide a deeper explanation for this? Otherwise, what is the significance of long-term planning?
In lines 176-178, why does this critic design make it difficult to distinguish better trajectories in short-term planning?
What is the "uniform critic" mentioned in line 186? Could you further explain it?
What are the specific metrics used in Tables 2, 3, and 4?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review. The concerns you have raised are insightful, and we aim to address them effectively in the following responses. To be concise, below, we use "Lite" to refer to "DiffuserLite".
---
**Weakness: Limited Novelty**
We aim to elucidate the differences between Lite and HDMI as well as HD-DA (HD-DA is in fact concurrent work; both it and Lite appeared on arXiv in January 2024) from the perspectives of **Motivation**, **Technology**, and **Reproducibility**.
**1. Motivation:** The motivation behind Lite lies in reducing redundant information during planning to increase decision frequency, whereas HDMI and HD-DA focus on generating higher-quality plans through "subgoal planning" and "goal reaching". The primary contribution of Lite lies in speed enhancement. Experimental results indicate that in terms of D4RL score, Lite outperforms HD-DA and HDMI, with a significantly higher increase in decision frequency **(Lite-R1: 52.2x $\gg$ HDMI: 1.8x $\approx$ HD-DA: 1.3x, compared to Diffuser)** [1,2]. We argue this disparity is due to the failure of the other two works to eliminate redundant information in planning, unlike Lite. *DiffuserLite addresses a distinct problem* and is not merely an improvement upon HDMI.
**2. Technology:** While all three algorithms exhibit a hierarchical structure, Lite offers additional insights:
- Lite demonstrates high-performance diffusion planning WITHOUT meticulous dataset handling or the inclusion of prior knowledge, unlike HDMI.
- Lite proves that disregarding distant redundant information and using a significantly shorter planning horizon can still yield excellent results and greatly enhance speed compared to HDMI and HD-DA.
- Lite conducts extensive ablation experiments discussing the impact of the number of hierarchical levels $L$ and interval at each level $I_l$ on performance, **summarizing best practices in hierarchical structure design**, a discussion lacking in HDMI and HD-DA (and they only use $L=2$).
- Lite validates its algorithm on **real-world datasets** like Robomimic and FinRL, showcasing its practical applicability, which is not extensively covered in HDMI and HD-DA.
- Lite boasts a simpler structure that can serve as a flexible plugin for other diffusion planners, as validated on AlignDiff, setting it apart from HDMI and HD-DA.
**3. Reproducibility:** Currently, only DiffuserLite has released code on [diffuserlite/diffuserlite.github.io](https://github.com/diffuserlite/diffuserlite.github.io), making its results the most reliable.
---
**Questions**
**Q1:** Thank you for your reminder. The website content is updated. : )
**Q2:** Planning-based methods suffer from compounding errors, leading agents to deviate from the plan over time [3]. Hence, agents execute only a few steps based on the plan (typically 1 step in planning-based RL algorithms). Therefore, we believe that detailing every aspect of plan generation is unnecessary. We believe successful long-term planning requires: (a) Sparser details for more distant parts (as long as they can accurately reflect the plan's quality). (b) More details for nearer parts to ensure decision consistency with the plan.
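The execute-only-a-few-steps scheme described above (receding-horizon replanning) can be sketched as follows; `plan` and `env_step` are placeholders for a real planner and environment, not our actual implementation.

```python
def receding_horizon_control(env_step, plan, state, n_steps):
    """Replan at every step and execute only the first planned action, so that
    compounding errors in the distant part of a plan are never acted upon.
    `plan(state)` returns a full action sequence over the horizon;
    `env_step(state, action)` advances the environment by one step."""
    trajectory = [state]
    for _ in range(n_steps):
        actions = plan(state)                # full plan over the horizon...
        state = env_step(state, actions[0])  # ...but only the first action is executed
        trajectory.append(state)
    return trajectory

# Toy 1-D example: the planner proposes unit steps toward a goal at position 5.
toy_plan = lambda s: [1 if s < 5 else 0] * 8
toy_env = lambda s, a: s + a
print(receding_horizon_control(toy_env, toy_plan, state=0, n_steps=6))
# [0, 1, 2, 3, 4, 5, 5]
```

Because only the nearest action matters at execution time, detail in the distant part of the plan is largely redundant, which is the motivation behind PRP.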
**Q3:** In lines 176-178, the critic used in Diffuser/DD predicts the cumulative return within the horizon, leading to greedy plans that lack long-term judgment. For example, Hopper in Table 2 is a single-legged hopping robot where faster movement yields higher rewards. A common failure mode in Diffuser and DD is that the agent jumps abruptly and then falls; we attribute this to the short-sightedness induced by greedy planning within the horizon. Value-based critics can alleviate this issue by providing long-term judgment.
**Q4:** In line 186, a 'uniform critic' refers to assigning the same judgment to all plans, assuming equal quality. DD generates one high-performance plan a priori for execution without a plan selection process. To align with our unified diffusion planning framework, DD can be viewed as utilizing a uniform critic for plan selection.
**Q5:** The metric used in Table 2 is the normalized D4RL score, which is the episodic return normalized against expert performance (100 means expert-level [4]). The metric used in Table 3 is the success rate of manipulation tasks in Robomimic [5]. The metric used in Table 4 is the episodic return in the realistic stock market simulation [6]. Across all tables, higher values indicate better performance.
---
I hope our response addresses your concerns. Please feel free to reach out if you need further clarification. Thank you once again for your insightful review. Moreover, we have released the code at https://github.com/diffuserlite/diffuserlite.github.io to facilitate a better understanding of the implementation details. I hope it can help you : )
---
### **References**
[1] Chen, et al., "Simple Hierarchical Planning with Diffusion," in The Twelfth International Conference on Learning Representations (ICLR), 2024.
[2] Li, et al., "Hierarchical Diffusion for Offline Decision Making," in Proceedings of the 40th International Conference on Machine Learning (ICML), 2023.
[3] Xiao, et al., "Learning to Combat Compounding-Error in Model-Based Reinforcement Learning," arXiv:1912.11206, 2019.
[4] Fu, et al., "D4RL: Datasets for Deep Data-Driven Reinforcement Learning," arXiv:2004.07219, 2021.
[5] Mandlekar, et al., "What Matters in Learning from Offline Human Demonstrations for Robot Manipulation," in Conference on Robot Learning (CoRL), 2021.
[6] Liu, et al., "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance," arXiv:2011.09607, 2022.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 7Mwp,
---
As the deadline for the discussion phase is approaching, we are particularly eager to receive your feedback, as we believe some of your comments may stem from misunderstandings. Your engagement in discussion and feedback is truly important to us. We also greatly hope that the detailed explanations, additional experiments, and code releases in the rebuttal can fully address your concerns.
---
Warm regards,
Authors of submission 9426.
---
Rebuttal 2:
Title: Reminder to reviewer
Comment: Reviewer 7Mwp,
As the author-reviewer discussion phase comes to a close, have the authors satisfactorily addressed your concerns? Please engage in the discussion if you still have any unsolved concerns.
Best,
AC | Summary: This paper introduces DiffuserLite, a lightweight framework that utilizes progressive refinement planning to reduce redundant information generation and achieves real-time diffusion planning.
Key contributions are:
- Introduced the plan refinement process (PRP) for coarse-to-fine-grained trajectory generation, reducing the modeling of redundant information.
- Introduced DiffuserLite, a lightweight diffusion planning framework, which significantly increases decision-making frequency by employing PRP.
Results:
- Achieved a decision-making frequency of 122.2Hz (112.7x faster than existing frameworks) and SOTA performance on multiple benchmarks, e.g. D4RL, Robomimic, and FinRL.
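As a quick, hedged illustration of why PRP reduces the amount of modeled information (the interval choices below are illustrative, not necessarily DiffuserLite's actual configuration): each level plans only a coarse sequence over its span and refines just the nearest interval at the next level.

```python
def prp_points(horizon, intervals):
    """Number of points modeled at each refinement level: level l plans its span
    at interval I_l, and the next level refines only the first (nearest) interval."""
    points = []
    span = horizon
    for interval in intervals:
        points.append(span // interval)
        span = interval  # only the nearest interval is refined further
    return points

full = 128
levels = prp_points(full, intervals=[32, 8, 1])
print(levels, sum(levels))  # [4, 4, 8] 16 -- versus 128 points for one-shot planning
```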
Strengths: Originality:
This paper introduced a novel approach in diffusion planning, PRP, which leads to various benefits, resulting in SOTA performance on several benchmarks.
Quality:
Results are impressive.
The experiment design is rigorous with comprehensive evaluation across multiple benchmarks.
There is detailed information about how the experiments are set up and carried out.
Clarity:
The paper is well written.
The figures are of good quality and help illustrate key points.
Significance:
The improvements and the flexibility of the method could potentially benefit real-time decision-making applications.
Weaknesses: Details on Implementation and design choices:
Discussion in the main paper about design choices and hyperparameter selection would benefit this paper. More ablation studies would be helpful to understand how this method can be generalized.
Many of the details are in the appendix, requiring readers to go through the appendix to fully understand the paper.
Some of the tables would benefit from clearer presentation, e.g., Table 6. While it does save some space, it is really hard to read.
Complexity and generalization: While the paper shows very impressive results on the benchmarks in the paper, the multi-level refinement process introduces considerable complexity. Additional theoretical analysis or more experimental results would help. It could be useful to provide data-driven recommendations on how to make design choices (e.g., for different levels) to make the method more broadly applicable.
Further discussion on the scalability of this method to more complex, real-world scenarios would be valuable.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper discussed L = 2,3,4, what would the behavior be if L >=5?
2. It would be helpful if the paper could share any specific limitations or trade-offs the authors found during the development of this method. It will guide people on how to use the method and strengthen the paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors addressed limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful suggestions. I am deeply touched by your diligent review and appreciate your dedication. I summarize the questions you raised and respond below:
---
- **Implementation Details**: We have released the code at [diffuserlite/diffuserlite.github.io](https://github.com/diffuserlite/diffuserlite.github.io) to facilitate a better understanding of the implementation details. Furthermore, in response to your suggestion, we will incorporate more details on implementation and design choices into the main text during revision to avoid the necessity of referring to the appendix for complete comprehension.
- **Clearer Presentations**: We commit to double-checking all figures and tables during revision to ensure they are presented more clearly and address any potential ambiguity issues.
- **Real-World Tests**: Robomimic and FinRL benchmarks provide real-world datasets that significantly differ from D4RL. The strong performance of DiffuserLite on these benchmarks demonstrates its potential for real-world applications. Besides, our released code allows for easy adjustments of model size and training datasets to evaluate scalability in more complex real-world scenarios. We will explore this in future work.
- **PRP-Introduced Complexity**: While the methodology may appear complex, the implementation of our algorithm is not significantly more intricate than one-shot generation. After excluding components such as neural networks, generative backbones, and the reflow procedure, **the core algorithm comprises approximately 100 lines of code**, alleviating concerns regarding complexity.
- **Design Choice Recommendations**: More PRP levels allow for more redundant information to be discarded but may also accumulate errors across levels. In the revision, we will provide theoretical justifications from this perspective. However, due to numerical calculation and many estimations in PRP, theoretical justifications may not suggest an optimal configuration. As such, we conduct additional experiments to explore various design choices, as detailed in the following table. **This table is an expanded version of Table 6 in the paper.** You can see that the third column is the right part of Table 6. **The results align with the recommendations in Appendix C, emphasizing that 3 levels are a practical choice balancing performance** and decision frequency, while too many levels can lead to error accumulation and reduced decision frequency, and too few levels may complicate trajectory distribution simplification, resulting in lower performance. **It is also important to balance the horizon length at each level**, avoiding overweighting at the top or bottom. Additional design choice recommendations can be found in Appendix C, and we will integrate these details into the main text based on your feedback for the revision.
| Number of Levels | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- |
| Design Choice | [5,33]/[9,17]/[17,9]/[33,5] | [3,5,17]/[5,5,9]/[9,5,5]/[17,5,3] | [3,3,3,17]/[5,3,5,5]/[5,5,3,5]/[17,3,3,3] | [3,3,5,3,5] |
| cheetah-me | 57.6/75.6/89.1/82.2 | 85.6/88.5/88.6/89.0 | 81.6/88.3/88.9/84.9 | 85.5 |
| antmaze-ld | 0/0/69/15.3 | 34.7/68.0/67.3/34.0 | 34.3/68/70/60 | 26.7 |
| Design Choice | [5,13]/[7,9]/[9,7]/[13,5] | [3,3,13]/[4,5,5]/[5,5,4]/[13,3,3] | [3,3,3,7]/[3,4,3,5]/[5,3,4,3]/[7,3,3,3] | [3,3,3,3,4] |
| kitchen-p | 69/72.8/72/74.5 | 66.7/74.4/74.2/31.7 | 72.8/74.4/73.8/74 | 71.7 |
| Average Cost | 0.0175s | 0.0209s | 0.0266s | 0.0321s |
- **Performance with $L=5$**: To address your query, we include an experiment with $L=5$ in the table above (column 4). Increasing the number of levels beyond a certain point may lead to error accumulation and reduced decision frequency. Considering these factors, we still recommend using $L=3$.
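One possible way to read the design-choice lists in the table above (our own inference from the numbers, not a formula stated by the authors): if each level's plan is refined by the next level filling in one segment, then per-level horizons $[h_0,\dots,h_{L-1}]$ cover a total of $\prod_l (h_l-1)+1$ steps. This is consistent across every column of the table: all cheetah/antmaze configurations give 129 and all kitchen configurations give 49.

```python
from math import prod

def covered_horizon(level_horizons):
    """Total horizon covered if each level's h waypoints are refined by the
    next level filling one segment: prod(h - 1) + 1.
    This is our reading of the design-choice lists, not the paper's formula."""
    return prod(h - 1 for h in level_horizons) + 1

# All configurations within one column of the table cover the same horizon.
assert covered_horizon([5, 5, 9]) == 129      # default cheetah/antmaze design
assert covered_horizon([3, 3, 5, 3, 5]) == 129
assert covered_horizon([4, 5, 5]) == 49       # kitchen designs
assert covered_horizon([3, 3, 3, 3, 4]) == 49
```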
---
I hope our response addresses your concerns. Please feel free to reach out if you need further clarification. Thank you once again for your insightful review.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments and the detailed results. They helped resolve many of my questions.
"the core algorithm comprises approximately 100 lines of code" -> Just to clarify, by complexity, I mean more the complexity required when making design choices and generalizing to other tasks, base models, etc. Therefore, complexity may not be measured by lines of code: e.g., even if the solution is just one line of code but there are 100 parameters to tune to ensure good results, I still think the complexity is there.
Thank you for the detailed results for the design choice recommendations. It would be super helpful if they could be included in some form in the final version.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback on our work and your appreciation of our rebuttal! I now better understand what you meant by "complexity." I'm sorry to say that introducing PRP does indeed require the selection of additional design choices. However, in practice, as you can see from the experimental results, selecting these design choices is not particularly challenging, and the performance is not very sensitive to these choices. I believe that this introduced "complexity" won't be a significant issue.
Thank you once again! If you have any other questions during the discussion period, please don't hesitate to contact us. We'll do our best to respond and further improve the paper! | Summary: TLDR; Unlike traditional methods that generate the entire trajectory at once, this process gradually refines the plan at each stage, reducing computational costs and improving real-time performance.
DiffuserLite is a lightweight diffusion planning framework designed to increase decision-making frequency in real-time applications. Traditional diffusion planning methods are limited by the high computational costs of modeling long-horizon trajectory distributions, leading to low-frequency decision-making. To overcome these limitations, DiffuserLite introduces a Progressive Refinement Process (PRP) that reduces redundant information and progressively generates fine-grained trajectories. As a result, DiffuserLite demonstrates superior performance compared to existing methods across various benchmarks such as D4RL, Robomimic, and FinRL. Additionally, DiffuserLite is designed as a flexible plugin that can be easily integrated with other diffusion planning algorithms, making it a valuable contribution to future research and practical applications.
Strengths: A simple idea with an order of magnitude speed improvement.
- The PRP process reduces time cost by starting with coarse goals and refines them into detailed goals.
- It reduces unnecessary redundant planning and ensures faster inference.
- With the speed improvement, DiffuserLite shows reasonable performance.
Flexibility (RQ3)
- In the form of a plugin, it can be applied to other planning methods such as AlignDiff.
Weaknesses: - The terms or concepts such as "reflow" lack adequate explanation. Although they are mentioned in the appendix, it is necessary to provide an explanation or a reference to this section in the main text. Even a basic explanation in the main text would be beneficial.
- The abbreviation “PRP” is used to abbreviate multiple terms. On page 2, it refers to "plan refinement process," while on page 4, it stands for "progressive refined planning." Although these terms appear to be used interchangeably, it would be helpful to clarify this in the text.
- It would be beneficial to have a theoretical justification for the experimental results. The concept is straightforward, but it would be helpful to explain why certain configurations of levels are better in the ablation study comparing the levels of DiffuserLite. Currently, the analysis seems too intuitive and empirical.
- DiffuserLite adopts Conditional Filtered Guidance (CFG) instead of Conditional Guidance (CG) to prevent speed degradation. However, the adaptation of CFG requires adjusting the target condition at each level in the multi-level structure. It would be beneficial to compare the performance and speed of DiffuserLite using CG in addition to CFG.
- Equation 10 in the paper shows that the optimal value function is added as a term in the critic to solve the zero-critic-value issue in sparse reward settings for fine-grained planning. However, using this optimal value requires an offline methodology, which limits the approach. Methods studied in online RL may replace the optimal value. For example, it would be possible to consider studies on exploration strategies such as intrinsic motivation or curiosity-based rewards, or value-based method enhancement studies such as double Q-learning or dueling DQN.
Technical Quality: 2
Clarity: 2
Questions for Authors: - What is the default model for Lite w/o PRP that you mentioned in section 5.5? You mentioned a one-level model, but the default design in Table 6 has three temporal horizons [5,5,9]. Is the entire planning horizon used as the temporal horizon? It would be helpful to clearly specify the actual temporal horizon used in the study.
- In Figure 1 and Table 2, the scores of DD and Diffuser for the antmaze diverse dataset are recorded as 0. However, in the ICLR 2024 paper "Reasoning with Latent Diffusion in Offline Reinforcement Learning," those models show higher scores for antmaze-diverse-v2. This discrepancy suggests that either a different dataset version was used or the hyperparameter settings need to be adjusted for a fair comparison.
- The trade-off between search depth and removing redundant information seems like a good idea. But is there an optimal point for this trade-off? Or is it considered per task?
- I think critic design is one of the most important parts of this paper, but it lacks explanation. Is there any literature that solves this problem by modifying the critic? Is there any other research that uses the sum of discounted reward and discounted optimal value to solve the problems caused by sparse rewards?
- According to Equation 10, Critic has the optimal value term. However, this approach is limited to offline methodology. Are there any alternatives or methods to overcome this limitation?
- The paper explains that CFG is adopted instead of CG to prevent speed degradation. Given that CFG requires adjusting the target condition at each level in the multi-level structure, are there any methods to mitigate this limitation? Additionally, could you provide experimental results comparing the performance and speed of DiffuserLite using CG?
- Table 2 shows that DiffuserLite-R1, which uses rectified flow, achieves the best performance across most metrics. However, the paper lacks a detailed analysis of these results. Could you explain the specific mechanisms through which rectified flow contributes to performance improvement?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: - The justification for how to choose the number of planning levels and the temporal horizons is insufficient. It would have been beneficial to compare the number of planning levels or temporal horizons with respect to the learning/inference frequency, analyzing the trade-offs when these factors are changed. Additionally, comparing whether it is better to choose the temporal horizon in a bottom-up or top-down manner according to the levels would have provided a more comprehensive analysis.
- DiffuserLite may also have some societal impacts, such as expediting the deployment of robotic products, and it could potentially be utilized for military purposes. If the authors provided more examples that highlight the practical aspects of DiffuserLite, readers could more easily understand the potential of the paper.
- For some baselines (DD and Diffuser), the results shown in Table 2 are so low that the experimental comparison appears unfair. Refer to the results in Table 1 of the existing study (arXiv:2309.06599) on diffusion-based models performing one-shot planning. For the same benchmarks (antmaze-diverse, kitchen-partial, kitchen-mixed), they showed better baseline performance. It is expected that even with the step-by-step planning method of this study, proper hyperparameter tuning could achieve at least similar performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful suggestions. I am deeply touched by the diligent review and appreciate your dedication. I summarize the issues and respond below to address your concerns:
---
- **Paper revision:** Following your suggestion, during revision, we will further introduce *Reflow* in the main text, address PRP's multiple-abbreviation issue, update the D4RL score report for Diffuser/DD, and shift the design choices from the appendix to the main text.
- **About PRP configuration:** More PRP levels allow for more redundant information to be discarded but may also accumulate errors across levels. We will provide theoretical justifications from this perspective during revision. However, due to numerical calculation and many estimations in PRP, theoretical justifications may not suggest an optimal configuration. As such, we conduct additional experiments to explore various design choices, as detailed in the following table. **This table is an expanded version of Table 6 in the paper.** You can see that the third column is the right part of Table 6. **The results align with the recommendations in Appendix C, emphasizing that 3 levels are a practical choice balancing performance and decision frequency. It is also important to balance the horizon length at each level**, avoiding overweighting at the top or bottom. Additional design choice recommendations can be found in Appendix C, and we will integrate these details into the main text based on your feedback for the revision.
| Number of Levels | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- |
| Design Choice | [5,33]/[9,17]/[17,9]/[33,5] | [3,5,17]/[5,5,9]/[9,5,5]/[17,5,3] | [3,3,3,17]/[5,3,5,5]/[5,5,3,5]/[17,3,3,3] | [3,3,5,3,5] |
| cheetah-me | 57.6/75.6/89.1/82.2 | 85.6/88.5/88.6/89.0 | 81.6/88.3/88.9/84.9 | 85.5 |
| antmaze-ld | 0/0/69/15.3 | 34.7/68.0/67.3/34.0 | 34.3/68/70/60 | 26.7 |
| Design Choice | [5,13]/[7,9]/[9,7]/[13,5] | [3,3,13]/[4,5,5]/[5,5,4]/[13,3,3] | [3,3,3,7]/[3,4,3,5]/[5,3,4,3]/[7,3,3,3] | [3,3,3,3,4] |
| kitchen-p | 69/72.8/72/74.5 | 66.7/74.4/74.2/31.7 | 72.8/74.4/73.8/74 | 71.7 |
| Average Cost | 0.0175s | 0.0209s | 0.0266s | 0.0321s |
- **About CG and CFG:** We do not use CG primarily for two considerations: (a) CG requires NN gradient computation, which is very expensive. A comparison of decision frequency for DiffuserLite-D on MuJoCo, **27.2Hz (CG) vs. 68.2Hz (CFG)**, suggests that CFG should be preferred when high decision frequency is crucial. (b) Rectified flow cannot use CG for guidance. Therefore, we solely employ CFG in Lite. Although CFG requires specifying a target condition for each level, various methods can address this: (a) using an additional NN to predict the max reachable return for the current state as the target or employing more sophisticated methods in [1], (b) we also observe that Level 0 has the most significant impact on decision performance, with most cases requiring only the tuning of the target condition for Level 0.
- **About critic design:** (Why is it helpful when sparse rewards?) Assuming $x_s$ as the planned trajectory and $\tau$ as $x_s$ and its future trajectory, $R$ estimates the trajectory's discounted return. A critic with only reward terms optimizes $\mathbb E_{q_0(x_0),q(\epsilon),s,\tau}\left[\Vert\epsilon_\theta(x_s,s,R(\tau))-\epsilon\Vert^2\right]$, whereas a critic with value terms optimizes $\mathbb E_{q_0(x_0),q(\epsilon),s}\left[\Vert\epsilon_\theta(x_s,s,\mathbb E_\tau[R(\tau)])-\epsilon\Vert^2\right]$. When rewards are sparse, a critic with value terms can receive more stable conditions during training, leading to better performance in sparse reward tasks. Additionally, **DiffuserLite does not restrict the use of any specific critic**; as demonstrated in AlignDiff-Lite, we utilized an attribute strength model [2] as a critic for preference aligning. **You can also deploy DiffuserLite in online tasks using curiosity-based rewards or value-based methods**.
- In Section 5.5, *Lite w/ only last level* refers to one level with a horizon of 9, while *Lite w/o PRP* refers to one level with a horizon of 129. I apologize for any lack of clarity.
- **Low Performance of Diffuser and DD in Antmaze:** In Antmaze, we run official code for Diffuser and referenced the report in [3] for DD. **We did not intentionally report low scores.** The higher scores in LDGC [4] may be due to (a) their use of an inpainting version of Diffuser as the original version yielded poor results (page 9, line 2), and (b) careful tuning of DD hyperparameters. To ensure a fair comparison, **we will update the score report during revision**. However, **Diffuser and DD still exhibit significantly lower scores than Lite.** Regarding Kitchen tasks, we have double-checked to confirm that both LDGC and our reports align with official reports.
- **About DiffuserLite-R1:** We attribute its best performance to the straight flow property of Rectified Flow, which confers an advantage in very few sampling steps. Compared to Lite-D, we only change the generative backbone from diffusion model to rectified flow, without any other tricks.
- **Practical Aspects:** Robomimic and FinRL provide real-world datasets with substantial differences compared to D4RL. The strong performance of Lite on these datasets underscores its potential for real-world applications.
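As a concrete illustration of the critic design discussed above (a minimal sketch under our own naming, not the paper's implementation): conditioning on the sum of discounted in-segment rewards plus the discounted value at the end of the planned segment keeps the training signal informative even when every in-segment reward is zero.

```python
import numpy as np

def value_augmented_return(rewards, v_terminal, gamma=0.99):
    # Discounted in-segment rewards plus the discounted value estimate at
    # the segment's end; `v_terminal` stands in for an (optimal) value
    # estimate V(s_H). Names here are illustrative assumptions.
    discounts = gamma ** np.arange(len(rewards))
    return float(discounts @ np.asarray(rewards, dtype=float)
                 + gamma ** len(rewards) * v_terminal)

# Sparse-reward segment: all rewards are zero, yet the conditioning target
# stays non-zero thanks to the terminal value term.
target = value_augmented_return([0.0, 0.0, 0.0, 0.0], v_terminal=5.0, gamma=0.9)
```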
---
I hope our response addresses your concerns. Please feel free to reach out if you need further clarification. Thank you once again for your insightful review.
---
[1] Yu, et al, "Regularized Conditional Diffusion Model for Multi-Task Preference Alignment," in arXiv, 2404.04920, 2024.
[2] Dong, et al, "AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model," in The Twelfth International Conference on Learning Representations, 2023.
[3] Hong, et al, "Diffused Task-Agnostic Milestone Planner," in arXiv, 2312.03395, 2023.
[4] Venkatraman, et al, "Reasoning with Latent Diffusion in Offline Reinforcement Learning," in arXiv, 2309.06599, 2023.
---
Rebuttal 2:
Comment: I have carefully reviewed your detailed responses to my comments. Your commitment to addressing the points raised is appreciated. Given the promised revisions and the forthcoming code release, I believe the simple yet effective idea of PRP could indeed be valuable in reducing the notorious planning time of diffusion models for planning.
While I find the empirical results compelling, I would have been even more supportive if there were stronger theoretical justifications for the approach. Nonetheless, the practical benefits of the method are clear.
Considering these factors, I am revising my overall assessment from 5 to 6.
Here are a few remaining questions.
1. Regarding CFG and CG: In your response, you mentioned that CG was not used for several reasons, including decision frequency. Is there a difference in performance (score) between using CG and using CFG without considering rectified flow? In Section 4, you stated that you used CFG for decision frequency. However, in the conclusion, you mentioned the limitations of DiffuserLite with CFG being the main cause of these limitations. Could using CG address this issue? It would be helpful to explain why it was necessary to improve decision frequency by opting for CFG instead of CG. As it currently reads, your writing suggests, "We use CFG because CG is not good, but CFG also has limitations!" If CG can address this issue, it would be better to explain why decision frequency is important despite those limitations. If not, it would be clearer to explain that the issue is not about the limitations between CG and CFG.
2. Additional question about the plugin: Have you tried using this method as a plugin for other backbones besides AlignDiff? It would be beneficial to include any tests that show whether it can be applied to more major methods like Diffuser or HDMI, rather than AlignDiff.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for your thoughtful feedback and for increasing the score. I sincerely apologize for the delayed response (it took some time to train and test the models). I hope the following reply addresses your questions:
**A1.** The normalized scores for the CG/CFG guided versions are listed below. We observe a slight performance drop when using CG. This drop was also noted in our earlier experiments, which was one of the initial reasons for choosing CFG. I believe the first work to propose using CFG was Decision Diffuser (DD), motivated by CFG's superior performance in image generation tasks compared to CG. As for why CFG performs better in decision-making tasks, there's still a lack of systematic research. Since this is not the focus of DiffuserLite, we didn't discuss it in detail. However, I have observed that CG-generated trajectories are more prone to OOD issues as guidance strength $w$ increases, which might be one reason for the performance drop.
|env|CG|CFG|
|---|---|---|
|halfcheetah-me|87.9±1.6|88.5±0.4|
|kitchen-p|66.2±4.1|74.4±0.6|
|antmaze-ld|46.1±6.2|68.0±2.8|
**A2.** Can CG address the issue of adjusting the target return in CFG? I don't think it can fully address this. In practice, CG requires tuning one hyperparameter (guidance strength), while CFG needs two (guidance strength and target return). Although it seems CG has one less parameter to tune, the guidance strength has a more significant impact on performance. The target return is normalized to [0, 1] based on the max/min values in the dataset, so during inference, choosing a relatively large value like [0.7-1.1] usually works well and is easy to tune. On the contrary, we must carefully tune the guidance strength for both CG and CFG.
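For readers unfamiliar with the two knobs mentioned above, the standard classifier-free guidance combination (a generic sketch, not DiffuserLite's exact code) mixes conditional and unconditional noise predictions using the guidance strength $w$; the normalized target return enters only through the condition fed to the conditional branch.

```python
import numpy as np

def cfg_combine(eps_cond, eps_uncond, w):
    # Standard CFG: eps_hat = eps_uncond + w * (eps_cond - eps_uncond).
    # w = 0 recovers the unconditional prediction; w = 1 the conditional one.
    eps_cond, eps_uncond = np.asarray(eps_cond), np.asarray(eps_uncond)
    return eps_uncond + w * (eps_cond - eps_uncond)

# In the setup described above, eps_cond would be the model's prediction given
# a normalized target return (e.g. somewhere in [0.7, 1.1]).
eps_hat = cfg_combine([0.2, -0.1], [0.0, 0.0], w=1.5)
```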
**A3.** Why did we choose AlignDiff as the backbone? This is because Diffuser/DD/HDMI are all reward-maximizing algorithms (And DiffuserLite can be seen as DD+PRP). We felt that these algorithms with PRP would still be solving the same problem in the same domain, which wouldn't effectively demonstrate the "flexible plugin" capability. Therefore, we chose AlignDiff, a preference-matching algorithm, to see if DiffuserLite could be applied to a completely different task domain - and the answer is yes. Can DiffuserLite be used with Diffuser and HDMI? Absolutely. To integrate with Diffuser, the CG version mentioned in A1 & A2 can be viewed as Diffuser+PRP (if we ignore the inverse dynamic model). To integrate with HDMI, HDMI's heuristic method for automatically selecting high-quality sub-goals could also be used to provide different levels of training data for DiffuserLite, allowing PRP to generate more meaningful and high-quality subgoals. However, implementing HDMI+PRP might have to wait until they open-source their code.
Thank you again for your insightful comments and patience! If you have any further questions during the discussion phase, please don't hesitate to contact us. We'll do our best to respond and further improve the paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data | Accept (poster) | Summary: This paper proposes two stochastic optimization algorithms for instrumental variable regression (IVaR) that operate on streaming data without requiring matrix inversions or mini-batches. When the true model is linear, the paper proves that TOSG-IVaR converges at a rate of $\mathcal{O}(\log T/T)$ and OTSG-IVaR at a rate of $O(1/T^{1 - \iota})$ for any $\iota > 0$, where $T$ is the number of iterations. The proposed approaches avoid the "forbidden regression" problem of having to estimate the nuisance parameter relating $Z$ and $X$. Numerical experiments validate the theoretical results.
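A minimal sketch of the two-sample streaming idea (our illustrative reconstruction from this summary and the rebuttal's moment bounds, which involve two conditionally independent draws $X, X'$ given $Z$; not the paper's actual code): with gradient estimate $(X^\top\theta - Y)X'$, each update touches one streamed tuple and needs no matrix inversion or mini-batch.

```python
import numpy as np

def streaming_ivar(stream, d, steps, c0=0.1):
    # SGD on theta with the two-sample gradient estimate (X^T theta - Y) * X',
    # where X and X' are conditionally independent given the same instrument Z.
    # The c0/sqrt(t) step-size schedule is an assumption of this sketch.
    theta = np.zeros(d)
    for t in range(1, steps + 1):
        Y, X, Xp = stream()
        theta -= (c0 / np.sqrt(t)) * (X @ theta - Y) * Xp
    return theta

# Toy linear model with endogeneity: the confounder u enters both X and Y,
# so plain least squares on (X, Y) is biased, but Z is a valid instrument.
rng = np.random.default_rng(1)
theta_star = np.array([1.0, -2.0])

def stream():
    Z = rng.normal(size=2)
    u = rng.normal()                                   # confounder
    X = Z + u + 0.1 * rng.normal(size=2)               # first draw given Z
    Xp = Z + rng.normal() + 0.1 * rng.normal(size=2)   # second, independent draw
    Y = X @ theta_star + u + 0.1 * rng.normal()
    return Y, X, Xp

theta_hat = streaming_ivar(stream, d=2, steps=50000)
```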
Strengths: 1. The paper provides a novel perspective on instrumental variable regression (IVaR) by formulating it as a conditional stochastic optimization problem, which allows for the development of fully online algorithms that operate on streaming data.
2. The proposed algorithms, TOSG-IVaR and OTSG-IVaR, avoid the need for matrix inversions and mini-batches, making them computationally efficient and suitable for large-scale datasets.
3. By directly solving the conditional stochastic optimization problem, the algorithms avoid the "forbidden regression" issue and the need to approximate a dual variable over a continuous functional space, which is a limitation of prior minimax formulations.
4. This paper is well-structured. The background, contributions, and limitations are explicitly presented. The proof techniques are clearly summarized and the experimental results do support the performance of both algorithms.
Weaknesses: 1. The paper does not provide a comprehensive comparison with existing IVaR methods, particularly in terms of computational complexity and empirical performance on real-world datasets. Such a comparison would strengthen the paper's contributions.
2. The assumptions made for the theoretical analysis, such as the identifiability conditions and moment assumptions, may be restrictive in some practical settings. A discussion on the robustness of the proposed algorithms under violations of these assumptions would be valuable.
Technical Quality: 3
Clarity: 4
Questions for Authors: The theoretical analysis focuses on the linear model setting. What are the challenges in extending the analysis to non-linear models, and what kind of assumptions or modifications would be required?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The limitations and future work are thoroughly discussed in the paper. Apart from that, I do not see additional limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer AcA7,
Thank you for your insightful review. Here we provide response to your questions and concerns.
### **Weaknesses**
**W1** - Please note that many existing IV methods are not applicable to the online/streaming setting. We also wish to clarify that in Appendix B, Table 1, we have compared the computational complexities (specifically, per-iteration arithmetic and memory complexity) of our method to that from [DVB23b] (which is a recently proposed online/streaming IV method) and demonstrated our benefits. In our experimental results (Section 4), we have also compared our algorithms against that in [DVB23b] and demonstrated the benefits. In particular, we wish to highlight that our algorithm converges much faster.
Finally, as a part of the rebuttal, we have provided empirical results on real-world dataset as well. Compared to [DVB23b], our method also performs better on the considered real-world datasets.
If you are aware of another IVaR method that is applicable to the online/streaming setting that we have missed in our literature review, we would greatly appreciate if you could point it out specifically to us. We will be happy to compare against that in our revision as well. Thank you in advance.
**W2** - First, we would like to point out that the identifiability conditions and moment assumptions we make are rather standard in the literature and can be satisfied in multiple cases (see, e.g., Lemma 1). They are also satisfied in our simulation experiments. Further relaxations of the moment and identifiability conditions have indeed been made in the literature in the offline setting. Taking your suggestion into account, we will add a discussion of them in our revision. A detailed study of developing algorithms and sample complexity results under relaxed assumptions is truly beyond the scope of this work.
Regarding the robustness aspect, we point out under the setting in Section 4, identifiability conditions (Assumption 2.1) can be easily verified. Moment assumptions (Eq. (5), (6), (7), (8)) also hold:
- (5)$$\begin{aligned}&\mathbb{E}\Big[\|X'X^\top-\mathbb{E}\_{X|Z}[X]\mathbb{E}\_{X|Z}[X]^\top\|^2\Big]\\\\
=&\mathbb{E}\Big[\|c(h'+\epsilon_x')\phi(\gamma_{\ast}^\top Z)^\top+c\phi(\gamma_{\ast}^\top Z)(h+\epsilon_x)^\top+c^2(h'+\epsilon_x')(h+\epsilon_x)^\top-c\mathbf{1}\_{d_x}\phi(\gamma_{\ast}^\top Z)^\top-c\phi(\gamma_{\ast}^\top Z)\mathbf{1}\_{d_x}^\top-c^2\mathbf{1}\_{d_x}\mathbf{1}\_{d_x}^\top\|^2\Big]\\\\
\leq&3\mathbb{E}\Big[\|c((h'+\epsilon_x')-\mathbf{1}\_{d_x})\phi(\gamma_{\ast}^\top Z)^\top\|^2\Big]+3\mathbb{E}\Big[\|c\phi(\gamma_{\ast}^\top Z)((h+\epsilon_x)-\mathbf{1}\_{d_x})^\top\|^2\Big]+3\mathbb{E}\Big[\|c^2((h'+\epsilon_x')(h+\epsilon_x)^\top-\mathbf{1}\_{d_x}\mathbf{1}\_{d_x}^\top)\|^2\Big]\\\\=&\mathcal{O}(c^2d^2+c^4d^2).
\end{aligned}$$
- (6)$$\begin{aligned}&\mathbb{E}\Big[\|YX'-\mathbb{E}\_{Y|Z}[Y]\mathbb{E}\_{X|Z}[X]\|^2\Big]\\\\=&\mathbb{E}\Big[\|(\theta_{\ast}^\top X+c\cdot(h_1+\epsilon_y))X'-(\theta_{\ast}^\top\mathbb{E}\_{X|Z}[X]+c)\mathbb{E}\_{X|Z}[X]\|^2\Big]\\\\\leq&3\mathbb{E}\Big[\|(X'X^\top-\mathbb{E}\_{X|Z}[X]\mathbb{E}\_{X|Z}[X]^\top)\theta_*\|^2\Big]+3c^2\mathbb{E}\Big[\|(h_1+\epsilon_y-1)\phi(\gamma_{\ast}^\top Z)\|^2\Big]+3c^4\mathbb{E}\Big[\|(h_1+\epsilon_y)(h'+\epsilon_x')-\mathbf{1}\_{d_x}\|^2\Big]\\\\=&\mathcal{O}(\|\theta_*\|^2(c^2d^2+c^4d^2)+c^2d_x^2+c^4).\end{aligned}$$
- (7)$$\begin{aligned}&\mathbb{E}\Big[\|\mathbb{E}\_{X\mid Z}[X]\cdot\mathbb{E}\_{X\mid Z}[X]^\top-\mathbb{E}_Z\Big[\mathbb{E}\_{X\mid Z}[X]\cdot\mathbb{E}\_{X\mid Z}[X]^\top\Big]\|^2\Big]\\\\
\leq&3\mathbb{E}\Big[\|\phi(\gamma\_{\ast}^\top Z)\phi(\gamma\_{\ast}^\top Z)^\top-\mathbb{E}\_{Z}\Big[\phi(\gamma\_{\ast}^\top Z)\phi(\gamma\_{\ast}^\top Z)^\top\Big]\|^2\Big]+3c^2\mathbb{E}\Big[\|\mathbf{1}\_{d_x}\phi(\gamma\_{\ast}^\top Z)^\top-\mathbb{E}\_Z\Big[\mathbf{1}\_{d_x}\phi(\gamma\_{\ast}^\top Z)^\top\Big]\|^2\Big]+3c^2\mathbb{E}\Big[\|\phi(\gamma\_{\ast}^\top Z)\mathbf{1}\_{d_x}^\top-\mathbb{E}\_{Z}\Big[\phi(\gamma\_{\ast}^\top Z)\mathbf{1}\_{d_x}^\top\Big]\|^2\Big]\\\\
=&\mathcal{O}(d_z+c^2d_xd_z).\end{aligned}$$
- (8)$$\begin{aligned}&\mathbb{E}\Big[\|\mathbb{E}\_{Y\mid Z}[Y]\cdot\mathbb{E}\_{X\mid Z}[X]-\mathbb{E}\_Z\Big[\mathbb{E}\_{Y\mid Z}[Y]\cdot\mathbb{E}\_{X\mid Z}[X]\Big]\|^2\Big]\\\\\leq&2\mathbb{E}\Big[\|(\mathbb{E}\_{X\mid Z}[X]\cdot\mathbb{E}\_{X\mid Z}[X]^\top-\mathbb{E}\_{Z}\Big[\mathbb{E}\_{X\mid Z}[X]\cdot\mathbb{E}\_{X\mid Z}[X]^\top\Big])\theta\_{\ast}\|^2\Big]+2c^2\mathbb{E}\Big[\|\phi(\gamma\_{\ast}^\top Z)-\mathbb{E}\_{Z}\Big[\phi(\gamma\_{\ast}^\top Z)\Big]\|^2\Big]\\\\=&\mathcal{O}((d_z+c^2d_xd_z)\|\theta_*\|^2+c^2d_z).
\end{aligned}$$
In particular, the constant $c$ represents the strength of endogeneity, and as shown above we can precisely characterize how this strength affects the number of iterations (or samples in the streaming setting). This demonstrates an instance of quantifying the robustness of the modeling assumptions (here, robustness with respect to the constant $c \in (0,\infty)$). In the general response, we have also added a detailed discussion of Algorithm 2's performance under certain non-linear modeling assumptions, which is yet another robustness study of the proposed algorithms.
### **Questions**
**Q1** - We would like to point out that Proposition 1 extends the linear setting to the non-linear setting, in which we require boundedness of the variance of the gradient estimator $v(\theta)$. Relaxing the bounded-variance condition while still achieving convergence in the non-linear setting is challenging and left as future work. Please also see the explanation under **Non-linear settings** in the general response.
We sincerely hope that our responses have answered your questions and concerns, in which case, we would appreciate if you could raise your scores appropriately. If you have additional questions, please reach out to us during the discussion period and we will be happy to clarify.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed response. I will keep my original score. | Summary: This paper tackles instrumental variable regression. The problem setting assumes a model $Y=g_{\theta^*}(X) + \epsilon_1$, but unlike the ordinary regression model, there are correlations between $X$ and $\epsilon_1$. The model assumes in addition an instrumental variable $Z$ such that $Y$ and $X$ are independent conditional on $Z$, and $X=h_{\gamma^*}(Z)+\epsilon_2$. The target is to estimate $\theta^*$. The canonical approach is the two-stage least squares (2SLS) method, where we first regress $X$ on $Z$ and then regress $Y$ on the $\widehat{X}$ estimated from $Z$.
This paper considers an online setting where samples $(X_t, Y_t, Z_t)$ arrive sequentially. They propose two stochastic gradient descent (SGD) based algorithms. The first algorithm assumes that for each $Z_t$ we can resample $X_t'$ from the conditional distribution of $X | Z=Z_t$. The second algorithm does not make this assumption and can be considered an SGD version of the 2SLS algorithm. The paper provides the L2 convergence guarantee for both algorithms and they design experiments to demonstrate the effectiveness of their algorithms.
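For concreteness, the second scheme can be sketched as follows — a hedged toy implementation in numpy with illustrative parameters of our own choosing (constant step sizes and tail averaging for simplicity, not the step-size schedules analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_x = 2, 2
gamma_star = np.array([[1.0, 0.5], [0.3, 1.2]])   # true first-stage matrix (d_z x d_x)
theta_star = np.array([1.0, -2.0])                # true second-stage parameter

T = 100_000
alpha, beta = 0.01, 0.05                          # constant step sizes (illustrative)
gamma = np.zeros((d_z, d_x))
theta = np.zeros(d_x)
theta_avg = np.zeros(d_x)

for t in range(T):
    Z = rng.normal(size=d_z)
    eps2 = 0.5 * rng.normal(size=d_x)             # first-stage noise
    eps1 = 0.5 * eps2.sum() + 0.3 * rng.normal()  # correlated with eps2: confounding
    X = gamma_star.T @ Z + eps2
    Y = theta_star @ X + eps1

    X_hat = gamma.T @ Z                           # current first-stage prediction
    theta -= alpha * X_hat * (X_hat @ theta - Y)  # second-stage SGD step
    gamma -= beta * np.outer(Z, Z @ gamma - X)    # first-stage SGD step
    if t >= T // 2:                               # tail-average to reduce variance
        theta_avg += theta / (T - T // 2)

print(np.linalg.norm(theta_avg - theta_star))
```

Despite the correlation between $X$ and $\epsilon_1$, the averaged iterate approaches $\theta^*$, whereas a naive streaming regression of $Y$ on $X$ would converge to a biased limit.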
Strengths: This paper is clearly written and contains rigorous theoretical analysis. It presents online algorithms to address the instrumental variable regression problem. Compared to their offline counterparts, these online algorithms are more memory-efficient and computationally stable. The proof of their Theorem 2 introduces an intermediate sequence $\widetilde{\theta}_t$ with known dynamic, and evaluates the convergence rate of $|\theta_t - \widetilde{\theta}_t|$. This technique is of separate interest.
Weaknesses: 1. The contribution of the paper appears marginal. The first algorithm requires data resampling, which limits its applicability. The second algorithm is an intuitive adaptation of the 2SLS algorithm to the SGD setting. The theorems are proved in the simple linear setting.
2. Apart from avoiding matrix inversion, the paper lacks necessary explanations as to why we should prefer an SGD version of 2SLS over the canonical offline 2SLS. It would be beneficial if more content were devoted to the advantages of Algorithm 2 compared to traditional benchmarks. For example, a theoretical comparison with 2SLS or an empirical comparison using real-world data would be helpful.
3. Could you explain why two different synthetic settings are used for Algorithm 1 and Algorithm 2 in the Numerical Experiments section?
Technical Quality: 2
Clarity: 3
Questions for Authors: See "Weakness".
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See "Weakness".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer WDCF,
Thank you for your review. We provide responses to your concerns in 'weaknesses'.
**W1** - We strongly disagree with your view that the paper's contribution is marginal. As noted in Prop 1, Alg. 1 is not restricted to linear models; it can handle non-linear and non-convex cases, including DNNs, without explicitly specifying or estimating the model between $Z$ and $X$.
We also contest your claim that Algorithm 1's data resampling limits its applicability. Instead, it offers a novel approach for online data collection to avoid forbidden regression issues.
Regarding the proof techniques for Alg 1 (Thm 1), classical SGD convergence methods do not apply directly because the gradient estimator $v(\theta)$ does not meet the bounded variance ([Lan20]) or expected smoothness ([KR20]) assumptions. We provide a novel analysis based on weaker statistical assumptions for streaming data compared to those in the optimization literature.
We now highlight the additional challenges in the proof of Alg. 2 (see, also, the part after Remark 1).
**Challenge 1 (Interaction between iterates):** The major challenge in the convergence analysis of $\\{\theta_t\\}_t$ lies in the interaction term $\gamma_t Z_tZ_t^\top\gamma_t\theta_t$ between $\gamma_t$ and $\theta_t$ in equation (13). Note that this multiplicative interaction term is neither a martingale-difference sequence nor does it have finite variance, either of which could have led to a simpler analysis. This involved dependence between the noise in the stochastic gradient updates for the two stages does not appear in existing problem setups and the corresponding analyses of non-linear two-time-scale algorithms [MP06,DTSM18,Doa22] (more related references in the paper).
**Challenge 2 (Biased Stochastic Gradient):**
In our setting, as shown below, the stochastic gradient in equation (13) evaluated at $(\theta_t,\gamma_t)$ is biased unlike existing works on two-time-scale algorithms (see [MP06,DTSM18,Doa22] for example).
$$
\begin{aligned}
&\mathbb{E}\_{t,Z_t}[\gamma_t^\top Z_t(Z_t^\top\gamma_t\theta_t-Y_t)]=\mathbb{E}\_{t,Z_t}[\gamma_t^\top Z_t(Z_t^\top\gamma_t\theta_t-Z_t^\top\gamma_*\theta_*)]=\mathbb{E}\_{t}[\gamma_t^\top\Sigma_Z(\gamma_t\theta_t-\gamma_*\theta_*)]\\\\=&\gamma_t^\top\Sigma_Z\gamma_t(\theta_t-\theta_*)+\gamma_t^\top\Sigma_Z(\gamma_t-\gamma_*)\theta_* \neq \gamma_{\ast}^\top\Sigma_Z\gamma_*(\theta_t-\theta_*) =\nabla_\theta F(\theta_t).
\end{aligned}
$$
**Challenge 3 (No Bounded Variance Assumption of Stochastic Gradient):**
Unlike existing works (see Assumption 1 in [WZZ21], Assumption 2 in [XL21], and Theorem 2 in [MSBPSS09]), we do not assume boundedness of the $\{\theta_t\}$ iterates. That assumption ensures uniform boundedness of the variance of the stochastic gradient, allowing a simpler analysis than ours.
Resolving these issues first requires proving that $\mathbb{E}[\|\theta_t\|_2^4]$ is bounded (Lemma 5), which is non-trivial and requires carefully chosen stepsizes satisfying
$\sum_{t=1}^\infty(\alpha_t^2+\alpha_t\sqrt{\beta_t})<\infty.$
Using this bound on $\mathbb{E}[\|\theta_t\|_2^4]$, we prove the convergence of the sequence $\delta_t\coloneqq \theta_t-\tilde{\theta}_t$, the error between the true iterates and the auxiliary iterates we define. This requires a novel proof technique where we first provide an intermediate bound (see Lemma 6) and then progressively sharpen it to a tighter bound (see Lemma 7).
[MSBPSS09] H. Maei, C. Szepesvari, S. Bhatnagar, D. Precup, D. Silver, and R. Sutton. Convergent temporal-difference learning with arbitrary smooth function approximation. NeurIPS 2009
[Lan20] G. Lan. First-order and stochastic optimization methods for machine learning. Springer, 2020
[KR20] A. Khaled and P. Richtárik. Better theory for SGD in the nonconvex world.
**W2** - Alg. 2, the SGD version of 2SLS, is applicable to various emerging applications of online/streaming IVaR, e.g., mobile health applications like Just-In-Time Adaptive Interventions (see [TM17], [DVB23a]). Compared to other online IVaR methods, ours achieves much better per-iteration computational and memory costs, as illustrated in Appendix B, Table 1.
From an algorithmic perspective, integrating non-linear models into offline 2SLS requires additional optimization, typically handled by Stochastic Gradient Descent (SGD) methods. Recent research in both machine learning (e.g., [VSH+16, DVB23a]) and economics (e.g., [CLL+23]) has focused on developing online methods for IVaR. In our general response, we outlined the specific non-linear models which could be handled by our Alg. 2. Note that Alg. 1 already handles non-linear models (see Prop 1). We believe our work significantly advances streaming IV regression and expect further research to extend our methodology to more general non-linear settings.
In addition, we added experiments of Alg 2 on real-world dataset. Please see the general response and the attached PDF for the results.
**W3** - Alg. 1 and Alg. 2 have different settings (oracles, assumptions, etc.). In Alg. 1, we assume that we have access to 2-sample oracles (i.e., 2 independent samples $X, X'$ can be drawn given $Z$), and the algorithm can handle the cases when $g(\theta; X)$ is linear (Thm 1) and non-linear (Prop 1). Hence we consider both linear ($\phi(s) = s$) and non-linear ($\phi(s) = s^2$) settings in the experiments of Alg. 1. In Alg. 2, we mainly consider the case when we only have access to 1-sample oracle (i.e., 1 sample $X$ can be drawn given $Z$), and the model is linear.
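As a quick illustrative check of what the 2-sample oracle buys (a hedged toy setup of our own, with linear $\phi(s)=s$ and hand-picked parameters): with two independent conditional draws $X, X'$ given $Z$, the gradient estimate $X'(X^\top\theta - Y)$ vanishes in expectation at $\theta_*$, whereas the naive one-sample version $X(X^\top\theta - Y)$ does not.

```python
import numpy as np

rng = np.random.default_rng(1)
d_z, d_x = 2, 2
gamma_star = np.array([[1.0, 0.5], [0.3, 1.2]])
theta_star = np.array([1.0, -2.0])
N = 200_000

Z = rng.normal(size=(N, d_z))
eps2 = 0.5 * rng.normal(size=(N, d_x))             # draw for X
eps2p = 0.5 * rng.normal(size=(N, d_x))            # independent draw for X'
eps1 = 0.5 * eps2.sum(axis=1) + 0.3 * rng.normal(size=N)  # confounded noise
Xbar = Z @ gamma_star
X, Xp = Xbar + eps2, Xbar + eps2p
Y = X @ theta_star + eps1

resid = X @ theta_star - Y                         # residual at the true theta*
g_one = (X * resid[:, None]).mean(axis=0)          # one-sample estimator: biased
g_two = (Xp * resid[:, None]).mean(axis=0)         # two-sample estimator: ~ zero

print(np.linalg.norm(g_one), np.linalg.norm(g_two))
```

The nonzero one-sample mean is exactly the confounding that resampling $X'$ given $Z$ removes, which is the "forbidden regression" issue avoided by Alg. 1.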
[TM17] A. Tewari, and S. A. Murphy. "From ads to interventions: Contextual bandits in mobile health." Mobile health: sensors, analytic methods, and applications (2017)
We sincerely hope that our responses have answered your questions and concerns, in which case, we would appreciate if you could raise your scores appropriately. If you have additional questions, please reach out to us during the discussion period and we will be happy to clarify.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply, I am now more convinced of the paper's technical contribution. I will raise my score from 5 to 6. | Summary: The paper presents algorithms for instrumental variable regression that don't need matrix inversions or mini-batches. At the same time, the paper gives rates of convergence.
Strengths: The proposed method offers robust theoretical guarantees and is validated through comprehensive experimental results.
Weaknesses: None
Technical Quality: 3
Clarity: 3
Questions for Authors: I am curious about how matrix inversion harms an algorithm's performance in this setting.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer hwGZ,
Thank you for your comments. Below we provide our response to your questions.
### **Question**
>Q1. Matrix inversion
**Response:**
Please kindly refer to Appendix B for a summary table (Table 1) of per-iteration arithmetic-operation complexity. The method with matrix inversion (DVB23b, v1) requires $d_x^3+t d_x^2$ arithmetic operations, where $d_x$ is the dimension of the input $X$ and $t$ denotes the $t$-th iteration. For the method in their updated version (DVB23a, v3), the per-iteration computational and memory complexity is still much higher than that of our methods. This quantifies the inefficiency of matrix-inversion-based methods.
We have also provided real-world experiments demonstrating the improved and efficient performance of our algorithm over other approaches.
We sincerely hope that our responses have answered your questions and concerns, in which case, we would appreciate if you could raise your scores appropriately. If you have additional questions, please reach out to us during the discussion period and we will be happy to clarify. | Summary: This paper proposes and analyzes on-line algorithms for instrumental variable regression (IVaR) with streaming data. Specifically, the authors consider the model:
$Y = g_{\theta^*}(X) + \epsilon_1$
where the covariate $X$ and noise $\epsilon_1$ are possibly correlated, but an instrumental variable $Z$ is available that satisfies:
$X = h_{\gamma^*}(Z) + \epsilon_2$
with $\epsilon_2$ being a centered (unobserved) noise. Building on the prior works [MMLR20] and [HZCH20], the authors consider the IVaR problem formulated as a conditional stochastic optimization problem, presented in Equation (2):
$Minimize_g F(g) := E_Z E_{Y|Z} [ ( Y - E_{X|Z} [g(X)])^2 ].$
Considering a parameterized family of regression functions $G := \{ g(\theta; X) | \theta \}$, the authors observe that the gradient of F admits the expression as in Equation (3):
$\nabla F(\theta) = E_Z[ (E_{X|Z}[g(\theta;X)] - E_{Y|Z}[Y]) \cdot \nabla_{\theta} E_{X|Z}[g(\theta; X)]].$
This paper proposes and analyzes two streaming algorithms to solve the optimization problem formulated above. Notably, the proposals in this work do not require reformulating it as a minimax optimization problem as done in [MMLR20] or employing a nested sampling technique to reduce the bias in gradient estimation as in [HZCH20]. Specifically, the authors assume the availability of an oracle that can generate a sample $X$ (or two independent samples $X$ and $X'$) conditioned on $Z$, and then propose a stochastic gradient descent for IVaR. Additionally, the authors establish the rate of convergence assuming linear models. The proposed algorithms and claims are supported by numerical experiments with synthetic data.
Strengths: This work provides a simple yet effective algorithmic solution to the IVaR problem formulated as a stochastic optimization problem, overcoming challenges highlighted in prior work [MMLR20] and adapting the method of [HZCH20] to streaming settings. The avoidance of nested sampling (i.e., generating batches) is enabled by leveraging the structure of the quadratic loss in the gradient expression.
The paper is well-organized and presents its core ideas clearly. Section 2 introduces the two-sample oracle assumption, which, while somewhat idealistic, is reasonable for discrete-valued $Z$ as remarked by the authors. Section 3 then transitions to a more realistic one-sample oracle, focusing on linear models and modifying the algorithm and analysis from Section 2 accordingly. All required assumptions and propositions are stated explicitly and clearly.
Weaknesses: While this work makes several significant theoretical contributions, especially in advancing the analysis, there are areas for potential improvement:
**1. Motivation for IVaR Problems (with Streaming Data)**: The importance of IVaR problems, especially with streaming data, should be highlighted more. Discussing example scenarios in addition to citing references would better motivate and convince readers of the problem's relevance.
**2. Further Experimental Validation**: Although this work is primarily theoretical, augmenting the experiment section with a more comprehensive set of experiments would be beneficial. It would be particularly valuable to see how the proposed algorithms perform on real-world datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. While Algorithm 1 is generally applicable with the availability of two-sample oracles, Algorithm 2 seems to hinge critically on linear models. The authors note in lines 244-245 that a detailed treatment of the nonlinear case is left for future work due to its complexity in analysis, but I am curious if designing a working algorithm based on the insights in this paper would be feasible at least. Could the authors provide insights on the following:
(a) How would Algorithm 2 (or a variant) perform with nonlinear models?
(b) Do the authors believe extending Algorithm 2 to nonlinear models is feasible, and what challenges might arise in the algorithm's design?
2. In line 57, can authors elaborate on why "Eq. (3) implies that one does not need the nested sampling technique to reduce the bias" with more details?
3. Suggestions
(a) **Lines 150 - 151**: I suggest the authors state Assumption 2.3 as follows: "... $P_{Y|X}$. There exist constants such that $C_x, C_y, C_{xx}, C_{yx} > 0$ such that ..."
(b) **Line 161**: I guess it might make more sense to move Assumption 2.4 up to Line 147, right after the sentence "... pre-collected dataset."
(c) **Page 9**: The authors may want to summarize the information in Lines 315 - 323 in the captions of Figures 3 and 4 for readers' convenience. Also, it could be helpful to summarize the experimental results in the main text explicitly.
(d) **Miscellaneous/Potential typos**:
i) Line 52: $X|Z$ instead of $Z|X$?
ii) Line 154: remove "under"
iii) Eq. (10) and Line 159: Please consider ending the sentence in Eq. (1) and start a new sentence in Line 159. Also, "if" in Line 159 should be capitalized.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is primarily a theoretical work, and the authors discussed the potential limitations of their assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer E97U,
Thank you for your insightful review. Below we provide our response to your concerns and questions.
### **Weaknesses**
>W1. Motivation
**Response**:
As mentioned in the general response "Adv 3 - Emerging applications", a motivation for developing online/streaming IVaR is that of mobile health applications like Just-In-Time Adaptive Interventions (see, e.g., [TM17]) which also serves as a motivation for the work of [DVB23a]. Our work is directly applicable for several such applications. Taking your suggestion into account, we will be adding a discussion about this application in detail in our revision.
[TM17] Tewari, Ambuj, and Susan A. Murphy. "From ads to interventions: Contextual bandits in mobile health." Mobile health: sensors, analytic methods, and applications (2017): 495-517.
>W2. Experimental Validation
**Response**:
Please see our general response where we have added two real-data experiments. The results show the benefits of the proposed algorithms, thereby supporting the theoretical results.
### **Questions**
> Q1. While Algorithm 1 is...
**Response**:
We thank the reviewer for asking this insightful question. Please see the explanation under **Non-linear settings** in the general response. We will add the above explanation as a remark in our revision.
>Q2. In line 57...
**Response**:
For general conditional stochastic optimization, its objective and gradient are of the form
$$F(\theta) = \mathbb{E}_{Z} l (\mathbb{E}\_{X\mid Z} [g(\theta;X)] - \mathbb{E}\_{Y \mid Z} [Y])$$
and
$$\nabla F(\theta) = \mathbb{E}_{Z}[\nabla\_{\theta} \mathbb{E}\_{X\mid Z}[g(\theta;X)] \nabla\_{\theta} l(\mathbb{E}\_{X\mid Z}[g(\theta;X)] - \mathbb{E}\_{Y\mid Z} [Y])],$$
where $l$ is a non-linear and non-quadratic function. Therefore, $\nabla_\theta l$ is non-linear. For the composition of a non-linear function and the expectation term $\nabla\_\theta l(\mathbb{E}\_{X\mid Z}[g(\theta;X)] - \mathbb{E}\_{Y\mid Z} [Y])$ for a given $Z$, obtaining a low-bias estimator requires a large batch of samples from the conditional distribution of $(X,Y)$ given $Z$, due to the following observation: $\|\nabla\_\theta l(\mathbb{E}\_{X\mid Z}[g(\theta;X)] - \mathbb{E}\_{Y\mid Z} [Y]) - \mathbb{E}\nabla\_\theta l(\frac{1}{m}\sum_{i=1}^m g(\theta;X_i)- \frac{1}{m}\sum_{i=1}^m Y_i)\|$ $\leq S_l \mathbb{E}\|\mathbb{E}\_{X\mid Z}[g(\theta;X)] -\frac{1}{m}\sum_{i=1}^m g(\theta;X_i) + \frac{1}{m}\sum_{i=1}^m Y_i- \mathbb{E}\_{Y\mid Z} [Y]\|=\mathcal{O}(1/\sqrt{m})$. Here $S_l$ denotes the Lipschitz-smoothness parameter. However, when $l$ is just $\|\cdot\|^2$ as in instrumental variable regression, $\nabla l$ is linear. Thus the right-hand side of the above inequality simplifies to $0$, meaning that for any $m$, it is an unbiased estimator. Thus it suffices to use $m=1$, avoiding batches altogether. This observation, while straightforward in hindsight, has not been made in previous work (which hence used more complicated algorithmic designs).
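A toy Monte Carlo version of this observation (our own scalar stand-in for the conditional terms, not the paper's setup): for quadratic $l$ the $m=1$ plug-in gradient is unbiased because $\nabla l$ is linear, while for a non-quadratic $l$ (here $l(u)=u^4$) it is not.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
u_true = 0.5                      # stands in for E_{X|Z}[g] - E_{Y|Z}[Y] at a fixed Z
u = u_true + rng.normal(size=N)   # m = 1 plug-in: one noisy conditional sample per Z

# Quadratic loss l(u) = u^2: grad 2u is linear, so E[2u] = 2*u_true exactly.
quad_bias = abs(2 * u.mean() - 2 * u_true)

# Non-quadratic loss l(u) = u^4: grad 4u^3 is non-linear, E[u^3] != (E[u])^3.
quartic_bias = abs((4 * u**3).mean() - 4 * u_true**3)

print(quad_bias, quartic_bias)
```

The first quantity vanishes up to Monte Carlo error, while the second stays far from zero no matter how many independent $Z$'s are averaged — this is the bias that nested sampling (large $m$) must fight in the non-quadratic case.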
>Q3. Suggestions
**Response**:
We really appreciate your careful reading and insightful suggestions. We will incorporate them in our revision.
We sincerely hope that our responses have answered your questions and concerns, in which case, we would appreciate if you could raise your scores appropriately. If you have additional questions, please reach out to us during the discussion period and we will be happy to clarify.
---
Rebuttal Comment 1.1:
Title: Response to the Authors' Rebuttal
Comment: I thank the authors for addressing my concerns and questions. I find most of their responses satisfactory, and would like to encourage them to incorporate the explanations and remarks into the revision.
However, I am still not clearly seeing how Algorithm 2 can be immediately applied to the scenario where the relationship between $X$ and $Z$ is non-linear, as the authors claim. Specifically, the update rules in Eqs. (13) and (14) -- which stem from Eq. (11) and include an additional modification trick to promote stability as discussed in Lines 220 -- 226 -- seem to rely on the linear model assumption. Thus, it is unclear to me how these would translate to the non-linear model setting described by the authors, namely, the setting where $Y = {\theta^*}^{\top} X + \epsilon_1$ with $X = h_{\gamma^*}(Z) + \epsilon_2$. I believe the update rule should involve $\nabla h_{\gamma_t}$ and possibly a similar modification to promote algorithmic stability.
Could you please clarify the update rules for this setting by specifying the counterparts of Eqs. (13) and (14), and any necessary modifications (if needed) that corresponds to replacing $g(\theta_t, X_t) = X_t^{\top} \theta_t$ with $Z_t^{\top} \gamma_t \theta_t$ in the linear setting?
---
Reply to Comment 1.1.1:
Title: Thank you for the insightful question
Comment: It is indeed true that learning $\gamma_*$ requires incorporating $\nabla h$ in the update equation as we have alluded to in our global response in the section "**Alg. 2 convergence proof intuition**" but could not elaborate on due to lack of space. We elaborate on it here.
**The same stability issue happens in this framework as we saw in the linear setting (Line 220 - 226) and is avoided by the same trick that we used in the linear setting.**
Consider the following version of the equation (11) adapted to the non-linear setting.
$$\theta_{t+1}=\theta_t-\alpha_{t+1} h_{\gamma_t}(Z_t)(X_t^\top \theta_t-Y_t)\qquad
\gamma_{t+1}=\gamma_t-\beta_{t+1} \nabla h_{\gamma_t}(Z_t)^\top( h_{\gamma_t}(Z_t)-X_t).\qquad \text{(11NL)}$$
Equation (11NL), similar to equation (12) in the main paper, can be expanded in the following manner.
$$\theta_{t+1}-\theta_* = {\hat{Q}}\_t^{NL} ( \theta_t-\theta_* )+\alpha_{t+1}\mathbb{E}\_{\gamma_t}[(h_{\gamma_t}(Z_t)-h_{\gamma_*}(Z_t)) Y_t)]+\alpha_{t+1}D_t^{NL}\theta_*+\alpha_{t+1}\left(\mathbb{E}\_{\gamma_t}[h_{\gamma_t}(Z_t)h_{\gamma_*}(Z_t)^\top]-h_{\gamma_t}(Z_t)h_{\gamma_*}(Z_t)^\top\right)(\theta_t-\theta_*)$$ $$~~~~~~~~~~~~~~~~~~~~+\alpha_{t+1}\left(\mathbb{E}\_{\gamma_t}[h_{\gamma_t}(Z_t)h_{\gamma_*}(Z_t)^\top]-h_{\gamma_t}(Z_t)h_{\gamma_*}(Z_t)^\top\right)\theta_*+\alpha_{t+1}((h_{\gamma_t}(Z_t)-\mathbb{E}\_{\gamma_t}[h_{\gamma_t}(Z_t)])Y_t)
-\alpha_{t+1}h_{\gamma_t}(Z_t)\epsilon_{2,t}^\top\theta_t. \qquad \text{(12NL)}$$
where $\hat{Q}\_t^{NL}=(I-\alpha_{t+1}\mathbb{E}\_{\gamma_t}[h_{\gamma_t}(Z_t)h_{\gamma_*}(Z_t)^\top])$, $D_t^{NL}=\mathbb{E}\_{\gamma_t}[(h_{\gamma_*}(Z_t)-h_{\gamma_t}(Z_t)))h_{\gamma_*}(Z_t)^\top]$.
First, let's focus on the **stability issue** associated with the first term on the RHS, i.e., $\hat{Q}\_t^{NL}(\theta_t-\theta_*)$. Just like the linear setting, here too, the matrix $\mathbb{E}\_{\gamma_t}[h_{\gamma_t}(Z_t)h_{\gamma_*}(Z_t)^\top]$ is not guaranteed to be positive semi-definite. So, we replace the term $X_t^\top\theta_t$ in equation (11NL) by $h_{\gamma_t}(Z_t)^\top\theta_t$ which leads to the following modified Algorithm 2 updates.
$$\theta_{t+1}=\theta_t-\alpha_{t+1} h_{\gamma_t}(Z_t)(h_{\gamma_t}(Z_t)^\top \theta_t-Y_t)\qquad \text{(13NL)}$$
$$\gamma_{t+1}=\gamma_t-\beta_{t+1} \nabla h_{\gamma_t}(Z_t)^\top( h_{\gamma_t}(Z_t)-X_t),\qquad \text{(14NL)}$$
where $\gamma_t\in\mathbb{R}^{d_\gamma}$, and $\nabla h_{\gamma_t}(Z_t)\in\mathbb{R}^{d_x\times d_\gamma}$ is the Jacobian of $h$ with respect to $\gamma_t$.
For (13NL), we will have $\hat{Q}\_t^{NL}$ of the form $\hat{Q}\_t^{NL}=(I-\alpha_{t+1}\mathbb{E}\_{\gamma_t}[h_{\gamma_t}(Z_t)h_{\gamma_t}(Z_t)^\top])$. Here $\mathbb{E}\_{\gamma_t}[h_{\gamma_t}(Z_t)h_{\gamma_t}(Z_t)^\top]$ is positive semi-definite leading to the stability of the dynamics just like the linear case.
**Now, we just have to show that the rest of the terms on the right-hand side of (12NL) converge similarly to (12).** Recall that, except for the first term, we control all the other terms on the right-hand side of equation (12) mainly by using the martingale-difference property of the noise variables and Lemma 3, i.e., the convergence of $\mathbb{E}[\|\gamma_t-\gamma_*\|^2]$. In (12NL), the martingale-difference property of the noise variables in the fourth to seventh terms on the right-hand side clearly holds.
It remains to obtain a result analogous to Lemma 3. To do so, we look at equation (14NL). The analysis of equation (14NL) to establish a convergence rate of the $\gamma_t$ updates to $\gamma_*$, or of $h_{\gamma_t}$ to $h_{\gamma_*}$, is straightforward as long as $H(\gamma)\coloneqq\mathbb{E}[\|X-h_{\gamma}(Z)\|^2]$ is strongly convex [PJ92] or satisfies the Polyak-Łojasiewicz (PL) inequality [KNS16]. This increases the model flexibility considerably, as PL inequalities are satisfied by a wide class of non-linear DNN models [LZB22]. Beyond the strongly convex and PL cases, the analysis is challenging, although the same algorithmic framework still applies as a methodology.
Putting the above pieces together, it is possible to obtain the rates of convergence for the case when $Z$ to $X$ is non-linear.
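To illustrate that this recipe is implementable, here is a hedged toy sketch of the (13NL)/(14NL) updates with a basis-expansion first stage $h_\gamma(z)=\gamma^\top\phi(z)$, $\phi(z)=(z, z^2-1)$. The parameters, constant step sizes, and truncation of $z$ are our own illustrative simplifications, not the schedules required by the theory:

```python
import numpy as np

rng = np.random.default_rng(3)
gamma_star = np.array([1.0, 0.5])   # true h is non-linear in z: z + 0.5*(z^2 - 1)
theta_star = 2.0

def phi(z):                         # basis expansion: a simple non-linear first stage
    return np.array([z, z * z - 1.0])

T = 100_000
alpha, beta = 0.005, 0.01
gamma, theta = np.zeros(2), 0.0
theta_avg = 0.0

for t in range(T):
    z = np.clip(rng.normal(), -3.0, 3.0)   # truncate z so constant steps stay stable
    eps2 = 0.5 * rng.normal()
    eps1 = eps2 + 0.2 * rng.normal()       # correlated noises: confounding
    x = gamma_star @ phi(z) + eps2
    y = theta_star * x + eps1

    h = gamma @ phi(z)                     # current first-stage prediction
    theta -= alpha * h * (h * theta - y)   # (13NL): h replaces X for stability
    gamma -= beta * phi(z) * (h - x)       # (14NL): Jacobian of h w.r.t. gamma is phi(z)
    if t >= T // 2:                        # tail-average for variance reduction
        theta_avg += theta / (T - T // 2)

print(abs(theta_avg - theta_star), np.linalg.norm(gamma - gamma_star))
```

Because the first-stage loss here is strongly convex in $\gamma$, the iterates recover both $\gamma_*$ and $\theta_*$, matching the convergence recipe sketched above.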
Please reach out to us if you have any additional question. | Rebuttal 1:
Rebuttal: Dear reviewers,
Thank you for your comments and questions. Below we provide our general response. We first present real-data experiments, and then re-emphasize points which were potentially overlooked.
### **Real Data Examples**
We illustrate Alg. 2 on two datasets: the Angrist and Evans (1998) Children/Parents' Labor Supply data and the U.S. Portland Cement Industry data. Please see the global response PDF (attached) for the plots and experimental details.
### **Advantages of our approach**
- Avoiding "forbidden regression": Note that one of the main benefits of the online approach to IVaR that we discovered in our work is avoiding "forbidden regression" under the 2-sample oracle. To our knowledge, this solution is not available in any existing online or offline procedure in the current literature, and it provides a novel data-collection mechanism for practitioners.
- Computational benefits: SGD-type algorithms have played a crucial role in scaling up statistical ML methods. Recent works in ML (e.g., [VSH+16, DVB23a]) and economics (e.g., [CLL+23]) develop online IVaR methods. Our algorithms have state-of-the-art computational complexities (Appendix B) for online IVaR in linear settings, even compared to offline procedures like 2SLS.
- Emerging applications: Another motivation for developing streaming IVaR is mobile health applications like Just-In-Time Adaptive Interventions (see, [TM17] and [DVB23]). Our work is directly applicable for several such applications. We will add a discussion about these applications in our revision.
[TM17] A. Tewari, and S. A. Murphy. "From ads to interventions: Contextual bandits in mobile health." Mobile health: sensors, analytic methods, and applications (2017): 495-517.
### **Non-linear settings**
Our work is not restricted to just the linear setting. For Alg. 1, Prop. 1 extends the results to the non-linear setting.
**For Alg. 2, it can be immediately applied to the setting where the relation between $X$ and $Z$ is non-linear.** Consider the model:
\begin{align*}
Y=\theta_{\ast}^\top X+\epsilon_1\quad\text{with}\quad X=h_{\gamma_*}(Z)+\epsilon_2
\end{align*}
### **Unbiasedness of the Alg. 2 output**
Assume that one can learn $h_{\gamma_*}$ efficiently, i.e., say we have $h_{\gamma_t}\approx h_{\gamma_*}$ for some large $t$ (see below for the required conditions). Considering the update for $\theta_t$ in Algorithm 2 (Eq. (13)), we have
$$\theta_{t+1}=\theta_t-\alpha_{t+1} h_{\gamma_t}(Z_t)(h_{\gamma_t}(Z_t)^\top \theta_t-Y_t)\approx \theta_t-\alpha_{t+1} h_{\gamma_*}(Z_t)(h_{\gamma_*}(Z_t)^\top \theta_t-Y_t).$$
It is easy to see that the limit point $\theta_\infty$ of this update will satisfy, $$\mathbb{E}[h_{\gamma_*}(Z)h_{\gamma_*}(Z)^\top]\theta_\infty= \mathbb{E}[h_{\gamma_*}(Z)h_{\gamma_*}(Z)^\top]\theta_*$$
which implies that $\theta_t$ is an unbiased estimator of $\theta_*$ as long as $\mathbb{E}[h_{\gamma_*}(Z)h_{\gamma_*}(Z)^\top]$ is invertible (Assumption 2.2).
### **Alg. 2 convergence proof intuition**
In the proof of Thm. 2, the major challenge is to control the interaction term $\gamma_t Z_tZ_t^\top\gamma_t\theta_t$. We use Lemma 3 which establishes the convergence rate of $\mathbb{E}[\|\gamma_t-\gamma_*\|^2]$. Analogously, in the nonlinear setting, we need to prove the convergence of $\gamma_t$ to $\gamma_*$ or $h_{\gamma_t}(Z)$ to $h_{\gamma_*}(Z)$. This is possible when the loss function $\mathbb{E}[\|X-h_{\gamma}(Z)\|^2]$ is strongly-convex, or satisfies Polyak-Łojasiewicz (PL) inequality [KNS16]. This allows for considerable flexibility in the model choices including basis expansion based non-linear methods and wide Neural Networks [LZB22]. Since the update equation for $\gamma_t$ (Eq (14)) does not involve $\theta_t$, the rest of the proof follows by similar techniques except for the obvious changes required due to the above modification, e.g., the term $\gamma_{\ast}^\top\Sigma_Z\gamma_*$ will be replaced by $\mathbb{E}[h_{\gamma_*}(Z)h_{\gamma_*}(Z)^\top]$.
[KNS16] H. Karimi, J. Nutini, and M. Schmidt. "Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition." In ECML PKDD 2016.
[LZB22] C. Liu, L. Zhu, and M. Belkin. "Loss landscapes and optimization in over-parameterized non-linear systems and neural networks." Applied and Computational Harmonic Analysis 59 (2022): 85-116.
Now, consider the case where the upper level is non-linear, i.e.,
\begin{align*}
Y=g_{\theta_*}(X)+\epsilon_1\quad\text{with}\quad X=\gamma_{\ast}^\top Z+\epsilon_2.
\end{align*}
In this case, the lower level problem can be solved efficiently as it is a simple linear regression problem. For brevity, assume that $\gamma_*$ is known. In that case, the update for $\theta_t$ in Alg. 2 takes the following form.
\begin{align*}
\theta_{t+1}=\theta_t-\alpha_{t+1}\nabla g_{\theta_t}(Z_t^\top\gamma_*)(g_{\theta_t}(Z_t^\top\gamma_*)-Y_t).
\end{align*}
Under suitable convergence conditions, this update finds the root of $$\mathbb{E}[\nabla g_\theta(Z^\top\gamma_*)(g_\theta(Z^\top\gamma_*)-Y)]=0,$$ or equivalently, the root of,
\begin{align*}
&\mathbb{E}[\nabla g_\theta(Z^\top\gamma_*)(g_\theta(Z^\top\gamma_*)-g_{\theta_*}(Z^\top\gamma_*+\epsilon_2)-\epsilon_1)]=0\\
&\mathbb{E}[\nabla g_\theta(Z^\top\gamma_*)g_\theta(Z^\top\gamma_*)]=\mathbb{E}[\nabla g_\theta(Z^\top\gamma_*)g_{\theta_*}(Z^\top\gamma_*+\epsilon_2)].
\end{align*}
It is easy to see that for general non-linear $g$, $\theta=\theta_*$ may not be a solution to the above equation, i.e., Alg. 2 will find a biased solution. This demands a modified version of Alg. 2 involving an additional debiasing step. As the analysis of Alg. 2 is already quite challenging for the model where the upper layer is linear (see "Proof Techniques" after Remark 1 in the paper and our reply to Reviewer WDCF), we defer the analysis of the case where $Y$ depends non-linearly on $X$ to the future work.
Pdf: /pdf/b2fa41adf27fd095ebffffa6e195eeef14db1566.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Globally Q-linear Gauss-Newton Method for Overparameterized Non-convex Matrix Sensing | Accept (poster) | Summary: This paper introduces a custom Gauss-Newton method dubbed AGN to solve general over-parameterized matrix sensing problems, and demonstrates that 1) the new method is a descent method under benign assumptions, and 2) it achieves Q-linear convergence under restrictive RIP assumptions.
Strengths: 1. Introduces a new algorithm beyond GD-based ones to solve this hard non-convex problem with some theoretical guarantees, which is nice. The framework of the problem is relatively relaxed and is not tailored to very specific instances of matrix sensing.
2. Shows theoretically that AGN will not be trapped at saddle points.
3. The authors prove fast convergence of the algorithm under a small RIP constant.
4. The structure of this paper is easy to follow, addresses the appropriate prior works, and offers an appropriate level of detail. Overall, this paper is very well written and a pleasure to read.
Weaknesses: 1. I still find the computational cost and the difficulty of obtaining a good AGN update hard to assess. It is well known that GN methods perform better than GD in terms of landscape and convergence rate, at the cost of computation. Therefore, readers need to know where this tradeoff stands in this scenario.
2. Section 5 offers the analysis under a very restrictive setting, namely an extremely small RIP constant. Such constants are hard to find in real life, and even GD methods perform very well under this setting, some achieving linear convergence when not too far from the ground truth. Readers need to know how AGN shines under this easy setting.
3. Theorem 2 offers the convergence results in terms of the function value, but this is a tricky choice because we don't know whether the distance between $X_t$ and the ground truth also shrinks at a linear rate, which is what we are after. Since a small RIP constant is used in Theorem 2, I imagine the difference in function value can be transformed into matrix distances with a minor twist; I still find it intriguing why the authors made this choice.
Technical Quality: 4
Clarity: 3
Questions for Authors: See some above.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer for your positive evaluation of our work, and we also thank you for your valuable and constructive suggestions. As for your concerns, we make detailed responses as follows.
**1. Question: The computational cost of AGN over GD.**
**Answer:** In general, the computational cost of the Gauss-Newton method is higher than that of GD. However, in our low-rank matrix recovery problem, the proposed AGN benefits from the low-rank structure, making the computational cost of solving equation (8) not much higher than that of GD, given that the dimension $d$ is much smaller than $n$. Specifically, equations (33) and (34) indicate that the main computational overhead of AGN compared to GD is the inversion of a matrix of size $d \times d$, which is manageable for small $d$.
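As a hedged illustration of the cost structure described above (the paper's actual equations (33)-(34) are not reproduced here; the ScaledGD-style preconditioned step below is only a stand-in with the same property, namely that the only extra work over plain GD is a small $d \times d$ solve):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5  # n >> d, as in the low-rank regime discussed above

U = rng.standard_normal((n, d))
G = rng.standard_normal((n, d))  # stand-in for a gradient at U

# Preconditioned step: the overhead vs. plain GD is forming and inverting
# the d x d Gram matrix U^T U -- O(n d^2 + d^3) flops, negligible for small d.
gram = U.T @ U                   # d x d
U_next = U - 0.1 * (G @ np.linalg.inv(gram))  # n x d times d x d
assert gram.shape == (d, d) and U_next.shape == (n, d)
```

For $d \ll n$, the $d^3$ cost of the inverse is dwarfed by the $nd^2$ matrix products that GD already performs.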
**2. Question: About the RIP constant and the advantage of AGN over GD under special initialization.**
**Answer:** We aim to show that the proposed AGN method can effectively avoid saddle points with linear convergence of the function value. As a result, we may not need to improve the RIP constant, though this could be a valuable direction for our future research. With this small RIP constant, even with good initialization, GD converges sub-linearly for the over-parameterized low-rank matrix sensing model, as noted by [15]. Additionally, the convergence of GD highly depends on the condition number of the target matrix, as indicated by [26]. In contrast, AGN guarantees linear convergence that is independent of the condition number of the target matrix.
**3. Question: The convergence in terms of the function value vs. the variables X.**
**Answer:** Our primary objective is to demonstrate that the AGN method avoids getting trapped in saddle points where the function value is high, but the gradient norm is low. We aim to illustrate the decrease in function value (as adopted by [15]), based on the premise that if an optimization method is hindered by saddle points, its function value will decrease very slowly near these points, as shown for GD in Fig. 1 of the main paper. The linear convergence at a constant rate of the AGN on function value clearly demonstrates that AGN does not get trapped in saddle points.
Moreover, with a small RIP constant, it is straightforward to derive the convergence of the variable $X$ based on the linear convergence of the function value and the following distance metric ${\rm{dist}}(X_t, X^*) = \min_Q \frac{1}{2}||U_tQ-U^*||_F^2 + \frac{1}{2}||V_tQ-V^*||_F^2$ with orthogonal matrix $Q$, where $X_t= \begin{bmatrix} U_t^{\top} & V_t^{\top} \end{bmatrix}^{\top}$, $X^* = \begin{bmatrix} U^{* \top} & V^{* \top} \end{bmatrix}^{\top}$ is the global optimal solution. We thank the reviewer for the helpful comment. We will include the convergence of ${\rm{dist}}(X_t, X^*)$ in our revision.
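The minimizing $Q$ in the distance metric above is the classical orthogonal Procrustes solution. A minimal sketch (variable names and test data are illustrative, not from the paper):

```python
import numpy as np

def procrustes_dist(Xt, Xstar):
    """dist(X_t, X*) = min_Q 0.5 * ||Xt Q - X*||_F^2 over orthogonal Q,
    where Xt stacks [U_t; V_t] and X* stacks [U*; V*]."""
    M = Xt.T @ Xstar                 # d x d cross-Gram matrix
    W, _, Vh = np.linalg.svd(M)
    Q = W @ Vh                       # orthogonal Procrustes solution
    return 0.5 * np.linalg.norm(Xt @ Q - Xstar) ** 2

rng = np.random.default_rng(1)
Xstar = rng.standard_normal((20, 3))
# Rotating X* by any orthogonal matrix leaves the distance at zero,
# reflecting the rotational invariance of the factorization X X^T.
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert procrustes_dist(Xstar @ R, Xstar) < 1e-10
```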
---
Rebuttal Comment 1.1:
Comment: I thank the author for your explanation, which is helpful in general. I still think favorably of this work and would like to keep my original rating.
---
Reply to Comment 1.1.1:
Title: Response to reviewer
Comment: We sincerely appreciate your time and efforts in providing us with your response. Your support has made a significant difference in our work, and we are confident that it will lead to a stronger final product. | Summary: This paper focuses on the optimization of overparameterized, non-convex low rank matrix sensing (LRMS)—an essential component in contemporary statistics and machine learning.
This paper introduces an approximated Gauss-Newton (AGN) method for tackling the non-convex LRMS problem. Notably, AGN incurs a computational cost comparable to gradient descent per iteration but converges much faster without being slowed down by saddle points. This paper proves that, despite the non-convexity of the objective function, AGN achieves Q-linear convergence from random initialization to the global optimal solution. Moreover, under certain conditions on the sensing operator, AGN demonstrates a super-linear convergence rate. The global Q-linear convergence of AGN represents a substantial enhancement over the convergence of the existing methods for the overparameterized non-convex LRMS.
Problem:
There are some severe typos that will affect the readability of this paper.
(1) Line 134, it should be $J(x_t) = \phi'(x_t)$ instead of $J(x_t) = \psi'(x_t)$.
(2) Line 155, $\mathcal{B}(X,X) = \mathcal{A} (PXX^\top Q)$, so what is $\mathcal{B}(\Delta, X_t)$ in Eq. (7)?
(3) Why does Eq. (7) relate to Gauss-Newton? I know $\phi(X) = \mathcal{B}(X,X) - b$, but what is its Jacobian? What is $J\Delta$?
Due to these problems, I think this paper is hard to read.
Strengths: This paper focuses on the optimization of overparameterized, non-convex low rank matrix sensing (LRMS)—an essential component in contemporary statistics and machine learning.
This paper introduces an approximated Gauss-Newton (AGN) method for tackling the non-convex LRMS problem. Notably, AGN incurs a computational cost comparable to gradient descent per iteration but converges much faster without being slowed down by saddle points. This paper proves that, despite the non-convexity of the objective function, AGN achieves Q-linear convergence from random initialization to the global optimal solution. Moreover, under certain conditions on the sensing operator, AGN demonstrates a super-linear convergence rate. The global Q-linear convergence of AGN represents a substantial enhancement over the convergence of the existing methods for the overparameterized non-convex LRMS.
Weaknesses: No
Technical Quality: 3
Clarity: 2
Questions for Authors: No
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your valuable comments on our work. We give detailed responses to your concerns below, which we hope will help you fully understand our work. Please feel free to let us know if you have any further concerns. We would deeply appreciate it if you could raise your score, should you find that our responses resolve your concerns.
**1. Question: About the typos.**
**Answer:** We apologize for the typo here and will thoroughly review the entire manuscript to eliminate any remaining errors.
**2. Question: For the expression $\mathcal B(X,X)= \mathcal A(PXX^TQ)$.**
**Answer:** We will restate the bilinear function as $\mathcal B(X,Y)= \mathcal A(PXY^TQ)$ to ensure clarity, such that $\mathcal B(\Delta, X) = \mathcal A(P\Delta X^TQ)$ is unambiguous.
**3. Question: About the Jacobian and the first-order approximation of $\mathcal B(X,X) - b$.**
**Answer:** It is straightforward to find that $J\Delta = \mathcal B(X, \Delta) + \mathcal B(\Delta,X)= \mathcal A(P\Delta X^TQ) + \mathcal A(PX\Delta^TQ)$.
Note that $\phi(X+\Delta) = \mathcal A(P(X+\Delta)(X+\Delta)^TQ) = \mathcal A(PXX^TQ) + \mathcal A(P\Delta X^TQ) + \mathcal A(PX\Delta^TQ) + \mathcal O(\Delta^2)$. Therefore the first-order Taylor expansion is $\mathcal A(PXX^TQ) + \mathcal A(P\Delta X^TQ) + \mathcal A(PX\Delta^TQ)$.
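The claimed Jacobian action can be checked numerically. A hedged sketch (the operator $\mathcal A$, the matrices $P, Q$, and the dimensions below are random stand-ins, not the paper's sensing setup), verifying $J\Delta = \mathcal B(\Delta,X) + \mathcal B(X,\Delta)$ against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 6, 2, 10
P = rng.standard_normal((n, n))
Qm = rng.standard_normal((n, n))
A = rng.standard_normal((m, n * n))      # linear operator A(M) = A @ vec(M)

def calB(X, Y):                          # B(X, Y) = A(P X Y^T Q)
    return A @ (P @ X @ Y.T @ Qm).ravel()

X = rng.standard_normal((n, d))
Delta = rng.standard_normal((n, d))
JDelta = calB(Delta, X) + calB(X, Delta)   # claimed Jacobian action

# Finite-difference check: phi(X + eps*Delta) - phi(X) ~ eps * J Delta,
# up to the second-order term eps^2 * B(Delta, Delta).
eps = 1e-6
fd = (calB(X + eps * Delta, X + eps * Delta) - calB(X, X)) / eps
assert np.linalg.norm(JDelta - fd) <= 1e-3 * (1 + np.linalg.norm(JDelta))
```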
---
Rebuttal Comment 1.1:
Comment: Dear reviewer L5qb,
We appreciate the time and effort you’ve dedicated to reviewing. As the discussion period nears its end, please feel free to reach out with any additional concerns. We would be happy to address them. If our responses have resolved your concerns, we would greatly appreciate it if you could consider raising your score.
Thank you very much!
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Since the end of the discussion period is approaching, if you have any further concerns please feel free to let us know; we would be pleased to discuss them with you. Thank you very much! | Summary: In this submission, the authors propose an approximated Gauss-Newton (AGN) method for the overparameterized non-convex low-rank matrix sensing problem. The authors present the corresponding theoretical analysis and partially explain why the proposed AGN method achieves fast convergence rates.
Strengths: In this submission, the authors propose a new AGN method. The main idea is natural, and the authors provide both theoretical analysis and numerical experiments whose empirical results are consistent with the theory. The main structure of this submission is clear and the presentation is of high quality.
Weaknesses: There are major two drawbacks of this paper.
1. The authors should put all the empirical results from the numerical experiments into one single section. The authors should provide more numerical experiments to compare the proposed method with other state-of-the-art algorithms, and put all the figures and tables in one numerical-experiments section.
2. The authors didn't present or prove the superlinear convergence rate of the proposed AGN algorithm.
Technical Quality: 3
Clarity: 3
Questions for Authors: The authors claim that the proposed method has a superlinear convergence rate. Where is the theorem about the superlinear convergence rate?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have presented the limitations in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer's careful review, constructive suggestions, and positive feedback. The following are our responses to your concerns. We would greatly appreciate it if you could raise your score, should you find that our responses resolve your concerns.
**1. Question: About the experiments.**
**Answer:** Thank you for the reviewer’s suggestion. We have added comparisons of the iteration complexity of the proposed AGN with GD, PrecGD, and ScaledGD ($\lambda$), as in the following table
| reference | algorithm | iteration complexity |
|:----------:| :----------------:|:-------------------|
|[19] | GD | $\kappa^8 + \kappa^6\log(\kappa n / \epsilon)$ |
| [15] | PrecGD | $\kappa^8 + \log(1/\epsilon)$|
|[27] |ScaledGD($\lambda$) | $\log \kappa \cdot \log \kappa n + \log(1/\epsilon)$|
| ours | AGN | $\log(1/\epsilon)$|
where $\kappa$ is the condition number of the target matrix and $n$ is the matrix dimension. These results are also presented in Table 1 (in the rebuttal PDF file). Additionally, we provide comparisons of the computational time of these competing methods across different dimensions in Table 2 (in the rebuttal PDF file). Both theoretical and experimental results demonstrate the superiority of the proposed AGN over the competing methods. We will include details of the experimental settings and the above comparisons in the revision.
**2. Question: About the superlinear convergence.**
**Answer:** The superlinear convergence is indicated by $c_q=0$ in Theorem 2, as stated at the end of Theorem 2. We will make this clearer in the revision.
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Dear reviewer KSmV,
We understand that reviewing is a time-consuming task and we want to express our gratitude for your dedication. We value your expertise and opinion, and hope that you can take time to have discussions with us if you have any further concerns. We would greatly appreciate it if you could raise your score if our responses have addressed your concerns.
Thank you very much!
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Since the end of the discussion period is approaching, if you have any further concerns please feel free to let us know and we are pleased to discuss with you. Thank you very much! | null | null | Rebuttal 1:
Rebuttal: Dear ACs and reviewers,
Thank you very much for your valuable comments. We truly appreciate the time and effort you've taken to review our work. We're glad the reviewers found our work valuable and provided positive feedback. Your feedback is important to us, and in accordance with the reviewers' comments, we have carefully revised the manuscript and provided detailed responses to all your concerns (please refer to our rebuttal for each reviewer for more detailed responses).
Saddle points can significantly slow the convergence of gradient descent methods in non-convex optimization problems. Previous works have shown that over-parameterization further reduces the convergence rate of gradient descent from linear to sublinear for low-rank matrix sensing. This paper demonstrates that by employing an approximation of the Gauss-Newton method, AGN can effectively and efficiently escape all saddle points with a linear convergence rate, significantly improving upon the results of gradient descent. We include in the one-page rebuttal PDF comparisons of iteration complexity (Table 1) and computational time (Table 2) of AGN with most recent state-of-the-art methods.
We greatly appreciate the reviewer’s comments on the experiment section. In our revision, we will provide more detailed settings and analyses for the figures and tables in the experiment section. We are also grateful to the reviewers for pointing out the typos and their concerns, which are invaluable for improving our work.
Once again, thank you for taking the time to review our work and provide valuable insights. We will take all the reviewers' suggestions into consideration for future improvements. We are also open to further discussions with the reviewers if there are any additional concerns.
Regards,
Authors of #4376
Pdf: /pdf/bc071ff0eab79f195e5ae2bd31daa812ca527706.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series | Accept (poster) | Summary: In this paper, the authors propose to convert general time sequences into images by employing invertible transforms and to incorporate advanced diffusion vision models, processing short- and long-range time series within the same framework. Through experiments, improvements are demonstrated on multiple tasks, such as unconditional generation, interpolation, and extrapolation.
Strengths: 1. The idea of converting time sequences to images is interesting.
2. Extensive experiments are conducted on several datasets across different tasks to evaluate the effectiveness of the proposed model. Moreover, sufficient analysis and discussion make the results more convincing.
3. The overall organization of the paper is clear, and the writing is easy to follow.
Weaknesses: 1. The time-series-to-image (ts2img) transform methods and the diffusion backbone used in the paper all come from existing works, which compromises the novelty and significance of the paper.
2. In Sec.5.1 and Sec.5.2, the ts2img transform methods used for short- and long-range generation are different. It is hard to tell whether the transform choice or the diffusion model plays a more important role. Taken together with Sec.5.5, the proposed method is only effective (i.e. outperforms others) with some specific transform method on different tasks, which conflicts with the authors’ claim that the proposed model can seamlessly process sequences with different lengths.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weaknesses part above.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of the comprehensiveness of our experiments and the clarity of our writing. We also thank the reviewer for raising concerns and points that helped deepen the discussion. Below, we address these points and are more than happy to respond to any further concerns.
> ***The time-series to image (ts2img) transform methods and diffusion backbone used in the paper are all from other existing works, which compromise the novelty and significance of the paper.***
Our work considers the problem of generative modeling of varying-length (short-to-very-long) sequences. The significance of our paper corresponds to the high significance of this problem in the time series community. Thus, any new advance (including ours) on the front of generative modeling of varying-length sequences is *significant*. In particular, our paper is the first to set a uniform, strong baseline approach for generating time series of short-to-very-long lengths, where previous works fail to do so. Particularly, while simple and easy to implement, our method is robust and outperforms other methods across different length setups with lower computational complexity. Finally, we also show that enlarging the number of parameters is not enough to improve the performance of previous works.
The *innovation (novelty)* of our paper stems from our novel view of the problem as a visual task, and from our novel solution (illustrated in Fig. 1). To the best of our knowledge, these novel contributions were not suggested in the literature. Specifically, our approach bridges progress in generative diffusion vision research with time series generation through a novel combination and design of time series to image transforms and diffusion models, yielding a robust and innovative framework for generative time series modeling. We agree with the reviewer that some of the components we use (the diffusion model, the time series to image transforms) are not new. However, we do not believe that using established building blocks compromises novelty. Many papers suggest novel solutions to challenging problems based on existing building blocks, and our work aligns with this line of research.
> ***In Sec.5.1 and Sec.5.2, the ts2img transform methods used for short- and long-range generation are different. It is hard to tell whether the transform choice or the diffusion model plays a more important role. Taken together with Sec.5.5, the proposed method is only effective (i.e. outperforms others) with some specific transform method on different tasks, which conflicts with the authors' claim that the proposed model can seamlessly process sequences with different lengths.***
Thank you for highlighting this topic and allowing us to clarify. In Table 2 in the attached PDF, we directly compare the results of Delay Embedding (DE) with the second-best methods. Our results demonstrate that our model using DE significantly and consistently outperforms the second-best results (LS4 in the long setup and DiffTime in the short setup). In Sec. 5.5, we did not initially include a comparison to the second-best method per benchmark. However, including such comparison for long and short setups, makes it clear that our framework with DE is robust, outperforming the competition in 9 out of 10 comparisons, sometimes by as much as 50%. In the final revision, we plan to suggest users to first consider DE for generative modeling, before moving on to other time series to image transforms.
Additionally, we would like to emphasize that the type of time series to image transform is set by a hyper-parameter in our framework. All modern learning algorithms are subject to certain hyper-parameters, and their performance varies depending on the specific hyper-parameter values. This dependence on hyper-parameters of ML algorithms is not considered typically as conflicting with consistency or stability claims. Finally, we also note that while our framework remains consistent across different length setups, LS4 fails completely in short generation, and DiffTime, as shown in our responses to reviews by SryL and iRua, fails in long sequence generative modeling. This further underscores the robustness of our framework with the DE transform to multiple lengths.
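As context for the Delay Embedding (DE) transform discussed above, here is a minimal round-trip sketch of a delay-embedding ts2img transform and its inverse. This is an assumption-laden illustration (window layout with hop 1 and anti-diagonal averaging), not the paper's exact implementation:

```python
import numpy as np

def delay_embed(x, rows):
    """Fold a 1-D series into a (rows x cols) 'image' of overlapping
    windows (a Hankel matrix, img[i, j] = x[i + j]); hop 1 keeps it invertible."""
    cols = len(x) - rows + 1
    return np.stack([x[i:i + cols] for i in range(rows)])

def inverse_delay_embed(img):
    """Recover the series by averaging entries along each anti-diagonal."""
    rows, cols = img.shape
    n = rows + cols - 1
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(rows):
        out[i:i + cols] += img[i]
        cnt[i:i + cols] += 1
    return out / cnt

x = np.sin(np.linspace(0, 6, 64))
img = delay_embed(x, rows=16)        # 16 x 49 image-shaped array
assert np.allclose(inverse_delay_embed(img), x)
```

The averaging in the inverse also makes the transform robust when a generative model produces an image that is only approximately Hankel-structured.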
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional information. After reconsidering the contribution of this paper, I agree with your views on novelty. And the extra experiments help address my concerns. As you said, the type of time-series to image transform is set by a hyper-parameter, but I didn’t find the details about it. Instead of manual choice, it would be better if the transform type could be automatically chosen according to the time-series length. All things considered, I will increase my rating. Good luck to you.
---
Rebuttal 2:
Title: Response
Comment: Thank you for your response and for reconsidering the evaluation of our work.
We will clarify in the main paper that image transformation is selected as a hyperparameter and direct readers to Sections B.1, B.2, and B.3 of the appendix, where all relevant hyperparameter details, including image transformation, are described. We appreciate your feedback, which has improved the clarity of our work.
Regarding the automatic choice based on sequence length, we will emphasize in the final revision that some transformations, like Delay Embedding, are robust across any length, while others, such as STFT, are better suited for specific lengths, like long or ultra-long sequences.
We're happy to address any further questions you may have. | Summary: The paper argues for the use of image generative modelling architectures for the time-series generative modelling task. Doing so involves converting a time-series to an image-shaped object, modelling it as an image, and then converting back.
Strengths: - This is a simple idea that is shown to work well in most of the experimental settings
- It allows the use of image architectures, which have been extensively investigated in the literature, for other domains
- The paper is mostly clearly written
Weaknesses: Weaknesses:
- The authors mention that their approach "requires slightly higher computational resources". They go into more detail in Appendix C.10 but I would appreciate further detail. In particular, the cost is given in terms of hours/minutes, but I cannot see anything about e.g. GPUs used or FLOPs used. Are the number of GPUs matched between methods? A comparison in terms of GPU-hours for the same GPU type would be informative. To be really confident of the improvement, ideally there would be a comparison in which the methods were compared when given equal GPU-hours. Investigating how the performance of different methods scales as training FLOPs are increased would also be very interesting.
- It is a little unclear whether the advantage shown comes from the fact that image architectures are just better-explored than time-series architectures, or whether the inductive bias of transforming into an image shape is helpful. Can the authors comment on this? An interesting potential experiment would be to try training an image architecture with an older architecture and compare its performance.
- A more detailed description of the Predictive and Discriminative metrics used in Section 5.1 would be helpful.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer SryL for recognizing the simplicity of our approach, its ability to address existing shortcomings, and the extensive evaluation in which we outperform baselines. We also thank them for their observations, comments, and suggestions, which helped deepen our discussion and improve the paper. Below, we address the reviewer's concerns. Given the opportunity, we will incorporate the responses below into the final revision.
> ***... is given in terms of hours/minutes, but I cannot see anything about e.g. GPUs used or FLOPs used. Are the number of GPUs matched between methods?***
Yes, the comparison is done in exactly the same environment. The software environments we use are CentOS Linux 7 (Core) and PYTHON 3.9.16, and the hardware is NVIDIA RTX 3090. In addition, all experiments run on a single GPU for all methods.
> ***... Investigating how the performance of different methods scales as training FLOPs are increased would also be very interesting.***
We analyze the FLOPs used per method and show the results in Table 7. We observe that our method is the most efficient FLOP-wise across all sequence lengths. We could not run LS4 on KDD Cup with $>100$M parameters, and thus, we omit the FLOPs computation for $100$M and $150$M set-ups.
> ***...It is a little unclear whether the advantage shown comes from the fact that image architectures are just better-explored than time-series architectures, or whether the inductive bias of transforming into an image shape is helpful...***
We thank the reviewer for pointing this out. Our goal in this paper is to leverage recent advancements in computer vision to develop an elegant and robust solution for time-series data, addressing different sequence lengths and setting a baseline for handling short, long, and ultra-long sequences. We aim to take advantage of the fact that image architectures are more thoroughly explored. We hypothesize that the improvements we observe are largely due to the more advanced development of image architectures compared to time-series architectures. To further investigate this and following the reviewer's suggestion, we used NVAE [2], and StyleGAN [3], instead of the diffusion model. We observe the following results:
| Model | Dataset | Marginal ↓ | Classifier ↑ | Predictor ↓ | Disc ↓ | Pred ↓ |
|-------------|-----------|------------|--------------|-------------|------------|------------|
| **Style GAN** | KDD | 0.02 | 0.001 | 0.233 | - | - |
| | NN Daily | 0.02 | 0.091 | 2.1 | - | - |
| | Stocks | - | - | - | 0.276 | 0.042 |
| **NVAE** | KDD | 0.008 | 0.031 | 0.107 | - | - |
| | NN Daily | 0.02 | 0.089 | 0.6 | - | - |
| | Stocks | - | - | - | 0.081 | 0.049 |
The results imply that using the more recently and better-explored architecture yields better results with the same transformations. This finding strengthens our hypothesis regarding the robustness and efficiency of diffusion models.
> ***...A more detailed description of the Predictive and Discriminative metrics...***
We used the benchmark proposed in [1] to evaluate short-term unconditional generation, and we adhere to its evaluation protocol. This protocol comprises two scores: a predictive score and a discriminative score. Discriminative score: given the original data labeled as 'true' and generated data labeled as 'fake', we split the data into $80$% train and $20$% test sets. Then, we train a model to discriminate between the samples. Finally, we report $|0.5 - \text{acc}|$, where acc is the model's accuracy over the test set. The lower the score, the better the generated data, since the model struggles to discriminate between the two. Predictive score (train-on-fake, test-on-real): given the generated data, we train a sequence-prediction model to predict next-step temporal vectors over each generated input sequence. Finally, we evaluate the trained model on the original data. Performance is measured with the mean absolute error (MAE). Importantly, these metrics are used in most time series generation papers for short sequences.
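The discriminative-score protocol described above can be sketched as follows. This is a hedged illustration: the benchmark in [1] uses a recurrent discriminator on raw sequences, whereas the logistic classifier and the synthetic Gaussian stand-in data here are simplifying assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
real = rng.normal(0.0, 1.0, size=(500, 24))   # stand-in for real (flattened) sequences
fake = rng.normal(0.1, 1.0, size=(500, 24))   # stand-in for generated sequences

X = np.vstack([real, fake])
y = np.r_[np.ones(500), np.zeros(500)]        # 'true' vs. 'fake' labels
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a discriminator, then report |0.5 - accuracy| on the held-out 20%.
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
disc_score = abs(0.5 - clf.score(Xte, yte))   # 0 = indistinguishable (best)
assert 0.0 <= disc_score <= 0.5
```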
The rest of our tables are presented in the attached additional PDF file.
[1] Time-series Generative Adversarial Networks by Yoon et al.
[2] NVAE: A Deep Hierarchical Variational Autoencoder by Arash Vahdat, Jan Kautz.
[3] A Style-Based Generator Architecture for Generative Adversarial Networks by Karras et al.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response to my concerns. Your comment that your aim is to "leverage recent advancements in computer vision to develop an elegant and robust solution for time-series data" does help to clarify the contribution of this paper for me. I have raised my score to a 6. | Summary: The paper proposes using invertible transforms to map varying-length time series to images. Using this technique, generative modeling of time series can be done using diffusion vision models. The authors demonstrate state-of-the-art performance on unconditional generation, interpolation, and extrapolation on short and long time series benchmarks. An additional contribution of the paper is the introduction of a novel benchmark for ultra-long (>10k timesteps) time series.
Strengths: - The main idea (transforming time series to images to exploit existing image-generation methods) is quite elegant and investigated well.
- The experiments are thorough: unconditional and conditional generation is evaluated on short and long time series benchmarks. Furthermore, the results are convincing.
Weaknesses: - One crucial point is that the proposed method involves 1-2 orders of magnitude more parameters than e.g. LS4, a close competitor (Table 16). I think this point should be more clearly emphasized and investigated in the main paper.
- The authors mention that in terms of wall-clock time, their method's training and inference efficiency is comparable despite the difference in size. However, this seems to be mostly a statement on how much work has been done improving efficiency of image diffusion models -- it may be possible to drastically improve efficiency of existing time series methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the number of parameters compare between the proposed method and all the other methods you compare to, e.g. in Tables 1 and 2?
- Were any experiments done evaluating how the proposed method compares to e.g. a parameter-matched LS4 model?
- Similarly, were any scaling law experiments done to investigate how the performance of the proposed method improves as the size of the image diffusion model increases?
- For example, it would be quite interesting to find that the scaling of the image diffusion model architecture is better than the scaling of e.g. LS4 or time series diffusion architectures.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately address the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are thankful to Reviewer iRua for recognizing the elegance of our approach and its generalizability. We would also like to thank them for their observations, comments, and suggestions that helped deepen our discussion and improve the paper. Below, we address the reviewer's concerns. Given the opportunity, we would be happy to incorporate these points into a final revision.
> ***One crucial point is that the proposed method involves 1-2 orders of magnitude more parameters than e.g. LS4, a close competitor...***
Thank you for raising this question. Our original submission included models with 1-2 orders of magnitude more parameters than LS4. However, following the reviewer's questions and suggestions, we performed a thorough scaling study. We find that our approach still achieves SOTA results on short sequences with one order of magnitude fewer parameters than LS4, and attains SOTA results on longer sequences with models on par with LS4 in size. Our analysis is provided in Tables 3 and 4 for the DiffTime method, Table 5 for LS4, and Table 6 for our method. In addition, scaling-law results for each method are in Fig. 1 in the attached additional PDF file.
> ***... how much work has been done improving efficiency of image diffusion models -- it may be possible to drastically improve efficiency of existing time series methods.***
The primary motivation of our work was to leverage recent advancements in computer vision to enhance time-series data analysis. Our design can directly utilize the work done to improve diffusion models, as well as future extensions. In contrast, while it may be possible to drastically improve time-series methods, whether and how remains a major unknown. At the most basic level, our approach uses convolutions, whose computation is well supported by GPU hardware, whereas time-series techniques are mostly transformer-based and do not yet enjoy the same degree of hardware-level support.
> ***...number of parameters compare between the proposed method and all the other methods...***
We have extended Tab. 16 from App. C.10 and included the updated table as Table 1. Specifically, we provide a comparison with the second-best methods for short sequences, DiffTime and GT-GAN. However, we emphasize that these methods do not scale well: DiffTime fails for ultra-long sequences, and GT-GAN even for long sequences, as its time complexity depends on the sequence length.
> ***... evaluating how the proposed method compares to, e.g., a parameter-matched LS4 model...***
> ***... scaling law experiments done to investigate how the performance of the proposed method improves as the size of the image diffusion model increases...***
We extend our evaluation by increasing the parameters of LS4 and DiffTime while decreasing the parameters of our model (Tables 3, 4, 5, and 6). This evaluation includes three datasets: Stocks (short), nn5daily (long), and KDD Cup (ultra-long). Evidently, increasing the number of parameters does not improve model performance. We believe this is a significant contribution of our paper, as it demonstrates that simply enlarging previous methods does not necessarily enhance results. Moreover, in the case of LS4 on the KDD Cup dataset, increasing the model's parameters to 100 million causes out-of-memory failures, making it infeasible to run even a batch size of one with the current resources used for training all models.
Finally, we present our results in Figure 1. We appreciate the reviewer for highlighting this issue and would like to include these results in our revision.
The figure and the tables are presented in the attached additional PDF file.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response! The new results (regarding matched parameter / FLOPs comparisons) are enlightening and compelling. I will raise my score accordingly.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for your response and for reevaluating our work. We are happy that our response addresses all your concerns and questions, and we would be more than willing to address any additional ones you may have. | null | null | Rebuttal 1:
Rebuttal: We have attached the PDF. References from the rebuttal comments are directed to the tables or figures in this PDF.
Pdf: /pdf/7190df8857151b9fbc3d43c5a2e493ca8e4d685a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning from Noisy Labels via Conditional Distributionally Robust Optimization | Accept (poster) | Summary: This paper studies the problem of learning from noisy labels by using conditional distributionally robust optimization (CDRO) to estimate true label posterior. The authors formulate the problem as minimizing the worst-case risk within a distance-based ambiguity set centered around a reference distribution and derive upper bounds for the worst-case risk.
Strengths: 1. Learning from noisy labels is an important and practical topic.
2. This paper provides rigorous theoretical analyses of the generalization bounds for the worst-case risk.
3. The authors offer a guideline for balancing robustness and model fitting by deriving the optimal value for the Lagrangian multiplier.
Weaknesses: 1. In section 3.1, the theoretical analyses are only conducted in the case of binary classification while multi-class classification is more common in practice.
2. In lines 192-193 and 198, the authors make assumptions about the function $\mathcal{T}$, however, the convexity or the concavity of $\mathcal{T}$ may be too strict.
3. According to Table 2, the performance improvement of the proposed method is not significant across four real datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the theoretical analyses be extended to the multi-classification case?
2. In Algorithm 1 line 1, does the procedure of "Warm up classifiers $\psi^{(1)}$ and $\psi^{(2)}$" affect the generation of pseudo-empirical distribution? What if the warm-up is not good enough for pseudo-empirical distribution?
3. The intuition and rationale of using the CDRO framework to improve the performance of learning from noisy labels are not clear, please explain them.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No, the authors do not mention the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our manuscript. We appreciate your thoughtful comments and suggestions. We will carefully incorporate the necessary revisions to address your feedback in the new version of the manuscript. Below, we highlight our responses to each of your comments.
1. Notably, building on our initial development on binary classes, we have now extended our theoretical results to the multi-class scenario. This extension provides a comprehensive understanding of our approach's applicability across different classification settings.
* We extend the optimal action result in Theorem 3.1 to the multi-class scenario by identifying the extreme points of the corresponding linear programming problem.
* Theorem 3.2 from the initial submission extends naturally to the multi-class scenario using analogous proof techniques as those employed for the binary case.
* For brevity, we have omitted the detailed multi-class results but are prepared to include the extended proofs in the revised manuscript. The specific results for Theorems 3.1 and 3.2 can be found in the **Official Comment**.
2. Our theoretical results are based on the assumption that the function $\mathcal{T}$ is convex or concave with respect to its arguments, rather than the network parameters or the input data. In the context of our experiments, we utilize the cross-entropy loss function $\ell$. This results in $\ell$ being the logarithmic function, which is known to be concave in its argument. Therefore, our theoretical framework is aligned with the properties of the functions used in our experiments.
3. The discrepancy may arise because real datasets often have more complex noise generation processes compared to the simplified models used for noise estimation. To address this, we have employed advanced methods for estimating noise transition matrices to better reflect the intricate noise patterns observed in real data. The results using these improved transition matrix estimation techniques are detailed in **Table 4** of the attached PDF.
4. Answers to the questions
* (i) Yes, our theoretical analyses can indeed be extended to multi-class classification scenarios, as previously discussed. In response to your feedback, we will refine the presentation in our revision to explicitly address and integrate these multi-class scenarios, which will ensure a clearer and more comprehensive treatment of the subject.
* (ii) About the warm up stage
* Thank you for this thoughtful question. In Algorithm 1, the procedure of "warming up classifiers $\psi^{(1)}$ and $\psi^{(2)}$" is intended to stabilize the classifiers before they generate pseudo-empirical distributions. In our experiments, we follow established practices [E] and use 30 warm-up epochs, selecting the model with the highest validation accuracy during this phase.
* Our results, shown in **Figure 1** of the attached PDF, indicate that the model tends to overfit, especially under higher noise rates. This observation suggests that the results presented in our paper are not derived under the most optimal warm-up conditions, which may affect the quality of the pseudo-empirical distributions.
* To rigorously assess the impact of the warm-up stage, we have now conducted experiments with varying numbers of warm-up epochs (10, 20, 30, 40) on both our method and baseline approaches that also rely on warm-up. These results are presented in **Figure 2** of the attached PDF, demonstrating the effects of different warm-up durations on performance.
* After the warm-up phase, classifiers $\psi^{(1)}$ and $\psi^{(2)}$ continue to be updated using the proposed method (as outlined in Line 8 of Algorithm 1), and the pseudo-empirical distribution is constructed using these updated classifiers. This approach ensures that the classifiers are continuously refined, which helps in mitigating any initial limitations from the warm-up stage and improves the overall reliability of the pseudo-empirical distributions.
* (iii) About the intuition and rationale of using the CDRO framework
* The CDRO framework addresses the challenge of noisy labels by focusing on model robustness in the presence of potential misspecifications. Many existing methods estimate the true label posterior $P^*_{y|x,\tilde{y}}$ using a model denoted $P_{y|x,\tilde{y}}$. However, this estimated posterior can deviate from the true distribution, known as model misspecification [H].
* By Bayes’s theorem, the true label posterior is proportional to the noise transition probability $P^*_{\tilde{y}|x, y}$. Existing work [F-G] shows that with large sample sizes, the estimated noise transition matrix converges to the true $P^*_{\tilde{y}|x, y}$ asymptotically, and the Wasserstein distance between the estimated and true label posteriors converges to zero. Therefore, we use a Wasserstein ball $\Gamma_\epsilon(P_{y|x,\tilde{y}})$ centered around the estimated posterior (referred to as the "reference probability distribution") to measure and mitigate misspecification, as defined in Equation (2) in our paper.
* In this context, the Wasserstein distance is employed to measure the misspecification of the estimated true label posterior. When the sample size $n$ is sufficiently large, the true posterior $P^*_{y|x,\tilde{y}}$ lies within $\Gamma_\epsilon(P_{y|x,\tilde{y}})$. Thus, by minimizing the worst-case risk over $\Gamma_\epsilon(P_{y|x,\tilde{y}})$, we can minimize an upper bound of the risk based on the underlying true label posterior.
* Our approach does not aim to precisely estimate the noise transition matrix or the true posterior but instead focuses on minimizing the worst-case risk over this Wasserstein ball. This method robustly trains the classifier despite potential misspecifications, which serves as a valuable complementary approach to traditional methods that directly apply the estimated posterior.
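For clarity, the worst-case objective described in this discussion can be written schematically as follows (our paraphrase of the setup around Equation (2); $\ell$ denotes the loss and $W_1$ the Wasserstein distance):

```latex
\min_{\psi}\ \sup_{Q \in \Gamma_\epsilon(P_{y|x,\tilde{y}})}
  \mathbb{E}_{y \sim Q}\big[\ell\big(\psi(x), y\big)\big],
\qquad
\Gamma_\epsilon(P) = \bigl\{\, Q : W_1(Q, P) \le \epsilon \,\bigr\}.
```

The inner supremum searches over all distributions within distance $\epsilon$ of the estimated posterior, so the minimizing classifier is hedged against that amount of misspecification.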
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. I am still wondering if the warm-up is not good enough for generating pseudo-empirical distribution, will the proposed method work robustly? Or will the warm-up guarantee a good pseudo-empirical distribution?
---
Reply to Comment 1.1.1:
Title: Regarding the warm-up phase in generating a pseudo-empirical distribution
Comment: Thank you for highlighting the concerns regarding the warm-up phase in generating a pseudo-empirical distribution. We appreciate your continued feedback.
The warm-up alone cannot guarantee a good pseudo-empirical distribution. However, our algorithm is inherently designed to accommodate an inaccurate pseudo-empirical distribution from the warm-up phase. Therefore, even if the warm-up is not good enough, our method can still work robustly, as shown in **Figure 2** of the attached PDF.
Below, we provide a detailed explanation of why our approach remains effective, even when the warm-up models are not good enough.
1. The classifiers $\psi^{(1)}$ and $\psi^{(2)}$ are continuously updated as the algorithm proceeds, and the pseudo-empirical distribution is then **constructed based on these updated classifiers**. Therefore, even if the warm-up stage is not good enough, as our algorithm keeps refining the classifiers, it will mitigate any initial bias from the warm-up stage and enhance the overall reliability of the pseudo-empirical distributions.
2. In addition, **the way we construct the pseudo-empirical distribution also enables a more robust estimation**. More specifically, we leverage the approximated true label posterior, $P_{y|x,\tilde{y}}$, to assign robust pseudo labels and subsequently construct the pseudo-empirical distribution. Instead of simply assigning the label corresponding to the highest probability as the pseudo label [a][b], **our approach takes into account both the highest and second-highest predicted probabilities**. We assign a pseudo label only if the ratio of these probabilities exceeds a specified threshold. This strategy ensures that pseudo labels are given only to instances with high confidence, effectively filtering out uncertain data when the warm-up models may not be sufficiently accurate.
Here we also provide additional empirical results to demonstrate the robustness of our approach to the imperfect warm-up models. As shown in Plots (a), (c), and (e) of **Figure 1** in the attached PDF, the model is underfitted with a 10-epoch warm-up phase and overfitted with a 40-epoch warm-up phase, especially at higher noise rates, suggesting that the warm-up models are not good enough. To assess the quality of the generated pseudo-empirical distributions in these scenarios, we present the average accuracies of the robust pseudo-labels selected using the proposed method during training in **Tables 1-3**. Specifically, as shown in **Table 1**, when the noise ratio is high and the model is warmed up for only 10 epochs, the initial average accuracy of the robust pseudo-labels is $89.41_{\pm 3.66}$, reflecting a less reliable warm-up model. However, accuracy increases to $97.72_{\pm 2.57}$ by the final epoch, demonstrating the robustness of the proposed method in constructing the pseudo-empirical distribution.
Thank you once again for your response. In preparing a revised manuscript, we plan to include detailed comments that address the issue of the warm-up phase to provide greater clarity. We hope our replies have sufficiently addressed all the concerns raised. Should you require any additional details, we would be happy to provide them. We are open to further discussions and ready to clarify any remaining questions or concerns you may have.
**References**
[a] Tanaka, Daiki, et al. "Joint optimization framework for learning with noisy labels." CVPR (2018).
[b] Han, Jiangfan, Ping Luo, and Xiaogang Wang. "Deep self-learning from noisy labels." ICCV (2019). | Summary: This paper studied the issue of potential misspecification of estimated true label posterior in learning from noisy labels. To alleviate the impact of this issue, it formulated learning from crowds as a conditional distributionally robust optimization problem, where a robust pseudo-empirical distribution is used as a reference probability distribution. Experiments on multiple crowdsourcing datasets verified the effectiveness of the proposed method.
Strengths: 1. The proposed methodology has a solid theoretical foundation.
2. The writing is very clear and easy to understand.
3. The performance of the proposed method is very promising in the experiments.
Weaknesses: 1. The focused problem, i.e., the potential misspecification of estimated true label posterior, has not been defined and measured. And it is not very clear to what extent the proposed method has solved the impact of this problem.
2. The contribution of this work is about "learning from noisy labels", while the context of this work seems limited to "learning from crowds". They are not consistent, since "learning from noisy labels" also includes the case with one annotator. However, this work didn't discuss and test on the case with one annotator.
3. Although the way to construct a robust pseudo-empirical distribution has a theoretical motivation, it seems similar to the pseudo-labeling methods [1,2] in learning with noisy labels. What are the differences and advantages of the proposed way and the pseudo-labeling methods?
4. As we know, in learning with noisy labels or learning from crowds, the estimation of noise transition probabilities is very important, while the way to approximate noise transition probabilities in this work is heuristics. Why not use those thoroughly studied methods, e.g., [3,4,5], or more advanced instance-dependent transition matrix estimation methods [6,7]?
[1] Joint optimization framework for learning with noisy labels. CVPR 2018
[2] Deep Self-Learning From Noisy Labels. ICCV 2019
[3] Deep learning from crowdsourced labels: Coupled cross-entropy minimization, identifiability, and regularization. ICLR 2023
[4] Learning from noisy labels by regularized estimation of annotator confusion. CVPR 2019
[5] Deep learning from crowds. AAAI 2018
[6] Label correction of crowdsourced noisy annotations with an instance-dependent noise transition model. NeurIPS 2023
[7] Transferring annotator- and instance-dependent transition matrix for learning from crowds. TPAMI 2024
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What {(·, ·)}^p means in Line 133?
2. There is a little abuse of notations. ψ means the classifier in Line 88, while it represents the predicted probabilities in Line 190.
3. How to address the annotation sparse problem when approximating noise transition probabilities?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our manuscript. We appreciate your thoughtful comments and suggestions. We will carefully incorporate the necessary revisions to address your feedback in the new version of the manuscript. Below, we highlight our responses to each of your comments.
### 1. About the focused problem of our paper
* In the problem of learning with noisy labels, many existing approaches use various algorithms to estimate the true label posterior $P^*_{y|x,\tilde{y}}$, typically with a model denoted as $P_{y|x,\tilde{y}}$. However, the estimated posterior can deviate from the true underlying distribution, a phenomenon known as model misspecification [H].
* By Bayes’s theorem, the true label posterior is proportional to the noise transition probability $P^*_{\tilde{y}|x, y}$. Existing works [F-G] demonstrate that the estimated noise transition matrix asymptotically converges to the true $P^*_{\tilde{y}|x, y}$ under certain conditions. According to the Vitali convergence theorem [I], the associated $L_1$ Wasserstein distance converges to zero in this situation. Therefore, we consider a Wasserstein ball $\Gamma_\epsilon(P_{y|x,\tilde{y}})$ centered around the estimated true label posterior (referred to as the "reference probability distribution"), as defined in Equation (2) in our paper. In this context, the Wasserstein distance is used to measure the misspecification of the estimated true label posterior. When the sample size $n$ is sufficiently large, the true posterior $P^*_{y|x,\tilde{y}}$ will lie within $\Gamma_\epsilon(P_{y|x,\tilde{y}})$. Thus, by minimizing the worst-case risk over $\Gamma_\epsilon(P_{y|x,\tilde{y}})$, we can effectively minimize an upper bound on the risk based on the true label posterior.
* Note that our work does not aim to precisely estimate the noise transition matrix or alleviate the misspecification of estimated true label posteriors. Instead, we focus on **robustly training a classifier despite misspecification** by considering the worst-case risk within a Wasserstein ball. Therefore, our work complements existing methods that rely directly on the estimated true label posterior, thus providing a robust alternative that accounts for potential inaccuracies, as shown in **Table 4** of the attached PDF.
### 2. Learning from noisy labels v.s. Learning from crowds
* Thank you for highlighting this issue. The theoretical framework presented in our paper is applicable to both single-annotator ($R=1$) and multiple-annotator ($R>1$) scenarios. In our experiments, we generate a total of $R$ annotators and then randomly select one annotation per instance from these $R$ annotators. This approach underscores that our focus is on "learning from noisy labels" instead of "learning from crowds" as the latter implies a specific emphasis on aggregating multiple annotations per instance.
* To thoroughly evaluate the scenario with a single annotator, we have now set $R=1$ and conducted additional experiments on both the CIFAR10 and CIFAR100 datasets. The results of these experiments are detailed in **Table 2** of the attached PDF. We will incorporate these results into our paper to address this scenario comprehensively in the forthcoming revision.
### 3. About the pseudo-empirical distribution
* The pseudo-labeling methods discussed in [1-2] rely solely on the highest predicted probability for each instance, assigning the label corresponding to this highest probability as the pseudo label. In contrast, our approach considers both the highest and the second-highest predicted probabilities. We assign a pseudo label only if the ratio of these probabilities exceeds a specified threshold. This strategy ensures that pseudo labels are assigned only to instances with high confidence, effectively filtering out uncertain data. The accuracies of the pseudo labels throughout the training process are detailed in Figures 3-4 in Appendix B.2 of our paper, which demonstrates the effectiveness of our robust approach.
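The selection rule described above can be sketched as follows (a minimal illustration of the ratio test; the function name and the threshold value are ours, not the paper's actual setting):

```python
import numpy as np

def robust_pseudo_labels(probs, ratio_threshold=2.0):
    """Assign a pseudo label only when the top predicted probability exceeds
    the second-highest by a given ratio; return -1 (abstain) otherwise.
    `ratio_threshold` is an illustrative value, not the paper's choice."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs, axis=1)            # ascending per row
    rows = np.arange(len(probs))
    top = probs[rows, order[:, -1]]              # highest probability
    second = probs[rows, order[:, -2]]           # second-highest probability
    labels = order[:, -1].copy()                 # candidate pseudo labels
    # abstain on low-confidence instances (small top/second ratio)
    labels[top / np.maximum(second, 1e-12) < ratio_threshold] = -1
    return labels
```

Instances marked -1 are simply excluded from the pseudo-empirical distribution, so only high-confidence predictions contribute.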
### 4. About estimation of noise transition probabilities
* Our work does not focus on precisely estimating the noise transition matrix; thus, we employ a straightforward estimation method in our experiments for simplicity. However, our approach is versatile and can be integrated with various methods for estimating the noise transition matrix or the true label posterior. We have conducted additional experiments using more advanced transition matrix estimation methods, and the results are presented in **Table 4** of the attached PDF. As shown in Table 4, incorporating these advanced estimation methods significantly improves the test accuracies of our proposed approach, which highlights the robustness and adaptability of our method.
### 5. Answers to the questions
* Thank you for pointing out the typo. It should indeed be ${c(\cdot,\cdot)}^p$. We appreciate your attention to detail and will correct this error in the revised version of the paper.
* Thank you for highlighting this. We will clarify this in the revision. Specifically, we use $\psi:\mathcal{X}\rightarrow \mathcal{S}^{K-1}$ to denote the predicted probabilities, where $\mathcal{S}^{K-1}$ represents the $(K-1)$-dimensional simplex. Consequently, the classifier is defined as $\max_{j \in [K]} \psi(\mathbf{x})_j$.
* In our experiments, we generate $R=5, 10, 30, 50, 100$ annotators and select a single annotation per instance for training. As demonstrated in Figure 1 of our paper, our proposed method consistently outperforms the baselines, even as annotation sparsity increases with the total number of annotators. This robust performance highlights the effectiveness of our approach in handling varying levels of annotation density.
---
Rebuttal 2:
Comment: I have read through the comments of other reviewers and the corresponding authors' responses. I thank the authors for the detailed response. It has addressed most of my concerns. My remaining concerns are about the analysis of how to solve the sparse annotation problem in the proposed method:
- The noise transition estimation method in Section 3.2, which achieves the estimation via frequency counting, is in my opinion not robust to typical sparse-annotation cases. In such cases, annotators may label only a small number of instances, and the small amount of data per annotator will make the estimation very inaccurate, especially when the class space is large; the data with small losses will be even fewer than the original data. This inaccuracy of the estimated noise transition may further influence the accuracy of the pseudo-empirical distribution.
- As mentioned in the response, when the sample size $n$ is sufficiently large, the true distribution $P^*_{\tilde{y}|x, y}$ will lie within $\Gamma_\epsilon(P_{y|x,\tilde{y}})$. Will the sparse annotation problem influence this?
- Although the experiments with different levels of annotation sparsity (5-100 annotators) have been conducted in CIFAR10 (Fig.1), the setting seems not very typical. For example, in the real-world CIFAR10N dataset (10 classes), there are 747 annotators, each labeling 201 instances on average; in the CIFAR100N dataset (100 classes), there are 519 annotators, each labeling 96 instances on average; in the LabelMe dataset (8 classes), there are 59 annotators, each labeling 47 instances on average. I think the severe annotation sparsity may influence the performance improvement of the proposed method in real-world datasets.
---
Rebuttal Comment 2.1:
Title: Regarding the impact of sparse annotation
Comment: Thank you for raising these important points regarding the noise transition estimation method and the impact of sparse annotation. In preparing a revised manuscript, we plan to include additional comments to highlight these issues.
* We acknowledge that the frequency-counting method for noise transition estimation may face challenges when the number of labeled instances per annotator is small. However, our method has already accounted for the potential misspecification of the estimated true label posterior and exhibits tolerance and robustness. Specifically,
* By the nature of our design, the proposed algorithm can accommodate imperfect transition estimation by distributionally robust optimization (i.e., Eq. (2) in our paper). For a less accurate estimated true label posterior, we can **choose a larger** $\epsilon$ in the uncertainty set $\Gamma_\epsilon(P_{y|x,\tilde{y}})$ to tolerate the inaccuracy.
* Additionally, **the way we construct the pseudo-empirical distribution also enables a more robust estimation of it**. More specifically, we leverage the approximated true label posterior, $P_{y|x,\tilde{y}}$, to assign robust pseudo labels and subsequently construct the pseudo-empirical distribution. Instead of simply assigning the label corresponding to the highest probability as the pseudo label [a][b], **our approach takes into account both the highest and second-highest predicted probabilities**. We assign a pseudo label only if the ratio of these probabilities exceeds a specified threshold. This strategy ensures that pseudo labels are given only to instances with high confidence, effectively filtering out uncertain data.
* Empirically, as shown in Figure 4 on page 25 of our paper, when we increase the total number of annotators (thus increasing annotation sparsity), the accuracy of the selected pseudo labels remains high (around 95% even at high noise rates). This further demonstrates the robustness of our method in generating a reliable pseudo-empirical distribution.
* On the other hand, we truly appreciate your suggestion, which can further enhance the quality of this work. We will add a limitation section to discuss this issue, and add additional experiments to incorporate the following possible solutions:
* One possible approach is to **employ regularization techniques** to mitigate the impact of small sample sizes by smoothing the estimates and reducing sensitivity to outliers. For instance, the theories in [a] are established under incomplete labeling paradigm.
* Another approach is to **incorporate subgroup structures** for the annotators using a multidirectional separation penalty (MDSP) [b-c].
* Additionally, as mentioned in Remark 3.4 of our paper, the estimation of the true label posterior $P^*_{y|x, \tilde{y}}$ is not limited to Bayes’s rule alone. We can also **directly model** $P^*_{y|x, \tilde{y}}$ by aggregating the data and noisy label information by maximizing the $f$-mutual information gain as in [d].
* Your concern about whether sparse annotation affects the coverage of the true distribution is indeed valid and insightful. Our theoretical framework assumes that, given a sufficiently large sample size $n$, the true label posterior will be captured within $\Gamma_\epsilon(P_{y|x,\tilde{y}})$. In finite-sample settings, sparse annotation can impact the accuracy of this estimation. In this case, we choose a larger $\epsilon$ in the uncertainty set to incorporate the potential misspecifications. Specifically, according to the proof of New Theorem 3.2, the $\epsilon$ in the uncertainty set should be taken in $(0,1/K)$ for the $K$-class classification problem. Therefore, we can select a larger $\epsilon$ (closer to $1/K$ rather than 0) if the estimated true label posterior $P_{y|x, \tilde{y}}$ is not sufficiently precise.
* Thanks for your suggestions. Regarding the real-world datasets, the results using the frequency-counting method for noise transition estimation are presented in Table 2 of our paper, and Table 4 of the PDF attached to our rebuttal. Notably, our method consistently outperforms other baselines, especially when more advanced noise transition matrix estimation methods are incorporated (i.e., Table 4 of the attached PDF). Moreover, to further integrate your feedback, we will also incorporate the solutions to the sparse annotation scenario mentioned in the second bullet point and conduct additional experiments on both real-world datasets and the CIFAR10 dataset with sparser annotations. **Due to time constraints, we will try to share the experimental results in the Official Comment in 24 hours.**
In summary, we appreciate your continued feedback and recognize the challenges posed by sparse annotation in noise transition estimation. Our revised manuscript will incorporate additional comments and clarifications to address these concerns. Thank you again for your constructive feedback.
---
Reply to Comment 2.1.1:
Title: References
Comment: [a] Ibrahim, Shahana, Tri Nguyen, and Xiao Fu. "Deep learning from crowdsourced labels: Coupled cross-entropy minimization, identifiability, and regularization." ICLR (2023).
[b] Tang, Xiwei, Fei Xue, and Annie Qu. "Individualized multidirectional variable selection." Journal of the American Statistical Association (2021).
[c] Xu, Qi, et al. "Crowdsourcing Utilizing Subgroup Structure of Latent Factor Modeling." Journal of the American Statistical Association (2024).
[d] Cao, Peng, et al. "Max-mig: an information theoretic approach for joint learning from crowds." ICLR (2019).
---
Reply to Comment 2.1.2:
Title: Additional Experimental Result (1)
Comment: * We conducted additional experiments to address the annotation sparsity issue. In particular, we incorporated regularization techniques (specifically, the GeoCrowdNet (F) and GeoCrowdNet (W) penalties) into our method. We then compared the results against those obtained using the traditional frequency-counting approach to estimate the noise transition matrices.
* For these experiments, we generated three groups of annotators with average labeling error rates of approximately 26%, 34%, and 42%, labeled as IDN-LOW, IDN-MID, and IDN-HIGH, respectively. These groups represent low, intermediate, and high error rates, allowing us to evaluate the robustness of our method under varying levels of noise. Due to time constraints, we generated $R=200$ annotators for each group. However, in the revised manuscript, we plan to use a larger $R$ to further validate our findings and strengthen the results.
* Table 1 presents the performance of our proposed method on the CIFAR10 ($R=200$), CIFAR10N, and LabelMe datasets, comparing the outcomes when different approaches are used to estimate the noise transition matrices. In addition, Tables 2–5 display the average accuracies of the robust pseudo-labels generated by our method in the training process. These pseudo-labels play a crucial role in constructing the pseudo-empirical distribution.
**Table 1:** Accuracies of learning the CIFAR10 ($R=200$), CIFAR10N, and LabelMe datasets with different noise transition matrix estimation methods.
| | CIFAR10: IDN-LOW | CIFAR10: IDN-MID | CIFAR10: IDN-HIGH | CIFAR10N | LabelMe | Animal10N |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Ours + frequency-counting | $86.01_{\pm 0.67}$ | $85.48_{\pm 0.58}$ | $85.07_{\pm 0.59}$ | $88.07_{\pm 0.34}$ | $83.35_{\pm 1.16}$ | $82.35_{\pm 0.34}$ |
| Ours + GeoCrowdNet (F) penalty | $90.89_{\pm 0.21}$ | $90.27_{\pm 0.46}$ | $89.25_{\pm 0.63}$ | $88.30_{\pm 0.13}$ | $86.20_{\pm 0.48}$ | $83.12_{\pm 0.42}$ |
| Ours + GeoCrowdNet (W) penalty | $90.99_{\pm 0.42}$ | $90.23_{\pm 0.27}$ | $89.42_{\pm 0.29}$ | $87.81_{\pm 0.12}$ | $83.32_{\pm 0.51}$ | $82.41_{\pm 0.04}$ | | Summary: This work addresses learning from noisy annotations by using conditional distributionally robust optimization (CDRO).
To account for variability in estimating the true label posteriors, the authors propose an approach that minimizes the maximum expected risk with respect to a probability distribution within a distance-based ambiguity set centered around a reference distribution (the posterior distribution of the true labels).
Deriving the dual problem, the authors are able to provide upper bounds for risk. Moreover they derive generalization bounds for the upper bound of the risk.
Additionally, a closed-form expression for empirical robust risk and the optimal Lagrange multiplier is provided.
An analytical solution for the dual robust risk is found for loss functions of a particular form. Starting from this analytical solution, the authors introduce a robust pseudo-label collection algorithm.
Experiments are performed with synthetic noise on CIFAR-10 and CIFAR-100, and on the real-world datasets CIFAR-10N, CIFAR-100N, LabelMe, and Animal-10N.
Strengths: 1) The proposed approach is really interesting, novel and principled, with a sound theoretical foundation.
2) The paper is clearly written, making the concepts and methodology understandable.
3) The authors provide a generalization bound for the upper bound of the risk function they define.
Weaknesses: 1) Some results in the experiments are not fully convincing me, I would like to hear the authors feedback:
- Regarding Figure 1. How can the accuracy change so little with an increasing number of annotators? According to Figure 1 in [7], the noise rate of aggregated labels decreases significantly as the number of annotators increases, even for high initial noise rates. How is it possible that the accuracy in Figure 1 remains almost constant despite the increase in annotators?
- Why is the performance of the ResNet34 model on the clean CIFAR-100 dataset so low? The overall performances for CIFAR-100 and CIFAR-10N seem lower than those reported in ProMix [1] (Tables 1 and 2) or SOP [3] (Tables 1 and 3). Although the noise settings differ, the model trained on clean data should exhibit higher performance on CIFAR-100.
Similarly, the results for the Co-teaching method appear inconsistent with those in other studies.
2) I believe a better justification for the chosen baselines is needed. For example, since Co-teaching, which uses two networks and is designed for scenarios with a single label per sample, is included, why not include more SOTA methods such as ProMix [1], Divide-Mix [2], and SOP [3]?
I am also curious why these methods were not included. Additionally, for methods that aggregate labels, considering other approaches like IWMV [4], or those that train models on soft labels, such as IAA [5] and soft-labels average [6], would be beneficial.
While not asking the authors to include all these methods, I would appreciate a clearer justification for the choice of baselines.
3) In my opinion, the limitations are not fully discussed. Please see the Limitations section for further details.
[1] Wang, Haobo, et al. "Promix: Combating label noise via maximizing clean sample utility." arXiv preprint arXiv:2207.10276 (2022).
[2] Li, Junnan, Richard Socher, and Steven CH Hoi. "Dividemix: Learning with noisy labels as semi-supervised learning." arXiv preprint arXiv:2002.07394 (2020).
[3] Liu, Sheng, et al. "Robust training under label noise by over-parameterization." International Conference on Machine Learning. PMLR, 2022.
[4] Li, Hongwei, and Bin Yu. "Error rate bounds and iterative weighted majority voting for crowdsourcing." arXiv preprint arXiv:1411.4086 (2014).
[5] Bucarelli, M. S., Cassano, L., Siciliano, F., Mantrach, A., & Silvestri, F. (2023). Leveraging inter-rater agreement for classification in the presence of noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3439-3448).
[6] Collins, Katherine M., Umang Bhatt, and Adrian Weller. "Eliciting and learning with soft labels from every annotator." Proceedings of the AAAI conference on human computation and crowdsourcing. Vol. 10. 2022.
[7] Wei, Jiaheng, et al. "To aggregate or not? learning with separate noisy labels." Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What loss function is used in the experiments, namely what function $\mathcal{T}$ is used in the loss?
see also questions in Weaknesses.
Suggestions:
Writing the type of network used in the experiments in the main paper would help the reader (ResNet-18 architecture for CIFAR-10 and CIFAR-10N, and ResNet-34 architecture for CIFAR-100 and CIFAR-100N datasets).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Partially. some possible improvements are shortly discussed in the conclusions.
Some limitations:
- An overlooked limitation is the need for $P_j(x,\tilde{y})$, that is, the posterior $P(Y=j| \mathbf{X}= \mathbf{x}, \mathbf{\tilde{Y}} = \mathbf{\tilde{y}})$, to define the optimal action. This requires the posterior distribution, which can only be obtained after training the model for some epochs. The number of epochs necessary depends on the dataset's characteristics and how quickly the model overfits, but the sensitivity to this factor is not discussed. Also, the number of samples for the $\mathcal{D}^{*}_0$ dataset is not specified.
- Another limitation that is not mentioned is the fact that the method needs two classifiers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our manuscript. We appreciate your thoughtful comments and suggestions. We will carefully incorporate the necessary revisions to address your feedback in the new version of the manuscript. Below, we highlight our responses to each of your comments.
### 1. About the feedback on initial experiment results
* About accuracy with an increasing number of annotators.
* While we generate $R=5, 10, 30, 50, 100$ annotators, we **randomly select only one annotation per instance** for the training dataset to assess the algorithms in a partial labeling setting with sparse annotations. In contrast, Figure 1 in [7] shows the noise rates of the aggregated labels when **all the $R$ labels** are provided.
* To further evaluate model performance with varying numbers of annotations per instance, we use $R = 30$ annotators and randomly select $l = 1, 3, 5, 7, 9$ labels from these $R$ annotators for each instance. The noise rates of the majority vote labels are provided below. The test accuracies of the proposed method and other annotation aggregation methods are shown in **Table 1** of the attached PDF.
| | $l=1$ | $l=3$ | $l=5$ | $l=7$ | $l=9$ |
| :---- | :---- | :---- | :---- | :---- | :---- |
| CIFAR10: IDN-LOW | 0.20 | 0.09 | 0.04 | 0.02 | 0.01 |
| CIFAR10: IDN-MID | 0.38 | 0.26 | 0.16 | 0.10 | 0.07 |
| CIFAR10: IDN-HIGH | 0.51 | 0.42 | 0.31 | 0.25 | 0.20 |
* Performances of some baselines.
* We followed the experimental settings used in [A-B], which differ from those in ProMix or SOP. Specifically, all models in our paper are trained on CIFAR100 for 150 epochs with a batch size of 128. In contrast, according to their source code, ProMix is trained on CIFAR100 for 600 epochs with a batch size of 256, and SOP is trained for 300 epochs. Additionally, ProMix and SOP employ further data augmentations, such as AutoAugment [C] and mixup augmentation [D], which we did not use in our setting.
* The discrepancy in the results for the Co-teaching method compared to other studies is due to differences in the noisy label generation method used in our paper. Specifically, for each instance, we generate $R$ instance-dependent annotators and randomly select one noisy annotation from these $R$ annotations.
### 2. Justification for the chosen baselines
* Our method addresses learning from noisy annotations, especially with potential misspecifications in estimated true label posteriors. We select baselines that either directly use estimated transition matrices or true label posteriors (MBEM, CrowdLayer, TraceReg, Max-MIG, CoNAL). We also consider baselines that aggregate labels in various ways (CE (MV), CE (EM), DoctorNet, CCC). Since our theoretical framework applies to both single-annotator and multiple-annotator scenarios, we include baselines designed for single noisy labels (LogitClip), especially methods that use two networks (Co-teaching, Co-teaching+, CoDis), given that our method utilizes two networks to serve as priors for each other.
* The results for the proposed method and the baselines were obtained using simple data augmentations. As a result, we did not include baselines such as ProMix [1], Divide-Mix [2], and SOP [3], which use additional data augmentations. However, our method is compatible with these augmentations and can be adapted to incorporate them. We have now applied mixup augmentation to our method and some other baselines, with results shown in **Table 3** of the attached PDF. For a fair comparison, all results in Table 3 are based on training ResNet18 for 120 epochs, except for ProMix, which was trained for 300 epochs. This is fewer epochs than those used in the original papers for DivideMix and ProMix.
* Methods that aggregate labels.
* We have selected several baselines that utilize aggregated labels, including majority voting (“CE (MV)”), the EM algorithm (“CE (EM)”), MBEM, CoNAL, DoctorNet, and Max-MIG.
* The algorithms proposed in IWMV [4], IAA [5], and soft-labels average [6] require multiple annotations per instance to estimate labels or the agreement matrix, which makes them unsuitable for sparse labeling scenarios. Additionally, IAA [5] assumes a common transition matrix for all annotators, which differs from the noisy annotation generation process described in our paper.
* To compare with these label aggregation methods, we have now conducted additional experiments by randomly selecting $l=3,5,7,9$ annotations for each instance from the $R=30$ annotators. The results are displayed in **Table 1** of the attached PDF.
### 3. Question about the loss function
* We use cross-entropy loss for the loss function $\ell$, meaning that $\mathcal{T}$ is the log function, which is concave in its argument. To meet the required conditions on $\mathcal{T}(\psi(x))$, we clip the predicted probabilities $\psi(x)$ to the range $[0.01, 1 - 0.01]$ to ensure that $\mathcal{T}(\psi(x))$ remains bounded.
* Thank you for your suggestion. We will include details about the type of network used in the experiments in the main paper when we prepare the revision.
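As a purely illustrative sketch of the clipping described in our reply (the threshold 0.01 comes from the reply above; the function name and interface are ours, not from the paper), the bounded cross-entropy could look like:

```python
import numpy as np

def clipped_log_loss(probs, label, eps=0.01):
    # Clip the predicted probability into [eps, 1 - eps] so that
    # T(psi(x)) = log(psi(x)) stays bounded, as described in the reply.
    p = np.clip(probs[label], eps, 1.0 - eps)
    return -np.log(p)
```

For example, even when the model assigns probability 0 to the annotated class, the loss stays bounded at $-\log(0.01)\approx 4.6$ instead of diverging.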
### 4. About limitations
* Thank you for raising this point. We use 30 warmup epochs in our experiments, as adopted from existing works [E]. The learning dynamics of the algorithms are shown in **Figure 1** of the attached PDF. It can be observed that the model already overfits label noise with 30 epochs. To assess the sensitivity to warm-up epochs, we have now conducted experiments with varying warm-up epochs on our method and the baselines that also employ a warm-up stage, as shown in **Figure 2** of the attached PDF. The results indicate that performance can be further improved with an appropriate number of warm-up epochs.
* The number of samples in $\mathcal{D}^*_0$ is not predetermined. After the warm-up stage, we include an instance in $\mathcal{D}^*_0$ if its predicted probability exceeds 0.5.
* Thank you for pointing this out. We will address this limitation in our paper when preparing the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing all my concerns. Regarding my initial comment, I realize now what might have caused my confusion. What is the difference between using 50 annotators with identical noise transition matrices and providing one label per sample, versus the traditional approach where each sample has a single noisy label and noise is modeled by a transition matrix?
I appreciate the supplementary material provided in the PDF attached to your rebuttal, as it significantly enhances the paper's quality and clarity, particularly in the experimental section, which was previously a bit weak. I recommend including this material in the final submission in case the paper is accepted. The additional details, especially Table 1 with experiments involving multiple annotators and Table 4 with various methods for estimating the noise transition matrix, are particularly valuable (this is especially relevant as it could eliminate the need for the warm-up stage in your approach, right?)
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for taking the time to review the supplementary material. I'm glad to hear that it has helped clarify the paper, particularly in the experimental section.
1. Regarding the difference between using $R=50$ annotators and the traditional single noisy label scenario:
* If the $R=50$ annotators generate labels with **identical noise transition matrices**, and each annotator provides only one label per sample, the data will be **distributed the same way as in the traditional approach** where each sample has a single noisy label and noise is modeled by a transition matrix. In this context, the noisy data can be considered independent and identically distributed (iid) random variables. In particular, let $P^{(r)}(\tilde{y}|x,y):=T(\tilde{y}|x,y)$ for $r\in[R]$ represent the identical transition matrix, and let $\tilde{y}_i$ denote the one label for instance $x_i$ provided by annotator $r_i$. By the law of total probability, we have: $$P(x\_i,\tilde{y}\_i)=\sum\_{y\in[K]}P(x\_i,y)\sum\_{r\in[R]}P(r\_i=r)P^{(r)}(\tilde{y}\_i|x\_i,y)=\sum\_{y\in[K]}P(x\_i,y)T(\tilde{y}\_i|x\_i,y).$$ Thus, we can set $R=1$ in our method and **only estimate one transition matrix** to approximate the true label posterior, which is then used to construct the pseudo-empirical distribution in our algorithm.
* In our experiments, we study the more challenging setting, where we generate $R$ annotators with **different instance-dependent transition matrices** using Algorithm 2 from [a], as described in Appendix B.1 of our paper. Each annotator is referred to as an IDN-$\tau$ annotator if its mislabeling ratio is upper bounded by $\tau$. For example, for $R=50$, we generate the following groups of annotators:
* **IDN-LOW.** 18 IDN-10% annotators, 18 IDN-20% annotators, 14 IDN-30% annotators;
* **IDN-MID.** 18 IDN-30% annotators, 18 IDN-40% annotators, 14 IDN-50% annotators;
* **IDN-HIGH.** 18 IDN-50% annotators, 18 IDN-60% annotators, 14 IDN-70% annotators.
Then we randomly select one noisy label for each instance. In this case, we approximate the true label posterior by **estimating $R$ transition matrices**, which allows us to incorporate the labeling expertise of the annotators. As shown in Table 1 below, when annotators generate noisy annotations with different transition matrices, even if only one label is provided per instance, the results are better when estimating $R$ different transition matrices, compared to ignoring the annotator information and estimating only one transition matrix.
**Table 1:** Average accuracies of learning the CIFAR-10 dataset with $R=50$ annotators.
| | IDN-LOW | IDN-MID | IDN-HIGH |
| :---- | :---- | :---- | :---- |
| estimate $R=50$ transition matrices | $86.94_{\pm 0.33}$ | $85.17_{\pm 0.26}$ | $84.09_{\pm 0.49}$ |
| estimate ONE transition matrix (ignore the annotator information) | $86.58_{\pm 0.32}$ | $84.23_{\pm 0.31}$ | $82.33_{\pm 0.36}$ |
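The reduction in point 1 (a mixture of annotators with identical transition matrices collapses to a single transition matrix) can be checked numerically. This sketch uses randomly generated matrices and is purely illustrative, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
K, R = 3, 50
# One shared K x K transition matrix T(ỹ|x, y); each row is a
# distribution over noisy labels, so rows sum to 1.
T = rng.dirichlet(np.ones(K), size=K)
p_y = rng.dirichlet(np.ones(K))   # P(y | x) for a fixed instance x
p_r = rng.dirichlet(np.ones(R))   # probability that annotator r labels x
# Mixture over annotators with identical matrices ...
p_mix = sum(p_r[r] * (p_y @ T) for r in range(R))
# ... equals the single-matrix marginal P(ỹ | x), since sum_r p_r = 1.
assert np.allclose(p_mix, p_y @ T)
```

This mirrors the displayed identity: because every $P^{(r)}$ equals $T$, summing out the annotator index leaves the single-matrix expression.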
2. Regarding your observation about Table 4:
* Yes, you're right: the additional methods for estimating the noise transition matrix could indeed make the warm-up stage unnecessary. For instance, if we use the GeoCrowdNet method in **Table 4** of the attached PDF to estimate the transition matrices, we can initialize the transition matrices with the identity matrix and then simultaneously train the classifier and the transition matrices.
Thank you again for your valuable insights and recommendations.
**References**
[a] Xia, Xiaobo, et al. "Part-dependent label noise: Towards instance-dependent label noise." NeurIPS (2020). | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your thoughtful feedback and for the time you dedicated to evaluating our paper. We deeply appreciate your insights and constructive comments. We are pleased to hear that you recognized the strengths and contributions of our paper, which we would like to recap as follows.
* Our paper addresses a significant and practical topic and proposes a novel and principled CDRO framework to tackle the challenge of potential misspecifications in the estimated true label posterior (Reviewer 7dsE, 62sF).
* We provide rigorous theoretical analyses on the upper bound for the worst-case risk and the optimal action for constructing the pseudo-empirical distribution, which serve as valuable guidelines for designing our algorithm (Reviewers 7dsE, xLiU, 62sF).
* The proposed method demonstrates promising performance in our experiments. The results also validate the effectiveness of our approach and its practical utility (Reviewer xLiU).
* We appreciate the feedback on the clarity and ease of understanding of our paper (Reviewers 7dsE, xLiU). We will carefully prepare a revised manuscript to fully address reviewers' comments and suggestions, and to ensure our presentation remains accessible and clear.
We have thoroughly reviewed each of your queries, concerns, and remarks. In response, we have prepared a **one-page PDF** detailing additional experimental results, which are designed to address your points comprehensively. The references cited in our responses are listed at the end of the document. For your convenience, the following summary highlights the key updates:
* We appreciate your valuable insights into emphasizing the intuition and rationale behind the CDRO framework (Reviewer xLiU, 62sF). In response, we have added a more comprehensive explanation of the motivation for using a Wasserstein ball-based uncertainty set $\Gamma_\epsilon(P_{\mathrm{y}|\mathbf{x},\tilde{\mathbf{y}}})$ to address potential misspecifications.
* Your suggestions to justify the chosen baselines (Reviewer 7dsE), examine the impact of the number of warm-up epochs (Reviewers 7dsE, 62sF), and explore different estimation methods for the noise transition matrices (Reviewers xLiU, 62sF) are invaluable. When preparing a revision, we'll incorporate these recommendations to better highlight the strengths and robustness of our approach.
* We have extended our theoretical results to address the multi-class case, as questioned by Reviewer 62sF. The updated theoretical insights are detailed in the **Official Comment**. When preparing the revised manuscript, we will update the presentation to cover both the binary and multi-class cases comprehensively, and include the new proofs to support these results. This enhancement will ensure a thorough understanding of our approach across different classification scenarios.
* We appreciate your constructive feedback on improving the clarity and presentation of our paper (Reviewers 7dsE, xLiU). We have carefully considered your suggestions and will implement effective revisions in the updated manuscript to enhance its readability and effectiveness.
We believe that our responses have thoroughly addressed all the concerns raised. However, should you require any additional details, justifications, or further results, we are more than willing to provide them to ensure all aspects are comprehensively covered.
**References**
[A] Ibrahim, Shahana, Tri Nguyen, and Xiao Fu. "Deep Learning From Crowdsourced Labels: Coupled Cross-Entropy Minimization, Identifiability, and Regularization." ICLR (2023).
[B] Yang, Shuo, et al. "Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network." ICML (2022).
[C] Cubuk, Ekin D., et al. "Autoaugment: Learning augmentation strategies from data." CVPR (2019).
[D] Zhang, Hongyi, et al. "mixup: Beyond Empirical Risk Minimization." ICLR (2018).
[E] Zheng, Songzhu, et al. "Error-bounded correction of noisy labels." ICML (2020).
[F] Khetan, Ashish, Zachary C. Lipton, and Animashree Anandkumar. "Learning From Noisy Singly-labeled Data." ICLR (2018).
[G] Guo, Hui, Boyu Wang, and Grace Yi. "Label correction of crowdsourced noisy annotations with an instance-dependent noise transition model." NeurIPS (2023).
[H] Yi, Grace Y. "Statistical Analysis with Measurement Error or Misclassification." Springer (2016).
[I] Peskir, Goran. "Vitali convergence theorem for upper integrals." Proc. Funct. Anal. IV (1993).
Pdf: /pdf/fbc6e0202a45ec43aa816cc48bfb47f6df98aeca.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the cohesion and separability of average-link for hierarchical agglomerative clustering | Accept (poster) | Summary: The authors analyse the theoretical properties of the so-called average-link approach for clustering points in metric spaces. They formulate cohesion and separability criteria that capture the goodness of a clustering, essentially formalising the intuition that good clusters should be densely packed and well-separated. They prove previously unknown bounds on the cohesion and separability of clusters obtained through average-link.
Strengths: - The authors analyse the properties of average-link rigorously and provide mathematical proofs for their claims.
- The work establishes previously unknown bounds on the quality of clusterings obtained through average-link with respect to cohesion and separability.
Weaknesses: - The work is technically involved and therefore difficult to understand in detail for readers not trained in mathematics (such as myself). I would have welcomed a less technical summary of the paper's key points.
- The paper only mentions complete-linkage and single-linkage as alternative linkage methods for clustering. A brief discussion of other clustering methods and how they compare to average-link would have been useful. Specifically, the paper suggests using average-link rather than complete-linkage or single-linkage when cohesion and separability are relevant but does not relate average-link's performance to other methods with respect to cohesion and separability.
- The empirical results are somewhat intransparent since averaging has been done across datasets. It would be easier for the reader to examine the results if they were also presented on a per-dataset basis (for example in the appendix).
Technical Quality: 4
Clarity: 3
Questions for Authors: - The results on empirical datasets in tables 1, 3, and 4 appear to not fully agree with the theoretical results, specifically, complete-linkage and single-linkage out-perform average-link in some cases, suggesting that average-link should not always be the preferred choice. How can this be explained?
- The second-last sentence in the introduction mentions that Dasgupta's function does not reveal how good the clusters for a specific range of k are. How do the measures introduced in the present work address this shortcoming?
- The conclusions state that average-link is a better choice than complete-linkage and single-linkage when cohesion and separability are important. Are there cases when average-link should *not* be preferred?
- Dasgupta's cost function seems to accommodate the case where similarities are asymmetric whereas the current work assumes metric spaces. Could the analyses and bounds be extended to cases of asymmetric similarities between points?
Minor points:
- The heading "Smal" in the tables is missing an l.
- Are the labels for tables 3 and 4 in Appendix F placed before their captions in the LaTeX code? This would explain why they are referred to as "F and F" in line 655.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations are briefly discussed at the end of the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for revising the paper and for the positive evaluation!
**Issue 2** *The paper only mentions complete-linkage and single-linkage as alternative linkage methods for clustering. A brief discussion of other clustering methods and how they compare to average-link would have been useful. Specifically, the paper suggests using average-link rather than complete-linkage or single-linkage when cohesion and separability are relevant but does not relate average-link's performance to other methods with respect to cohesion and separability.*
Reply: It is not clear to us which other clustering methods we should compare with since there are many options. We compared with single-linkage and complete-linkage because they are also quite popular and closely related to average-link. Average-link has been used for a long time, and here we provide some theoretical foundation for its superior performance compared to other popular linkage methods.
**Issue 3** The empirical results are somewhat intransparent since averaging has been done across datasets. It would be easier for the reader to examine the results if they were also presented on a per-dataset basis (for example in the appendix)
Reply: We will add results per dataset to the appendix in our revised version. Moreover, we submitted a one-page PDF with results per-dataset in the rebuttal for all referees.
**Question 1** *The results on empirical datasets in tables 1, 3, and 4 appear to not fully agree with the theoretical results, specifically, complete-linkage and single-linkage out-perform average-link in some cases, suggesting that average-link should not always be the preferred choice. How can this be explained?*
Reply. In fact, for cs-ratio_dm, complete-link outperforms average-link for ranges medium and large, and for sep_av single-link outperforms average-link for the same ranges, by a small margin. However, this is not a contradiction since the theory only guarantees the behavior for worst-case instances, it does not state that for all instances average-link outperforms single-linkage and complete-linkage.
That said, the performance of average-link in our experiments is either the best or close to the best for almost all settings.
**Question 2** *The second-last sentence in the introduction mentions that Dasgupta's function does not reveal how good the clusters for a specific range of k are. How do the measures introduced in the present work address this shortcoming?*
Reply. Our measures are calculated for each k-clustering and not for the whole hierarchy, so they can depend on the number of clusters $k$. For instance, our bound (Theorem 5.3) indicates that average-link has a better approximation in terms of cohesion (max-diam) for small k than for large k.
**Question 3** *The conclusions state that average-link is a better choice than complete-linkage and single-linkage when cohesion and separability are important. Are there cases when average-link should not be preferred?*
Reply. Yes, as an example, if one is primarily concerned about avoiding clusters that contain distant points (minimizing max-diam) then Complete-Link seems to be a more interesting choice: the available bound for Complete-Link (Dasgupta and Laber 24) is better than that of Average-Link (Theorem 5.3) and our experiments also suggest that Complete-Link is better than Average-Link regarding this criterion.
As a second example, if one is interested in maximizing the minimum spacing between clusters (Kleinberg and Tardos, Greedy Chapter), then single-linkage gives the optimal solution, so it is the best choice.
**Question 4** *Dasgupta's cost function seems to accommodate the case where similarities are asymmetric whereas the current work assumes metric spaces. Could the analyses and bounds be extended to cases of asymmetric similarities between points?*
Reply. Right now we do not have an answer; this could be an interesting direction for future work. Thanks for the nice question!
---
Rebuttal Comment 1.1:
Comment: Thank you for your replies! The clarifications helped me understand the paper better; I am raising my scores regarding presentation and contribution, and maintain my overall rating. | Summary: This paper studies the performance of average-link clustering in metric spaces, focusing on criteria that offer better interpretability than Dasgupta's cost function for cohesion and separability. By investigating how well average-link balances the compactness of clusters (cohesion) with the distinctiveness between clusters (separability), the analysis sheds light on the ability to produce meaningful and well-separated clusters. The paper provides instances where single-linkage and complete-linkage clustering are exponentially worse than average-link clustering with respect to average separability. It presents lower bounds on the maximum diameter of clusters generated by average-link, providing insights into the method's clustering quality and performance compared to single linkage. Experiments conducted with real datasets confirm that the theoretical results align with practical observations, suggesting that average-link performs better than other methods when both cohesion and separability are considered.
Strengths: + The paper tackles an interesting topic
+ The Related Work section is extensive, and the proposed approaches are well-placed in the existing literature
+ The methodology is adequately sound and well-explained
+ The experimental setting is extensive, and results seem to demonstrate their effectiveness
Weaknesses: I don't have any specific questions for the authors, as I'm mostly satisfied with the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Just out of curiosity, I would only like to know if there are any plans for future directions of the work, as they were not indicated in the paper.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I don't see any additional limitations (apart from what is already mentioned), and I'm mostly satisfied with the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks again for reviewing our paper and for your positive evaluation!
**Question** *Just out of curiosity, I would only like to know if there are any plans for future directions of the work, as they were not indicated in the paper*
One potential direction for future work is addressing the case in which the input is given by similarities rather than distances.
---
Rebuttal Comment 1.1:
Comment: I thank the authors and keep my original rating. | Summary: This paper theoretically investigates the effectiveness of average linkage for hierarchical agglomerative clustering. The authors consider the setting where we are clustering in a metric space and consider well motivated definitions of separability and cohesion of clustering. The performance of average linkage and other methods wrt these settings is then considered. The authors conclude the work with an empirical analysis.
Strengths: The paper presents an interesting analysis of average linkage as a hierarchical clustering method.
The results help to explain the effectiveness of the method.
Strengths include:
* Thoughtful analysis of a popular and well studied method
* Understanding of cost functions and how average linkage optimizes them
* Empirical analysis following theoretical results
Weaknesses: I think the paper has several merits as listed above. Weaknesses include:
* I think the presentation of Table 6 could be clearer -- I think directly presenting the results per dataset per method would be clearer.
* Perhaps more text could be used to describe the novelty of proof techniques
* Given some of the motivations, I wonder if more of the analysis on random hierarchies should be included in the main paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you say more about the relationship between your work and [Großwendt et al., 2019]?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that you saw several merits in our submission!
**Issue 1**: *I think the presentation of Table 6 could be clearer -- I think directly presenting the results per dataset per method would be clearer*
Reply. I believe you mean Table 1. This table is the best solution we found for the following problem: how to give a reasonable overview of our empirical findings in a relatively small table? Given our focus on the theoretical results, we allocated only a small space in the main text for the experiments.
That said, we agree that it is good to have results per dataset per method. We will add them to the appendix in our revised version. Moreover, we submitted a one-page .pdf with these results in the rebuttal for all referees.
**Issue 2**: *Perhaps more text could be used to describe the novelty of proof techniques*
**Issue 3** *Given some of the motivations, I wonder if more of the analysis on random hierarchies should be included in the main paper.*
Reply for 2 and 3. The final version allows an extra page. If the paper is accepted, we will use this page to add more discussion about our proof techniques and we will also move to this page some of the analyses of random hierarchies.
**Question**: *Can you say more about the relationship between your work and [Großwendt et al., 2019]?*
Reply. [Großwendt et al., 2019] study Ward’s method, which is a linkage method that at each step greedily merges the clusters that yield the minimum increment in a k-means cost function. The paper shows a lower bound on the approximation of the clustering built by Ward. Regarding upper bounds, it shows that Ward has a constant approximation for instances on the real line and instances that admit well-separated clusters. No upper bound for the general case is presented.
As in our paper, the focus is on analyzing a popular linkage method. However, [Großwendt et al., 2019] address the k-means cost function with the Euclidean norm, while our paper addresses other criteria (max-diam, sep_min, cs-ratio, etc.) in general metric spaces. Moreover, we have a far more comprehensive set of results, and there is little to no overlap of techniques. | Summary: This paper studies the well-known average linkage algorithm. The paper notes that average linkage has better approximation guarantees with respect to (variants of) Dasgupta's cost compared to complete and single linkage. However, in certain other settings such as metric graphs the approximation factor of average linkage does not outperform random HC trees. This paper therefore deviates from analysing Dasgupta's cost, and analyses average linkage with respect to separability and cohesion criteria. There are also some experimental results.
Strengths: 1.) There are quite a few solid results regarding the approximation guarantee with respect to several cohesion and separability criteria (cs-ratio, OPT_sep, and several others), and shows that it outperforms single and complete linkage. Furthermore, several tight instances are provided as well. A lot of these results are good contributions to the HC literature, and provide a more clear theoretical picture of why average linkage performs so well.
2.) The experimental results also show fairly robustly that the theoretical guarantees can be seen in practice as well.
3.) The paper is well-written and enjoyable to read.
Weaknesses: There are only (very) minor weaknesses:
1.) No new algorithmic contribution, e.g., some improvement to average linkage that could improve some of these bounds.
2.) Most results only hold for points in metric space.
3.) Font size changes from page 3 onwards (line 138)
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.) Are there any downstream settings/tasks where approximation guarantees with regards to max-diam, cs-ratio etc. are used? A stronger motivation for why these objectives should be/are studied would be nice to include in the paper.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad you enjoyed reading our paper and found our results to be solid.
**Issue** *Font size changes from page 3 onwards (line 138)*
Thanks for pointing it out, we will fix it.
**Question** *Are there any downstream settings/tasks where approximation guarantees with regards to max-diam, cs-ratio etc. are used? A stronger motivation for why these objectives should be/are studied would be nice to include in the paper.*
Reply: The main goal of our paper is to provide a comprehensive theoretical study of average-link, a popular method for Hierarchical Clustering, for which good results are often reported in the literature. Our metrics were chosen because they are natural, allow easy interpretation, and capture cohesion, separability, or both. For instance, we believe they are far easier to explain to students or practitioners than Dasgupta's cost function.
We believe that it is reasonable to design algorithms to optimize our criteria, but we do not identify anything special about our criteria to justify a greater focus on them rather than on other available criteria that capture cohesion and separability.
About downstream applications, in facility location problems, the k-center (radius) criterion is used to ensure that every client is served by a "close" facility. In metric spaces, the optimal maximum diameter (max-diam) and the optimal k-center differ by a factor of at most 2. Thus, by optimizing the max-diam we are also optimizing the k-center (ignoring the factor of 2).
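For completeness, the factor-of-2 claim follows from the triangle inequality (assuming, as is usual in k-center, that the center $c$ of a cluster $C$ is one of its points):

```latex
% r(C) = \max_{x \in C} d(x,c) is the k-center radius of cluster C.
d(x,y) \,\le\, d(x,c) + d(c,y) \,\le\, 2\,r(C) \quad \forall\, x,y \in C,
\qquad \text{hence} \qquad
r(C) \,\le\, \operatorname{diam}(C) \,\le\, 2\,r(C).
```

Taking the maximum over clusters, the optimal max-diam and the optimal k-center value differ by a factor of at most 2.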
We will add this discussion to our revised version.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal, and I maintain my positive evaluation of the paper. I also agree that the metrics are more natural and easier to interpret than Dasgupta's cost function. | Rebuttal 1:
Rebuttal: We thank all the referees for their time and valuable feedback! We are happy that all reviewers are positive about our submission and that it was recognized that our results provide a more clear theoretical picture of why the well-known average linkage method performs so well.
We are attaching a one-page PDF with results per dataset (suggested by reviewers k54Y and QwrC).
In the PDF we have six graphs, one for each of our six criteria.
For a given criterion $C$, dataset $D$, and method $M$, the bar height summarizes, across the different values of $k$, the results achieved by method $M$ on dataset $D$ regarding criterion $C$. More precisely, the bar height is the average of $m_k$ over every $k$ considered in our experiments, where $m_k$ is the ratio of the value of criterion $C$ achieved by method $M$ on dataset $D$ to the best value of $C$ among single-link, average-link, and complete-link on dataset $D$.
In the two graphs at the top (criteria that should be maximized), higher values are better. For the other graphs (criteria that should be minimized), lower values are better. One can see that average-link is either the best or close to the best for most settings.
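To make the aggregation concrete, here is a small Python sketch of how such a bar height could be computed from per-$k$ results (our own illustration with hypothetical names, not the authors' code):

```python
def bar_heights(results, minimize=True):
    """results: {method: {k: criterion value}} for one criterion C on one
    dataset D. Returns, per method, the average over k of the ratio m_k
    between the method's value and the best value among all methods at k."""
    ks = sorted(next(iter(results.values())))
    pick = min if minimize else max
    best = {k: pick(r[k] for r in results.values()) for k in ks}
    return {m: sum(r[k] / best[k] for k in ks) / len(ks)
            for m, r in results.items()}
```

For minimized criteria the ratios are at least 1 (lower bars are better); for maximized criteria they are at most 1 (higher bars are better), matching the reading of the two top graphs.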
Pdf: /pdf/eab52fb029a92d803e8b6e60ca95df5b736d9c34.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Distributional Preference Alignment of LLMs via Optimal Transport | Accept (poster) | Summary: The paper proposes a new technique for preference alignment, named Alignment via Optimal Transport (AOT). The proposed technique supports both paired and unpaired alignment settings. The paper introduces a new viewpoint for preference alignment based on stochastic dominance, i.e., making the reward distribution of the positive samples stochastically dominant in the first order over the distribution of negative samples. From this perspective, the paper explains DPO as a special case, i.e., a pointwise preference approach with relaxation through the logistic loss. In addition, the paper shows that using convex relaxation is equivalent to minimizing a one-dimensional optimal transport problem. To enhance the differentiability of the objective through the sorting operator (for approximating the continuous optimal transport), the paper uses the Sinkhorn-Knopp algorithm instead of conventional sorting. On the experimental side, AOT leads to state-of-the-art models in the 7B family of models when evaluated with Open LLM Benchmarks and AlpacaEval (using Llama3-70B-Instruct instead of GPT4).
Strengths: * The paper is well-written and easy to follow.
* The paper proposes an original preference alignment approach based on stochastic dominance and optimal transport. The connection to optimal transport is interesting and novel. In addition, the paper also provides theoretical results on the sample complexity of the empirical estimation of the objective.
* The proposed approach can work for both unpaired and paired alignment settings.
* Experiments are extensive on various datasets i.e., UltraFeedback for paired setting, PKU BeaverTails, and HelpSteer for unpaired setting. The paper also provides a free LLM-judge for local Alpaca evaluations.
* AOT achieves SOTA on the AlpacaEval benchmark and competitive results on other metrics compared to DPO, KTO, and IPO.
Weaknesses: The evaluation is conducted using Llama3-70B-Instruct instead of GPT4, which is not the standard. However, as described in the paper, using Llama3-70B-Instruct leads to approximately the same results.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. It seems that hard sorting is also comparable to soft sorting. Using normal sorting could help to improve the computational speed compared to soft sorting. Should the paper recommend hard sorting as the default variant?
2. How many iterations are used for the Sinkhorn algorithm? What is the choice of the entropic regularization hyperparameter? Do these hyperparameters significantly affect the results?
3. Why does the unpaired setting lead to better results than the paired setting? Do the authors have any explanation?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review of the paper and for their insightful questions that we address below:
___
**The evaluation is conducted using Llama3-70B-Instruct instead of GPT4 which is not the standard. However, as described in the paper, the usage of Llama3-70B-Instruct leads to approximately the same**
Thanks for this comment, we have also reported GPT4 eval on Merlinite, and for the rest we used Llama 70B to lower evaluation costs as mentioned by the reviewer.
___
**It seems that hard sorting is also comparable to soft sorting. Using normal sorting could help to improve the computational speed compared to soft sorting. Should the paper recommend hard sorting as the default variant?**
We recommend hard sorting as a default since soft sorting did not lead to substantial improvements. And as noted by the reviewer, hard sorting is also faster.
___
**How many interactions are used for the Sinkhorn algorithm? What is the choice of the entropic regularization hyperparameter? Do these hyperparameters significantly affect the results?**
The entropic regularization has to be small (~0.1) to ensure accurate sorting; otherwise the sorting will be wrong. Note that the soft [sorting paper](https://arxiv.org/pdf/2002.08871) uses PAV (pool adjacent violators) to compute the soft sorting, which, unlike Sinkhorn, does not need a fixed number of iterations: it reduces finding the soft sorting to an isotonic regression that can be computed in closed form with an $O(n \log n)$ algorithm.
___
**Why does the unpaired setting lead to better results than the paired setting? Do the authors have any explanation?**
Please note that AOT unpaired outperforms AOT paired on AlpacaEval because it is more robust to noise: if some paired positive and negative answers are noisy, AOT paired will fit that noise, while AOT unpaired, since it does not use this pairing, will be more robust. Hence in Figure 2 we see that AOT unpaired outperforms the paired one on AlpacaEval.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your response,
I will keep my score since I believe this paper proposes an interesting framework for LLM alignment i.e., stochastic dominance and optimal transport.
Best, | Summary: The motivation of this paper is that current alignment approaches ensure reward dominance at the sample-level but not on the distributional level. With this in mind, the authors set their goal to design an alignment approach which satisfies First Order Stochastic Dominance (FSD) in rewards for positive examples compared to negative examples. With this goal in mind, they make the following contributions:
1. They write this as a constraint satisfaction problem, which can be relaxed into an unconstrained optimization problem using different surrogates of the 0/1 loss.
2. Using Santambrogio (2015)'s result from optimal transport, they develop a computationally efficient algorithm called AOT.
3. They provide an upper bound for the violation in stochastic dominance of this algorithm using a Rademacher and symmetrization argument.
4. They provide experiments that show that AOT is competitive with existing approaches like DPO, KTO.
Strengths: **Originality**
1. The connection to optimal transport is quite interesting.
**Quality**
2. The empirical results are competitive with other alignment approaches.
3. The theoretical results are sound
**Clarity**
4. The paper is well-written with background appropriately introduced and clear explanations.
Weaknesses: 1. There could be some more motivation for why the FSD condition is practically desirable over the conditional (on x) dominance condition of DPO from Equation (2). Is this more than a theoretical nicety?
2. There is not a clear improvement in the empirical results over prior approaches, and each approach seems to enjoy success on certain benchmarks -- with the differences in performance quite small in most cases.
This invites the question of in what settings AOT would perform better than other approaches in the literature.
3. The theoretical results do not shed much light on when AOT is likely to perform well. Uniform Convergence results are well-known in recent years to not explain generalization behavior of modern ML models. So, while nice, the insight from Theorem 2 is not super obvious to me.
Minor:
1. Please call the Santambrogio (2015) result something other than a theorem since it is not a contribution of the present work.
Technical Quality: 3
Clarity: 3
Questions for Authors: Since KTO also works with unpaired preference data, a more thorough comparison with KTO is warranted:
1. Is there a unique solution that satisfies the FSD condition? If there are multiple solutions, how does AOT break ties? How does KTO break ties?
2. Is the reason that the behavior is different from KTO that they have different inductive biases and hence converge to different policies? Or the variances of the estimators are different, even though they converge to the same?
3. Is there a way to compare Theorem 2 with the statistical properties of KTO/DPO/RLHF? Or any reason to believe that AOT has a statistical advantage?
Despite the weaknesses, I think the connection to optimal transport is interesting and the experiments are sound and comprehensive. I would be happy to adjust my score accordingly if the authors alleviate my concerns through discussion.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments:
___
**Weaknesses**
___
**1) There could be some more motivation for why the FSD condition is practically desirable over the conditional (on x) dominance condition of DPO from Equation (2). Is this more than a theoretical nicety?**
Thanks for bringing this up we will make the following discussion more salient:
* It is more expensive to collect paired preference datasets, which have for each prompt a preferred and a rejected answer. There are plenty of unpaired datasets, in the sense that for each prompt we have either a preferred or a rejected answer. AOT and KTO allow the use of these datasets in alignment.
* AOT unpaired, by not relying on pairs of rejected/preferred answers but rather on their distributions, is more robust to noise in the pairing. While DPO would suffer from noise in the pairing due to its pointwise comparison, AOT unpaired would not.
___
**2) There is not a clear improvement in the empirical results over prior approaches, and each approach seems to enjoy success on certain benchmarks -- with the differences between performance quite small in most cases. This invites the question of in what settings ATO would perform better than other approaches in the literature.**
Please note that the alignment score we use is AlpacaEval (instruction following); all other metrics from the Open LLM Leaderboard (ARC, MMLU, Winogrande, GSM8K) test whether, despite the alignment, the model retains or improves on the capabilities it acquired during pretraining. To benchmark alignment methods, AlpacaEval has been well adopted by the community, and the Open LLM Leaderboard metrics assess whether the alignment deteriorates other capabilities of the LLM. AOT, while having the highest score on AlpacaEval, is competitive on all other metrics.
___
**3) The theoretical results do not shed much light on when ATO is likely to perform well. Uniform Convergence results are well-known in recent years to not explain generalization behavior of modern ML models. So, while nice, the insight from Theorem 2 is not super obvious to me.**
Please note that Theorem 2 shows that although AOT can be cast as a min/max game between the LLM and the OT potential, the one-dimensional nature of the OT problem does not curse the statistical properties of the overall problem. The generalization bound we provide enjoys the parametric rate $1/\sqrt{n}$. Note that in Figure 3, we see that the generalization error (on AlpacaEval) improves as the sample size grows, which echoes the findings of Theorem 2. The main insight of Theorem 2 is that while AOT has an inner optimal transport problem to solve in order to update the LLM parameters, this does not degrade the statistical rate, since the OT problem is one dimensional.
___
**Please call the Santambrogio (2015) result something other than a theorem since it is not a contribution of the present work.**
Thank you for the remark, we will take this into consideration in the final version of the paper. We will add the following sentence before this theorem: The following theorem is a restatement of Theorem 2.9 in Santambrogio (2015).
___
**Questions**
___
**1) Since KTO also works with unpaired preference data, a more thorough comparison with KTO is warranted:
Is there a unique solution that satisfies the FSD condition? If there are multiple solutions, how does AOT break ties? How does KTO break ties?**
One can study the uniqueness of the solution in simplified setups where the log-likelihood is a linear model: since the problem becomes convex in the weights, the AOT problem properly regularized with an $L_2$ regularizer over the weights will have a unique solution. Nevertheless, this is not the case for general nonlinear models such as transformers or LoRA adapters used when finetuning the LLM; then multiple solutions may exist. Studying the connectivity of the minima with an alignment loss, as is done in deep learning with classification objectives, is an interesting avenue for future research that is beyond the scope of the paper. We believe that if multiple minima exist, they will achieve the same soft violation of FSD (that is, the AOT loss), and those solutions will be equivalent and most likely connected.
___
**2) Is the reason that the behavior is different from KTO that they have different inductive biases and hence converge to different policies? Or the variances of the estimators are different, even though they converge to the same?**
We believe they have different cost functions and hence they will converge to different policies. While AOT compares quantiles of rewards between positive and negative rewards, KTO compares positive rewards to the average negative reward, and negative rewards to the average positive rewards. KTO will lead to a policy that favors sentences whose reward is above the average of negative rewards in the training set, and AOT will lead to a policy that distributionally prefers positive over negative responses.
___
**3) Is there a way to compare Theorem 2 with the statistical properties of KTO/DPO/RLHF? Or any reason to believe that ATO has a statistical advantage?**
Statistically speaking, KTO, DPO, and AOT have similar statistical rates. AOT does not suffer statistically from having to solve an OT problem, since this is a one-dimensional problem; hence it still has a good statistical rate while being a min/max problem between the LLM and the potential of the OT problem (the sorting). For RLHF the statistical properties are more intricate, since there is the reward learning and the sampling from the policy during training. The following paper gives a statistical analysis of a [variant of RLHF](https://arxiv.org/pdf/2405.21046). Different from our analysis, this RLHF variant also has regret bounds that scale like $1/\sqrt{T}$.
---
Rebuttal Comment 1.1:
Title: Rebuttal response
Comment: I thank the authors for clarifying my questions, and accordingly increase my score to 6. | Summary: This works proposed using Optimal transport in 1D to derive a better alignment guided by the preference data. The main idea is to work with the log likelihood ratio of the marginal distributions of the preference data. The alignment is made through Optimal Transport loss in 1D based on the concept of stochastic dominance between quantiles of two distributions.
Strengths: The framework of the paper is very easy to follow.
The proposed problem and solution's direction are interesting.
The settings and theoretical results are adequate to support the proposed method.
Weaknesses: My main concern with this paper is the empirical results in the experiment section. Table 1 shows that AOT paired/unpaired do not outperform other methods in at least 4 out of 7 cases (ARC, MMLU, Winogrande, GSM8K). When compared against each other, there is no clear winner between AOT paired and AOT unpaired. Meanwhile, I believe that with AOT paired, where we have more information, the task must be easier; please correct me if I am wrong. I have similar concerns for their performances in Figure 2.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Figure 3, it appears that small $\beta$ produces better results for all methods except IPO. Have the authors tried to test with smaller values of $\beta$, i.e. $\beta = 0.005$? Is there any explanation for this trend in that figure?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: It is fine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging feedback.
___
**My main concern with this paper is the empirical results in the experiment section. Table 1 shows that AOT paired/unpaired do not outperform other methods in at least 4 out of 7 cases (ARC, MMLU, Winogrande, GSM8K). When compared against each other, there is no clear winner between AOT paired and AOT unpaired. Meanwhile, I believe that with AOT paired, where we have more information, the task must be easier; please correct me if I am wrong. I have similar concerns for their performances in Figure 2.**
Please note that the alignment score we use is AlpacaEval (instruction following); all other metrics from the Open LLM Leaderboard (ARC, MMLU, Winogrande, GSM8K) test whether, despite the alignment, the model retains or improves on the capabilities it acquired during pretraining. To benchmark alignment methods, AlpacaEval has been well adopted by the community, and the Open LLM Leaderboard metrics assess whether the alignment deteriorates other capabilities of the LLM. AOT, while having the highest score on AlpacaEval, is competitive on all other metrics.
Please note that AOT unpaired outperforms AOT paired on AlpacaEval because it is more robust to noise: if some paired positive and negative answers are noisy, AOT paired will fit that noise, while AOT unpaired, since it does not use this pairing, will be more robust. While with AOT paired the task may seem easier, it is less robust. Hence in Figure 2 we see that AOT unpaired outperforms the paired one on AlpacaEval.
___
**In Figure 3, it appears that small $\beta$ produces better results for all methods except IPO. Have the authors tried to test with smaller values of $\beta$, i.e. $\beta=0.005$?**
Please note this is an artifact of the formulation of IPO and its implementation in [Hugging Face](https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L1145C1-L1146C1): while for all other methods $\beta$ plays the role of a margin, $1/\beta$ plays that role in IPO. This explains the trend in Figure 3: IPO peaks at high values of $\beta$ while the other methods peak at lower values. Note that this same trend has also been observed in the [Hugging Face blog](https://huggingface.co/blog/pref-tuning) comparing these alignment techniques.
---
Rebuttal Comment 1.1:
Title: Reply to the rebuttal
Comment: I would like to thank the authors for their answers. I would like to keep my score unchanged based on the fact that the empirical results on other metrics are only competitive, although the theory appears to be sufficient. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Not Just Object, But State: Compositional Incremental Learning without Forgetting | Accept (poster) | Summary: This paper presents a novel setting of incremental learning, named Compositional Incremental Learning (composition-IL). This setting differs from existing ones as it involves recognizing not only new objects (e.g., a shirt), but also their states (e.g., red) and the resulting compositions (e.g., a red shirt). To address this novel setting, the authors present a prompting-based approach that leverages a pre-trained transformer network to learn the tasks incrementally, thus sharing some similarities with existing methods such as L2P, Dual-Prompt, and CODA-Prompt. Additionally, the authors devise a three-way retrieval mechanism, devoting tailored prompt pools to state, object, and composition-level representation learning. In the experimental section, the authors assess several aspects: the comparison with the state of the art, the benefits of the three-way prompting strategy, the impact of injecting object-level information into the state-level prompt, and the contributions of each involved loss function.
Strengths: 1) The paper is clear and fluent. Both the dataset and the model are clearly described.
2) Novelty. To the best of my knowledge, the experimental setting is novel and deserving.
3) While existing prompting strategies often organize the prompt pool in a flat structure, this paper advances the field by leveraging on two concepts: multiple prompts pool related to different concepts/tasks, and possible dependencies between prompts of different concepts.
Weaknesses: **Many hyperparameters/tuning complexity (major)**. From a methodological perspective, the model incorporates several loss terms, each requiring a tailored balancing hyperparameter. Additionally, other factors such as the length of each prompt and the size of the prompt pool must be considered. In truth, the authors report the chosen configuration in their experiments in the second paragraph of Sec. 5.2. As can be observed, there are many hyperparameters, which could complicate the model’s application in real-life scenarios. In this respect, the authors should provide a simplified (yet still reliable) version of their approach, pruning some loss terms and configurations that have a negligible impact on results. Indeed, by examining the results of the ablation studies, it appears that there is room for simplifying the training stage.
**Statistical significance (major)**. Concerning Tables 3 and 4, the results achieved by the final configuration of the Compiler are sometimes very close to the simpler, ablated versions. For example, consider Table 4(a) where 'no injection' is close to 'O->S', and Table 3 where 'C+S' is close to 'C+S+O'. In these cases, it is difficult to conclude whether the benefits are due to effective technical advances or should be ascribed to statistical noise. In this respect, I strongly encourage the authors to perform multiple runs for each experiment and average the final results. This approach would render the comparisons more significant.
**More rehearsal baseline (major)**. I would suggest including in the experimental section the results for DER++ [b] and ER-ACE [c], two straightforward baselines based on rehearsal. As the main contribution of this paper concerns the introduction of a new experimental setting, I believe it is important to provide an extensive evaluation of the existing literature. Moreover, the authors should compare their approach with SLCA [d], a recently introduced baseline for the continual fine-tuning of pre-trained models.
**Relation with existing works (major)**. There is a recent research line on multi-label incremental learning; see [a], one of the most cited papers in the field. In this respect, how does the proposed setting deviate from the multi-label scenario? To justify the introduction of a novel setting, the authors should discuss potential overlaps with existing benchmarks more thoroughly. From a more methodological perspective, instead, I would encourage the authors to provide more references while introducing the learning objectives used in their approach. For example, CODA-Prompt devises orthogonality constraints between prompts, sharing strong similarities with the objectives outlined by Eq. 2 and 3. Moreover, aligning the query and the selected prompts is common in prompting-based approaches (see L2P).
**Lacking ablation studies (minor)**. In Tab. 3, it would be important to see the results of single-pool prompt learning, especially for ‘S’ and ‘O’ (‘C’ has instead been already provided). Indeed, I would guess that a simple strategy based only on object-level prompts would be enough to reach satisfactory performance.
**Clarification requested (minor)**. In Sec. 4.2, the authors discuss an attention-based mechanism to inject object-level cues into the state-level prompt. In this respect, they use three learnable projections, indicated by W_Q, W_K, and W_V. However, as these transformations are continuously updated while tasks progress, did the authors consider that these projections could suffer from catastrophic forgetting? It seems that there are no explicit countermeasures against forgetting regarding these variables, which could hinder the efficacy of the retrieval stage.
**Formatting issue (minor)**. Table 4 appears before Table 3.
[a] Kim, C. D., Jeong, J., & Kim, G. (2020). Imbalanced continual learning with partitioning reservoir sampling. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII 16 (pp. 411-428). Springer International Publishing.
[b] Buzzega, P., Boschini, M., Porrello, A., Abati, D., & Calderara, S. (2020). Dark experience for general continual learning: a strong, simple baseline. Advances in neural information processing systems, 33, 15920-15930.
[c] Caccia, L., Aljundi, R., Asadi, N., Tuytelaars, T., Pineau, J., & Belilovsky, E. (2021). New insights on reducing abrupt representation change in online continual learning. arXiv preprint arXiv:2104.05025.
[d] Zhang, G., Wang, L., Kang, G., Chen, L., & Wei, Y. (2023). Slca: Slow learner with classifier alignment for continual learning on a pre-trained model. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 19148-19158).
Technical Quality: 3
Clarity: 3
Questions for Authors: I have no questions.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are briefly discussed in the supplementary materials.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Many hyperparameters/tuning complexity
A1: Thank you for your constructive suggestions. We consider assigning weights to different loss terms to be a common practice, but we highly agree with your suggestion to provide a simplified version of the model. As you suggested, the 'C+S' and 'C+O' configurations in Tab 3 can be viewed as simplified versions of CompILer, denoted CompILer-S and CompILer-O for clarity. The full results are shown in Tab 6 of the manuscript appendix. Note that all results are computed using the composition pool for fairness. These methods significantly lag behind the full CompILer on Split-UT-Zappos because, when state features have higher semantics or are visually less apparent, the impact of ambiguous composition boundaries becomes more pronounced; primitive scores are particularly crucial in such cases. We will continue to explore more simplified models to reduce the number of hyperparameters and present the results in the next version.
Q2: Statistical significance
A2: We conducted the multiple runs with different random seeds as shown in PDF Tab 1. CompILer achieves SOTA results across different random seeds, indicating that the performance improvement is not due to statistical noise. We will extend this setting to all methods and experiments in the next version.
Additionally, we want to elaborate on why 'C+S' is close to 'C+S+O'. The multi-prompt pool is intended as a decoupling mechanism. State disentanglement helps obtain clean state features while implicitly decoupling the object and aiding compositional learning. Specifically, the tokens used for composition classification interact with the state prompt during the MHA operation, allowing the composition branch to understand what clean state information looks like, which aids the model's learning of state in that branch. Meanwhile, the model realizes that information irrelevant to the state prompt is object information. Thus, the classifier becomes better at distinguishing which information is useful for state classification and which for object classification, achieving a 1+1>2 effect, and using two prompt pools already yields impressive experimental results. However, this does not imply that C+S is optimal: only by explicitly incorporating two prompt pools for state and object can they complement each other to achieve the best results.
Moreover, the reason 'no injection' is close to 'O->S' is due to the limitations in prompt selection. Prompt selection follows the key-value mechanism, but one drawback of this paradigm is that it cannot be optimized end-to-end. The key and query are used to select a prompt index from the pool, relying on a second, localized optimization to learn the keys because the model gradient cannot backpropagate through the key/query index selection. Unfortunately, object-injected state prompting also suffers from this separate optimization issue, as it is constrained by the same drawback. This is a common problem in prompt-based continual learning. We plan to explore alternative methods in future work.
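To make this drawback concrete, here is a minimal, illustrative sketch of the L2P-style key-value selection referenced above; the shapes, the cosine-similarity scoring, and the separate matching loss for the keys are assumptions in the spirit of L2P, not our exact implementation:

```python
import numpy as np

def select_prompts(query, keys, prompts, top_k=2):
    """L2P-style key-value prompt selection (illustrative sketch).

    query:   (dim,)  image feature from the frozen backbone
    keys:    (pool_size, dim)  learnable keys, one per prompt
    prompts: (pool_size, length, dim)  the prompt pool
    """
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    sim = k @ q                         # cosine similarities, (pool_size,)
    idx = np.argsort(-sim)[:top_k]      # hard top-k: NOT differentiable
    # Because gradients cannot flow through the index selection above,
    # the keys need a separate, localized matching loss:
    matching_loss = -float(sim[idx].sum())
    return prompts[idx], matching_loss
```

The hard `argsort` is exactly the step that blocks end-to-end optimization: the model gradient reaches the selected prompts, while the keys are only trained through the auxiliary matching loss.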
Q3: More rehearsal baseline
A3: We have included these baselines in Tab 4 of the PDF, where we demonstrate their performance alongside CompILer. For each method, we allocated a memory buffer size of 800, meaning that each class has 10 samples stored on average. As the results show, CompILer achieves the best performance without any rehearsal.
Q4: Relation with existing works
A4: We recognize that there are significant differences between multi-label IL and composition-IL. In a multi-label scenario, ground truth labels correspond to multiple independent entities within an image, where these labels are independent of each other and lack semantic interactions or fine-grained information. In contrast, the labels in composition-IL consist of a single object combined with its descriptive state. Therefore, composition-IL labels describe a single entity in a more fine-grained fashion rather than multiple entities. To better illustrate the difference, we present a figure in Fig 2 of the PDF. Additionally, we will include references to CODA-Prompt and L2P in the methods section of the next version.
Q5: Lacking ablation studies
A5: We have taken your suggestion, and the results are shown in Tab 5 of the PDF. The first and second rows report the results for state and object classification. An interesting finding is that the accuracy of single primitive classification does not even reach that of using a single pool for composition classification. This is because the experimental setup transitions from the traditional Class Incremental Scenario to a Stochastic Incremental Blurry Task Boundary Scenario (Si-blurry) [1]. Si-blurry faces continuous change of classes between batches leads to intra- and inter-task forgettings, making it difficult for the model to retain previously learned knowledge.
Q6: Clarification requested
A6: The object-injected state prompting module primarily focuses on selecting state prompts to enhance network plasticity and achieve Complementary Learning Systems (CLS). CLS posits that humans continually learn through the synergy of two learning systems: the hippocampus, which specializes in learning pattern-separated representations from specific experiences, and the neocortex, which focuses on acquiring more general and transferable representations from sequences of past experiences. The continuously updated weights Q, K, and V act as a global knowledge transmitter, propagating knowledge from old tasks to new ones. Additionally, we would like to emphasize that CompILer addresses catastrophic forgetting by freezing the pre-trained ViT, which aligns with the findings in [2].
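In more practical terms, the learnable projections W_Q, W_K, and W_V implement a cross-attention step in which object-level cues re-weight the state prompts. The sketch below is illustrative only: the shapes, the random placeholder weights, and the choice of the object prompt as query are our own assumptions, not the exact module in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def object_injected_attention(object_prompt, state_prompts, dim, seed=0):
    """Hypothetical sketch of attention-based object-to-state injection.

    object_prompt: (1, dim); state_prompts: (n, dim).
    W_Q, W_K, W_V stand in for the learnable projections; here they are
    random placeholders used for illustration only.
    """
    rng = np.random.default_rng(seed)
    W_Q, W_K, W_V = (rng.standard_normal((dim, dim)) / np.sqrt(dim)
                     for _ in range(3))
    Q = object_prompt @ W_Q                 # (1, dim)
    K = state_prompts @ W_K                 # (n, dim)
    V = state_prompts @ W_V                 # (n, dim)
    attn = softmax(Q @ K.T / np.sqrt(dim))  # (1, n), rows sum to 1
    return attn @ V, attn                   # object-conditioned state cue
```

Since W_Q, W_K, and W_V are updated across tasks with no explicit protection, they are indeed the parameters the reviewer flags; in our design the frozen ViT backbone, not these small projections, carries the anti-forgetting burden.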
Q7: Formatting issue
A7: Thanks a lot. We will revise their orders.
[1] Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning
[2] Consistent Prompting for Rehearsal-Free Continual Learning
---
Rebuttal Comment 1.1:
Comment: I sincerely thank the authors for the efforts they spent during the rebuttal period. I still have several considerations and questions for them.
**Hyperparameters and complexity**. While I acknowledge that tuning several loss coefficients is a standard practice in deep learning, my concerns still persist. Therefore, I urge the authors to reconsider their approach in the final version and, if feasible, reduce the number of terms involved during optimization. Based on the experimental results, there appears to be some flexibility to achieve this reduction. It's important to keep in mind that future users will need to apply this method to new benchmarks, and the abundance of hyperparameters could make this process quite challenging.
**Forgetting**. In my original review, I expressed some concerns regarding the learnable projections W_Q, W_K, and W_V, which are likely to suffer from catastrophic forgetting, as no explicit countermeasures have been taken to protect these parameters. The authors' response, which relies on biological arguments, is somewhat difficult for me to follow. Could the authors provide a more practical discussion on this point? I believe addressing this issue is important.
---
Rebuttal 2:
Comment: We appreciate Reviewer EVGX's willingness to engage in further discussion. The following are our point-by-point answers:
**Q1: About hyperparameters and complexity.**
**A1:** We appreciate your suggestion very much and have acted on it by developing a simplified model that reduces the number of hyperparameters. Specifically, we focus on the 'C+S+O' configuration, namely Sim-CompILer, which eliminates the directional decoupled loss, the RCE loss, and Object-injected State Prompting, accordingly removing $\lambda_1$, $\lambda_2$, and $\alpha$ from the total loss. As a result, Sim-CompILer is constrained by the vanilla CE loss and a surrogate loss, $L_{total}=L_{CE}+\lambda_3 L_{sur}$, where the only remaining hyperparameter is $\lambda_3$, which controls the balance between the CE and surrogate losses. Apart from that, $\beta$ is still needed to balance compositions and primitives. The results in the table below show that, while CompILer performs best across both datasets, Sim-CompILer lags only slightly behind and still achieves second-best results compared to the baselines in Table 1 and Table 2. We opted for three pools rather than two due to the poor performance of dual prompt pools on Split-UT-Zappos, as noted in our previous rebuttal. We will include the results for this Sim-CompILer alternative in the final version. Hopefully, this simpler alternative will provide more flexibility for potential users to conduct related research.
In the table below, the first five metric columns refer to Split-Clothing (5 tasks) and the last five to Split-UT-Zappos (5 tasks):

| Name | Avg Acc | FTT(↓) | State | Object | HM | Avg Acc | FTT(↓) | State | Object | HM |
|---------------|---------|--------|-------|--------|------|---------|--------|-------|--------|------|
| Sim-CompILer | 88.22 | 8.37 | 90.95 | **96.47** | 93.36 | 46.43 | 19.31 | 56.83 | **79.58** | 66.31 |
| CompILer | **88.74** | **6.98** | **91.61** | 96.34 | **93.92** | **47.06** | **18.84** | **57.52** | 79.53 | **66.75** |
**Q2: About forgetting**
**A2:** We apologize that our explanation was confusing due to the complex biological theory. We acknowledge that we have not implemented explicit methods to circumvent catastrophic forgetting, and this choice was intentional for the following reasons:
First of all, following L2P, our model avoids catastrophic forgetting by freezing the ViT backbone across all incremental sessions. Thanks to this simple yet effective strategy, L2P and its improved variants (such as Dual-Prompt, CODA-Prompt, LGCL, and our CompILer) achieve much lower forgetting rates than EWC and LwF, as shown in Table 1 of the submitted manuscript. In addition, following the suggestion from Reviewer x9g2, we performed further comparisons with NCM and FeCAM. While these two methods are not built upon L2P, they also freeze the backbone to overcome catastrophic forgetting effectively. From these results, we note that learning without forgetting relies chiefly on a frozen backbone, and that a small number of learnable parameters has limited impact on forgetting. This is consistent with the fact that L2P itself does not apply any explicit countermeasures to its learnable prompts to alleviate forgetting. Akin to the learnable prompts, we advocate learning the parameters of the object-injected state prompting module without extra explicit countermeasures.
On the other hand, we would like to emphasize again that the aim of continual learning is to pursue an optimal balance between stability and plasticity. Since stability is obtained by freezing the backbone, the remaining challenge is how to improve plasticity when adapting the model to new tasks. The learnable parameters in the prompt pool and the object-injected state prompting module (even though their number is far smaller than the backbone's) allow us to improve the adaptability of the model and further increase accuracy.
Last but not least, the model tends to forget compositions rather than objects and states, because the object and state primitives may reappear in new tasks. Hence, we apply object-injected state prompting to the primitive branches, leading to minor forgetting on the compositions.
---
Rebuttal Comment 2.1:
Comment: The authors have answered my questions. I would like to thank them for their efforts, which address most of my initial concerns. As a result, I will raise my score to weak accept. | Summary: This work presents a new task called Compositional Incremental Learning (composition-IL).
This new task extends the existing class incremental learning to a more fine-grained scenario for more realistic applications.
This work formulates and designs a new composition-IL benchmark based on Clothing16K and UT-Zappos50K datasets.
Technically, the authors propose several novel strategies based on prompting on pretrained ViT.
Specifically, this work suggests using three prompt pools for states, objects, and their compositions respectively.
To overcome the gap between the pretrained task and the composition task, the object prompts are further injected to guide the selection of state prompts.
The authors also suggest a learnable generalized-mean prompt fusion scheme for prompt reduction.
Extensive experiments are provided to evaluate the models.
Strengths: 1. This paper has done a great job in presentation, with clear writing and figures, enabling fast understanding for reviewers familiar with continual learning.
2. The contribution of proposing novel and more realistic tasks is always welcomed. And this work has done some essential initial jobs such as creating a benchmark for composition-IL.
3. The proposed method looks convincing to me.
Weaknesses: 1. The main concern for me is the proposed composition-IL itself. From the experimental results (Tables 1 and 2), it seems that previous prompt-based works can also handle this task with comparable performance. So my doubt is whether this work overclaims the difficulty of composition-IL. As someone working in the continual learning field, I think new settings and tasks are always welcome, but only if the new task is challenging and practical enough.
2. The effectiveness of the proposed components. As shown in Table 3, I think the performance gap between C+S+O and C+S/C+O is small, so the effectiveness of multi-pool prompt learning is not that significant. Similar results are also in Table 4. In Table 4(a), there is only a 0.32% improvement in HM when introducing the object injection. In Table 4(b), there is also only a 0.3% improvement when using the GeM instead of the naive mean pooling.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the 'Weakness' section for detailed questions.
In conclusion, I expect the authors to provide their thoughts about the necessity of the proposed composition-IL setting, as well as some explanations of the small performance gap when equipped with the proposed modules.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: About the composition-IL
A1: Thank you for your great comment. Firstly, we would like to emphasize that previous prompt-based works have achieved satisfactory results by employing strategies that contradict the paradigm of continual learning. All of the compared prompt-based methods except L2P involve task-specific parameters, which means that their network parameters grow as more incremental tasks are introduced. This approach effectively enhances performance but goes against the fundamental principle of continual learning, which aims for a fixed model to incrementally learn new tasks while overcoming forgetting of old ones; a training paradigm of ever-increasing parameters clearly contradicts this requirement. These competitive results, achieved by network expansion, should not diminish the significance of composition-IL.
Without network expansion, L2P shows a large gap to the Upper Bound, as evidenced in Table 1 of the paper, demonstrating the clear need for further research. L2P is the only method among previous works that does not employ the aforementioned tricks, and it shows a considerable performance collapse across the three settings compared to One-batch learning, failing even to reach 50% of the Upper Bound. This significant degradation underscores the challenges composition-IL faces due to ambiguous composition boundaries. Hence, exploring methods that strictly adhere to the continual learning paradigm while achieving satisfactory results remains highly demanded and challenging.
Furthermore, we underscore the research significance of composition-IL from a theoretical perspective. The hallmark of the human cognitive system is compositionality, allowing people to decompose and recombine knowledge to better understand and facilitate learning of new knowledge. Therefore, much scholarly attention has focused on how to equip models with compositionality. Unfortunately, there is little work studying compositionality in incremental learning scenarios. We recognize that in many real-world scenarios, decisions often rely on state-object pairs; therefore, we propose composition-IL to address the gap in modeling states within incremental learning. For instance, in autonomous driving systems, decisions often hinge not just on detecting pedestrians but also on understanding their state to determine the vehicle's next actions. Hence, this task is highly practical and worthy of pursuit.
Q2: The effectiveness of the proposed components.
A2: About C+S+O: The purpose of the multi-prompt pool can be seen as a decoupling mechanism. Compared to a single prompt pool, the three-prompt pool exhibits a significant improvement, increasing performance by 8.73%. Introducing an additional pool (decoupling a single primitive), can lead to substantial performance gains. This is because arbitrary primitive disentanglement helps to obtain clean, independent primitive features while implicitly decoupling another one and aiding compositional learning. For example, when using two prompt pools to separately learn composition and state, the state prompt and composition prompt are concatenated together with feature sequence and processed through MHA. During this process, the tokens used for composition classification interact with the state prompt, allowing the composition to understand what clean state information looks like, which aids the model's learning of state in the composition branch. Simultaneously, the model recognizes information irrelevant to the state prompt as object information. As a result, during classification, the classifier performs better at distinguishing which information is useful for state classification and which is for object classification, achieving a 1+1>2 effect. Therefore, using two prompt pools yields impressive experimental results. However, this does not imply that two pools are necessarily optimal. Only by explicitly incorporating two prompt pools for state and object can they complement each other better to achieve optimal results, as shown in manuscript Table 3.
About Object-injected State Prompting: The expected improvement from Object-injected State Prompting is not as apparent due to limitations in the prompt selection mechanism. Like L2P, prompt selection follows the key-value mechanism. A drawback of this paradigm is that it cannot be optimized in an end-to-end fashion because it uses keys and queries to select a prompt index from the pool. This reliance on a second, localized optimization to learn the keys means that the model gradient cannot backpropagate through the key/query index selection, leading to only minor improvements in the key. Unfortunately, injected state prompting also suffers from this limitation, as it is subject to separate optimization without model gradient improvements. This is a common issue in prompt-based continual learning and is worth exploring further. We plan to investigate alternative methods in future work.
About Generalized Mean Pooling: GeM can be seen as a balance between max pooling and mean pooling; we analyze it from the perspective of its derivatives in our appendix. We leverage GeM to filter out irrelevant information from the selected prompts: the fewer prompts selected, the higher the relevance between these prompts and the images. The effectiveness of GeM therefore grows with the amount of information to be integrated; the more information that needs to be fused, the more pronounced the effect of GeM pooling.
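A minimal sketch of this balance, assuming GeM is applied element-wise across the selected prompts (in the actual model the exponent p may be learnable; here it is a fixed argument):

```python
import numpy as np

def gem_pool(prompts: np.ndarray, p: float = 3.0, eps: float = 1e-6) -> np.ndarray:
    """Generalized-mean (GeM) pooling over a set of selected prompts.

    prompts: (num_prompts, dim). p = 1 recovers mean pooling, and as
    p grows, GeM approaches element-wise max pooling.
    """
    clamped = np.clip(prompts, eps, None)  # GeM assumes positive inputs
    return np.mean(clamped ** p, axis=0) ** (1.0 / p)

x = np.array([[1.0, 4.0],
              [3.0, 2.0]])
print(gem_pool(x, p=1.0))    # [2. 3.]  -> column-wise mean
print(gem_pool(x, p=100.0))  # close to the column-wise max [3. 4.]
```

Intermediate values of p thus emphasize the strongest responses among the fused prompts without discarding the rest, which is the max/mean trade-off described above.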
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal.
The rebuttal addressed part of my concerns, so I will keep my score unchanged. | Summary: This paper introduces a novel task termed Compositional Incremental Learning (composition-IL), which aims to enable models to recognize a variety of state-object compositions incrementally. The authors propose a new model called CompILer, which employs multi-pool prompt learning, object-injected state prompting, and generalized-mean prompt fusion to overcome the challenges in this task. The study utilizes two newly tailored datasets, Split-Clothing and Split-UT-Zappos, and demonstrates state-of-the-art performance through extensive experiments.
Strengths: 1. The introduction of composition-IL addresses a critical gap in incremental learning by focusing on the recognition of state-object compositions, which traditional class-IL and blur-IL approaches overlook
2. The proposed CompILer model is well-conceived, leveraging multi-pool prompt learning, object-injected state prompting, and generalized-mean prompt fusion to enhance the learning of compositions.
Weaknesses: 1. The paper shows that CompILer only slightly outperforms LGCL and CODA-Prompt on the Split-Clothing and Split-UT-Zappos datasets. However, the authors do not analyze in detail why LGCL and CODA-Prompt perform well on these tasks or why CompILer's improvements are limited. A detailed comparative analysis is needed.
2. Lack of analysis of model efficiency, especially the size of the learnt prompts compared to current prompt-based methods.
3. The paper lacks a systematic analysis of the hyperparameters $\lambda_1$ and $\lambda_2$ in Equation 9.
Technical Quality: 3
Clarity: 3
Questions for Authors: Model Efficiency: How does the size of the learnt prompts in CompILer compare to current prompt-based methods? Please provide a detailed analysis of model efficiency.
Hyperparameter Analysis: Can you provide a systematic analysis of the hyperparameters $\lambda_1$ and $\lambda_2$ in Equation 9? How do variations in these values impact the model's performance?
Performance Comparison: Why do LGCL and CODA-Prompt perform well on the Split-Clothing and Split-UT-Zappos tasks, and what factors limit the performance improvements of CompILer over these methods? A detailed comparative analysis would be beneficial.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have acknowledged the limitations of their work, particularly concerning the performance challenges on the Split-UT-Zappos dataset due to long-tail distributions and the difficulty in distinguishing state classes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: About performance comparison.
A1: Thanks for your question; we are glad to answer it. First, we would like to emphasize that CompILer has achieved SOTA performance across all settings. Notably, on the Split-Clothing dataset, CompILer beats LGCL and CODA-Prompt with improved average accuracies of 1.55% and 1.63%, respectively. The reason the performance improvement might not meet the reviewer's expectations is that both LGCL and CODA-Prompt rely on parameter-expansion training paradigms. As mentioned in line 263 of our paper, the parameters of these two methods increase significantly as more new tasks arrive. Although introducing task-specific parameters undoubtedly boosts performance, this approach contradicts the core principle of continual learning, where the network parameters should not grow without bound as the number of tasks increases. Additionally, as noted in line 255 of our paper, LGCL introduces external semantic priors to achieve language guidance; although this trick further enhances performance, it also limits the model's applicability. For instance, LGCL fails to operate on the 5-task Split-UT-Zappos because the total length of class names exceeds the limit. In contrast, CompILer does not employ any parameter-increasing techniques or additional semantic knowledge. Compared to our baseline L2P, which does not use parameter expansion, CompILer shows significant improvements of 8.73%, 4.91%, and 3.13% across the three settings. We will shed more light on this comparison and expand this section in the camera-ready version.
Q2: About model efficiency.
A2: We have followed your suggestion and further compared the size of the prompts used in CompILer and other prompt-based methods. As shown in Tab. 3 of the attached PDF, the prompt pool in CompILer is smaller than that of CODA-Prompt but larger than those of Dual-Prompt and LGCL. As anticipated, CompILer demonstrates higher model efficiency for longer sequences of tasks, because once the size of the prompt pool is pre-defined, the model parameters do not grow dynamically. Consequently, in such scenarios, we conjecture that CompILer's memory footprint and inference speed are more efficient than those of Dual-Prompt and CODA-Prompt.
Q3: Lack of hyperparameter analysis
A3: We have added more experiments regarding the parameters $\lambda_1$ and $\lambda_2$, as shown in Fig. 1 of the attached PDF. The model's average accuracy shows an initial increase followed by a decrease within the interval, reaching a peak at $\lambda_1=1.0$ and $\lambda_2=3e-5$. Therefore, we consider the model to be at a local optimum in this setting and choose these parameters as the final setting.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for your clear responses and additional analyses. I appreciate your work and believe it adds valuable contributions to the field. I raise my rating to 6. | Summary: The paper proposes Compositional Incremental Learning to enable models to recognize state-object compositions incrementally. The paper provides two tailored datasets for composition-IL by modifying two existing datasets in the fashion domain. The paper proposes a new prompt-based model comprising multi-pool prompt learning, object-injected state prompting, and generalized-mean prompt fusion. The method achieves competitive performance on the two proposed datasets.
Strengths: 1. The paper discusses a new direction which aims to exploit state primitives of objects for incremental classification. The authors propose a new incremental setting where objects or states can reappear in new classes in new tasks.
2. The paper provides two curated datasets in the fashion domain to study composition-IL.
3. The paper proposes a new prompt-based model for composition-IL. It is nice to report HM accuracy for better evaluation.
Weaknesses: I have major concerns with the experimental part of the paper.
1. Hyperparameters: It is very concerning that six hyperparameters are tuned on the testing set. The hyperparameter values even differ for different task splits on the same dataset. It is strange that the authors use 25 epochs for one dataset, while for the other dataset they use 10 epochs for the 5-task setting and 3 epochs for the 10-task setting. Even different learning rates are used for different settings on the same datasets. How are these decided? It looks like everything is optimized for the test sets in all settings. This is not a fair way of doing experiments. It would be acceptable if the authors fine-tuned the model for one dataset and used the same parameters for all settings across different datasets (I think this is commonly done in the continual learning domain). How can the proposed method be useful/practical if it needs so many hyperparameters tuned on the test set of every setting to get good results?
2. Lack of experiments with random seeds: The experiments are conducted using a single random seed. The proposed method has improvements of 1% or even less in some settings. It is standard practice in CL to report results with multiple random seeds for fair evaluation and to establish the robustness of the model.
3. Competitive recent baselines like HiDe-prompt [1] are not included in the comparison.
4. Simple methods like an NCM classifier [2] and a Mahalanobis-distance-based classifier [3] outperform prompt methods like L2P on several datasets with first-task adaptation and no training on new tasks (using the frozen model after the first task and performing continual evaluation). It would be interesting to see how these strong baseline methods work in the proposed settings.
[1] Liyuan Wang, et al. "Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality." Advances in Neural Information Processing Systems 36 (2023).
[2] Paul Janson, et al. "A simple baseline that questions the use of pretrained models in continual learning." arXiv preprint arXiv:2210.04428, 2022.
[3] Dipam Goswami, et al. "FeCAM: Exploiting the heterogeneity of class distributions in exemplar-free continual learning." Advances in Neural Information Processing Systems 36 (2023).
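As background for this comparison, a nearest-class-mean classifier in the spirit of [2] can be sketched as follows. This is an illustration only (plain NumPy, assuming a frozen feature extractor supplies the feature vectors), not the cited implementation:

```python
import numpy as np

class NCMClassifier:
    """Nearest-class-mean classifier over frozen features.

    Class prototypes are running means of feature vectors; new classes
    can be added incrementally without updating any backbone weights.
    """

    def __init__(self):
        self.means = {}   # class id -> prototype vector
        self.counts = {}  # class id -> number of samples seen

    def update(self, features, labels):
        # Accumulate per-class feature means incrementally.
        for f, y in zip(features, labels):
            n = self.counts.get(y, 0)
            mu = self.means.get(y, np.zeros_like(f))
            self.means[y] = (mu * n + f) / (n + 1)
            self.counts[y] = n + 1

    def predict(self, features):
        classes = sorted(self.means)
        protos = np.stack([self.means[c] for c in classes])  # (C, d)
        # Assign each sample to the nearest prototype (Euclidean distance).
        dists = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
        return [classes[i] for i in dists.argmin(axis=1)]
```

A Mahalanobis-distance variant as in [3] would additionally keep a per-class (or shared) covariance estimate and replace the Euclidean distance accordingly.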
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I am curious what is the difference between object-state pairs and a group of concepts describing a class? Like why limit the concepts to just object and state, there can be more concepts attached to a class. So, a class can also be described as a group of concepts and classes can then be learned incrementally with overlapping concepts from old classes.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of this work are not explicitly addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: About hyperparameter tuning
A1: Thanks for your question. First, we note that finding the optimal balance between hyperparameters is important yet challenging for continual learning. Since the feature distributions within each incremental session may vary considerably, the optimal hyperparameters for each session often differ. Fortunately, our work uses a single set of hyperparameters across all sessions, rather than tuning different hyperparameters for each individual session.
Moreover, due to differences in dataset complexity and feature distributions, different hyperparameters are often assigned to different datasets. This is quite common in prompt-based continual learning. For example, DualPrompt trains on CIFAR-100 with a learning rate of 0.03 for 5 epochs, whereas it trains on ImageNet-R with a learning rate of 0.005 for 50 epochs. Likewise, LGCL's class-level and task-level objective weights on CIFAR-100 are 0.137 and 0.323, respectively, specified to three decimal places.
Furthermore, for the same dataset, different settings are often assigned different hyperparameters to ensure optimal performance; this is a common strategy in compositional learning. For example, the milestone Co-CGE work [1] in Compositional Zero-Shot Learning (CZSL) trains for 300 epochs in the closed-world scenario on the CGQA dataset but only 200 epochs in the open-world scenario. We emphasize that in CZSL the training data for the open-world and closed-world scenarios is exactly the same, which is essentially no different from our hyperparameter setting. In the compositional field, different hyperparameters can be assigned to the same dataset under different settings because of the complex interplay in the feature space resulting from compositionality; without appropriate hyperparameters, the model's potential may be severely limited. Our empirical choice of hyperparameters is based on the following: for Split-Clothing, with its simpler primitive descriptions, a larger number of epochs should be chosen to learn a more accurate representation, whereas for Split-UT, where states are often not visually salient, excessively many epochs may lead to overfitting on irrelevant information. For example, the model might mistakenly associate "Suede" with "black" because many "Suede shoes" in the dataset appear in black.
Finally, the core contributions and innovations of our work lie in the novel task setting, two corresponding datasets, and the method for solving ambiguous composition boundaries. We acknowledge that setting the same parameters across all datasets is an optimal setup as it demonstrates a model's generalizability, but this is not the central problem our work addresses. However, we will also consider the reviewers' feedback and explore a more robust model in future work, which will be discussed in the next version.
Q2: Lack of experiments with random seeds
A2: We greatly appreciate the reviewer's reminder. We conducted experiments with 4 different random seeds, as shown in Tab. 1 of the PDF. Although there are slight fluctuations in the results, CompILer consistently achieves SOTA performance across all metrics. Although single-seed experiments are acceptable in recent continual learning works [2][3], we will use multiple random seeds for a more thorough comparison.
Additionally, we clarify that CompILer does not use any extra semantic knowledge, does not store any information from old sessions, does not allocate any task-specific parameters, and does not dynamically increase network parameters. We believe that these techniques (which are commonly used by prompt-based methods other than L2P) undoubtedly contribute significantly to performance improvements, but they are contrary to the principles of continual learning, as noted in line 263 of our manuscript.
Q3: Lack of methods
A3: We have conducted comparisons with the methods you suggested; the results are shown in Tab. 2 of the PDF. Due to the limited rebuttal period, we mainly compare CompILer with these methods on Split-UT-Zappos. HiDe-Prompt surpasses CompILer owing to its task-specific prompt settings and storage of old-class statistics, which seem to contradict the principles of continual learning to some extent.
For NCM and FeCAM, unfortunately, although these first-task adaptation approaches achieve a lower forgetting rate, their plasticity on incremental tasks is quite poor, because they only work well when the first task contains the most classes. We will include these methods as baselines in the next version and provide proper citations.
Q4: The relationship with a group of concepts.
A4: Some datasets associate classes with a group of concepts. For example, the CUB dataset provides a number of attribute concepts for each class, such as the color of the leg/back and the pattern of the head/breast. However, this attribute information is local in semantics and cannot encapsulate the global state features of a class; as a result, it prevents the attribute information from forming a pair with the object. In contrast, the composition-IL task combines objects with global states, so that objects and states are treated on an equal footing and exhibit compositionality. Notably, the original Clothing16K and UT-Zappos50K datasets provide only a single state description for each class, which prevents Split-Clothing and Split-UT from pairing an object with a group of global concepts. Nevertheless, we acknowledge your advice, and exploring a new benchmark with multiple global states is a promising direction.
[1] Learning Graph Embeddings for Open World Compositional Zero-shot Learning (TPAMI 2022)
[2] When Prompt-based Incremental Learning Does Not Meet Strong Pretraining (ICCV 2023)
[3] FeCAM: Exploiting the heterogeneity of class distributions in exemplar-free continual learning (NeurIPS 2023)
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed rebuttal from the authors, which answers most of my concerns.
1. Hyperparameters: While the authors argue that several papers use different training hyperparameters for different settings, even on the same dataset, I still believe this is not a robust solution; it is just optimizing results on the test set, which does not make sense. Saying that some existing papers do the same overfitting on the test set is not enough. I would urge the authors to reduce the number of hyperparameters (following the discussion with Reviewer EVGX) to make the method more usable and realistic. Improving accuracy by small margins is not everything; what matters more is how robust and practical the solution is.
2. Comparison with baselines: It is appreciable that the authors reproduced the baseline methods. HiDe-Prompt was expected to do better; it would be good to have a discussion in the paper about the trade-offs. NCM is a baseline that can be considered a lower bound since it does not update the model. FeCAM is naturally expected to do better than NCM since it uses the Mahalanobis distance and has improved over NCM with ViTs [3]. So it is quite surprising to see that it gets worse. I would ask the authors to look into the implementation or find an appropriate justification, since the FeCAM results in Table 2 look like an error or a faulty implementation. It is important to discuss these baselines since they form lower bounds and show the scope of improvement from prompt methods.
I still maintain my concerns about the hyperparameters and urge the authors to look into this. I raised my rating to 5.
---
Rebuttal 2:
Comment: Thank you very much for further discussion and comments. We appreciate very much for your raising the rating. We are happy to answer your questions below:
**Q1: About hyperparameters.**
**A1:** We have acted on your suggestion to reduce the number of hyperparameters by streamlining the model. Specifically, we implement a simplified alternative model (Sim-CompILer), which removes the directional decouple loss, the RCE loss, and Object-injected State Prompting from the original CompILer, thereby eliminating the hyperparameters $\lambda_1$, $\lambda_2$, and $\alpha$. Note that $\lambda_3$ remains in order to balance the CE loss and the surrogate loss, and $\beta$ is still needed to adjust the trade-off between compositions and primitives. The results, shown in the table below, demonstrate that while CompILer achieves the best performance on both datasets, Sim-CompILer only slightly underperforms and still ranks second. We agree with your suggestion and believe this simplified version is more practical and easier to follow. We will include the results for Sim-CompILer in the final version; hopefully, this simpler alternative will provide more flexibility in practical applications.
Split-Clothing (5 tasks):

| Name | Avg Acc | FTT(↓) | State | Object | HM |
|---|---|---|---|---|---|
| Sim-CompILer | 88.22 | 8.37 | 90.95 | **96.47** | 93.36 |
| CompILer | **88.74** | **6.98** | **91.61** | 96.34 | **93.92** |

Split-UT-Zappos (5 tasks):

| Name | Avg Acc | FTT(↓) | State | Object | HM |
|---|---|---|---|---|---|
| Sim-CompILer | 46.43 | 19.31 | 56.83 | **79.58** | 66.31 |
| CompILer | **47.06** | **18.84** | **57.52** | 79.53 | **66.75** |
**Q2: Comparison with baselines.**
**A2:** Thank you for pointing out this issue. After checking our experiments, we confirmed that there was an error in our implementation of NCM and FeCAM. Concretely, FeCAM provides two official implementations, one under the Avalanche library [1] and the other under PyCIL [2]. Since the Avalanche library includes both NCM and FeCAM, we conducted our experiments within this framework to ensure a fair comparison. Unfortunately, during the process, we overlooked the need to update the class mask, which determines which classes have been seen so far while training the model incrementally. This oversight led to the unreasonable results you noticed. We made the same mistake when implementing NCM, which also produced unsatisfactory results. To correct the error, we fixed the mask and re-ran the experiments from scratch. The new results, shown in the following table, confirm that FeCAM achieves higher accuracy than NCM.
CompILer outperforms NCM and FeCAM by enhancing adaptation through the tuning of learnable prompts across all sessions. In contrast, NCM and FeCAM are limited by their inability to update the model during incremental learning. Additionally, it is noteworthy that CompILer significantly surpasses NCM and FeCAM in Object accuracy. This is because CompILer freezes the backbone across all sessions and leverages the pre-trained parameters, while the other two methods update the backbone in the first session, which weakens their effectiveness in object recognition. We will include these methods as baselines in the next version and provide proper citations.
| Method | Prototype | Avg Acc | State | Object | HM |
|----------|-----------|---------|--------|--------|--------|
| NCM | ✓ | 31.09 | 41.91 | 39.71 | 40.78 |
| FeCAM | ✓ | 33.71 | **46.32** | 40.44 | 43.18 |
| CompILer | ✗ | **34.66** | 45.82 | **77.06** | **57.47** |
[1] Avalanche: an End-to-End Library for Continual Learning (CVPRW 2021)
[2] Pycil: a python toolbox for class-incremental learning (SCIENCE CHINA Information Sciences 2023)
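For illustration, the class-mask correction described in A2 amounts to suppressing the logits of not-yet-seen classes before taking the argmax. A minimal sketch (hypothetical helper, not the Avalanche or PyCIL code):

```python
import numpy as np

def masked_predict(logits, seen_classes):
    """Restrict predictions to classes seen so far in incremental training.

    logits: (batch, num_total_classes) array of class scores.
    seen_classes: iterable of class ids observed in training so far.
    Unseen-class logits are set to -inf so they can never be predicted.
    """
    masked = np.full_like(logits, -np.inf)
    idx = list(seen_classes)
    masked[:, idx] = logits[:, idx]
    return masked.argmax(axis=1)
```

Forgetting to grow `seen_classes` after each session would let the classifier predict classes it has never been evaluated against, producing exactly the kind of implausible accuracy discussed above.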
---
Rebuttal Comment 2.1:
Comment: I appreciate the authors efforts in improving the paper and clarifying most of my concerns.
It is good that the authors propose a simpler version of the method which still works with lesser hyper-parameters and should be included in the paper. I expect the authors to have the discussion in the paper on the need to reduce the hyper-parameters and use simpler models.
It is interesting to see that the NCM and FeCAM baselines perform very competitively with the prompt-based methods, better at predicting the state but worse at object prediction. It would be good to have a detailed discussion comparing with these frozen-model baselines, highlighting how the learnable prompt parameters (also using a frozen model) affect performance. It would also be good to add them to the model efficiency analysis (as in Table 3 of the rebuttal), since they do not learn any new parameters.
Overall, I am now quite positive about the contribution of this work which opens up a very interesting direction in Composition-IL with very intuitive and novel setting, method, datasets and very good analysis. The extensive experiments (including the baselines and also more rehearsal-based method comparisons added during rebuttal) makes it a good benchmark for future works.
One minor remark for the final version would be to add proper informative captions for all figures in the paper which now is quite empty, it makes the paper more readable and easy to understand. I raise my rating to 6. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers x9g2 (R1), B271 (R2), KbBF (R3), and EVGX (R4) for their constructive comments and acknowledgements: "the proposed new task composition-IL is novel and welcome" (R1, R2, R3, R4); "the proposed benchmarks are well-constructed" (R1, R3); "CompILer is well-conceived" (R2, R3); "the paper is well presented" (R3, R4); "the metric is meaningful" (R1).
Pdf: /pdf/74e4192ff190c585ff466f0b7f046f503b454c49.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Mini-Sequence Transformers: Optimizing Intermediate Memory for Long Sequences Training | Accept (poster) | Summary: The paper introduces the MINI-SEQUENCE TRANSFORMER (MST), a technique designed to enhance the efficiency and accuracy of LLM training, particularly when dealing with extremely long sequences. The core concept behind MST is the partitioning of input sequences into smaller "mini-sequences," which are then processed iteratively to alleviate the burden on intermediate memory usage. The authors claim that MST, when combined with activation recomputation, leads to substantial memory savings in both the forward and backward passes of LLM training. The technique's efficacy is demonstrated through experiments with the Llama2-8B model, where MST reportedly allows for training with sequences up to 12x longer than standard implementations without compromising throughput or convergence. The authors emphasize that MST is a general, implementation-agnostic solution that can be seamlessly integrated into existing LLM training frameworks with minimal code modifications.
Strengths: 1. Novel Approach to Memory Optimization: The paper introduces a new method, MST, to address the critical challenge of memory management in LLM training, particularly for long sequences. The approach of partitioning sequences and iterative processing of mini-sequences is innovative and shows promising results in reducing intermediate memory usage.
2. Significant Improvement in Sequence Length: The experimental results demonstrate that MST enables training with sequences up to 12 times longer than standard implementations, which is a substantial advancement. This capability can significantly impact various NLP tasks that require reasoning over extended contexts.
3. Generality and Implementation Agnostic: The paper emphasizes that MST is a general methodology that can be applied to various LLM training frameworks with minimal code changes. This generality enhances the technique's potential for widespread adoption and impact.
Compatibility with Distributed Training: The paper also discusses the extension of MST to distributed settings, showcasing its potential for large-scale training scenarios. This scalability further strengthens the technique's applicability to real-world LLM training.
4. Thorough Evaluation: The paper provides a comprehensive evaluation of MST, including ablation studies and comparisons with existing techniques. The results convincingly demonstrate the effectiveness of MST in reducing memory overhead and enabling longer sequence training.
Weaknesses: 1. Limited Model Scope: The experimental evaluation primarily focuses on the Llama2-8B model. While the authors claim that MST is general, further evaluation on a wider range of LLM architectures would strengthen the paper's claims and demonstrate the technique's broader applicability.
2. Lack of Comparison with State-of-the-Art: The paper could benefit from a more extensive comparison with other state-of-the-art memory optimization techniques for LLM training. This would provide a clearer understanding of MST's relative performance and advantages.
3. Potential Impact on Training Time: While MST shows promising results in reducing memory usage, its potential impact on overall training time, especially for shorter sequences, needs further investigation. The paper acknowledges this limitation but could provide a more in-depth analysis of the trade-offs between memory savings and training time.
4. Implementation Complexity: Although the authors claim that MST is easy to integrate, the actual implementation details and potential challenges in adapting it to different training frameworks could be elaborated further. Providing more guidance on implementation could facilitate wider adoption.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Impact on Model Quality: While the paper demonstrates that MST does not degrade model quality for the evaluated tasks, a more extensive evaluation on a broader range of NLP tasks would be valuable to confirm its impact on model performance across different domains.
2. Sensitivity to Hyperparameters: The paper could provide more insights into the sensitivity of MST's performance to its hyperparameters, such as the number of mini-sequences and the partitioning strategy. This would help users understand how to tune MST for optimal results in different scenarios.
3. Applicability to Other Memory Optimization Techniques: The paper could explore the potential of combining MST with other memory optimization techniques, such as quantization or gradient checkpointing. This could lead to further improvements in memory efficiency and training capabilities.
4. Long-Term Impact on LLM Research: The paper could discuss the potential long-term impact of MST on LLM research and development. Enabling training with longer sequences could open new avenues for exploring larger and more capable LLMs, potentially leading to breakthroughs in various NLP applications.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. Performance Degradation for Short Sequences: As acknowledged in the paper, MST may lead to performance degradation when applied to models with short sequences due to the overhead introduced by partitioning and gradient accumulation. This limits its applicability to scenarios where long sequence training is not the primary concern.
2. Dependence on Activation Recomputation: The effectiveness of MST is closely tied to its combination with activation recomputation. While this combination yields significant memory savings, it also introduces additional computational overhead, potentially impacting overall training time.
3. Limited Evaluation on Distributed Settings: Although the paper discusses the extension of MST to distributed training, the evaluation in this setting is relatively limited. Further experiments on larger-scale distributed systems would provide more insights into its performance and scalability in real-world scenarios.
4. Potential for Further Optimization: The current implementation of MST may have room for further optimization, particularly in terms of minimizing the overhead associated with partitioning and gradient accumulation. Exploring more efficient implementation strategies could enhance its performance and broaden its applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's thorough assessment and insightful comments. Our rebuttal addresses concerns and introduces a significant optimization: chunk-based mini-sequence transformers (MST). This advancement directly addresses the "Potential for Further Optimization" and "Sensitivity to Hyperparameters" concerns, with additional data to support our claims.
# 1. Further Optimization: Chunk-based Mini-Sequence Transformers
We've developed chunk-based MST to enhance efficiency in memory management and reduce computational overhead. This method:
- Partitions input sequences into fixed-size sub-chunks
- Integrates seamlessly with existing MST design
Key concept: The input sequence of length $S$ is split into sub-chunks of size $C = S/M$, where $M$ is the number of mini-sequences. This approach effectively addresses challenges in training both long-sequence transformers and small sequences, with no slowdown for the latter.
Hyperparameter selection: We've found that setting chunk_size $C = d$ (d being hidden size) yields optimal performance.
Analysis: MST has I/O complexity $O(Sd + SI + dIM)$, where $S$ is the sequence length, $d$ the hidden size, $I$ the intermediate size, and $M$ the number of mini-sequences. As long as $dIM \leq SI$, the I/O complexity remains $O(SI)$. In conclusion, $C = S/M \geq d$ is the best selection for the MLP block.
The LM-head uses the original MST with hyperparameter $M = V/d$, where $V$ is the vocabulary size.
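To make the mini-sequence idea concrete, here is a minimal NumPy sketch of the partitioning for the MLP block. It is an illustration only, not the authors' implementation: the real method also runs the backward pass chunk by chunk with gradient accumulation, and handles the LM head analogously.

```python
import numpy as np

def mlp_block(x, w_in, w_out):
    # Standard transformer MLP: (S, d) -> ReLU intermediate (S, I) -> (S, d).
    return np.maximum(x @ w_in, 0.0) @ w_out

def mlp_mini_sequence(x, w_in, w_out, num_mini_sequences):
    """Process the sequence in M mini-sequences of size C = S / M.

    Only one (C, I) intermediate activation is materialized at a time,
    instead of the full (S, I) tensor -- the source of the memory saving.
    Because each token's MLP output depends only on that token, chunking
    along the sequence axis does not change the result.
    """
    chunks = np.array_split(x, num_mini_sequences, axis=0)
    return np.concatenate([mlp_block(c, w_in, w_out) for c in chunks], axis=0)
```

The equivalence between the chunked and unchunked computation is exact, which is why the method is lossless, unlike quantization-based memory reduction.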
# 2. Addressing Weaknesses
# W1 Limited Model Scope:
To demonstrate the broader applicability of MST, we conducted additional experiments with Qwen, Mistral, and Gemma-2 models:
| Implementation | Maximum Sequence Length (K) |
|--|--|
| Mistral-7B-v0.1 vanilla | 5 |
| Mistral-7B-v0.1 activation recomputation | 42 |
| **Mistral-7B-v0.1 MST** | **70** |
| Qwen2-7B-Instruct vanilla | 4 |
| Qwen2-7B-Instruct activation recomputation | 13 |
| **Qwen2-7B-Instruct MST** | **74** |
| Gemma-2-9b vanilla | 1.5 |
| Gemma-2-9b activation recomputation | 5 |
| **Gemma-2-9b MST** | **36** |
These results show significant improvements: 12x, 18x, and 24x context length increases for Mistral-7B, Qwen2-7B-Instruct, and Gemma-2-9b, respectively. MST performs exceptionally well for Gemma, likely due to its large vocab size (256k) and MLP intermediate size (14k).
# W2 Comparison with Lossy Methods:
We've extended our comparisons to include state-of-the-art lossy quantization methods:
| Implementation | Maximum Sequence Length (K) |
|-|-|
| 8-bit | 5 |
| 8-bit + activation checkpointing | 26 |
| 4-bit | 10 |
| 4-bit + activation checkpointing | 28 |
| MST | 60 |
| **MST + 8-bit** | **110** |
| **MST + 4-bit** | **140** |
MST outperforms both 8-bit and 4-bit quantization in maximum sequence length (60K for MST vs. 26K for 8-bit and 28K for 4-bit, each with activation checkpointing). Moreover, MST combined with 8-bit quantization achieves 110K tokens, a 22x improvement over standard 8-bit training.
# W3 Training Time:
We conducted additional experiments with chunk-based MST to address concerns about training time:
Table 2: MST training with two epochs on LongAlpaca-12k
| Implementation | Context length | LongAlpaca-12k (ppl) | loss | Training Time (hours) |
|:-|-:|-:|-:|-:|
| Act. recomputation | 8k | 9.34 | 2.23 | 25.6 |
| MST | 8k | 7.41 | 2.003 | 26.5 |
| MST | 16k | 3.53 | 1.26 | 62.5 |
| MST | 30k | 3.45 | 1.23 | 233 |
These results demonstrate that MST trains without significant throughput reduction (the overhead at 8k is under 5%, which we consider negligible).
# W4 Implementation Complexity:
MST's core idea is conceptually straightforward, primarily targeting MLP and LM-Head blocks. We offer two integration methods:
a) Customized Hugging Face Transformer:
Modify the existing Hugging Face Transformers implementation:
```shell
git clone transformer_mst
pip install ./transformer_mst  # install the cloned package locally
```
b) Wrapper Mode:
One-line wrapper for MST:
```python
from mini_s import mst  # module name assumed; hyphens are invalid in Python imports
model = mst(model)
```
# Addressing Reviewer Questions
# Q1 Impact on Model Quality:
Taking this evaluation suggestion: our experiments show that Llama3 with MST and a 30K context achieves 23% to 270% better perplexity than the baseline.
# Q2 Sensitivity to Hyperparameters:
The development of chunk-based MST allowed us to optimize hyperparameter selection. Setting chunk_size $C$ = hidden size $d$ provides the best balance between keeping the MLP layer compute-bound and avoiding additional memory movement. Setting $M = V/d$ provides the best memory efficiency for the LM head.
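These selection rules can be expressed as a small helper (a hypothetical function, not from the paper; the example sizes in the test are illustrative Llama-like values):

```python
def mst_hyperparameters(seq_len, hidden_size, vocab_size):
    """Chunk settings suggested in the rebuttal.

    MLP block: chunk size C = d, so the number of mini-sequences is
    M_mlp = S / d. LM head: M_lm_head = V / d.
    """
    m_mlp = max(1, seq_len // hidden_size)
    m_lm_head = max(1, vocab_size // hidden_size)
    return m_mlp, m_lm_head
```

The intuition is that chunks narrower than the hidden size would make the matrix multiplications memory-bound, while larger chunks give away memory savings for no I/O benefit.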
# Q3 Applicability to Other Memory Optimization Techniques:
Our comparison with lossy methods shows that MST combines effectively with quantization techniques. MST + 8-bit achieves a 22x improvement in maximum sequence length over standard 8-bit training, while MST + 4-bit pushes this boundary even further to 140K tokens.
Also, MST + activation checkpointing outperforms gradient checkpointing alone (also known as activation recomputation) by 1.5x to 11x in enabled sequence length, as shown in the paper and in the Qwen, Mistral, and Gemma-2 experiments. As the reviewer notes, "The effectiveness of MST is closely tied to its combination with activation recomputation," so the applicability with gradient checkpointing is clear.
# Q4 Long-Term Impact on LLM Research:
We've demonstrated that MST works seamlessly with context parallelism (also known as sequence parallelism). Our experiments achieved a remarkable 480k sequence length by combining context parallelism with MST, showcasing the potential for training with extremely long sequences. Llama 3.1 uses context parallelism to support 128k sequences on 16 A100 GPUs; our results show that MST integrates seamlessly with context parallelism, enabling significantly longer sequences than Llama 3.1 without degrading throughput or performance.
---
Rebuttal 2:
Title: Reply to limitation
Comment: This official comment focuses on the reply to limitations.
# Limitations 1: Performance Degradation for Short Sequences
We acknowledge this limitation, and our further optimization of chunk-based MST effectively solves this problem with negligible slowdown by carefully tuning the hyperparameters $M$ and chunk size $C$.
# Limitations 2: Dependence on Activation Recomputation
We also recognize that the dependence on activation recomputation is a critical limitation. Our key point is that the additional computational overhead can be compensated by the memory saving, which enables large-batch training to improve performance, as illustrated in Section 4.2, Faster Long Sequence Training with MINI-SEQUENCE TRANSFORMER (MST). We have also verified through full training runs that this combination does not hurt overall training performance:
Table 2: MST training with two epochs on LongAlpaca-12k
| Implementation | Context length | LongAlpaca-12k (ppl) | loss | Training Time (hours) |
|:-|-:|-:|-:|-:|
| Act. recomputation | 8k | 9.34 | 2.23 | 25.6 |
| MST | 8k | 7.41 | 2.003 | 26.5 |
| MST | 16k | 3.53 | 1.26 | 62.5 |
| MST | 30k | 3.45 | 1.23 | 233 |
# Limitations 3 Distributed setting:
With our optimization of MST, we successfully enabled training Llama3 with a 480k context on 8 A100s, a significantly longer sequence length than the 128k that Llama 3.1 achieves on 16 A100s. The limited evaluation of distributed settings is partly due to our limited budget to scale beyond 8 GPUs. We will therefore open-source our code and welcome the LLM community to try our work on large-scale distributed systems.
Maximum sequence length (K) in the distributed setting:
| Model Implementation | 2 GPUs | 4 GPUs | 8 GPUs |
|:-|-:|-:|--:|
| Llama3-8b-hf MST | 120 | 240 | 480 |
| Llama2-7B-hf MST | 160 | 320 | 640 |
# Limitations 4: Further Optimization
The chunk-based MST implementation provides a way to answer the limitations of Performance Degradation for Short Sequences and Potential for Further Optimization.
Again, we thank the reviewer for the valuable feedback that helped us improve our work. We really learned a lot from the suggestions of "Potential for Further Optimization" and "Sensitivity to Hyperparameters."
---
Rebuttal 3:
Title: Thank you for the response.
Comment: Thank the authors for their detailed feedback. I have raised the rating from 5 to 6. | Summary: The paper targets an LLM-specific but significant challenge: training LLMs for long-context understanding. The authors propose a Mini-Sequence Transformer method that enables LLM training with extremely long sequences. The method is motivated by mini-batching: it forwards and backpropagates a chunk of data instead of the whole sequence. Combined with gradient accumulation, the procedure can be adopted for long-sequence training. The experiments compare the proposed method with a vanilla implementation and activation recomputation, but the training throughput improvement is limited. Moreover, the convergence experiment is not convincing and might need additional evaluation.
Strengths: - Training with extremely long sequences is important but challenging for understanding long contexts. This paper proposes a possible solution to enable full parameter training beyond GPU memory constraints.
- The method shows its potential for training with up to 60K sequence length for Llama3-8B, which is helpful for long-context tasks and possibly lowers the barrier to training LLMs.
- The paper is well-written and easy to follow.
Weaknesses: Although the idea of this method is promising, the experiments can not adequately address my concerns about its effectiveness compared to the existing method.
Fig. 3 shows the training loss curves of MsT and the baseline. However, the model's training loss over only around 1000 steps cannot effectively demonstrate the method's performance on long sequences.
Technical Quality: 3
Clarity: 3
Questions for Authors: For the training performance (Sec 4.2, Table 6), could the authors explain why MsT does not significantly improve TFLOPS?
- If the long-sequence input makes the training memory-bounded, then the reduction of each intermediate feature should be able to speed up the I/O.
- Since activation recomputation requires additional computation, it is surprising that it can still achieve better TFLOPS than MsT when using the same batch size.
For the convergence experiments (Sec 4.3), are they trained from the Llama3-8B pre-trained model?
- If this is the case, the training loss only reveals the correctness of the prediction's cross-entropy. It can not effectively show the long-sequence performance, which I believe is the motivation for training LLMs using extremely long sequences.
- is there another way to prove its effectiveness, for example, evaluation on the long-context benchmark?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The experiment does not well support the motivation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. We have conducted new experiments to address concerns and demonstrate the effectiveness of Mini-Sequence Transformer (MST).
# 1. Addressing Weakness
> Weakness: However, the model's training loss can not effectively demonstrate the method's performance on long sequences with around 1000 steps.
To address this concern, we have conducted new experiments on the LongAlpaca dataset:
- Trained Llama3-8B model for two epochs (around 10000 steps)
- Evaluated the trained model using perplexity
- Split the dataset into 90% training and 10% evaluation
- Carefully selected the hyperparameter M (the number of mini-sequences) to avoid throughput degradation (M=2 for 8k, M=4 for 16k, M=8 for 30k)
Table 1: MST training with one epoch on LongAlpaca-16k
| Implementation | Context length | LongAlpaca-16k (ppl) | loss | Training Time (hours) |
|:-|-:|-:|-:|-:|
| Act. recomputation | 8k | 290.0 | 5.67 | 6.7 |
| MST | 8k | 301.9 | 5.71 | 7.0 |
| MST | 16k | 240.3 | 5.48 | 21.2 |
| MST | 30k | 222.4 | 5.40 | 61.2 |
Table 2: MST training with two epochs on LongAlpaca-12k
| Implementation | Context length | LongAlpaca-12k (ppl) | loss | Training Time (hours) |
|:-|-:|-:|-:|-:|
| Act. recomputation | 8k | 9.34 | 2.23 | 25.6 |
| MST | 8k | 7.41 | 2.003 | 26.5 |
| MST | 16k | 3.53 | 1.26 | 62.5 |
| MST | 30k | 3.45 | 1.23 | 233 |
These results show that MST enables training with much longer contexts (up to 30k) and improves perplexity compared to the 8k baseline (23% for LongAlpaca-16k and 270% for LongAlpaca-12k).
We also highly value the reviewer's suggestion:
> Question 2.2: Is there another way to prove its effectiveness, for example, evaluation on the long-context benchmark?
Therefore, we evaluate perplexity on long-context datasets to demonstrate that MST is an effective way to enable long-context training.
# 2. Addressing Questions
> Question 1: For the training performance (Sec 4.2 Table 6), would that be possible for the author to explain the reason MsT can not significantly improve the TFLOPS?
> Question 1.1: If the long-sequence input makes the training memory-bounded, then the reduction of each intermediate feature should be able to speed up the I/O.
We clarify that long sequences make training computation-bound, not memory-bound. Moreover, MST doesn't reduce intermediate features but reuses memory space: all $M$ mini-sequence intermediates are still generated, albeit in the same memory space. MST therefore trades a slight increase in I/O complexity for reduced memory overhead, which is worthwhile for the MLP and LM-head because they are computation-bound. We provide a complexity analysis to support our claims:
- Computation complexity: $O(S * d * I)$ for both the standard MLP and MST. Let $S$ be the sequence length, $d$ the hidden dimension, $I$ the intermediate size, and $V$ the vocabulary size. The standard MLP requires $O(S * d * I)$ computation. For MST, each mini-sequence requires $O(S/M * d * I)$ computation, repeated $M$ times, giving $O(S/M * d * I * M) = O(S * d * I)$ FLOPs in total. The computation complexity is unchanged for any MST setting.
- I/O complexity: $O(Sd + SI + dI * M)$ for MST, slightly increased. The standard MLP requires $O(Sd + SI + dI)$ HBM accesses: $O(Sd)$ for moving the input into the GPU and the output back to HBM, $O(SI)$ for the intermediates, and $O(dI)$ for the weights. Each mini-sequence requires $O(Sd/M + SI/M + dI)$ accesses, and MST repeats this $M$ times, loading the weights each time, giving $O((Sd/M + SI/M + dI) * M) = O(Sd + SI + dI * M)$ HBM accesses. The I/O complexity therefore increases slightly for MST.
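The two claims above can be sketched as a toy cost counter. This is a hedged, hypothetical model of the big-O terms (function and argument names are ours, not from the paper's implementation):

```python
# Toy cost model for the complexity claims above (hypothetical sketch).
# Counts element-level FLOPs and HBM accesses for one MLP pass over a
# sequence of length S, hidden size d, intermediate size I, split into
# M mini-sequences.

def mlp_costs(S, d, I, M=1):
    assert S % M == 0, "sequence length must be divisible by M"
    # each mini-sequence costs (S/M)*d*I compute, repeated M times
    flops = (S // M) * d * I * M
    # input/output O(Sd) + intermediates O(SI) + weights reloaded M times O(dI*M)
    hbm = S * d + S * I + d * I * M
    return flops, hbm
```

Under this model the compute term is identical for any $M$, while the HBM term grows only in the weight-reload component $dI(M-1)$.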
> Question 1.2: Since the activation recomputation requires additional computation, it can still achieve better TFLOPS than MsT when using the same batch size.
The key point is that MST adopted activation recomputation during the experiments for better memory efficiency.
We have extended training performance (Sec 4.2 Table 6) to clarify this. Here are the key results:
| Model Implementation | Batch Size | Training Time Per Step (s) | TFLOPS |
|:-|:-:|-:|-:|
| Llama3-8B-hf vanilla | 1 | OOM | OOM |
| Llama3-8B-hf Act. recomputation | 2 | 5.01 | 3271.42 |
| Llama3-8B-hf MST | 2 | 5.13 | 3194.90 |
| Llama3-8B-hf MST | 8 | 19.35 | 3386.13 |
| Llama2-7B-hf vanilla | 1 | 1.24 | 3290.88 |
| Llama2-7B-hf Act. recomputation | 1 | 1.52 | 2684.67 |
| **Llama2-7B-hf MST without Act. recomputation** | 1 | **1.31** | **3115.03** |
| Llama2-7B-hf Act. recomputation | 8 | 8.85 | 3703.48 |
| Llama2-7B-hf MST | 8 | 9.33 | 3511.39 |
| Llama2-7B-hf MST | 16 | 17.92 | 3656.17 |
Key results:
- MST alone achieves a 16% speedup over activation recomputation for Llama2-7B-hf (b=1)
- MST alone is only 4% slower than vanilla PyTorch for Llama2-7B-hf (b=1)
- MST + activation recomputation introduces only a 2.4% slowdown compared to activation recomputation for Llama3-8B-hf (b=2)
- Carefully selecting the MST hyperparameter $M$ can further improve performance (a 10% improvement for Llama3-8B-hf MST b=2, which was 2825.12 TFLOPS in the original paper)
> Question 2: For the convergence experiments (Sec 4.3), are they trained from the Llama3-8B pre-trained model?
No, we train the Llama3-8B from scratch for the original convergence experiments.
In contrast, the new convergence experiments on the LongAlpaca dataset are trained from the Llama3-8B pre-trained model, known as fine-tuning Llama3-8B.
> Question 2-1: the motivation for training LLMs using extremely long sequences is better performance.
We agree, and this motivated all the new experiments.
In conclusion, our new experiments show that MST:
1. Has minimal impact on throughput (<5% loss for per-step training of Llama2-7B and <5% loss for end-to-end training of Llama3-8B)
2. Enables training with significantly longer contexts (up to 30k)
3. Demonstrates effectiveness on long-context tasks (23%-270% perplexity improvement)
---
Rebuttal 2:
Title: Details rebuttal for Q1 and Q2
Comment: # Detailed rebuttal on Q1: Training Performance
We appreciate the reviewer's question about why MST doesn't significantly improve TFLOPS. To address this, we offer the following explanation:
1. Trade-off between Memory and Computation:
- MST reduces memory footprint by processing smaller mini-sequences.
- Like gradient accumulation, it trades computation time for memory savings.
2. Computational Complexity:
- MLP and LM-HEAD operators in long sequence training are compute-bound.
- For sequence length $S$, hidden dimension $d$, and intermediate size $I$:
• Standard MLP: $O(S * d * I)$ computation
• MST: $O(S / M * d * I) * M = O(S * d * I)$ computation
- Computational complexity remains unchanged for MST.
3. I/O Complexity:
- Standard MLP: $O(Sd + SI + dI)$ HBM access
- MST: $O(Sd + SI + dI * M)$ HBM access
- I/O complexity increases slightly for MST due to repeated weight loading.
4. Memory Savings vs. I/O:
- MST reuses memory space for mini-sequences, reducing HBM memory cost by factor M.
- All intermediate features are still generated, maintaining I/O complexity for intermediates.
- Increased I/O for weight loading is offset by the compute-bound nature of long-sequence training.
5. Performance in Different Scenarios:
- Long-sequence training: Compute-bound, so increased I/O has minimal impact on speed.
- Short-sequence training: I/O-bound, potentially slowing down TFLOPS when $M$ is large.
6. Optimization:
- To address potential slowdowns in short-sequence scenarios, we introduced chunk-based MST. The key is to select a small $M$ for a small sequence.
- This new implementation optimizes the memory-throughput trade-off.
In conclusion, MST's primary benefit is significant memory reduction without substantial TFLOPS loss in long-sequence scenarios. The compute-bound nature of long-sequence training generally outweighs the slight increase in I/O complexity. For short sequences, our new chunk-based MST implementation helps maintain performance.
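As a rough illustration of points 2-5 above, one can compare the arithmetic intensity (FLOPs per element of HBM traffic) of the MLP under the I/O model stated earlier. This is a hedged sketch under that model, not a measured roofline; the function name and constants are ours:

```python
# Hypothetical arithmetic-intensity sketch: FLOPs per element moved for
# an S x d input through a d -> I -> d style MLP split into M
# mini-sequences, using the O(Sd + SI + dI*M) I/O model from above.

def arithmetic_intensity(S, d, I, M=1):
    flops = 2 * S * d * I                  # multiply-add counted as 2 FLOPs
    elements_moved = S * d + S * I + d * I * M
    return flops / elements_moved
```

Intensity grows with $S$ (long sequences drift toward the compute-bound regime, so MST's extra weight reloads matter little), while a larger $M$ adds weight-reload traffic and lowers intensity slightly, which is why short sequences with large $M$ can become I/O-bound.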
# Training Setting
We train Llama3-8B with MST on the LongAlpaca datasets. The LongAlpaca dataset has two variants: LongAlpaca-12k contains 12k samples with a maximum length of 191k characters, and LongAlpaca-16k contains 6.28k samples with a maximum length of 73.9k characters. Training lasts one epoch for LongAlpaca-16k and two epochs for LongAlpaca-12k; each epoch requires about 5000 steps. For all implementations, we use the AdamW optimizer with a weight decay of 0.001, gradient clipping of 1.0, and a constant learning rate of 1e-4. All batch sizes equal 1, with a gradient accumulation step of 32, and bf16 precision is used.
---
Rebuttal 3:
Comment: Thank you for your detailed response!
- Thank you for the extra experiments demonstrating MST's effectiveness on long-context training and the corresponding evaluation on long-context tasks. According to the benchmarking results, MST helps mitigate the GPU capacity limitation for long-context understanding. From my perspective, there should be no barrier between MST and current training techniques (e.g., tensor parallelism, parameter parallelism).
- Thank you for providing a detailed explanation of the model's FLOPS. Whether computation or memory is the bottleneck relates not only to the model's complexity but also to the GPU's capacity (see the `roofline model`). I believe this part helps readers comprehensively understand the advances brought by MST.
---
Based on the paper's results and rebuttal, I believe this paper provides a promising solution for a critical problem and will boost the community. I will adjust my rating based on the rebuttal and discussion with ACs and other reviewers.
---
Rebuttal Comment 3.1:
Comment: Thank you for your kind words and for recognizing the significance of the additional experiments on long-context training. We agree that the relationship between computational bounds, memory, and GPU capacity is crucial, and we are glad that our explanation of the model's FLOPS in the context of the roofline model was helpful.
We will consider adding a blurb about the roofline model example to help readers understand MST's advances.
Thank you once again for your thoughtful feedback. | Summary: This paper introduced minibatching along the sequence length for inputs to the MLP and LM-head parts of a Transformer-based model. This method does not change the functionality of the transformer but improves the memory requirement when inputs are of long sequence length.
Overall, while the language and writing of this paper leave something to be desired, the idea is simple and its impact is well quantified. Therefore, I vote to accept this paper.
Strengths: 1. The method is incredibly simple. Splitting along the sequence length for the MLP and LM-Head parts of the transformer block is a simple, easy-to-implement idea that can be used very broadly.
2. The ability to handle longer context sequences is very well shown through the experiments. Showing the ablation across different batch sizes is also interesting and a worthwhile addition.
3. The background in this paper is extremely thorough and well-done. It provides intuition to even a novice in ML system knowledge.
4. The ability of this method to synergize with existing methods such as DeepSpeed and gradient accumulation is quite nice.
Weaknesses: 1. While the author mentions that lossy methods such as LongLoRa differ from the lossless method presented in this paper, I still believe that providing some further detail on the amount of context length improvement from lossy methods would help contextualize the results in this paper. Indeed, plotting the context length improvement vs. performance degradation would be useful.
2. The notation in this paper is chosen extremely poorly. $I$ denotes a constant but $I_i$ denotes a vector for example.
3. What is the difference between $\mathbf{O}'$ and $\mathbf{O}$?
4. G is never defined.
5. What is $I_{up},I_{gate},W_{up},W_{gate}$?
6. I do not understand Algorithm 1 at all. When computing an MLP, why are you multiplying two different weight matrices against the input? Why are you doing an element-wise matrix multiplication or convolution operation in Line 3? You define $I'$ but never use it; you define $O'$ but never use it. In general, this is written very poorly.
7. There are many typos and informal language throughout this paper. The author should take time to go through this paper and fix them. Some of the language reads as a stream of thought and more structured writing would be helpful.
Technical Quality: 4
Clarity: 1
Questions for Authors: 1. For different architectures outside of LLaMa, do you see similar context length improvements? For example, if this was repeated with Qwen or Mistral, does the context length increase stay the same or does it differ more?
2. What are the empirical values of the peak intermediate sizes for Attention, LM-head, and MLP? Despite the schema provided in Table 1, it would be nice to have empirical measurements of the memory requirements for each part of the architecture.
3. I believe adding a blurb about activation recomputation would be helpful for the reader who is unaware of this method.
4. Could you measure the slowdown on smaller context lengths? This is mentioned in the paper but I do not believe you point to any empirical result.
Confidence: 4
Soundness: 4
Presentation: 1
Contribution: 3
Limitations: No negative social impact and limitations are mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and valuable feedback. We appreciate your support and address your concerns below.
# 1. Addressing Weaknesses
# 1.1 Comparison with lossy methods
We've conducted additional experiments comparing our Mini-Sequence Transformer (MST) approach with quantization techniques:
| Llama3 Implementation | Maximum Sequence Length (K) |
|:-|-:|
| 8-bit | 5 |
| 8-bit + activation checkpointing | 26 |
| 4-bit | 10 |
| 4-bit + activation checkpointing | 28 |
| MST | 60 |
| **MST + 8-bit** | **110** |
| **MST + 4-bit** | **140** |
This table shows the maximum sequence lengths achievable on a single A100 GPU. MST enables significantly longer sequences (60K) than standard and lossy approaches and extends to 140K when combined with 4-bit quantization.
# 1.2 MLP Algorithm Presentation
We've revised the Mini-Sequence MLP algorithm presentation for clarity, abstracting away some details of how the MLP works:
Algorithm: Mini-Sequence MLP
Input: Matrix $X \in \mathbb{R}^{N \times d}$, MLP block
1. Partition matrix X into $M$ blocks $X_1, ..., X_M$ of size $N_m \times d$, where $N_m = N/M$
2. For $i = 1$ to $M$:
Compute $O_i' = \text{mlp}(X_i)$, where $O_i' \in \mathbb{R}^{N_m \times d}$
3. Concatenate $O = [O_1', ..., O_M'] \in \mathbb{R}^{N \times d}$
4. Return $O$
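A minimal pure-Python sketch of the algorithm above shows that partitioning along the sequence axis leaves the output unchanged. The `mlp` here is a toy row-wise stand-in (our assumption for illustration), not the actual MLP block:

```python
# Hypothetical sketch of the Mini-Sequence MLP algorithm: partition the
# N x d input into M blocks along the sequence axis, run the same MLP
# on each block, and concatenate. `mlp` is a toy position-wise stand-in.

def mlp(rows):
    # toy MLP acting independently on each sequence position
    return [[2 * x + 1 for x in row] for row in rows]

def mini_sequence_mlp(X, M):
    N = len(X)
    assert N % M == 0, "sequence length must be divisible by M"
    Nm = N // M
    blocks = [X[i * Nm:(i + 1) * Nm] for i in range(M)]   # step 1: partition
    outputs = [mlp(Xi) for Xi in blocks]                  # step 2: per-block MLP
    return [row for Oi in outputs for row in Oi]          # step 3: concatenate
```

Because the MLP acts position-wise, `mini_sequence_mlp(X, M)` equals `mlp(X)` for any divisor `M` of the sequence length; only the peak intermediate size changes.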
# 1.3 Notation Correction
We apologize for the confusion in our notation. Here's a clarified glossary of terms:
- $O'$: Mini-sequence output
- $O$: Final output after concatenating the mini-sequence outputs
- $I_{up}, I_{gate}$: Intermediate outputs of $gate\_{proj}$ and $up\_{proj}$
- $W_{up}, W_{gate}$: Weights of $gate\_{proj}$ and $up\_{proj}$
- $G$: Number of groups used in grouped query attention (GQA)
# 2. Answering Questions
# 2.1 Performance across different architectures
We've conducted additional experiments with Mistral, Qwen2, and Google's Gemma-2:
| Implementation | Maximum Sequence Length (K) |
|:-|-:|
| Mistral-7B-v0.1 vanilla | 5 |
| Mistral-7B-v0.1 activation recomputation | 42 |
| **Mistral-7B-v0.1 MST** | **70** |
| Qwen2-7B vanilla | 4 |
| Qwen2-7B activation recomputation | 13 |
| **Qwen2-7B MST** | **74** |
| gemma-2-9b vanilla | 1.5 |
| gemma-2-9b activation recomputation | 5 |
| **gemma-2-9b MST** | **36** |
MST provides substantial context-length increases across all tested models:
- 12x for Mistral-7B
- 18x for Qwen2-7B
- 24x for Gemma-2-9b
Interestingly, MST performs best with Gemma-2, which uses the largest vocabulary size (256k) compared to Mistral-7B (32k) and Qwen2 (152k). MST can effectively optimize Gemma-2's large peak memory, and a larger $M=64$ also helps.
# 2.2 Empirical values of Peak Intermediate sizes
We discussed Peak Intermediate memory usage in Appendix D. Using LLAMA3-8b with 4k training as an example:
- Vanilla: 75GB
- Activation recomputation: 52GB
- MST: 47GB
For vanilla and activation recomputation, peak memory occurs in the LM-head. For MST, peak memory is shared between the LM-head and MLP.
# 2.3 Activation Recomputation Overview
As requested, we've added a brief explanation of activation recomputation:
Activation recomputation, also known as gradient checkpointing, is a memory-saving technique for training large neural networks. This method trades computation for memory by discarding intermediate activations during the forward pass and recomputing them as needed during the backward pass. In standard training, all activations must be stored to compute gradients, which can lead to significant memory usage for large models or long sequences. Activation recomputation alleviates this by only saving activations at certain checkpoints. The forward pass is partially recomputed during backpropagation to obtain the necessary intermediate values.
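The trade-off can be illustrated with a toy, stdlib-only sketch (a hypothetical illustration of the idea, not how frameworks such as PyTorch implement gradient checkpointing on autograd graphs): store activations only at checkpoints, and replay forward segments when a discarded intermediate is needed.

```python
# Toy sketch of activation recomputation: keep only every `every`-th
# activation during the forward pass, and recompute a discarded one by
# replaying forward from the nearest saved checkpoint.

def forward_with_checkpoints(x, layers, every=2):
    saved = {0: x}
    h = x
    for i, layer in enumerate(layers, start=1):
        h = layer(h)
        if i % every == 0:
            saved[i] = h          # checkpointed activation
    return h, saved

def recompute_activation(saved, layers, idx):
    start = max(k for k in saved if k <= idx)   # nearest earlier checkpoint
    h = saved[start]
    for layer in layers[start:idx]:             # replay the discarded segment
        h = layer(h)
    return h
```

Memory drops from one stored activation per layer to one per checkpoint, at the cost of re-running the forward segments during backpropagation.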
# 2.4 Performance on Smaller Context Lengths
We measured the slowdown on smaller context lengths in Appendix F of table: MLP execution times (in seconds) for various sequence lengths and mini-sequence settings:
| MLP Sequence | 1024 | 2048 | 4096 | 8192 | 20000 | 40000 | 80000 |
|:-|-:|-:|-:|-:|-:|-:|--:|
| standard | 0.05 | 0.08 | 0.16 | 0.31 | 0.74 | 1.52 | 2.96 |
| M=2 | 0.05 | 0.10 | 0.17 | 0.32 | 0.76 | 1.49 | 3.05 |
| M=4 | 0.07 | 0.11 | 0.19 | 0.33 | 0.79 | 1.52 | 2.99 |
| M=8 | **0.12** | 0.15 | 0.22 | 0.38 | 0.81 | 1.58 | 3.05 |
As observed, increasing $M$ leads to longer execution times, particularly for shorter sequences (e.g., $M=8$, SEQ=1024 introduces a 2.4x slowdown). However, the impact is minimal for longer sequences (e.g., 80000), as the I/O overhead becomes negligible compared to the computation overhead.
# 3. New Implementation: Chunk-based MST
We developed a new chunk-based MST implementation to address the concerns about slowdown for shorter sequences. This approach splits the sequence into fixed-size chunks (4096 for LLAMA3).
The core idea is to split the input sequence $S$ into sub-chunks of fixed size $C$, where the number of chunks equals $M = S/C$. This provides an effective solution to the challenges of training with small sequences: there are no splits (or only tiny splits) for short sequences, thus introducing minimal slowdown.
We found that the optimal selection of $C$ and $M$ for best performance is $C = d, M = S / d$, where $d$ is the hidden size. The MLP block uses chunk-based MST to balance speed and memory. The LM-Head uses the original MST for memory saving, and the optimal setting for $M$ of the LM-head is determined by $M = V/d$, 32 for llama3, and 64 for Gemma-2.
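These selection rules can be written as a small helper. This is our reading of the heuristics above (function names and the example vocabulary size are illustrative, not from the paper):

```python
# Hypothetical sketch of the chunk-based hyperparameter rules: the MLP
# uses chunk size C = d, so M = S / d (at least 1, so short sequences
# are not split); the LM-head uses M ~= V / d.

def mlp_num_chunks(S, d):
    return max(1, S // d)        # C = d  ->  M = S / d

def lm_head_num_mini_sequences(V, d):
    return max(1, V // d)        # M ~= V / d
```

For a short sequence ($S < d$) the MLP helper returns $M = 1$, i.e., no split and hence no slowdown, which is the point of the chunk-based variant.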
We conducted an additional training experiment to show the training performance of chunk-based MST:
Table 2: MST training with two epochs on LongAlpaca-12k
| Implementation | Context length | LongAlpaca-12k (ppl) | loss | Training Time (hours) |
|:-|-:|-:|-:|-:|
| Act. recomputation | 8k | 9.34 | 2.23 | 25.6 |
| chunk-based MST | 8k | 7.41 | 2.003 | 26.5 |
| chunk-based MST | 16k | 3.53 | 1.26 | 62.5 |
| chunk-based MST | 30k | 3.45 | 1.23 | 233 |
---
Rebuttal 2:
Title: Details Presentation for Mini-sequence MLP
Comment: # Details Presentation for Mini-sequence MLP
For clear presentation, we take MistralMLP MLP as an example to present Mini-Sequence MLP notation:
(mlp): MistralMLP
(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLU()
)
# Notation correction
$O′$ is the mini-sequence output.
$O$ is the final output after concatenation.
$I_{up}$ and $I_{gate}$ are the intermediate outputs of $gate_{proj}$ and $up_{proj}$.
$W_{up}$ and $W_{gate}$ are the MLP weights of $gate_{proj}$ and $up_{proj}$.
$W_{down}$ is the MLP weight of $down_{proj}$.
$G$ is the number of groups used in grouped query attention (GQA); we have added this definition to the paper.
We appreciate the reviewers' suggestions for improving our paper.
# Optimization Details: Chunk-based Mini-Sequence Transformers
We've developed chunk-based MST to enhance efficiency in memory management and reduce computational overhead. This method:
- Partitions input sequences into fixed-size sub-chunks
- Integrates seamlessly with existing transformer architectures
- Requires minimal code modifications
Key concept: The input sequence $S$ is split into sub-chunks of size $C$, where chunk size $C = S/M$ ($M$ being the number of mini-sequences). This approach effectively addresses the challenges of training both long-sequence transformers and small sequences, with no slowdown for the latter.
Hyperparameter selection: We've found that setting chunk_size $C = d$ (d being hidden size) yields optimal performance.
Analysis: MST maintains the same computation complexity, while its I/O complexity increases to $O(Sd + SI + dI * M)$, where $S$ is the sequence length, $d$ the hidden size, $I$ the intermediate size, and $M$ the number of mini-sequences. As long as $dI * M <= SI$, the I/O complexity remains $O(SI)$. In conclusion, $C = S / M >= d$ is the best selection for the MLP block. |
Rebuttal: We sincerely thank all reviewers for their thorough and insightful feedback. We appreciate the recognition of our work's potential impact on long-context LLM training. In response to the valuable comments received, we have conducted additional experiments and provided further implementation optimization for Mini-Sequence Transformer (MST):
# Implementation and Hyperparameter Optimization:
We introduced chunk-based MST, which addresses concerns about performance degradation for small sequences and simplifies hyperparameter tuning. This approach splits the sequence into fixed-size chunks $C$ (4096 for LLAMA3). It can be considered a carefully configured MST in which the number of mini-sequences $M$ changes dynamically with sequence length, using a small $M$ for short sequences to maintain performance and a large $M$ for long sequences to save memory. We found that the optimal selection of $C$ and $M$ for best performance is $C = d, M = S / d$, where $d$ is the hidden size and $S$ is the sequence length. We use chunk-based MST for the MLP block to balance speed and memory, while we use the original MST for the LM-head with $M≈V/d=32$ for memory saving. Here $V$ is the vocabulary size.
# Training Performance and TFLOPS:
We've introduced new experiments demonstrating that the Mini-Sequence Transformer (MST) achieves comparable or better TFLOPS than activation recomputation for various models, including Llama2-7B and Llama3-8B. The chunk-based MST implementation is deployed here and shows minimal throughput reduction (<5%) compared to vanilla implementations.
| Model Implementation | Batch Size | Training Time Per Step (s) | TFLOPS |
|:-|:-:|-:|-:|
| Llama3-8B-hf vanilla | 1 | OOM | OOM |
| Llama3-8B-hf activation recomputation | 2 | 5.01 | 3271.42 |
| **Llama3-8B-hf MST** | 2 | 5.13 | **3194.90** |
| **Llama3-8B-hf MST** | 8 | 19.35 | **3386.13** |
| Llama2-7B-hf vanilla | 1 | 1.24 | 3290.88 |
| Llama2-7B-hf activation recomputation | 1 | 1.52 | 2684.67 |
| **Llama2-7B-hf MST without activation recomputation** | 1 | 1.31 | **3115.03** |
| Llama2-7B-hf activation recomputation | 8 | 8.85 | 3703.48 |
| **Llama2-7B-hf MST** | 8 | 9.33 | **3511.39** |
| **Llama2-7B-hf MST** | 16 | 17.92 | **3656.17** |
# Convergence and Long-Context Performance:
New experiments on the LongAlpaca dataset demonstrate MST's effectiveness in long-context understanding. Training Llama3-8B with 30K context length achieved a 270% improvement in perplexity compared to the 8K baseline, showcasing MST's ability to leverage longer contexts effectively.
Table 2: MST training with two epochs on LongAlpaca-12k
| Implementation | Context length | LongAlpaca-12k (ppl) | loss | Training Time (hours) |
|:-|-:|-:|-:|-:|
| Act. recomputation | 8k | 9.34 | 2.23 | 25.6 |
| MST | 8k | 7.41 | 2.003 | 26.5 |
| MST | 16k | 3.53 | 1.26 | 62.5 |
| MST | 30k | 3.45 | 1.23 | 233 |
# Applicability to Different Architectures:
We've extended our evaluation to include Mistral-7B, Qwen2-7B, and Google's Gemma-2-9B, demonstrating significant increases in maximum sequence length (12x-24x) across these architectures.
| Implementation | Maximum Sequence Length (K) |
|:-|-:|
| Mistral-7B-v0.1 vanilla | 5 |
| Mistral-7B-v0.1 activation recomputation | 42 |
| **Mistral-7B-v0.1 MST** | **70** |
| Qwen2-7B-Instruct vanilla | 4 |
| Qwen2-7B-Instruct activation recomputation | 13 |
| **Qwen2-7B-Instruct MST** | **74** |
| gemma-2-9b vanilla | 1.5 |
| gemma-2-9b activation recomputation | 5 |
| **gemma-2-9b MST** | **36** |
# Comparison and Combination with Lossy Methods:
We've comprehensively compared MST with quantization methods, as well as combinations of MST and quantization. This comparison demonstrates MST's superiority in enabling longer sequences for Llama3 training on a single A100 GPU. MST alone (60K tokens) outperforms these lossy approaches (4-bit: 28K). When combined with quantization techniques, MST achieves even more impressive results: MST + 8-bit reaches 110K tokens (a 22x improvement over standard 8-bit), while MST + 4-bit pushes the boundary to 140K tokens.
| Llama3 Implementation | Maximum Sequence Length (K) |
|:-|-:|
| 8-bit | 5 |
| 8-bit + activation checkpointing | 26 |
| 4-bit | 10 |
| 4-bit + activation checkpointing | 28 |
| MST | 60 |
| **MST + 8-bit** | **110** |
| **MST + 4-bit** | **140** |
# Integration and Compatibility:
We've clarified MST's integration process, offering a customized Hugging Face Transformer implementation and a simple wrapper mode for easy adoption. MST is compatible with other optimization techniques, such as activation recomputation and sequence parallelism (also known as context parallelism).
# Broader Impact and Future Directions:
We highlight MST's potential for enabling extremely long context training combined with lossless techniques like context parallelism, opening new avenues for LLM research and applications.
As we know, Llama 3.1 incorporates context parallelism (also known as sequence parallelism) to train with sequence lengths of 128k on 16 A100 GPUs. Our research demonstrates that MST integrates seamlessly with context parallelism, enabling significantly longer sequence processing. We believe this combination has the potential to handle extremely large sequence lengths. With our updated implementation, our experiments successfully achieved a sequence length of 480k tokens by combining context parallelism and MST on 8 A100 GPUs. This represents a substantial advancement in Large Language Model (LLM) training capabilities.
The maximum sequence length of Llama3-8B runs on the distributed setting.
| Model Implementation | 2 GPUs | 4 GPUs | 8 GPUs |
|:-|-:|-:|--:|
| Llama3-8b-hf MST | 120 | 240 | **480** |
| Llama2-7B-hf MST | 160 | 320 | 640 | | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Revisiting Adversarial Patches for Designing Camera-Agnostic Attacks against Person Detection | Accept (poster) | Summary: The authors note that existing physical adversarial attack methods overlook the transitioning from the physical domain to the digital domain, which involves the camera. Therefore, they propose a camera-agnostic attack to enhance the stability of adversarial patches across different cameras. The proposed adversarial optimization framework employs a differentiable camera ISP proxy network as a defender, engaging in a zero-sum game with the attacker to enhance the attack performance of adversarial patches. Experimental validation across multiple imaging devices demonstrates the effectiveness of the proposed approach in this paper.
Strengths: 1. The authors identify a flaw in the existing pipeline of physical adversarial attack methods, specifically the oversight of transitioning from the physical domain to the digital domain, which they assert is mediated by the camera. They subsequently evaluate the camera's influence on the performance of physical adversarial attacks. Their research approach is logical and well-founded.
2. The adversarial optimization framework proposed in this paper, utilizing a camera ISP proxy network as a defender, is concise and intuitive.
3. The multi-box detection issue introduced in this paper is a significant problem worthy of attention.
Weaknesses: 1. Explanation of hyperparameters and their value ranges for the camera ISP proxy network are needed.
2. Essentially, this paper aims to address the issue of camera black-box attacks. However, the authors overlook another black-box scenario, namely the model. The transferability of attack methods across different models is also an important research direction. This paper lacks discussion on the model black-box aspect.
3. Placing Figure A from the supplementary materials into the main text can help readers better understand the motivation behind this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. While introducing a camera ISP module is reasonable, the authors chose 6 hyperparameters for the camera ISP proxy network and listed their corresponding ranges. What criteria guided the selection of these hyperparameters and their ranges?
2. Could the authors provide more disscussion on the threat model and the black-box model scenario?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss the limitations of this paper, specifically regarding the simulation of the camera module.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Explanation of hyperparameters and their value ranges for the camera ISP proxy network are needed.**
A1: Thank you for your insightful comment regarding the hyperparameter selection for our camera ISP proxy network.
Camera ISPs consist of multiple processing stages. Tseng *et al.*[1] summarize common imaging pipelines into the following stages: (1) Optics, (2) White Balance & Gain, (3) Demosaicking, (4) Denoising, (5) Color & Tone Correction, and (6) Color Space Conversion & Compression. The first three stages pertain to RAW data acquisition and processing, while the latter three involve operations on RGB values. Since our task primarily focuses on images in RGB format, we chose conditional parameters for the proxy network from the latter three stages. We selected six parameters that significantly impact visual attributes, such as Brightness Contrast Control and Gamma Adjustment, as detailed in Table 2 of the paper. Empirical evaluations demonstrate that these six parameters affect attack performance.
We observed that certain combinations could lead to complete information loss in the image. To balance the quality and diversity of the images generated by the ISP proxy network, we selected the ranges specified in Table 2 of the paper.
- For parameter $a$, values below 64 may result in images that are too dark.
- For parameter $b$, values below 64 may cause colors to appear washed out.
- For $\gamma$, values above the specified range may introduce noise in dark areas, while values below it can reduce image contrast.
- For parameter $c$, deviations from the specified interval may lead to poor image quality.
- For parameter $d$, values below the specified range may result in insufficient spatial filtering, causing excessive noise.
- For parameter $e$, values less than 1 may not effectively remove noise.
We will include a more comprehensive discussion of this topic in the revised manuscript.
[1] Hyperparameter optimization in black-box image processing using differentiable proxies. ACM TOG 2019.
**Q2: Essentially, this paper aims to address the issue of camera black-box attacks. However, the authors overlook another black-box scenario, namely the model. The transferability of attack methods across different models is also an important research direction. This paper lacks discussion on the model black-box aspect.**
A2: Thank you for your insightful comment. We agree that exploring the transferability of our proposed attack method across different models is an important avenue for future research. We have therefore included additional experiments with other detectors, as shown in the table below.
YOLOv3:
| Method | AP (the lower the better) | ASR (the higher the better) |
| ------- | ------- |------- |
| Random Noise | 71.3 | 11.3 |
| AdvPatch | 48.1 | 33.3 |
| AdvT-shirt | 55.8 | 24.4 |
| AdvCloak | 52.7 | 30.5 |
| NAP | 66.2 | 14.0 |
| LAP | 65.2 | 14.6 |
| TC-EGA | 56.9 | 24.7 |
| CAP (Ours) | **41.5** | **43.3** |
YOLOv8:
| Method | AP (the lower the better) | ASR (the higher the better) |
| ------- | ------- |------- |
| Random Noise | 77.9 | 4.0 |
| AdvPatch | 75.6 | 8.8 |
| AdvT-shirt | 77.2 | 6.2 |
| AdvCloak | 73.7 | 10.1 |
| NAP | 78.1 | 5.0 |
| LAP | 78.6 | 4.6 |
| TC-EGA | 77.3 | 6.7 |
| CAP (Ours) | **60.5** | **14.7** |
The above comparative evaluations were conducted under a black-box setting. The detection models were pretrained on the COCO dataset and then fine-tuned on the INRIAPerson dataset. We observe that, although the effectiveness of our CAP attack decreases compared to white-box attacks (see Table 3 in the paper), it still outperforms other methods.
Your suggestion is valuable and has inspired us to not only consider the black-box camera setting but also the black-box model setting. In future work, we will explore physical adversarial attacks under a double black-box scenario.
**Q3: Placing Figure A from the supplementary materials into the main text can help readers better understand the motivation behind this paper.**
A3: Thank you for your detailed comment. We agree that including Figure A in the main text would offer a more direct and intuitive understanding of the motivation behind our work. We will carefully consider your suggestion.
**Q4: Discussion on the threat model.**
A4: Thank you for your careful review. We discuss the threat model from the following three aspects:
- **Attack Goal**: Our method aims to make the target person evade detection. This is achieved by applying a carefully crafted adversarial perturbation to the target individual's physical appearance, ensuring that the perturbation remains effective across various camera settings.
- **Adversary Capability**: The adversary has: (1) knowledge of the object detection model and (2) the ability to generate and physically realize adversarial perturbations.
- **Design Requirements**: (1) The method must be physically realizable and (2) maintain effectiveness across different imaging devices.
We appreciate your high evaluation of our work and the valuable suggestions for future improvements. We believe that the novel perspective introduced by our approach will significantly contribute to the field.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your rebuttal; your responses have addressed my concerns, so I am willing to recommend a positive rating score. | Summary: This paper proposes a cross-camera physical adversarial attack, the Camera-Agnostic Patch (CAP) attack, against person detection.
This method incorporates a differentiable camera Image Signal Processing (ISP) proxy network to compensate for physical-to-digital domain transition gap. Additionally, the camera ISP proxy network serves as a defense module, forming an adversarial optimization framework with the attack module. The attack module optimizes adversarial perturbations to maximize effectiveness, while the defense module optimizes the conditional parameters of the camera ISP proxy network to minimize attack effectiveness. Experimental results demonstrate the effectiveness of the proposed Camera-Agnostic Patch attack on different hardware.
Strengths: This paper focuses on a commonly overlooked aspect of adversarial attacks—the ISP—and demonstrates good performance across different devices.
The experiments are comprehensive. Good quantitative results are observed even in challenging physical world evaluation.
Weaknesses: 1. In the experimental setup, only the patch size is provided, while the input image size is missing.
2. Some key experimental settings are missing. The total number of images used to calculate ASR is not provided.
3. The robustness of the proposed CAP against real-world disturbances (e.g., random noise, blur, etc.) should also be discussed.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: In the experimental setup, only the patch size is provided, while the input image size is missing.**
A1: Thank you for your careful review of our paper. The input image size we used is **640x640**, consistent with the official YOLOv5 repository. It is worth noting that YOLOv5 supports both 640x640 and 1280x1280 input image sizes, and we used the former. We will include the specific input image sizes used in our experiments in the revised version of the paper.
**Q2: Some key experimental settings are missing. The total number of images used to calculate ASR is not provided.**
A2: Thank you for your valuable feedback.
In the **digital-space** evaluation, we used the publicly available INRIAPerson dataset, which consists of 613 training images and 288 test images. Specifically, the training set contains 3,019 person instances, while the test set contains 855 person instances. Therefore, the ASR in the digital space is calculated based on **288 images containing 855 person instances**.
In the **physical-space** evaluation, we collected images using 6 cameras at 4 different times to avoid interference from unrelated factors. For each patch setting, 5 images were captured per camera per session, resulting in 6x4x5=120 images per patch. We compared 6 adversarial patches in the physical domain, yielding a total of 6x120=720 images. The ASR in the physical space is thus calculated based on these **720 images**.
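The image-count arithmetic above can be checked directly:

```python
# Physical-space evaluation counts from the rebuttal
cameras, sessions, shots_per_session = 6, 4, 5
images_per_patch = cameras * sessions * shots_per_session   # 6 x 4 x 5 = 120
num_patches = 6
total_physical_images = num_patches * images_per_patch      # 6 x 120 = 720
```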
We will clarify the specific number of images in the revised version of the paper.
**Q3: The robustness of the proposed CAP against real-world disturbances (e.g., random noise, blur, etc.) should also be discussed.**
A3: Thank you for your insightful comment. We acknowledge the importance of robustness to real-world disturbances. During the training of CAP, we implemented techniques to enhance robustness, specifically (1) adding random noise and (2) adding random rotations. These measures strengthen the robustness of the CAP attack, supporting its effectiveness in real-world scenarios, as demonstrated in Section 4.3 of the paper.
Blur, commonly caused by camera motion in videos, was not considered in our evaluation as the detectors we targeted, such as the YOLO series and Faster R-CNN, are image-based. Your comment highlights an important area for future research, particularly the impact of blur on physical adversarial attacks. To the best of our knowledge, this has not been explored extensively. In future work, we plan to investigate physical adversarial attacks on video detection models and evaluate the effects of blur. We believe that modeling video motion and incorporating temporal frame consistency during training are promising approaches.
We appreciate your positive feedback on our work and the valuable suggestions for our future research. We believe that the new perspective offered by our method will significantly contribute to the field. | Summary: The authors present an improved method for human-detection-blocking adversarial patch attacks, with a particular focus on ensuring said attacks are robust to changes in camera. This approach is motivated by a study of the impact of camera Image Signal Processing (ISP) pipelines, which the authors show is a considerable factor in the inconsistency of real-world adversarial patch attacks. The authors’ proposed Camera-Agnostic Patch (CAP) attack uses a proxy network to simulate the ISP, and said network is optimized through an adversarial learning approach to make patches that are robust across different cameras. The authors demonstrate this with real data collected with 6 different imaging devices.
Strengths: The author’s motivation is well presented and well justified, as they point out that much research has been invested into converting adversarial perturbations from digital to physical, but little research focuses on the impact of the camera, which converts the physical patch back into a digital image before it is actually fed into the network.
The CAP method is effective and achieves the stated goal of making attacks robust to changes in camera. This also helps to address significant limitations in the reproducibility of prior works with physical adversarial patches.
The authors present comprehensive experiments, including digital and physical evaluations, along with ablations and adversarial defenses.
The authors also include insightful analysis of the best performing baseline (T-SEA) and show that while it does disrupt the detections, it does not fully remove them and instead fragments them into smaller pieces.
Overall, the work is clear and well presented.
Weaknesses: While the authors experimental evaluation is quite comprehensive, one aspect they did not test for was differences in target model, as the same victim detector is used in all experiments. This could potentially be improved by demonstrating CAP attacks for different models. In addition, it would be interesting to see if the CAP attacks are any better at generalizing across target models, such as for black-box style attacks.
Technical Quality: 4
Clarity: 4
Questions for Authors: I note that the supplemental material includes an additional visualization video, and I am aware that videos have their own unique artifacts due to their encoding. Does the difference between video capture and image capture have an impact on attack effectiveness? Does CAP help improve the generalization of attacks from image to video?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have discussed some limitations of the work, which include other camera settings like exposure time and aperture size, which are not modelled in their current approach. As this is a work presenting an adversarial attack, the authors have also discussed the potential risks of such attacks and they have also presented some defense evaluations also.
The authors present results with six different imaging devices. This number could of course be increased, though this is not strictly necessary in my opinion.
As this is a work focused on presenting an adversarial attack, there is some risk for negative societal impact.
The statement about the random seed (line 189) may not be necessary in the main paper.
There is duplicated text on lines 254-255.
Flag For Ethics Review: ['Ethics review needed: Safety and security']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: While the authors experimental evaluation is quite comprehensive [...].**
A1: Thank you for your valuable feedback. We acknowledge that evaluating the performance of our approach on other detectors would be meaningful. To address your concerns, we have conducted additional experiments, as detailed in the table below.
YOLOv3:
| Method | AP (the lower the better) | ASR (the higher the better) |
| ------- | ------- |------- |
| Random Noise | 71.3 | 11.3 |
| AdvPatch | 48.1 | 33.3 |
| AdvT-shirt | 55.8 | 24.4 |
| AdvCloak | 52.7 | 30.5 |
| NAP | 66.2 | 14.0 |
| LAP | 65.2 | 14.6 |
| TC-EGA | 56.9 | 24.7 |
| CAP (Ours) | **41.5** | **43.3** |
YOLOv8:
| Method | AP (the lower the better) | ASR (the higher the better) |
| ------- | ------- |------- |
| Random Noise | 77.9 | 4.0 |
| AdvPatch | 75.6 | 8.8 |
| AdvT-shirt | 77.2 | 6.2 |
| AdvCloak | 73.7 | 10.1 |
| NAP | 78.1 | 5.0 |
| LAP | 78.6 | 4.6 |
| TC-EGA | 77.3 | 6.7 |
| CAP (Ours) | **60.5** | **14.7** |
The above comparative evaluations were conducted under a black-box setting. The detection models were pretrained on the COCO dataset and then fine-tuned on the INRIAPerson dataset. We observe that, although the effectiveness of our CAP attack decreases compared to white-box attacks (see Table 3 in the paper), it still outperforms other methods.
Your suggestion is valuable and has inspired us to not only consider the black-box camera setting but also the black-box model setting. In future work, we will explore physical adversarial attacks under a double black-box scenario.
**Q2: Video artifacts.**
A2: Thank you for your insightful comment. The difference between video capture and image capture indeed affects attack effectiveness. The key distinction lies in the inherent camera and scene motion in video, which leads to scene shifts and introduces motion blur-related artifacts that significantly impact attack effectiveness. We observed that attacks sometimes failed in the presence of blur.
Our CAP attack focuses on images because detectors like the YOLO series and Faster R-CNN are image-based. Consequently, we did not consider the effects of video artifacts. However, to enhance the generalization of the adversarial perturbations, we incorporated tricks such as (1) adding random noise and (2) applying random rotations during the optimization process.
Your comment provides valuable insights for future research, especially regarding physical adversarial attacks on video detection models. To the best of our knowledge, no work has been done in this area. The generalization from images to videos remains an open problem. We believe that modeling video motion and considering temporal frame consistency during training are promising approaches.
**Q3: Camera number.**
A3: Thank you for your valuable comment. We chose to design and validate our method on smartphone cameras (iPhone, Redmi, Huawei, and Samsung) and typical consumer cameras (Sony and Canon) first because these cameras are portable, widely used, and more susceptible to attacks. In contrast, they are less reliable than industrial cameras, especially in challenging environments like high temperatures, humidity, and electromagnetic interference.
We agree that expanding the range of cameras would further enhance the applicability of our method. In future work, we plan to extend our evaluation by using industrial cameras to validate the attack's effectiveness. The following table lists industrial cameras commonly used in autonomous driving and video surveillance that we have surveyed.
| Manufacturer | Model | Application Areas |
|---|---|---|
| Sony | IMX415 | Autonomous vehicles, robotics |
| ON Semiconductor | AR0132 | Surveillance, drones |
| Hikvision | DS-2CD2043G1-I | Surveillance |
**Q4: Negative societal impact.**
A4: We fully acknowledge it is crucial to consider these potential negative societal impacts. The ethical statement about our work is given below:
Our work successfully achieves physical adversarial attacks in person detection tasks. Given the effectiveness of our attack method across various imaging devices, its real-world application is feasible. This exposes potential security risks in existing DNNs-based applications, particularly when the technology is leveraged for malicious purposes. We advocate for the responsible and ethical use of technology. Furthermore, we offer comprehensive methodological descriptions and openly address the implications of our work, encouraging discourse within and beyond the scientific community to contribute to the advancement of trustworthy and dependable AI.
Additionally, to mitigate the potential risks of our CAP attacks, we discuss defense strategies in Section 4.5 of the paper. Through evaluation across three defense strategies, the experimental results indicate that adversarial training effectively mitigates our CAP attack in both digital and physical spaces. Moving forward, we will actively collaborate with the security community to explore the broader implications of our work and identify further mitigation strategies.
**Q5: Random seed.**
A5: Thank you for your careful review. As per your suggestion, we will remove the statement about the random seed from the main paper and include it in the appendix, leaving more space in the main text for more valuable discussions.
**Q6: There is duplicated text on lines 254-255.**
A6: Thank you for bringing this to our attention. We will eliminate the redundancy and thoroughly proofread the entire paper to ensure clarity throughout.
We appreciate your high praise for our work and the valuable suggestions for our future research. We believe that the new perspective offered by our method will significantly contribute to the field.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses and discussion. I believe the authors' focus on the role of the camera is critically underexamined, and their work has very important implications for real-world adversarial attacks. I maintain my original rating in support of accepting this work. | Summary: The paper "Revisiting Adversarial Patches for Designing Camera-Agnostic Attacks against Person Detection" addresses the limitations of current physical adversarial attack methods, which often fail to consider the variability introduced by different camera Image Signal Processing (ISP) pipelines. This oversight leads to instability and reduced effectiveness of attacks in real-world scenarios.
The authors propose a novel approach that incorporates a differentiable camera ISP proxy network into the adversarial patch generation process. This network models the transformations that images undergo when captured by different cameras, making the adversarial patches more robust and effective across various camera systems. The approach involves an adversarial optimization framework with two key components: an attack module that generates adversarial perturbations and a defense module that optimizes the ISP parameters to mitigate the attack's effectiveness.
Strengths: By introducing a differentiable camera Image Signal Processing (ISP) proxy network, the authors address a critical and previously overlooked aspect of these attacks: the variability introduced by different camera systems. This novel inclusion significantly enhances the robustness and generalizability of adversarial patches, marking a substantial departure from traditional methods that often neglect the role of the camera.
Weaknesses: 1. Some typographical errors: The citation in line 52 is incorrect; citation [22] should be changed to [40]. There are similar errors with multiple citations throughout the paper. It is recommended to thoroughly check each citation to avoid confusion. Moreover, there is a duplication error in line 255.
2. The method flowchart is not very clear and does not seem related to attacking person detection. It is recommended to be more specific and should include detailed training methods and more details about the ISP proxy network.
3. Although different smartphone cameras and typical cameras were tested, object detection is often used in autonomous driving or video surveillance. It is recommended to expand the testing to include cameras commonly used in these systems and to focus on these cameras.
4. The paper is overly focused on engineering aspects and lacks thorough theoretical analysis. For example, it is recommended to explain in detail why the training strategy in the GAN framework is effective.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Does the method in this paper have any differences from mainstream patch-based methods in terms of the patch training process?
2. Is this paper primarily considering targeted attacks or untargeted attacks? In equation 1, it seems to be an untargeted attack; however, the goal of concealing person instances is more likely to be a targeted objective. If considering untargeted attacks, it is more appropriate to use AP (Average Precision) as a metric. If considering targeted attacks, ASR (Attack Success Rate) is more relevant.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors deserve commendation for their transparency and integrity in acknowledging the limitations of their work and the potential ethical issues associated with their research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Typographical errors [...].**
A1: We have made the following corrections:
- Corrected the citation in line 52: Zhang *et al.*[40] employed [...].
- Corrected the citation in line 129: as Zhang *et al.*[40] demonstrated [...].
- Removed the duplication in line 255.
We will thoroughly proofread the paper.
**Q2: Flowchart.**
A2: Thanks for your feedback. The revised flowchart can be found in the one-page PDF file.
**Q3: Although different smartphone [...].**
A3: Thanks for your comment. Smartphone and consumer cameras are portable and widely used but less reliable in harsh environments (e.g., high temperatures, electromagnetic interference) compared to industrial cameras used in autonomous driving and surveillance. Exploring their security implications is crucial, so we initially validated our method on these cameras.
To substantiate the claim in the paper, our approach introduces a wide parameter space for the ISP proxy network and uses six different cameras, offering a broad sampling space for training and evaluating.
In future work, we plan to include industrial cameras to enhance applicability in autonomous driving and surveillance. The following table lists commonly used industrial cameras we have surveyed.
| Manufacturer | Model | Application Areas |
|---|---|---|
| Sony | IMX415 | Autonomous vehicles, robotics |
| ON Semiconductor | AR0132 | Surveillance, drones |
| Hikvision | DS-2CD2043G1-I | Surveillance |
**Q4: Theoretical analysis.**
A4: As outlined in Sections 3.2 and 3.4, our approach is inspired by the GAN theoretical framework. The Attacker acts as the Generator, producing adversarial perturbations, while the Defender acts as the Discriminator, optimizing the ISP hyperparameters. Our alternating optimization mirrors GAN convergence analysis. We provide a detailed theoretical argument below:
**Global Optimality of the Adversarial Optimization Framework**
We begin by analyzing the optimal Defender strategy given any Attacker strategy.
**Proposition 1:** For a fixed Attacker strategy, the optimal Defender strategy minimizes the effectiveness of the adversarial perturbations.
**Proof:** The Attacker aims to maximize the error induced by adversarial perturbations, while the Defender aims to minimize this error. Formally, let $P$ represent the perturbations and $\theta_d$ the ISP hyperparameters. The Attacker's objective is:
$$
L_A = \max _P \mathbb{E} _{x \sim p _\text{data}} \left[ \ell(f(x + P), y) \right]
$$
where $\ell$ is the loss function, $f$ is the neural network, and $y$ are the GT labels.
The Defender seeks to minimize the effectiveness of these perturbations by optimizing $\theta_d$:
$$
L_D = \min _{\theta_d} \mathbb{E} _{x \sim p _\text{data}} \left[ \ell(f(g(x + P; \theta_d)), y) \right]
$$
where $g$ represents the ISP proxy network processing.
The optimal Defender strategy $\theta_d^*$ satisfies:
$$
\theta_d^* = \arg \min_{\theta_d} L_D
$$
Given this optimal strategy, the Attacker's objective function becomes:
$$
L_A^* = \max_P L_D(\theta_d^*)
$$
The interplay between the Attacker and Defender can be modeled as a minimax game:
$$
\min _{\theta _d} \max _P \mathbb{E} _{x \sim p _\text{data}} \left[ \ell(f(g(x + P; \theta_d)), y) \right]
$$
Since the ISP proxy network aims to minimize the impact of adversarial perturbations, the optimal strategy for the Defender minimizes the Attacker's maximum achievable loss, concluding the proof.
**Convergence of the Optimization Algorithm**
**Proposition 2:** If the Attacker and Defender have sufficient capacity, and at each iteration of the optimization algorithm each optimizes its respective objective, the adversarial perturbations converge to an equilibrium at which their effectiveness is minimized by the ISP proxy network.
**Proof:** The optimization algorithm follows an alternating training strategy, akin to GAN training. At each step, the Attacker optimizes the perturbations $P$ while keeping the ISP parameters $\theta_d$ fixed, and vice versa.
The Attacker's update rule is:
$$
P^{(t+1)} = P^{(t)} + \alpha \nabla_P L_A
$$
The Defender's update rule is:
$$
\theta_d^{(t+1)} = \theta_d^{(t)} - \beta \nabla_{\theta_d} L_D
$$
By iteratively updating $P$ and $\theta_d$ in this manner, the algorithm performs gradient descent on a convex-concave function, ensuring convergence to a local Nash equilibrium. At this equilibrium, the adversarial perturbations $P^*$ are countered effectively by the optimized ISP parameters $\theta_d^*$, concluding the proof.
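The alternating update rules above can be sketched on a toy convex-concave objective; the quadratic function below is an illustrative stand-in for the actual losses $L_A$ and $L_D$, not the paper's formulation:

```python
# Toy stand-in for the minimax game: L is concave in the attacker variable P
# and convex in the defender variable theta, with equilibrium at (0, 0).
def L(P, theta):
    return -P**2 + P * theta + theta**2

P, theta = 1.0, -1.0
alpha = beta = 0.05                   # attacker / defender step sizes
for _ in range(500):
    grad_P = -2.0 * P + theta         # dL/dP: gradient ASCENT for the attacker
    P = P + alpha * grad_P
    grad_theta = P + 2.0 * theta      # dL/dtheta: gradient DESCENT for the defender
    theta = theta - beta * grad_theta
# Both variables spiral in toward the Nash equilibrium (0, 0).
```

With these step sizes the alternating iterates contract toward the equilibrium, mirroring the convergence claim of Proposition 2 on this simple objective.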
**Q5: Differences of the patch training process?**
A5: Mainstream patch-based methods apply a random patch to the target image and update it to maximize the discrepancy between predictions and GT, analogous to the Attacker in our method. Our approach further introduces a Defender, represented by an ISP proxy network. This addition enables the Attacker and Defender to form an adversarial optimization framework, enhancing the attack transferability across different cameras.
**Q6: Targeted or untargeted attacks.**
A6: Our approach is an **untargeted attack**. In classification tasks, targeted attacks mislead the classifier into categorizing the target instance as a specific incorrect class, while untargeted attacks cause the classifier to misclassify the target instance into any incorrect class. Based on this criterion, we tend to classify the task of concealing person instances as an untargeted attack.
Our method, as an untargeted attack, was evaluated using AP. However, AP alone can be misleading, especially in multi-box detection scenarios where AP may be low despite ineffective attacks (see Section B of our supplementary material). ASR compensates for this limitation of AP, as it is not affected by an increase in True Positive samples. Thus, we employ both AP and ASR.
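A minimal sketch of why ASR complements AP: ASR counts, per ground-truth instance, whether the detector still finds it, so extra fragmented true-positive boxes cannot inflate it. The exact matching rule (e.g. an IoU threshold) is our simplifying assumption and is not spelled out in the rebuttal:

```python
def attack_success_rate(num_instances, num_still_detected):
    """Fraction of ground-truth person instances the detector misses after
    the attack (higher = stronger attack). Unlike AP, it is unaffected by
    an increase in true-positive boxes from fragmented detections."""
    return (num_instances - num_still_detected) / num_instances

# e.g. the digital-space evaluation uses 855 person instances in total
```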
Lastly, we would like to thank you for acknowledging the strengths of our method, as you noted: (1) a **critical** [...] and (2) This **novel** [...]. We hope our responses address your concerns and contribute to an improved rating for our paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful and constructive comments. As summarized by reviewers xDi9, eDby, and ai8f, our work focuses on a critical and previously overlooked aspect—the impact of camera variability on physical adversarial attacks—and proposes an effective and robust adversarial patch capable of cross-camera attacks. To the best of our knowledge, ours is the first work to design and evaluate across multiple imaging devices, aligning with real-world scenarios.
All reviewers acknowledged the value of our work. We are pleased that most reviewers agreed that:
* Incorporating a differentiable camera ISP proxy network into the adversarial patch generation process is a **novel** approach. This inclusion significantly **enhances** the robustness and generalizability of adversarial patches. [reviewer xDi9]
* Our motivation is **well presented** and **well justified**. [reviewer N8jz]
* The experiments are **comprehensive**. [reviewers N8jz and eDby]
* The research approach is **logical** and **well-founded**. The framework is **concise** and **intuitive**. [reviewer ai8f]
We have provided point-by-point responses to each reviewer's comments. Specific feedback will be addressed in our individual responses. We hope to demonstrate the validity of our claims through detailed discussions and additional experiments.
Pdf: /pdf/18aa9faf4eed6bf8fb79b28182b72c9fa145aa92.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Sample Complexity of Gradient Descent in Stochastic Convex Optimization | Accept (poster) | Summary: The paper shows a new lower bound for the generalization error of gradient descent for Lipschitz functions. The bound shows a linear dependency on the dimension that closes the gap between lower and upper bounds in the sample complexity of GD under several regimes. The construction of the function relies on a block version of a variation of Nemirovski function and a reduction from a sample-independent oracle to a sample-dependent oracle.
The authors also propose a set of open problems related to the generalization error of GD.
Strengths: - The paper shows a clever way to make changes and leverage Nemirovski and Feldman's function so as to control the dynamics of the sub-differentials
- The authors try to give the best intuition for the construction of the function in the presentation of the paper
- The paper is technically challenging and the proofs have been explained well. I have read some parts of the proofs and didn't see any problem.
Weaknesses: - I think the main weakness in the paper is the presentation of the results and improvements. I know that these have all been written in the paper, but I found myself lost and rereading many times to see the significance of the results. This could be rewritten in a more concise way with all the improvements under which regimes spelled out explicitly (maybe a table would make sense).
- There are a few typos: Line 294: sequences. Line 296: receive(s). Line 300: $S_{1:0}$, should S be bolded? Line 320: punctuation and line break.
- It should be mentioned that Eq 16 will be proven in Appendix D.
Technical Quality: 3
Clarity: 2
Questions for Authors: - I'm really confused by the fact that all bounds and the proofs use $F(0)$ for the gap. Why is this the case? Why isn't it $\min_w F(w)$? If the algorithm starts at $w_0=0$, doesn't it mean the function value increases?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the review and detailed comments.
- > This could be rewritten in a more concise way with all the improvements under which regimes spelled out explicitly (maybe a table would make sense).
A table is an excellent suggestion. We will follow it, thanks!
- > It should be mentioned that Eq 16 will be proven in Appendix D.
Thanks for catching this. It will be mentioned.
Also thanks for catching typos!
- > I'm really confused by the fact that all bounds and the proofs use 𝐹(0) for the gap. Why is this the case?
Thanks for pointing this out. As to the bounds, it would be a good idea to state the results in terms of $\min_{w\in W} F(w)$ and not $F(0)$. (clearly, $F(w)-F(0) \ge F(w)-\min_{w\in W} F(w)$)
As for the proofs, you are right. We get the excess error w.r.t. the initialization, which may seem strange. There is no contradiction here, as GD minimizes the empirical loss and the increase is w.r.t. the population loss. But overall it is strange that you start at an initialization that is better than where you end up. This also happens, btw, in other similar constructions (see [3])
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response.
Regarding my question, is there a typo where the authors wrote $F(w) - F(0) \ge F(w) - \min_w F(w)$? Because it clearly should be the other direction. Here, my question explicitly is: given the bound on $F(w) - F(0)$, how do I obtain a bound for $F(w) - \min_w F(w)$? Because it's not obvious to me, unless I'm missing something.
---
Rebuttal 2:
Comment: Yes, it's a typo, sorry for that. We have, $F(w)- \min_{w\in W} F(w) \ge F(w)- F(0)$ and not the other way around. This is also the direction we actually want.
So, any lower bound of the type $F(w)-F(0) = \Omega(f(m,\eta,T,d))$ automatically yields a lower bound of the type:
$F(w)- \min_{w\in W} F(w)= \Omega(f(m,\eta,T,d))$. In particular, Theorem 1, Corollary 2, and Corollary 3 can all be stated with respect to the minimizer instead of $F(0)$.
Does that answer your question?
---
Rebuttal Comment 2.1:
Comment: Yes, I somehow was confused and didn't realize that. Thank you for the response! I maintain my current score. | Summary: The paper proved a tight lower bound for the sample complexity of full-batch gradient descent. The authors also presented some open questions in this area.
Strengths: I think the paper is not ready.
Weaknesses: - The structure of the paper is not standard. There isn't any conclusion, and it seems that the paper is written in a very rushed manner.
- The notation section is not well-written.
- It seems that the main contribution of the paper is in Theorem 1, but it is not clear where its proof is located. The proof in the appendix is very short, and it appears that some parts of the proof are in the main body of the paper and some in the appendix. It is not organized very well.
Technical Quality: 1
Clarity: 1
Questions for Authors: I couldn't understand the paper very well, so I do not have any questions. Please refer to the weaknesses section.
Confidence: 2
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: I think the paper is not ready for this conference, and I am not sure how important the result is.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review the manuscript.
- The conclusions are clearly presented in the manuscript, and the contributions are also reflected in the other reviews. If, after reading the other reviews, you still feel that certain contributions are not highlighted enough, please point to them concretely and we will gladly highlight them.
- As to notation, please share concrete examples of missing/confusing/non-standard notation. We will gladly fix any such confusion.
- As to organization, all proofs appear in the appendix. Main theorems and main Lemmas appear in the main text. This is a standard practice.
Unless there are strong reasons for rejection, we ask the reviewer to raise the score so it will not reflect negatively on the paper.
---
Rebuttal Comment 1.1:
Comment: Dear Authors, I’m sorry that my score is lower than that of the other reviewers. As you requested, I reviewed the opinions of the other reviewers and noticed that reviewer gMtP also found this paper difficult to read. I even reread the paper to better understand it and reconsider my score, but I still couldn’t fully grasp the content. This remains my main issue with the paper.
I’ve provided some suggestions that might help improve the paper. For example, adding a "Notation Section" and a "Conclusion Section" and reorganizing the paper and proofs to enhance readability would be beneficial. In its current version, your paper ends with a Lemma, which is quite unusual. As you can see, my confidence score is 2 because I couldn’t fully understand the paper, and therefore, I cannot give you a high score.
However, if the other reviewers are confident in their scores and support accepting this paper, I will not oppose their decision.
Additionally, during the remaining discussion period, I will make another effort to fully understand your work, and if possible, I will adjust my review accordingly. | Summary: This paper studies the sample complexity of full-batch gradient descent for the stochastic convex optimization problem. The main result of this work provides a lower bound on the generalization gap of GD of order $\Omega(\min\\{\frac{d}{m},1\\} \cdot \min\\{\eta d^{3/2}, \eta\sqrt{T},1\\})$, where $d$ denotes the dimension, $m$ the sample size, and $\eta, T$ the learning rate and number of iterations of GD. Notably, this lower bound, together with upper and lower bounds for GD or general ERM established in prior works, implies meaningful results under different regimes. In the over-parametrized regime where $d=\Omega(m+T^{1/3})$, the main result, together with the lower bound on the empirical risk of GD, yields a lower bound of $\Omega(\min\\{\eta\sqrt{T}+\frac{1}{\eta T}, 1\\})$. Moreover, when $T=O(m^{3/2})$ and $\eta=\Theta(1/\sqrt{T})$, the main result yields a lower bound of $\Omega(\min\\{\frac{d}{m}+\frac{1}{\sqrt{m}},1\\})$, matching the known upper bound for general ERM algorithms.
Strengths: This paper provides an improved lower bound on the sample complexity of full-batch gradient descent for SCO. Under several regimes, the lower bound matches existing upper bounds for GD or general ERM and becomes optimal. Moreover, it implies that the worst-case sample complexity of GD is the same as that of general ERM. These results contribute to a better understanding of the sample complexity of the SCO problem.
Weaknesses: The technical results are solid, but I think the writing could be improved. In particular, the connection between the high-level intuition provided in Sec 4 and the actual constructions in Sec 4.1 is not apparent. For example, it's not clear to me why the oracle $\mathcal{O}^{(t)}$ defined in Eq. 11 satisfies the zero-chain property depicted in Figure 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: - From Thm 1 to Cor 3 when $\eta=\Theta(1/\sqrt{T})=\Omega(m^{-3/4})$ and $d\le\sqrt{m}$: I agree that the first term in Thm 1 is lower bounded by $\min\\{\frac{d}{m},1\\} \ge \min\\{\frac{1}{\sqrt{m}},1\\}$, but the second term $\eta d^{3/2} \gtrsim d^{3/2}m^{-3/4}$ could potentially be smaller than 1. If so, should the lower bound actually be $\min\\{\frac{1}{\sqrt{m}},1\\}\cdot \min\\{\frac{d^{3/2}}{m^{3/4}}, 1\\}$?
- line 219: could the authors elaborate why this new oracle defined in 214-218 only activates $O(d)$ coordinates in $d^3$ iterations? From my intuition, to activate a new coordinate, say $k+1$, the oracle needs to increase all previous coordinates to $k$. Then this would only require $d^2$ iterations to activate $O(d)$ coordinates. I think I miss something here.
Minor comments:
- It would be clear to clarify early in the paper that $w(i)$ denotes the $i$-th coordinate of a vector $w$.
- 194: $\partial N(0)$ => $\partial N_0(0)$
- 299: Let => let
- 300: $\mathbf{S}\_{1:0}$ => $S\_{1:0}$
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive comments and the positive score. Below are answers to concrete questions.
- > For example, it's not clear to me why the oracle 𝑂(𝑡) defined in Eq. 11 satisfies the zero-chain property depicted in Figure 1.
Notice that Figure 1 refers to Eq. 16 and not Eq. 11. The figure depicts the specific dynamics of choice and the subdifferentials as they appear above Eq. 16. The caption of the figure should be rephrased as: “Depiction of the dynamics depicted **above** Eq. (16)”, or should refer to the main text instead: “Depiction of the dynamics depicted below Eq. (9)”.
Frames 3, 6, 7 depict the case of item 1 (below Eq. 9), frames 1, 2, 4, 5 the case of item 2 (below Eq. 9), and the last frame depicts the last item (partial = 0). Is that clearer?
- > should the lower bound actually be $\min ( \frac{1}{\sqrt{m}},1)\cdot \min(\frac{d^{3/2}}{m^{3/4}},1)$ [[as opposed to $1/\sqrt{m}$]]
Notice that the $\Omega(1/\sqrt{m})$ term does not follow from Thm 1 but, as stated in the manuscript, “is a well-known information-theoretic lower bound for learning”. A citation will be added to the manuscript to clarify. For example, this lower bound can be found in these lecture notes: https://optmlclass.github.io/notes/optforml_notes.pdf (Theorem 16.7), where it is proven that there are two instances of stochastic convex optimization such that every algorithm “fails” on one of them with excess error $1/\sqrt{T}$ unless more than $T$ examples are observed (in our notation $T=m$)
- > Could the authors elaborate why this new oracle defined in 214-218 only activates $O(d)$ coordinates in $O(d^3)$ iterations?

Roughly, to increase coordinate $k$ by one you need to increase each previous coordinate by $+1$. Increasing coordinate $t$ requires $k-t$ iterations (increasing it and propagating back), so overall $\sum_{t=0}^{k} (k-t) = O(k^2)$ iterations are needed to increase the $k$'th coordinate by one. Hence $O(1^2+2^2+3^2+\dots+d^2)$ iterations suffice to increase all $d$ coordinates by one, which amounts to $O(d^3)$.
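To sanity-check this counting argument (an illustrative calculation of our own, not part of the original rebuttal), the claimed total cost $\sum_{k=1}^{d} k^2$ can be computed directly and compared against the closed form $d(d+1)(2d+1)/6$, which is $\Theta(d^3)$:

```python
# Illustrative check of the counting argument above: raising the k-th
# coordinate by one costs O(k^2) iterations, so activating all d
# coordinates costs sum_{k=1}^{d} k^2 = d(d+1)(2d+1)/6 = Theta(d^3).
def total_iterations(d):
    return sum(k * k for k in range(1, d + 1))

for d in (10, 100, 1000):
    exact = total_iterations(d)
    assert exact == d * (d + 1) * (2 * d + 1) // 6  # closed form
    # The ratio exact / d^3 tends to 1/3, confirming cubic growth.
    print(d, exact, exact / d**3)
```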
Regarding minor comments, thanks for the suggestions and for catching typos. We will follow the reviewers' advice and corrections.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response. After considering your explanation and re-evaluating the manuscript, I now have a clearer understanding of how the oracle functions. Thanks for the clarification, and I maintain my original rating. | Summary: ### Summary:
The authors study the generalization error and sample complexity of full-batch GD in stochastic convex optimization. Their results show that one can achieve a generalization error of $\tilde{\Theta}(d/m + 1/\sqrt{m})$, which is the same as the generalization error of ERM. Indeed, they prove that the linear dependence on the dimension $d$ is unavoidable. Moreover, they show that $1/\epsilon^4$ steps are needed for GD to avoid overfitting, where $\epsilon$ is the optimality gap.
Strengths: ### Pros:
- interesting theoretical problem/results
- excellent citation to related work
- extremely well-written
Weaknesses: ### Cons:
- some discussion about SGD is missing
Technical Quality: 4
Clarity: 4
Questions for Authors: ### Questions/Comments:
This is an excellent paper; I recommend acceptance, as I found the paper well-written and the contributions/motivations clear.
- Can you explain how this analysis restricts to full-batch GD and what makes it impossible to obtain results for SGD? I recommend adding a bit of discussion to the paper regarding this.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review the manuscript, and particularly for the positive review.
> - Can you explain how this analysis restricts to full-batch GD and what makes it impossible to obtain results for SGD? I recommend adding a bit of discussion to the paper regarding this.
Thank you for the question. First, just to clarify, the results cannot be obtained for SGD, as SGD has dimension-independent sample complexity, and this is discussed in the paper. But the question remains how SGD circumvents the construction. A discussion about that will be added to the paper as follows:
The main issue is that the proof relies on a reduction to a sample-dependent oracle (defined in Section 4.1). SGD circumvents the reduction (Lemma 9). In more detail, the Lemma assumes the update is given by Eq. 12 (the full-batch update rule). Because of this full-batch update rule, we can "encode" the whole sample in the trajectory. This enables the reduction, because the oracle that depends on the parameter can "decipher" the sample and behave like a sample-dependent oracle. If the sample points arrive one by one, as is the case in SGD, then we do not know the sample in the first steps, hence we cannot reduce to a sample-dependent first-order oracle.
---
Rebuttal Comment 1.1:
Comment: Thanks! I appreciate the authors' response and their promise to include the new discussion in the next version of the paper. I still support accepting the paper, so I keep my score positive. | null | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Differentiable Task Graph Learning: Procedural Activity Representation and Online Mistake Detection from Egocentric Videos | Accept (spotlight) | Summary: The paper presents a novel approach to learning task graphs from procedural activities observed in egocentric videos. Task graphs represent the partial ordering of key-steps needed to complete a task and are crucial for developing intelligent agents that can assist users. The proposed method utilizes direct maximum likelihood optimization of edge weights, enabling gradient-based learning that can be integrated into neural network architectures. The study demonstrates significant improvements in task graph prediction accuracy and online mistake detection using datasets like CaptainCook4D, Assembly 101, and EPIC-Tent.
The main motivation behind this work is twofold. Firstly, it aims to enhance the ability of intelligent systems to assist users in performing complex tasks by automatically learning flexible and accurate task graph representations from video observations, rather than relying on manually crafted procedures. Secondly, the authors seek to explicitly model task graphs to provide a more interpretable and human-understandable representation of procedural activities.
The authors introduce two primary approaches for task graph learning: Direct Optimization (DO) and Task Graph Transformer (TGT). The DO method uses the Task Graph Maximum Likelihood (TGML) loss to directly optimize the adjacency matrix representing the task graph. This optimization is performed using gradient descent, making the process straightforward and effective. On the other hand, the TGT model employs a transformer encoder to process key-step text or video embeddings and predict the adjacency matrix from these embeddings. This model includes a regularization loss to maintain distinctiveness among embeddings, ensuring accurate and meaningful task graph generation.
Strengths: The main contributions of the paper are as follows:
1. **Approach:** This paper presents a new error detection approach via learning task graphs by directly optimizing a maximum likelihood objective. This approach enables gradient-based task graph learning, allowing task graph methods to be applied to the problem of mistake detection in programs.
2. **Performance:** The proposed methods achieve superior performance in task graph generation and mistake detection compared to existing methods.
3. **Code Availability**: The availability of code for replication enhances the study's transparency and usability.
Weaknesses: 1. **Dependence on Labeled Key-Step Sequences:** The method proposed in this work relies on action recognition models or Ground-truth Key-Step Labels. On the one hand, this will bring additional performance overhead, and on the other hand, relying on other action recognition models may introduce noise when errors occur during recognition. It's uncertain whether it will also affect the results of mistake detection. More discussions on these issues are needed.
2. **Limitation in Versatility and Scalability:** In this approach, a new model (task graph) needs to be trained for each new sequence of actions in order to achieve mistake detection. This requirement limits, to some extent, the versatility and scalability of the model, as retraining is required for each new task. Additionally, the proposed method does not appear to handle order-unrelated mistakes, such as technique/measurement errors, which are highlighted in the latest datasets (e.g., HoloAssist-ICCV23[1], CaptainCook4D-ICML23[2], EgoPER-CVPR24[3])
3. **Insufficient Qualitative Analysis**: The paper only provides one qualitative result on generated task graph in Figure 5. However, there is no qualitative analysis of the generated task graphs for detecting the mistakes. Also, it is suggested to provide discussions on the corner cases that may be challenging for the proposed method, which would bring insightful observations for the error detection community.
[1] Wang, Xin, et al. "Holoassist: an egocentric human interaction dataset for interactive ai assistants in the real world." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Peddi, Rohith, et al. "CaptainCook4D: A dataset for understanding errors in procedural activities." arXiv preprint arXiv:2312.14556 (2023).
[3] Lee, Shih-Po, et al. "Error detection in egocentric procedural task videos." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Suggestions for authors:
My main concerns are about:
- The versatility of the proposed method for various types of mistakes.
- The detailed qualitative analysis on generated graphs when used to detect mistakes.
Please do read the weakness carefully, I would consider to raise my rating if my concerns are well addressed.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1: Dependence on Action Recognition Models
**Computational Overhead** While it is true that the action recognition model introduces a computational overhead, at inference our approach is a lightweight module on top of the detected actions, which can be easily plugged into systems that already include an action recognition component to tackle other tasks, such as checking whether an action has been performed to provide guidance with Augmented Reality. It is also worth noting that compute-efficient online action recognition systems such as MiniROAD [1] are available in the literature.
**Noise in Action Detections** Table 3 of the paper shows that our model works even in the presence of imperfect predictions. To further investigate the effect of noise, we conducted an analysis based on the controlled perturbation of ground-truth action sequences, with the aim of simulating noise in the action detection process. At inference, we perturb each key-step with a probability $\alpha$ (the "perturbation rate"), using three kinds of perturbations: insert (inserting a new key-step with a random action class), delete (deleting a key-step), or replace (randomly changing the class of a key-step). For each prediction, we perform a replace with probability $\alpha$ on the current key-step, then for each previous key-step we perform either a replace, delete, or insert with probability $\alpha$. The results presented in Figure 2 of the attached PDF show that, while our approach is affected by noise in the predictions of the action recognition system, it still exhibits a certain degree of robustness. Error bars are computed over 5 runs.
[1] An, Joungbin et al. Miniroad: Minimal RNN framework for online action detection. ICCV 2023
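For illustration only, here is a minimal sketch of the perturbation protocol described in the rebuttal above (our own reconstruction, not the authors' code; the function name, interface, and edge-case handling are assumptions):

```python
import random

def perturb_sequence(steps, alpha, num_classes, rng=None):
    """Illustrative sketch of the perturbation protocol described above.

    `steps` is a list of key-step class ids. The current (last) key-step
    is replaced with probability `alpha`; each previous key-step is
    independently replaced, deleted, or preceded by a random inserted
    key-step, with probability `alpha` (the perturbation rate).
    """
    rng = rng or random.Random()
    if not steps:
        return []
    out = []
    for step in steps[:-1]:
        if rng.random() < alpha:
            op = rng.choice(["replace", "delete", "insert"])
            if op == "replace":
                out.append(rng.randrange(num_classes))
            elif op == "insert":
                out.append(rng.randrange(num_classes))
                out.append(step)
            # "delete": drop the key-step entirely
        else:
            out.append(step)
    last = steps[-1]
    if rng.random() < alpha:
        last = rng.randrange(num_classes)  # replace the current key-step
    out.append(last)
    return out
```

With `alpha = 0` the sequence is returned unchanged; with `alpha = 1` every key-step is perturbed, simulating a fully unreliable action recognizer.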
## Q2: Limitation in Versatility and Scalability
**On Scalability:** While DO requires a new training for each procedure, as correctly observed by the reviewer, TGT can take different sets of key-step embeddings at each forward pass, hence ideally enabling better scalability. This aspect was not assessed in the original paper, and we thank the reviewer for their insightful comment and the opportunity to better explore this point. We performed two experiments.
- In the first experiment, we trained a single TGT-text model for all CaptainCook4D procedures. This is possible thanks to the ability of TGT to receive a different set of embeddings at each forward pass, allowing to alternate the optimization of the different procedures during training. As highlighted in Table 1 of the attached PDF, our unified model brings small improvements over the single models ($72.1 \to 74.9$ F1) while reducing training time by a factor of $\approx 24$ (the number of procedures), thus improving scalability.
- In the *second experiment*, we assess the transfer learning abilities of TGT. We followed a "leave-one-out" scheme in which we train the TGT on all procedures except one, and then fine-tune the model on $5$ sequences for the held-out procedure (hence a $5$-shot regime). The results of this experiment reported in Table 2 of the attached PDF show that our approach greatly improves over competitors which are unable to leverage transfer learning. Due to time constraints we only considered 5 procedures but will extend to all scenarios in the camera ready.
**On Versatility:** We followed the setup of PREGO [2], which defines the Assembly101-O and EPIC-tent-O datasets as curated versions of the original datasets to account for open-set procedural errors. The procedural errors considered by the authors of [2] are those involving "order", "omission", "correction", and "repetition", which are considered procedural mistakes, in contrast to "proficiency errors" as described in [3]. As correctly observed by the reviewer, being designed to address procedural mistakes at the abstract level of the executed actions, our method would not be directly applicable to the detection of proficiency errors. In a real system, we expect that this limitation can be overcome by integrating different subsystems responsible for handling different types of mistakes, while an integrated approach would still be possible and a good avenue for future work. We will add this discussion in the "limitations" paragraph of the paper.
[2] Flaborea, Alessandro et al. PREGO: online mistake detection in PRocedural EGOcentric videos. CVPR 2024
[3] Grauman, Kristen et al. Ego-exo4d: Understanding skilled human activity from first-and third-person perspectives. CVPR 2024
## Q3: Insufficient Qualitative Analysis
**Qualitative Examples** We will add more qualitative examples of generated graphs in the style of that reported in Figure 5 of the supplementary material. For reference, the supplementary material contains all task-graphs predicted on CaptainCook4D. We will base our illustrations on those examples. We will also include qualitative examples of mistake detections produced by the proposed model. Please see Figure 3 of the attached PDF for a qualitative example on EPIC-Tent. We will add other similar examples also based on Assembly101.
**Corner Cases** We will discuss corner cases in the "limitations" section of the main paper. The performance of our method depends on the quality of recognized actions. If the action recognition module fails to detect an action, our method may incorrectly signal a missing pre-condition. Conversely, if it detects an action that wasn't performed, our method may miss signaling a mistake. We expect improvements in online action recognition to enhance our method's robustness. Our method does not explicitly model "optional" key-steps, which can lead to incorrect mistake signaling if optional steps are treated as mandatory. This issue could be resolved with specialized modules to detect optional nodes. Our approach works at an abstract level, focusing on whether an action has been performed rather than how it has been performed, leaving room for improvement in handling incomplete or sub-optimally performed actions.
---
Rebuttal Comment 1.1:
Comment: ## Differentiable Task Graph Learning: Procedural Activity Representation and Online Mistake Detection from Egocentric Videos
Many thanks to the authors for answering the questions. I'm glad that the authors dispelled my doubts in "Q2: Limitation in Versatility and Scalability" and added Qualitative Analysis, but I still feel that there are shortcomings in "Q1: Dependence on Labeled Key-Step Sequences/Action Recognition Models". Even though there are some shortcomings, the method proposed by the authors has made some improvements over previous methods and achieved good performance. **I will change my rating to weakly accept.**
### Q1: Dependence on Labeled Key-Step Sequences/Action Recognition Models
Just as mentioned in the author's response in Q3, in extreme cases, when the Action Recognition Model predicts missing or additional actions, it will modify the action sequence required for Mistake Detection, thereby affecting the judgment results. The fundamental reason lies in the noise issue brought by the Action Recognition Model. As shown in Table 3 of the original paper, there is a performance drop when using the Action Recognition Model compared to using GT labels.
### Q2: Limitation in Versatility and Scalability
I am very pleased to see that the authors have provided experimental results in this regard, and the results are quite good. I think the authors can also add some explanations of the experimental results in the final article, for example: why the performance of the Unified method is better than training on individual tasks in Table 1 of the attached PDF.
### Q3: Insufficient Qualitative Analysis
Thanks to the authors for filling in the Qualitative Analysis, which will help readers understand the article better.
---
Reply to Comment 1.1.1:
Comment: We are glad that our reply helped clarifying the reviewer's doubts.
As suggested by the reviewer, we will extend the discussion on why the unified method achieves better results than training on individual tasks, in the final version of the paper. We believe this is mainly due to an improved generalization ability which is possible thanks to the increased training data volume obtained by merging examples from all procedures.
We sincerely thank the reviewer for the valuable suggestions. | Summary: This paper introduced a learning-based task graph generation for procedural actions and mistake detection. Compared to previous approaches which usually consider natural language based descriptions, the proposed approach aims to use backward propagation to adaptively optimize procedural action sequences. To the reviewer, the introduced approach is much more flexible compared to other "hard-coded" language-based descriptions. In summary, this papers introduced a simple yet inspiring approach for graph generation.
Strengths: 1. The proposed optimization based graph generation seems inspiring and novel. Unlike other existing hand-crafted methods, learning-based graph generation can be end-to-end trained with task model parameters. This has the potentials in significantly boosting model performances, since the graph is no longer hard-coded, which makes models able to automatically search for the optimal correlations.
2. The methodology is mostly clear and easy to follow. The authors well demonstrate how to establish and end-to-end optimize trainable graphs.
3. The experimental results are quite promising in the experiment sections, and the improvements are non-trivial and encouraging.
Weaknesses: 1. The explanations of the contrastive loss are somewhat hard to follow. In typical contrastive learning, pushing apart or pulling together samples always requires positive/negative and anchor samples. The authors are expected to provide a more detailed discussion to help readers understand.
2. In line 140, the authors make an assumption that no repetitive actions exists in sequences, without any further discussions. It is suggested that authors could provide more explanations on why they have this assumption and what impact this assumption causes to the proposed approach.
3. It is suggested that how the video and text embeddings are sent to the overall framework should be clearly demonstrated in the Figure 3. In the current submitted version, it is somehow confusing to the readers how the video features are considered into the training process.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to weakness for details.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations that the authors introduced are reasonable considering current submission, and they do not have any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work and for their supportive remarks and constructive feedback. In the following, we report our replies to the reviewer's specific queries.
## Q1: The explanations of contrastive loss are somehow hard to follow. For typical contrastive learning, to push away or to pull close the distances always require positive/negative and anchor samples. The authors are expected to provide more detailed discussions to help readers understand.
Our loss function is defined following the Maximum Likelihood framework; hence, by optimizing it, we aim to obtain the graph which maximizes the probability of the sequences observed in the training set. For better interpretability of the mechanism by which the proposed learning framework allows us to obtain meaningful graph representations, we noted that the derived maximum-likelihood loss function reported in Eq. (6) also has a contrastive interpretation, in which edges between the currently observed key-step and all key-steps appearing before the current one in a given sequence are considered positive examples of dependencies, while edges between future key-steps and the currently observed ones are considered negative examples of dependencies. By maximizing the weight of positive dependencies (term in cyan in Eq. (6)) and minimizing the weight of negative dependencies (term in green in Eq. (6)), we aim to learn informative key-step dependencies (the current step depends on the past ones), beyond the obvious dependencies (all future key-steps depend on all past key-steps). As in other contrastive learning frameworks [1, 2], our approach only includes positives and negatives and does not explicitly consider anchor examples. We agree with the reviewer on the lack of clarity and will add this more detailed discussion to the paper.
[1] Oord, A. van den, et al."Representation learning with contrastive predictive coding." arXiv preprint arXiv:1807.03748 (2018).
[2] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." International conference on machine learning. PMLR, 2021.
## Q2: In line 140, the authors make an assumption that no repetitive actions exists in sequences, without any further discussions. It is suggested that authors could provide more explanations on why they have this assumption and what impact this assumption causes to the proposed approach.
We model the task-graph as a directed acyclic graph in which key-steps are nodes and edges represent dependencies. To define our learning framework in the context of Maximum Likelihood, we interpret each action sequence as a possible topological sort of the graph. Since topological sorts cannot contain repetitions [3], we then assume that our *training* action sequences do not contain repetitions. Practically, we map each sequence with repetitions to multiple sequences with no repetitions as described at page 4, footnote 2 (e.g., ABCAD → (ABCD, BCAD)).
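To make the footnote's mapping concrete, here is a small illustrative sketch (our own reconstruction, not the authors' code; in particular, extending the mapping to multiple repeated key-steps via a Cartesian product of kept occurrences is our assumption) that expands a sequence with repetitions into its repetition-free variants:

```python
from itertools import product

def expand_repetitions(seq):
    """Map a key-step sequence with repetitions to all repetition-free
    variants, keeping exactly one occurrence of each repeated key-step,
    e.g. "ABCAD" -> ["ABCD", "BCAD"]."""
    positions = {}
    for i, step in enumerate(seq):
        positions.setdefault(step, []).append(i)
    variants = []
    # Choose one kept occurrence per distinct key-step, then read off
    # the surviving positions in their original order.
    for choice in product(*positions.values()):
        kept = sorted(choice)
        variants.append("".join(seq[i] for i in kept))
    return variants
```

A sequence with no repetitions maps to itself; a sequence with one key-step repeated twice yields the two variants shown in the paper's example.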
When defined as directed acyclic graphs, task graphs have a limited ability to model key-step repetition, a limitation shared by other works [4-6]. While future works should focus on overcoming such limitations, we observe that our approach to mistake detection can still be used in all those cases in which a key-step can be arbitrarily repeated (e.g., spread peanut butter), as repetitions are not removed at inference and pre-conditions can still be checked even for repeated key-steps.
The reviewer is also referred to the reply to Q1 of reviewer 1 for an in-depth discussion on how our method deals with key-step repetitions and how future works can improve on this aspect.
[3] Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2022). *Introduction to algorithms*. MIT press.
[4] Ashutosh, Kumar, et al. "Video-mined task graphs for keystep recognition in instructional videos." Advances in Neural Information Processing Systems 36 (2024).
[5] Zhou, Honglu, et al. "Procedure-aware pretraining for instructional video understanding." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023.
[6] Grauman, Kristen, et al. "Ego-exo4d: Understanding skilled human activity from first-and third-person perspectives." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
## Q3: It is suggested that how the video and text embeddings are fed into the overall framework should be clearly demonstrated in Figure 3. In the currently submitted version, it is somewhat confusing to readers how the video features are incorporated into the training process
Our Task Graph Transformer (TGT) approach can take as input either D-dimensional text embeddings or video embeddings extracted from key-step segments. Text embeddings are extracted from key-step names, whereas video embeddings are extracted from key-step video segments. In both cases, we use a pre-trained EgoVLPv2 model. When using text embeddings, at each forward pass, the TGT takes as input the same batch of $n$ text embeddings and outputs the $(n + 2) \times (n + 2)$ adjacency matrix, which is optimized with the proposed loss function. When training with video embeddings, since more than one embedding (hence more than one video segment) can be associated with a given key-step, at each forward pass we randomly sample one embedding per key-step, thus obtaining a set of $n$ embeddings which is used to generate the $(n+2) \times (n+2)$ adjacency matrix, supervised with the proposed loss. We would like to note that we train our model using either only text embeddings or only video embeddings. We will clarify these points in the paper and we have revised Figure 3 to be more detailed and accurate. The reviewer is referred to Figure 1 in the PDF file attached to the author rebuttal for details.
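The per-key-step sampling step can be sketched as follows (illustrative code with hypothetical names; the actual model operates on D-dimensional EgoVLPv2 embeddings within a transformer):

```python
import random

def sample_keystep_batch(embeddings_per_keystep, seed=None):
    """Randomly pick one video embedding per key-step for a forward pass,
    yielding n embeddings from which the (n+2)x(n+2) adjacency is predicted."""
    rng = random.Random(seed)
    return [rng.choice(embs) for embs in embeddings_per_keystep]

# Two key-steps, each with one or more candidate segment embeddings:
batch = sample_keystep_batch([["e1a", "e1b"], ["e2a"]], seed=0)
print(len(batch))  # → 2
```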
---
Rebuttal Comment 1.1:
Comment: The authors addressed most of my concerns. I would like to change my rating to acceptance. | Summary: The paper proposes a differentiable loss function based on the maximum likelihood of generating task graphs from video features. The two proposed methods (DO and TGT) show strong performance, verifying their effectiveness in generating task graphs. Moreover, the predicted graphs are evaluated on downstream tasks to demonstrate the influence of explicit graph representations for general video understanding.
Strengths: In general, I like the paper from every perspective. It advances the progress of using task graphs for video understanding.
- The paper is well-written and easy to understand.
- The literature survey is comprehensive enough, which helps readers easily understand the motivation and the problem setting.
- The experimental comparison is extensive enough, including a rich set of recent baselines that vary from different types of task graph generation.
- The two proposed methods show strong performance on the evaluated datasets, verifying the effectiveness of the proposed differentiable task graph generation.
- The generated explicit task graph representation is also evaluated on the downstream tasks.
- The discussed limitations, e.g., the assumption of availability of key-step sequences, are insightful and are critical to the future development of task graphs for video understanding.
Weaknesses: - L140 describes that the authors separate sequences if the key-step repetition is available. I am wondering if this will limit the applications for some daily scenarios where a key step needs to be conducted many times, e.g., spreading peanut butter multiple times.
- In L329, it would be nice to have a reference of those action recognition works that assume key-step sequences are available.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weaknesses. My concern is the potential limitation for applications to long-form daily-life procedure activities where repetition is crucial.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes, the authors have discussed the limitations of this work, which is insightful and can facilitate future works of task graphs for video understanding.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of the paper and for providing constructive feedback and suggestions for improvement. In the following, we report our answers to the reviewer's specific queries.
## Q1: L140 describes that the authors separate sequences if the key-step repetition is available. I am wondering if this will limit the applications for some daily scenarios where a key step needs to be conducted many times, e.g., spreading peanut butter multiple times.
As correctly observed by the reviewer, in order to model action sequences as topological sorts of a directed acyclic graph (the task-graph), *at training time*, we assume that sequences do not contain repetitions and map all those sequences which may contain some repetitions to sequences where each action appears only once (e.g., ABCAD → (ABCD, BCAD) - see footnote 2, page 4).
We would like to note that the limited ability of task graphs to explicitly model repeatable key-steps is a current limitation of the task-graph representation, also affecting previous approaches which do not explicitly model key-step repetitions [1, 2, 3, 4]. Limited work towards modeling key-step repetitions has been very recently reported in [9], where the classic task-graph definition has been extended to also include a "repeatable" node attribute. It is worth noting, however, that the work of [9] is only carried out in the context of the manual labeling of task-graphs (i.e., "repeatable" attributes are manually added - not computed), while it is still unclear how such attributes should be effectively learned from data.
Despite this limitation, our error detection model still correctly handles all those cases in which it is possible to execute a key-step multiple times (e.g., spreading peanut butter). Indeed, at test time, it will always be possible to verify the pre-conditions of a key-step through the predicted task-graph, even if that key-step has previously appeared in the sequence.
Nevertheless, we believe that a better and more native modeling of key-step repetitions (e.g., to handle cases in which some key-steps may need to be executed exactly $n$ times in a correct procedure - "cut three slices of bread") is an interesting avenue for future work and extensions to the proposed learning framework, and we plan to include the discussion above in the "limitations" paragraph of the manuscript.
[1] Ashutosh, Kumar, et al. "Video-mined task graphs for keystep recognition in instructional videos." Advances in Neural Information Processing Systems 36 (2024).
[2] Zhou, Honglu, et al. "Procedure-aware pretraining for instructional video understanding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Sohn, Sungryull, et al. "Meta reinforcement learning with autonomous inference of subtask dependencies." arXiv preprint arXiv:2001.00248 (2020).
[4] Jang, Yunseok, et al. "Multimodal subtask graph generation from instructional videos." arXiv preprint arXiv:2302.08672 (2023).
## Q2: In L329, it would be nice to have a reference of those action recognition works that assume key-step sequences are available.
We thank the reviewer for their suggestion. Video understanding methods focusing on action analysis, such as action recognition and temporal action detection, assume that the start and end times of actions, as well as their categories, are marked in videos. All these datasets can be exploited within our framework to learn task-graph representations of the considered activities. We will revise the paper to clarify this point and will add references to the following works proposing datasets and formalizing tasks with such characteristics:
[5] Caba Heilbron, Fabian, et al. "Activitynet: A large-scale video benchmark for human activity understanding." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
[6] Kay, Will, et al. "The kinetics human action video dataset." arXiv preprint arXiv:1705.06950 (2017).
[7] Jang, Youngkyoon, et al. "Epic-tent: An egocentric video dataset for camping tent assembly." Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. 2019.
[8] Grauman, Kristen, et al. "Ego4d: Around the world in 3,000 hours of egocentric video." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[9] Grauman, Kristen, et al. "Ego-exo4d: Understanding skilled human activity from first-and third-person perspectives." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comprehensive response and all my concerns have been well-addressed.
I am glad that the authors plan to include the repetition issue in the limitation discussion (also mentioned by reviewer tC41 in the items on scalability and versatility) as this may be a more crucial direction for real-world application and can inspire the follow-up works.
After reading all the comments, I will keep my score as strong accept. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments and constructive feedback.
All reviewers appreciated the proposed method. **pcHQ** highlighted that our method demonstrates strong performance on the analyzed datasets, confirming the effectiveness of differentiable task graph generation. **PJ9t** found our approach to task graph generation inspiring and innovative, particularly appreciating our ability to establish end-to-end optimized trainable task graphs. **tC41** acknowledged that our method represents a novel and promising approach to address the problem of error detection through task graph learning.
Furthermore, all reviewers expressed satisfaction with the experiments conducted and the performance achieved with our method. **pcHQ** highlighted that *“the experimental comparison is extensive enough, including a rich set of recent baselines that vary from different types of task graph generation”* and appreciated the evaluation of the proposed representation on downstream tasks. **PJ9t** stated that *“the experimental results are quite promising in the experiment sections, and the improvements are non-trivial and encouraging”*. **tC41** wrote that *“the proposed methods achieve superior performance in task graph generation and mistake detection compared to existing methods”*.
We are particularly pleased to read that **pcHQ** found our paper *“well-written and easy to understand”*, with a *“comprehensive literature survey”*, hence clearly stating the positioning with respect to prior works and with insightful discussed limitations; **PJ9t** considered our method *“mostly clear and easy to follow”*; and **tC41** appreciated *“the availability of code for replication”*, which is a recognition of our commitment to transparency and reproducibility of research.
Besides this positive feedback, reviewers also identified areas of improvement for the proposed work. We addressed the comments of each reviewer below and believe that their feedback has been extremely useful in improving various aspects of our work. In short:
- (**pcHQ**, Q1 - **PJ9t**, Q2) We elaborated on the ability of our approach to handle key-step repetitions and how this affects mistake detection. We highlighted that this is a common limitation of methods based on task-graph and that our mistake detection algorithm is not affected by key-step repetitions in most cases;
- (**pcHQ**, Q2) We improved presentation by adding references where needed, as suggested by reviewers, and particularly on those tasks considering a similar setup to ours in which fine-grained key-step annotations are available in videos;
- (**PJ9t**, Q1) We clarified the description of the contrastive loss in terms of positives and negatives. We clarified that our loss is defined based on the Maximum Likelihood principle, but it can still be interpreted as a form of contrastive loss for more insights into the learning mechanism;
- (**PJ9t**, Q3) We improved Figure 3 to better clarify how text and video embeddings are handled in the proposed TGT architecture. The revised figure with its caption is reported in Figure 1 in the PDF attached to this rebuttal, and will be included in the final version of the paper;
- (**PJ9t**, Q3) We discussed how the performance of the proposed system depends on the performance of the underlying action recognition system, both from the point of view of the computational overhead and from the point of view of the effect of noisy action prediction. This latter point has been addressed by performing controlled experiments in which action recognition noise is simulated by perturbing ground truth annotations. The results of these experiments are reported in Figure 2 of the attached PDF and highlight that the proposed method exhibits a certain degree of robustness with respect to noise in the detected actions;
- (**tC41**, Q1) We discussed the versatility and scalability of the proposed approach, reporting new experiments assessing the generalization and transfer learning abilities of the proposed TGT method. The results of these experiments are reported in Tables 1 and 2 of the attached PDF and show that 1) TGT makes it possible to replace several models (one per procedure) with a unified one, and 2) TGT exhibits transfer learning abilities enabling few-shot learning;
- (**tC41**, Q2) We will improve the qualitative analysis by adding more examples of generated graphs in the text based on the predicted graphs included in the supplementary material, and by adding qualitative mistake detection results. We report two instances of such qualitative examples in Figure 3 of the attached PDF and plan to add more examples in the final version of the paper. We also reported a discussion of corner cases, better highlighting the limitations of the proposed approach. We will add this discussion to the paper.
Overall, we believe that the revision process improved the quality of the paper and we are grateful to the reviewers for their feedback. The reviewers and area chair are referred to the individual rebuttals and to the attached PDF for more details on the points briefly discussed above.
Pdf: /pdf/f308d4bdb2508a1a42a2452ff36fe400942b82e6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Star Geometry of Critic-Based Regularizer Learning | Accept (poster) | Summary: The paper presents a theoretical analysis of learning regularizers for inverse problems using a critic-based loss. By focusing on a specific family of regularizers (gauges of star-shaped bodies) amenable to theoretical analysis, the authors provide a number of theoretical insights into existence and uniqueness within existing frameworks (based on the Wasserstein distance), with further extensions to f-divergences. This is further connected to the existing literature on learned regularization by considering star bodies corresponding to weakly convex regularisers.
Strengths: The paper presents a very novel idea of utilising the theoretical framework of star-bodies in order to provide theoretical interpretability of critic based regularization. Given the recent interest in learned regularization, this paper opens up a number of new research directions both theoretically and numerically.
Weaknesses: There are a few weaknesses, which in my opinion are not limiting. To be precise, the paper focuses on a specific class of regularizers (gauges of star-shaped bodies) and a specific type of critic-based loss functions (derived from variational representations of statistical distances). It would be interesting to see if the results can be extended to other classes of regularizers and loss functions.
The paper primarily investigates this class of regularizers theoretically, with very few experiments. It does not include experiments demonstrating the practical performance of the learned regularizers, and while the theoretical results are valuable, it would be helpful to see how they translate into practice. This could be of interest as future work for practitioners working on inverse problems.
Technical Quality: 3
Clarity: 4
Questions for Authors: The paper is very well-written, and as such there are very few questions that I have:
* One of the motivations, also discussed in 1.2 (and line 57), is that uniqueness of the transport potential does not hold when considering the Wasserstein-1-based loss. I would like to refer the authors to arxiv.org/abs/2211.00820, as in fact (under some assumptions) the transport potential can be shown to be unique $D_n$ almost everywhere. With this result in mind, could you explain intuitively why, in Theorem 2.4, it is possible to prove uniqueness without the a.e. qualifier?
* Line 130 "this map" - which map is this referring to?
* I am not entirely sure what the relevance of Remark 2.8 is. In practice, rescaling the distribution destroys information from the true distribution - the critic that is desired is the one that would be operating on $D_r$ and $D_n$, and not $D_r$ and $\lambda D_n$.
* It would be very interesting to see whether the optimal regularisers derived as minimisers of the variational objective are also optimal regularisers in the sense of Leong et al.
* Line 50 "about the measurements": Classically, the measurements themselves live in a different space from the original data. For this reason, Lunz et al. utilise backprojection to first map them to the same space.
* Line 29 "ill-posed meaning that there are an infinite number ..." - In the inverse problem literature, ill-posedness does not correspond only to non-uniqueness. I suggest referring to Hadamard's definition of well-posedness, e.g., see Shumaylov et al. or Arridge, Simon, et al. "Solving inverse problems using data-driven models." Acta Numerica 28 (2019): 1-174.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: * See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review and positive comments that our paper is "very well-written" and that our theoretical framework "opens up a number of new research directions both theoretically and numerically." Below we discuss some of the main questions that were raised.
>How do we obtain uniqueness
Thank you for raising this point. The reason why Theorem 2.4 does not require an "almost everywhere" stipulation is that our optimization problem is over the class of star body gauges, as opposed to star-shaped gauges or a more general function class. At a high level, star body gauges have additional structure that allows us to obtain uniqueness; this structure can be lost when one considers more general star-shaped gauges.
In more detail, star bodies are in one-to-one correspondence with their radial functions (or the reciprocal of their gauge) and are uniquely defined by them. As a result, dual mixed volumes also exactly specify star bodies, e.g., for star bodies $K,\tilde{K}, L,$ and $\tilde{L}$, we have $\tilde{V}(K,\tilde{K}) = \tilde{V}(L,\tilde{L})$ if and only if $K = L$ and $\tilde{K} = \tilde{L}$ (see Lutwak [52]). This property aids in establishing uniqueness in our main Theorem.
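For completeness, Lutwak's dual mixed volume referenced here takes the standard form (up to normalization conventions, which may differ slightly from the paper's):

$$\tilde{V}_1(K, L) = \frac{1}{d} \int_{S^{d-1}} \rho_K(u)^{d-1}\, \rho_L(u)\, du, \qquad \rho_K(u) := \sup\{t \geqslant 0 : t u \in K\} = ||u||_K^{-1},$$

so that, since a star body is uniquely determined by its radial function $\rho_K$, identities between such integrals translate into identities between the bodies themselves.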
If we relax the requirement of being a star body to simply being star-shaped, we can lose such a property. In particular, if we consider gauges of star-shaped sets, it could be possible for the dual mixed volumes of different sets to be equal because two star-shaped gauges can be equal "almost everywhere". For example, consider the following two star-shaped sets in $\mathbb{R}^2$: $K = B_{\ell_2}$ is the unit $\ell_2$ ball, and $L = B_{\ell_2} \cup \{(0,t) : t \geqslant 0\}$. Both sets are star-shaped with respect to the origin, but $K$ is a star body while $L$ is not. This is because the ray $\{t e_2 : t \geqslant 0\}$, where $e_2 = (0,1)$, intersects the boundary of $L$ infinitely many times. The gauges of $K$ and $L$ are equal to $||x||_{\ell_2}$ for any $x$ outside of the set of measure zero $\{(0,t) : t \geqslant 0\}.$ This is because for any $(0,t)$ with $t > 0$, $||(0,t)||_K = |t|$ while $||(0,t)||_L = 0$.
>Line 130
Our apologies, "this map" refers to the map $x \mapsto ||x||_K$. We will fix this in the manuscript.
>Remark 2.8
We thank the reviewer for noting this. Our goal in this remark was to show that if the distributions do not satisfy the assumptions of the theorem, there is a way to reweight the objective so that the assumptions of the theorem are satisfied. By positive homogeneity of the gauge, this essentially amounts to reweighting/scaling one of the terms $\lambda\mathbb{E}_{\mathcal{D}_i}[||x||_K]$ for some $\lambda \geqslant 0$. We will update the discussion in the manuscript to make this more clear.
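Concretely, by positive homogeneity of the gauge, for any $\lambda \geqslant 0$,

$$\mathbb{E}_{x \sim \lambda \mathcal{D}_n}\big[||x||_K\big] = \mathbb{E}_{x \sim \mathcal{D}_n}\big[||\lambda x||_K\big] = \lambda\, \mathbb{E}_{x \sim \mathcal{D}_n}\big[||x||_K\big],$$

so rescaling the distribution is equivalent to reweighting its term in the objective.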
>Line 50
Thank you for pointing this out. We intended to use the phrase "integrate information about the measurements" in a vague sense, meaning that the "bad" distribution $\mathcal{D}_n$ depends on $y$ in some way. We will update this phrasing to avoid this confusion.
>Line 29
Thank you for making this point. Indeed, ill-posedness additionally encompasses a lack of a solution's existence and whether it varies continuously in the data. We will update this in the revised manuscript and cite Arridge et al.'s survey.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their clarifications and answers. I would suggest adding a note about the 'almost everywhere' as a distinguishing factor from the rest of the literature. I would be very happy to see the paper accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you to the reviewer for taking the time to consider our rebuttal and for the suggestion. We agree that this is a good point to highlight, and will be sure to do so in a revised version of the manuscript. We sincerely appreciate the reviewer's support of our paper. | Summary: This paper explores the learning of task-dependent regularizers using critic-based loss functions in the context of variational regularization for statistical inference and inverse problems. It particularly focuses on a specific family of regularizers, namely gauges of star-shaped bodies, which are common in practice and similar to those parameterized by deep neural networks. The study introduces a novel approach utilizing tools from star geometry and dual Brunn-Minkowski theory, which allows the derivation of exact expressions for the optimal regularizer under certain conditions and explores the properties of neural network architectures that can yield such regularizers. This work contributes to a deeper understanding of the structure of data-driven regularizers and their optimization characteristics.
Strengths: The problem setup and the theoretical framework of the paper appear rigorous and methodologically sound. The motivation behind the study is robust, addressing the theoretical gaps in understanding how regularizers are learned.
Weaknesses: Some of the results presented in the paper are complex and difficult to interpret, which may limit their accessibility to a broader audience. Moreover, the paper does not clearly articulate the practical implications of these theoretical findings for real-world applications, which could hinder its impact.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could you provide additional context on the adversarial regularization framework, particularly regarding the roles and definitions of the two distributions $D_r$ and $D_n$?
2. The significance of the results in Theorem 3.1 is not clear to me. Could you elaborate on why these results are important and what they contribute to the field of regularizer learning?
3. The paper lacks experimental validation of its theoretical constructs through simulations or empirical data. Some experiments (even simple ones) would be helpful for better understanding.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review and positive comments that our theoretical framework is "rigorous and methodologically sound" and that our results "address the theoretical gaps in understanding how regularizers are learned." Below we discuss some of the main questions that were raised.
>Additional context on the adversarial regularization framework
In the adversarial regularization framework of Lunz et al [51], the distributions $\mathcal{D}_r$ and $\mathcal{D}_n$ are user-specified distributions aimed to capture "real" or "good" data and "noisy" or "bad" data, respectively. The choice of these distributions depends on the specific application one is interested in. In the context of inverse problems, $\mathcal{D}_r$ is often chosen to be the distribution of images similar to the image we wish to reconstruct and $\mathcal{D}_n$ is the distribution of noisy or poor reconstructions. For example, suppose one is interested in solving an inverse problem $y = \mathcal{A}(x) + \eta$ where $\mathcal{A}$ is known and $\eta$ is drawn from some noise distribution $\mathcal{P}$. Suppose we have prior knowledge that $x$ is drawn from a distribution $\mathcal{D}_r$ and we have access to many samples from this distribution (e.g., we want to solve inpainting problems on human faces, and we have access to many samples from the CelebA dataset). One choice for $\mathcal{D}_n$ is the distribution of backprojected reconstructions, obtained by taking one's measurements and "naively" inverting them using the pseudoinverse of $\mathcal{A}$: $\hat{x} \sim \mathcal{D}_n$ if and only if $\hat{x} = \mathcal{A}^{\dagger} y$ where $y = \mathcal{A}(x) + \eta$, $x \sim \mathcal{D}_r$, and $\eta \sim \mathcal{P}$. These images will be corrupted by noise that depends on both the measurement matrix $\mathcal{A}$ and the additive noise $\eta$.
This backprojected distribution is a good choice for the "bad" distribution for two reasons. One is that the distribution contains noise that is pertinent to the inverse problem (i.e., the samples depend on $\eta$ and $\mathcal{A}$). Another reason is that backprojected reconstructions are often used as the initial iterate for gradient-based methods to solve
$$\min_{x \in \mathbb{R}^d} \frac{1}{2}|| y - \mathcal{A}(x)||_2^2 + \mathcal{R}(x).$$
Hence, if the regularizer $\mathcal{R}(x)$ has been learned to assign very low likelihood to backprojected reconstructions, using them as initial iterates to a gradient-based algorithm would aid in the algorithm making good progress initially, as the regularizer would give gradients that would push the algorithm away from these poor, noisy solutions.
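The role of the backprojected initialization can be sketched for a linear forward operator as follows (illustrative code; the step size, iteration count, and the Tikhonov-style regularizer gradient in the usage example are hypothetical, not the learned regularizer):

```python
import numpy as np

def reconstruct(A, y, reg_grad, lam=0.1, step=0.01, iters=5000):
    """Gradient descent on 0.5*||y - A x||^2 + lam * R(x), initialized at the
    backprojected (pseudoinverse) reconstruction described above."""
    x = np.linalg.pinv(A) @ y          # "naive" backprojected initial iterate
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y) + lam * reg_grad(x))
    return x

# With A = I and R(x) = 0.5*||x||^2 (so reg_grad(x) = x),
# the minimizer is y / (1 + lam).
x_hat = reconstruct(np.eye(3), np.ones(3), reg_grad=lambda z: z)
print(np.round(x_hat, 3))  # → [0.909 0.909 0.909]
```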
We will add this additional discussion as background on the adversarial regularization framework in an updated version of the manuscript.
>The significance of Theorem 3.1
Thank you for raising this point. At a high level, Theorem 3.1 is significant and has implications for regularizer learning for three reasons:
- This theorem introduces and analyzes a novel critic-based loss function for learning regularizers. To our knowledge, the only other critic-based loss function that has been considered in the literature is the loss from the adversarial regularization framework of Lunz et al [51]. Introducing novel loss criteria to learn regularizers is broadly useful for the field of regularizer learning, and opens new avenues for future research, both from a theoretical and practical perspective.
- The theorem offers novel insights into the structure of regularizers one would learn from this loss. We give explicit expressions for their structure and are able to compare and contrast such regularizers with those found via the adversarial regularization framework of Lunz et al [51]. This can be seen by comparing the visual examples shown in Figures 2 and 4.
- Analyzing this loss brings new challenges and technical contributions for both the field of star geometry and regularizer learning. Please see our global comment for more details on the new technical challenges that this theorem addresses.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Thank you again for your detailed review and positive comments. We wanted to provide you with a brief update on our work. Based on your and other reviewer's comments, we have conducted new experiments to support our theory. Please see the official comment titled "New Experimental Results". We hope that these experiments along with our rebuttal can potentially address your concerns.
---
Rebuttal Comment 1.2:
Comment: Thank you for answering my questions. I keep my original score.
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer for considering our rebuttal and keeping their positive score. | Summary: This paper leverages the star geometry and dual Brunn-Minkowski theory to study the optimal critic-based regularizers. The authors illustrate the optimal regularizer can be interpreted using dual mixed volumes that depend on data distribution. Theorems are proved for the existence and uniqueness of the optimal regularizer. The authors also identify the neural network architectures for learning the star body gauges for the optimal regularizer.
Strengths: The paper leverages the star geometry in understanding the geometry of unsupervised regularizer learning.
Weaknesses: As cited in the submission, this paper is closely related to [50]. Many concepts, theoretical results, and even examples resemble or coincide with those in [50]. The paper fails to clearly distinguish itself from the existing work [50].
Technical Quality: 3
Clarity: 3
Questions for Authors: In what scope the submitted work extends [50]?
On line 68, "assigns" should be "assigned"?
On line 118, what does "[x, y]" mean if both x, y are points in R^d? Did you mean the line segment connecting x and y?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: See the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions. Below we would like to address concerns/questions that were raised.
>Regarding novelty in relation to [50]
We appreciate the reviewer's concern regarding novelty in relation to [50]. Please see our global comment regarding the novelty and significance of our work. The following points summarize our work's impact and novelty:
- Our research introduces novel theory and a new framework for understanding critic-based regularizer learning, addressing a significant gap in existing theoretical foundations.
- We tackle several novel technical challenges inherent to the critic-based setting, distinguishing our work from [50].
- This work also demonstrates the applicability and utility of star geometry tools for regularizer learning and broader machine learning contexts.
We would also like to highlight that while our results and examples look similar to those in [50], all results in the present work are novel. Moreover, all examples we visualize in the paper are different from those in [50]. The only example that is similar is the one shown in Figure 7 in the supplemental material. Our experiment is different, however, since we show that this star body induces a weakly convex (squared) gauge, which was not explored in [50].
>Typos and notation
Thank you for catching those typos. We will update them in our revised submission. Yes, $[x,y]$ refers to the line segment between these two points in $\mathbb{R}^d$, i.e., $[x,y] := \{(1-t)x + ty : t \in [0,1]\}.$ We will add this definition in the manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their clarification. I see the contribution better and will raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you to the reviewer for taking the time to consider our rebuttal. We sincerely appreciate you raising your score. | Summary: This submission extends the techniques of [50], i.e., tools from star geometry and dual Brunn-Minkowski theory, to characterize the optimal regularizer under the adversarial regularization framework [51] for inverse problems. $\alpha$-divergences as loss functions for learning regularizers are also discussed, with dual mixed volume interpretations. Weak convexity and compatible neural network architectures are further discussed to address computational concerns related to the proposed star body regularizers.
Strengths: Extending the analysis and results of [50] to the adversarial regularization framework and showing its connections to the \alpha-divergence is an interesting theoretical contribution. The specified neural network layers compatible with the star body regularizer can also shed light on practice.
Weaknesses: My major concern with this work is that it is unclear if the proposed new \alpha-divergence-based loss functions are useful for the adversarial regularization problem this submission studies. The original adversarial regularization work [51] for inverse problems, albeit published in NeurIPS 6 years ago, has reported experimental results to validate the proposed framework. On the other hand, if positioned as a pure theory work, given the existence of [50], I feel that the theoretical contribution of this submission seems a bit short for publication in NeurIPS.
As a minor thing, it would be helpful for readability if an overview of the organization and the flow of the paper could be briefly presented in the introduction.
After the authors added new experiments during the discussion phase
---------------------------------------------------------------------------------------------
These two concerns are partially addressed. On one hand, I can see the potential of the proposed approach from these experiments. On the other hand, the experiments are still preliminary and small scale.
Technical Quality: 3
Clarity: 3
Questions for Authors: Prop. 4.3 requires positive homogeneity of each layer, which limits the choice of activation functions in practice to a subset of piecewise linear functions, such as ReLU and its variants. I feel that this could be a limitation of the proposed approach. Moreover, while each layer is an injective function, due to the homogeneity of activations, the composite of such layers will not be injective (see, e.g., Sec. 3 of the paper below); will this break the proof of Prop. 4.3?
Dinh, Laurent, et al. "Sharp minima can generalize for deep nets." ICML 2017.
After the authors' rebuttal
---------------------------------------------------------------------------------------------------
The question regarding the composite of injectivities does not apply to Prop. 4.3.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Mostly. Please refer to the Questions section for a concern I have regarding limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review and positive comments that our insights offer "an interesting theoretical contribution". Below we would like to address concerns/questions that were raised.
>Regarding novelty in relation to [50]
We appreciate the reviewer's concern regarding novelty in relation to [50]. Please see our global comment above regarding the novelty and significance of our work. We would additionally like to note that the new proposed $\alpha$-divergence-based loss functions are meant to offer an alternative to the adversarial regularization framework that still falls under the umbrella of "critic-based" regularizer learning frameworks. While we did not provide experiments in this paper, we believe our theoretical contributions provide novel insights into unsupervised, critic-based regularizer learning, an area that has been underdeveloped theoretically, and offer convincing evidence that other critic-based losses are worth exploring both from a theoretical and practical perspective.
>Adding an overview
Thank you for this suggestion. We agree that this would help with readability of the manuscript. We will add this in an updated version of the manuscript.
>Regarding Proposition 4.3
Yes, we agree that requiring positive homogeneity of each layer limits the activation functions one can use. However, such activations are commonly employed in such tasks. For example, the experiments that were done in Lunz et al [51] and in subsequent applications also used a network of this form, i.e., a convolutional neural network with no biases and LeakyReLU activations.
Regarding the concern about a lack of injectivity, we first note that Prop 4.3 requires that each layer $f_i(\cdot)$ is injective and positively homogeneous. Note that the composition of any two injective and positively homogeneous functions is again injective and positively homogeneous. To see this, consider two injective and positively homogeneous functions $f$ and $g$. Then for any $\lambda \geqslant 0$ and $x$, $f(g(\lambda x)) = f(\lambda g(x)) = \lambda f(g(x))$. For injectivity, suppose $f(g(x)) = f(g(y))$; then $g(x) = g(y)$ by injectivity of $f$, and hence $x = y$ by injectivity of $g$. Therefore $f(g(\cdot))$ is injective.
When applying this to neural networks, note that our result requires each layer $f_i(\cdot)$ to be injective and positively homogeneous. This means that the linear layer $W_i$ *composed with the activation function* $\sigma_i$ must be injective and positively homogeneous, i.e., $f_i(\cdot) = \sigma_i(W_i\cdot)$ must be injective and positively homogeneous. This is automatically satisfied if the linear layer is injective and the activation is LeakyReLU, since LeakyReLU is bijective on $\mathbb{R}$. It can fail, however, if one uses ReLU. For example, $\mathrm{ReLU}(I \cdot)$ is not injective, even though the identity matrix $I$ is invertible. Certain matrices $W$, however, do allow for injective maps $x \mapsto \mathrm{ReLU}(Wx)$. See, for example, the notion of a "directed spanning set" in Puthawala et al. or the injective 1x1 convolutions of Kothari et al.
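To make this concrete, the following small numpy sketch (illustrative only; the matrices, slope, and test points are arbitrary choices, not from our experiments) checks that ReLU after an invertible linear map fails injectivity while LeakyReLU preserves it, and that positive homogeneity survives composition:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, slope=0.1):
    return np.where(x >= 0, x, slope * x)

# ReLU after the (invertible) identity map is NOT injective:
# two distinct inputs with negative coordinates collapse together.
x1, x2 = np.array([-1.0, 2.0]), np.array([-3.0, 2.0])
assert np.allclose(relu(x1), relu(x2)) and not np.allclose(x1, x2)

# LeakyReLU is bijective coordinate-wise, so injectivity is preserved.
assert not np.allclose(leaky_relu(x1), leaky_relu(x2))

# Positive homogeneity is preserved under composition:
# f(lam * x) == lam * f(x) for lam >= 0.
W1 = np.array([[2.0, 1.0], [0.0, 1.0]])   # injective linear layer
W2 = np.array([[1.0, -1.0], [1.0, 1.0]])  # injective linear layer
f = lambda x: leaky_relu(W2 @ leaky_relu(W1 @ x))
assert np.allclose(f(3.5 * x1), 3.5 * f(x1))
```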
Puthawala et al, Globally Injective ReLU Networks, Journal of Machine Learning Research 23 (2022) 1-55
Kothari et al, TRUMPETS: Injective Flows for Inference and Inverse Problems, Uncertainty in Artificial Intelligence (2021)
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response to concerns and questions, which helps me to better assess the novelty and theoretical contribution of this submission. As a result, I have raised my rating.
My previous question regarding the composite of injectivities actually applies to the mappings from parameters to network functions, rather than to the input-output network functions, so it does not apply to Prop. 4.3 and the authors' explanation is correct.
With all that said, my concerns regarding limited activation function choices and lack of experimental validations remain. That is why I am not able to raise the rating to a firm accept for NeurIPS.
---
Reply to Comment 1.1.1:
Comment: Thank you to the reviewer for considering our rebuttal and for raising their rating. Based on your concerns, we conducted experiments on two points that were raised, namely the applicability of the $\alpha$-divergence based loss and the importance of the homogeneity of the activation function. Please see the official comment posted titled "New Experimental Results".
Thank you for voicing your concerns and suggesting we explore this further. We believe that these additional experiments are valuable in providing support for our theory and will significantly improve the paper quality. Please let us know if you have any additional questions or comments. We hope that these experiments can potentially address your concerns. | Rebuttal 1:
Rebuttal: We thank all reviewers for their detailed feedback and positive comments that our work is "rigorous and methodologically sound" and that our theory "opens up a number of new research directions both theoretically and numerically". We will address each reviewer's specific questions and concerns individually, but we would also like to address a main concern that was shared by multiple reviewers here. In particular, we would like to discuss the novelty and significance of our results in relation to [50]. We intend to update the manuscript with additional discussion highlighting the differences between our work and [50].
## Novelty in relation to [50]
We appreciate the reviewers' concerns regarding the novelty of our work. While it is true that our current paper employs similar mathematical tools to [50], we would like to highlight the following key differences and novel contributions:
**1. Novel setting and new theory for unsupervised regularizer learning**
The current work focuses on understanding critic-based regularization, an inherently different problem than the one considered in [50]. Critic-based regularizers must learn to prefer a pre-defined "clean/good" distribution over a "noisy/bad" distribution, whereas the setting in [50] asks to find a regularizer that maximizes the likelihood of the data. Understanding how these two distributions differ from one another creates novel challenges that make this setting inherently different than [50], a point that we will discuss further in our novel technical contributions.
Moreover, the theoretical foundations of critic-based regularization remain underdeveloped. To date, aside from the seminal work of Lunz et al. [51], there has been minimal theoretical exploration of the structure of regularizers learned in the critic-based setting. Our results demonstrate that star-geometric tools can help establish new theoretical frameworks for critic-based regularizer learning, addressing this notable gap in the literature.
**2. Novel technical contributions**
While our tools are superficially similar, the novel setting of critic-based regularizer learning brings new challenges that we believe are of interest to the regularizer learning community. Additionally, this work provides new analyses of star body gauges that were not present in [50]. We highlight specifics below:
- **New challenges**: As an example, the new critic-based loss for regularizer learning in equation (5) brings novel challenges in star geometry. In particular, we show that this loss has a dual mixed volume interpretation, but its dependency on $K$ is more complicated than in [50]. As a result, its extremizers no longer directly follow from the dual mixed volume inequality (as they do in [50]). We show in Theorem 3 that we can explicitly characterize extremizers of the dual mixed volume inequality and provide visual examples in Figure 4. Finally, we show in the supplemental (Section B.3.1, Theorem 5) that these results can be generalized to a broader class of objectives. These types of challenges were not present in [50]. To our knowledge, this type of result is novel even in the star geometry/Brunn-Minkowski literature, let alone the regularizer learning literature.
- **New applications:** We present new results on the application of star body regularizers that are relevant to both the inverse problems and machine learning communities. These include identifying beneficial properties for optimization, such as weak convexity, and providing guidelines for designing neural network architectures to learn these regularizers. Our findings offer valuable insights into the behavior of these regularizers in downstream tasks and practical methods for learning them. These types of results were not present in [50].
- **New visualizations:** All examples we visualize in the paper are different than those in [50]. The only example that is similar is the one shown in Figure 7 in the supplemental. Our experiment is different, however, since we are showing that this star body induces a weakly convex (squared) gauge, which was not explored in [50].
**3. Impact and significance**
We would like to highlight and summarize the impact and significance of our work:
- Our research introduces novel theory and a new framework for understanding critic-based regularizer learning, addressing a significant gap in existing theoretical foundations.
- We tackle several novel technical challenges inherent to the critic-based setting, distinguishing our work from [50].
- This work also demonstrates the applicability and utility of star geometry tools for regularizer learning and broader machine learning contexts. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Language Models Encode Collaborative Signals in Recommendation | Reject | Summary: The paper studies an important and open question, how much user behavior knowledge (generally captured by collaborative filtering models) are present in large language models. This has been a topic attracting significant research interest in recent years. The authors propose that simple linear mappings done on top of LM encoder representations are sufficient to capture collaborative filtering signals in recommendations, and propose a new recommendation method, AlphaRec, which takes pretrained language model content embeddings as input, transforms them via MLPs and lightweight graph convolutions, followed by a contrastive loss. The authors conduct experimental analysis for AlphaRec in both standard settings and zero-shot settings.
Strengths: **S1**: the topic studied is important. It is generally believed that language model and collaborative filtering (recommendations) learn different representation spaces, and methods to bring the two spaces closer is of significant interest to the large community working on search, recommendations, ads, and related topics.
**S2**: the particular approach proposed (linear mapping from textual space to collaborative filtering space) is understudied in prior work on LLM and (Generative) CF, despite numerous papers in recent years.
Weaknesses: **W1**: the writing in this paper, esp. recommendation system paradigm related discussions, misrepresents (or ignores) significant prior work done in the field. e.g., "AlphaRec follows a new CF paradigm, which we term the language-representation-based paradigm." and related writings.
- Content signals and/or embeddings have been used as the dominant recommendation paradigm in the field, even well before the seminal YouTube DNN paper [1] was published (see e.g., Pazzani and Billsus, 2007 [12]). For recent examples of related work, see e.g., [10, 11] from Pinterest and Meta in KDD'22 (but one should be able to easily find similar papers in WWW KDD etc in prior years as well).
- Replacing ResNet/ViT- or GPT-/BERT- generated embeddings with LLaMa- or Mixtral- generated embeddings cannot and should not be viewed as a paradigm shift, especially given the core architecture of AlphaRec is not substantially different from prior work.
**W2**: AlphaRec needs to be compared with stronger baselines. This applies to many major experiments in the paper. Here are some examples of baselines missing, which may significantly change conclusions obtained and discussions etc:
- vs ID-based recommenders (Table 3).
  - Equation (2) and lines 171-173 for $N_u$ already capture the set of items that a user is related to ("user interaction history" / "user engagement history") to a large extent. Thus, the authors should compare AlphaRec with at least some SotA sequential/generative recommenders, such as SASRec, BERT4Rec, TIGER, and HSTU [3, 4, 6, 7]. All of them are missing in the current paper.
- Given AlphaRec uses the transposed item id representation - the one layer $N_i$ formulation (equation (2)), relevant work in recent years include Dual contrastive network [8] and User-centric ranking [9]. The authors should compare with or at least discuss some work in this category as related work.
- Zero-shot performance. (Table 4)
- "Book Crossing" is not a commonly used benchmark dataset. The "Industrial" dataset (per citation [1] on line 273) seems to be a small-scale "Yelp" dataset, and should be renamed to avoid confusions.
- For ML-1M, the SotA approach one year ago (LLMRank [14]) already achieved 53.73 NDCG@20, significantly higher compared with 32.15 (AlphaRec) in this work.
**W3**: many other formulations/experiments/writings could be significantly improved. Examples include:
- The proposed task formulation does not reflect how recommendation systems work in practice. e.g., "Line 97-99. Personalized item recommendation with implicit feedback aims to select items i ∈ I that best match user u’s preferences based on binary interaction data Y = [yui], where yui = 1 (yui = 0) indicates user u ∈ U has (has not) interacted with item i [58]." -- here "selecting the item that the user will interact with" is not the same as "selecting the item with the highest reward", as the interaction itself can be negative (e.g., disliking a recommendation, abandoning a session, etc.). See [1, 2] for references.
- A key contribution of this work should be the linear mapping finding. But Table 1 uses a questionable set of baselines for both LMs and CF baselines, which weakens linear mapping related claims.
- To claim "Moreover, with the advances in LMs, the performance of item representation linearly mapped from LMs exhibits a rising trend, gradually surpassing traditional ID-based CF models" -- I would expect the authors to compare with a single set of models (e.g., LLaMa-2 7B 13B 70B or GPT-3 1.3b 2.7b 6.7b 13b 175b) trained on identical data. As it stands, all models are trained and/or finetuned with different data, so a simpler hypothesis explaining the LM (Linear Mapping) trend is that people are including more and more data into LLM pretraining/finetuning stages, which happen to capture more and more aspects relevant to recommendations.
- On the CF side, "MF" "MultiVAE" and "LightGCN" do not represent SotA baselines on Amazon Review datasets (see W2).
- Table 1. Please highlight the particular K used for Recall and HR metrics (hard to find in the paper, applies to other tables too). Most work on recommendation models also report HR/NDCG/etc. over at least 2-3 Ks to help readers understand how metrics vary with different approaches.
- Table 4. [1] should not be labeled as an "Industrial" dataset. The cited paper (per line 273) is Ni et al., "Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects", which in turn seems to be a publicly available review dataset provided by Yelp. Please use appropriate language, as the current writing leads readers to think that AlphaRec is an industrially deployed system. Please refer to industrial papers (e.g., KDD ADS track papers [2, 9, 11, 12]) for how to describe testing done on publicly available industrial sampled datasets (like Yelp), vs. real deployments.
- Misc: Contrastive loss is widely used and should not be viewed as a contribution of AlphaRec. See [15, 11] etc.
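As a concrete reference for the K-dependence point above, here is a minimal binary-relevance sketch of the top-K metrics in question (Recall@K, HR@K, NDCG@K); the function names and toy data are illustrative, not from the paper:

```python
import numpy as np

def recall_at_k(ranked, relevant, k):
    """Fraction of the user's relevant items found in the top-k list."""
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)

def hr_at_k(ranked, relevant, k):
    """Hit Ratio: 1 if at least one relevant item appears in the top-k list."""
    return float(bool(set(ranked[:k]) & set(relevant)))

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG with a log2 position discount."""
    dcg = sum(1.0 / np.log2(pos + 2)
              for pos, item in enumerate(ranked[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(pos + 2) for pos in range(min(len(relevant), k)))
    return dcg / idcg

ranked = [7, 3, 9, 1, 4]   # model's top-5 ranking for one user
relevant = {3, 4}          # held-out positives for that user
scores = {k: (recall_at_k(ranked, relevant, k),
              ndcg_at_k(ranked, relevant, k)) for k in (2, 5)}
```

Reporting `scores` over several values of K (as suggested above) makes it visible how a method's quality varies with list length.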
Technical Quality: 2
Clarity: 3
Questions for Authors: **Q1**: Given the paper makes very significant claims, I would strongly recommend setting up experiments that properly corroborate those claims to avoid the paper appearing like overselling. Examples include:
* Given linear mapping is a very weak baseline for utilizing LLMs, comparison with recent LLM4Rec approaches including in-context learning, transfer learning, etc. like [14] is necessary.
* To justify that LLMs can increasingly capture collaborative filtering signals, please fix an LLM baseline (e.g., LLaMa-2, GPT-3, or Gemma) and vary the parameter count (e.g., up to 70B/175B) to show that collaborative filtering knowledge is captured with increased model complexity. Right now, a simpler interpretation of Table 1 is that different models are trained on different amounts of data, with more data used for recent LLM training.
* Please compare with appropriate CF baselines, including popular sequential recommenders from recent years, as discussed in W2.
These experiments would make the findings/conclusions significantly more convincing, and I'm very happy to change rating if linear mapping/AlphaRec remains a competitive approach after these experiments.
**Q2**: Please consider ways to reframe the CF paradigm discussions as it's unclear how AlphaRec differs from building GCNs or RNNs/Transformers on top of ResNet-/BERT-/GPT-based embeddings, which has been a popular baseline for a long time eg [5, 11, 12].
**References**:
- [1] Convington et al. Deep Neural Networks for YouTube Recommendations. RecSys'16.
- [2] Zhou et al. Deep Interest Network for Click-Through Rate Prediction. KDD'18.
- [3] Kang et al. Self-Attentive Sequential Recommendation. ICDM'18.
- [4] Sun et al. BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. CIKM'19.
- [5] Hidasi et al. Session-based Recommendations with Recurrent Neural Networks. ICLR'16.
- [6] Rajput et al. Recommender Systems with Generative Retrieval. NeurIPS'23.
- [7] Zhai et al. Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations. ICML'24.
- [8] Lin et al. Dual contrastive network for sequential recommendation. SIGIR'22.
- [9] Zhao et al. Breaking the Curse of Quality Saturation with User-Centric Ranking. KDD'23.
- [10] Pancha et al. PinnerFormer: Sequence Modeling for User Representation at Pinterest. KDD'22.
- [11] Rangadurai et al. NxtPost: User To Post Recommendations In Facebook Groups. KDD'22.
- [12] Pazzani and Billsus. Content-based recommendation systems. In The adaptive web. 2007.
- [13] Naumov et al. Deep Learning Recommendation Model for Personalization and Recommendation Systems. 2019.
- [14] Hou et al. Large Language Models are Zero-Shot Rankers for Recommender Systems. ECIR'24.
- [15] Klenitsky et al. Turning Dross Into Gold Loss: is BERT4Rec really better than SASRec? RecSys'23.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer $\color{orange}{\text{EzpW}}$**
Thanks for your acknowledgment of the importance of the topic we studied. We believe you are an industrial expert with a deep understanding of sequential recommendation. We greatly appreciate your careful reading of our paper and your responsible comments. Some of your comments can significantly improve our paper. Below are our responses to your comments. **Due to the character limitation, results are presented in the attached PDF.**
> **Comment 1: The discussion of CF paradigm**
Thanks for your concerns. We do not aim to distort or omit previous important works using content information and embeddings: we include these previous studies as early explorations of the language-representation-based paradigm in lines 209-212 and compare AlphaRec with previous works like UniSRec. The primary goal of the CF paradigm discussion is to summarize this research line, including the previous important works, into a unified paradigm (since there is no widely acknowledged definition of such a paradigm; although the term "content-based recommendation" has been used previously, it is not an accurate name for this paradigm), and to showcase the advantage of using advanced LM representations. In hindsight, we agree that using the word "new" was an overstatement, for which we apologize.
Additionally, we still emphasize the importance of the shift from BERT-style representations to LLM-based representations, owing to three key factors: more encoded user-preference similarities, superior zero-shot recommendation ability, and user intention-awareness.
> **Comment 2: Appropriate experiments to verify the claim**
Thanks for your comments. The mentioned experiments will help make our claim more convincing. We would first like to explain how the setting of our paper differs from sequential recommendation. It is true that sequential recommendation (or next-item prediction) is important. However, general recommendation [1] (or direct recommendation [2]), which aims to discover the general preferences of users rather than predicting the next item, remains a non-negligible problem even recently [3, 4], and is deployed in domains like music recommendation [3]. The motivation for adopting general recommendation in our paper is straightforward: we do not wish to introduce any non-linearity when studying the representation-space relationship, whereas the sequential relationship is usually captured by non-linear transformations such as attention [5] or RNNs. As a result, general recommendation is an appropriate task for our research goal.
1. **Appropriate experiments and stronger baselines**
Considering the general recommendation setting we adopted, the baselines in our paper are the latest and SOTA methods (RLMRec [3] and XSimGCL [4]). We also compare the linear mapping method with these baselines in **Table 13**.
We also consider your suggestion of comparing linear mapping with sequential models in the sequential setting. The results are presented in **Table 14**. AlphaRec is not compared because its GCN module is not designed for sequential recommendation, making it hard to adapt to that setting.
It is worth noting that linear mapping does not involve any model design, yet still achieves performance comparable to the specially designed sequential model SASRec. These results are inspiring and make our claim more convincing.
2. **Fix LLM type and vary parameter size.**
We fix the model of Llama2 family and change the parameter size. We report the performance in **Table 15** in the PDF. A clear performance increase is observed as the model size increases.
3. **Zero-shot performance**
In our zero-shot setting, no candidate set is used. In LLMRank [6], by contrast, a candidate set of size 20 is used, which leads to the reported high performance. We follow the same setting as LLMRank, pairing each positive item with 19 random negative items, and conduct the zero-shot experiments again. As indicated in **Table 16**, AlphaRec significantly outperforms LLMRank.
> **Comment 3: Improvements in the details**
Once again, thank you for your meticulous reading of our article.
1. **The task formulation**
It is true that "selecting the item with the highest reward" is more practical in industry. However, "selecting items that the user has interacted with" is also a widely used setting in academic research, e.g., in the well-known SASRec [5].
2. **The particular K used and 2-3 Ks**
Thanks for your suggestion. We use K = 20 in this paper and will highlight this in the latest version. We have also reported results for additional values of K during the rebuttal period and will add them to the appendix of the latest version.
3. **The Industrial dataset**
Thanks for your suggestion. This dataset is the Industrial category of the Amazon Review dataset. We use this citation because it is the official citation provided by the authors of Amazon Review [7]. We will rename it Amazon-Industrial for better clarity.
4. **The contrastive loss**
We do not regard the contrastive loss as a contribution of our paper, as clarified in lines 149-151 and 335-337. The contribution of our paper is showing that a simple model design suffices to unlock the potential of advanced language representations.
[1] Neural collaborative filtering. 2017 WWW.
[2] Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5). 2022 RecSys.
[3] Representation Learning with Large Language Models for Recommendation. 2024 WWW.
[4] XSimGCL: Towards extremely simple graph contrastive learning for recommendation. 2023 TKDE.
[5] Self-attentive sequential recommendation. 2018 ICDM.
[6] Large Language Models are Zero-Shot Rankers for Recommender Systems. 2024 ECIR.
[7] Justifying recommendations using distantly-labeled reviews and fine-grained aspects. 2019 EMNLP.
---
Rebuttal Comment 1.1:
Title: clarifying questions
Comment: Thanks authors for the detailed responses and for the newly added experiments. I have some further clarifying questions.
* For Table 15, do we project LLaMa-2 7B/13B/70B into hidden spaces of the same dimensionality or different dimensionalities? How are the hyperparameters used in "linear mapping" chosen?
* Also as tJs7 asked, how does the linear mapping conclusion change if we provide additional information (e.g., encode both title and descriptions)?
---
Reply to Comment 1.1.1:
Title: Response for new questions
Comment: Thanks for your continued interest in our work.
For the first question, we use exactly the same experimental setting for different LM sizes, with the only difference being the input language representations. These representations are projected into the same dimensionality of 64, consistent with the dimension used for ID-based methods. Except for the temperature $\tau$ in the InfoNCE loss, the remaining hyperparameters of linear mapping are also identical: batch size = 4096, learning rate = 0.0005, seed = 101, number of negative samples = 256. For $\tau$, 0.15, 0.15, and 0.20 are used for Books, Movies & TV, and Video Games, respectively.
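For concreteness, the role of the temperature $\tau$ can be sketched as follows. This is an illustrative in-batch InfoNCE variant with random data, not our training code (in particular, our actual training draws 256 sampled negatives per positive rather than using in-batch negatives):

```python
import numpy as np

rng = np.random.default_rng(101)

def info_nce(user_emb, item_emb, tau):
    """In-batch InfoNCE: pair (user i, item i) is the positive;
    the other items in the batch serve as negatives."""
    u = user_emb / np.linalg.norm(user_emb, axis=1, keepdims=True)
    v = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    logits = (u @ v.T) / tau                      # cosine similarity / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Frozen LM item embeddings, linearly projected to dimension 64.
lm_items = rng.normal(size=(512, 768))
W = rng.normal(size=(768, 64)) * 0.01   # the learnable linear map
users = rng.normal(size=(512, 64))
loss = info_nce(users, lm_items @ W, tau=0.15)
```

A smaller $\tau$ sharpens the softmax over similarities, which is why it is the one hyperparameter we tune per dataset.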
We also appreciate the suggestion of providing additional information. The response to this question is on the way, as the experiment requires time to set up. Thanks again.
---
Reply to Comment 1.1.2:
Title: Linear mapping performance with additional information
Comment: The mentioned experiment is important and interesting. We conduct experiments on Llama2-7B, with different item information types provided. We have tried two types of additional information: with brand & price and with description. We give examples of the input text with different item information types.
Examples:
**Title only:** Marvel Super Heroes
**Title + Brand & Price:** Marvel Super Heroes. Price: *$79.99*. Brand: Capcom
**Title + Description:** Marvel Super Heroes. Description: This successor to Capcom's wildly popular X-Men allows you to go one-on-one at home with the Marvel Super Heroes. Your super heroes battle through rounds of competition facing off with Dr. Doom and ultimately Thanos. Take control of the amazing Infinity Gem system which grants super-powers such as healing extra attacking strength and infinite super attacks. Master a variety of fantastic attacks super moves and tons of multi-hit combos. For comic book fanatics and fighting game warriors alike Marvel Super Heroes is a must-play masterpiece!
We adopt the same linear mapping setting as before, with the only difference in the input features. We report the performance comparison in **Table 1**.
Table 1: Performance of linear mapping with different item information types
| | | Movies & TV | | | Games | |
| :-------------------: | :-------: | :---------: | :----: | :-------: | :-----: | :----: |
| | Recall@20 | NDCG@20 | HR@20 | Recall@20 | NDCG@20 | HR@20 |
| Title + Description | 0.0863 | 0.0807 | 0.4426 | 0.0962 | 0.0557 | 0.2199 |
| Title + Brand & Price | 0.0924 | 0.0866 | 0.4679 | 0.1253 | 0.0733 | 0.2735 |
| Title only | 0.1027 | 0.0955 | 0.4952 | 0.1249 | 0.0729 | 0.2746 |
As illustrated in **Table 1**, adding additional information does not bring performance improvements. Instead, a performance drop is observed. This is consistent with findings from previous work in the NLP community [1]. Moreover, adding descriptions leads to a larger performance drop than adding brand & price.
There may be several reasons for the performance drop:
1. Metadata is missing frequently in the Amazon Review datasets. Around 1/2 of items miss at least one type of information among brand, price, and description. The flaw of the dataset also leads to inevitable noise in the representations.
2. Overwhelmingly long descriptions may act as noise. Descriptions are much longer than titles. In most cases, item titles can be identified by the LM; when descriptions are added, however, the identifiable item becomes obscured. Worse still, as the example shows, descriptions in Amazon Review tend to read like marketing slogans, which introduces further noise.
3. Advanced language models can identify the target item correctly with no need for additional information. LMs may understand an item title in their feature space [1] thanks to a large training corpus (i.e., items are encoded as a kind of world knowledge).
As a consequence, the noise from additional information (e.g., missing metadata and overwhelmingly long descriptions) makes it harder to study the representation-space relationship, so we only encode item titles in this paper.
The above results are consistent with our response to reviewer tjs7 that "If the language model encodes sufficient world knowledge, it should be able to uniquely identify items using simple item titles. Therefore, we have ignored other descriptions of items to avoid introducing unnecessary noise."
[1] Language models represent space and time. 2024 ICLR. | Summary: This paper states that LLM encodes collaborative signals that make it easy to connect language representation space with an effective recommendation space. Thus, it proposes an effective collaborative filtering model AlphaRec that takes as input only the transformed LLM representations of textual descriptions of items and is trained by InfoNCE loss and graph neural networks. The proposed method outperforms traditional ID-based models and other LM-enhanced methods.
Strengths: 1. This paper is well-written and easy to follow.
2. The paper conducts extensive experiments validating the effectiveness of the method and proves the validity of the design through ablation study and analysis.
3. The proposed method exhibits significantly good zero-shot recommendation performance.
Weaknesses: 1. One of the most important motivations of the work is the claim that large language models encode collaborative signals, which would indicate the advantage of using representations from large language models for recommendation compared to ID embeddings. However, how the preliminary experiments prove this point is insufficiently discussed in the paper. Advanced large language models encode more semantics of the textual descriptions and thus yield better performance; why this alone does not fully explain the performance gain of LLMs should be discussed more explicitly in the main paper.
2. The novelty is limited. Using semantic embeddings of items has been widely explored in recommendation. The novelty mostly lies in using the representations of large language models and the implementation details of how to make them effective when combined with traditional recommendation frameworks, such as the non-linear transformations.
3. The paper states that language representation-based methods have low training costs. Still, if the cost of generating the language representations is taken into account, the computational cost is much higher than for ID-based methods.
Technical Quality: 2
Clarity: 3
Questions for Authors: The paper states that the improvements from using a more advanced LLM with linear mapping of the representations do not merely come from better feature encoding ability. However, the validating experiments in Appendix B3 are not convincing enough. Randomly shuffling the item representations eliminates the semantic relations between user-item interaction pairs and makes language models unable to leverage their semantic understanding ability, which explains the decline in performance. Could you please elaborate more on the points you make in Appendix B3?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer $\color{blue}{\text{GXUC}}$**
> **Comment 1: More discussion about the collaborative signals encoded in LMs**
Thanks for your concern and suggestion. We respond to this question as follows.
First. We would like to restate the importance and correctness of the linear mapping method we used. It is acknowledged by the NLP community that a high performance of linear methods (i.e., linear mapping [1] and linear probing [2]) would suggest specific signals have been encoded in advanced LMs [3, 4], since the linear structure ensures the *homomorphism* [5] between two spaces. With such linear methods, lexical [6] and syntax [7] structures have been found inside LMs. More details about this research line can be found in previous works [2, 3] on top-tier NLP conferences. Therefore, a high performance of linear mapping would suggest that collaborative signals are encoded in LMs.
Second. "More semantics of the textual descriptions" does not fully explain the success of linear mapping. Linear mapping captures user preferences beyond textual similarities. As shown in Figure 1c, movies with homosexual themes such as *My Beautiful Laundrette* and *Food of Love* scatter in the language space but gather together in the recommendation space. These movies are semantically dissimilar but share similar user preferences, which is captured by the linear mapping. Therefore, we cannot simply attribute the improvement to richer semantics.
Third. We add more experiments about linear mapping according to the suggestion of another reviewer (results are presented in the attached PDF). We also consider the sequential recommendation setting in Table 14 and the effect of LM size in Table 15. These results further indicate that user preference similarities are encoded in advanced LMs and that the knowledge inside increases with model size.
> **Comment 2: The novelty is limited**
Thanks for your concern about the novelty of this paper. The novelty of this paper does not lie in replacing previous BERT-style embeddings with advanced LM embeddings, or deliberately designing a new CF module. We address the two key novelties of this paper.
1. We are the first work to use linear mapping to reveal how much knowledge about recommendation is encoded in language models. Although linear mapping and linear probing are widely used and recognized methods for exploring the knowledge embedded in language models in the NLP field [2, 3], relevant exploration in recommendation is rare.
2. We prove that advanced language representations exhibit excellent properties for designing recommenders with multiple abilities. With only a simple model design, language-representation-based CF models can surpass traditional ID-based models on multiple tasks. Moreover, we summarize the advantages of using current advanced LM representations: low training cost, superior zero-shot performance, and user intention-aware ability.
> **Comment 3: The computational cost is high**
Thanks for your concern. We would like to reply to your concern from three aspects.
First. We would like to kindly point out that the computational cost you mentioned is not the same as the training cost discussed in our paper. We state that AlphaRec has a low training cost, rather than a low overall computational cost.
Second. It is important to highlight that language representations can be computed and documented in advance, and there is only a one-time computational cost.
Third. The training cost is relatively low compared with LM-based recommenders, and the total time cost is comparable with ID-based methods. For quantitative analysis, we present the training time comparison as follows.
**Table 1 Training cost comparison**
||Books|Movies & TV|Games|
|:-:|:-:|:-:|:-:|
|LM-based Methods|hours|hours|hours|
|LightGCN|5235.1s|1328.2s|769.7s|
|XSimGCL|761.1s|205.6s|124.6s|
|AlphaRec|1363.4s|479.7s|214.6s|
As indicated in Table 1, AlphaRec maintains competitive training costs with ID-based methods, which are much lower than LM-based methods.
> **Question 1: Elaborate more on the points in Appendix B3**
Thanks for your concern. We assume that the improvement of advanced LMs comes from two possible sources: better feature encoding ability (such as a more compact feature space or a higher embedding dimension) and other features or knowledge about recommendation. By shuffling the embeddings, we eliminate any other features or knowledge about recommendation, so the remaining performance comes only from the feature encoding ability. Therefore, the decline in performance proves that the improvement does not come solely from better feature encoding ability, which is consistent with the claim in our paper.
To summarize, we use the following points to prove that user preference similarities (i.e., collaborative signals) are encoded in advanced LMs:
1. The linear mapping performance may come from feature encoding ability or knowledge about recommendation (e.g., textual similarity or preference similarity). Random shuffle results are poor -> The performance largely comes from the knowledge about recommendation.
2. The linear mapping performance is excellent -> Previous works in NLP [2, 3, 4, 6, 7] suggest user preference similarities may be encoded in advanced LMs.
3. Textually dissimilar items gather in the recommendation space -> user preference similarities beyond textual similarities are encoded in advanced LMs.
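The shuffle control in point 1 can be sketched in a few lines of numpy (a synthetic toy, not the paper's experiment): permuting which embedding belongs to which item keeps the embedding distribution intact but destroys item-specific knowledge, so linear-mapping quality should collapse.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: target embeddings are a noisy linear image of the
# (hypothetical) LM embeddings, as in the linear-mapping experiments.
n, d_in, d_out = 600, 32, 8
X = rng.normal(size=(n, d_in))
y = X @ rng.normal(size=(d_in, d_out)) + 0.1 * rng.normal(size=(n, d_out))

def probe_r2(feats, targets):
    """Fit a linear map on the first half, report held-out R^2."""
    tr, te = np.arange(n // 2), np.arange(n // 2, n)
    W, *_ = np.linalg.lstsq(feats[tr], targets[tr], rcond=None)
    pred = feats[te] @ W
    ss_res = np.sum((pred - targets[te]) ** 2)
    ss_tot = np.sum((targets[te] - targets[te].mean(axis=0)) ** 2)
    return 1 - ss_res / ss_tot

r2_intact = probe_r2(X, y)
# The shuffle control: permute which embedding belongs to which item.
# The marginal distribution of embeddings is unchanged, but any
# item-specific knowledge is destroyed.
r2_shuffled = probe_r2(X[rng.permutation(n)], y)
print(round(r2_intact, 3), round(r2_shuffled, 3))
```

The intact mapping fits well while the shuffled one drops to chance level, mirroring the argument that good linear-mapping performance must come from item-specific knowledge rather than generic encoding ability.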
[1] Linearly mapping from image to text space. 2023. ICLR.
[2] Understanding intermediate layers using linear classifier probes. 2017 ICLR.
[3] Language models represent space and time. 2024 ICLR.
[4] Emergent world representations: Exploring a sequence model trained on a synthetic task. 2023 ICLR.
[5] Linear algebra and geometry. 1969
[6] Probing pretrained language models for lexical semantics. 2020 EMNLP.
[7] A Structural Probe for Finding Syntax in Word Representations. 2019 NAACL. | Summary: The paper proposes AlphaRec, a novel method to incorporate both knowledge from pre-trained language models and collaborative signals. The authors first reveal the advantages brought by the pre-trained embedding model, and then propose three modules within AlphaRec: an MLP layer to transform pre-trained embeddings into item representations, a graph convolution to aggregate neighbors' information, and the InfoNCE loss to train the introduced parameters within the MLP for each dataset. Overall, the novelty of this paper lies in the exploration of NLP-encoded embeddings in RecSys. The graph convolution and InfoNCE loss are already widely used techniques.
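The InfoNCE loss mentioned in the summary is a standard contrastive objective; a minimal numpy sketch (a generic in-batch-negatives formulation, not necessarily AlphaRec's exact implementation) could look like:

```python
import numpy as np

def info_nce(user_emb, item_emb, temperature=0.2):
    """Generic InfoNCE: each user's matched item (same row) is the
    positive; all other items in the batch act as in-batch negatives."""
    u = user_emb / np.linalg.norm(user_emb, axis=1, keepdims=True)
    v = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    logits = u @ v.T / temperature               # (batch, batch) cosine sims
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # -log p(positive | user)

rng = np.random.default_rng(0)
items = rng.normal(size=(128, 16))
# Aligned users (noisy copies of their positive item) vs. random users.
aligned = info_nce(items + 0.1 * rng.normal(size=items.shape), items)
random_ = info_nce(rng.normal(size=items.shape), items)
print(aligned < random_)  # aligned pairs achieve a lower loss
```

The loss pulls each user toward its interacted item and pushes it away from the other items in the batch, which is why it pairs naturally with a simple learned mapping on top of frozen embeddings.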
Strengths: 1. A good exploration on new direction (language-representation-based) RecSys
2. Experiments are conducted from different angles for analyzing their model.
Weaknesses: 1. Insufficient baselines.
2. Uncleared model name definition.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why do authors only encode titles? There is more information within your used Amazon dataset including item descriptions.
2. I personally do not prefer un-informative model names such as the AlphaRec in this paper. Authors mentioned “This model is named AlphaRec for its originality and a series of good properties”. The reason seems so strange to me.
3. More advanced baselines are needed to compare such as the DirectAU [1] and GraphAU [2]
[1] Wang, C., Yu, Y., Ma, W., Zhang, M., Chen, C., Liu, Y., & Ma, S. (2022, August). Towards representation alignment and uniformity in collaborative filtering. In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining (pp. 1816-1825).
[2] Yang, L., Liu, Z., Wang, C., Yang, M., Liu, X., Ma, J., & Yu, P. S. (2023, October). Graph-based alignment and uniformity for recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (pp. 4395-4399).
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer $\color{green}{\text{tJs7}}$**
We sincerely thank you for the positive feedback and valuable comments! To address your concerns, we present our responses as follows.
> **Comment 1: Why only encode titles** There is more information within your used Amazon dataset including item descriptions.
Thanks for your concern. An important contribution of this paper is the investigation of how much recommendation knowledge is encoded in LMs through linear mapping [1, 2, 3]. If the language model encodes sufficient world knowledge, it should be able to uniquely identify items using simple item titles. Therefore, we have ignored other descriptions of items to avoid introducing unnecessary noise.
> **Comment 2: Un-informative model names**
Thanks for your suggestion! The model name AlphaRec also reflects the adjustable hyperparameter $\alpha$ for capturing user intention, as introduced in Section 4.3. We believe this makes the name more reasonable for our model.
> **Comment 3: More advanced baselines**
Thanks for your great suggestions! We fully agree that considering more CF methods will further verify the effectiveness of AlphaRec, especially along the research line on uniformity and alignment in recommendation. We compare AlphaRec with DirectAU and GraphAU. As shown in the table below, AlphaRec presents relatively high performance compared with these methods across various datasets.
| || Books ||| Movies & TV ||| Games ||
| :------: | :--: | :---: | :--: | :--: | :---------: | :--: | :--: | :---: | :--: |
| | Recall | NDCG | HR | Recall | NDCG | HR | Recall | NDCG | HR |
| DirectAU | 0.0889 | 0.0734 | 0.3914 | 0.1011 | 0.0992 | 0.4984 | 0.1038 | 0.0635 | 0.2312 |
| GraphAU | 0.0933 | 0.0747 | 0.4101 | 0.1135 | 0.1054 | 0.5163 | 0.1231 | 0.0718 | 0.2511 |
| AlphaRec | **0.0991** | **0.0828** | **0.4185** | **0.1221** | **0.1144** | **0.5587** | **0.1519** | **0.0894** | **0.3207** |
[1] Linearly mapping from image to text space. 2023. ICLR.
[2] Understanding intermediate layers using linear classifier probes. 2017 ICLR.
[3] Language models represent space and time. 2024 ICLR.
---
Rebuttal Comment 1.1:
Title: Keep score unchanged
Comment: I will keep my score unchanged, as a score of 7 is already a positive rating.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thank you for your positive comments and valuable feedback! Your comments significantly help us improve our paper. | Summary: This paper proposes AlphaRec, an LLM-based recommender system that utilizes language representations of item textual data for recommendations.
Strengths: + Investigating ID paradigm and LLM paradigm is important.
+ The method is simple but seems to be effective.
Weaknesses: - In this paper, what most confuses me is the usage of the terminology "collaborative filtering" throughout the paper. In traditional recommender systems, collaborative filtering information means the interactions among users and items. The authors find that using LMs as feature extractors to get user/item embeddings from metadata can achieve similar results as if CF were used for recommendation. However, this seems fundamentally different from the LM having the "collaborative information," since for most online service platforms the interaction data should be confidential, and open-source LMs would not be able to train on that data. Therefore, the main claim in the paper seems questionable.
- It would be beneficial if we could have results on more diverse datasets.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to my summary of weakness.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer $\color{red}{\text{NfrM}}$**
We sincerely thank you for your concerns about our paper.
> **Comment 1: Terminology** The usage of the terminology "collaborative filtering"
Thanks for your question. We would like to kindly point out that collaborative filtering is a common paradigm in recommendation [1], and we only use this terminology to describe the paradigm we adopt in this paper.
Additionally, we do not use the term "collaborative filtering information" in our paper, and there is no formal definition of it in the literature. We guess you may mean collaborative signals [2] rather than collaborative filtering information. In our paper, we describe collaborative signals as "*user preference similarities between items*", which is consistent with the definition in the well-known NGCF paper [2].
In this way, our claim equals whether user preference similarities are encoded in advanced LMs. In this paper, we use linear mapping to study this claim. The linear mapping or linear probing method is widely used to explore the encoded feature in LMs [3, 4]. It is acknowledged by the NLP community that a high performance with a linear layer would suggest that specific knowledge has been encoded in LMs [5] (since linear mapping ensures the homomorphism [6] between two spaces). As indicated in our paper, linear mapping yields high recommendation performance, which suggests that user preference similarities may be implicitly encoded in advanced LMs. More details of the linear mapping and linear probing method can be found in previous works that have been published in top-tier NLP conferences [4, 7, 8].
According to the suggestions of other reviewers, we also conduct more experiments on sequential recommendation and model size effect of LMs in the attached PDF. These results can better clarify our claim.
> **Comment 2: Training data for LMs** The interaction data is confidential and open-source LMs won't be able to train on that data
Thanks for your concern. We would like to answer this question from two aspects.
First. All the datasets we adopted in this paper are public datasets, and everyone has access to these data. Open-source LMs have probably been trained on such user behavior data. It's important to highlight that most of the training data for open-source LMs are crawled from websites, such as the CommonCrawl data [9] used for training the Llama family [10]. Moreover, some LMs clearly state that Amazon Review data is used for training [11].
Second. Features in LMs do not require LMs to be trained on a specific type of data. Evidence for this comes from previous works that find that lexical [8] and syntax [12] structures are encoded in LMs. Lexical and syntax data rarely appear explicitly in the training corpus, but LMs still learn such features through training on a huge amount of structured natural language data. From this aspect, LMs are also able to learn the preference similarities of items through training on user behavior data.
> **Comment 3: The main claim in the paper seems questionable.**
Thanks for your concern. We answer this question in our response to comment 2. Moreover, we use experiment results to verify our claim like previous papers [4, 5, 6, 7, 8].
> **Comment 4: More diverse datasets** It would be beneficial if we could have results on more diverse datasets.
Thanks for your comments. However, we would like to kindly point out that six datasets from three platforms (i.e., Amazon, MovieLens, and BookCrossing) have been adopted in our paper.
The scale of these datasets, as well as the scale of our experiments, is quite large in the field of recommender systems. Moreover, most recommender system papers (including highly influential ones) typically conduct experiments on only 2-3 datasets [13, 15, 16] and a single data platform (e.g., Amazon) [14, 15].
According to your suggestion and the comments of reviewer $\color{orange}{\text{EzpW}}$, we add one more experiment on the Steam dataset under the zero-shot recommendation setting and compare the performance with the LLM4Rec baseline LLMRank [16]. We follow the task setting of LLMRank, pairing each positive item with 19 negative items and conducting the selection task. As indicated in Table 1, AlphaRec significantly outperforms LLMRank on the Steam dataset.
**Table 1 Zero-shot performance on Steam dataset**
|||Steam|||
|:-:|:-:|:-:|:-:|:-:|
||NDCG@1|NDCG@5|NDCG@10|NDCG@20|
|LLMRank|0.3112|0.4413|0.5255|0.5302|
|AlphaRec|**0.4450**|**0.6131**| **0.6394**|**0.6714**|
| Imp. %|42.99%|38.93%|21.67%|26.63%|
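The NDCG@K values reported above follow the standard definition; a small self-contained sketch for the binary-relevance case (one positive among 19 negatives, as in the selection task described) could look like:

```python
import numpy as np

def ndcg_at_k(ranked_relevance, k):
    """NDCG@k for a single ranked list of 0/1 relevance labels."""
    rel = np.asarray(ranked_relevance, dtype=float)[:k]
    dcg = ((2 ** rel - 1) / np.log2(np.arange(2, rel.size + 2))).sum()
    ideal = np.sort(np.asarray(ranked_relevance, dtype=float))[::-1][:k]
    idcg = ((2 ** ideal - 1) / np.log2(np.arange(2, ideal.size + 2))).sum()
    return dcg / idcg if idcg > 0 else 0.0

# One positive ranked 3rd in a 20-item list (1 positive + 19 negatives).
ranking = [0, 0, 1] + [0] * 17
print(round(ndcg_at_k(ranking, 5), 4))  # 1/log2(4) = 0.5
```

With a single relevant item, NDCG@K reduces to `1/log2(rank+1)` when the positive appears within the top K and 0 otherwise, which is why it rewards placing the positive item near the top.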
[1] Neural collaborative filtering. 2017 WWW.
[2] Neural graph collaborative filtering. 2019 SIGIR.
[3] Linearly mapping from image to text space. 2023. ICLR.
[4] Understanding intermediate layers using linear classifier probes. 2017 ICLR.
[5] Language models represent space and time. 2024 ICLR.
[6] Linear algebra and geometry. 1969
[7] Emergent world representations: Exploring a sequence model trained on a synthetic task. 2023 ICLR.
[8] Probing pretrained language models for lexical semantics. 2020 EMNLP.
[9] CCNet: Extracting high quality monolingual datasets from web crawl data. 2020 LREC.
[10] LLaMA: Open and Efficient Foundation Language Models. 2023
[11] SFR-Embedding-Mistral: Enhance Text Retrieval with Transfer Learning. 2024
[12] A Structural Probe for Finding Syntax in Word Representations. 2019 NAACL.
[13] Self-attentive sequential recommendation. 2018 ICDE.
[14] Towards universal sequence representation learning for recommender systems. 2022 KDD.
[15] Recommender Systems with Generative Retrieval. 2024 NeurIPS.
[16] Large language models are zero-shot rankers for recommender systems. 2024 ECIR. | Rebuttal 1:
Rebuttal: We sincerely appreciate the efforts of every reviewer to make this paper better. We are delighted to see the importance of the studied topic in this paper is acknowledged by most of the reviewers ($\color{red}{\text{NfrM}}$, $\color{green}{\text{tJs7}}$, and $\color{orange}{\text{EzpW}}$).
We appreciate all the reviewers for their valuable comments and suggestions, which help us significantly improve this paper. We summarize our response and the updates to the paper as follows:
- **More detailed discussion about the collaborative signals encoded in advanced LMs.** Most of the concerns from reviewers concentrate on whether linear mapping indicates that collaborative signals are encoded in LMs. As acknowledged in the NLP community, high performance of linear mapping and linear probing reflects that an LM may have implicitly encoded specific knowledge inside.
We use the following points to prove that user preference similarities (i.e., collaborative signals) are encoded in advanced LMs:
1. The linear mapping performance may come from feature encoding ability or any knowledge about recommendation (e.g., textual similarity or preference similarity). Random shuffle results are poor -> The performance largely comes from the knowledge about recommendation.
2. The linear mapping performance is excellent -> Previous works in NLP [1, 2, 3, 4, 5] suggest user preference similarities may be encoded in advanced LMs.
3. Textually dissimilar items gather in the recommendation space -> user preference similarities beyond textual similarities are encoded in advanced LMs.
- **Highlight the contribution of this paper.** We would like to highlight the two key contributions of this paper. First, we are the first paper to adopt the linear mapping method to study the knowledge about recommendation encoded in LMs; this method has been widely used in the NLP community but rarely appears in the recommendation community. Second, we prove that huge potential exists in the representations from current advanced LMs. With a simple model design, such representations yield superior performance across various tasks.
- **Verify the claim.** Following the suggestions of reviewer $\color{orange}{\text{EzpW}}$, we add experiments on sequential recommendation and measure how the knowledge inside LMs increases with LM size.
- **More baselines.** According to the suggestions of reviewer $\color{green}{\text{tJs7}}$, we add two more baselines DirectAU and GraphAU.
- **Comparing with LLM4Rec baselines.** According to the suggestions of reviewer $\color{orange}{\text{EzpW}}$, we compare the zero-shot performance of AlphaRec with LLM4Rec baseline LLMRank [6].
[1] Understanding intermediate layers using linear classifier probes. 2017 ICLR.
[2] Language models represent space and time. 2024 ICLR.
[3] Emergent world representations: Exploring a sequence model trained on a synthetic task. 2023 ICLR.
[4] Probing pretrained language models for lexical semantics. 2020 EMNLP.
[5] A Structural Probe for Finding Syntax in Word Representations. 2019 NAACL.
[6] Large language models are zero-shot rankers for recommender systems. 2024 ECIR.
Pdf: /pdf/b063370139aaada319348ea05a70992d0480745e.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Two applications of Min-Max-Jump distance | Reject | Summary: The paper proposes to use a new distance, Min-Max-Jump, which is the minimum largest distance on any path between two points, to be used in k-means clustering to learn clusters.
Strengths: The distance can overcomes some demerits of the convex ("spherical" in the paper) clusters.
Weaknesses: The distance in the paper is, in fact, related to single-linkage clustering, which assigns a pair of points the distance at which the pair is joined into one cluster. This needs to be analyzed to relate the work to prior art, as well as to compute pairwise distances efficiently.
The theoretical treatment of the distance is poor. The paper should review the many other density-based distance functions to put this work into the correct context.
There would be a lot of problems using this distance, as many pairs of nodes would share the same distance. There is no analysis of the metric properties of this distance.
On evaluation, the method needs to be compared to single-linkage clustering (SLC) at the very least, as all the advantages of using this distance with k-means are available in SLC in its simplest form.
Presentation-wise, it is hardly up to the standard. There are methods/algorithms/concepts that are introduced as "a is like b with a difference" without a formal definition. This mixes up definitions and properties.
Technical Quality: 2
Clarity: 1
Questions for Authors: N/A
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 1
Limitations: The paper uses a new distance without theoretical justifications. It learns nonconvex clusters by using a nonconvex clustering-based distance (without explicitly mentioning it), which is hardly a novelty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Question:
"The distance in the paper is, in fact, related to single-linkage clustering, which assigns a pair of points the distance at which the pair is joined into one cluster. This needs to be analyzed to relate the work to prior art, as well as to compute pairwise distances efficiently."
"On evaluation, the method needs to be compared to single-linkage clustering (SLC) at the very least, as all the advantages of using this distance with k-means are available in SLC in its simplest form."
Response:
Thanks for the constructive suggestions. We will revise the paper according to Reviewer VBcM's suggestions.
MMJ-K-means typically outperforms other algorithms in handling irregular clusters. For example, single linkage clustering is very sensitive to outliers, whereas MMJ-K-means is not, as demonstrated in Figure 3, where all datasets contain some outliers.
Density-based clustering algorithms like DBSCAN are highly sensitive to their hyper-parameters. DBSCAN's hyper-parameter "eps," defined as "the maximum distance between two samples for one to be considered as in the neighborhood of the other," is not intuitive and is difficult to optimize. With "eps" fixed, scaling the coordinates of the data points can lead to different clustering results. That means if we use different measurement units for distance (e.g., kilometers instead of meters), we will get a different clustering result, which is undesirable.
Additionally, DBSCAN has an extra hyper-parameter, "$ min\\_samples $," which is "the number of samples (or total weight) in a neighborhood for a point to be considered as a core point." Having more hyper-parameters requires more effort to tune. In contrast, MMJ-K-means has only one hyper-parameter, "K," which represents the number of clusters. The hyper-parameter K is very intuitive.
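The unit-sensitivity argument above can be illustrated without any clustering library: with a fixed eps, re-expressing the same coordinates in different units changes which pairs count as eps-neighbors, and that neighborhood relation is exactly what drives DBSCAN's core-point decisions (a pure-numpy toy, not the rebuttal's actual experiment).

```python
import numpy as np

rng = np.random.default_rng(0)
pts_m = rng.uniform(0, 1000, size=(50, 2))  # coordinates in meters
pts_km = pts_m / 1000.0                     # the same points, in kilometers

def eps_neighbor_pairs(X, eps):
    """Number of point pairs within eps of each other (the neighborhood
    relation that determines DBSCAN's core points)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return int((d[np.triu_indices(len(X), k=1)] <= eps).sum())

eps = 100.0  # fixed hyper-parameter, whatever the unit
print(eps_neighbor_pairs(pts_m, eps), eps_neighbor_pairs(pts_km, eps))
# In meters, only genuinely close pairs qualify; in kilometers every
# pair does, so a fixed eps yields different clusterings for the same
# geometry. MMJ-K-means' hyper-parameter K is unaffected by units.
```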
Comparison between MMJ-K-means and HDBSCAN is shown in the auxiliary PDF file.
Question:
"The theoretical treatment of the distance is poor. The paper should review the many other density-based distance functions to put this work into the correct context."
"There would be a lot of problems using this distance, as many pairs of nodes would share the same distance. There is no analysis of the metric properties of this distance."
Response:
It is true that many pairs of nodes share the same distance. However, whether this property is a merit or a demerit is not settled. For example, the distance works well in MMJ-K-means, MMJ-based internal clustering evaluation indices, and the Clustering with Neural Network and Index (CNNI) model. There is no sign that having many duplicated distance values is a demerit; it may in fact be a merit, because the problem becomes simpler.
MMJ distance fulfills all the requirements of a metric; e.g., the triangle inequality holds for MMJ distance. Due to the page limit, we cannot discuss this further in the paper.
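The metric claim is easy to check numerically: all-pairs minimax-path (MMJ-style) distances follow from a Floyd-Warshall-style recurrence with (min, max) in place of (min, +), and the result in fact satisfies the stronger ultrametric-like bound MMJ(x, z) <= max(MMJ(x, y), MMJ(y, z)), which implies the ordinary triangle inequality (an illustrative sketch, not the paper's algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # Euclidean base metric

# Minimax-path distance: Floyd-Warshall with (min, max) semiring.
M = D.copy()
n = len(X)
for k in range(n):
    M = np.minimum(M, np.maximum(M[:, k:k + 1], M[k:k + 1, :]))

# Metric axioms: symmetry and zero self-distance.
assert np.allclose(M, M.T) and np.allclose(np.diag(M), 0)
# Strong (ultrametric-like) triangle inequality:
# M[i, j] <= max(M[i, k], M[k, j]) for all i, j, k.
strong = all(
    np.all(M <= np.maximum(M[:, k:k + 1], M[k:k + 1, :]) + 1e-12)
    for k in range(n)
)
print(strong)  # True; this implies the ordinary triangle inequality
```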
We will review more density-based distance functions literature according to Reviewer VBcM's suggestion.
Question:
"Presentation-wise, it is hardly up to the standard. There are methods/algorithms/concepts that are introduced as "a is like b with a difference" without a formal definition. This mixes up definitions and properties."
Response:
We will polish the writing according to all the reviewers' constructive suggestions. We will employ a professional Academic Editing Service if the paper is accepted.
The Semantic Center of Mass of Length One (One-SCOM) has been defined in a previous paper, so there is no need to re-define it in this paper.
Question:
"The paper uses a new distance without theoretical justifications. It learns nonconvex clusters ..., which is hardly a novelty."
Response:
We have provided some theoretical justifications of the properties of the distance. The distance shares common properties with other similar distances, such as the longest-leg path distance (LLPD), the minimax path problem, and the widest path problem. We have cited these similar distances so that readers can grasp further properties of the distance from the literature.
The fundamental contributions of this work include:
1. Implementing the MMJ-based internal clustering evaluation index within the Clustering with Neural Network and Index (CNNI) model, achieving the first inductive clustering model capable of handling non-flat geometry data. An inductive clustering model can calculate not only the label of a point in the dataset but also the label of a new point that is not in the dataset. Its advantage is obvious. If you know another clustering model with this capability, please inform us.
2. Algorithm 2 (MMJ distance by Calculation and Copy) is currently the fastest algorithm for computing the all points path distance (APPD) matrix. It can address the all-pairs minimax path problem or widest path problem in undirected dense graphs. In a recent study, we implemented and tested the algorithm, and experimental results confirm it as the fastest for solving the APPD matrix. The source code is publicly available, but we cannot provide a URL due to the double-blind review process.
3. As noted by Reviewer vcc5, Algorithm 1 (MMJ distance by recursion) supports online machine learning, where data is sequentially available. The algorithm allows for a warm start, unlike other algorithms for calculating the MMJ distance matrix, which can only be cold-started.
4. MMJ-CH is the state-of-the-art internal clustering evaluation index, achieving an accuracy of 90/145. We have provided an API for readers to test their own internal clustering evaluation indices. If you find an index that outperforms MMJ-CH, please let us know.
5. MMJ-K-means has some very good and unique properties. We have compared MMJ-K-means with other popular clustering models like single linkage clustering, DBSCAN, and HDBSCAN in previous response.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. As also in the rebuttal, the paper lacks much justifications to be accepted in this conference. I'd like to keep my score. | Summary: This paper proposes a new metric, min-max jump distance. Effectively, say we are given a complete graph with vertex set $\Omega$ and edge weights $d(x,y)$ denoting the distance between $x$ and $y$, where $d$ is a metric. Then $MMJ(x,y|\Omega')$ is the minimum, over all paths between $x$ and $y$, of the maximum weight edge between $x$ and $y$ on the subgraph induced on the vertices $\Omega' \subseteq \Omega$. Explained nicely in the paper, if you started at vertex $x$ and wanted to get to $y$ and $d(\cdot,\cdot)$ denoted the distance required to ``jump'' from one vertex to another, what is the minimum distance you need to be able to jump to somehow traverse from $x$ to $y$?
This is a nice intuitive metric, and has strong connections to the minimum spanning tree. In fact, I suspect there is more literature to draw from minimum spanning tree research that could yield conclusions about MMJ. The MST is also nice in clustering since oddly shaped clusters (non-convex, for instance) can have small MSTs. This is the idea of MMJ: use it as a metric for K-means so that it can identify non-spherical clusters.
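The MST connection the reviewer points to is a standard fact: the minimax path weight between two vertices equals the largest edge on the path joining them in a minimum spanning tree. A minimal sketch of that computation (our illustration, not the paper's Algorithm 2):

```python
import heapq

def mst_edges(d):
    """Prim's algorithm on a dense symmetric distance matrix d.
    Returns MST edges as (weight, u, v) tuples."""
    n = len(d)
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(d[0][v], 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while heap:
        w, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue
        in_tree[v] = True
        edges.append((w, u, v))
        for x in range(n):
            if not in_tree[x]:
                heapq.heappush(heap, (d[v][x], v, x))
    return edges

def bottleneck(d, s, t):
    """Max edge on the s-t path in the MST == minimax path weight."""
    adj = {i: [] for i in range(len(d))}
    for w, u, v in mst_edges(d):
        adj[u].append((v, w))
        adj[v].append((u, w))
    # DFS from s to t, tracking the largest edge seen so far; the
    # tree path is unique, so the first arrival at t is the answer.
    stack, seen = [(s, 0)], {s}
    while stack:
        node, best = stack.pop()
        if node == t:
            return best
        for nxt, w in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, max(best, w)))

# Three collinear points at coordinates 0, 1, 2:
d = [[0, 1, 2],
     [1, 0, 1],
     [2, 1, 0]]
print(bottleneck(d, 0, 2))  # 1, matching the minimax jump via the middle point
```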
The paper proves some notable theory about the properties of MMJ. Mostly, they show how: 1) When adding a new vertex $p$ to a set $\Omega$, $p$'s MMJ within the context of $\Omega+p$ can be computed knowing all pairwise MMJ's within $\Omega$ within the context of $\Omega$. This effectively adds a new point and evaluates it within the complete, updated context. 2) Given the MMJs for this additional point $p$ within the context of $\Omega +p$, expand the MMJ context of all other pairs in $\Omega$ to the context of $\Omega+p$.
This can then be used in a very dynamic programming-like manner to start with just two points, add new points $p$ and find the context of $p$ and all other points, and then update all known existing MMJs to the new context. This is their algorithm 1, requiring $O(n^3)$ time. They also use properties of the MST to bypass unnecessary calculation to yield algorithm 2, which takes only $O(n^2)$ time (to find the MST).
They then evaluate the performance of algorithms using the MMJ measure on irregular shaped clusters to verify that MMJ helps identify these. This makes sense, since they likely have small MSTs, but not small average/sum/max/etc distances within clusters. Notably, they show how MMJ improves K-means.
Strengths: I think MMJ is a very cool metric with nice properties and intuition, particularly that related to the MST (I wish the authors had spent more time discussing this!). Their findings are nice and relatively simple to understand (in spite of the presentation). They show that it helps K-means expand to more complicated cluster shapes, and overall it is a very nice, NeurIPS-worthy result. However, as I will explain in the weaknesses section, I do not think this paper is in an acceptable state for NeurIPS.
Weaknesses: There are a few notable downsides. In terms of the result, I'm not entirely convinced of its novelty. How much of this is actually a re-iteration of MST-based methods already understood? Is this really better than other MST-based algorithms on irregular clusters (think single linkage)? I know that there is a lot of literature that explores irregular shaped clusters, but I am not an expert in this area and so I cannot place this work in the context of existing results. I wish the authors would explain that. Though even if these algorithms aren't entirely better than state of the art, the novelty of the nice formulation of MMJ is certainly appreciated.
However, the biggest flaw in the paper is the writing quality. There are places where the paper is nice and concise, but most of the time it just lacks the exposition to understand things at a higher level, or adequate details to fully understand what is happening. Formal proofs are contained in the paper, but the jumps in some of the proofs are too large. Theorems and proofs are placed back to back with no high-level explanation. Algorithms are written in pseudocode with only the briefest justifications and no thorough explanations. This is not an acceptable paper for NeurIPS, and I think these issues are too extensive to simply ask the authors to revise. Though if other commenters disagree, I am amenable to changing my opinion.
And one of the disappointing things about this is how natural this work is and how much it lends itself to nice intuitive explanations and visual depictions! For instance, you could do some very nice visualizations of Algorithm 1, where you depict a matrix and show which indices have been calculated to what context $\Omega_n$ at each time point. This clarifies the different purpose of the two loops.
I hope to see this paper submitted again later in a more cleaned up state!
Technical Quality: 4
Clarity: 1
Questions for Authors: 1. What exactly is a path-based distance?
2. When you define MMJ, you do not specify exactly what a path is. All you introduce is a point set and a distance function. Are we to assume that a "path" is done on the complete graph with edge weights equal to the distances between points? Please clarify in the paper. I read the paper with this assumption.
3. How would MMJ compare to the similarly defined max-min jump measure? I would think it would be more akin to a similarity measure than a distance.
4. Line 114: what is M(P)? Also, how do you arrive at the contradiction in this proof? This needs more explanation!
5. Line 122: What do you mean there are n possibilities? Aren't there something like n! candidate paths, since you check all paths of all lengths with a fixed set of endpoints?
6. Thm 2 proof: You say X enumerates them all - what about the direct path that just traverses the single edge between the points?
7. Is Corollary 1 actually any different from Thm 2? It seems like in both you just have a set $\Omega_{[1,n]}$ and then you have a point outside of the set that you want to get the MMJ to (with respect to some reference point). I think Corollary 1 is just a cleaner way to state Theorem 2.
8. In Theorem 3, is it actually true that $x_2$ is the MMJ if $\Omega_{n+1}$ is in the path? It doesn't break your argument, but if the solution path doesn't go through $\Omega_{n+1}$, then it's possible that the MMJ path from i to n+1 and from j to n+1 share an edge. So the MMJ path from i to j that goes through n+1 could actually be worse since we aren't allowed to traverse that edge twice. Again, the proof still works, but I think that claim only holds if the MMJ path from i to j does, in fact, go through n+1. Otherwise, we might have something larger, which is ignored by taking the minimum.
9. Section 4.1: Why does k appear twice in the matrix subscripts? Couldn't it just be something like $M_k$?
10. How does MMJ-K-means perform on spherical data compared to K-means? Presumably, worse. It would be nice to quantify how much worse.
Confidence: 3
Soundness: 4
Presentation: 1
Contribution: 3
Limitations: None notable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the detailed and insightful review.
Reviewer vcc5's main concern is the writing quality. We will try our best to improve the writing quality according to all the reviewers' suggestions. We will employ a professional Academic Editing Service if the paper is accepted.
We have open-sourced the algorithms and models in this paper. By debugging step-by-step with a simple example, it is not very difficult for readers to grasp all the details of the algorithms and models discussed in the paper.
Reviewer vcc5 indicated a key merit of Algorithm 1 (MMJ distance by recursion): the algorithm can be warm-started. In other words, the algorithm supports online machine learning, while other algorithms (e.g., a modified version of the Floyd-Warshall algorithm) for solving the MMJ distance matrix must be cold-started. A warm start can be much faster than a cold start.
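How such a warm start could look can be sketched as follows (an illustrative outline consistent with the recursion and update discussed in this rebuttal, not the paper's actual implementation; the function name is ours):

```python
def warm_start_update(d_new, M):
    """Extend an n x n MMJ matrix M to (n+1) x (n+1) after adding a point.

    d_new[i] is the direct jump length from the new point to existing
    point i. Step 1 computes the new row by trying each of the n
    possible first jumps; step 2 lets every old pair optionally route
    through the new point. O(n^2) total, versus recomputing from
    scratch. Illustrative sketch, not the paper's implementation.
    """
    n = len(M)
    # Step 1: MMJ from the new point to each existing point.
    row = [min(max(d_new[k], M[k][i]) for k in range(n)) for i in range(n)]
    # Step 2: old pairs may improve by hopping through the new point.
    out = [[min(M[i][j], max(row[i], row[j])) for j in range(n)] + [row[i]]
           for i in range(n)]
    out.append(row + [0])
    return out

M = [[0, 2], [2, 0]]   # two points at coordinates 0 and 2
d_new = [1, 1]         # new point at coordinate 1
M2 = warm_start_update(d_new, M)
print(M2[0][1])  # 1: the old pair can now hop through the new midpoint
```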
Question:
"How much of this is actually a re-iteration of MST-based methods already understood? Is this really better than other MST-based algorithms on irregular clusters (think single linkage)?"
Response:
MMJ-K-means generally performs better than other algorithms that can deal with irregular clusters. E.g., single linkage clustering is very sensitive to outliers, while MMJ-K-means is not; this merit is confirmed by Figure 3, in which all the datasets contain some outliers. Density-based clustering algorithms like DBSCAN are very sensitive to their hyper-parameters. DBSCAN has a hyper-parameter "eps", which is "the maximum distance between two samples for one to be considered as in the neighborhood of the other." The "eps" hyper-parameter is not intuitive and is difficult to optimize: if we fix "eps" and simply magnify or shrink each data point's coordinates, we may get a different clustering result, which is undesirable.
DBSCAN has an extra hyper-parameter "$ min\\_samples $", which is "the number of samples (or total weight) in a neighborhood for a point to be considered as a core point." More hyper-parameters mean more tuning effort. MMJ-K-means has only one hyper-parameter, "K," the number of clusters.
Comparison between MMJ-K-means and HDBSCAN is shown in the auxiliary PDF file.
Question 1:
"What exactly is a path-based distance?"
Response:
A path-based distance is different from a coordinate-based distance. In a coordinate-based distance, such as the Euclidean or Manhattan distance, the distance between two points is totally determined by their coordinates. In a path-based distance like the MMJ distance, we need to check all the paths between two points in a graph to calculate their distance.
Question 2:
"When you define MMJ, you do not specify exactly what a path is ..."
Response:
Yes. You are right. A dataset can be straightforwardly converted to a complete graph.
Question 3:
"How would MMJ compare to the similarly defined ..."
Response:
MMJ distance is a distance, not a similarity measure. It fulfills all the requirements of a metric; e.g., the triangle inequality holds for the MMJ distance. A similarity measure does not need to fulfill the requirements of a metric space.
Question 4:
"Line 114: what is ..."
Response:
The meaning of M(P) is explained in Lines 115 and 116: M(P) is the maximum jump in path P.
The contradiction can be arrived at intuitively from the definition of the MMJ distance. If we assume $ MMJ(p,q ~| ~\Omega) > \delta $, that means a person who wants to reach q from p must be able to jump farther than $ \delta $; if the person can only jump a distance of $ \delta $, then q is unreachable from p. However, suppose the person chooses to jump from p to i, then from i to j along an MMJ path, then from j to q: she reaches q from p with a jumping ability of only $ \delta $. This is the contradiction. The other contradiction is derived similarly. The explanation is wordy, but the logic is simple.
Question 5:
"Line 122: What do you mean there are ..."
Response:
The $ n! $ candidate paths can be classified into n possibilities. If a person starts jumping from $ \Omega_{n+1} $, how many choices are there for the second point? Only n: $ \Omega_1 $, $ \Omega_2 $, ..., $ \Omega_n $. Once the second point is determined, the person can jump arbitrarily within the context $ \Omega_{[1,n]} $, and we already know all the MMJ distances under that context. Note that the paths are loop-less: since $ \Omega_{n+1} $ appears as the first point of the path, it cannot appear among the remaining points.
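The counting argument above amounts to a one-line recursion; a sketch, assuming `M` already holds all pairwise MMJ distances under the old context (the helper name is ours, not the paper's):

```python
def mmj_to_new_point(d_new, M, q):
    """MMJ distance from a newly added point p to existing point q.

    d_new[i] -- direct jump length from p to each existing point i
    M[i][j]  -- pairwise MMJ distances among the n existing points
    The first jump from p lands on one of the n existing points i
    (the "n possibilities"); the rest of the path stays in the old
    context, whose minimax cost is already known as M[i][q]. The
    direct single-edge path is the case i == q, where M[q][q] == 0.
    """
    return min(max(d_new[i], M[i][q]) for i in range(len(M)))

# Three existing collinear points at 0, 1, 2 (MMJ matrix is all 1s
# off the diagonal); new point p at coordinate 3:
M = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
d_new = [3, 2, 1]      # distances from p to the points at 0, 1, 2
print(mmj_to_new_point(d_new, M, 0))  # 1: first jump to the point at 2
```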
Question 6:
"Thm 2 proof: You say X enumerates ..."
Response:
Good question! The direct path that just traverses the single edge between the points is included in the n possibilities: in this case, the second item in Equation 7 is simply zero. Note that a point's MMJ distance to itself is always zero, regardless of the context.
Question 7:
"Is Corollary 1 actually any different from ..."
Response:
Good point. Corollary 1 differs from Theorem 2 only in notation, and it is indeed easier to understand. However, without Theorem 2, it is harder to explain Step 7 of Algorithm 1 to readers.
Question 8:
"In Theorem 3, is it actually ..."
Response:
Yes. If $ \Omega_{n+1} $ is confirmed to be on the path, then $ x_2 $ is the MMJ distance.
Question 9:
"Section 4.1: Why does k appear twice ..."
Response:
The second k indicates the context of the MMJ distance matrix. There could be multiple choices for the second k, even when the first k is fixed. E.g., the second k can be set to k, k+1, k+2, ..., N.
Question 10:
"How does MMJ-K-means perform ..."
Response:
MMJ-K-means performs as well as K-means on spherical data. See data 1, data 62, data 78, and data 89 in Figure 3; they are spherical datasets.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses, however the writing quality issue still stands and I believe the changes required are too great to allow acceptance. | Summary: This paper presents the Min-Max-Jump (MMJ) distance concept and two calculation methods, focusing on path optimization in data analysis and clustering. The contributions include introducing MMJ distance, proposing efficient calculation methods, discussing its properties and applications, and offering a user-friendly approach for practical implementation. Overall, the paper introduces a new distance metric for path optimization and data analysis, providing useful tools and insights for various applications in the field.
Strengths: S1: The paper demonstrates strength through its meticulous use of theorems and proofs, enhancing the credibility and robustness of the research findings.
S2: Clear visualization of results in the paper aids in effectively conveying complex information to the readers, improving understanding and interpretation.
S3: Extensive literature citations throughout the paper showcase a strong foundation of existing knowledge and research, adding depth and scholarly rigor to the study's presentation.
Weaknesses: W1: The paper's writing style deviates from academic norms, indicating a need for improvement in writing proficiency.
W2: The extremely brief Introduction lacks a detailed definition of the problem, its significance, and challenges. Moreover, it lacks citation support for the points presented. While Section 2.1 mentions methods like k-NN, UMAP, HDBSCAN, it fails to provide corresponding references, lacking essential academic backing.
W3: The overall structure of the paper lacks clarity, as it introduces different distance metrics in Section 2.1 but introduces a new distance measurement approach in Section 2.4, leading to disjointed logic.
W4: The presentation of various distance metrics in Section 2.1 appears disorganized and lacks coherence.
W5: The extensive definition provided towards the end of Section 6.3 disrupts the logical flow of the paper, suggesting a need to adjust the paper's structural coherence.
Technical Quality: 3
Clarity: 2
Questions for Authors: Q1:Can the authors provide more insights into the practical implications of their research findings and how they can be applied in real-world scenarios?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The paper does not discuss limitations. The authors seem to perceive their work as solely testing models with datasets without considering the shortcomings of the algorithms themselves.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer BgWh's main concern is the writing style of the paper. We will polish the writing according to Reviewer BgWh's constructive suggestions. We will employ a professional Academic Editing Service if the paper is accepted.
1. We will revise the introduction section to present a detailed definition of the problem, its significance, and challenges.
2. We will revise Section 2.1 to incorporate corresponding references to k-NN, UMAP, HDBSCAN.
3. We will refine Section 2.1, and merge Section 2.1 with Section 2.4, to improve the structural coherence.
4. We will move the definitions in Section 6.3 to previous sections.
Question:
"Q1:Can the authors provide more insights into the practical implications of their research findings and how they can be applied in real-world scenarios?"
Response:
1. Both Algorithm 1 and Algorithm 2 can be used to solve the all-pairs minimax path problem or widest path problem. Algorithm 1 (MMJ distance by recursion) supports online machine learning, in which data becomes available in a sequential order, whereas a traditional algorithm like a modified Floyd-Warshall algorithm does not have this merit. A new study shows that Algorithm 2 (MMJ distance by Calculation and Copy) is by far the fastest algorithm for solving the all points path distance (APPD) matrix (see our response to Reviewer p2N6). Solving the minimax path problem or widest path problem has extensive practical applications in fields such as network routing, transportation, supply chain management, and telecommunication networks.
2. The MMJ-K-means model overcomes a key drawback of the standard K-means model. It can replace K-means for clustering, especially on datasets that are not the union of well-separated, spherical clusters, where standard K-means is unusable. For real-world large high-dimensional data like ImageNet, we can pre-process the data with dimension reduction or representation learning techniques, then use MMJ-K-means and the MMJ-based internal clustering evaluation index to analyze the processed data.
Question:
"The paper does not discuss limitations. The authors seem to perceive their work as solely testing models with datasets without considering the shortcomings of the algorithms themselves."
Response:
A possible limitation of the models is that they may not work in high dimensions. However, due to the "curse of dimensionality," a low-dimensional model does not necessarily need to work in high dimensions; it is acceptable that it only works in low-dimensional settings. A low-dimensional model can still be used to analyze high-dimensional data: we just need to pre-process the data with dimension reduction or representation learning techniques, and then use the low-dimensional model to analyze the processed data.
Experimental comparisons reveal that min-max-jump based k-means clustering is better than standard k-means clustering. Also, the min-max-jump distance is shown to be a better internal clustering evaluation index.
This referee feels that this work is rather incremental. Also, experiments have been conducted only on very small datasets.
Strengths: The authors demonstrate the efficacy of the min-max-jump distance on two different applications.
Experimental results have also been supplied to support their assertions.
Weaknesses: The work done is incremental with very limited novelty.
Extensive experiments are called for.
Technical Quality: 3
Clarity: 2
Questions for Authors: None
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Question:
"This referee feels that this work is rather incremental." "The work done is incremental with very limited novelty."
Response:
The research is not incremental, but fundamental. The fundamental contributions of the work include:
1. Applying the MMJ-based internal clustering evaluation index to the Clustering with Neural Network and Index (CNNI) model yields the first inductive clustering model that can deal with non-flat geometry data. If you know of another clustering model with this property, please let us know.
2. Algorithm 2 (MMJ distance by Calculation and Copy) is by far the fastest algorithm for solving the all points path distance (APPD) matrix. It can be used to solve the all-pairs minimax path problem or widest path problem in undirected dense graphs. In a new study, we implemented and tested the algorithm, and the experimental results show it is the fastest algorithm for solving the APPD matrix. The source code of the algorithm is publicly accessible, but we cannot provide a URL due to the double-blind review process.
3. As indicated by Reviewer vcc5, Algorithm 1 (MMJ distance by recursion) supports online machine learning, in which data becomes available in a sequential order. The algorithm can be warm-started, while other algorithms for solving the APPD matrix can only be cold-started.
4. MMJ-CH is the SOTA (state-of-the-art) internal clustering evaluation index, achieving an accuracy of 90/145. We have provided an API for readers to test their own internal clustering evaluation indices. If you have found an index that performs better than MMJ-CH, please let us know.
5. A key drawback of K-means is that it cannot deal with datasets that are not the union of well-separated, spherical clusters. MMJ-K-means has overcome this demerit of K-means. MMJ-K-means generally outperforms other algorithms which can deal with irregular clusters.
Question:
"Also, experiments have been conducted only on very small datasets."
Response:
We have not tested the models on large high-dimensional data like ImageNet, because the models introduced in the paper are intended to be low-dimensional models. Due to the "curse of dimensionality," a low-dimensional model does not necessarily need to work in high dimensions; it is acceptable that it only works in low-dimensional settings. A low-dimensional model can still be used to analyze high-dimensional data by pre-processing the data with dimension reduction or representation learning techniques, and then using the low-dimensional model on the processed data.
Another reason for not testing the models on large high-dimensional datasets is that there is no trustworthy ground-truth labeling for high-dimensional data. Almost all high-dimensional datasets are labeled by humans, so there is inevitably some subjectivity in the labels. E.g., if we invited a different group of humans to re-label the MNIST or ImageNet dataset, it is highly possible that we would obtain a different ground-truth labeling. Therefore, the ground-truth of high-dimensional datasets is not real ground-truth. In low dimensions, like 2 or 3, we can understand a dataset by directly observing its layout; there is no subjectivity involved when the clusters are well-separated, so the ground-truth is real in low dimensions. Even when the clusters overlap, we can directly observe how they overlap. In high dimensions, we cannot do this.
To test the MMJ-based models on large high-dimensional data, we would need a trustworthy ground-truth labeling to compare against. To be trustworthy, the labeling should be objective; it should not be decided by some group of humans. Unfortunately, no such trustworthy ground-truth labeling exists for high-dimensional data.
Question:
"Extensive experiments are called for."
Response:
We have tested the models and algorithms using 145 clustering benchmark datasets. Typically, a standard machine learning paper tests its models or algorithms on several, or at most dozens of, datasets. In contrast, our study uses 145 datasets, making our experiments exceptionally extensive.
---
Rebuttal Comment 1.1:
Comment: Rebuttal has been read. | Rebuttal 1:
Rebuttal: Thanks for all the reviewers' insightful review and constructive suggestions.
The fundamental contributions of the work can be summarized as following:
1. Applying the MMJ-based internal clustering evaluation index to the Clustering with Neural Network and Index (CNNI) model yields the first inductive clustering model that can deal with non-flat geometry data. An inductive clustering model can not only calculate the label of a point in the dataset, but also calculate the label of a new point that it has not seen. Its advantage is obvious; no other clustering model has this merit.
2. Algorithm 2 (MMJ distance by Calculation and Copy) is by far the fastest algorithm for solving the all points path distance (APPD) matrix. It can be used to solve the all-pairs minimax path problem or widest path problem in undirected dense graphs. In a new study, we implemented and tested the algorithm on an ordinary desktop computer with a "3.3 GHz Quad-Core Intel Core i5" CPU and 16 GB RAM. The experimental results show it is the fastest algorithm for solving the APPD matrix: it can calculate the APPD matrix of 10,000 points in about 67 seconds, while other algorithms cannot finish the calculation in two hours. The algorithm is currently implemented in Python; a C/C++ implementation would be even faster. The source code of the algorithm is publicly accessible, but we cannot provide a URL due to the double-blind review process.
3. As indicated by Reviewer vcc5, Algorithm 1 (MMJ distance by recursion) supports online machine learning, in which data becomes available in a sequential order. The algorithm can be warm-started, while other algorithms for solving the all points path distance (APPD) matrix can only be cold-started. As is well known, a warm start can be much faster than a cold start. E.g., suppose we have calculated the APPD matrix $ M_G $ of a large graph $ G $, and then we receive a new point (or node) $ p $, where $ p \notin G$. The new graph is denoted $G + p$. To calculate the APPD matrix of $G + p$ with other algorithms, we need to start from scratch. Algorithm 1 can instead reuse the already-computed $ M_G $, using the conclusions of the theorems and corollary introduced in this paper. This is especially useful when the graph is a directed dense graph, where starting from scratch needs $ O(n^3) $ complexity, but a warm start of Algorithm 1 (MMJ distance by recursion) only needs $ O(n^2) $ complexity.
4. MMJ-CH is the SOTA (state-of-the-art) internal clustering evaluation index, which achieves an accuracy of 90/145. We have provided an API for readers to test their own internal clustering evaluation indices. If you have found an index that outperforms MMJ-CH, please let us know.
5. A key drawback of K-means is that it cannot deal with datasets that are not the union of well-separated, spherical clusters. MMJ-K-means has overcome this demerit of K-means.
MMJ-K-means generally performs better than other algorithms which can deal with irregular clusters. E.g., single linkage clustering is very sensitive to outliers. MMJ-K-means is not sensitive to outliers, this merit is confirmed by Figure 3. In Figure 3, all the datasets have some outliers.
Density-based clustering algorithms like DBSCAN are very sensitive to their hyper-parameters. DBSCAN has a hyper-parameter "eps," which is "the maximum distance between two samples for one to be considered as in the neighborhood of the other." The "eps" hyper-parameter is not intuitive and is difficult to optimize. If we fix "eps" and just magnify or shrink each data point's coordinates, we may get a different clustering result. That means that if we use different measurement units for distance (e.g., kilometers instead of meters), we will get a different clustering result, which is undesirable.
DBSCAN has an extra hyper-parameter "$ min\\_samples $," which is "the number of samples (or total weight) in a neighborhood for a point to be considered as a core point." More hyper-parameters mean more tuning effort. MMJ-K-means has only one hyper-parameter, "K," the number of clusters, which is very intuitive. If we fix K in MMJ-K-means and then scale the coordinates of the data points, the clustering result remains invariant.
MMJ-K-means also performs better than HDBSCAN. Comparison between MMJ-K-means and HDBSCAN is shown in the auxiliary PDF file.
The models introduced in the paper are intended to be low-dimensional models. A possible limitation of the paper is that the introduced models may not work in high dimensions. However, due to the "curse of dimensionality," a low-dimensional model does not necessarily need to work in high dimensions; it is acceptable that it only works in low-dimensional settings. A low-dimensional model can still be used to analyze high-dimensional data: we just need to pre-process the data with dimension reduction or representation learning techniques, and then use the low-dimensional model on the processed data.
The reviewers have concern about the writing of the paper. We will try our best to improve the writing quality according to all the reviewers' suggestions. We will employ a professional Academic Editing Service if the paper is accepted.
Pdf: /pdf/ff4b02adc8d9ba66e8fd3bf0f25ddf3a3a4a1a8a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Vision Transformer Neural Architecture Search for Out-of-Distribution Generalization: Benchmark and Insights | Accept (poster) | Summary: The paper shows that in-domain accuracy and training-free NAS accuracy predictions correlate poorly with out-of-domain accuracy, while characteristics of the model such as FLOPs, Params, and Embed-Dim (number of channels) correlate much better with out-of-domain accuracy. To do this, they train a supernet that contains all possible architectures (subnets), then they validate these subnets on 9 datasets: ImageNet/C/D/P/O/A/R/Sketch/Stylized.
Strengths: They explore various ways of increasing OOD accuracy, which is most important for real-world problems. They show that conventional approaches, such as training-free NAS or relying on in-domain accuracy, do not work, while they find that FLOPs, Params, and Embed-Dim correlate well with OOD accuracy. To do this, they introduce the OoD-ViT-NAS benchmark for NAS research on ViTs' OOD generalization, which includes 3000 diverse ViT architectures evaluated on 9 datasets. This allows us to look for new and better approaches for finding model structures that improve OOD accuracy.
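The rank-correlation methodology summarized here can be sketched with Kendall's tau (the numbers below are synthetic, invented for illustration, not taken from the OoD-ViT-NAS benchmark):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall's tau-a rank correlation between two equal-length lists."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    pairs = len(xs) * (len(xs) - 1) / 2
    return (concordant - discordant) / pairs

# Synthetic illustration: params track OOD accuracy; in-domain
# accuracy ranks the same architectures much less consistently.
params  = [10, 20, 30, 40, 50]   # e.g., millions of parameters
ood_acc = [31, 35, 40, 44, 48]
id_acc  = [72, 75, 71, 76, 74]
print(kendall_tau(params, ood_acc))  # 1.0: perfectly monotone
print(kendall_tau(id_acc, ood_acc))  # much weaker rank agreement
```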
Weaknesses: You show that the embedding dimension has the highest impact among ViT architectural attributes, while network depth has a slight impact on OoD generalization. But the paper lacks an explanation that although embed-dim (number of channels) correlates with OOD accuracy better than depth, this does not necessarily mean that to increase OOD accuracy only embed-dim should be increased, since the computational cost and memory costs may be different for embed-dim and depth. Therefore, to create either the largest and most accurate model that fits in memory, or the most optimal model in terms of speed and accuracy, it may be optimal to increase the depth rather than embed-dim. There are also papers [1,2,3] that show that it is necessary to simultaneously increase several model parameters (depth, number of channels, resolution) in optimal proportions.
You present charts of OOD accuracy versus number of parameters (Fig. 4, K19 - K24), but it would also be great to present charts of OOD accuracy versus latency, and OOD accuracy versus GPU memory consumption, because these three model characteristics (OOD accuracy, latency, memory consumption) are the most important for real-world problems.
1. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks, Mingxing Tan, Quoc V. Le, 2019
2. EfficientNetV2: Smaller Models and Faster Training, Mingxing Tan, Quoc V. Le, 2021
3. EfficientDet: Scalable and Efficient Object Detection, Mingxing Tan, Ruoming Pang, Quoc V. Le, 2019
Technical Quality: 3
Clarity: 3
Questions for Authors: - Have you tried plotting OOD-accuracy versus Latency or GPU memory consumption while increasing various model parameters such as: embed_dim, depth, MLP-ratio, num of heads, etc.? Or find correlations between OOD accuracy and these parameters, taking into account their impact on latency and memory consumption.
- Have you tried to find optimal model scaling factors (similar to works [1, 2, 3]) to increase OOD-accuracy in optimal way with respect to latency or memory consumption?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: It might be worth adding to the limitations:
The results obtained on ImageNet classification datasets may not correlate with results on real-life problems, because ImageNet classification requires predicting the class of only one, usually large, object in a low-resolution image without predicting its location (box, polygon, or mask), whereas real problems usually require predicting the class and position of many objects, including very small ones, in a high-resolution image. Moreover, the ImageNet classification task is ambiguous: which of the many objects in the image should have its class predicted?
Thus, a continuation of this research direction may be finding correlations, NAS approaches, new network structures, their parameters, and scaling factors that achieve the highest OOD accuracy on tasks close to real ones (such as dense prediction tasks: segmentation, detection, etc.) in a way that is optimal with respect to latency and memory consumption.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive feedback and the valuable comments of the Reviewer.
> **Q1:** You show that the embedding dimension has the highest impact among ViT architectural attributes, while network depth has a slight impact on OoD generalization. But the paper lacks an explanation that although embed-dim (number of channels) correlates with OOD accuracy better than depth, this does not necessarily mean that to increase OOD accuracy only embed-dim should be increased, since the computational cost and memory costs may be different for embed-dim and depth. Therefore, to create either the largest and most accurate model that fits in memory, or the most optimal model in terms of speed and accuracy, it may be optimal to increase the depth rather than embed-dim. There are also papers [1,2,3] that show that it is necessary to simultaneously increase several model parameters (depth, number of channels, resolution) in optimal proportions.
**A1:** Thank you for the insightful comments. Taking into account the memory constraint, we design an experiment based on a human-designed ViT architecture that provides variability to respond to the reviewer's concerns. The results can be found in Table R.1 in the rebuttal PDF. They are generally consistent with our findings: increasing the embedding dimension is the most effective option among ViT architectural attributes for improving OoD generalization.
Increasing multiple ViT structural attributes optimally is very interesting; we will study this direction in the future.
>**Q2:** You present charts of OOD accuracy and number of parameters (Fig. 4, K19 - K24), but it would also be great to present charts of OOD-accuracy and latency, and OOD-accuracy and GPU memory consumption. Because these 3 model characteristics (OOD-accuracy, Latency, Memory consumption) are the most important for real-world problems.
**A2:** Following your suggestion, we provide charts of OOD-accuracy and latency, and OOD-accuracy and GPU memory consumption in Figure R.3 and Figure R.4 in the rebuttal PDF.
> **Q3:** Have you tried plotting OOD-accuracy versus Latency or GPU memory consumption while increasing various model parameters such as: embed_dim, depth, MLP-ratio, num of heads, etc.? Or find correlations between OOD accuracy and these parameters, taking into account their impact on latency and memory consumption.
**A3:** Following your suggestion, we provide charts of plotting OOD-accuracy versus FLOPs while increasing various ViT structural attributes in Figure R.2 in the rebuttal PDF.
> **Q4:** Have you tried to find optimal model scaling factors (similar to works [1, 2, 3]) to increase OOD-accuracy in optimal way with respect to latency or memory consumption?
**A4:** We have not tried this setup yet. In this study, we focus on the impact of individual ViT structural attributes on OoD generalization. Optimal scaling of multiple ViT structural attributes is an interesting direction for our future research.
> **Q5:** It might be worth adding to the limitations: The results obtained on Imagenet classification datasets may not correlate with the results of real-life problems. Because Imagenet classification requires to predict the class of only one usually large object in the low resolution image without predicting its location (box, polygon or mask), whereas in real problems it is usually necessary to predict the class and position of many objects, incl. very small in high resolution image. Moreover, there is ambiguous in the Imagenet classification task, the class of which of the many objects in the image should be predicted? Thus, a continuation of the research in this direction may be the finding of correlations, NAS approaches, new network structures, their parameters and scaling factors to achieve the highest OOD accuracy on tasks close to real ones (such as Dense prediction tasks: Segmentation, Detection, etc. ) in an optimal way with respect to latency and memory consumption.
**A5:** We believe this is a common issue among most OoD generalization methods that focus on image classification. We agree with the reviewer's perspective on this limitation and will add it in the limitation section in the revision. This is an interesting direction for our future work.
We sincerely hope the reviewer will consider increasing the rating if our responses have addressed all the questions.
---
Rebuttal Comment 1.1:
Title: I leave the same rating: Strong Accept.
Comment: Thanks to the authors for additional comparison results that add confidence to the conclusions of the paper.
It can be seen that increasing embed-dim increases OOD accuracy in an optimal way relative to increasing the inference latency.
Two minor remarks to the rebuttal:
1. It is better in Tables R.1 and R.2 to use GPU memory consumption instead of the number of parameters, which does not always correlate well with GPU memory consumption.
2. In Figure R.2, it is better to use larger ranges for MLP-Ratio and #Heads, so that curves are visible instead of points clustered in one place.
---
Reply to Comment 1.1.1:
Title: Thank you for the insightful comments and the Strong Accept rating
Comment: We sincerely appreciate the reviewer’s positive feedback and the time taken to review our rebuttal. Thank you for your suggestion, we will update our paper in the revision. | Summary: This paper introduces OoD-ViT-NAS, a benchmark designed for evaluating ViT architectures' ability to generalize under Out-of-Distribution (OoD) shifts. This paper reveals that ViT design significantly impacts OoD generalization, In-Distribution (ID) accuracy does not reliably predict OoD performance, and simpler metrics like parameter count can sometimes predict OoD accuracy better than complex training-free NAS methods.
Strengths: This paper proposes a new benchmark for ViT NAS.
Weaknesses: 1. While the paper introduces a new benchmark, it doesn't sufficiently discuss the practical implications of the findings or potential applications. Including a section on how these insights can influence future ViT designs would add significant value. Furthermore, assessing whether ViTs designed based on these insights outperform those designed by humans would be meaningful to include in experiments and results.
2. The analysis of the experimental results is somewhat superficial. It presents observations without delving into deeper discussions. There should be more profound insights and detailed discussions on why certain ViT architectural attributes perform better under OoD conditions. This could involve providing theoretical justifications or proposing hypotheses that could be explored in future research.
3. The paper states that simple proxies like #Param and #Flop outperform more complex Training-free NAS methods but does not provide enough insight into why this might be the case. A deeper analysis of these findings, including possible reasons and implications, would strengthen the argument.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please address the weaknesses mentioned above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive feedback and the valuable comments of the Reviewer.
>**Q1:** While the paper introduces a new benchmark, it doesn't sufficiently discuss the practical implications of the findings or potential applications. Including a section on how these insights can influence future ViT designs would add significant value.
**A1:** Thank you for your suggestion. We clarify that, in addition to the new benchmark, our study produces significant insights for guiding the design of ViT architectures. Specifically, among ViT structural attributes, increasing the embedding dimension can generally improve the OoD generalization of ViT architectures. This insight leads to a simple method that achieves ViTs outperforming well-established human-designed ViT architectures. Please refer to **Q2** for the details.
Besides, our investigation also provides valuable ViT architectural guidelines for OoD generalization:
- First, departing from existing OoD generalization methods, our study is the first to show that ViT architecture design has a considerable influence on OoD generalization of ViT. This observation encourages future research to put more focus on ViT architecture research for OoD generalization.
- Second, our study suggests that current architectural insights based on ID accuracy might not translate well to OoD generalization.
- Third, when utilizing training-free NAS to search for an OoD-generalizable ViT, state-of-the-art zero-cost proxy methods are ineffective. Instead, simple proxies like #Param or #FLOPs could be the best options available so far.
> **Q2:** Furthermore, assessing whether ViTs designed based on these insights outperform those designed by humans would be meaningful to include in experiments and results.
**A2:** Thank you for the insightful feedback. Following the reviewer's suggestion, we include a comparison between ViT architectures designed based on our insights and well-established human-designed ViT architectures, as shown in Table R.2 in the rebuttal PDF. The results demonstrate that architectures based on our insights outperform those designed by humans.
For example, scaling up a ViT architecture (e.g., from ViT-B-32 to ViT-L-32) by humans typically involves compound scaling of multiple ViT structural attributes. However, our findings suggest that not all ViT structural attributes need to be increased to benefit OoD generalization. Among these attributes, increasing the embedding dimension is the most crucial factor for improving OoD generalization. By only increasing the embedding dimension, our ViT architectures (e.g., increasing the embedding dimension of ViT-B-32) are significantly more efficient and outperform compound-scaling architectures (e.g., ViT-L-32).
>**Q3:** Providing theoretical justifications or proposing hypotheses that could be explored in future research.
**A3:** Following Reviewer’s suggestion, in this rebuttal, we provide an additional frequency analysis on our main finding showing that increasing embedding dimensions helps ViT learn more High-Frequency-Component (HFC), leading to better OoD generalization.
Particularly, in the literature, a model achieving higher accuracy on the HFC of test samples indicates that the model has learned more HFC [c1, c4]. By learning more HFC, a model improves OoD generalization [c1, c2, c3].
Our hypothesis is that increasing the embedding dimension helps ViTs learn more HFC, resulting in improved OoD generalization. We strictly follow the experimental setup in [c3] for the HFC filtering process. As shown in Figure R.1 in the rebuttal PDF, when increasing embedding dimensions, the accuracies obtained on the HFC of test samples improve. According to [c1, c4], this observation supports that increasing the embedding dimension helps ViTs learn more HFC, which leads to better OoD generalization following [c1, c2, c3].
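For reference, the HFC filtering in [c3] decomposes each test image in the Fourier domain with a centered low-pass mask and evaluates the model on the high-frequency remainder. Below is a minimal NumPy sketch of that decomposition; the circular mask shape and the `radius` cutoff are assumptions here, and [c3] specifies the exact configuration.

```python
import numpy as np

def split_frequency(img: np.ndarray, radius: int):
    """Split a 2-D grayscale image into low- and high-frequency
    components using a centered circular mask in the Fourier domain."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    low_pass = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    # Inverse-transform each masked spectrum back to image space.
    lfc = np.fft.ifft2(np.fft.ifftshift(spectrum * low_pass)).real
    hfc = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_pass)).real
    return lfc, hfc  # lfc + hfc reconstructs img up to float error
```

Evaluating a trained model only on the `hfc` images then probes how much high-frequency information it has learned, in the spirit of the analysis above.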
>**Q4:** Simple proxies like #Param and #Flop outperform more complex Training-free NAS methods but does not provide enough insight into why this might be the case. A deeper analysis of these findings, including possible reasons and implications, would strengthen the argument.
**A4:** Thank you for your feedback. Our study, the first on NAS for ViT OoD generalization, initially shows that #Param or #FLOPs outperforms more complex training-free NAS methods. A possible reason is that existing training-free NAS proxies are specifically designed for ID performance, not for OoD generalization. In future work, we aim to further understand this observation.
We sincerely hope the reviewer will consider increasing the rating if our responses have addressed all the questions.
[c1] Bai et al. "Improving vision transformers by revisiting high-frequency components." ECCV 2022.
[c2] Gavrikov et al. "Can Biases in ImageNet Models Explain Generalization?." CVPR 2024.
[c3] Wang et al. "High-frequency component helps explain the generalization of convolutional neural networks." CVPR 2020.
[c4] Shao et al. "On the adversarial robustness of vision transformers." arXiv 2021.
---
Rebuttal Comment 1.1:
Title: Thank you for your response. One remaining concern is regarding #Param and #Flop in A4.
Comment: Thank you for your responses and the latest results provided in A2.
A2 and A3 have largely addressed my concerns. A1 has partially addressed my concerns. Although I think the author's response in A1 is not perfect, I acknowledge that this is an open issue worthy of further discussion, so what the authors have done so far is acceptable.
One remaining concern is regarding #Param and #Flop in A4. From the results in Fig. 4 of the paper, it appears that the model's performance is generally monotonically related to #Param and #Flop. Does this mean that for this task, fine-grained architectural design might not be necessary, and that simply increasing the model size could improve performance? Besides, the conclusion that larger models perform better seems very intuitive, but aside from demonstrating that the architecture scales well, what further conclusions can be drawn from this? Additionally, this correlation trend is overall, but within certain #Param and #Flop ranges, there is some variance. So, when model size and computational cost are in certain ranges, does this #Param and #Flop-based approach become ineffective? Is it difficult to more accurately assess the performance differences between two models of similar size?
---
Rebuttal 2:
Title: Thank you Reviewer Lm5n for the prompt feedback
Comment: >**QA-Q1:** A2 and A3 have largely addressed my concerns. A1 has partially addressed my concerns. Although I think the author's response in A1 is not perfect, I acknowledge that this is an open issue worthy of further discussion, so what the authors have done so far is acceptable.
**QA-A1:** We are deeply grateful to the reviewer for taking the time to review our rebuttal. We are glad that we have addressed the reviewer's concerns.
>**QA-Q2:** One remaining concern is regarding #Param and #Flop in A4. From the results in Fig. 4 of the paper, it appears that the model's performance is generally monotonically related to #Param and #Flop. Does this mean that for this task, fine-grained architectural design might not be necessary, and that simply increasing the model size could improve performance?
**QA-A2:** Thank you for your comment. We note that the variance in Fig. 4, which the reviewer also mentions, suggests that simply increasing the model size may not always improve performance; this is also supported by Tab. 2 in our main paper, which shows that #Param and #FLOPs are not accurate training-free NAS proxies. Importantly, other factors need to be considered (e.g., fine-grained architecture design, as we investigate in our paper).
>**QA-Q3:** Besides, the conclusion that larger models perform better seems very intuitive, but aside from demonstrating that the architecture scales well, what further conclusions can be drawn from this?
**QA-A3:** From Fig. 4, larger models perform better in general, but there is some variance (also mentioned by the reviewer), so it is not necessarily true that scaling up the architecture yields better performance. Our analysis of architecture attributes suggests that architectures must be scaled up carefully, e.g., with more focus on the embedding dimension, as our work points out.
>**QA-Q4:** Additionally, this correlation trend is overall, but within certain #Param and #Flop ranges, there is some variance. So, when model size and computational cost are in certain ranges, does this #Param and #Flop-based approach become ineffective?
**QA-A4:** Yes, they do. As shown in Fig. 4, within certain ranges of #Param, some architectures outperform others. This suggests that #Param- and #FLOPs-based approaches become ineffective for predicting architecture performance within specific model-size ranges. To further address the reviewer's question, we analyze the correlation of #Param for models in certain ranges, with results presented in Tab. C1.a-b below. Specifically, to illustrate the constraint on #Param, we conduct a Kendall τ correlation analysis on our benchmark using architectures sampled from Autoformer-Small, applying different constraints on #Param ranges.
**Table C1. a.** Kendall τ ranking correlation between the OoD accuracies and the #Param proxy. To illustrate the constraint on #Param, we gradually reduce the constraints on #Param
| #Param range (M) | Correlation with ID Acc | Correlation with OoD Acc |
|:----------------:|:-----------------------:|:------------------------:|
| 14-34 | 0.7885 | 0.5789 ± 0.2944 |
| 14-30 | 0.7851 | 0.5754 ± 0.2819 |
| 14-26 | 0.7633 | 0.5741 ± 0.2314 |
| 14-22 | 0.7728 | 0.5487 ± 0.2003 |
| 14-18 | 0.5692 | 0.3056 ± 0.1988 |
**Table C1. b.** Kendall τ ranking correlation between the OoD accuracies and the #Param proxy. To illustrate the constraint on #Param, we investigate different ranges of #Param
| #Param range (M) | Correlation with ID Acc | Correlation with OoD Acc |
|:----------------:|:-----------------------:|:------------------------:|
| 14-34 | 0.7885 | 0.5789 ± 0.2944 |
| 30-34 | 0.1577 | 0.1419 ± 0.0818 |
| 26-30 | 0.4591 | 0.1998 ± 0.2906 |
| 22-26 | -0.0471 | 0.0513 ± 0.2215 |
| 18-22 | 0.7247 | 0.4036 ± 0.2225 |
| 14-18 | 0.5605 | 0.2955 ± 0.1945 |
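The ranking correlation in Tables C1.a-b compares the #Param proxy's ordering of architectures against their accuracy ordering, after filtering to a #Param range. A self-contained sketch of this computation (plain Kendall tau-a, ignoring ties; the range filter mirrors the tables' setup, and the function names are ours):

```python
def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) / number of pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def proxy_tau_in_range(params, accs, lo, hi):
    """Correlation of the #Param proxy with accuracy, restricted to
    architectures whose #Param (in millions) lies in [lo, hi]."""
    kept = [(p, a) for p, a in zip(params, accs) if lo <= p <= hi]
    ps, accs_kept = zip(*kept)
    return kendall_tau(ps, accs_kept)
```

A tau near 1 means ranking by #Param nearly reproduces the accuracy ranking; the low values in narrow #Param bands correspond to the proxy failing to separate similarly sized models.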
>**QA-Q5:** Is it difficult to more accurately assess the performance differences between two models of similar size?
**QA-A5:** Yes, this is challenging, but our work makes a contribution here. Specifically, by creating the OoD-ViT-NAS benchmark with fine-grained architectural differences and a range of ViT attributes, we enable assessing the performance differences between two models of similar size. This is possible as hinted by Fig. 4: we can specify a range of models of similar size and examine these models' OoD accuracy.
We hope the reviewer will consider increasing the rating if our response addresses your concerns.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. I believe most of my concerns have been addressed. I also believe this paper may have a moderate-to-high impact on the field. Therefore, I would adjust my rating to a weak accept.
---
Reply to Comment 2.1.1:
Title: Thank you for the insightful comments and increased rating
Comment: We are deeply grateful to the reviewer for the positive comments and the increased rating. We are glad that our response has addressed your concerns.
Once again, we thank reviewer's valuable time, which we appreciate a lot. If convenient, **we would really appreciate if the initial category assessment (Soundness, Presentation, Contribution) may be reviewed.** | Summary: This paper studies the out of distribution generalization of vision transformer architecture designs. Specifically the paper studies how in-distribution accuracies relate to OOD accuracies for a set pf 3000 architectures. In addition the paper also studies the correlation between different zero-cost proxies and OOD accuracies (on different variants of imagenet), observing that simply parameters or FLOPs are good proxies. Furthermore, the paper also studies the impact of different architectural decisions on the OOD accuracies, discovering that embedding dimension is of utmost importance for OOD generalization. The authors release code and the raw results to reproduce their experiments on AutoFormer-T,B and S space.
Strengths: - The paper is the first to study out-of-distribution generalization of architectures discovered by NAS
- The paper also studies zero-cost proxies on the ViT space, showing the ineffectiveness of modern/newer ZCPs
- The paper is written in a clear and easy-to-understand manner
- Experiments on ZCPs are thorough and the results are insightful
Weaknesses: - Currently I am missing the potential use-cases of the benchmark. The paper is more of a study and not a method (and fits better in the datasets and benchmarks track of NeurIPS). Although the paper is posed as a benchmark, I miss the intended use-case of the benchmark. Do the authors intend to release surrogates based on proxies, to allow API-based querying of these metrics for any possible architecture?
- The authors only study 3000 architectures, which are too few to lead to generalizable insights.
- Analysis of architecture importance: The search space of AutoFormer is a bit biased in the sense that there is larger (factor-of-two) variability in embedding dimension and less variability in number of layers. I wonder if this is the reason embedding dimension has a higher correlation with OOD accuracy compared to other architecture dimensions like number of heads?
- Multi-Objective Search: Since inheriting weights of a supernet and evaluation is fast, generally one would perform multi-objective random or evolutionary search directly on the pretrained supernet. This would lead to different Pareto-Fronts for different dataset distributions. Could the authors also present the results of multi-objective search directly on the OOD datasets?
- What is the cost/search time different in computing Pareto-Fronts based on MO-search v/s Zero-Cost-Proxies (a lot of these proxies are actually not zero cost)?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Do the authors train their own supernet, or train a one-shot supernet from scratch?
- How does this benchmark contribute to aiding development of future NAS techniques for OOD generalization?
- Check weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss limitations and broader impact sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive feedback and valuable comments.
>**Q1:** The paper is more of a study and not a method (and fits better in the datasets and benchmark track of NeurIPS).
**A1:** We justify that our work is suitable for the NeurIPS main track, as we develop novel insights leading to a simple yet effective method to improve OoD generalization for ViTs. Specifically, our insight that increasing the embedding dimension improves the OoD generalization of ViTs leads to a simple yet effective method that produces ViT architectures with high OoD performance. This method can achieve ViT architectures that outperform well-established human-designed ViTs (see Table R.2 in the rebuttal PDF).
We created the benchmark due to the lack of comprehensive data for our in-depth analysis of ViT architecture attributes for OoD generalization. This benchmark is also valuable for future research and other use-cases.
In contrast, papers (e.g., [b2, b3]) in the benchmark and dataset track usually merely construct datasets, propose benchmark frameworks for specific tasks, or discuss improvements in dataset development.
>**Q2:** Also though the paper is posed as a benchmark, I miss the intended use-case of the benchmark.
**A2:** Our OoD-ViT-NAS serves as a comprehensive benchmark for training-free NAS, akin to other benchmarks [10, 11, 17, 61]. It allows us to assess the effectiveness of new training-free NAS methods. Compared to other benchmarks, our OoD-ViT-NAS emphasizes high-resolution input and ViT search spaces, addressing not only ID performance but also, for the first time, OoD generalization.
Additionally, our benchmark allows us to analyze and discover new and improved approaches for finding ViT architectures that improve OoD accuracy.
>**Q3:** Do the authors intend on releasing surrogates based on proxies, to allow API based querying these metrics for any possible architecture?
**A3:** Thank you. We will consider your suggestion when releasing our benchmark publicly.
>**Q4:** The authors only study 3000 architectures, which are too few to lead to generalizable insights.
**A4:** We study 3000 ViT architectures, significantly more than existing research on ViT architectures for OoD generalization (3 ViT architectures in [13], 10 in [14], and 22 in [15]).
Our architectures consider important ViT structural attributes for model capacities [6] and support a wide range of model sizes.
>**Q5:** Is the bias in the Autoformer search space the reason embedding dimension has a higher correlation with OOD accuracy compared to other architecture dimensions like number of heads?
**A5:** We argue that the bias of ViT structural attributes in Autoformer does not affect our main finding on embedding dimensions.
The reviewer is correct that there is greater variability in embedding dimension than in other structural attributes within the Autoformer search space, as we utilize pre-trained Autoformer supernets to efficiently and effectively produce a large number of ViTs, enabling us to deeply study ViT architecture attributes for OoD generalization.
To address this bias, we conduct experiments on human-designed ViTs. To achieve large variability in depth, MLP ratio, and #Heads, we increase these attributes to the extent that the capacities of the resulting architectures exceed those obtained by increasing the embedding dimension. As shown in Table R.1 in the rebuttal PDF, the results further confirm our finding that embedding dimension is the most important ViT structural attribute for OoD generalization.
>**Q6:** Could the authors also present the results of multi-objective search directly on the OOD datasets?
**A6:** Following the reviewer's suggestion, we provide additional Evolutionary Search (ES) results directly on OoD datasets in Table 1B.
To clarify, our main purpose is to produce a large number of ViTs with trained weights to analyze the impact of ViT structural attributes on OoD generalization. Therefore, we did not conduct multi-objective random or evolutionary search. In contrast, other NAS papers conduct such search algorithms to find the architecture with the best ID Acc.
In response, we provide ES results directly on OoD datasets, conducting ES on the Autoformer-Small search space. We strictly follow the ES setting in [6], except that we use the OoD dataset instead of the ID dataset during ES.
**Table 1B.** Our ES experiment.
|Metrics during ES|IN-R OoD Acc|IN-Sketch OoD Acc|
|:-:|:-:|:-:|
|OoD Acc|32.09|33.81|
|DSS|29.85|30.44|
|#Param|31.65|33.28|
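The comparison in Table 1B just swaps the metric used to rank candidates during the search. As a generic, hypothetical illustration (not the paper's exact configuration: the population size, tournament size, and toy list encoding are all assumptions), a tournament-style evolutionary loop of the kind used in [6] and [b1] can be sketched as:

```python
import random

def evolutionary_search(score, sample, mutate, pop_size=20, iters=100, seed=0):
    """Tournament-style evolutionary search: `score` ranks candidates
    (e.g. OoD Acc of a subnet inherited from the supernet, or a proxy),
    `sample` draws a random architecture encoding, `mutate` perturbs one."""
    rng = random.Random(seed)
    population = [sample(rng) for _ in range(pop_size)]
    best = max(population, key=score)
    for _ in range(iters):
        # Pick the best of a small random tournament as the parent.
        parent = max(rng.sample(population, k=5), key=score)
        child = mutate(parent, rng)
        population.pop(0)          # age out the oldest candidate
        population.append(child)
        if score(child) > score(best):
            best = child
    return best
```

Plugging in ID accuracy, OoD accuracy, or a proxy such as DSS or #Param as `score` reproduces the kind of comparison shown in Table 1B.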
>**Q7:** What is the cost/search time different in computing Pareto-Fronts based on MO-search v/s Zero-Cost-Proxies (a lot of these proxies are actually not zero cost)?
**A7:** While the MO-search setup, particularly for Autoformer, is faster than other setups such as [b1], evaluating each candidate still takes longer than computing the "Zero-Cost-Proxies" for that candidate.
We apologize for any confusion on the naming. To clarify, we consistently use the term “Zero-Cost-Proxies” as in previous works [10, 17, 60, 62].
>**Q8:** Do the authors train their own supernet, or train a one-shot supernet from scratch?
**A8:** To clarify, we did not train the supernet ourselves; instead, we used pre-trained supernets from Autoformer [6] to efficiently and effectively produce a large number of ViTs with trained weights.
> **Q9:** How does this benchmark contribute to aiding development of future NAS techniques for OOD generalization?
**A9:** Please refer to **A1**
We hope the reviewer will consider increasing the rating if our responses have addressed all questions.
[b1] Real et al. "Regularized evolution for image classifier architecture search." AAAI 2019.
[b2] Zheng et al. "Graph robustness benchmark: Benchmarking the adversarial robustness of graph machine learning." NeurIPS Benchmark & Dataset 2021
[b3] Croce et al. "Robustbench: a standardized adversarial robustness benchmark." NeurIPS Benchmark & Dataset 2021
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their response. I am increasing my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you for the positive feedback and increased rating
Comment: We sincerely appreciate Reviewer pt1z for the positive feedback and the increased rating. | Summary: This work presented OoD-ViT-NAS, a comprehensive NAS benchmark with a focus on out-of-distribution generalisation of vision Transformer architectures. The authors created a benchmark with 3000 diverse ViT architectures evaluated on 8 common large-scale OoD datasets, and provided the first comprehensive investigation of how ViT architectures affect OoD generalisation. They also conducted the first study exploring NAS for ViT’s OoD, analysing the impact of individual ViT architectural attributes on OoD generalisation.
Strengths: This study introduces the first large-scale benchmark for ViT OoD generalisation, which could be a valuable resource to the NAS field.
The authors conducted a thorough investigation on the impact of ViT architectures in relation to OoD generalisation, covering multiple aspects such as architectural attributes, correlation between ID and OoD performance and the effectiveness of training-free NAS methods.
This study reported several findings, such as the low correlation between ID and OoD accuracy, the superiority of simple proxies, i.e., model params and FLOPs, over more computationally complex training-free metrics. Although the latter has been revealed in other work on computer vision tasks, the authors further validate that point on the OoD tasks.
The reproducibility of this work is good as the authors provided a substantial amount of details of their methodology, hyper-parameters and computational resources.
Weaknesses: This study is limited to the Autoformer architecture space, which may limit the generalisability of the findings to other ViT architectures. It also compromises the novelty of this work.
Although the empirical studies are comprehensive, there are not many theoretical explanations for the observed phenomena. More analysis and insights should be provided to support this work.
The presentation can be improved. For example, Fig 1 appears on Page 2, but is first mentioned at Line 226, Page 7.
Technical Quality: 3
Clarity: 3
Questions for Authors: Line 119 shows that the work is based on Autoformer. What is the reason behind this choice? What is new?
Line 145, OoD Classification Accuracy: what is the difference from the ID Acc in Line 142? Maybe a formula could explain that clearly.
Line 154, "we make use of the One-Shot NAS approach". Why not use zero-shot NAS instead? What are the benefits that one-shot NAS can bring but zero-shot NAS cannot?
Table 2 shows correlations of some zero-cost proxies. Can SWAP be applied here, as it demonstrates a much higher correlation than those in the table? SWAP-NAS: Sample-Wise Activation Patterns for Ultra-fast NAS. Peng et al., ICLR'24.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations of this work, but in the appendix instead of the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive feedback and the valuable comments of the Reviewer.
>**Q1:** This study is limited to the Autoformer architecture space, which may limit the generalisability of the findings to other ViT architectures.
**A1:** Autoformer is the most widely used search space in many recent ViT works [7, 65, 60, 10, 11, 17, 66, 67]. In addition, a comparison of other ViT search spaces is summarized in Table A1 below. In summary, AutoFormer allows us to effectively and efficiently pool a large number of ViTs for our OoD-ViT-NAS benchmark and enables us to deeply study ViT architecture attributes for OoD generalization.
**Table A1.** Comparison of Autoformer with other search spaces.
|Criteria|Autoformer [6]|S3 [16]|BossNAS [a1]|NAS-ViT [12]|PiT [a6]|
|-|:-:|:-:|:-:|:-:|:-:|
|One-shot NAS w/o the need of further finetuning/retraining|✓|✓|X|✓|X|
|Availability of range of ViT architectural attributes in the sampled ViT|✓|✓|X|X|✓|
|Accessible source code/models|✓|X|✓|✓|✓|
Our findings could be applicable to numerous ViT works utilizing this search space. Additionally, the Autoformer search space is accessible and comprehensive, including important ViT structural attributes for model capacities [6] and accommodating a wide range of model sizes.
Furthermore, we confirm our main findings on human-designed ViT architectures. As shown in Table R.1 in the rebuttal PDF, the observation is consistent with our findings in the main paper: embedding dimension is the most important ViT structural attribute for OoD generalization.
>**Q2:** More analysis and insights should be provided to support this work.
**A2:** Following the Reviewer's suggestion, in this rebuttal we provide an additional frequency analysis of our main finding, showing that increasing embedding dimensions helps ViT learn more High-Frequency Components (HFC), leading to better OoD generalization.
In particular, in the literature, a model achieving higher accuracy on the HFC of test samples indicates that it has learned more HFC [a2, a5]. By learning more HFC, the model improves OoD generalization [a2, a3, a4].
Our hypothesis is that increasing the embedding dimension helps ViTs learn more HFC, resulting in improved OoD generalization. We strictly follow the experimental setups in [a4] for the HFC filtering process. As shown in Figure R.1 in the rebuttal PDF, when the embedding dimension increases, the performance obtained on the HFC of test samples improves. This observation holds true across setups with varying radius r when filtering the HFC. According to [a2, a5], this supports that increasing the embedding dimension helps ViT learn more HFC, which leads to better OoD generalization following [a2, a3, a4].
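As we understand it, [a4] obtains the HFC by masking out frequencies within radius r of DC in the Fourier spectrum of an image. The following 1-D analogue is our own illustration of that radius-r high-pass split (standard library only, synthetic signal), not the paper's 2-D image pipeline:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def high_freq_component(signal, r):
    """Zero out frequencies within radius r of DC (min(k, N - k) covers the
    wrap-around negative frequencies), then transform back."""
    X = dft(signal)
    N = len(X)
    X_hfc = [0 if min(k, N - k) <= r else X[k] for k in range(N)]
    return [v.real for v in idft(X_hfc)]

# Slow trend (frequency k = 1) plus fast oscillation (k = 5); r = 1 removes the trend.
N = 16
signal = [math.cos(2 * math.pi * n / N) + math.cos(2 * math.pi * 5 * n / N)
          for n in range(N)]
hfc = high_freq_component(signal, r=1)
# hfc is now (numerically) just the k = 5 oscillation.
```

Evaluating a model on inputs filtered this way, for several values of r, is the kind of setup the frequency analysis above relies on.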
>**Q3:** Fig 1 appears on Page 2, but is first mentioned at Line 226, Page 7.
**A3:** We will improve the presentation as suggested in the revision.
>**Q4:** What is the reason behind the choice of AutoFormer? What is new?
**A4:** Our reasons for selecting AutoFormer are that AutoFormer is the most widely used search space [7, 65, 60, 10, 11, 17, 66, 67], and that AutoFormer allows us to effectively and efficiently pool a large number of ViTs for our OoD-ViT-NAS benchmark and study a range of ViT structural attributes for OoD generalization.
Please refer to **Q1** for the details.
>**Q5:** What is the difference between OoD and ID Acc?
**A5:** The OoD Acc and ID Acc metrics are similar, except that they are computed on different evaluation data (i.e., the ID dataset for ID Acc and the OoD dataset for OoD Acc).
Specifically, the classification accuracy (Acc) is the number of correct predictions divided by the total number of data points: $Acc = \frac{1}{N}\sum_{i=1}^N\mathbf{1}(T(x_i) = y_i)$
where $T(\cdot)$ is the classifier, $A = \{(x_i,y_i)\}_{i=1}^N$ is the evaluation dataset, $x_i$ is the input and $y_i$ is the label for the $i$-th data point, and $N$ is the number of data points in $A$. The difference between ID Acc and OoD Acc is only that Acc is computed on a different dataset $A$.
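This computation, which differs between ID Acc and OoD Acc only in the evaluation set $A$, can be sketched in Python (the toy classifier and datasets below are purely illustrative):

```python
def accuracy(classifier, dataset):
    """Acc = (1/N) * sum over (x_i, y_i) in A of 1(T(x_i) == y_i)."""
    correct = sum(1 for x, y in dataset if classifier(x) == y)
    return correct / len(dataset)

# Hypothetical classifier T and evaluation sets; ID Acc and OoD Acc use the
# exact same formula, only the dataset A differs.
classifier = lambda x: x % 2
id_data = [(0, 0), (1, 1), (2, 0), (3, 1)]    # in-distribution pairs (x_i, y_i)
ood_data = [(4, 0), (5, 0), (6, 1), (7, 1)]   # out-of-distribution pairs

print(accuracy(classifier, id_data))    # 1.0
print(accuracy(classifier, ood_data))   # 0.5
```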
>**Q6:** What are the benefits that one-shot NAS can bring but not zero-shot NAS to construct OoD-ViT-NAS?
**A6:** One-Shot NAS allows us to efficiently produce a large number of ViTs (i.e., 3,000) with trained weights, by sampling from the supernet [20, 6]. In contrast, the Zero-Shot NAS approach does not provide ViTs with trained weights. Therefore, Zero-Shot NAS is unable to construct the benchmark necessary for our investigation.
>**Q7:** Result for SWAP proxy
**A7:** We provide additional results for SWAP. It achieves promising results on CNN search spaces. However, as shown in Table A2 below, on ViT-based search spaces SWAP becomes less effective than ViT-design proxies and simple proxies such as #Params or FLOPs, which is consistent with our observations.
**Table A2.** Experimental results for the SWAP proxy
|Training-free NAS|Originally Proposed For Performance|Originally Proposed For Architecture|Correlation with ID Acc|Correlation with OoD Acc|
|:-:|:-:|:-:|:-:|:-:|
|SWAP|ID Acc|CNNs|0.2651 ± 0.3381|0.1201 ± 0.1069|
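The rebuttal does not state which correlation coefficient is reported; Kendall's tau is a common choice for ranking training-free NAS proxies against accuracies, and can be computed in pure Python (the proxy scores and accuracies below are hypothetical):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall rank correlation: (#concordant - #discordant) / #pairs."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(xs) * (len(xs) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical proxy scores and OoD accuracies for four architectures.
proxy_scores = [0.1, 0.4, 0.2, 0.9]
ood_acc = [31.0, 35.2, 30.5, 36.1]

print(kendall_tau(proxy_scores, ood_acc))   # 0.666...
```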
>**Q8:** The authors discussed the limitations of this work, but in the appendix instead of the main text.
**A8:** Thank you; we will move the limitations discussion to the main text if possible.
We sincerely hope that the reviewer could consider increasing the rating if our responses have addressed all the questions.
[a1] Li et al. "Bossnas: Exploring hybrid cnn-transformers with block-wisely self-supervised neural architecture search." ICCV 2021.
[a2] Bai et al. "Improving vision transformers by revisiting high-frequency components." ECCV 2022.
[a3] Gavrikov et al. "Can Biases in ImageNet Models Explain Generalization?." CVPR 2024.
[a4] Wang et al. "High-frequency component helps explain the generalization of convolutional neural networks." CVPR 2020.
[a5] Shao et al. "On the adversarial robustness of vision transformers." arXiv 2021.
[a6] Heo et al. "Rethinking spatial dimensions of vision transformers." ICCV 2021.
---
Rebuttal Comment 1.1:
Comment: The score is increased to 7 as the rebuttal has addressed most of my concerns and questions.
---
Reply to Comment 1.1.1:
Title: We are glad that our rebuttal addresses your concerns, and thank you for the increased rating
Comment: We are deeply grateful to the reviewer for the positive comments and the increased rating. We are glad that our response has addressed your concerns. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable time and effort to review our work. We appreciate Reviewers’ kind comments, such as:
- "The authors conducted a thorough investigation on the impact of ViT architectures in relation to OoD generalisation, covering multiple aspects such as architectural attributes, correlation between ID and OoD performance and the effectiveness of training-free NAS methods" (Reviewer HQRy)
- "They explore various ways for increasing OOD accuracy, which is most important for real-world problems" (Reviewer f7Da)
- "The paper is the first one to study out-of-distribution generalization of architectures discovered by NAS" (Reviewer pt1z)
- "Experiments on ZCPs (Zero-Cost Proxies) are thorough and the results are insightful" (Reviewer pt1z)
- "This study introduces the first large-scale benchmark for ViT OoD generalisation, which could be a valuable resource to the NAS field" (Reviewer HQRy)
- "OoD-ViT-NAS benchmark allows us to look for new and better approaches for finding new models structures to improve OOD accuracy" (Reviewer f7Da)
- "The reproducibility of this work is good as the authors provided a substantial amount of details of their methodology, hyper-parameters and computational resources" (Reviewer HQRy)
- "The paper is written in a clear and easy to understand manner" (Reviewer pt1z)
We would also like to express our appreciation to all the Reviewers for giving us the opportunity to clarify our work, as well as the constructive comments.
Based on Reviewers’ suggestions, in this rebuttal, we include:
- Additional results to further validate our findings on human-designed ViT setups
- Additional analysis to further understand why increasing ViT embedding dimension can improve OoD generalization
- Additional results on the SWAP zero-cost proxy
- Additional results on evolutionary search directly on OoD dataset
- Additional observations on Latency and Memory consumption
Importantly, additional results are consistent with our findings in the main paper.
In what follows, we provide comprehensive responses to all questions. We could provide more details if there are further questions. We hope that our responses can address the concerns.
Pdf: /pdf/f127ad83fa6bf51f49bd2aee5980a0b1e4510b97.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models | Accept (poster) | Summary: The paper introduces "EvolveDirector," a novel framework designed to train a text-to-image generation model that can compete with advanced models using only publicly available resources. The authors aim to address the limitations posed by proprietary data and the inaccessibility of parameters in state-of-the-art models by leveraging public APIs to obtain text-image pairs for training.
The core of EvolveDirector relies on the interaction with advanced models through their APIs to generate data for training a base model. A significant challenge is the need for a large volume of data (10 million samples or more), which is both time-consuming and computationally expensive. To mitigate this, the authors employ pre-trained large vision-language models (VLMs) to dynamically update and refine the training dataset through various operations like discrimination, expansion, deletion, and mutation.
The framework's efficiency is demonstrated through experiments that show a significant reduction in the required data volume, with only 100k samples needed for comparable performance to models trained on 11 million samples. Furthermore, the paper claims that their final trained model, "Edgen," outperforms several advanced models and will be open-sourced for downstream tasks.
The paper is well-written and presents a strong case for the capabilities of EvolveDirector. It would be beneficial to see additional experiments or comparisons with other state-of-the-art models to further establish the framework's superiority. Moreover, the paper could delve deeper into the specific mechanisms of knowledge transfer from the VLMs to the base model.
Strengths: The paper's core strength lies in its innovative approach to overcome the limitations posed by proprietary data and models. By leveraging publicly available APIs and pre-trained vision-language models (VLMs), EvolveDirector offers a feasible solution to access advanced text-to-image generation capabilities in an open-source environment.
The trained model, Edgen, shows impressive performance, outperforming several advanced models in various aspects. The paper includes extensive experiments and ablation studies that validate the framework's effectiveness. The use of human evaluation for model comparison adds a layer of qualitative analysis to the quantitative results.
The authors acknowledge potential biases in the generated content and the importance of safety. They propose integrating debiasing methods and human feedback, showing a responsible approach to AI development.
The methodology is technically sound, with a clear explanation of the framework's components and the role of VLMs in guiding the evolution of the base model. The paper also details the training strategies and hyperparameter settings, contributing to its reproducibility.
The paper discusses the broader impacts of the work, including potential positive societal impacts such as revolutionizing digital media creation and negative impacts like the risk of bias and misinformation. This demonstrates a well-rounded consideration of the technology's societal effects.
Weaknesses: 1. The layout needs to be adjusted. For example, Figure 1 takes up an entire page.
2. Research questions 1 & 2 are insightful and represent an important topic from the perspective of distillation and data annotation. However, I see limited analysis or exploration of these questions. A single result transferring from the trained DiT model to PixArt may not be enough as the answer to question 1. As the authors motivate this work with questions 1 & 2, introducing more related analysis would be helpful.
3. It seems like the data construction pipeline could be improved by introducing images in reality as [1]. Given that you use VLMs for data refinement, it may be beneficial to use images in reality as ground truth and use VLMs to generation prompts.
4. In Section 4.3, models trained without Discrimination & Expansion & Mutation (first line) can attain comparable results to models trained with Discrimination & Expansion & Mutation (last line). This may cause confusion, as it raises the questions: is it necessary to perform Discrimination & Expansion & Mutation? Would it be all we need to just query advanced generative models to get data?
[1] Zhao, Haozhe, et al. "UltraEdit: Instruction-based Fine-Grained Image Editing at Scale." arXiv preprint arXiv:2407.05282 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, this paper has limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging that our approach is innovative, "EvolveDirector offers a feasible solution to access advanced text-to-image generation capabilities in an open-source environment", "The trained model, Edgen, shows impressive performance", "The paper includes extensive experiments and ablation studies ", and "The methodology is technically sound, with a clear explanation ...", etc.
We will address your concerns as follows.
----
**W.1**:
In the revision, we will consider shrinking the space occupied by Figure 1, and use the saved space to add more discussions and analyses.
**W.2**:
Thank you for acknowledging the two research problems. Generating large-scale data based on the naive pipelines to train the base model is extremely costly and time-consuming (hundreds of A100 GPU days for training the base model on million-scale data), let alone training multiple base models to approach multiple advanced models. To tackle this question more efficiently, we simplify the task by fixing the same base model and advanced model to examine the impact of different data scales on the performance of training the base model to approach the advanced model, as shown in the 7 sets of experimental results in Table 2. The results show that the generative capabilities of the advanced model can be learned through training on its large-scale generated data, which is over 11 million. However, if the training data is reduced to 10% or even 1% of the original amount, the performance of the trained model will significantly decrease. The proposed EvolveDirector can train the model in a data-efficient manner, thus offsetting the decrease in model performance caused by reducing the amount of training data.
Regarding question 2, the base model can gain better performance by learning to approach multiple advanced models simultaneously, thus providing an answer to this question. This success is attributed to VLM selecting the best ones from generated images of multiple advanced models as training data.
We will add the discussions and clarification in the revision.
**W.3**:
Thanks for your comment. Introducing real images as base images and modifying them to generate more training samples is an interesting idea. The mentioned work [1] is well suited to the instruction-based image editing task. However, when pre-training a T2I foundation model, this pipeline may struggle to generate imaginative samples, since it creates new samples by editing real ones. Besides, in this paper we focus on exploring how to approach the advanced models based on their generated data. The experimental results demonstrate that relying solely on generated data is sufficient to approach advanced models.
We will add an introduction to this reference [1] in the related works.
**W.4**:
Please note that the data volumes of these two training methods are significantly different, as shown in Table 2. With Discrimination & Expansion & Mutation, the base model achieves similar generation performance on a much smaller data scale. We will highlight this to avoid confusion.
----
[1] Zhao, Haozhe, et al. "UltraEdit: Instruction-based Fine-Grained Image Editing at Scale." arXiv preprint arXiv:2407.05282 (2024). | Summary: This paper investigates how to train the text-to-image model comparable to advanced models using publicly available resources. Specifically, EvolveDirector collects training data with the APIs of advanced models, and further uses a VLM to continuously refine the training dataset. The proposed VLM refinement significantly reduces the data volume needed to teach a base T2I model and improves the training efficiency.
Strengths: 1. The proposed framework is novel, and can significantly reduce the volume of data required to approach the SOTA text-to-image generation performance.
2. This paper investigates one interesting direction: training the base T2I model to approach the advanced models using their generated data, which can be crucial for bridging the gap between public T2I models and close-sourced T2I models.
3. The way of utilizing VLMs is interesting and effective. The VLM dynamically maintains the training dataset to achieve efficient training.
4. Edgen shows good human evaluation performance on generation faithfulness to text prompts, especially in multi-object generation and text generation.
5. The paper is well-organized and technically clear.
Weaknesses: 1. The explanation of how to instruct VLM is not detailed. For example, how to ensure the generated text prompts can be parsed correctly and free of errors, which could introduce noise into the training data.
2. It would be better to also evaluate the final model on other benchmarks (e.g., DSG, TIFA).
3. Missing references:
(1) DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback
(2) SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data
Technical Quality: 4
Clarity: 4
Questions for Authors: Typos: in line 336, “outperform” -> “outperforms”.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your acknowledgment and comments. We will address your concerns as follows.
Notations: W: Weakness, Q: Question
-----
**W.1**:
The detailed instructions for VLM are provided in the supplementary. We structure the outputs of VLM to ensure the generated text prompts can be parsed correctly and free of errors. The diversity of output formats from VLM can pose challenges for automated parsing. We found that by providing specific instructions to the VLM, its output format can be standardized. Specifically, when prompting the VLM to generate more text prompts, we offer instructions such as “Arrange them in the format of the list ["Text description 1", "Text description 2", ...].” This approach directs the VLM to generate outputs in a consistent format.
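As a sketch of how such a standardized reply could be parsed on the training side (the function name and regex are our own illustration, not the authors' code):

```python
import ast
import re

def parse_prompt_list(vlm_output: str):
    """Extract the ["...", "..."] list from a VLM reply; return [] on any failure
    so malformed replies never inject noise into the training data."""
    match = re.search(r"\[.*\]", vlm_output, re.DOTALL)
    if not match:
        return []
    try:
        parsed = ast.literal_eval(match.group(0))
    except (ValueError, SyntaxError):
        return []
    if not isinstance(parsed, list):
        return []
    return [p for p in parsed if isinstance(p, str)]

reply = 'Sure! Here are the prompts: ["A red fox in snow", "A harbor at dawn"]'
print(parse_prompt_list(reply))   # ['A red fox in snow', 'A harbor at dawn']
```

Falling back to an empty list (rather than raising) matches the goal stated above: any reply that cannot be parsed simply contributes no training prompts.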
**W.2**:
Thanks for the suggestion. TIFA [1] and DSG [2] are automatic evaluation methods designed to prompt LLMs to generate several questions and utilize a VQA model to answer them, so as to evaluate the alignment of the generated samples with the text prompts. Since DSG is the newest and more comprehensive benchmark, we evaluate the models on DSG; the results are reported as follows. The open-sourced mPLUG-large [3] is selected as the VQA model.
| | Base Model | Pixart-$\alpha$ | DeepFloyd IF | Playground 2.5 | Ideogram | Stable Diffusion 3 | Edgen (ours) |
|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| DSG (↑) | 68.61 | 72.60 | 74.49 | 74.98 | 79.97 | 80.05 | 80.61 |
**W.3**:
Thank you for providing these references. We will discuss these works in the related works section to strengthen it. These two works both aim to motivate T2I models to learn from their self-generated images. DreamSync [4] is proposed to improve the T2I models by selecting their own generations and fine-tuning them on the selections. SELMA [5] utilizes the LoRAs to fine-tune the T2I model on different skill-specific image-text pairs, and then merges these LoRAs to build a unified model.
**Q.1**:
Thanks for pointing this out. We will do our best to check and correct the typos in the paper.
----
[1] Hu, Yushi, et al. "Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Cho, Jaemin, et al. "Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-image generation." arXiv preprint arXiv:2310.18235 (2023).
[3] Ye, Qinghao, et al. "mplug-owl: Modularization empowers large language models with multimodality." arXiv preprint arXiv:2304.14178 (2023).
[4] Sun, Jiao, et al. "Dreamsync: Aligning text-to-image generation with image understanding feedback." Synthetic Data for Computer Vision Workshop@ CVPR 2024.
[5] Li, Jialu, et al. "SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data." arXiv preprint arXiv:2403.06952 (2024).
---
Rebuttal Comment 1.1:
Title: Final Rating
Comment: Thanks for the response. The additional experiments address my concerns. I'll keep my rating as strong accept (8).
---
Rebuttal 2:
Title: Reply to Reviewer i5a7
Comment: Dear Reviewer i5a7,
Thank you for your recognition and for providing insightful comments.
Best regards,
Authors of Paper 1029. | Summary: This paper explores the effectiveness of training a text-to-image (T2I) model using synthetic examples generated by existing T2I models. The authors find that on the order of 10M image-text pairs are necessary to approach the quality of a good model like PixArt-Alpha, while using only 1M or 100k examples results in serious deterioration.
To improve sample efficiency, the authors introduce their online learning framework, EvolveDirector, which leverages a vision-language model (VLM) to curate a better online training set. The VLM is used to compare the current trained checkpoint's generated images against the reference T2I model's images, in order to determine which training examples to drop from the training set, and which training example prompts to generate more variants of for further training. Occasionally, entirely new training prompts are introduced to increase diversity. Overall, this results in a training process which matches the quality of PixArt-Alpha according to human comparisons using only 100k examples.
The authors perform further ablations to show the necessity of each part of the framework, and train a model using four different advanced T2I models as targets, resulting in a final T2I model, named Edgen, which surpasses each of the target models in overall human preferences.
Strengths: **Originality.** The paper demonstrates the possibility of a training framework using synthetic data which leverages expert T2I models and VLMs to maximize data efficiency. This opens up an avenue of research into better use of expert models for informing the training procedure in T2I.
**Quality.** In certain areas, the authors perform thorough ablations to determine the best procedures, and to provide concrete numbers to compare against. For example, they evaluate the candidates for an expert VLM along several axes to justify their choice, and train baseline T2I models at several data scales to demonstrate the performance drop with decreasing scale.
**Clarity.** The high-level approach and framework are explained clearly.
**Significance.** The paper shows that existing T2I model performance can be replicated using only about 1% of the data needed with naive knowledge distillation.
Weaknesses: **Quality.** While most of the experimental details are explained thoroughly, the paper omits any description of the text data used. The text data is a crucial part of the proposed framework, since it is used to seed the training data, as well as for the mutation operation, presumably. There is also no mention of the source of the data used for evaluation. Along the lines of evaluation, the dataset used to compute FID score should also be mentioned.
**Clarity.** The paper fails to clarify many useful details earlier in the paper (abstract + introduction), leaving the overall understanding of the method vague until reaching Section 3. For example, the key components of expansion, deletion, and mutation are mentioned but not explained at all in the introduction.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What is the source of your training data? The training images are generated by the expert/advanced T2I models, but where do the input prompts come from?
2. Following Q1, how do you know the seed prompts for training (i.e. the initial training set) cover a diversity of prompts? While using LLMs to create variations of prompts can result in new combinations of objects or concepts, the resulting new prompts are unlikely to capture the same diversity of text present in natural datasets.
3. Where does the *mutation* operation source the new prompts from? Are they also generated by the same VLM, or taken from the same source as your initial training data? If they come from an existing dataset, what is the benefit of adding them midway through the training process instead of just starting with a initial training set?
4. To my understanding, LLaVA-NeXT (and many other current VLMs) are not trained on examples involving multiple images. Are the images for comparison concatenated or passed separately to LLaVA? Do you notice any statistical confounders, such as choosing the first image more often than the second?
5. The instructions for "expansion" to the VLM, to generate new variations on the prompt, specify only to replace nouns in the prompt. Why is this the strategy used to generate similar prompts? What happens if the T2I model is struggling to generate a specific noun, and the syntactic structure of the prompt is not the weak point? In that case, it feels like the new prompts would not be relevant for further training.
6. When selecting which VLM to use, you say "The output [of the VLM] is scored by 0 for wrong or 1 for correct by human raters". Why not ask the human raters to rank the generations themselves, and then compute how many examples the VLM and the humans agreed on? Having the human raters rate the VLM output seems like it could induce bias in the raters.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors note that their method could result in a T2I model which inherits biases from both the teacher T2I model and the VLM used for evaluation. They also describe the potential positive and negative broader societal impacts of the research, including improving model training costs and potential for misuse.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition and for providing detailed comments. We will address your concerns as follows.
Notations: W: Weakness, Q: Question
-----
**W.1 & Q.1**:
We will incorporate more details about the source text prompts in the revision. Initially, the text prompts are randomly selected from the SAM captioning dataset, aiming to capture the diversity of natural data, and from community-sourced text prompts, to ensure alignment with user preferences and interests. Samples are drawn from these two sources in equal proportion. These text prompts will be made available for public access. For evaluation and FID calculations, the text prompts are also sampled from these two sources, excluding the ones sampled for training. Below are some randomly selected examples of the text prompts.
SAM Captions:
- "A woman and a young child, likely a toddler, standing together in a dimly lit area. The woman is wearing a headscarf, and the child is wearing a striped shirt. The woman is holding the child close to her, possibly providing comfort or protection. The overall atmosphere of the image is warm and intimate, showcasing the bond between the woman and the child."
Community-sourced text prompts:
- "Text, "Deisy Yaneth" in one line, one word, no spaces, creative colorful sign made of watter and smoke, black background, artistic,ultra realistic, splash effect, incredible details, rich colors, high quality, in the style of Dan Mumford and Daniel Merriam, 3D rendering, poster, photo, fashion, cinematic, typography, 3d render"
- "A realistic cat, act as a nurse, give a injection to me with syringe"
**W.2**:
Thanks for this comment. We will add more details and explanations in these sections.
-----
**Q.2**:
As mentioned in the response to W.1 & Q.1, at the beginning of training we sample the text prompts from both the SAM captions and community-sourced text prompts, to ensure broad coverage of prompt diversity. Based on this, the VLMs are encouraged to generate more diverse text prompts, such as the example shown in Fig. 3. Besides, as shown in Table 1, the human raters agree that the generated text prompts are highly diverse. It is worth highlighting that there is no strict need to capture the distribution of text prompts in existing datasets such as SAM and LAION, whose long-tail distributions may not be the most beneficial for training. Our proposed method aims to adaptively generate the samples that are most valuable for training.
**Q.3**:
We agree with you that if new prompts come from an existing dataset, there is no extra benefit. When developing our method, we took this into consideration and designed the mutation operation to create completely new text prompts using the VLM, without relying on existing ones. The mutation is operated in the training process to encourage the model to explore and learn from a broader domain of text prompts.
**Q.4**:
We concatenate the two images to form a single contrast question input for the VLM. In our concatenating and comparing manner, as shown in Table 1, LLaVA-NeXT achieves a similar performance as the Qwen-VL-Max, which is able to interact with multiple images. We arrange the two images in a random order to avoid any potential bias in selection. In our statistical validation, no location bias was found in the selections of LLaVA-NeXT. The results in the "Discrimination" column in Table 1 also show that the selections of LLaVA-NeXT match human raters closely.
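The randomized ordering described above can be sketched as follows (the judge is a stub standing in for the VLM call; a pure position bias shows up in the slot-win rate, while the image-level win rate stays near chance):

```python
import random

def compare_with_random_order(judge, img_a, img_b, trials=1000, seed=0):
    """Present the pair in a random order each trial and map the judge's pick
    (0 = first slot, 1 = second slot) back to the underlying image. Also track
    how often the first *slot* wins, to surface any position bias."""
    rng = random.Random(seed)
    a_wins = first_slot_wins = 0
    for _ in range(trials):
        swapped = rng.random() < 0.5
        pair = (img_b, img_a) if swapped else (img_a, img_b)
        pick = judge(*pair)                 # stub for the VLM comparison call
        if pick == 0:
            first_slot_wins += 1
        if pair[pick] == img_a:
            a_wins += 1
    return a_wins / trials, first_slot_wins / trials

# A pathological judge that always prefers whichever image sits in slot 0:
biased_judge = lambda x, y: 0
a_rate, first_rate = compare_with_random_order(biased_judge, "A", "B")
print(first_rate)   # 1.0  -> the position bias is visible in the slot-win rate
print(a_rate)       # ~0.5 -> random ordering neutralizes it at the image level
```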
**Q.5**:
Please note that the expansion is to generate more text prompts and will not remove the original ones. If the T2I model is struggling to generate images aligned with the text prompts, the original text prompts will continuously be involved in the training. Besides, whether the new prompts are relevant for further training is determined by the VLM based on their value for learning.
**Q.6**:
We agree with you that ranking the generations by human raters is an alternative way for evaluations. But in practice, human raters not only evaluate the discrimination, but also evaluate expansion accuracy and diversity, and the latter two are evaluated by scoring outputs of VLM. To streamline the evaluation process and ensure a consistent execution way for human raters, we choose the evaluation approach outlined in the paper. To eliminate potential bias, each output of VLMs is independently evaluated by five human raters, and the types of VLMs are anonymous to them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I would appreciate adding more details into the paper about the sources of data for all uses, as you explained in response to Q1. In addition, I believe the paper would benefit from more details on the Mutation operation, since at the moment there is only one line describing the operation. For example, what prompt(s) do you use to get the LLM to generate entirely new T2I prompts?
Assuming this information is added to the paper/appendix, I am willing to raise my score to 7.
---
Rebuttal 2:
Title: Reply to Reviewer gzrb
Comment: Dear Reviewer gzrb,
Thanks for your valuable feedback. Yes, we will add the details from our response to the main paper and provide more explanation of the mutation operation there. The detailed prompts we use to instruct the VLM will be provided in the appendix. Thanks again for your comments on improving our paper.
Best regards,
Authors of Paper 1029.
---
Rebuttal 3:
Title: Reply to Reviewer gzrb
Comment: Dear Reviewer gzrb,
To further address your concerns, we provide detailed text prompts for mutation as follows, which will be included in the revised paper.
Text Prompt:
'Now, exercise your imagination to generate 1 new text description for visual contents, {*enhanced_prompt*}. It should be completely unrelated to the previous images and have a completely different structure from the previous text descriptions. Arrange it in the format of a list ["Text description"].'
The *enhanced_prompt* is a prompt controlling the length and level of detail of the generated sample; it is randomly sampled from the following options:
- 'which should contain less than 10 words and be rough and short'
- 'which should contain less than 30 words with different granular levels of details'
- 'which should contain over 30 words with a lot of details'
Three corresponding generated results are as follows:
- ["The sun rises over a calm sea."]
- ["A lone adventurer stands at the edge of a cliff, gazing into the distance, with a single white bird flying overhead and a black raven perched on a nearby rock."]
- ["In the serene setting of a lush garden, a group of vibrant flowers and a variety of exotic fruits coexist harmoniously. The garden is teeming with life, from the delicate petals of the flowers to the succulent fruits hanging from the trees. The colors are a feast for the eyes, with the reds, blues, and yellows of the flowers contrasting beautifully against the green foliage. The fruits, in shades of red, orange, and yellow, add a pop of color to the scene. The garden is a symphony of nature, where every element has its place and purpose."]
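For concreteness, the sampling-and-splicing procedure above can be sketched as follows. This is a minimal illustration only; the function name `build_mutation_prompt` and the code structure are our assumption, not the paper's actual implementation.

```python
import random

# The three length/detail control prompts quoted above.
ENHANCED_PROMPTS = [
    "which should contain less than 10 words and be rough and short",
    "which should contain less than 30 words with different granular levels of details",
    "which should contain over 30 words with a lot of details",
]

def build_mutation_prompt() -> str:
    """Randomly sample an enhanced_prompt and splice it into the
    mutation instruction sent to the VLM (hypothetical sketch)."""
    enhanced = random.choice(ENHANCED_PROMPTS)
    return (
        "Now, exercise your imagination to generate 1 new text description "
        f"for visual contents, {enhanced}. It should be completely unrelated "
        "to the previous images and have a completely different structure "
        "from the previous text descriptions. Arrange it in the format of a "
        'list ["Text description"].'
    )
```

Each training-time mutation call would then send `build_mutation_prompt()` to the VLM and parse the returned single-element list.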
Best regards,
Authors of Paper 1029.
---
Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their insightful comments. We appreciate the recognition of our paper's strengths, including the novelty and effectiveness of the proposed EvolveDirector and the extensive experiments and ablation studies.
We would like to highlight the following recognitions from the reviewers:
- Reviewer gzrb: "This opens up an avenue of research into better use of expert models for informing the training procedure in T2I."
- Reviewer i5a7: "The proposed framework is novel" and "The way of utilizing VLMs is interesting and effective."
- Reviewer 9FQn: "EvolveDirector offers a feasible solution to access advanced text-to-image generation capabilities in an open-source environment", "The trained model, Edgen, shows impressive performance", and "The paper includes extensive experiments and ablation studies that validate the framework's effectiveness."
We address each reviewer's concerns in detail in the subsequent responses.